This study aims to achieve maximum classification accuracy for images of letters and numbers. High accuracy requires a feature extraction stage that produces good image features. Feature extraction in this research is performed using image matrix segmentation. The images of numbers and letters used as the training and testing datasets are obtained from the segmentation of vehicle license plates. This research belongs to the field of Vehicle License Plate Recognition (VLPR) and aims to recognize character images, namely the combinations of letters and numbers stamped on vehicle license plates. VLPR has been studied extensively because it eases the identification of vehicle owners. There are three important stages in recognizing vehicle license plates: license plate detection, character segmentation, and character recognition [1][2].
Recognizing the features of each number and letter on the license plate is an important step in VLPR studies, since it influences both the accuracy and the processing speed of the overall recognition system [3].
This study focuses on segmentation and on feature extraction using matrix segmentation to produce a character feature dataset of the letters and numbers on license plates. Feature extraction depends heavily on the results of image segmentation, and the feature dataset produced by feature extraction significantly influences the accuracy achieved when recognizing images with classification algorithms. Accordingly, the better the region of interest produced by the segmentation process, the easier the feature extraction becomes, and the resulting feature dataset will increase the classification accuracy. A precise feature extraction method for producing feature datasets is a fundamental part of classification, since feature datasets produced by good feature extraction can maximize the accuracy of the classification results [3][4].
The non-linear multi-class classification algorithm used in this study is the Support Vector Machine (SVM) with a Radial Basis Function (RBF) kernel. The results of previous studies show that the classification accuracy of SVM has a better success rate than that of the Artificial Neural Network (ANN) [5] and the Backpropagation Neural Network (BNN) [6]. The ANN algorithm is a combination of statistics and knowledge trees and has a higher computational complexity than SVM. The results of non-linear classification with BNN can be inconsistent because the decision boundary margins between classes differ each time a different set of test features is used, whereas SVM provides the same class decision boundary model (the global optimum). In general, the use of neural networks in classification suffers from overfitting on large datasets. Non-linear SVM is chosen to maximize the classification accuracy while handling overfitting with soft margins. This is done by replacing each dot product of test features with a non-linear kernel function matrix. Consequently, it is necessary to determine the best kernel for the available datasets in non-linear multi-class classification with the SVM method. The RBF kernel is used in this and similar studies because it is recommended for obtaining maximum non-linear classification results on a new dataset [7]-[11].
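For reference, the soft-margin decision function and the RBF kernel referred to above can be written in their standard form (standard SVM notation, not reproduced from the paper):

\[ f(\mathbf{x}) = \operatorname{sign}\Big(\sum_{i \in SV} \alpha_i\, y_i\, K(\mathbf{x}_i, \mathbf{x}) + b\Big), \qquad K(\mathbf{x}_i, \mathbf{x}) = \exp\!\big(-\gamma\,\lVert \mathbf{x}_i - \mathbf{x} \rVert^{2}\big), \qquad 0 \le \alpha_i \le C, \]

where the support vectors SV are the training feature vectors with non-zero multipliers \(\alpha_i\), C is the soft-margin cost parameter, and \(\gamma > 0\) controls the width of the RBF kernel.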
Classification using SVM for image features with high data dimensions can significantly improve classification accuracy [9][12]. The use of SVM in classification is advantageous because it does not involve the entire training dataset of image feature vectors in forming the hyperplane and the margins as class separators; only the contributing image feature vectors (the support vectors) are used in forming the hyperplane and the margins.
SVM has shown high accuracy values in classifying license plates and other images in previous studies. In earlier work, the authors optimized the use of multi-class SVM for recognizing images of Geometric Decorative Motifs [13]. Lihong Zheng and Xiangjian He from the University of Technology Sydney conducted research on the use of the Support Vector Machine method for character recognition of Australian vehicle license plates [14]. The purpose of their study was to compare two kernel methods, the linear kernel and the radial basis function kernel, while the multi-class method used in their study was the "One Against All" method. Kumar and Subin conducted a study on vehicle license plate recognition using the Support Vector Machine method [15]; the scope of their study was to detect the location of license plates, to segment the characters of vehicle license plates, and to test the accuracy of the method using multi-class SVM with an RBF kernel.
Such efforts to develop feature extraction that maximizes the classification accuracy of multi-class SVM have been made in previous VLPR research. Andrej and Nikola, in their license plate recognition study, improved the accuracy of multi-class SVM classification (83%) with Compressive Sensing based feature extraction [16]. Compressive Sensing was used to minimize the amount of sample data in the training feature dataset. In addition, to improve the quality of the feature dataset, F. Xie et al. [17] developed a feature extraction that produces features consisting of three characteristics: Vertical Traverse Density (VTD), Horizontal Traverse Density (HTD), and the distance from the left edge to the first white pixel.
In other license plate detection studies [18][19], classification using multi-class SVM was conducted to increase the accuracy value through feature extraction using the Histogram of Oriented Gradients (HOG). The feature dataset built from the extracted feature vector of each normalized candidate was used as the training and testing dataset for multi-class SVM classification with an RBF kernel [20]. In other studies, the feature dataset produced by enhanced geometrical feature topological analysis (eGFTA) was used to improve the accuracy of the SVM classification results [21].
To increase the accuracy of multi-class SVM classification, the authors of this study create a feature extraction method that generates the feature dataset by simplifying the determination of feature values. Image matrix segmentation is used to count the total pixels per segment in the n-th row and m-th column as the feature vector value of each letter or number character. The stages of the method are as follows (a compact MATLAB sketch of these stages is given below): 1. transformation of the scanned image into an ROI image; 2. image enhancement to eliminate unnecessary noise in the ROI image; 3. skeletonization of each image segmentation result; 4. finding the height and width of each letter/number image that has undergone skeletonization; 5. finding the right segmentation matrix; 6. counting the total pixels stored as a feature in each segment of the matrix.
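The following is a minimal MATLAB sketch of these six stages, written as an illustration rather than as the authors' exact code. The file name, the 30-pixel noise threshold, and folding the explicit ROI scan into the binarization step are assumptions; the 40 x 40 skeleton size and the 5 x 5 segmentation matrix with the /10 scaling follow the description given later in the paper. Functions such as imbinarize, bwareaopen, bwlabel, bwmorph, and imresize are standard Image Processing Toolbox calls.

% Illustrative pipeline sketch: scanned plate -> 25-element feature vector per character
img = imread('plate.jpg');                     % hypothetical scanned plate image
bw  = imbinarize(rgb2gray(img));               % stage 1: binarize (ROI scan omitted for brevity)
bw  = bwareaopen(bw, 30);                      % stage 2: remove small noise blobs (assumed threshold)
[L, Ne] = bwlabel(bw);                         % label the connected characters
features = zeros(Ne, 25);                      % one 25-element feature vector per character
for c = 1:Ne
    [r, col] = find(L == c);
    ch = bw(min(r):max(r), min(col):max(col)); % crop one letter/number
    sk = bwmorph(ch, 'skel', Inf);             % stage 3: skeletonize the character
    sk = imresize(sk, [40 40], 'nearest');     % stage 4: standardize to 40 x 40 pixels
    % stages 5-6: 5 x 5 segmentation matrix, counting foreground pixels per 8 x 8 block
    blockSums = squeeze(sum(sum(reshape(sk, 8, 5, 8, 5), 1), 3));
    features(c, :) = reshape(blockSums.', 1, 25) / 10;   % row-by-row order, /10 as in the pseudocode
end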
The method for producing letter and number feature values in this study has lower computational complexity than the methods used in previous studies. Feature extraction is generally carried out by statistical and spectral methods. In the statistical method, feature parameters are determined based on the distribution of gray-level frequencies, and the segmentation matrix is used to determine the proximity between pixels as a function of orientation and spatial distance in digital images. In the spectral method, characteristic parameters are calculated by transforming the spatial-domain image into a signal domain, and the matrix is used to describe the transformation scheme [22]. Matrix-based feature extraction algorithms are applied in the principal component analysis (PCA) and linear discriminant analysis (LDA) methods, which form a covariance matrix from the matrix of image mean values when searching for eigenvalues [23]. Searching for eigenvalues requires more time for high-resolution images; for example, an image with a resolution of 256x256 produces a vector with 65,536 dimensions, and with such a large dimension the calculation of the covariance matrix to produce eigenvalues takes longer. In this study the process is simpler, because the segmentation matrix always uses a 5x5 size and produces a 25-dimensional vector as the feature, regardless of the image resolution. The use of matrix segmentation is based on a survey conducted by Mingqiang et al. [24], which compared 40 techniques for shape-based feature extraction. The survey concluded that the easiest feature extraction process, with low computational complexity and a more efficient process, uses the Shape Matrix, a Spatial Interrelation Feature method. The Shape Matrix is described by an MxN matrix that represents the shape region of each image.
The data used in this study are digital images obtained by scanning Indonesian vehicle license plates (Figure 1) with a black background and white letter/number characters. The image dataset consists of 300 (three hundred) vehicle license plate samples taken under random conditions and lighting of the research objects, and the images were taken/scanned between 10:00 and 15:00. License plates in Indonesia are regulated by Article 68 Paragraph 4 of Law Number 22 of 2009 on Traffic and Road Transportation. The law states that the license plate number must meet requirements of shape, size, material, color, and method of installation. However, it does not contain standard rules for the font type. Thus, the letter and number characters on license plates may vary greatly. This diversity creates its own complexity in generating the feature dataset.
Fig. 1. Example of a Scanned License Plate Image
This research focuses on how the segmentation process is performed on the scanned image and how the feature extraction method is applied to the segmented images with a variety of font types for each letter or number. The feature extraction produces a feature dataset for the numbers 0-9 and the letters A-Z, which is used for training and testing of the non-linear multi-class SVM classification using the RBF kernel and the one-against-all method.
The Region of Interest (ROI) is extracted from the scanned image before the stages of the segmentation process. The pseudocode to generate the ROI is as follows:
Description
[line, column] = size(imgIn);
StartingPoint = int16(line/3);
Implementation
Binarization {RGB to binary image with pixel values 0 or 1 -> Figure 2}
for i = StartingPoint:-1:1
    total1 = 0;
    for j = 1:column
        if (imgIn(i,j) > 0)
            total1 = total1 + 1;
        end
    end
    {if the total of pixels with value 1 is less than 1/8 of the columns}
    if (total1 < int16((column/4)/2))
        break;  {end of repetition}
    end
end
The results of the binarization and the ROI transformation are shown in Figures 2 and 3.
Fig. 2. RGB image (A) and binary image (B)
Fig. 3. Result of the transformation of the scanned image into an ROI image
The next step prior to segmentation is image enhancement of the ROI image to eliminate unnecessary noise. This study used the MATLAB function bwareaopen(). The denoising process aims to simplify the segmentation process in separating the letters and numbers on a license plate from its black background. This process can be seen in Figure 4.
Fig. 4. ROI image with noise (A) and ROI image without noise (B)
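As a small illustration (an editor's sketch, not the authors' exact code), this denoising step could be written with bwareaopen as follows; the 30-pixel minimum object size is an assumed threshold:

% Remove connected components smaller than 30 pixels (assumed threshold)
% so that only the character strokes remain in the ROI image.
roiClean = bwareaopen(roiBW, 30);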
The ROI images without noise become the input to the segmentation process for the letters (A-Z) and numbers (0-9). The segmentation method proposed in this study uses the following algorithm:
1. Mark the connected pixels.
2. Loop from 1 to the maximum marked pixel label.
3. Find the appropriate pixel label; if it is found, insert it into the row and column matrix.
4. Save the image from the minimum row to the maximum row and from the minimum column to the maximum column.
5. If it is not the end of the loop, go back to step 3; if it is the end of the loop, the loop has finished.
The pseudocode of the algorithm is as follows:
Description
imgn = imageInput;  {input image after preprocessing}
[L, Ne] = bwlabel(imgn);
Implementation
for n = 1:Ne
    [r, c] = find(L == n);
    n1 = imgn(min(r):max(r), min(c):max(c));
    img_r = n1;
    imageIn = img_r;
end
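For reference, an equivalent way to obtain the same per-character crops in MATLAB is through regionprops, which returns each labelled region cropped to its bounding box. This is only an alternative sketch, not the authors' implementation:

% Label connected components and extract each character as a cropped
% binary sub-image (equivalent to the min/max row-column cropping above).
[L, Ne] = bwlabel(imgn);
stats = regionprops(L, 'Image');    % 'Image' = region cropped to its bounding box
for n = 1:Ne
    characterImg = stats(n).Image;  % one segmented letter or number image
end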
The result of the segmentation method above is a set of image pieces of the letters and numbers contained in the ROI image. The image resulting from the segmentation, consisting of 4 numbers and 3 letters, is shown in Figure 5 below.
Fig. 5. ROI image (A) and image of the segmentation result (B)
As the input consists of license plate images that vary in the type or shape of each letter and number, the method in this research used 5 variations of each letter or number image obtained from the segmentation results as the training image input. Accordingly, the number of images in the training dataset is 180: license plate number images (0-9) x 5 = 50 images and license plate letter images (A-Z) x 5 = 130 images. Figure 6 shows examples of the training image dataset that will be extracted to produce the training feature dataset:
Fig. 6. Examples of the Training Image Dataset (one row per letter/number class, with Images 1-5 as the five variations)
The feature extraction method used in this study applies image matrix segmentation to find the features of each letter (A-Z) and number (0-9) that will be used as the training and testing feature dataset in classification. The stages of feature extraction are as follows:
1. Find the height and width of each character that has been transformed by the skeletonization process. In MATLAB, skeletonization can be performed with the bwmorph() function. The skeletonization process reduces the letters or numbers to a skeleton, as seen in Figure 7. To simplify the feature extraction process, the skeleton image is standardized to 40 x 40 pixels.
Fig. 7. Result of the Skeletonization Process
2. Calculate the height and width of the letters or numbers in the image resulting from the skeletonization process. The image is then mapped into an n x n matrix based on its height and width. Figure 8 shows an example of a skeleton image mapped into a 5 x 5 segmentation matrix.
3. Count the total pixels per matrix segment as a feature. Thus, each number or letter image has n x n features stored in its feature vector. In Figure 8, a letter A image is mapped into a 5 x 5 segmentation matrix consisting of 25 segments; the total number of pixels in each segment is counted, and this total is the segment's feature value. Examples of feature extraction results obtained by counting the total pixels of each matrix segment as feature values are shown in Table 1.
Fig. 8. 5x5 Image Matrix Segmentation
The following is the pseudocode for calculating the total pixels per matrix segment in an n x n segmentation matrix:
Description
line = number of image rows
column = number of image columns
averagePixelPerLine = int16(line/n)
averagePixelPerColumn = int16(column/n)
TotalPix1 = [ ];
Implementation
{row repetition}
for b = 1 : averagePixelPerLine : line - 1
    {column repetition}
    for k = 1 : averagePixelPerColumn : column - 1
        total1 = 0;
        total0 = 0;
        {row and column repetition within each n x n matrix segment}
        for bb = b : 1 : b + averagePixelPerLine - 1
            for kk = k : 1 : k + averagePixelPerColumn - 1
                if (image(bb,kk) == 0)
                    total0 = total0 + 1;
                else
                    total1 = total1 + 1;
                end
            end
        end
        TotalPix1 = vertcat(TotalPix1, total1/10);
    end
end
output = [TotalPix1];
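Assuming the skeleton image has already been standardized to 40 x 40 pixels and n = 5 (so each segment is an 8 x 8 block), the same per-segment counts can also be computed in vectorized MATLAB form. This is an editor's sketch of an equivalent computation, not the authors' code:

% Sum the foreground pixels in each 8x8 block of a 40x40 logical skeleton image.
% reshape groups the rows into 5 blocks of 8 and the columns into 5 blocks of 8;
% summing over dimensions 1 and 3 gives the 5x5 matrix of per-segment counts.
blockSums  = squeeze(sum(sum(reshape(skeleton40, 8, 5, 8, 5), 1), 3));
featureVec = reshape(blockSums.', 1, 25) / 10;   % row-by-row order, /10 scaling as above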
Table 1. Feature Vectors from Feature Extraction for 5 Image Samples of Letter A (class A; 25 features per image)
Image 1: 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.70, 1.20, 0.90, 0.50, 0, 0, 0, 0, 0, 0.30, 0, 0, 0, 0, 0.80
Image 2: 0, 0, 0, 0, 0, 0.70, 0.80, 0.70, 0.60, 0, 0.40, 1.10, 0.90, 0.50, 1.00, 0.50, 0.70, 0, 0.50, 0.70, 0, 0, 0, 0, 0.80
Image 3: 0, 0, 0, 0, 0, 0.80, 0.80, 0.80, 0.80, 0.10, 0, 0, 0.20, 0, 0.70, 0.80, 0.80, 0.60, 0.80, 0, 0, 0, 0, 0, 0.80
Image 4: 0.10, 0.40, 0.20, 0, 0, 1.00, 0.90, 1.10, 1.20, 0.80, 0.80, 0.80, 0.80, 0.80, 0.80, 1.10, 1.00, 1.00, 1.10, 0.80, 0.10, 0.30, 0, 0, 0.90
Image 5: 0.80, 0.90, 1.30, 0.90, 0.90, 0, 0, 0.40, 1.00, 0.10, 0, 0, 0, 0, 0, 0, 0, 1.20, 0.20, 0, 0.90, 0.80, 0.80, 0.70, 0.60
Feature extraction testing is conducted in this study with values ranging from n = 2 (4 features) to n = 8 (64 features) for the n x n segmentation matrix. The dataset of 300 vehicle license plate images is randomly divided into 200 images as the training dataset and 100 images as the testing dataset. This test is intended to find the best n value, so that the n x n matrix creates the best feature vector and yields the maximum classification accuracy. The segmentation matrix used in this study is 5 x 5 (n = 5), consisting of 25 segments. The pixels of each segment are counted, and the total pixel count of each segment is one feature value. Thus, each image has 25 features stored in its feature vector.
The use of n = 5 is based on trials with values from n = 2 to n = 8 in the feature extraction method with image matrix segmentation. The trials show that the classification accuracy decreases both when the number of features in the feature vector is fewer than 25 (n < 5) and when it is more than 25 (n > 5). The authors' previous research [4] on wavelet decomposition level analysis for feature extraction with SVM classification shows that the number of features in the feature vector obtained from wavelet decomposition influences the results of SVM classification; consequently, it is necessary to use the right level of decomposition. The situation is similar in this study, where the number of features in the feature vector resulting from feature extraction using matrix segmentation affects the results of SVM classification. Consequently, testing with several n values is required, and the results of the trials (Table 2) show that the maximum accuracy value occurs at n = 5.
Table 2. Effect of the n Value on the Accuracy Value with SVM-RBF, C = 15 and γ = 0.8
n Value | Segmentation Matrix | Number of Segments/Features | Overall Accuracy (AC)
2 | 2x2 | 4 | 0.58
3 | 3x3 | 9 | 0.64
4 | 4x4 | 16 | 0.83
5 | 5x5 | 25 | 0.92
6 | 6x6 | 36 | 0.79
7 | 7x7 | 49 | 0.70
8 | 8x8 | 64 | 0.62
The non-linear multi-class classification in this study consists of 36 classes: 26 classes for the letter images A-Z and 10 classes for the number images 0-9. The confusion-matrix accuracy, AC = (TP + TN) / (TP + FP + TN + FN), is used to measure the success ratio of the classification results; for example, with TP = 85, FP = 8, TN = 7, and FN = 0, AC = (85 + 7)/100 = 0.92. The right values of the RBF kernel parameters, the cost function C and gamma γ, are required in classification using non-linear multi-class SVM-RBF with the one-against-all method. Correct values of C and γ maximize the classification function (hyperplane) and balance the margin distance (classes +1 and -1) around the right hyperplane [9]. Using the feature dataset of 100 images produced by feature extraction with 5 x 5 (n = 5) matrix segmentation, classification reaches the highest accuracy value of 0.92 (92%) with the parameter values C = 15 and γ = 0.8, as seen in Table 3.
Table 3. Measurement of Accuracy Level
SVM Parameters | True Positive (TP) | False Positive (FP) | True Negative (TN) | False Negative (FN) | Overall Accuracy (AC)
γ = 0.4, C = 0 | 0 | 93 | 7 | 0 | 0.07
γ = 0.4, C = 5 | 32 | 61 | 7 | 0 | 0.39
γ = 0.4, C = 10 | 32 | 61 | 7 | 0 | 0.39
γ = 0.4, C = 15 | 35 | 58 | 7 | 0 | 0.42
γ = 0.4, C = 20 | 34 | 59 | 7 | 0 | 0.41
γ = 0.6, C = 0 | 0 | 93 | 7 | 0 | 0.07
γ = 0.6, C = 5 | 69 | 24 | 7 | 0 | 0.76
γ = 0.6, C = 10 | 70 | 23 | 7 | 0 | 0.77
γ = 0.6, C = 15 | 72 | 21 | 7 | 0 | 0.79
γ = 0.6, C = 20 | 71 | 22 | 7 | 0 | 0.78
γ = 0.8, C = 0 | 0 | 93 | 7 | 0 | 0.07
γ = 0.8, C = 5 | 81 | 12 | 7 | 0 | 0.88
γ = 0.8, C = 10 | 82 | 11 | 7 | 0 | 0.89
γ = 0.8, C = 15 | 85 | 8 | 7 | 0 | 0.92
γ = 0.8, C = 20 | 81 | 12 | 7 | 0 | 0.88
γ = 1.0, C = 0 | 0 | 93 | 7 | 0 | 0.07
γ = 1.0, C = 5 | 76 | 17 | 7 | 0 | 0.83
γ = 1.0, C = 10 | 77 | 16 | 7 | 0 | 0.84
γ = 1.0, C = 15 | 78 | 15 | 7 | 0 | 0.85
γ = 1.0, C = 20 | 76 | 17 | 7 | 0 | 0.83
γ = 1.2, C = 0 | 0 | 93 | 7 | 0 | 0.07
γ = 1.2, C = 5 | 72 | 21 | 7 | 0 | 0.79
γ = 1.2, C = 10 | 74 | 19 | 7 | 0 | 0.81
γ = 1.2, C = 15 | 74 | 19 | 7 | 0 | 0.81
γ = 1.2, C = 20 | 73 | 20 | 7 | 0 | 0.80
Table 3 shows that increasing C from 0 to 15 increases the accuracy value, while a C value greater than 15 decreases it. This is because the increase in variance at C > 15 causes overfitting. A parameter value of γ > 0.8 increases the bias, so misclassification increases due to underfitting, while γ < 0.8 increases the variance, so misclassification increases because of overfitting.
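For readers who want to reproduce this classification step, the following is a minimal MATLAB sketch (an editor's illustration, not the authors' code). It assumes trainFeat/testFeat are matrices whose rows are the 25-element feature vectors and trainLabels/testLabels hold the 36 class labels as numeric or categorical values; fitcecoc with 'Coding','onevsall' implements the one-against-all scheme, and γ = 0.8 is mapped to MATLAB's KernelScale as 1/sqrt(γ) under the assumption that the toolbox RBF kernel is exp(-||x - z||^2 / KernelScale^2):

% One-against-all multi-class SVM with an RBF kernel (C = 15, gamma = 0.8).
C     = 15;
gamma = 0.8;
tmpl  = templateSVM('KernelFunction', 'rbf', ...
                    'BoxConstraint', C, ...
                    'KernelScale', 1/sqrt(gamma));    % assumed gamma-to-scale mapping
model = fitcecoc(trainFeat, trainLabels, 'Learners', tmpl, 'Coding', 'onevsall');
pred  = predict(model, testFeat);
acc   = sum(pred == testLabels) / numel(testLabels);  % overall accuracy (AC)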
The testing and training feature datasets produced by the feature extraction method with 5 x 5 (n = 5) image matrix segmentation form the feature dataset with the best n value for classifying letter and number images using the non-linear multi-class SVM-RBF method with the parameter values C = 15 and γ = 0.8. The vehicle license plate image segmentation method and the feature extraction using image matrix segmentation in this study are very easy to apply and were successfully applied to multi-class classification with a total of 36 classes, 26 classes for the A-Z letter images and 10 classes for the 0-9 number images, with an accuracy value of 92%. Further research is needed to test the feature dataset produced by the feature extraction method in this study with more classification classes, in terms of processing time and classification accuracy. Matrix segmentation with n = 5 results in 25 features in each image feature vector. However, a trial is still required that adds feature values to the feature vectors of the letter and number images, in the form of centroid values and the height and width of the letters.
1.
Shuang Qiao, ”Research of Improving the Accuracy
of License Plate Character Segmentation”, Fifth International Conference on
Frontier of Computer Science Technology, 2010.
2.
Choudhury A. Rahman, Wael Badawy,”A Real Time Vehicle License Plate
Recognition System”, Proceedings of the IEEE Conference on Advance Video and
Signal Based Surveillance (AVSS’03), 2003.
3.
Xiaojuan Ma, Renlong Pan, Lin Wang, “License
Plate Character Recognition Based on Gaussian-Hermite Moments”, 2010 Second
International Workshop on Education Technology and Computer Science, Guizhou
Key Lab of Pattern Recognition & Intelligent Control Guizhou University for
Nationalities Guiyang, China, 2010
4.
Budiman F.,
Suhendra A., Agushinta D., Tarigan A., “Wavelet Decomposition Levels Analysis
For Indonesia Traditional Batik Classification”. Journal of Theoretical &
Applied Information Technology,
92(2):389-394.
2016.
5.
Ahmad M. Sarhan,
“Wavelet-based Feature Extraction For DNA Microarray Classification”,
Springer Science+Business Media B.V. Artif
Intell Rev, 39:237-249. 2013.
6.
Yuan, Qing-Ni, Lu, Jian, Huang, Haisong, and Pan, Weiji, "Research of Batik Image Classification Based On Support Vector Machine", Computer Modelling & New Technologies 18(12B): 193-196, 2014.
7.
Hofmann, Martin. ”Support Vector Machine-Kernel
and The Kernel Trick”, Bamberg University, 2006.
8.
Hsu, Chih-Wei, Chang, Chih-Chung, Lin Chih-Jen,
“A Practice Guide to Support Vector Classification”. Department of Computer
Science, National Taiwan University, 2010.
9.
Renukadevi, N.T., Thangaraj, P., “Performance Evaluation Of Svm-Rbf
Kernel For Medical Image Classification”, Global Journal of Computer Science
and Technology Graphics & Vision, vol.13 issue 4, Global Journal Inc, USA,
2013.
10.
Rosales-Perez,
Alejandro, Escalante, Hugo Jair, Gonzales, Jesus A., Reyes-Garcia, Carlos A.,
“Bias And Variance Optimization For Svms Model Selection”, Proceedings of the
Twenty-Sixth International Florida Artificial Intelligence Research Society
Conference, 2013.
11.
Budiman F., Suhendra A., Agushinta D.,
& Tarigan A.,
“
Determination Of SVM-RBF Kernel Space
Parameter To Optimize Accuracy Value Of Indonesian Batik Images Classification
”
, Journal of
Computer Science.
13(11):590-599, 2017.
12.
Azhar, Ryfial, Tuwohingide, Desmin, Kamudi, Dasrit, Sarimuddin, and Suciati, Nanink, "Batik Image Classification Using SIFT Feature Extraction, Bag of Features and Support Vector Machine", Procedia Computer Science 72: 24-30, Elsevier, 2015.
13.
Budiman F. “SVM-RBF Parameters Testing
Optimization Using Cross Validation and Grid Search to Improve Multiclass
Classification”, Scientific Visualization, vol.11(1): 80-90, 2019.
14.
Lihong
Zheng, Xiangjian He,”Number Plate Recognition Based on Support Vector Machine”,
Proceeding of the IEEE International Conference on Video and Signal Based
Surveillance, 2006.
15.
Kumar
Parasuraman, “SVM Based License Plate Recognition System”, IEEE International
Conference on Computational Intelligence and Computing Research, 2010.
16.
Jokic A, Vukovic N. “License Plate
Recognition with Compressive Sensing Based Feature Extraction”. arXiv preprint
arXiv:1902.05386. 2019.
17.
Xie F, Zhang M., Zhao J., Yang J., Liu Y.,
Yuan X. A., Robust License Plate Detection and Character Recognition Algorithm
Based on a Combined Feature Extraction Model and BPNN. Journal of Advanced
Transportation. 2018.
18.
Astawa IN, Caturbawa IG, Sajayasa IM,
Atmaja IM. “Detection of License Plate using Sliding Window, Histogram of
Oriented Gradient, and Support Vector Machines Method”. In Journal of Physics:
Conference Series 2018 Jan (Vol. 953, No. 1, p. 012062). IOP Publishing.
19.
Negri P. “A MATLAB SMO Implementation to
Train a SVM Classifier: Application to Multi-Style License Plate Numbers
Recognition”. Image Processing On Line. 22;8:51-70. 2018.
20.
Kumar
Parasuraman, “SVM Based License Plate Recognition System”, IEEE International
Conference on Computational Intelligence and Computing Research, 2010.
21.
Abdullah, Siti Norul Huda Sheikh,
Khairuddin Omar, Shahnorbanun Sahran, and Marzuki Khalid. "License plate
recognition based on support vector machine." In 2009 International
Conference on Electrical Engineering and Informatics, vol. 1, pp. 78-82. IEEE,
2009.
22.
Lekhana, G. C., Srikantaswamy, R. “Real
time license plate recognition system”, International Journal of Advanced
Technology & Engineering Research 2.4: 5-9, 2012.
23.
Lee, Jin-Ki, et al. A Study on
Recognition of Both of PCA and LDA Using Types of Vehicle Plate. The
Journal of Korea Institute of Information, Electronics, and Communication
Technology 6.1
: 6-17, 2013.
24.
Mingqiang, Yang, Kpalma Kidiyo, and
Ronsin Joseph. "A survey of shape feature extraction techniques", Pattern
recognition 15.7: 43-90, 2008.