Tutorial on Support Vector Machine (SVM)
Vikramaditya Jakkula,
School of ECS,
Washington State University,
Pullman 99164.
Abstract: In this tutorial we present a brief introduction to SVM, and we discuss SVM based on published papers, workshop materials and material collected from books and from the World Wide Web. We begin by defining SVM and discussing why SVM, with a brief overview of statistical learning theory. The mathematical formulation of SVM is presented, and the theory for the implementation of SVM is briefly discussed. Finally, some conclusions on SVM and its application areas are included. Support Vector Machines (SVMs) are competing with Neural Networks as tools for solving pattern recognition problems. This tutorial assumes you are familiar with concepts of Linear Algebra and real analysis, understand the working of neural networks, and have some background in AI.
Introduction
Machine Learning is considered a subfield of Artificial Intelligence and is concerned with the development of techniques and methods which enable the computer to learn; in simple terms, the development of algorithms which enable the machine to learn and perform tasks and activities. Machine learning overlaps with statistics in many ways. Over time, many techniques and methodologies have been developed for machine learning tasks [1].
The Support Vector Machine (SVM) was first introduced in 1992 by Boser, Guyon, and Vapnik at COLT-92. Support vector machines (SVMs) are a set of related supervised learning methods used for classification and regression [1]. They belong to a family of generalized linear classifiers. In other words, a Support Vector Machine (SVM) is a classification and regression prediction tool that uses machine learning theory to maximize predictive accuracy while automatically avoiding over-fitting to the data. Support Vector Machines can be defined as systems which use a hypothesis space of linear functions in a high dimensional feature space, trained with a learning algorithm from optimization theory that implements a learning bias derived from statistical learning theory. The support vector machine was initially popular with the NIPS community and is now an active part of machine learning research around the world. SVM became famous when, using pixel maps as input, it gave accuracy comparable to sophisticated neural networks with elaborated features in a handwriting recognition task [2]. It is also used for many applications, such as handwriting analysis and face analysis, especially for pattern classification and regression based applications. The foundations of Support Vector Machines (SVM) were developed by Vapnik [3] and gained popularity due to many promising features such as better empirical performance. The formulation uses the Structural Risk Minimization (SRM) principle, which has been shown to be superior [4] to the traditional Empirical Risk Minimization (ERM) principle used by conventional neural networks. SRM minimizes an upper bound on the expected risk, whereas ERM minimizes the error on the training data. It is this difference which equips SVM with a greater ability to generalize, which is the goal in statistical learning. SVMs were developed to solve the classification problem, but recently they have been extended to solve regression problems [5].
Introduction to SVM: Why SVM?
First, working with neural networks for supervised and unsupervised learning showed good results for such learning applications. MLPs use feed-forward and recurrent networks. Multilayer perceptron (MLP) properties include universal approximation of continuous nonlinear functions, learning with input-output patterns, and advanced network architectures with multiple inputs and outputs [10].
Figure 2: Simple Neural Network and Multilayer Perceptron [10] [11]. These are simple visualizations, just to give an overview of how a neural network looks.
Some issues can be noticed with neural networks. They have many local minima, and determining how many neurons might be needed for a task is another issue which decides whether optimality of that NN is reached. Another thing to note is that even when the neural network solutions tend to converge, this may not result in a unique solution [11].
Now let us look at another example where we plot the data and try to classify it. We see that there are many hyper planes which can classify it, but which one is better?
f(x, w, b) = sign(w · x + b)
[a] If yi = +1: w · xi + b ≥ 1
[b] If yi = -1: w · xi + b ≤ -1
[c] For all i: yi (w · xi + b) ≥ 1
In this equation x is a vector point and w is the weight, which is also a vector. So to separate the data, [a] should always be greater than zero. Among all possible hyper planes, SVM selects the one where the distance of the hyper plane from the closest data points is as large as possible. If the training data is good, every test vector is located within radius r of a training vector, and the chosen hyper plane should then be located as far as possible from the data [12]. This desired hyper plane which maximizes the margin also bisects the line between the closest points on the convex hulls of the two datasets. Thus we have [a], [b] and [c].
Figure 5: Representation of Hyper planes [9]
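As a small illustration (not part of the original tutorial), the sketch below checks the decision function and the constraints [a], [b] and [c] numerically; the weight vector, bias and toy data are made-up values chosen only so that the classes are separated with margin at least 1.

```python
import numpy as np

# Hypothetical weight vector and bias for a toy 2-D problem.
w = np.array([1.0, 1.0])
b = -3.0

# Toy training points and their labels (made-up values).
X = np.array([[3.0, 2.0], [4.0, 3.0],    # y = +1
              [1.0, 0.0], [0.0, 1.0]])   # y = -1
y = np.array([+1, +1, -1, -1])

# Decision function f(x, w, b) = sign(w . x + b)
print("predicted labels:", np.sign(X @ w + b))

# Constraint [c]: y_i * (w . x_i + b) >= 1 for every training point
margins = y * (X @ w + b)
print("functional margins:", margins)
print("constraints satisfied:", bool(np.all(margins >= 1)))
```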
The distance from the origin to the closest point on the hyper plane can be found by maximizing x subject to x lying on the hyper plane. For the points on the other side we have a similar scenario. Solving and subtracting the two distances, we get the summed distance from the separating hyperplane to the nearest points:
Maximum Margin = M = 2 / ||w||
Now maximizing the margin is the same as minimizing ||w|| [8]. We therefore have a quadratic optimization problem and need to solve for w and b: we must optimize a quadratic function subject to linear constraints. The solution involves constructing a dual problem in which a Lagrange multiplier αi is associated with each constraint. We need to find w and b such that Φ(w) = ½ wᵀw is minimized, and for all {(xi, yi)}: yi (w · xi + b) ≥ 1.
Solving, we get that w = Σ αi yi xi and b = yk - w · xk for any xk such that αk ≠ 0.
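In practice this constrained quadratic problem is handed to an off-the-shelf solver. As a sketch (scikit-learn is my own choice here, not something prescribed by the tutorial), a linear SVM with a very large C approximates the hard-margin formulation, and w, b and the margin 2 / ||w|| can be read back from the fitted model; the toy data are made up.

```python
import numpy as np
from sklearn.svm import SVC

# Toy, linearly separable data (made-up values).
X = np.array([[3.0, 2.0], [4.0, 3.0], [3.5, 4.0],
              [1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
y = np.array([+1, +1, +1, -1, -1, -1])

# A very large C approximates the hard-margin problem described above.
clf = SVC(kernel="linear", C=1e6).fit(X, y)

w = clf.coef_[0]          # w = sum_i alpha_i * y_i * x_i
b = clf.intercept_[0]     # b, recovered from a support vector with alpha_k != 0
print("w =", w, "b =", b)
print("support vectors:\n", clf.support_vectors_)

# Maximum margin M = 2 / ||w||
print("margin =", 2.0 / np.linalg.norm(w))
```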
Kernel Trick

Let us first look at a few definitions: what is a kernel, and what does feature space mean?
Kernel: If the data is linear, a separating hyper plane may be used to divide the data. However, it is often the case that the data is far from linear and the datasets are inseparable. To allow for this, kernels are used to non-linearly map the input data into a high-dimensional space. The new mapping is then linearly separable [1]. A very simple illustration of this is shown below in Figure 7 [9] [11] [20].
Figure 7: Why use Kernels? [9] [11] [20]
This mapping is defined by the kernel: K(x, y) = Φ(x) · Φ(y)
Feature Space: Transforming the data into feature space makes it possible to define a
similarity measure on the basis of the dot product. If the feature space is chosen suitably,
pattern recognition can be easy [1].
K(xi, xj) = Φ(xi) · Φ(xj)
Figure 8: Feature Space Representation [1] [9]
Note: the legend is not described, as these are sample plots meant only to illustrate the concept involved.
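To make the identity K(xi, xj) = Φ(xi) · Φ(xj) concrete, here is a small check (my own illustration, not from the tutorial) for the homogeneous quadratic kernel K(x, y) = (x · y)², whose explicit feature map on 2-D inputs is Φ(x) = (x1², √2·x1·x2, x2²).

```python
import numpy as np

def phi(v):
    # Explicit feature map for the quadratic kernel on 2-D inputs.
    return np.array([v[0] ** 2, np.sqrt(2) * v[0] * v[1], v[1] ** 2])

def k(x, y):
    # The same quantity computed directly in input space.
    return np.dot(x, y) ** 2

x = np.array([1.0, 2.0])
y = np.array([3.0, 0.5])

print(np.dot(phi(x), phi(y)))  # dot product in feature space
print(k(x, y))                 # kernel value in input space: identical
```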
Now getting back to the kernel trick: we have seen that when w and b are obtained, the problem is solved for the simple linear scenario in which the data is separated by a hyper plane. The kernel trick allows SVMs to form nonlinear boundaries. The steps involved in the kernel trick are given below [12] [24].
[a] The algorithm is expressed using only the inner products of the data sets. This is also called the dual problem.
[b] The original data are passed through non-linear maps to form new data with respect to new dimensions by adding a pairwise product of some of the original data dimensions to each data vector.
[c] Rather than computing an inner product on these new, larger vectors and storing the results in tables for later lookup, we can represent the dot product of the data after the non-linear mapping directly through a kernel function.
Some commonly used kernel functions are the following [8]:
1] Polynomial: A polynomial mapping is a popular method for non-linear modelling.
K(x, x') = (⟨x, x'⟩ + 1)^d
2] Gaussian Radial Basis Function: Radial basis functions are most commonly used with a Gaussian form:
K(x, x') = exp(-||x - x'||² / (2σ²))
3] Exponential Radial Basis Function: A radial basis function produces a piecewise
linear solution which can be attractive when discontinuities are acceptable.
K(x, x') = exp(-||x - x'|| / (2σ²))
4] Multi-Layer Perceptron: The long established MLP, with a single hidden layer, also
has a valid kernel representation.
K(x, 2’) = tanh (ptz,z"} +2)
There are many more, including Fourier, splines, B-splines, additive kernels and tensor products [8]. If you want to read more on kernel functions, you could read the book [8].
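The kernel functions listed above can be written down directly. The sketch below gives minimal NumPy versions with arbitrary parameter values (d, σ, ρ and the offset are illustrative choices, not values recommended by the tutorial).

```python
import numpy as np

def polynomial_kernel(x, z, d=2):
    return (np.dot(x, z) + 1) ** d

def gaussian_rbf_kernel(x, z, sigma=1.0):
    return np.exp(-np.linalg.norm(x - z) ** 2 / (2 * sigma ** 2))

def exponential_rbf_kernel(x, z, sigma=1.0):
    return np.exp(-np.linalg.norm(x - z) / (2 * sigma ** 2))

def mlp_kernel(x, z, rho=0.1, offset=-1.0):
    # Note: this sigmoid form is a valid kernel only for certain rho and offset.
    return np.tanh(rho * np.dot(x, z) + offset)

x = np.array([1.0, 2.0])
z = np.array([2.0, 0.5])
for name, k in [("polynomial", polynomial_kernel),
                ("gaussian RBF", gaussian_rbf_kernel),
                ("exponential RBF", exponential_rbf_kernel),
                ("MLP", mlp_kernel)]:
    print(name, k(x, z))
```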
Controlling Complexity in SVM: Trade-offs
SVM is powerful enough to approximate any training data and generalizes better on the given datasets. The complexity of the kernel affects the performance on new datasets [8]. SVM supports parameters for controlling this complexity; however, SVM does not tell us how to set these parameters, and we should determine them by cross-validation on the given datasets [2] [11]. The diagram given below gives a better illustration.
Figure 9: How to control complexity [2] [9]. The plot shows the bound on the test error and the training error as functions of model complexity, with underfitting at low complexity and overfitting at high complexity (copyright 2001, 2003, Andrew Moore). Note: the legend is not described, as these are sample plots meant only to illustrate the concept involved.
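As a concrete sketch of selecting these complexity parameters by cross-validation (the dataset, the grid values and the use of scikit-learn's GridSearchCV are my own illustrative choices, not prescribed by the tutorial):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Choose the complexity-controlling parameters by 5-fold cross-validation.
X, y = load_breast_cancer(return_X_y=True)

param_grid = {
    "C": [0.1, 1, 10, 100],       # trade-off between margin and training error
    "gamma": [1e-4, 1e-3, 1e-2],  # RBF kernel width (complexity of the kernel)
}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X, y)

print("best parameters:", search.best_params_)
print("cross-validated accuracy:", search.best_score_)
```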
SVM for Classification
SVM is a useful technique for data classification. Even though Neural Networks are considered easier to use than SVM, they sometimes give unsatisfactory results. A classification task usually involves training and testing data which consist of some data instances [21]. Each instance in the training set contains one target value (class label) and several attributes (features).

Often, experts working in the specific application domain have already identified valid similarity measures, particularly in areas such as information retrieval and generative models [25] [27].
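When such an expert-defined similarity measure is available, it can be supplied to an SVM directly as a precomputed kernel. The sketch below is illustrative only: the similarity function is a placeholder (a plain linear Gram matrix) standing in for whatever the domain experts have defined, and the data are synthetic.

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic data standing in for a real application domain.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(20, 5))
y_train = (X_train[:, 0] > 0).astype(int)
X_test = rng.normal(size=(5, 5))

def similarity(A, B):
    # Placeholder for an expert-defined similarity; it must yield a valid kernel.
    return A @ B.T

clf = SVC(kernel="precomputed")
clf.fit(similarity(X_train, X_train), y_train)          # Gram matrix of training data
predictions = clf.predict(similarity(X_test, X_train))  # similarities to training data
print(predictions)
```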
Traditional classification approaches perform poorly when working directly on such data because of its high dimensionality, but Support Vector Machines can avoid the pitfalls of
very high dimensional representations [12]. A very similar approach to the techniques
described for text categorization can also be used for the task of image classification, and
as in that case linear hard margin machines are frequently able to generalize well [8]. The
first real-world task on which Support Vector Machines were tested was the problem of
hand-written character recognition. Furthermore, multi-class SVMs have been tested on
these data. It is interesting not only to compare SVMs with other classifiers, but also to
compare different SVMs amongst themselves [23]. They turn out to have approximately
the same performance, and furthermore to share most of their support vectors,
independently of the chosen kernel. The fact that SVM can perform as well as these
systems without including any detailed prior knowledge is certainly remarkable [25].
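As a small reproduction in spirit of the hand-written character experiments (the scikit-learn digits dataset and the parameter values below are stand-ins chosen for illustration, not the data or settings used in the cited papers):

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Multi-class SVM classification of hand-written digits from raw pixel values.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = SVC(kernel="rbf", gamma=0.001, C=10)  # parameter values are arbitrary choices
clf.fit(X_train, y_train)                   # multi-class handled one-vs-one internally

print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
print("support vectors per class:", clf.n_support_)
```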
Strengths and Weaknesses of SVM
The major strengths of SVM are that the training is relatively easy; there is no local optimum, unlike in neural networks; it scales relatively well to high dimensional data; and the trade-off between classifier complexity and error can be controlled explicitly. The weaknesses include the need for a good kernel function [2] [4] [8] [12] [24].
Conclusion
The tutorial presents an overview of SVM in parallel with a summary of the papers collected from the World Wide Web. Some of the important conclusions of this tutorial are summarized as follows. SVMs are based on statistical learning theory. They can be used for learning to predict future data [25]. SVMs are trained by solving a constrained quadratic optimization problem. SVM implements a mapping of the inputs onto a high dimensional space using a set of nonlinear basis functions. SVM can be used to learn a variety of representations, such as neural nets, splines, polynomial estimators, etc., but there is a unique optimal solution for each choice of the SVM parameters [4]. This is different from other learning machines, such as standard Neural Networks trained using back propagation [26]. In short, the development of SVM is entirely different from that of the usual algorithms used for learning, and SVM provides a new insight into this kind of learning. The four most major features of SVM are duality, kernels, convexity and sparseness [24]. Support Vector Machines act as one of the best approaches to data modeling. They combine generalization control with a technique to control dimensionality. The kernel mapping provides a common base for most of the commonly employed model architectures, enabling comparisons to be performed [8]. In classification problems generalization control is obtained by maximizing the margin, which corresponds to
minimization of the weight vector in a canonical framework. The solution is obtained as a linear combination of a subset of the training points, the support vectors [8].

[22] E. Osuna, R. Freund, and F. Girosi, "An improved training algorithm for support vector machines," in J. Principe, L. Giles, N. Morgan, and E. Wilson, editors, Neural Networks for Signal Processing VII - Proceedings of the 1997 IEEE Workshop, pages 276-285, New York, 1997. IEEE.
[23] M. O. Stitson and J. A. E. Weston, "Implementational issues of support vector machines," Technical Report CSD-TR-96-18, Computational Intelligence Group, Royal Holloway, University of London, 1996.
[24] C. Burges and B. Schölkopf, editors, "Advances in Kernel Methods - Support Vector Learning," MIT Press, 1998.
[25] E. Osuna, R. Freund, and F. Girosi, "Support Vector Machines: Training and Applications," A.I. Memo No. 1602, Artificial Intelligence Laboratory, MIT, 1997.
[26] T. Trafalis, "Primal-dual optimization methods in neural networks and support vector machines training," ACAI'99.
[27] K. Veropoulos, N. Cristianini, and C. Campbell, "The Application of Support Vector Machines to Medical Decision Support: A Case Study," ACAI'99.