Support Vector Machine (SVM) is one of the most powerful, though mathematically complex, supervised learning algorithms, used for both regression and classification. It is based on the concept of decision planes (more commonly called hyperplanes) that define decision boundaries for classification: a decision plane separates sets of data points having different class memberships.
It performs classification by finding the optimal hyperplane that maximizes the margin between the two classes with the help of support vectors.
Hyperplanes are the decision boundaries that divide the data points into different classes: points falling on either side of the hyperplane are assigned to different classes. The dimension of the hyperplane depends upon the number of features or attributes. If the number of input features is 2, the hyperplane is a line, meaning the data can be separated by a straight line. If the number of input features is 3, the hyperplane becomes a two-dimensional plane. As the number of input features grows beyond 3, the hyperplane becomes difficult to visualize.
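As a concrete illustration, here is a minimal sketch of fitting a linear SVM on a two-feature toy dataset. The post names no library, so scikit-learn and everything about it here (the SVC class, its attributes, the make_blobs helper) is an assumption; with two input features the learned hyperplane is just a line.

```python
# Minimal sketch: fitting a linear SVM on 2-D data (scikit-learn assumed).
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# Two well-separated clusters with two features -> the hyperplane is a line.
X, y = make_blobs(n_samples=100, centers=2, random_state=0)

clf = SVC(kernel="linear", C=1.0)
clf.fit(X, y)

# With 2 features the decision boundary is w1*x1 + w2*x2 + b = 0.
w = clf.coef_[0]       # normal vector of the separating line
b = clf.intercept_[0]  # offset term
print(f"decision boundary: {w[0]:.2f}*x1 + {w[1]:.2f}*x2 + {b:.2f} = 0")
```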
Support vectors are the data points located closest to the hyperplane. They determine the position and orientation of the hyperplane, and it is with respect to them that we maximize the margin of the classifier. Any change in the support vectors changes the position of the hyperplane; these are the vital points from which the SVM classifier is built.
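A fitted classifier exposes its support vectors directly, so we can check that only a handful of training points define the boundary. The attribute names below again assume scikit-learn.

```python
# Sketch: inspecting the support vectors of a fitted SVM (scikit-learn assumed).
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(n_samples=100, centers=2, random_state=0)
clf = SVC(kernel="linear").fit(X, y)

print(clf.support_vectors_)  # coordinates of the points nearest the hyperplane
print(clf.n_support_)        # how many support vectors each class contributes
```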
For linearly separable data, classification is done by finding an optimal hyperplane between the classes. The distance between the hyperplane and the nearest data point (support vector) from either class is known as the margin. The goal is to choose the hyperplane with the greatest possible margin to any point within the training set, giving new data a greater chance of being classified correctly.
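For a linear SVM the margin has a closed form: if w is the weight vector of the hyperplane, the distance to the nearest point on each side is 1/||w||, so the full margin is 2/||w||. A small sketch of computing it (scikit-learn assumed, as above):

```python
# Sketch: the total margin of a linear SVM is 2 / ||w||, where w is the
# weight vector of the separating hyperplane (scikit-learn assumed).
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(n_samples=100, centers=2, random_state=0)
clf = SVC(kernel="linear").fit(X, y)

margin = 2.0 / np.linalg.norm(clf.coef_[0])
print(f"margin width: {margin:.3f}")
```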
For non-linearly separable data, kernel functions are used. A kernel can be defined as a function that takes non-linear data as input and transforms it into the required (i.e., separable) form. There are several kernel functions, such as the linear kernel, polynomial kernel, RBF kernel, and sigmoid kernel.
In the SVM algorithm, a kernel SVM takes a kernel function and uses it to map the data to a higher dimension in which it can be separated.
Some of the most common types of kernel function are:
- Linear kernel
- Polynomial kernel
- Radial Basis Function (RBF) kernel
- Sigmoid kernel
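In scikit-learn (still an assumption; the post names no library) each of these kernels is selected by name, for example:

```python
# Sketch: trying the common kernels on the same data (scikit-learn assumed).
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(n_samples=100, centers=2, random_state=0)

for kernel in ["linear", "poly", "rbf", "sigmoid"]:
    model = SVC(kernel=kernel).fit(X, y)  # default hyperparameters
    print(kernel, model.score(X, y))      # training accuracy, for illustration
```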
The kernel trick uses the kernel function to transform the data, implicitly, into a higher-dimensional feature space in which linear separation becomes possible for classification.
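The point of the trick is that the kernel computes an inner product in the higher-dimensional space without ever constructing that space. The sketch below (plain NumPy; the feature map phi is a standard textbook construction, not from the post) verifies this for a degree-2 polynomial kernel K(x, z) = (x · z)²:

```python
# Sketch of the kernel trick: a degree-2 polynomial kernel equals an inner
# product in an explicitly mapped higher-dimensional feature space.
import numpy as np

def phi(v):
    # Explicit feature map for K(x, z) = (x . z)^2 with 2-D inputs:
    # maps a 2-D point into a 3-D feature space.
    x1, x2 = v
    return np.array([x1**2, np.sqrt(2) * x1 * x2, x2**2])

x = np.array([1.0, 2.0])
z = np.array([3.0, 0.5])

lhs = np.dot(x, z) ** 2       # kernel evaluated in the original 2-D space
rhs = np.dot(phi(x), phi(z))  # inner product in the mapped 3-D space
print(lhs, rhs)               # both are 16.0 -- same value, no explicit mapping
```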
So it is better to use a linear SVM for linear problems, and a non-linear kernel, such as the sigmoid kernel or the Radial Basis Function (RBF) kernel, for non-linear problems.
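To see why, consider data that no straight line can separate, such as two concentric circles. In the sketch below (scikit-learn assumed), the RBF kernel should fit this data almost perfectly while the linear kernel should perform near chance level:

```python
# Sketch: linear vs. RBF kernel on data that is not linearly separable
# (concentric circles; scikit-learn assumed).
from sklearn.datasets import make_circles
from sklearn.svm import SVC

X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

for kernel in ["linear", "rbf"]:
    model = SVC(kernel=kernel).fit(X, y)
    print(kernel, model.score(X, y))  # RBF should score near 1.0, linear near 0.5
```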