Lecture 9: Support Vector Machines

Applied Machine Learning

Volodymyr Kuleshov
Cornell Tech

Part 1: Classification Margins

In this lecture, we are going to cover Support Vector Machines (SVMs), one of the most successful classification algorithms in machine learning.

We start the presentation of SVMs by defining the classification margin.

Review: Components of A Supervised Machine Learning Problem

At a high level, a supervised machine learning problem has the following structure:

$$ \underbrace{\text{Training Dataset}}_\text{Attributes, Features, Targets} + \underbrace{\text{Learning Algorithm}}_\text{Model Class + Objective + Optimizer } \to \text{Predictive Model} $$

Review: Machine Learning Models

A machine learning model is a function $$ f : \mathcal{X} \to \mathcal{Y} $$ that maps inputs $x \in \mathcal{X}$ to targets $y \in \mathcal{Y}$.

Review: Binary Classification

Consider a training dataset $\mathcal{D} = \{(x^{(1)}, y^{(1)}), (x^{(2)}, y^{(2)}), \ldots, (x^{(n)}, y^{(n)})\}$.

We distinguish between two types of supervised learning problems depending on the targets $y^{(i)}$.

  1. Regression: The target variable $y \in \mathcal{Y}$ is continuous: $\mathcal{Y} \subseteq \mathbb{R}$.
  2. Binary Classification: The target variable $y$ is discrete and takes on one of $K=2$ possible values.

In this lecture, we assume $\mathcal{Y} = \{-1, +1\}$.

Review: Linear Model Family

In this lecture, we will work with linear models of the form: \begin{align*} f_\theta(x) & = \theta_0 + \theta_1 \cdot x_1 + \theta_2 \cdot x_2 + ... + \theta_d \cdot x_d \end{align*} where $x \in \mathbb{R}^d$ is a vector of features and $y \in \{-1, 1\}$ is the target. The $\theta_j$ are the parameters of the model.

We can represent the model in a vectorized form \begin{align*} f_\theta(x) = \theta^\top x + \theta_0. \end{align*}

Notation and The Iris Dataset

In this lecture, we are going to again use the Iris flower dataset.

As we just mentioned, we make two additional assumptions: the targets take values in $\mathcal{Y} = \{-1, +1\}$, and we keep only two of the input features so that the data and the decision boundaries are easy to plot.
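
A minimal loading sketch under these assumptions (the particular binary split, setosa versus the rest, and the choice of the first two features are illustrative, not necessarily the ones used in the original notebook):

import numpy as np
from sklearn import datasets

iris = datasets.load_iris()
X = iris.data[:, :2]                   # keep two features so we can plot in 2D
y = np.where(iris.target == 0, -1, 1)  # relabel targets as -1/+1: setosa vs. the rest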

Comparing Classification Algorithms

We have seen several different approaches to classification.

When fitting a model, there may be many valid decision boundaries. How do we select one of them?

Consider the following three classification algorithms from sklearn. Each of them outputs a different classification boundary.
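
For concreteness, here is a hedged sketch of such a comparison; the particular choice of LogisticRegression, Perceptron, and LinearSVC is an assumption and may differ from the algorithms in the original notebook.

from sklearn.linear_model import LogisticRegression, Perceptron
from sklearn.svm import LinearSVC

# Three linear classifiers, fit on the X, y from the loading sketch above.
# Each one learns a different decision boundary on the same data.
models = {
    "Logistic regression": LogisticRegression(),
    "Perceptron": Perceptron(),
    "Linear SVM": LinearSVC(),
}
for name, model in models.items():
    model.fit(X, y)
    print(name, "training accuracy:", model.score(X, y))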

Classification Scores

Most classification algorithms output not just a class label but a score.

The score is an estimate of confidence; it also represents how far we are from the decision boundary.
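
In sklearn, for example, linear classifiers expose this score through `decision_function`; a small illustration using the models from the sketch above:

# The score theta^T x + theta_0 for each training point: positive scores are
# classified as +1, negative as -1, and a larger magnitude means more confidence.
scores = models["Linear SVM"].decision_function(X)
print(scores[:5])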

The Max-Margin Principle

Intuitively, we want to select boundaries with high margin.

This means that we are as confident as possible for every point and we are as far as possible from the decision boundary.

Several of the separating boundaries in our previous example had low margin: they came too close to some of the training points.

Below, we plot a decision boundary between the two classes (solid line) that has a high margin. The two dashed lines lie at the margin.

Points that are at the margin are highlighted in black. A good decision boundary is as far away as possible from the points at the margin.

The Functional Classification Margin

How can we define the concept of margin more formally?

We can try to define the margin $\tilde \gamma^{(i)}$ with respect to a training example $(x^{(i)}, y^{(i)})$ as $$ \tilde \gamma^{(i)} = y^{(i)} \cdot f(x^{(i)}) = y^{(i)} \cdot \left( \theta^\top x^{(i)} + \theta_0 \right). $$

We call this the functional margin. Let's analyze it.

We defined the functional margin as $$ \tilde\gamma^{(i)} = y^{(i)} \cdot \left( \theta^\top x^{(i)} + \theta_0 \right).$$

If $y^{(i)} = 1$, the margin is large when the score $\theta^\top x^{(i)} + \theta_0$ is large and positive; if $y^{(i)} = -1$, it is large when the score is large and negative. In both cases a large positive margin means a confident, correct prediction, while a negative margin means the point is misclassified. Thus a higher margin means higher confidence at each input point.

However, we have a problem: the functional margin is not invariant to rescaling. If we replace $(\theta, \theta_0)$ with $(2\theta, 2\theta_0)$, the decision boundary $\theta^\top x + \theta_0 = 0$ is unchanged, but every functional margin doubles.

It doesn't make sense for the same classification boundary to have different margins simply because we rescaled the parameters.

The Geometric Classification Margin

We define the geometric margin $\gamma^{(i)}$ with respect to a training example $(x^{(i)}, y^{(i)})$ as $$ \gamma^{(i)} = y^{(i)}\left( \frac{\theta^\top x^{(i)} + \theta_0}{||\theta||} \right). $$

Let's again make sure our intuition about the margin holds. $$ \gamma^{(i)} = y^{(i)}\left( \frac{\theta^\top x^{(i)} + \theta_0}{||\theta||} \right). $$ Unlike the functional margin, the geometric margin is invariant to rescaling: replacing $(\theta, \theta_0)$ with $(c\theta, c\theta_0)$ for $c > 0$ multiplies both the numerator and the denominator by $c$, leaving $\gamma^{(i)}$ unchanged.

Geometric Intuitions

The margin $\gamma^{(i)}$ is called geometric because it corresponds to the distance from $x^{(i)}$ to the separating hyperplane $\theta^\top x + \theta_0 = 0$ (dashed line below).

Suppose that $y^{(i)}=1$ (i.e., $x^{(i)}$ lies on the positive side of the boundary). Then:

  1. The points $x$ that lie on the decision boundary are those for which $\theta^\top x + \theta_0 = 0$ (the score is precisely zero).
  2. The vector $\frac{\theta}{||\theta||}$ is perpendicular to the hyperplane $\theta^\top x + \theta_0 = 0$ and has unit norm (a standard geometric fact).
  3. Let $x_0$ be the point on the boundary closest to $x^{(i)}$. Then by definition of the margin $x^{(i)} = x_0 + \gamma^{(i)} \frac{\theta}{||\theta||}$, or $$ x_0 = x^{(i)} - \gamma^{(i)} \frac{\theta}{||\theta||}. $$
  4. Since $x_0$ is on the hyperplane, $\theta^\top x_0 + \theta_0 = 0$, or $$\theta^\top \left(x^{(i)} - \gamma^{(i)} \frac{\theta}{||\theta||} \right) + \theta_0 = 0.$$
  5. Solving for $\gamma^{(i)}$ and using the fact that $\theta^\top \theta = ||\theta||^2$, we obtain $$ \gamma^{(i)} = \frac{\theta^\top x^{(i)} + \theta_0}{||\theta||}. $$

This is precisely our geometric margin for $y^{(i)}=1$. The case $y^{(i)}=-1$ can be handled analogously.

We can use our formula for $\gamma$ to precisely plot the margins on our earlier plot.
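
As a sketch (assuming a fitted linear model with weights `theta` and intercept `theta0`, which are hypothetical names here), the geometric margins follow directly from the formula:

import numpy as np

def geometric_margins(theta, theta0, X, y):
    # gamma_i = y_i * (theta^T x_i + theta_0) / ||theta||
    return y * (X @ theta + theta0) / np.linalg.norm(theta)

# The smallest margin over the training set gives the width of the margin band:
# print(geometric_margins(theta, theta0, X, y).min())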

Part 2: The Max-Margin Classifier

We have seen a way to measure the confidence level of a classifier at a data point using the notion of a margin.

Next, we are going to see how to maximize the margin of linear classifiers.

Review: Linear Model Family

In this lecture, we consider classification with linear models of the form: \begin{align*} f_\theta(x) & = \theta_0 + \theta_1 \cdot x_1 + \theta_2 \cdot x_2 + ... + \theta_d \cdot x_d \end{align*} where $x \in \mathbb{R}^d$ is a vector of features and $y \in \{-1, 1\}$ is the target. The $\theta_j$ are the parameters of the model.

We can represent the model in a vectorized form \begin{align*} f_\theta(x) = \theta^\top x + \theta_0. \end{align*}

Review: Geometric Margin

We define the geometric margin $\gamma^{(i)}$ with respect to a training example $(x^{(i)}, y^{(i)})$ as $$ \gamma^{(i)} = y^{(i)}\left( \frac{\theta^\top x^{(i)} + \theta_0}{||\theta||} \right). $$ This also corresponds to the distance from $x^{(i)}$ to the hyperplane.

Maximizing the Margin

We want to define an objective that will result in maximizing the margin. As a first attempt, consider the following optimization problem. \begin{align*} \max_{\theta,\theta_0,\gamma} \gamma \; & \\ \text{subject to } \; & y^{(i)}\frac{(x^{(i)})^\top\theta+\theta_0}{||\theta||}\geq \gamma \; \text{for all $i$} \end{align*}

This maximizes the smallest margin over the training examples $(x^{(i)}, y^{(i)})$: the constraints guarantee that each point has margin at least $\gamma$, and we make $\gamma$ as large as possible.

Maximizing the Margin

This problem is difficult to optimize because of the division by $||\theta||$ and we would like to simplify it. First, consider the equivalent problem: \begin{align*} \max_{\theta,\theta_0,\gamma} \gamma \; & \\ \text{subject to } \; & y^{(i)}((x^{(i)})^\top\theta+\theta_0)\geq \gamma ||\theta|| \; \text{for all $i$} \end{align*}

Note that this problem has an extra degree of freedom: if we rescale $(\theta, \theta_0)$ to $(c\theta, c\theta_0)$ for any $c > 0$, both sides of each constraint are multiplied by $c$ and the objective $\gamma$ is unchanged, so the solution is not unique.

To enforce uniqueness, we add another constraint that doesn't change the optimal decision boundary (it simply picks one particular rescaling of $(\theta, \theta_0)$): $$ ||\theta|| = \frac{1}{\gamma}. $$ Substituting it into the constraint $y^{(i)}((x^{(i)})^\top\theta+\theta_0)\geq \gamma ||\theta||$ shows that our linear model must assign each $x^{(i)}$ a functional margin of at least one: $$ y^{(i)}((x^{(i)})^\top\theta+\theta_0)\geq 1 \; \text{for all $i$} $$

Maximizing the Margin

If the constraint $||\theta|| = \frac{1}{\gamma}$ holds, then $\gamma = \frac{1}{||\theta||}$, and we can replace $\gamma$ in the optimization problem to obtain: \begin{align*} \max_{\theta,\theta_0} \frac{1}{||\theta||} \; & \\ \text{subject to } \; & y^{(i)}((x^{(i)})^\top\theta+\theta_0)\geq 1 \; \text{for all $i$} \end{align*}

The solution of this problem is still the same.

Maximizing the Margin: Final Version

Finally, instead of maximizing $\frac{1}{||\theta||}$, we can minimize $||\theta||$, or equivalently we can minimize $\frac{1}{2}||\theta||^2$. \begin{align*} \min_{\theta,\theta_0} \frac{1}{2}||\theta||^2 \; & \\ \text{subject to } \; & y^{(i)}((x^{(i)})^\top\theta+\theta_0)\geq 1 \; \text{for all $i$} \end{align*}

This is now a quadratic program that can be solved using off-the-shelf optimization algorithms!
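
As an illustration (not necessarily the solver used in the original notebook), the quadratic program can be written almost verbatim with a generic convex-optimization library such as cvxpy, assuming the classes are linearly separable:

import cvxpy as cp

# Hard-margin SVM as a QP, using the X, y from the earlier Iris sketch.
d = X.shape[1]
theta = cp.Variable(d)
theta0 = cp.Variable()
objective = cp.Minimize(0.5 * cp.sum_squares(theta))
constraints = [cp.multiply(y, X @ theta + theta0) >= 1]
cp.Problem(objective, constraints).solve()
print(theta.value, theta0.value)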

Algorithm: Linear Support Vector Machine Classification

Later, we will see several other versions of this algorithm.
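
In the meantime, this classifier is also available off the shelf; a hedged usage sketch with sklearn (the parameter settings are illustrative):

from sklearn.svm import SVC

# A linear SVM; C controls the soft-margin penalty introduced in the next part.
svm = SVC(kernel="linear", C=1.0)
svm.fit(X, y)
print(svm.coef_, svm.intercept_)  # the learned theta and theta_0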

Part 3: Soft Margins and the Hinge Loss

Let's continue looking at how we can maximize the margin.

Review: Maximizing the Margin

We saw that maximizing the margin amounts to solving the following optimization problem. \begin{align*} \min_{\theta,\theta_0} \frac{1}{2}||\theta||^2 \; & \\ \text{subject to } \; & y^{(i)}((x^{(i)})^\top\theta+\theta_0)\geq 1 \; \text{for all $i$} \end{align*}

This is now a quadratic program that can be solved using off-the-shelf optimization algorithms.

Non-Separable Problems

So far, we have assumed that a separating hyperplane exists. However, what if the classes are non-separable? Then our optimization problem does not have a solution, and we need to modify it.

Our solution is going to be to make each constraint "soft" by introducing "slack" variables $\xi_i \geq 0$ that allow the constraint to be violated: $$ y^{(i)}((x^{(i)})^\top\theta+\theta_0)\geq 1 - \xi_i. $$

In the optimization problem, we assign a penalty $C$ to these slack variables to obtain: \begin{align*} \min_{\theta,\theta_0, \xi}\; & \frac{1}{2}||\theta||^2 + C \sum_{i=1}^n \xi_i \; \\ \text{subject to } \; & y^{(i)}\left((x^{(i)})^\top\theta+\theta_0\right)\geq 1 - \xi_i \; \text{for all $i$} \\ & \xi_i \geq 0 \end{align*}

Towards an Unconstrained Objective

Let's further modify things. Moving around terms in the inequality we get: \begin{align*} \min_{\theta,\theta_0, \xi}\; & \frac{1}{2}||\theta||^2 + C \sum_{i=1}^n \xi_i \; \\ \text{subject to } \; & \xi_i \geq 1 - y^{(i)}\left((x^{(i)})^\top\theta+\theta_0\right) \text{ and } \xi_i \geq 0 \; \text{for all $i$} \end{align*}

At the optimum, the penalty $C\sum_{i=1}^n \xi_i$ pushes each $\xi_i$ down to the smallest value allowed by its constraints:

If $0 \geq 1 - y^{(i)}\left((x^{(i)})^\top\theta+\theta_0\right)$, we classified $x^{(i)}$ correctly (with functional margin at least one) and $\xi_i = 0$.

If $0 < 1 - y^{(i)}\left((x^{(i)})^\top\theta+\theta_0\right)$, then $\xi_i = 1 - y^{(i)}\left((x^{(i)})^\top\theta+\theta_0\right)$.

Thus, $\xi_i = \max\left(1 - y^{(i)}\left((x^{(i)})^\top\theta+\theta_0\right), 0 \right)$.

We simplify notation a bit by writing $(x)^+ = \max(x,0)$.

This yields: $$\xi_i = \max\left(1 - y^{(i)}\left((x^{(i)})^\top\theta+\theta_0\right), 0 \right) := \left(1 - y^{(i)}\left((x^{(i)})^\top\theta+\theta_0\right)\right)^+$$

Towards an Unconstrained Objective

Since $\xi_i = \left(1 - y^{(i)}\left((x^{(i)})^\top\theta+\theta_0\right)\right)^+$ at the optimum, we can take the constrained problem \begin{align*} \min_{\theta,\theta_0, \xi}\; & \frac{1}{2}||\theta||^2 + C \sum_{i=1}^n \xi_i \; \\ \text{subject to } \; & \xi_i \geq 1 - y^{(i)}\left((x^{(i)})^\top\theta+\theta_0\right) \text{ and } \xi_i \geq 0 \; \text{for all $i$} \end{align*}

And we turn it into the following by plugging in the definition of $\xi_i$: $$ \min_{\theta,\theta_0}\; \frac{1}{2}||\theta||^2 + C \sum_{i=1}^n \left(1 - y^{(i)}\left((x^{(i)})^\top\theta+\theta_0\right)\right)^+ $$

Multiplying the objective by $\frac{1}{C} > 0$ does not change the minimizer, so with $\lambda = \frac{1}{C}$ this is equivalent to $$ \min_{\theta,\theta_0}\; \sum_{i=1}^n \left(1 - y^{(i)}\left((x^{(i)})^\top\theta+\theta_0\right)\right)^+ + \frac{\lambda}{2}||\theta||^2 $$ for some $\lambda > 0$.

An Unconstrained Objective

We have now turned our optimization problem into an unconstrained form: $$ \min_{\theta,\theta_0}\; \sum_{i=1}^n \underbrace{\left(1 - y^{(i)}\left((x^{(i)})^\top\theta+\theta_0\right)\right)^+}_\text{hinge loss} + \underbrace{\frac{\lambda}{2}||\theta||^2}_\text{regularizer} $$

The Hinge Loss

Consider again our new loss term for a label $y$ and a prediction $f$: $$ L(y, f) = \max\left(1 - y \cdot f, 0\right). $$

Let's visualize a few losses $L(y=1,f)$ as a function of the score $f$, including the hinge loss.
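
A sketch of such a plot; the exact set of comparison losses (zero-one and logistic here) is an assumption:

import numpy as np
import matplotlib.pyplot as plt

f_vals = np.linspace(-2, 3, 200)
plt.plot(f_vals, np.maximum(1 - f_vals, 0), label="hinge")
plt.plot(f_vals, (f_vals < 0).astype(float), label="zero-one")
plt.plot(f_vals, np.log1p(np.exp(-f_vals)), label="logistic")
plt.xlabel("score f")
plt.ylabel("L(y=1, f)")
plt.legend()
plt.show()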

Properties of the Hinge Loss

The hinge loss is one of the best losses in machine learning! It penalizes misclassified points, it assigns zero loss to points classified correctly with a margin of at least one, and, unlike the zero-one loss, it is convex, which makes it practical to optimize.

Part 4: Optimization for SVMs

We have seen a new way to formulate the SVM objective. Let's now see how to optimize it.

Review: Linear Model Family

In this lecture, we consider classification with linear models of the form: \begin{align*} f_\theta(x) & = \theta_0 + \theta_1 \cdot x_1 + \theta_2 \cdot x_2 + ... + \theta_d \cdot x_d \end{align*} where $x \in \mathbb{R}^d$ is a vector of features and $y \in \{-1, 1\}$ is the target. The $\theta_j$ are the parameters of the model.

We can represent the model in a vectorized form \begin{align*} f_\theta(x) = \theta^\top x + \theta_0. \end{align*}

Review: The Hinge Loss

The hinge loss for a label $y$ and a prediction $f$ is: $$ L(y, f) = \max\left(1 - y \cdot f, 0\right). $$

Review: SVM Objective

Maximizing the margin can be done by minimizing the following objective: $$ \min_{\theta,\theta_0}\; \sum_{i=1}^n \underbrace{\left(1 - y^{(i)}\left((x^{(i)})^\top\theta+\theta_0\right)\right)^+}_\text{hinge loss} + \underbrace{\frac{\lambda}{2}||\theta||^2}_\text{regularizer} $$

We can easily implement this objective in numpy.

First we define the model.
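
A hedged sketch of what this might look like (keeping the weights `theta` and the intercept `theta0` separate, as in the math above):

import numpy as np

def f(X, theta, theta0):
    # linear model f_theta(x) = theta^T x + theta_0, applied to each row of X
    return X @ theta + theta0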

And then we define the objective.
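
And a corresponding sketch of the objective (the default value of `lam` is illustrative):

def svm_objective(theta, theta0, X, y, lam=1.0):
    # hinge-loss term: sum over the training set of (1 - y * f(x))_+
    hinge = np.maximum(1 - y * f(X, theta, theta0), 0).sum()
    # L2 regularizer (lambda / 2) * ||theta||^2; the bias theta0 is not regularized
    return hinge + 0.5 * lam * theta @ theta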

Review: Gradient Descent

If we want to optimize $J(\theta)$, we start with an initial guess $\theta_0$ for the parameters and repeat the following update: $$ \theta_i := \theta_{i-1} - \alpha \cdot \nabla_\theta J(\theta_{i-1}). $$

As code, this method may look as follows:

theta, theta_prev = random_initialization()
while norm(theta - theta_prev) > convergence_threshold:
    theta_prev = theta
    theta = theta_prev - step_size * gradient(theta_prev)

A Gradient for the Hinge Loss?

What is the gradient for the hinge loss with a linear $f$? $$ J(\theta) = \max\left(1 - y \cdot f_\theta(x), 0\right) = \max\left(1 - y \cdot \theta^\top x, 0\right). $$

Here, you see the linear part of $J$ that behaves like $1 - y \cdot f_\theta(x)$ (when $y \cdot f_\theta(x) < 1$) in orange:

When $y \cdot f_\theta(x) < 1$, we are in the "line" part and $J(\theta)$ behaves like $1 - y \cdot f_\theta(x)$.

Our objective is $$ J(\theta) = \max\left(1 - y \cdot f_\theta(x), 0\right) = \max\left(1 - y \cdot \theta^\top x, 0\right). $$ Hence the gradient in this regime is: $$\nabla_\theta J(\theta) = -y \cdot \nabla_\theta f_\theta(x) = -y \cdot x,$$ where we used $\nabla_\theta \theta^\top x = x$.

A Gradient for the Hinge Loss?

What is the gradient for the hinge loss with a linear $f$? $$ J(\theta) = \max\left(1 - y \cdot f_\theta(x), 0\right) = \max\left(1 - y \cdot \theta^\top x, 0\right). $$

When $y \cdot f_\theta(x) > 1$, we are in the "flat" part: $J(\theta) = 0$ and its gradient is simply zero.

A Gradient for the Hinge Loss?

What is the gradient for the hinge loss with a linear $f$? $$ J(\theta) = \max\left(1 - y \cdot f_\theta(x), 0\right) = \max\left(1 - y \cdot \theta^\top x, 0\right). $$

When $y \cdot f_\theta(x) = 1$, we are in the "kink", and the gradient is not defined!

A Steepest Descent Direction for the Hinge Loss

We can define a "gradient" like function $\tilde \nabla_\theta J(\theta)$ for the hinge loss $$ J(\theta) = \max\left(1 - y \cdot f_\theta(x), 0\right) = \max\left(1 - y \cdot \theta^\top x, 0\right). $$ It equals: $$\tilde \nabla_\theta J(\theta) = \begin{cases} -y \cdot x & \text{ if $y \cdot f_\theta(x) > 1$} \\ 0 & \text{ otherwise} \end{cases} $$

Subgradient Descent for SVM

Putting this together, we obtain a complete learning algorithm, based on an optimization procedure called subgradient descent.

theta, theta_prev = random_initialization()
while abs(J(theta) - J(theta_prev)) > conv_threshold:
    theta_prev = theta
    theta = theta_prev - step_size * approximate_gradient(theta_prev)

Let's implement this algorithm.

First we implement the approximate gradient.
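
A hedged numpy sketch, using the model `f` and the objective defined above; each point that violates the margin contributes $-y^{(i)} x^{(i)}$, and the regularizer contributes $\lambda \theta$:

def approximate_gradient(theta, theta0, X, y, lam=1.0):
    # indicator of points with y * f(x) < 1 (the "active" points of the hinge loss)
    active = (y * f(X, theta, theta0) < 1).astype(float)
    grad_theta = -(X.T @ (active * y)) + lam * theta
    grad_theta0 = -np.sum(active * y)
    return grad_theta, grad_theta0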

And then we implement subgradient descent.
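
And a sketch of the descent loop itself (a fixed iteration budget is used for simplicity; the step size and iteration count are illustrative):

def subgradient_descent(X, y, lam=1.0, step_size=1e-3, n_iters=5000):
    theta, theta0 = np.zeros(X.shape[1]), 0.0
    for _ in range(n_iters):
        g_theta, g_theta0 = approximate_gradient(theta, theta0, X, y, lam)
        theta = theta - step_size * g_theta
        theta0 = theta0 - step_size * g_theta0
    return theta, theta0

theta, theta0 = subgradient_descent(X, y)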

We can visualize the results to convince ourselves we found a good boundary.
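
For example, a contour plot of the learned score over a grid (a plotting sketch; the grid resolution and padding are arbitrary):

import matplotlib.pyplot as plt

# Decision boundary theta^T x + theta_0 = 0 (solid) and the margins at +/- 1 (dashed).
xx, yy = np.meshgrid(np.linspace(X[:, 0].min() - 1, X[:, 0].max() + 1, 200),
                     np.linspace(X[:, 1].min() - 1, X[:, 1].max() + 1, 200))
Z = np.c_[xx.ravel(), yy.ravel()] @ theta + theta0
plt.contour(xx, yy, Z.reshape(xx.shape), levels=[-1, 0, 1], linestyles=["--", "-", "--"])
plt.scatter(X[:, 0], X[:, 1], c=y)
plt.show()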

Algorithm: Linear Support Vector Machine Classification