Lecture 16: Unsupervised Learning

Applied Machine Learning

Volodymyr Kuleshov
Cornell Tech

Part 1: What is Unsupervised Learning?

Let's start by understanding what unsupervised learning is at a high level, starting with a dataset and an algorithm.

Unsupervised Learning

We have a dataset without labels. Our goal is to learn something interesting about the structure of the data:

Components of Unsupervised Learning

At a high level, an unsupervised machine learning problem has the following structure:

$$ \text{Dataset} + \text{Algorithm} \to \text{Unsupervised Model} $$

The unsupervised model describes interesting structure in the data. For instance, it can identify interesting hidden clusters.

An Unsupervised Learning Dataset

As a first example of an unsupervised learning dataset, we will use our Iris flower example, but we will discard the labels.

We start by loading this dataset.

We can visualize this dataset in 2D. Note that we are no longer using label information.
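For concreteness, here is a minimal sketch of these two steps using scikit-learn and matplotlib; the choice of which two attributes to plot is an illustrative one, not prescribed by the lecture.

```python
# Load the Iris dataset and keep only the features, discarding the labels.
from sklearn import datasets
import matplotlib.pyplot as plt

iris = datasets.load_iris()
X = iris.data  # (150, 4) feature matrix; iris.target is deliberately ignored

# Visualize the first two attributes in 2D, without using any label information.
plt.scatter(X[:, 0], X[:, 1], alpha=0.7)
plt.xlabel(iris.feature_names[0])
plt.ylabel(iris.feature_names[1])
plt.title("Iris dataset without labels")
plt.show()
```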

An Unsupervised Learning Algorithm

We can use this dataset as input to a popular unsupervised learning algorithm, $K$-means.

Running $K$-means on this dataset identifies three clusters.
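A minimal sketch of this step, assuming the feature matrix `X` from the loading snippet above:

```python
# Run K-Means with three clusters on the unlabeled Iris features.
from sklearn.cluster import KMeans

kmeans = KMeans(n_clusters=3, random_state=0, n_init=10)
clusters = kmeans.fit_predict(X)  # cluster index (0, 1, or 2) for each point

# Color the points by their assigned cluster and mark the centroids.
plt.scatter(X[:, 0], X[:, 1], c=clusters, alpha=0.7)
plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:, 1],
            marker="x", s=100, c="red", label="centroids")
plt.legend()
plt.show()
```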

These clusters correspond to the three species of flowers in the dataset, which we can confirm using the labels we set aside.

Applications of Unsupervised Learning

Unsupervised learning has numerous applications:

Application: Discovering Structure in Digits

Unsupervised learning can discover structure in digits without any labels.

Application: DNA Analysis

Dimensionality reduction applied to DNA data reveals the geography of European countries:

Application: Human Faces

Modern unsupervised algorithms based on deep learning uncover structure in human face datasets.

Unsupervised Learning in This Course

We will explore several types of unsupervised learning problems.

Next, we will start by setting up some notation.

Part 2: The Language of Unsupervised Learning

Next, let's look at how to define an unsupervised learning problem more formally.

Components of an Unsupervised Learning Problem

At a high level, an unsupervised machine learning problem has the following structure:

$$ \underbrace{\text{Dataset}}_\text{Attributes} + \underbrace{\text{Learning Algorithm}}_\text{Model Class + Objective + Optimizer } \to \text{Unsupervised Model} $$

The unsupervised model describes interesting structure in the data. For instance, it can identify interesting hidden clusters.

Unsupervised Dataset: Notation

We define a dataset of size $n$ for unsupervised learning as $$\mathcal{D} = \{x^{(i)} \mid i = 1,2,...,n\}$$

Each $x^{(i)} \in \mathbb{R}^d$ denotes an input, a vector of $d$ attributes or features.

Data Distribution

We will assume that the dataset is sampled from a probability distribution $\mathbb{P}$, which we will call the data distribution. We will denote this as $$x \sim \mathbb{P}.$$

The dataset $\mathcal{D} = \{x^{(i)} \mid i = 1,2,...,n\}$ consists of independent and identically distributed (IID) samples from $\mathbb{P}$.

Data Distribution: IID Sampling

The key assumption is that the training examples are independent and identically distributed (IID).

Example: Flipping a coin. Each flip has the same probability of heads and tails and doesn't depend on previous flips.

Counter-Example: Yearly census data. The population in each year will be close to that of the previous year.

Components of an Unsupervised Learning Algorithm

We can think of an unsupervised learning algorithm as consisting of three components:

Model: Notation

We'll say that a model is a function $$ f : \mathcal{X} \to \mathcal{S} $$ that maps inputs $x \in \mathcal{X}$ to some notion of structure $s \in \mathcal{S}$.

Structure can have many definitions (clusters, low-dimensional representations, etc.), and we will see many examples.

Often, models have parameters $\theta \in \Theta$ living in a set $\Theta$. We will then write the model as $$ f_\theta : \mathcal{X} \to \mathcal{S} $$ to denote that it's parametrized by $\theta$.

Model Class: Notation

Formally, the model class is a set $$\mathcal{M} \subseteq \{f \mid f : \mathcal{X} \to \mathcal{S} \}$$ of possible models that map input features to structural elements.

When the models $f_\theta$ are parametrized by parameters $\theta$ living in some set $\Theta$, we can also write $$\mathcal{M} = \{f_\theta \mid f_\theta : \mathcal{X} \to \mathcal{S}; \; \theta \in \Theta \}.$$

Objective: Notation

To capture the intuition that some models describe the data better than others, we define an objective function (also called a loss function) $$J(f) : \mathcal{M} \to [0, \infty), $$ which describes the extent to which $f$ "fits" the data $\mathcal{D} = \{x^{(i)} \mid i = 1,2,...,n\}$.

When $f$ is parametrized by $\theta \in \Theta$, the objective becomes a function $J(\theta) : \Theta \to [0, \infty).$

Optimizer: Notation

An optimizer finds a model $f \in \mathcal{M}$ with the smallest value of the objective $J$. \begin{align*} \min_{f \in \mathcal{M}} J(f) \end{align*}

Intuitively, this is the function that best "fits" the data on the training dataset.

When $f$ is parametrized by $\theta \in \Theta$, the optimizer minimizes a function $J(\theta)$ over all $\theta \in \Theta$.

An Example: $K$-Means

As an example, let's use the $K$-Means algorithm that we saw earlier.

Recall that:

The $K$-Means Model

We can think of the model returned by $K$-Means as a function $$f_\theta : \mathcal{X} \to \mathcal{S}$$ that assigns each input $x$ to a cluster $s \in \mathcal{S} = \{1,2,\ldots,K\}$.

The parameters $\theta$ of the model are the $K$ centroids $c_1, c_2, \ldots, c_K \in \mathcal{X}$. The cluster of $x$ is $k$ if $c_k$ is the closest centroid to $x$.
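Concretely, the cluster assignment rule can be written as $$ f_\theta(x) = \arg\min_{k \in \{1, \ldots, K\}} \| x - c_k \|. $$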

The $K$-Means Objective

How do we determine whether $f_\theta$ is a good clustering of the dataset $\mathcal{D}$?

We seek centroids $c_k$ such that the total squared distance between the points and their closest centroid is minimized: $$J(\theta) = \sum_{i=1}^n || x^{(i)} - \text{centroid}(f_\theta(x^{(i)})) ||^2,$$ where $\text{centroid}(k) = c_k$ denotes the centroid for cluster $k$.
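As a minimal sketch, this objective can be computed in NumPy as follows; the names `X` and `centroids` are illustrative.

```python
import numpy as np

def kmeans_objective(X, centroids):
    # Squared Euclidean distance from every point to every centroid: shape (n, K).
    dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    # Each point contributes the squared distance to its closest centroid.
    return dists.min(axis=1).sum()
```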

The $K$-Means Optimizer

We can optimize this objective with a two-step process, starting from a random initial cluster assignment $f(x)$.

Repeat until convergence:

  1. Set each $c_k$ to be the center of its cluster $\{x^{(i)} \mid f(x^{(i)}) = k\}$.
  2. Update clustering $f(x)$ such that $x^{(i)}$ is in the cluster of its closest centroid.

This is best illustrated visually (from Wikipedia):

Algorithm: K-Means
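Below is a minimal NumPy sketch of this procedure. It initializes the centroids from randomly chosen data points, which is one common variant, and it simplifies tie-breaking and empty-cluster handling.

```python
import numpy as np

def kmeans(X, K, n_iters=100, seed=0):
    rng = np.random.default_rng(seed)
    # Initialize centroids at K randomly chosen data points.
    centroids = X[rng.choice(len(X), size=K, replace=False)]
    for _ in range(n_iters):
        # Assignment step: each point joins the cluster of its closest centroid.
        dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        assignments = dists.argmin(axis=1)
        # Update step: move each centroid to the center (mean) of its cluster.
        new_centroids = np.array([
            X[assignments == k].mean(axis=0) if np.any(assignments == k) else centroids[k]
            for k in range(K)
        ])
        if np.allclose(new_centroids, centroids):
            break  # converged: the centroids stopped moving
        centroids = new_centroids
    return centroids, assignments
```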

Part 3: Unsupervised Learning in Practice

We will now look at some practical considerations to keep in mind when applying unsupervised learning.

Review: Data Distribution

We will assume that the dataset is sampled from a probability distribution $\mathbb{P}$, which we will call the data distribution. We will denote this as $$x \sim \mathbb{P}.$$

The dataset $\mathcal{D} = \{x^{(i)} \mid i = 1,2,...,n\}$ consists of independent and identically distributed (IID) samples from $\mathbb{P}$.

Review: Generalization

In machine learning, generalization is the property of predictive models to achieve good performance on new, held-out data that is distinct from the training set.

How does generalization apply to unsupervised learning?

Generalization in Unsupervised Learning

We can think of the data distribution as being the sum of two distinct components $\mathbb{P} = F + E$:

  1. A signal component $F$ (hidden clusters, speech, low-dimensional data space, etc.)
  2. A random noise component $E$

A machine learning model generalizes if it fits the true signal $F$; it overfits if it learns the noise $E$.

An Unsupervised Learning Dataset

Consider the following dataset, consisting of a mixture of Gaussians.

We know the true labels of these clusters, and we can visualize them.
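As a sketch, such a toy dataset can be generated with scikit-learn's `make_blobs`; the number of components and the noise level below are illustrative choices.

```python
# Generate a toy mixture-of-Gaussians dataset with four true clusters.
from sklearn.datasets import make_blobs
import matplotlib.pyplot as plt

X, y_true = make_blobs(n_samples=400, centers=4, cluster_std=1.5, random_state=0)

# We know the true cluster labels y_true, so we can visualize them.
plt.scatter(X[:, 0], X[:, 1], c=y_true, alpha=0.7)
plt.title("Toy mixture of Gaussians (colored by true cluster)")
plt.show()
```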

Underfitting in Unsupervised Learning

Underfitting happens when we are not able to fully learn the signal hidden in the data.

In the context of $K$-Means, this means not capturing all the clusters in the data.

Let's run $K$-Means on our toy dataset.

The centroids find two distinct components in the data, but they fail to capture the true structure.
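A sketch of this underfit run, assuming the toy feature matrix `X` from above:

```python
# Underfitting: use too few clusters (K=2 on a dataset with four true clusters).
from sklearn.cluster import KMeans

kmeans_under = KMeans(n_clusters=2, random_state=0, n_init=10).fit(X)
plt.scatter(X[:, 0], X[:, 1], c=kmeans_under.labels_, alpha=0.7)
plt.scatter(kmeans_under.cluster_centers_[:, 0], kmeans_under.cluster_centers_[:, 1],
            marker="x", s=100, c="red")
plt.show()
```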

Consider now what happens if we further increase the number of clusters.

Overfitting in Unsupervised Learning

Overfitting happens when we fit the noise, but not the signal.

In our example, this means fitting small, local noise clusters rather than the true global clusters.
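A sketch of such an overfit run, again on the toy dataset `X`; the choice of $K=10$ is illustrative.

```python
# Overfitting: too many clusters split the true clusters into small,
# noise-driven sub-clusters.
kmeans_over = KMeans(n_clusters=10, random_state=0, n_init=10).fit(X)
plt.scatter(X[:, 0], X[:, 1], c=kmeans_over.labels_, alpha=0.7)
plt.scatter(kmeans_over.cluster_centers_[:, 0], kmeans_over.cluster_centers_[:, 1],
            marker="x", s=100, c="red")
plt.show()
```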

We can see the true structure given enough data.

The Elbow Method

The Elbow method is a way of tuning hyper-parameters in unsupervised learning.

In our example, the decrease in objective values slows down after $K=4$; beyond that point, the curve is close to a straight line.
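A minimal sketch of the elbow method on the toy dataset, using scikit-learn's `inertia_` attribute as the objective value:

```python
# Plot the K-Means objective (inertia) as a function of K and look for
# the "elbow" where the decrease slows down.
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

Ks = range(1, 11)
objectives = [KMeans(n_clusters=k, random_state=0, n_init=10).fit(X).inertia_
              for k in Ks]

plt.plot(Ks, objectives, marker="o")
plt.xlabel("Number of clusters K")
plt.ylabel("K-Means objective (inertia)")
plt.show()
```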

Detecting Overfitting and Underfitting

In unsupervised learning, overfitting and underfitting are more difficult to quantify than in supervised learning.

If our model is probabilistic, we can detect overfitting without labels by comparing the log-likelihood between the training set and a holdout set (next lecture!).

Reducing Overfitting

There are multiple ways to control for overfitting:

  1. Reduce model complexity (e.g., reduce $K$ in $K$-Means)
  2. Penalize complexity in objective (e.g., penalize large $K$)
  3. Use a probabilistic model and regularize it.

Summary

The concept of generalization applies to both supervised and unsupervised learning.