<h1>Lecture 13: Self-supervised learning</h1>
<p><em>In which we introduce the concepts of meta-learning and self-supervision.</em></p>
<p>Supervised (deep) learning has mostly been benchmarked on datasets such as MNIST or CIFAR-10/100, which have a small number of classes and many samples per class.</p>
<p>But humans can generalize really well even with a very small number of examples per class! Think of the last time you saw the picture of an unknown animal. You clearly don’t need hundreds of examples in order to learn a concept.</p>
<p>Even worse: in several applications, you <em>can’t</em> get hundreds of examples anyway. Think of building an AI assistant for doctors performing diagnosis: every test example may be new, the most critical cases are often the rarest, and large datasets are hard to find.</p>
<p>Therefore, it is of crucial importance moving forward to devise DL techniques that succeed with relatively few data points. An interesting early test bed is the <em>Omniglot</em> dataset, popularized in ML by Brendan Lake at NYU CDS, which can be thought of as “Transpose-MNIST” – lots of classes and very few samples per class. How do we effectively learn in this type of scenario?</p>
<p><img src="/dl-notes/assets/figures/omniglot.png" alt="Omniglot dataset" /></p>
<p>Such problems fall into the realm of “few-shot” learning, where “shots” refer to the number of examples. For example, an <em>n-class k-shot</em> classification task requires learning to classify among $n$ classes using only $k$ (potentially $\ll n$) examples per class.</p>
<p>If an ML agent were given a $k$-shot dataset, how should it solve such a challenging task? The rapidly growing field of <em>meta-learning</em> advocates the following principles:</p>
<ul>
<li>each learning agent trying to solve a new task is guided by a higher-level <em>meta-learner</em></li>
<li>the meta-learner possesses <em>meta-knowledge</em> (in the form of features, or pre-trained nets, or other quantities) which is imparted to the learning agents when they are being trained.</li>
<li>(here is the crux) the meta-learner <em>itself</em> can be trainable, and is able to learn from experience as it teaches different agents.</li>
</ul>
<h2 id="transfer-learning">Transfer learning</h2>
<p>Let us be concrete. A canonical example of the above approach is <em>transfer learning</em>. We have actually already discussed transfer learning before (and implemented it for the case of R-CNN type object detection).</p>
<p>The high level idea is that given an ML task with a limited-sample dataset, one starts with a pre-trained <em>base network</em> that has already been trained on perhaps a bigger dataset (like ImageNet for images, or Wikipedia for NLP), and uses the given dataset to fine-tune to any new given task.</p>
<p>The problem, of course, is that one requires a good enough base model to start with. In the examples seen so far, the base model has been pre-trained using a massive dataset. The essence of pre-training is to get “good enough” features which generalize well for the given task, and it is not entirely clear if such “good enough” features could be learned in the few-shot setting. Below we will address more principled ways of performing transfer learning in the few-shot setting.</p>
<h2 id="model-agnostic-meta-learning-maml">Model-agnostic meta learning (MAML)</h2>
<p>Back to transfer learning. A different way of thinking about the few-shot learning problem is to visualize the tasks as different points in the parameter space. In this scenario, transfer learning/fine-tuning can be viewed as a souped-up initialization procedure where we initialize the weights at some known, excellent point in the parameter space, and use the available few-shot data to move the weights to some point better suited to the task.</p>
<p><img src="/dl-notes/assets/figures/maml.png" alt="Model-agnostic meta learning (MAML)" /></p>
<p>Of course, this assumes that the new task we are trying to learn is somehow <em>close enough</em> to the base model that it can be trained via a few steps of gradient descent. As different tasks are solved, can the meta-learner update the base model itself? If trained over sufficiently many tasks, then perhaps the base model no longer needs to be trained using a <em>specific</em>, large dataset – it can be a general model whose only goal is to be “fine-tunable” to different tasks using a small number of gradient descent steps. In that sense, this approach would be <em>model-agnostic</em>.</p>
<p>Let us formalize this idea (which is called model-agnostic meta-learning, or MAML). There is a base model $\phi$. There are $J$ different tasks. We use the base model $f_\phi$ – $\phi$ are the weights – as initialization. For each new task dataset $T_j$, we form a loss function (based on few-shot samples) $L(f_\phi, T_j)$ – this stands for the model $f$ with weights $\phi$ evaluated on training dataset $T_j$ – and fine-tune these weights using gradient descent. In the simplest case, if we used <em>one</em> step of gradient descent, this could be written as:</p>
\[\phi_j \leftarrow \phi - \alpha \nabla_\phi L(f_\phi, T_j) .\]
<p>If we used two steps of gradient descent, we would have to iterate the above equation twice. And so on. We use the final weights $\phi_j$ to solve task $T_j$.</p>
<p>The hope is that if the base model is good enough then the overall cumulative loss across different tasks at the adapted parameters is small as well. This is the <em>meta-loss function</em>:</p>
\[M(\phi) = \sum_{j=1}^J L(f_{\phi_j},T_j) .\]
<p>Notice the interesting nested structure:</p>
<ul>
<li>The meta-loss function $M(\phi)$ depends on the adapted weights $\phi_j$</li>
<li>which in turn depend on the base weights $\phi$ via one or more steps of gradient descent.</li>
</ul>
<p>So we can update the base weights themselves by summing up the gradients computed during the adaptation:</p>
\[\begin{aligned}
\phi &\leftarrow \phi - \beta \nabla_\phi M(\phi) \\
&= \phi - \beta \sum_j \nabla_\phi L(f_{\phi_j},T_j) \\
&= \phi - \beta \sum_j \nabla_\phi L(f_{\phi - \alpha \nabla_\phi L(f_\phi, T_j)}, T_j) .
\end{aligned}\]
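<p>To make the nested update concrete, here is a minimal numerical sketch. (An assumption of this example, not from the lecture: each task $j$ has a scalar quadratic loss $L_j(\phi) = (\phi - c_j)^2$, chosen so that the gradient-of-a-gradient can be written in closed form by hand.)</p>

```python
import numpy as np

# Toy MAML. Assumption: task j has the scalar quadratic loss L_j(phi) = (phi - c_j)^2.
alpha, beta = 0.1, 0.05              # inner (adaptation) and meta step sizes
tasks = np.array([-1.0, 0.5, 2.0])   # c_j: each task's optimal parameter

def inner_adapt(phi, c):
    # One inner gradient step: phi_j = phi - alpha * dL_j/dphi = phi - alpha * 2(phi - c)
    return phi - alpha * 2.0 * (phi - c)

def meta_gradient(phi):
    # Chain rule: dM/dphi = sum_j dL_j/dphi_j * dphi_j/dphi,
    # and for these losses dphi_j/dphi = 1 - 2*alpha (the gradient *of* gradient descent).
    phi_j = inner_adapt(phi, tasks)
    return np.sum(2.0 * (phi_j - tasks) * (1.0 - 2.0 * alpha))

phi = 5.0
for _ in range(300):                 # meta-training loop over tasks
    phi -= beta * meta_gradient(phi)
```

<p>For these symmetric quadratic tasks, the meta-learned initialization converges to the mean of the task optima ($\phi = 0.5$): the point from which every task is reachable in one cheap adaptation step.</p>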
<p>Some further observations:</p>
<ul>
<li>the “samples” in the above update <em>correspond to different tasks</em>. One could use stochastic methods here to speed things up: the meta-learner <em>samples</em> new learning agents, “teaches” them how to update their weights by giving them the base model, and “learns” a new set of base model weights.</li>
<li>“generalization” here corresponds to the fact that after a while, MAML learns parameters that can be adapted to new, unseen tasks via fine-tuning.</li>
<li>the above equation is specific to the learning agents in MAML using <em>one step of gradient descent</em>. But one could use any other optimization method here – $k$-steps of gradient descent, SGD, Adam, Hessian methods, whatever – call this method $\text{Alg}$. Then a general form of MAML is:</li>
</ul>
\[\phi \leftarrow \phi - \beta \sum_j \nabla_\phi L(f_{\text{Alg}(\phi, T_j)}, T_j)\]
<p>The only requirement is that there is some way to take the derivative of $\text{Alg}$ in the chain rule – i.e., MAML <em>works by taking the gradient of gradient descent</em>!</p>
<p>One last point: the above gradient updates in MAML can be quite complicated. In particular, the meta-gradient update requires a gradient-of-a-gradient (due to the chain rule) and is already computationally heavy. If we increase the inner loop to $k$ steps of gradient descent, then we need even higher-order gradients. A series of algorithmic improvements have reduced this computational dependence on the complexity of the optimizer, but we won’t cover them here.</p>
<h2 id="metric-embeddings">Metric embeddings</h2>
<p>An alternative family of meta-learning approaches is learning <em>metric embeddings</em>. The high level idea is to learn embeddings (or latent representations) of all data points in a given dataset (similar to how we learned word embeddings in NLP). If the embeddings are meaningful, then the geometry of the embedding space should tell us class information (and we should be able to use simple geometric methods such as nearest neighbors or perceptrons to classify points).</p>
<p>An early approach (pioneered by LeCun and collaborators in the nineties and revived a few years ago) is <em>Siamese Networks</em>. The goal was to solve one-shot image classification tasks, where we are given a database of exactly one image in each class.</p>
<p>Imagine a training dataset $(x_1, x_2, \ldots, x_n)$. The label indices don’t matter here since all the points are of distinct classes. Siamese nets work as follows.</p>
<ul>
<li>
<p>set up a Siamese network (pair of identical, weight-tied feedforward convnets, followed by a second network). The first part (pair of identical networks) $f_\theta$ consists of a standard convnet mapping data points to some latent set of features; we use this to evaluate every pair of data points and get outputs $f_\theta(x_i)$ and $f_\theta(x_j)$.</p>
</li>
<li>
<p>compute the coordinate-wise distances</p>
</li>
</ul>
\[g(x_i, x_j) = |f_\theta(x_i) - f_\theta(x_j) | .\]
<p>This gives a vector which is a measure of similarity between the embeddings.</p>
<ul>
<li>Feed it through a second network that outputs the probability of a match, i.e., whether the two images are from the same class. A simple such network would be:</li>
</ul>
\[p(x_i, x_j) = \sigma(W g(x_i, x_j))\]
<ul>
<li>Apply standard data augmentation techniques (noise, distortion, etc) and train the network using SGD.</li>
<li>Given a test image, match it with every point in the dataset. The final predicted class is the one with the max matching probability.</li>
</ul>
\[c(x) = \arg \max_{i \in S} p(x,x_i) .\]
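<p>The whole one-shot pipeline can be sketched in a few lines. (Everything below is a stand-in: the embedding is a random projection rather than a trained convnet, and the match-head weights are fixed to negative values to mimic a <em>trained</em> head, where larger embedding distance means lower match probability.)</p>

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in embedding network f_theta (a real Siamese net uses a weight-tied convnet).
W_embed = rng.normal(size=(4, 8))
def f_theta(x):
    return np.tanh(W_embed @ x)

# Match head p(x_i, x_j) = sigmoid(w . g(x_i, x_j)). We fix w to negative values
# to mimic a trained head: larger coordinate-wise distance => lower match probability.
w = -np.ones(4)
def match_prob(x_i, x_j):
    g = np.abs(f_theta(x_i) - f_theta(x_j))   # coordinate-wise distances
    return 1.0 / (1.0 + np.exp(-(w @ g)))

# One-shot classification: compare a test point against each class exemplar.
support = [rng.normal(size=8) for _ in range(5)]   # one example per class
x_test = support[2] + 0.01 * rng.normal(size=8)    # a slightly perturbed view of class 2
pred = max(range(5), key=lambda i: match_prob(x_test, support[i]))
```

<p>The predicted class is the exemplar with the highest match probability, exactly the $\arg\max$ rule above.</p>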
<p>This idea was refined to the $k$-shot case via <em>Matching Networks</em> by Vinyals and coauthors in 2016. The steps are similar to the ones above, except that we don’t compute distances in the middle, and instead use a trainable <em>attention</em> mechanism (instead of a standard MLP) over the support set to declare the final class:</p>
\[\hat{y}(x) = \sum_{i=1}^{n} \sum_{j = 1}^{k} a(x_{ij}, x) \, y_{i} ,\]
<p>where $x_{ij}$ is the $j$-th support example of class $i$, $a(\cdot,\cdot)$ is the learned attention kernel, and the predicted class is the one receiving the largest total weight.</p>
<p>Other attempts along this line of work include:</p>
<ul>
<li>Triplet networks, where we use three identical networks and train with triples of samples $(x', x'', x^{-})$ – the first two from the same class and the last from a different class;</li>
<li>Prototypical Networks</li>
<li>Relation Networks</li>
</ul>
<p>among several others.</p>
<h2 id="contrastive-self-supervision">Contrastive self-supervision</h2>
<p>This idea of using Siamese networks to learn useful embedding features for unlabeled/few-shot datasets is rather similar to the <em>next-sentence-prediction</em> task that we used to learn BERT-style representations in NLP.</p>
<p>We can use similar techniques for other data types too! For example, imagine that we were trying to learn embeddings for image- or video- data. The Siamese network idea works here too – as long as we develop a <em>contrastive pretext</em> task that enables us to devise embeddings and compare pairs (or triples) of inputs. The above example of Siamese networks corresponded to a “same-class-classification” pretext task. But we could think of others:</p>
<ul>
<li>
<p>for images, one candidate pretext task could be to predict relative transformations: given two images, predict whether one is a rotation/crop/color transformation of the second, or not.</p>
</li>
<li>
<p>for video, one candidate pretext task could be <em>shuffle-and-learn</em>: given three frames, the goal is to restore them to a temporally coherent order.</p>
</li>
<li>
<p>For audio-video, a candidate pretext task could be to match whether the given audio corresponds to the video, or not.</p>
</li>
<li>
<p>Jigsaw puzzles: the input is a bunch of shuffled tiles, and the goal is to predict a permutation.</p>
</li>
</ul>
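<p>A tiny illustration of how such pretext labels come for free from unlabeled data: here is one way (a sketch; the exact labeling scheme varies by paper) to enumerate shuffle-and-learn training examples from a clip of three frames, labeling an ordering positive if it is temporally coherent (forward or reversed).</p>

```python
from itertools import permutations

def shuffle_and_learn_examples(num_frames=3):
    """Enumerate (frame-order, label) pretext examples for one clip."""
    forward = tuple(range(num_frames))
    backward = tuple(reversed(forward))
    examples = []
    for perm in permutations(range(num_frames)):
        label = 1 if perm in (forward, backward) else 0   # coherent vs. shuffled
        examples.append((perm, label))
    return examples

examples = shuffle_and_learn_examples()
# 3! = 6 orderings per clip, of which exactly 2 are temporally coherent
```

<p>No human annotation is needed: the labels are generated mechanically from the raw video.</p>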
<p>All these methods have been applied to varying degrees of success, culminating in <a href="https://arxiv.org/pdf/2002.05709.pdf">SIMCLR</a> (by Hinton and co-authors) which reached AlexNet-level performance on image classification using 100X fewer labeled samples.</p>
<p>The idea in SimCLR is surprisingly simple: given an image $x$, a good model must be able to distinguish “positive” examples (e.g. all natural geometric transformations of this image, say $T(x)$) from “negative” ones (e.g. other images from a minibatch not related to this one).</p>
<p>The way it implements this is via learning features by optimizing a <em>contrastive</em> loss. Given an image $x$, a positive example $x_+$ sampled from $T(x)$, and a set of negative examples $N$, SIMCLR does the following:</p>
<ul>
<li>
<p>Apply an encoder $g$ to all data points. Think of this encoder as (say) a standard ResNet.</p>
</li>
<li>
<p>Apply a small “projection head” $h$ to map the encoder features to a space where the loss can be applied. This can be, for example, a shallow MLP similar to what we used for Siamese networks above.</p>
</li>
<li>
<p>Let $z = h(g(x)), z_+ = h(g(x_+)), z_n = h(g(x_n))$. Take a gradient step that optimizes a cross-entropy style <em>contrastive loss</em>; here, $\tau$ is a temperature parameter and $\text{sim}(\cdot,\cdot)$ is the cosine similarity between two vectors.</p>
</li>
</ul>
\[l = - \log \frac{\exp\left(\text{sim}(z,z_+)/\tau\right)}{\sum_{z_n \in N \cup \{z_+\}} \exp\left(\text{sim}(z,z_n)/\tau\right)}\]
<p>Why does this loss make sense? Intuitively, starting from a totally unlabeled dataset, we are setting up a classification problem with as many “classes” as samples in a minibatch. Therefore, a good feature embedding $g$ should learn to sufficiently separate out the different “classes”, i.e., group features corresponding to the same root image together while pushing the rest far apart. In this sense, SIMCLR can be viewed as a considerable simplification of the Siamese Net idea.</p>
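<p>The loss above can be computed for a single anchor in a few lines of plain numpy. (Hedged: real SimCLR computes this over all pairs in a large batch; here the embeddings are just random vectors, with a nearby vector standing in for the augmented view $T(x)$.)</p>

```python
import numpy as np

def cosine_sim(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def contrastive_loss(z, z_pos, z_negs, tau=0.5):
    """SimCLR-style loss for one anchor z, one positive, and a set of negatives."""
    logits = np.array([cosine_sim(z, z_pos)] +
                      [cosine_sim(z, zn) for zn in z_negs]) / tau
    logits -= logits.max()                       # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                     # cross-entropy: positive is the "class"

rng = np.random.default_rng(1)
z = rng.normal(size=16)
z_aug = z + 0.1 * rng.normal(size=16)            # stand-in for T(x): a nearby view
negatives = [rng.normal(size=16) for _ in range(8)]

loss_aligned = contrastive_loss(z, z_aug, negatives)
loss_swapped = contrastive_loss(z, negatives[0], [z_aug] + negatives[1:])
```

<p>The loss is small exactly when the designated positive really is the view most similar to the anchor, which is the behavior the embedding is trained toward.</p>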
<p>Once feature embeddings have been learned, we can drop the projection head and just use the feature encoder $g$. For any new downstream task (even with a small number of examples), we can then throw a linear classifier on top and either freeze the encoder weights and learn only the top layer, or fine-tune all the weights, depending on how much data is present. Here is a visualization of comparisons with other self-supervised baselines:</p>
<p><img src="/dl-notes/assets/figures/simclr.png" alt="SimCLR performance" /></p>
<p>For a nice illustrated summary, see <a href="https://amitness.com/2020/03/illustrated-simclr/">here</a>.</p>
<h2 id="contrastive-language-image-pretraining">Contrastive Language-Image Pretraining</h2>
<p>Simple ideas that work (like SIMCLR) usually lead to good things. One (surprisingly powerful) offshoot is <em>language-image</em> feature learning.</p>
<p>Say we have a large, unstructured dataset of <em>captioned images</em>. This dataset can be acquired (say) by scraping Flickr or Instagram or something else.</p>
<p>Using this dataset, we can use a SIMCLR-style approach to learn a <em>multi-modal</em> feature encoder, having one tower of weights for the language part (call it $g_l()$), and one tower of weights (call it $g_v()$) for the image part. These weights should satisfy two properties:</p>
<ul>
<li>features learned by applying the image tower to an image, and the language tower to the <em>corresponding</em> caption, should be similar;</li>
<li>image features should be far from the caption features of other images in the minibatch.</li>
</ul>
<p>So, this is basically SIMCLR, except that the “transformation” backbone is removed and replaced by one that processes a natural language caption! This approach is called CLIP (Contrastive Language-Image Pretraining), and this model serves as the feature-extraction bedrock of more exciting, subsequent developments in unsupervised generative models (such as DALL-E 2 and Stable Diffusion). See the CLIP paper for details; the figure below is a great illustration.</p>
<p><img src="/dl-notes/assets/figures/clip-illustration.png" alt="CLIP" /></p>
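<p>The symmetric two-tower objective can be sketched as follows. (Assumptions of this sketch: the towers $g_v$ and $g_l$ are random linear maps standing in for real image/text encoders, and the temperature <code>tau</code> is fixed rather than learned.)</p>

```python
import numpy as np

rng = np.random.default_rng(0)
W_img = rng.normal(size=(8, 32))   # stand-in for the image tower g_v
W_txt = rng.normal(size=(8, 64))   # stand-in for the language tower g_l

def normalize_rows(Z):
    return Z / np.linalg.norm(Z, axis=1, keepdims=True)

def log_softmax_rows(M):
    m = M.max(axis=1, keepdims=True)
    return M - m - np.log(np.exp(M - m).sum(axis=1, keepdims=True))

def clip_loss(images, captions, tau=0.07):
    """Symmetric contrastive loss; matched (image, caption) pairs sit on the diagonal."""
    zi = normalize_rows(images @ W_img.T)
    zt = normalize_rows(captions @ W_txt.T)
    logits = zi @ zt.T / tau                         # pairwise cosine similarity / tau
    loss_i2t = -np.trace(log_softmax_rows(logits))   # image -> matching caption
    loss_t2i = -np.trace(log_softmax_rows(logits.T)) # caption -> matching image
    return (loss_i2t + loss_t2i) / (2 * len(images))

batch_images = rng.normal(size=(4, 32))
batch_captions = rng.normal(size=(4, 64))
loss = clip_loss(batch_images, batch_captions)
```

<p>Each image is scored against every caption in the minibatch (and vice versa), and cross-entropy pulls the matched pair to the top of both rankings.</p>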
<p>A few more CLIP-isms:</p>
<ul>
<li>
<p>Just as SIMCLR, we can do transfer learning by taking the image feature tower, throwing a linear layer on top, and finetuning to a new (given, small-size) dataset.</p>
</li>
<li>
<p>A beautiful benefit of CLIP is the ability to do <em>zero-shot transfer</em>. Say we wanted to build an animal classifier, but our training dataset had zero images of (say) a woolly mammoth. This would be an entirely new class label (outside the set of known concepts), and in the standard supervised setup, there would be no way of recognizing this new concept.</p>
<p>However, CLIP circumvents this as follows. To recognize the woolly mammoth, all we would have to provide is a <em>caption in natural language describing the picture of a mammoth</em>, something like “a photo of an animal that looks like an elephant but has brown fur and big tusks”.</p>
<p>Why would this work? Notice that we not only learned good image features via CLIP; we <em>also aligned the features with text descriptions</em>. Therefore, generalization to new, totally unseen categories can effectively happen, if somehow we were able to map the new category to a string of already-seen language tokens.</p>
<p>Technically speaking, this is not a fully unsupervised approach (there is the issue of coming up with a suitable caption, or prompt), so this method can be viewed as <em>weak</em> language supervision.</p>
</li>
</ul>
<h2 id="generative-pre-training">Generative Pre-Training</h2>
<p><em>Under construction</em>.</p>
<h1>Lecture 11: Applications of Deep RL</h1>
<p><em>In which we discuss success stories of deep RL, and the road ahead.</em></p>
<h2 id="alphago">AlphaGo</h2>
<p>A major landmark in deep learning research was the demonstration of AlphaGo in 2015, which was one of the success stories of deep RL in real(istic) applications.</p>
<p>Go is a two-player board game where the players take turns placing black and white “stones” on a 19x19 grid; the goal is to surround the opponent’s pieces and “capture” territory. The winner is declared by counting each player’s surrounded territory.</p>
<p><img src="/dl-notes/assets/figures/go.png" alt="The game of Go" /></p>
<p>The classical way to solve such two-player games (and others like Chess) via AI is to search a <em>game tree</em>, where each node is a game state (or snapshot) and its children are the results of possible actions taken by each player. The leaves of the tree denote end states, and the goal of the AI is to discover paths to valuable/winning leaves while avoiding bad paths. Leaving aside the definition of “value”, this is obviously a very large tree in both Chess and Go, with the number of leaves exponential in the depth (i.e., the number of moves in the game).</p>
<p>[An aside: in Chess after sufficiently many moves there is a particular phase called the <em>Endgame</em>, after which the winning sequence of moves are more or less well understood, and can be hard coded. Computer chess heavily relied on this particular trick; unfortunately, endgames in Go are way more complicated, and solving Go via computer was viewed as a major bottleneck.]</p>
<p>One way to reduce the number of paths explored is <em>Monte Carlo Tree Search</em>, which is a crude way of estimating the <em>value function</em> $V(s)$ of each state (i.e., each node in the tree) via random rollouts.</p>
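<p>As a toy illustration of estimating values by random play (not Go; just the smallest game where the idea is visible, and chosen for this note rather than taken from the lecture): players alternately remove 1 or 2 stones from a pile, and whoever takes the last stone wins. Random rollouts estimate how favorable each pile size is for the player to move.</p>

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout_value(n_stones, n_rollouts=2000):
    """Estimate V(s) = P(player to move wins) under uniformly random play."""
    wins = 0
    for _ in range(n_rollouts):
        n, mover_is_root_player = n_stones, True
        while True:
            n -= min(int(rng.integers(1, 3)), n)   # randomly take 1 or 2 stones
            if n == 0:
                break                              # whoever just moved has won
            mover_is_root_player = not mover_is_root_player
        wins += mover_is_root_player
    return wins / n_rollouts
```

<p>Under random play, $V(1) = 1$ exactly, $V(2) = 1/2$, and $V(3) = 1/4$; the Monte Carlo estimates recover these values up to sampling noise, which is exactly the kind of crude value signal used to prune the game tree.</p>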
<p>The beauty of DeepMind’s AlphaGo (which was introduced in 2016) is that it completely eschews a tree-based data structure for representing the game. Instead, the state of the game is represented by a 19x19 black/white/gray <em>image</em>, which is fed into a deep neural network – just like how we would classify an MNIST grayscale image. The output of the network is the instantaneous policy, i.e., distribution over possible next moves. The architecture is a vanilla 13-layer convnet.</p>
<p>In fact, just this network is enough to do well in Go. One can train this in a standard supervised learning manner using an existing database of game-state/next-move pairs, and beat computer Go players based on tree search nearly 99% of the time! But top human players were able to beat this model.</p>
<p>But AlphaGo leverages the fact that we can do even better with RL. We can update the above network using <em>self-play</em>, where we create new games by sampling rollouts using the predicted distribution, measure rewards at the end of the game, and use the REINFORCE algorithm for further updating the weights.</p>
<p>In addition to the policy network trained above, AlphaGo also constructs a second network (called the <em>value</em> network) which, for a given state, predicts which player has the advantage. In some sense, one can view this as analogous to how we motivated GANs: the policy network proposes actions to take, and the value network evaluates how good different actions are in terms of expected return. [Such an approach is called an <em>actor-critic method</em>, which we discuss below.] There were other additional hacks thrown on top to make everything work, but this is the gist. Read the (very enjoyable) <a href="https://www.nature.com/articles/nature16961">paper</a> if you would like to learn more.</p>
<h2 id="actor-critic-methods">Actor-critic methods</h2>
<p>The high level idea in actor-critic methods is to <em>combine</em> elements from both policy gradients as well as Q-learning. Recall that the key idea in policy gradients was the computation of the update rule using the log-derivative trick:</p>
\[\frac{\partial}{\partial \theta} \mathbb{E}_{\pi(\tau)} R(\tau) \approx R(s,a) \frac{\partial}{\partial \theta} \log \pi_\theta(a | s).\]
<p>Here, for simplicity we have ignored trajectories and assumed that policies only depend on the current state of the world, and rewards only depend on the current action that we take. The policy network $\pi_\theta$ outputs a distribution over actions; favorable actions are associated with higher probabilities. We call this the <em>actor</em> network.</p>
<p>Instead of using the reward $R$ directly (which could be sparse, or non-informative), we instead replace this via the <em>expected</em> discounted reward, which is essentially the <em>Q-function</em> $Q(s,a)$. But where does this value come from? To compute this, we use a <em>second</em> auxiliary neural network (call it $Q_\phi$ where $\phi$ denotes this auxiliary network’s weights). We call this the <em>critic</em> network.</p>
<p>This sets up an interesting game-theoretic interpretation. The actor learns to play the game and picks the best moves at each time step. The critic learns to estimate the values of different actions by the actor, keeping track of long-term future rewards. There are other concepts involved here (such as the <em>advantage</em> function) and two-time-scale learning, which we won’t get into; they are best left for a detailed course on RL.</p>
<p>The overall algorithm proceeds as follows:</p>
<ol>
<li>
<p>Initialize $\theta, \phi, s$</p>
</li>
<li>
<p>At each time step:</p>
<p>a. Sample $a' \sim \pi_\theta(a' \mid s_t)$</p>
<p>b. Update actor network weights $\theta$ according to log-derivative trick</p>
<p>c. Compute Bellman error $\delta_t$</p>
<p>d. Update critic network weights $\phi$ according to Q-learning updates.</p>
<p>e. Update state to $s_{t+1}$ and repeat!</p>
</li>
</ol>
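<p>The loop above can be sketched on the smallest possible example: a single state with two actions (a bandit), where, by assumption in this sketch, action 1 always pays reward 1 and action 0 pays nothing. With no next state, the Bellman error degenerates to a plain TD error.</p>

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.zeros(2)        # actor: softmax logits over the two actions
q = np.zeros(2)            # critic: Q-value estimate per action
lr_actor, lr_critic = 0.1, 0.2

def policy(theta):
    e = np.exp(theta - theta.max())
    return e / e.sum()

for _ in range(2000):
    pi = policy(theta)
    a = int(rng.choice(2, p=pi))         # sample an action from the actor
    r = 1.0 if a == 1 else 0.0           # environment: action 1 pays, action 0 does not
    delta = r - q[a]                     # Bellman/TD error (no next state here)
    q[a] += lr_critic * delta            # critic update
    grad_log = -pi                       # d log pi(a) / d theta ...
    grad_log[a] += 1.0                   # ... = one-hot(a) - pi
    theta += lr_actor * q[a] * grad_log  # actor update via the log-derivative trick
```

<p>The critic's estimate $Q(\cdot, 1)$ rises toward 1, and the actor, using that estimate as its reward signal, concentrates nearly all its probability on action 1.</p>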
<h2 id="alphafold2">AlphaFold2</h2>
<p><em>To be added.</em></p>
<h1>Lecture 11: Generative Adversarial Networks</h1>
<p><em>In which we introduce the concept of generative models and two common instances encountered in deep learning.</em></p>
<p>Much of what we have discussed in the first part of this course has been in the context of making <em>deterministic, point</em> predictions: given an image, predict cat vs dog; given a sequence of words, predict the next word; given an image, locate all balloons; given a piece of music, classify it; etc. By now you should be quite clear (and confident) in your ability to solve such tasks using deep learning (given, of course, the usual caveats on dataset size, quality, loss function, and so on).</p>
<p>All of the above tasks have a well defined <em>answer</em> to whatever question we are asking, and deep networks trained with suitable supervision can find them. But modern deep networks can be used for several other interesting tasks that conceivably fall into the purview of “artificial intelligence”. For example, think about the following tasks (that humans can do quite well) that do not cleanly fit into the supervised learning framework:</p>
<ul>
<li>
<p>find underlying laws/characteristics that are salient in a given corpus of data.</p>
</li>
<li>
<p>given a topic/keyword (say “water lily”), draw/synthesize a new painting (or 250 paintings, all different) based on the keyword.</p>
</li>
<li>
<p>given a photograph of a face (with the left half blacked out), mentally visualize how the rest would look.</p>
</li>
<li>
<p>be able to quickly adapt to new tasks.</p>
</li>
<li>
<p>be able to memorize and recall objects.</p>
</li>
<li>
<p>be able to plan ahead in the face of uncertain and changing environments;</p>
</li>
</ul>
<p>among many others.</p>
<p>In the latter part of the course we will focus on solving such tasks. Somewhat fortunately, the main ingredients of deep learning (feedforward/recurrent architectures, gradient descent/backpropagation, data representations) will remain the same – but we will put them together into novel formulations.</p>
<h2 id="gans">GANs</h2>
<p>We will now discuss families of generative models that are able to accurately reproduce very “realistic” data, even in high dimensions (such as high resolution face images).</p>
<p>Certain tasks in ML have well defined objective functions. (For example, classification; the obvious metric here is the 0/1 loss, and cross-entropy is its natural continuous relaxation.)</p>
<p>Certain tasks don’t have a well-defined objective function. For example, if we ask a neural net to “draw a painting”, the loss function is not well-defined.</p>
<p>However, we can provide <em>examples</em> of paintings and hope to reproduce more of those. Mathematically, if there is a sub-manifold of all image data that correspond to paintings, we can think of it as a distribution, learn its parameters, and then sample from it. (This is roughly the philosophy we used last time, but note that we are not necessarily assigning likelihoods here.)</p>
<p>Let us use a different approach this time, and work backwards. Let’s say our generative model (which is a neural network) was able to generate a sample painting. Let’s say an oracle (or human) is available, who can eyeball the painting and returns YES if the sample painting is realistic enough, and NO if not. This piece of information can be viewed as a rough error signal — and if there was some way to “backpropagate” this error, we can use gradient descent to iteratively adjust the parameters of the network, and generate more and more samples until the sample output always passes the eye test.</p>
<p>Sounds like a good idea, except, having an actual human to check each sample is not feasible.</p>
<p>To resolve this issue, let us now replace the oracle with a <em>second</em> neural network. We call this the <em>discriminator</em> or the <em>critic</em>, which, in principle, should be able to tell the difference between “real” data samples, obtained from nature, and “synthetic” data samples produced by the generator.</p>
<p>But this discriminator network itself needs to be trained in order to learn to distinguish between real and fake samples. The insight used in GANs is a clever <em>bootstrapping</em> technique, where the samples from the generator serve as the fake data samples and compared with a training dataset of real samples.</p>
<p>Moreover, the bootstrapping technique enables us to iteratively improve <em>both</em> the generator and the discriminator. In the beginning, the discriminator does its job easily: the generator produces noise, and the discriminator quickly learns to figure out real vs fake. As training progresses, the generator begins to catch up, and the discriminator needs to adjust its parameters to keep up. In this way, GAN training can be viewed as a two-player game, where the goal of Player 1 (the generator) is to fool the discriminator, and the goal of Player 2 (the discriminator) is to <em>not</em> be fooled by the generator. This is called <em>adversarial training</em>, and hence the name “GAN”.</p>
<p>Somewhat interestingly, this type of learning procedure seems to achieve state-of-the-art generative modeling results. The results are impressive: can you figure out which of these dog images are fake and which are real?</p>
<p><img src="/dl-notes/assets/figures/gan.png" alt="Spot the fake dog. Taken from BigGAN, ICLR 2019." /></p>
<h3 id="mathematics-of-gans">Mathematics of GANs</h3>
<p>Let us now cast the above discussion into a typical 3-step ML framework (representations, objective function, and optimization algorithm.)</p>
<p>We denote $G_\Theta(\cdot)$ to be the generator. Here, $\Theta$ represents all the weights/biases of the generator network. As mentioned above, unlike regular neural networks used for classification/regression, the network architecture is “reversed” – it takes as input a low-dimensional latent code vector $z$, and produces a high-dimensional data vector (such as an image) as output. Recall that in a regular network, dimensionality is successively reduced through the layers (via pooling/striding); in a GAN generator network, dimensionality is successively expanded via upsampling or dilated/transpose convolutions.</p>
<p>We denote $D_\Psi(\cdot)$ to be the discriminator. This is a regular feedforward or convnet architecture, and produces an output probability of an input data sample being real or fake.</p>
<p>Let $y$ be the label where $y=1$ denotes real data and $y=0$ denotes fake data. For a given input, we will train the discriminator to minimize the cross-entropy loss:</p>
\[L(\Psi) = - y \log D_\Psi(x) - (1-y) \log (1 - D_\Psi(x))\]
<p>The first term disappears if $x$ is fake ($y=0$), and the second term disappears if $x$ is real ($y=1$). Fake data samples can be produced by sampling $z \sim \text{Normal}(0,I)$ and passing it through the generator network to produce $G_\Theta(z)$. So the loss function now becomes:</p>
\[L(\Theta,\Psi) = - E_{x \sim \text{real}} \log D_\Psi(x) - E_{z \sim \text{Normal}(0,I)} \log (1 - D_\Psi(G_\Theta(z))) ,\]
<p>where now the goal of the generator is to fool the discriminator as much as possible (i.e., maximize $L$). So the two-player game now becomes:</p>
\[\max_\Theta \min_\Psi L(\Theta,\Psi) .\]
<p>In the literature, it is conventional to flip min- and max-, and negate the loss function. So the standard GAN objective now becomes:</p>
\[L(\Theta,\Psi) = E_{x \sim \text{real}} \log D_\Psi(x) + E_{z \sim \text{Normal}(0,I)} \log (1 - D_\Psi(G_\Theta(z)))\]
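<p>To see which way the two players push this objective, we can evaluate it directly on hypothetical discriminator outputs (<code>d_real</code> standing for $D_\Psi(x)$ on real samples, <code>d_fake</code> for $D_\Psi(G_\Theta(z))$ on generated ones; the numbers are made up for illustration):</p>

```python
import numpy as np

def gan_objective(d_real, d_fake):
    """E log D(x) + E log(1 - D(G(z))), estimated on minibatches of D's outputs."""
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

# The discriminator maximizes: confident, correct scores beat random guessing ...
confident = gan_objective(np.array([0.90, 0.95]), np.array([0.05, 0.10]))
guessing = gan_objective(np.array([0.50, 0.50]), np.array([0.50, 0.50]))
# ... while the generator minimizes: fooling D (pushing d_fake up) drives it down.
fooled = gan_objective(np.array([0.90, 0.95]), np.array([0.90, 0.95]))
```

<p>A discriminator that labels samples correctly scores strictly higher than one that guesses, and a generator that fools the discriminator pushes the objective back down, which is exactly the min-max tension above.</p>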
<p>We now discuss how to train this network. In each iteration, we sample two minibatches: one of real data samples and one of fake data samples. Then we form the above objective function and take gradients, which are used to update the weights of both networks. Note that since we are minimizing with respect to $\Theta$ and maximizing with respect to $\Psi$, this algorithm is called gradient <em>descent-ascent</em>:</p>
\[\begin{aligned}
\Theta &\leftarrow \Theta - \eta \nabla_\Theta L(\Theta,\Psi) \\
\Psi &\leftarrow \Psi + \eta \nabla_\Psi L(\Theta,\Psi)
\end{aligned}\]
<p>In practice, other updates (such as Adam) may be used.</p>
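To see gradient descent-ascent in isolation, here is a minimal sketch on a toy saddle-point objective – a hypothetical quadratic stand-in for $L(\Theta,\Psi)$, not an actual GAN:

```python
# Gradient descent-ascent on the toy saddle objective
#   L(theta, psi) = (theta - 2)**2 - (psi + 1)**2
# which, like the GAN objective, is minimized over theta and maximized over psi.
eta = 0.1
theta, psi = 5.0, 3.0
for _ in range(200):
    grad_theta = 2 * (theta - 2)   # dL/dtheta
    grad_psi = -2 * (psi + 1)      # dL/dpsi
    theta -= eta * grad_theta      # descent step (the "generator")
    psi += eta * grad_psi          # ascent step (the "discriminator")
# the iterates converge to the saddle point (theta, psi) = (2, -1)
```

On this toy objective the iterates converge cleanly to the saddle point; on the true GAN loss the dynamics are far less benign, which is one reason GAN training is delicate.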
<p>Note that due to all the hacks above, we cannot quite calculate likelihoods the way we do in the case of flow-models. For this reason, GANs are instances of <em>likelihood-free</em> generative models.</p>
<h3 id="challenges-extensions-and-examples">Challenges, extensions, and examples</h3>
<p>There are a few issues with GAN training that we need to keep in mind.</p>
<p>One issue is the form of the loss itself. Observe above that the generator weights <em>only</em> get updated by the gradients of the second term:</p>
\[\log (1 - D_\Psi(G_\Theta(z)))\]
<p>since they do not appear in the first. The problem is that if the generator sample is really bad (as is typically the case at the beginning of training), the discriminator’s prediction is close to zero, and since $\log (1- D)$ is very flat when $D \approx 0$, there is not enough ‘signal’ to move the generator weights meaningfully. Increasing the learning rate does not help. This is called the <em>saturation problem</em> in GANs.</p>
<p>To fix this, while updating generator weights, it is common to <em>heuristically replace</em> the second term in the GAN loss with:</p>
\[- \log D_\Psi(G_\Theta(z))\]
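The saturation argument can be checked numerically by comparing the derivative of each generator loss with respect to the discriminator output $D$ when the discriminator confidently rejects a fake ($D \approx 0$); a minimal sketch:

```python
# derivative of each generator loss w.r.t. D = D_psi(G_theta(z))
#   saturating loss:      log(1 - D)  ->  d/dD = -1 / (1 - D)
#   non-saturating loss: -log(D)      ->  d/dD = -1 / D
D = 1e-3  # early in training, the discriminator easily rejects fakes
grad_saturating = -1 / (1 - D)       # roughly -1: almost no signal
grad_non_saturating = -1 / D         # roughly -1000: strong signal
```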
<p>A comparison of the two losses is shown below. This solves the saturation problem, but note that the gradients close to zero are now <em>very</em> large, and training becomes unstable. Stably training GANs was a challenge faced by the community for quite some time (and continues to be a challenge); a common resolution is to use <em>Wasserstein GANs</em>. We won’t get into the details here, but the high-level idea is that the above GAN loss function can be viewed as a specific form of distance between probability distributions (called the <em>Jensen-Shannon divergence</em>), and this can be generalized to other distances. A common alternative is the <em>Earth-mover</em> or <em>Wasserstein</em> distance, leading to the <em>Wasserstein GAN</em> (WGAN). There is a lengthy derivation involved, but the loss function becomes:</p>
\[L^{\text{WGAN}}(\Theta,\Psi) = E_{x \sim \text{real}} f(D_\Psi(x)) - E_{z \sim \text{Normal}(0,I)} f(D_\Psi(G_\Theta(z))) ,\]
<p>where $f$ is a monotonic function that is <a href="https://en.wikipedia.org/wiki/Lipschitz_continuity">1-Lipschitz</a>. In practice, this property is enforced via <em>weight clipping</em> (clamping the discriminator weights to a small interval after each update); later variants such as WGAN-GP use a gradient penalty instead. But let’s not get into the weeds.</p>
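In the original WGAN recipe, the Lipschitz property is enforced crudely by clamping every discriminator (critic) weight after each update. A minimal sketch, assuming the critic’s parameters are given as a list of arrays (the constant $0.01$ is the interval used in the original WGAN paper):

```python
import numpy as np

def clip_weights(params, c=0.01):
    # clamp every critic parameter to the interval [-c, c]
    return [np.clip(W, -c, c) for W in params]

critic_params = [np.array([0.5, -0.3, 0.004]), np.array([[0.02, -0.05]])]
clipped = clip_weights(critic_params)
# weights already inside the interval (like 0.004) pass through unchanged
```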
<p>A third issue is something called <em>mode collapse</em>. Suppose that the network $G_\Theta(z)$ is accidentally trained such that it always produces a fixed output $\hat{x}$ no matter what $z$ is (i.e., the range of $G$ collapses to a single point), <em>and</em> that the output $\hat{x}$ exactly matches a sample from the real dataset. This leads to zero loss, and hence is an optimal solution! So in some sense, the network has memorized exactly one data sample from the training dataset – it has not really learned the distribution – but the GAN loss function does not distinguish between the two regimes.</p>
<p>This is actually not an isolated occurrence. Even if the generator does not memorize a given data point, it could settle on weights that produce <em>fake</em> data points on which the particular discriminator happens to perform poorly. This is a consequence of the two-player game: the generator can “win” by finding a “cheat code” set of weights that is over-optimized to fool that particular discriminator, without actually solving the game (of learning the probability distribution).</p>
<p>Mode collapse can be viewed as a specific form of overfitting, and there are a few ways to avoid this: early stopping helps; so does changing the objective function to encourage diversity in mini-batches; and so does adding noise to the discriminator/generator outputs (a la dropout).</p>
<p>There are lots more tricks to get GANs working (and we won’t get into all of them here), but here are some representative images.</p>
<p><img src="/dl-notes/assets/figures/big-gan.png" alt="Taken from BigGAN, ICLR 2019." /></p>
<h3 id="conditional-gans">Conditional GANs</h3>
<p>The above types of GAN models enable sampling from the data distribution: choose a random new latent code vector $z$ and generate a new sample $x = G(z)$.</p>
<p>In practice, however, it would be nice to have some kind of user control over the outputs. Consider, for example, the following applications:</p>
<ul>
<li>Category-dependent generation</li>
<li>Image style transfer</li>
</ul>
<p>A simple example is the class-conditional GAN (say, generating a particular MNIST digit). This is easy: we augment the generator input $z$ with the class label $c$, and provide the label to the discriminator as well. In some sense, a subset of the features in the code vector fed to the generator is clearly interpretable as a categorical input code.</p>
\[L(\Theta,\Psi) = E_{x \sim \text{real}} \log D_\Psi(x | c) + E_{z \sim \text{Normal}(0,I)} \log (1 - D_\Psi(G_\Theta(z | c)))\]
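Concretely, the conditioning is often implemented by concatenating an encoding of the label onto the latent code before it enters the generator (and likewise onto the discriminator input). A minimal sketch with hypothetical dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)

def one_hot(c, num_classes=10):
    e = np.zeros(num_classes)
    e[c] = 1.0
    return e

z = rng.normal(size=64)     # latent code (dimension chosen arbitrarily)
c = 3                       # desired class, e.g. the MNIST digit "3"
gen_input = np.concatenate([z, one_hot(c)])   # fed to G in place of z alone
```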
<p>A harder problem is image style transfer. Say we want the content to remain the same but change the weather, or change night to day, or change artistic style. The issue with this kind of problem is that labels are hard to find (how do we get pairs of images with same content but different style?)</p>
<p>A way to achieve this is called <em>cycle consistency</em>. At a high level, the generative model consists of <em>three</em> networks trained simultaneously:</p>
<ul>
<li>Train two generative nets: $G_1$ for Style 1 to Style 2, and $G_2$ for style 2 back to Style 1.</li>
<li>Use a discriminator to ensure that samples from $G_1$ (Style 2) are indistinguishable from real data.</li>
<li>Use a reconstruction loss to make sure that $G_2$ learns to invert $G_1$.</li>
</ul>
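The reconstruction term in this recipe can be sketched with toy linear stand-ins for $G_1$ and $G_2$ (real models would be conv nets; the adversarial term supplied by the discriminator is omitted here):

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.eye(3) + 0.1 * rng.normal(size=(3, 3))   # well-conditioned toy map
G1 = lambda x: x @ A                  # style 1 -> style 2
G2 = lambda y: y @ np.linalg.inv(A)   # style 2 -> style 1 (exact inverse here)

x = rng.normal(size=(5, 3))           # batch of style-1 "images" (flattened)
cycle_loss = np.mean((G2(G1(x)) - x) ** 2)
# when G2 exactly inverts G1, the cycle-consistency loss vanishes
```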
<p>Examples:</p>
<p><img src="/dl-notes/assets/figures/cycle-gan.jpg" alt="Taken from CycleGAN, ICCV 2017." /></p>
<h2 id="variational-autoencoders">Variational Autoencoders</h2>
<p>We won’t discuss VAEs in great detail. (The machinery is quite involved, and they don’t work as well as GANs.) Autoencoders are fairly simple to understand. These consist of two networks $f_\theta$ and $g_\phi$, concatenated back-to-back and trained using the reconstruction loss:</p>
\[L(\theta,\phi) = \frac{1}{n} \sum_{i=1}^n \|x^{i} - f_\theta(g_\phi(x^{i})) \|^2 .\]
<p>The simplest example of an autoencoder is when the functions $f_\theta$ and $g_\phi$ are single layers in a neural network with linear activation (i.e., linear mappings). Then the loss becomes:</p>
\[L(U,V) = \frac{1}{n} \sum_{i=1}^n \|x^{i} - U V^T x^{i} \|^2 ,\]
<p>which is equivalent to principal components analysis (PCA). The number of hidden units equals the number of principal components.</p>
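The PCA equivalence can be checked numerically: the optimal linear autoencoder with $k$ hidden units reconstructs through the top-$k$ principal directions, and its error equals the energy in the discarded singular values. A sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 5))  # correlated data
X -= X.mean(axis=0)                                      # center

U, S, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
V = Vt[:k].T          # encoder: h = V^T x;  decoder: x_hat = V h
X_hat = X @ V @ V.T   # rank-k reconstruction
pca_err = np.mean(np.sum((X - X_hat) ** 2, axis=1))
# pca_err equals the sum of the discarded squared singular values / n
```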
<p>The output of $g_\phi$ can be viewed as a compressed representation of the input. This part is called the <em>encoder</em>, and the second part is called the <em>decoder</em>. Once this network is trained we can just take the decoder part and feed in different latent vectors to generate new samples, just like in GANs.</p>
<p>At a high level, variational autoencoders are an example of this approach. The architecture of a VAE looks like this:</p>
<p><img src="/dl-notes/assets/figures/vae-gaussian.png" alt="VAE architecture, taken from [here](https://lilianweng.github.io/lil-log/2018/08/12/from-autoencoder-to-beta-vae.html)" /></p>
<p>where both the encoder and decoder represent <em>probabilistic</em> mappings. The loss function used to train this pair of networks resembles the log-likelihood of the data samples (the same as that used to train normalizing flows), but is augmented with a <em>Kullback-Leibler divergence</em> regularizer, which encourages the distribution in the latent code ($z$-) space to become Gaussian; the overall loss looks like this:</p>
\[L_{\text{VAE}}(\theta,\phi) = - E_{z \sim q_\phi} \log p_\theta(x | z) + D_{KL}(q_\phi(z | x) || p_\theta(z))\]
<p>which is minimized over both $\theta$ and $\phi$. We will skip the details; refer <a href="https://arxiv.org/abs/1606.05908">here</a> for a rigorous treatment.</p>Course NotesIn which we introduce the concept of generative models and two common instances encountered in deep learning.Lecture 12: Diffusion Models2021-04-01T00:00:00+00:002021-04-01T00:00:00+00:00https://chinmayhegde.github.io/dl-notes/notes/lecture11-old<p><em>In which we discuss the foundations of generative neural network models.</em></p>
<h1 id="unsupervised-learning-and-generative-models">Unsupervised Learning and Generative Models</h1>
<h2 id="motivation">Motivation</h2>
<p>Much of what we have discussed in the first part of this course has been in the context of making <em>deterministic, point</em> predictions: given image, predict cat vs dog; given sequence of words, predict next word; given image, locate all balloons; given a piece of music, classify it; etc. By now you should be quite clear (and confident) in your ability to solve such tasks using deep learning (given, of course, the usual caveats on dataset size, quality, loss function, etc etc).</p>
<p>All of the above tasks have a well-defined <em>answer</em> to whatever question we are asking, and deep networks trained with suitable supervision can find them. But modern deep networks can be used for several other interesting tasks that conceivably fall into the purview of “artificial intelligence”. For example, think about the following tasks (that humans can do quite well) that do not cleanly fit into the supervised learning framework:</p>
<ul>
<li>
<p>find underlying laws/characteristics that are salient in a given corpus of data.</p>
</li>
<li>
<p>given a topic/keyword (say “water lily”), draw/synthesize a new painting (or 250 paintings, all different) based on the keyword.</p>
</li>
<li>
<p>given a photograph of a face (with the left half blacked out), mentally hallucinate what the rest would look like.</p>
</li>
<li>
<p>be able to quickly adapt to new tasks.</p>
</li>
<li>
<p>be able to memorize and recall objects.</p>
</li>
<li>
<p>be able to plan ahead in the face of uncertain and changing environments;</p>
</li>
</ul>
<p>among many others.</p>
<p>In the next few lectures we will focus on solving such tasks. Somewhat fortunately, the main ingredients of deep learning (feedforward/recurrent architectures, gradient descent/backpropagation, data representations) will remain the same – but we will put them together into novel formulations.</p>
<p>Tasks such as classification/regression are inherently <em>discriminative</em> – the network learns to figure out <em>the</em> answer (or label) for a given input. Tasks such as synthesis are inherently <em>generative</em> – there is no one answer, and instead the network will need to figure out a <em>probability distribution</em> (or, loosely, a <em>set</em>) of possible answers to it. Let us see how to train neural nets that learn to produce such distributions.</p>
<p>[Side note: machine learning/statistics has long dealt with modeling uncertainty and producing distributions. Probabilistic models for machine learning is a vast area in itself (independent of whether we are studying neural nets or not). We won’t have time to go into all the details – take an advanced statistical learning course if you would like to learn more.]</p>
<h2 id="setup">Setup</h2>
<p>Let us lay out the problem more precisely. In terms of symbols, instead of learning weights $W$ that learn a discriminative function mapping of the form:
\(y = f_W(x)\)
we will instead imagine that the space of all $x$ is endowed with some probability distribution $p(x)$. This may be a distribution that is without any conditions (e.g., all face images $x$ are assigned high values of $p(x)$, and the set of all images that are not faces are assigned low values of $p(x)$). Or, this may be a <em>conditional</em> distribution $p(x; c)$. (Example: the condition $c$ may denote hair color, and the set of all face images with that particular hair color $c$ will be assigned higher probability versus the rest).</p>
<p>If there was some computationally easy way to represent the distribution $p(x)$, we could do several things:</p>
<ul>
<li>
<p>we could <em>sample</em> from this distribution. This would give us the ability to synthesize new data points.</p>
</li>
<li>
<p>we could <em>evaluate</em> the likelihood of a given test data point (e.g. answering the question: does this image resemble a face image?)</p>
</li>
<li>
<p>we could solve <em>optimization problems</em> (e.g. among all potential designs of handbags, find the ones that meet color and cost criteria)</p>
</li>
<li>
<p>perhaps learn conditional relationships between different features</p>
</li>
</ul>
<p>etc.</p>
<p>The question now becomes: how do we computationally represent the distribution $p(x)$? Modeling distributions (particularly in high-dimensional feature spaces) is not easy – this is called the <em>curse of dimensionality</em> — and the typical approach to resolve this is to parameterize the distribution in some way:
\(p(x) := p_\Theta(x)\)
and try to figure out the optimal parameters $\Theta$ (where we will define what “optimal” means later).</p>
<p>Classical machine learning and statistical approaches start off with simple parameterizations (such as Gaussians). Gaussians are nice in many ways: they are exactly characterized by their mean and (co)variance. We can draw samples easily from Gaussians. By the central limit theorem, the average of sufficiently many independent samples resembles a Gaussian. Computationally, we like Gaussians.</p>
<p>Unfortunately, nature is far from being Gaussian! Real-world data is diverse; multi-modal; discontinuous; involves rare events; and so on, none of which Gaussians can handle very well.</p>
<p>Second attempt: Gaussian mixture models. These are better (multi-modal) but still not rich enough to capture real datasets very well.</p>
<p>Enter neural networks. We will start with some simple distribution (say a standard Gaussian) and call it $p(z)$. We will generate random samples from $p$; call it $z$. We will then pass $z$ through a neural network:
\(x = f_\Theta(z)\)
parameterized by $\Theta$. Therefore, the random variable $x$ has a different distribution, say $p(x)$. By adjusting the weights we can (hopefully) deform $p(z)$ to obtain a $p(x)$ that matches any distribution we like. Here, $z$ is called the <em>latent</em> variable (or sometimes the <em>code</em>), and $f$ is called the <em>generative model</em> (or sometimes the <em>decoder</em>).</p>
<p>How are $p(x)$ and $p(z)$ linked? Let us for simplicity assume that $f$ is one-to-one and invertible, i.e., $z = f_\Theta^{-1}(x)$. Then, we can use the <em>Change-of-Variables</em> formula for probability distributions. In one dimension, this is fairly intuitive: to conserve probability mass, the mass over corresponding intervals must be the same, i.e., $p(x)dx = p(z)dz$, and hence the probability distributions must obey:</p>
<p><img src="/dl-notes/assets/figures/change-of-variables.png" alt="Change of variables" /></p>
\[p(x) = p(z) | \frac{dx}{dz} |^{-1}\]
<p>When both $x$ and $z$ have more than one dimension, we have to replace areas by (multi-dimensional) volumes and derivatives by partial derivatives. Fortunately, volumes correspond to determinants! Therefore, we can get an analogous formula by replacing the absolute value by the <em>determinant of the Jacobian of the mapping $x = f(z)$</em>:</p>
\[p(x) = p(z) | \frac{\partial x}{\partial z} |^{-1}\]
<p>This gives us a closed-form expression to evaluate any $p(x)$, given the forward mapping. However, note that for this formula to hold, the following conditions must be true:</p>
<ul>
<li>
<p>$f$ must be one-to-one and easily invertible.</p>
</li>
<li>
<p>$f$ needs to be differentiable, i.e., the Jacobian must be well-defined.</p>
</li>
<li>
<p>The determinant of the Jacobian must be easy to compute.</p>
</li>
</ul>
<h2 id="reversible-models">Reversible Models</h2>
<p>As a warmup, a simple approach that ensures all of the above conditions are called <em>reversible models</em>. Recall the <em>residual</em> block that we discussed in the context of CNNs: this is similar. Residual blocks implement:
\(x = z + F_\Theta(z)\)
where $F_\Theta$ is some differentiable network that has equal input and output size. (You can use ReLUs too, but strictly speaking we should use differentiable nonlinearities such as sigmoids). Typically, $F_\Theta$ is a dense, shallow (single- or two-layer) network.</p>
<p>Reversible models use the above block as follows. We will consider two <em>auxiliary random variables</em> $u$ and $v$ as the same size as $x$ and $z$, and define two paths:
\(\begin{aligned}
x &= z + F_\Theta(u), \\
v &= u .
\end{aligned}\)
This construction is called an <em>additive coupling layer</em>. If you don’t like adding an extra variable (for memory reasons, say), you can just split your features into two halves and proceed.</p>
<p>The advantage of this model is that the inverse of this forward model is easy to calculate! Given any $x$ and $v$, the inverse of this model is given by:
\(\begin{aligned}
u &= v, \\
z &= x - F_\Theta(u) .
\end{aligned}\)</p>
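A minimal numerical sketch of the additive coupling layer and its exact inverse, with a tiny random MLP standing in for $F_\Theta$:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 4))

def F(u):
    # arbitrary differentiable net with equal input/output size
    return np.tanh(u @ W1) @ W2

z, u = rng.normal(size=4), rng.normal(size=4)

# forward pass of the additive coupling layer
x, v = z + F(u), u

# inverse: recover (z, u) exactly from (x, v)
u_rec = v
z_rec = x - F(u_rec)
```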
<p>What about the determinant of the Jacobian? Turns out that reversible blocks have very simple expressions for the determinant. For each layer, the Jacobian is of the form:
\(\left(
\begin{array}{cc}
\frac{\partial x}{\partial z} & \frac{\partial x}{\partial u} \\
\frac{\partial v}{\partial z} & \frac{\partial v}{\partial u}
\end{array}
\right) = \left(
\begin{array}{cc}
I & \frac{\partial F_\theta}{\partial u} \\
0 & I
\end{array}
\right)\)
which is a block upper-triangular matrix with identity diagonal blocks. Such matrices <em>always have determinant equal to 1</em> (and such transformations are hence called “volume preserving”). In other words, each reversible block maps a set to another set of the same volume.</p>
<p>Having defined a single reversible block, we can now chain multiple such reversible blocks into a deeper architecture by alternating the roles of $x$ and $v$. Let’s say we have a second such block $F_\Psi$ applied to $v$ and $u$. Then, we get the following two-layer architecture:
\(\begin{aligned}
x' &= z + F_\Theta(u) \\
z' &= u + F_\Psi(x')
\end{aligned}\)</p>
<p>(Exercise: can you compute the inverse of this two-layer block?)</p>
<p>Turns out that each such block is volume preserving, and hence the determinant of the overall Jacobian (no matter how many blocks we stack) is equal to unity. We can think of each layer as <em>incrementally</em> changing the distribution until we arrive at the final result. A model that implements this type of incremental change is called a “flow” model. (The specific form above was called NICE – short for Nonlinear Independent Components Estimation.)</p>
<p>We finally come to training this model. Different objective functions can be used: a common one is <em>maximum likelihood</em>: given a dataset of $n$ samples $x_1, x_2, \ldots, x_n$ we optimize for the parameters that maximize the overall likelihood:
\(L(\Theta) = \prod_{i=1}^n p_X(x_i) = \prod_{i=1}^n p_Z(f^{-1}(x_i))\)
where $p_Z$ is the base distribution. (Note that the Jacobian term disappears, since its determinant is always 1.) In practice, sums are easier to optimize than products, and therefore we use the log-likelihood instead.</p>
<h2 id="normalizing-flows">Normalizing Flows</h2>
<p>Reversible blocks are nice from a compute standpoint, but have architectural limitations due to the volume preserving constraint.</p>
<p>Normalizing Flows (NF) generalize the above technique, and allow the mapping to be non-volume preseerving (NVP). The idea is to assume an arbitrary series of maps: $f_1, f_2, \ldots, f_L$ (where $L$ is the depth), so that:
\(x = f_L \odot \ldots \odot f_2 \odot f_1(z) .\)
Define $z_0 := z$ and $z_i$ as the output of the $i$-th layer. Applying the change-of-variables formula to any intermediate layer, we have the distributional relationship:
\(\log p(z_i) = \log p(z_{i-1}) - \log | \text{det} \frac{\partial z_i}{\partial {z_{i-1}}} |.\)
and recursing over $i$, we have the log likelihood:
\(\log p(x) = \log p(z) - \sum_{i=1}^L \log | \text{det} \frac{\partial z_i}{\partial z_{i-1}} | .\)
This is a bit more complicated to evaluate, but in principle it can be done.</p>
<p>To make life simpler, in NF, we use the same principles as we did for reversible architectures:</p>
<ul>
<li>
<p>easy inverses for each layer</p>
</li>
<li>
<p>easy Jacobian determinant</p>
</li>
</ul>
<p>but this time, instead of creating an <em>additive</em> coupling layer $u$, we use an <em>affine</em> coupling layer:
\(\begin{aligned}
x &= z \odot \exp(F_\Theta(u)) + F_\Psi(u), \\
v &= u .
\end{aligned}\)
where $F_\Theta$ and $F_\Psi$ are trainable functions, and $\odot$ is applied componentwise. The inverse of the affine coupling layer is simple:
\(\begin{aligned}
u &= v, \\
z &= (x - F_\Psi(u)) \odot \exp(-F_\Theta(u)). \\
\end{aligned}\)
Moreover, the Jacobian has the following structure:
\(J = \left(
\begin{array}{cc}
\frac{\partial x}{\partial z} & \frac{\partial x}{\partial u} \\
\frac{\partial v}{\partial z} & \frac{\partial v}{\partial u}
\end{array}
\right) = \left(
\begin{array}{cc}
\text{diag}(\exp(F_\Theta(u))) & \frac{\partial F_\theta}{\partial u} \\
0 & I
\end{array}
\right)\)
which is a block upper-triangular matrix, but with an easy-to-calculate determinant:
\(\det(J) = \exp\left(\sum_{i=1}^d F^{i}_\Theta(u)\right).\)</p>
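A similar sketch for the affine coupling layer, checking both the exact inverse and the log-determinant formula (here against a finite-difference Jacobian, which is exact up to float error because $x$ is linear in $z$):

```python
import numpy as np

rng = np.random.default_rng(1)
Ws, Wt = 0.3 * rng.normal(size=(3, 3)), 0.3 * rng.normal(size=(3, 3))
F_theta = lambda u: np.tanh(u @ Ws)   # toy stand-in for the scale net
F_psi = lambda u: np.tanh(u @ Wt)     # toy stand-in for the shift net

def couple(z, u):
    # affine coupling: x = z * exp(F_theta(u)) + F_psi(u),  v = u
    return z * np.exp(F_theta(u)) + F_psi(u), u

z, u = rng.normal(size=3), rng.normal(size=3)
x, v = couple(z, u)

# exact inverse
z_rec = (x - F_psi(v)) * np.exp(-F_theta(v))

# log|det dx/dz| should equal sum_i F_theta(u)_i
eps = 1e-6
J = np.column_stack([(couple(z + eps * e, u)[0] - x) / eps for e in np.eye(3)])
logdet_numeric = np.log(np.linalg.det(J))
```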
<h2 id="autoregressive-models-diffusion-models-etc">Autoregressive models, diffusion models, etc</h2>
<p>The above generative architectures are feedforward, dense, and useful for static structured data (such as images). For sequential data (such as music), we can develop similar models using RNN-type architectures.</p>
<p>The differences between the methods lie in the details, but the basic idea is that the features in the output $x$ are not simultaneously generated (as in a feedforward network), but rather, generated one after the other. Moreover, since certain types of sequence data (such as voice or music) usually respect causality, the architectures are restricted to be <em>auto-regressive</em>, i.e., the probability distribution of <em>every</em> generated sample $x$ is decomposed as:
\(p(x) = \prod_{i=1}^d p(x[i] \mid x[0:i])\)
(where we are abusing Python notation here). Various typical assumptions (e.g. Markov-ness) are made to simplify this, just as how we did for an RNN. But fundamentally, since there are $d$ terms in the above product one would have to “unroll” the operation into a depth-$d$ network, which can be rather challenging.</p>
<p>WaveNet, used for audio signal generation, reduces the depth in a smart way: it uses a technique called <em>dilated convolution</em> that effectively reduces the depth to $\log d$ by grouping together symbols and effectively using parallelism. We won’t get into any further detail here.</p>
<p><em>Under construction</em></p>Course NotesIn which we discuss the foundations of generative neural network models.Lecture 12: Diffusion Models2021-04-01T00:00:00+00:002021-04-01T00:00:00+00:00https://chinmayhegde.github.io/dl-notes/notes/lecture11<p><em>In which we discuss the foundations of generative neural network models.</em></p>
<p><em>Co-authored with Teal Witter.</em></p>
<h2 id="motivation">Motivation</h2>
<p>Much of what we have discussed in the first part of this course has been in the context of making <em>deterministic, point</em> predictions: given image, predict cat vs dog; given sequence of words, predict next word; given image, locate all balloons; given a piece of music, classify it; etc. By now you should be quite clear (and confident) in your ability to solve such tasks using deep learning (given, of course, the usual caveats on dataset size, quality, loss function, etc etc).</p>
<p>All of the above tasks have a well defined <em>answer</em> to whatever question we are asking, and deep networks trained with suitable supervision can find them. But modern deep networks can be used for several other interesting tasks that conceivably fall into the purview of “artificial intelligence”. For example, think about the following tasks (that humans can do quite well), that do not cleanly fit into the supervised learning:</p>
<ul>
<li>
<p>find underlying laws/characteristics that are salient in a given corpus of data.</p>
</li>
<li>
<p>given a topic/keyword (say “water lily”), draw/synthesize a new painting (or 250 paintings, all different) based on the keyword.</p>
</li>
<li>
<p>given a photograph of a face (with the left half blacked out), mentally hallucinate how the rest would look like.</p>
</li>
<li>
<p>be able to quickly adapt to new tasks.</p>
</li>
<li>
<p>be able to memorize and recall objects.</p>
</li>
<li>
<p>be able to plan ahead in the face of uncertain and changing environments;</p>
</li>
</ul>
<p>among many others.</p>
<p>In the next few lectures we will focus on solving such tasks. Somewhat fortunately, the main ingredients of deep learning (feedforward/recurrent architectures, gradient descent/backpropagation, data representations) will remain the same – but we will put them together into novel formulations.</p>
<p>Tasks such as classification/regression are inherently <em>discriminative</em> – the network learns to figure out <em>the</em> answer (or label) for a given input. Tasks such as synthesis are inherently <em>generative</em> – there is no one answer, and instead the network will need to figure out a <em>probability distribution</em> (or, loosely, a <em>set</em>) of possible answers to it. Let us see how to train neural nets that learn to produce such distributions.</p>
<p>[Side note: machine learning/statistics has long dealt with modeling uncertainty and producing distributions. Probabilistic models for machine learning is a vast area in itself (independent of whether we are studying neural nets or not). We won’t have time to go into all the details – take an advanced statistical learning course if you would like to learn more.]</p>
<h2 id="setup">Setup</h2>
<p>Let us lay out the problem more precisely. In terms of symbols, instead of learning weights $W$ that learn a discriminative function mapping of the form:
\(y = f_W(x)\)
we will instead imagine that the space of all $x$ is endowed with some probability distribution $p(x)$. This may be a distribution that is without any conditions (e.g., all face images $x$ are assigned high values of $p(x)$, and the set of all images that are not faces are assigned low values of $p(x)$). Or, this may be a <em>conditional</em> distribution $p(x; c)$. (Example: the condition $c$ may denote hair color, and the set of all face images with that particular hair color $c$ will be assigned higher probability versus the rest).</p>
<p>If there was some computationally easy way to represent the distribution $p(x)$, we could do several things:</p>
<ul>
<li>
<p>we could <em>sample</em> from this distribution. This would give us the ability to synthesize new data points.</p>
</li>
<li>
<p>we could <em>evaluate</em> the likelihood of a given test data point (e.g. answering the question: does this image resemble a face image?)</p>
</li>
<li>
<p>we could solve <em>optimization problems</em> (e.g. among all potential designs of handbags, find the ones that meet color and cost criteria)</p>
</li>
<li>
<p>perhaps learn conditional relationships between different features</p>
</li>
</ul>
<p>etc.</p>
<p>The question now becomes: how do we computationally represent the distribution $p(x)$? Modeling distributions (particularly in high-dimensional feature spaces) is not easy – this is called the <em>curse of dimensionality</em> — and the typical approach to resolve this is to parameterize the distribution in some way:
\(p(x) := p_\Theta(x)\)
and try to figure out the optimal parameters $\Theta$ (where we will define what “optimal” means later).</p>
<p>Classical machine learning and statistical approaches start off with simple parameterizations (such as Gaussians). Gaussians are nice in many ways: they are exactly characterized by their mean and (co)variance. We can draw samples easily from Gaussians. Central limit theorem = any set of independent samples averaged over sufficiently many draws resembles a Gaussian. Computationally, we like Gaussians.</p>
<p>Unfortunately, nature is far from being Gaussian! Real-world data is diverse; multi-modal; discontinuous; involves rare events; and so on, none of which Gaussians can handle very well.</p>
<p>Second attempt: Gaussian mixture models. These are better (multi-modal) but still not rich enough to capture real datasets very well.</p>
<p>Enter neural networks. We will start with some simple distribution (say a standard Gaussian) and call it $p(z)$. We will generate random samples from $p$; call it $z$. We will then pass $z$ through a neural network:
\(x = f_\Theta(z)\)
parameterized by $\Theta$. Therefore, the random variable $x$ has a different distribution, say $p(x)$. By adjusting the weights we can (hopefully) deform $p(z)$ to obtain a $p(x)$ that matches any distribution we like. Here, $z$ is called the <em>latent</em> variable (or sometimes the <em>code</em>), and $f$ is called the <em>generative model</em> (or sometimes the <em>decoder</em>).</p>
<p>How are $p(x)$ and $p(z)$ linked? Let us for simplicity assume that $f$ is one-to-one and invertible, i.e., $z = f_\Theta^{-1}(x)$. Then, we can use the <em>Change-of-Variables</em> formula for probability distributions. In one dimension, this is fairly intuitive to understand: in order to conserve mass, the area of the intervals must be the same, i.e., $p(x)dx = p(z)d(z)$ and hence the probability distributions must obey:</p>
<p><img src="/dl-notes/assets/figures/change-of-variables.png" alt="Change of variables" /></p>
\[p(x) = p(z) | \frac{dx}{dz} |^{-1}\]
<p>When both $x$ and $z$ have more than one dimension, we have to replace areas by (multi-dimensional) volumes and derivatives by partial derivatives. Fortunately, volumes correspond to determinants! Therefore, we can get an analogous formula by replacing the absolute value by the <em>determinant of the Jacobian of the mapping $x = f(z)$</em>:</p>
\[p(x) = p(z) | \frac{\partial x}{\partial z} |^{-1}\]
<p>This gives us a closed-form expression to evaluate any $p(x)$, given the forward mapping. However, note that for this formula to hold, the following conditions must be true:</p>
<ul>
<li>
<p>$f$ must be one-to-one and easily invertible.</p>
</li>
<li>
<p>$f$ needs to be differentiable, i.e., the Jacobian must be well-defined.</p>
</li>
<li>
<p>The determinant of the Jacobian must be easy to invert.</p>
</li>
</ul>
<h2 id="reversible-models">Reversible Models</h2>
<p>As a warmup, a simple approach that ensures all of the above conditions are called <em>reversible models</em>. Recall the <em>residual</em> block that we discussed in the context of CNNs: this is similar. Residual blocks implement:
\(x = z + F_\Theta(z)\)
where $F_\Theta$ is some differentiable network that has equal input and output size. (You can use ReLUs too but strictly speaking we should use differentiable nonlinearities such as sigmoids). Typically, $F_\Theta$ is a dense shallow (single- or two-layer network).</p>
<p>Reversible models use the above block as follows. We will consider two <em>auxiliary random variables</em> $u$ and $v$ as the same size as $x$ and $z$, and define two paths:
\(\begin{aligned}
x &= z + F_\Theta(u), \\
v &= u .
\end{aligned}\)
The variable $u$ is called an <em>additive coupling layer</em>. If you don’t like adding an extra variable for memory reasons (say), you can just split your features into two halves and proceed.</p>
<p>The advantage of this model is that the inverse of this forward model is easy to calculate! Given any $x$ and $v$, the inverse of this model is given by:
\(\begin{aligned}
u &= v, \\
z &= x - F_\Theta(u) .
\end{aligned}\)</p>
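A minimal NumPy sketch of one additive coupling block makes this concrete; the two-layer $F_\Theta$ here is a random stand-in for a trained network, and the check confirms that the inverse recovers the inputs exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))

def F(u):                      # stand-in for the trainable network F_Theta
    return np.tanh(u @ W1) @ W2

def forward(z, u):             # x = z + F(u), v = u
    return z + F(u), u

def inverse(x, v):             # u = v, z = x - F(u)
    return x - F(v), v

z, u = rng.normal(size=4), rng.normal(size=4)
x, v = forward(z, u)
z_rec, u_rec = inverse(x, v)
print(np.allclose(z, z_rec) and np.allclose(u, u_rec))  # → True
```

Note that invertibility holds no matter what $F$ is, which is exactly why the coupling structure is so convenient.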
<p>What about the determinant of the Jacobian? Turns out that reversible blocks have very simple expressions for the determinant. For each layer, the Jacobian is of the form:
\(\left(
\begin{array}{cc}
\frac{\partial x}{\partial z} & \frac{\partial x}{\partial u} \\
\frac{\partial v}{\partial z} & \frac{\partial v}{\partial u}
\end{array}
\right) = \left(
\begin{array}{cc}
I & \frac{\partial F_\theta}{\partial u} \\
0 & I
\end{array}
\right)\)
which is an upper-triangular matrix with diagonal equal to 1. Such matrices have <em>determinant equal to 1 always</em> (and such transformations are hence called “volume preserving”). In other words, each reversible block maps a set to another set of the same volume.</p>
<p>Having defined a single reversible block, we can now chain multiple such reversible blocks into a deeper architecture by alternating the roles of $x$ and $v$. Let’s say we have a second such block $F_\Psi$ applied to $v$ and $u$. Then, we get the following two-layer architecture:
\(\begin{aligned}
x' &= z + F_\Theta(u) \\
z' &= u + F_\Psi(x')
\end{aligned}\)</p>
<p>(Exercise: can you compute the inverse of this two-layer block?)</p>
<p>Turns out that each such block is volume preserving, and hence the determinant of the overall Jacobian (no matter how many blocks we stack) is equal to unity. We can think of each layer as <em>incrementally</em> changing the distribution until we arrive at the final result. Such a model that implements this type of incremental change is called a “flow” model. (The specific form above was called NICE – short for Non-linear Independent Components Estimation.)</p>
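One way to check the exercise above numerically: invert the second block first, then the first (with random stand-ins for $F_\Theta$ and $F_\Psi$):

```python
import numpy as np

rng = np.random.default_rng(1)
Wa, Wb = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
F_theta = lambda u: np.tanh(u @ Wa)   # stand-ins for trained networks
F_psi = lambda x: np.tanh(x @ Wb)

def forward(z, u):
    x_new = z + F_theta(u)            # first reversible block
    z_new = u + F_psi(x_new)          # second block (roles alternated)
    return x_new, z_new

def inverse(x_new, z_new):
    u = z_new - F_psi(x_new)          # undo the second block first
    z = x_new - F_theta(u)            # then the first
    return z, u

z, u = rng.normal(size=4), rng.normal(size=4)
z_rec, u_rec = inverse(*forward(z, u))
print(np.allclose(z_rec, z) and np.allclose(u_rec, u))  # → True
```

The key observation is that the blocks must be undone in reverse order, just as with any function composition.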
<p>We finally come to training this model. Different objective functions can be used: a common one is <em>maximum likelihood</em>: given a dataset of $n$ samples $x_1, x_2, \ldots, x_n$ we optimize for the parameters that maximize the overall likelihood:
\(L(\Theta) = \prod_{i=1}^n p_X(x_i) = \prod_{i=1}^n p_Z(f^{-1}(x_i))\)
where $p_Z$ is the base distribution. (Note that the Jacobian term disappears since each block is volume preserving, so its determinant is 1.) In practice, sums are easier to optimize than products, and therefore we use the log-likelihood instead.</p>
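A sketch of evaluating this log-likelihood for a volume-preserving model with a standard normal base distribution (the coupling network is again a random stand-in for a trained one):

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(size=(2, 2))
F = lambda u: np.tanh(u @ W)            # stand-in coupling network

def log_likelihood(data):
    # Since |det J| = 1, log p(x, v) = log p_z(f^{-1}(x, v)),
    # with p_z a standard normal density.
    total = 0.0
    for x, v in data:
        z, u = x - F(v), v              # invert the additive coupling
        zu = np.concatenate([z, u])
        total += -0.5 * zu @ zu - 0.5 * len(zu) * np.log(2 * np.pi)
    return total

data = [(rng.normal(size=2), rng.normal(size=2)) for _ in range(3)]
ll = log_likelihood(data)
print(ll)                               # a finite, negative log-likelihood
```

Training would then ascend the gradient of this quantity with respect to the coupling network's weights.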
<h2 id="normalizing-flows">Normalizing Flows</h2>
<p>Reversible blocks are nice from a compute standpoint, but have architectural limitations due to the volume preserving constraint.</p>
<p>Normalizing Flows (NF) generalize the above technique, and allow the mapping to be non-volume preserving (NVP). The idea is to assume an arbitrary series of maps: $f_1, f_2, \ldots, f_L$ (where $L$ is the depth), so that:
\(x = f_L \circ \ldots \circ f_2 \circ f_1(z) .\)
Define $z_0 := z$ and $z_i$ as the output of the $i$-th layer. Applying the change-of-variables formula to any intermediate layer, we have the distributional relationship:
\(\log p(z_i) = \log p(z_{i-1}) - \log | \text{det} \frac{\partial z_i}{\partial {z_{i-1}}} |.\)
and recursing over $i$, we have the log likelihood:
\(\log p(x) = \log p(z) - \sum_{i=1}^L \log | \text{det} \frac{\partial z_i}{\partial z_{i-1}} | .\)
This is a bit more complicated to evaluate, but in principle it can be done.</p>
<p>To make life simpler, in NF, we use the same principles as we did for reversible architectures:</p>
<ul>
<li>
<p>easy inverses for each layer</p>
</li>
<li>
<p>easy Jacobian determinant</p>
</li>
</ul>
<p>but this time, instead of creating an <em>additive</em> coupling layer $u$, we use an <em>affine</em> coupling layer:
\(\begin{aligned}
x &= z \odot \exp(F_\Theta(u)) + F_\Psi(u), \\
v &= u .
\end{aligned}\)
where $F_\Theta$ and $F_\Psi$ are trainable functions, and $\odot$ is applied component-wise. The inverse of the affine coupling layer is simple:
\(\begin{aligned}
u &= v, \\
z &= (x - F_\Psi(u)) \odot \exp(-F_\Theta(u)). \\
\end{aligned}\)
Moreover, the Jacobian has the following structure:
\(J = \left(
\begin{array}{cc}
\frac{\partial x}{\partial z} & \frac{\partial x}{\partial u} \\
\frac{\partial v}{\partial z} & \frac{\partial v}{\partial u}
\end{array}
\right) = \left(
\begin{array}{cc}
\text{diag}(\exp(F_\Theta(u))) & \frac{\partial F_\Theta}{\partial u} \\
0 & I
\end{array}
\right)\)
which is an upper-triangular matrix, but with an easy-to-calculate determinant:
\(\det(J) = \exp\left(\sum_{i=1}^d F_\Theta^{(i)}(u)\right).\)</p>
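The same kind of numerical check works for the affine coupling layer; note that the log-determinant is just the sum of the components of $F_\Theta(u)$ (both networks below are random stand-ins):

```python
import numpy as np

rng = np.random.default_rng(3)
Wt, Ws = 0.1 * rng.normal(size=(4, 4)), 0.1 * rng.normal(size=(4, 4))
F_theta = lambda u: np.tanh(u @ Wt)     # log-scale network (stand-in)
F_psi = lambda u: u @ Ws                # shift network (stand-in)

def forward(z, u):                      # x = z * exp(F_theta(u)) + F_psi(u)
    return z * np.exp(F_theta(u)) + F_psi(u), u

def inverse(x, v):                      # z = (x - F_psi(u)) * exp(-F_theta(u))
    return (x - F_psi(v)) * np.exp(-F_theta(v)), v

z, u = rng.normal(size=4), rng.normal(size=4)
x, v = forward(z, u)
z_rec, _ = inverse(x, v)
log_det = F_theta(u).sum()              # log det(J) = sum_i F_theta^(i)(u)
print(np.allclose(z, z_rec))            # → True
```

Unlike the additive coupling layer, the scaling term means this map stretches and shrinks volume, which is what lets NVP flows fit richer distributions.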
<h2 id="diffusion-models">Diffusion models</h2>
<h3 id="motivation-1">Motivation</h3>
<p>We had previously discussed generative adversarial networks (GANs). GANs create realistic images by playing a game between two neural networks called the generator and discriminator; the generator learns to create realistic images while the discriminator learns to differentiate between these fake images and the real ones. Unfortunately, GANs suffer from a problem called <em>mode collapse</em>. During training, the generator can learn to (essentially) memorize a single image from the real data set, which the discriminator (correctly) judges to be realistic. The problem is that training stalls because the generator has achieved low loss while ignoring the rest of the data distribution. Now, we’re stuck with a generator that only outputs one image (to be fair, the one image does look real).</p>
<p>There have been several attempts to mitigate mode collapse by trying different loss functions (a loss function based on Wasserstein distance is particularly effective) and adding a regularization term (the idea is to force the generator to use the random noise it’s given as input). However, these and similar approaches cannot completely prevent mode collapse.</p>
<p>So we’re left with the same problem we had before: how to generate realistic images. Instead of GANs, the deep learning community has recently turned to <em>diffusion</em> …and the results are astounding. The basic idea of diffusion is to repeatedly apply a de-noising process to a random state until it resembles a realistic image. Let’s dive into the details.</p>
<h3 id="diffusion-process">Diffusion Process</h3>
<p>Our goal is to turn random noise into a realistic image. The starting point of diffusion is the simple observation that, while it’s not obvious how to turn noise into a realistic image, we <em>can</em> turn a realistic image into noise. In particular, starting from a real image in our data set we can repeatedly apply noise (typically drawn from a normal distribution) until the image becomes completely unrecognizable. Suppose $x_0$ is the real image drawn from our data set. Let the first noised image be $x_1 = x_0 + \epsilon_1$ where the noise $\epsilon_1$ is drawn from a normal distribution $\mathcal{N}(\mathbf{0}, \sigma^2 \mathbf{I})$ for some variance $\sigma^2$. In this way, we can generate $x_{t} = x_{t-1} + \epsilon_{t}$ where $\epsilon_{t}$ is again drawn from the same distribution. We end up with a sequence $x_0, x_1, \ldots, x_T$ where the total number of steps $T$ is chosen so that $x_T$ looks entirely meaningless.</p>
<p><img src="/dl-notes/assets/figures/image-to-noise.png" alt="Noise addition diffusion" /></p>
<p>In this example, we turned a picture of Stripes on a bike into complete gibberish by adding normal noise five times. The key insight is that if we look at this sequence <em>backwards</em> then we have training data which we can use to teach a model how to <em>remove</em> noise.</p>
<p><img src="/dl-notes/assets/figures/noise-to-image.png" alt="Noise removal diffusion" /></p>
<p>Formally, the training data consists of $x_t$, $t$, and $x_{t-1}$. Our goal is to train a model $f_\theta$ to predict $x_{t-1}$ from $x_t$ and $t$. A very natural choice of loss function is then</p>
<p>\(\mathcal{L}(\theta) = \mathbb{E} [\| x_{t-1} - f_\theta(x_t, t) \|^2]\)
where the expectation is over $x_t$, $t$, and $x_{t-1}$. However, researchers have found that it’s actually better for $f_\theta$ to predict the noise $\epsilon_{t}$ and then subtract it from $x_t$ to get $x_{t-1}$. Formally, the loss function is</p>
<p>\(\mathcal{L}(\theta) = \mathbb{E} [\| \epsilon_{t} - f_\theta(x_t, t) \|^2]\)
where the expectation is again over $x_t$, $t$, and $x_{t-1}$ which induces $\epsilon_t = x_t - x_{t-1}$.</p>
<p>Once we have a working $f_\theta$, we can use it to generate realistic images. We start with random noise which we’ll call $x_T’$. Then for $t=T,\ldots, 1$, we predict $\epsilon_t’ = f_\theta(x_t’, t)$ and compute $x_{t-1}’ = x_t’ - \epsilon_t’$. The final result $x_0’$ is what the diffusion process outputs. With any luck, this output is a realistic image.</p>
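The forward and reverse loops can be sketched as follows; for illustration we use an “oracle” that remembers the exact noise it added (standing in for a trained u-net $f_\theta$), so the reverse loop recovers $x_0$ exactly:

```python
import numpy as np

rng = np.random.default_rng(4)
T, sigma, d = 5, 0.5, 8
x0 = rng.normal(size=d)                        # stand-in for a real image

# Forward (noising) process: x_t = x_{t-1} + eps_t
eps = [rng.normal(scale=sigma, size=d) for _ in range(T)]
xs = [x0]
for t in range(1, T + 1):
    xs.append(xs[-1] + eps[t - 1])

def f_theta(x_t, t):
    # Oracle denoiser: returns eps_t exactly. A trained u-net would
    # only approximate this prediction.
    return eps[t - 1]

# Reverse (denoising) process: x_{t-1} = x_t - predicted noise
x = xs[T]
for t in range(T, 0, -1):
    x = x - f_theta(x, t)
print(np.allclose(x, x0))                      # → True
```

With a real model the per-step predictions are imperfect, but the structure of the sampling loop is exactly this.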
<p>Thinking back to our three-step recipe for machine learning, we have the loss function and optimizer (SGD, as usual) but how do we choose a good architecture?</p>
<h3 id="autoencoders-and-u-nets"><strong>Autoencoders and U-Nets</strong></h3>
<p>We’ll start with a high level description of autoencoders, work our way to u-nets, and then tie it all back to diffusion. I like to think about autoencoders in relation to GANs. Recall that GANs go small-big-small: they turn a small noise vector into an image (using the generator) and then convert the image into a scalar representing its realness (using the discriminator). In contrast, autoencoders go big-small-big: they compress a real image into a small embedding (using the encoder) and then reconstruct the original image from the embedding (using the decoder).</p>
<p><img src="/dl-notes/assets/figures/gan-autoencoder.png" alt="GAN autoencoder" /></p>
<p>We can also think of the architecture of autoencoders in relation to GANs. Just like the discriminator, the encoder uses convolutional layers to go “small” while, just like the generator, the decoder uses transposed convolutional layers to go “big”. The loss function typically used for autoencoders is the ($\ell_2$-norm) difference between the real image and the reconstructed image.</p>
<p>The real benefit of autoencoder architectures is that we get a meaningful representation of an image that somehow captures its “inherent” properties. In our current setting, one might think we can use this inherent meaning to differentiate the true content of the image from noise. And that’s exactly the motivation for the architecture we’ll use for the diffusion model.</p>
<p>In particular, we’ll use what’s called a u-net. The u-net consists of convolutions and transposed convolutions tied together with pooling and residual connections. The model gets its distinctive name from the shape of its architecture (see below).</p>
<center>
<img src="https://lmb.informatik.uni-freiburg.de/people/ronneber/u-net/u-net-architecture.png" width="400" />
<figcaption><a href="https://lmb.informatik.uni-freiburg.de/people/ronneber/u-net/">Source</a></figcaption>
</center>
<h3 id="diffusion-in-latent-space"><strong>Diffusion in Latent Space</strong></h3>
<p>At this point, we have the loss function, architecture, and optimizer for diffusion. What more could we need? Well, one nagging issue is that we’re adding and predicting noise in the high-dimensional pixel space (the space gets even larger if we want higher resolution!). This presents a computational problem since we’ll need lots of parameters and compute for our u-net. One novel contribution of stable diffusion is to apply the autoencoder idea again in a different way.</p>
<p>Looking at the first few noised versions of Stripes in our example, we could probably still make out Stripes through the noise (or at least identify his vague outline). But, to a computer, the visual properties of the pixel space that we’re so sensitive to are useless. We might as well embed the images into a latent space which captures more meaning. Then, within the latent space, we build a model to convert noise to an <em>embedding</em> of a realistic image. Once we have the final output of the diffusion process, we decode it into an image that we can understand. This is exactly what stable diffusion does and, as a result, it gains the efficiency of working in a smaller, more meaningful space.</p>
<h3 id="text-conditioning"><strong>Text Conditioning</strong></h3>
<p>The really cool part of stable diffusion is that it generates an image of any text prompt we give it. But we’ve only talked about diffusion as an <em>unconditional</em> process for turning noise into realistic images. What we want is a way of guiding the u-net through the denoising process so that it generates an image close to the text prompt we give it. We accomplish this by embedding the training images and their text descriptions in the same latent space. Using contrastive language image pretraining (CLIP), we ensure that the embedding of an image is close to the embedding of its description.</p>
<p>Now, we train the u-net on the embedded noised image $x_t$, the number of noise steps $t$, <em>and</em> the embedded text description $w$ of the original embedded image $x_0$. Formally, our goal is for $f_\theta(x_t, t, w) \approx \epsilon_t$. But how do we feed the u-net the embedded text in a meaningful way? Stable diffusion uses an architecture with cross-attention heads after the residual connection. The cross-attention is between the embedded text and the u-net’s representation of the embedded image. Intuitively, cross-attention gives the u-net a way of conditioning the de-noising process on the text description. This is very helpful: if we were told a noisy image depicts a cat on a bike, we would see it differently than if we were told it depicts a flying turtle.</p>
<p>Once we have a working $f_\theta$, we can guide it through the denoising process with an embedded text prompt $w$. We start with noise in the latent space $x_T’$ and for $t=T, \ldots, 1$, we predict $\epsilon_t’ = f_\theta(x_t’, t, w)$ and compute $x’_{t-1} = x_t’ - \epsilon_t’$. Stable diffusion then decodes $x_0’$ into pixel space and, with any luck, the result is an image of the text prompt we started with.</p>Course NotesIn which we discuss the foundations of generative neural network models.Lecture 10: Reinforcement Learning (II)2021-03-30T00:00:00+00:002021-03-30T00:00:00+00:00https://chinmayhegde.github.io/dl-notes/notes/lecture10<p><em>In which we continue laying out the basics of reinforcement learning.</em></p>
<p>Recall that in the previous lecture we talked about a new <em>mode</em> of ML called reinforcement learning (RL), where the observations occur in a dynamic environment, and the learning module (also called the <em>agent</em>) needs to figure out the best sequence of actions to be taken (also called the policy) in order to maximize a given objective (also called the reward).</p>
<p>We also discussed a method called <em>Policy Gradients</em>, which uses the log-derivative trick to rewrite the problem in such a way that we can use standard ML tools (such as SGD) to learn a good RL policy. This led to an algorithm called REINFORCE (or Monte Carlo Policy Search), which can be viewed as an instantiation of random search used in derivative-free optimization.</p>
<p>(Aside: notice that nowhere in the above discussion did <em>deep learning</em> show up – indeed, RL can be used in very general settings. In the context of policy gradients, deep learning arises only if we choose to parameterize the policy in terms of a deep neural network.)</p>
<p>Today, we will learn about a <em>different</em> family of RL approaches which does something slightly different.</p>
<h2 id="q-learning">Q-Learning</h2>
<p>Recall the setup in policy gradients:</p>
<ul>
<li>The agent receives a sequence of observations (in the form of e.g. image pixels) about the environment.</li>
<li>The state at time $t$, $s_t$, is the instantaneous relevant information of the agent.</li>
<li>The agent can choose an action, $a_t$, at each time step $t$ (e.g. go left, go right, go straight). The next state of the game is determined by the current state and the current action:</li>
</ul>
\[s_{t+1} \sim f(s_t, a_t) .\]
<p>Here, $f$ is the <em>state transition</em> function that is entirely determined by the environment. We use the symbol $\sim$ to denote the fact that environments could be random and an action may sometimes have unpredictable consequences.</p>
<ul>
<li>The agent periodically receives rewards/penalties as a function of the current state and action, $r_t = r(s_t,a_t)$.</li>
<li>The sequence of state-action pairs $\tau_t = (s_0, a_0, s_1, a_1, \ldots, a_t, s_t)$ is called a <em>trajectory</em> or <em>rollout</em>. The rollout is usually defined over a fixed time horizon $L$. In policy gradients, our goal is to minimize the (negative) reward:</li>
</ul>
\[\begin{aligned}
\text{minimize}~&R(\tau) = \sum_{t=0}^{L-1} - r(s_t, a_t), \\
\text{subject to}~&s_{t+1} \sim f(s_t,a_t) \\
& a_t \sim \pi(\tau_t) .
\end{aligned}\]
<p>Let us now think of the problem in a slightly different fashion, which is somewhat more applicable in the context of goal-oriented RL. Instead of choosing good <em>actions</em> to take at each time step, an alternative is to identify (a sequence of) good <em>states</em> to visit. For simplicity, it is convenient to assume discrete spaces for both states and actions. It is also convenient to think in terms of <em>episodes</em> instead of rollouts. So each episode could be viewed as one run of a game.</p>
<p>This makes sense in the context of games: the ultimate goal is to reach the “win” state, just as how the ultimate goal in chess is to have the board result in a “checkmate” of the opponent. A common simple example given in the RL literature is the game of <em>Frozen Lake</em> (taken from OpenAI Gym), where the objective is to skate along the surface of a (frozen) lake, modeled as a 4x4 grid, from a starting position to the goal without falling into any “holes” in the lake. (The ice is slippery, so there is some randomness in the environment.)</p>
<p><img src="/dl-notes/assets/figures/frozen-lake.png" alt="Frozen Lake" /></p>
<p>This is a rather simple game (there are 16 states, and 4 actions per state). But one could model more complex RL problems too in this manner. In autonomous navigation, for example, the ultimate state is achieved when the agent has reached the destination, and other states along the way to this final “win” state are likely to also be good states.</p>
<p>(In fact, this idea of looking backwards from the “win” state, and identifying which states lead to wins, is exactly the same principle that we use in <em>dynamic programming</em> (DP). As we will see soon, what we will discuss below can be viewed as an approximate version of DP.)</p>
<p>The way we characterize “good states” is by a quantity called the <em>value function</em>. To understand this, we first need to define the <em>return</em>, which is the sum of all anticipated rewards in the future over an infinite time horizon. In practice, we cannot sum over infinitely many rewards so we discount future rewards by a decay factor $\gamma$, leading to the <em>discounted return</em>:</p>
\[G_t = r_t + \gamma r_{t+1} + \gamma^2 r_{t+2} + \ldots\]
<p>“Good states” are likely to provide good returns, provided a sensible policy is chosen. The value function of a state $s$ under a given policy $\pi$ is defined as the expected discounted return if we start at $s$ and obey $\pi$:</p>
\[V^{\pi}(s) = \mathbb{E} [G_t | s_t = s] = \mathbb{E} [ \sum_{i=0}^\infty \gamma^i r_{t+i} | s_t = s] .\]
<p>The value function gives us a way to identify good states versus not-so-good ones, but it does not quite tell us how to <em>reach</em> these states. In order to do so, we need to go one more step: define an <em>action-value</em> function, or a <em>Q-function</em>, which is defined as the expected discounted return if we start at $s$, take action $a$, and subsequently follow the policy:</p>
\[Q^{\pi}(s,a) = \mathbb{E} [G_t | s_t = s, a_t = a] .\]
<p>Since we have assumed (for convenience) that both the state and action spaces are discrete, we can think of the Q-function as a giant table (similar to the table that we encounter in DP). Also, by law of iterated expectation, we can link the Q-function and the value function by just averaging over all possible actions, weighted by the likelihood of choosing action $a$ under the policy:</p>
\[V^{\pi}(s) = \sum_a \pi(a | s) Q^{\pi}(s,a) .\]
<p>The Q-function gives us a way to determine the optimal policy as follows. If the Q-function were available (somehow, and we will discuss how to learn it), we could just choose optimal actions by picking the one that maximizes the expected return:</p>
\[\pi^*(s) = \arg \max_a Q(s,a)\]
<p>All this sounds good, but how do we actually discover the Q-function? And where does learning enter the picture?</p>
<h3 id="algorithms-for-q-learning">Algorithms for Q-learning</h3>
<p>The key to Q-learning is a recursive characterization of the optimal Q-function called the <em>Bellman Equation</em>, similar to how DP tables are recursively constructed. There is a formal derivation in the probabilistic case, which we won’t present here. But intuitively, the Bellman equation states that if the policy is optimally chosen, then the $Q$ function at the current time step is the current reward, plus the <em>best</em> return achievable at the <em>next</em> time step.</p>
\[Q^*(s_t,a_t) = r(s_t,a_t) + \gamma \max_{a'} Q^* (s_{t+1},a')\]
<p>The Bellman equation also gives us a way to perform <em>learning</em> in the RL setting. We start with an estimate of the $Q$-function (say, an empty table, or a table with random values). We start at some state $s$, take an action, collect a reward $r$, and then move to the next state $s’$ (in short, the quadruple $(s,a,r,s’)$). The <em>Bellman error</em> is defined as the mean-squared error between the current estimate $Q$ and the predicted estimate:</p>
\[l = \frac{1}{2} (r + \gamma \max_{a'} Q(s',a') - Q(s,a))^2\]
<p>which is a quantity that we (as ML engineers) love to see, since we can immediately use this error term to perform gradient descent:</p>
\[Q(s,a) \leftarrow Q(s,a) + \eta \left[ r + \gamma \max_{a'} Q(s',a') - Q(s,a) \right]\]
<p>and that’s it! The above procedure can be repeated by sampling different states and actions, observing the rewards, and updating the Q-function as we go along.</p>
<p>There is a small catch here though: updates are only made for state-action pairs that the agent actually visits, and since actions are picked greedily according to the current table, certain state-action pairs may never be visited at all. Sometimes the agent needs to pick <em>sub</em>-optimal actions in order to visit new states; this is a common issue in RL called the <em>exploration-exploitation tradeoff</em>.</p>
<p>The easy fix is to choose an $\epsilon$-<em>greedy policy</em>: with probability $\epsilon$, we choose a random action, and with probability $1-\epsilon$, we choose the optimal action according to $Q$. So the overall algorithm becomes the following.</p>
<p>Initialize $Q$, repeat (for each episode):</p>
<ul>
<li>Initialize $s$</li>
<li>Repeat for each step of episode:
<ul>
<li>Choose an action $a$ using $\epsilon$-greedy policy</li>
<li>Take action $a$, observe reward $r$ and state $s’$</li>
<li>$Q(s,a) \leftarrow Q(s,a) + \eta \left[ r + \gamma \max_{a’} Q(s’,a’) - Q(s,a) \right]$</li>
<li>$s \leftarrow s’$</li>
</ul>
</li>
<li>Until $s$ is the end-state.</li>
</ul>
<p>The above algorithm can be implemented with any game engine/simulator.</p>
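A minimal sketch of this loop on a toy deterministic “corridor” environment (five states in a row, reward on reaching the rightmost one; a stand-in for a real simulator like Frozen Lake):

```python
import numpy as np

n_states, n_actions = 5, 2                 # actions: 0 = left, 1 = right
gamma, eta, eps = 0.9, 0.5, 0.3
rng = np.random.default_rng(0)
Q = np.zeros((n_states, n_actions))

def step(s, a):                            # deterministic dynamics + reward
    s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
    return s2, (1.0 if s2 == n_states - 1 else 0.0)

for episode in range(500):
    s = 0
    while s != n_states - 1:
        # epsilon-greedy action selection
        if rng.random() < eps:
            a = int(rng.integers(n_actions))
        else:
            a = int(Q[s].argmax())
        s2, r = step(s, a)
        # Bellman update on the Q-table
        Q[s, a] += eta * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

greedy = [int(Q[s].argmax()) for s in range(n_states - 1)]
print(greedy)                              # should be all 1's ("go right")
```

After enough episodes, the greedy policy is “go right” everywhere, and $Q$ approaches the true discounted returns ($Q(3, \text{right}) \to 1$, $Q(2, \text{right}) \to \gamma$, and so on).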
<h3 id="deep-q-learning">Deep Q-learning</h3>
<p>So far, we have imagined both actions and states to be discrete spaces, and hence $Q$ is a table.</p>
<p>There are two issues here:</p>
<ul>
<li>Impractical (too many states in many cases, even infinite if we are talking about continuous environments)</li>
<li>No structure/shared information between states and actions</li>
</ul>
<p>Similar to policy gradients, one can resolve this by <em>parameterizing</em> the $Q$-function. This can be done in a few different ways. For example, we could do a <em>linear function approximation</em>:</p>
\[Q(s,a) = w^T \psi(s,a)\]
<p>where $\psi$ is some feature embedding of the tuple $(s,a)$. (As to where the embedding comes from: this is identical to the challenge of “word embeddings” for NLP, and similar techniques can be used here, which we won’t discuss.)</p>
<p>Or, alternatively, we could think of $Q$ to be some deep neural network, parameterized by weights $w$. The latter would be called <em>deep Q-learning</em>. One can prove that the Bellman equation remains the same, so the only thing that changes is the gradient descent equation:</p>
\[\begin{aligned}
t &\leftarrow r + \gamma \max_{a'} Q(s',a') \\
w &\leftarrow w + \eta (t - Q(s,a)) \frac{\partial Q(s,a)}{\partial w}
\end{aligned}\]
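For the linear parameterization $Q(s,a) = w^\top \psi(s,a)$, the gradient $\partial Q / \partial w$ is just $\psi(s,a)$, so one update step looks like the following sketch (the feature map $\psi$ here is a made-up random embedding, not a real learned one):

```python
import numpy as np

rng = np.random.default_rng(5)
n_states, n_actions, d = 5, 2, 8
phi = rng.normal(size=(n_states, n_actions, d))   # hypothetical embeddings
psi = lambda s, a: phi[s, a]

gamma, eta = 0.9, 0.1
w = np.zeros(d)

def q_update(w, s, a, r, s2):
    # Semi-gradient Q-learning step: the bootstrapped target is
    # treated as a constant when differentiating.
    target = r + gamma * max(w @ psi(s2, b) for b in range(n_actions))
    td_error = target - w @ psi(s, a)
    return w + eta * td_error * psi(s, a)         # dQ/dw = psi(s, a)

w_new = q_update(w, s=0, a=1, r=1.0, s2=1)
print(np.linalg.norm(w_new) > 0)                  # → True (weights moved)
```

A deep Q-network replaces $\psi$ and the inner product with a neural network, and the gradient term with backpropagation, but the update has the same shape.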
<p>A bit of history: among the several breakthroughs in deep learning that happened in the early 2010s was the success of neural nets to crack 80’s-style Atari games in 2013. A rather shallow network with 3 hidden layers was used; see Figure 2.</p>
<p><img src="/dl-notes/assets/figures/atari.jpg" alt="DQN architecture for solving Atari games" /></p>
<h3 id="comparisons-with-policy-gradients">Comparisons with policy gradients</h3>
<p>In contrast with policy gradients (which directly learn policies), Q-learning introduces an intermediate quantity (the Q-function) that explicitly assigns value to states and actions.</p>
<p>Pros of policy gradients:</p>
<ul>
<li>There is no Q-function table to be populated, so one can handle large, or even continuous, action spaces.</li>
<li>No need to model intermediate variables (such as Value/Q-function); the model directly estimates the policy.</li>
<li>Unlike Q-learning (where the final optimal policy is deterministic: it is the max over all actions for a given state), policy gradients can output stochastic/non-deterministic policies. This is useful in games without stable equilibria (such as Rock-Paper-Scissors) where there is no single deterministic policy that is the best.</li>
</ul>
<p>Pros of DQN:</p>
<ul>
<li>Q-learning is (generally) more sample-efficient (recall policy gradients are similar to random search). Therefore, with a fixed number of episodes/training data, Q-learning tends to perform better.</li>
<li>Q-learning gives an estimate of anticipated return at each time step, which can be useful in higher-level planning, reasoning, and control tasks.</li>
</ul>Course NotesIn which we continue laying out the basics of reinforcement learning.Lecture 9: Reinforcement Learning (I)2021-03-29T00:00:00+00:002021-03-29T00:00:00+00:00https://chinmayhegde.github.io/dl-notes/notes/lecture09<p><em>In which we introduce the basics of reinforcement learning.</em></p>
<p>Throughout this course, we have primarily focused on supervised learning (building a prediction function from labeled data), and briefly also discussed unsupervised learning (generative models and word embeddings). In both cases, we have assumed that the data to the machine learning algorithm is <em>static</em> and the learning is performed <em>offline</em>.</p>
<p>Neither assumption is true in the real world! The data that is available is often influenced by previous predictions that you have made. (Think, for example, of stock markets.) Moreover, data is continuously streaming in, so one needs to be able to adapt to uncertainties and unexpected pitfalls in a potentially adverse environment.</p>
<p>Applications that fall into this category include:</p>
<ul>
<li>AI for games (both computer/video games as well as IRL games such as Chess or Go)</li>
<li>teaching robots how to autonomously move in their environment</li>
<li>self-driving cars</li>
<li>algorithmic trading in markets</li>
</ul>
<p>among others.</p>
<p>This set of applications motivates a third mode of ML called <em>reinforcement learning</em> (RL). The field of RL is broad and we will only be able to scratch the surface. But several of the recent success stories in deep learning are rooted in advances in RL – the most high profile of them are Deepmind’s AlphaGo and OpenAI’s DOTA 2 AI, which were able to beat the world’s best human players in Go and DOTA 2 respectively. These AI agents were able to learn winning strategies entirely automatically (albeit by leveraging massive amounts of training data; we will discuss this later.)</p>
<p>To understand the power of RL, consider – for a moment – how natural intelligence works. An infant presumably learns by continuously interacting with the world, trying out different actions in possibly chaotic environments, and observing outcomes. In this mode of learning, the input(s) to the learning module in the infant’s brain is decidedly <em>dynamic</em>; learning has to be done <em>online</em>; and very often, the environment is <em>unknown</em> before hand.</p>
<p>For all these reasons, the traditional mode of un/supervised learning does not quite apply, and new ideas are needed.</p>
<p>A quick aside: the above questions are not new, and the formal study of these problems is actually classical. The field of control theory is all about solving optimization problems of the above form. But the approaches (and applications) that control theorists study are rather different compared to those that are now popular in machine learning.</p>
<p><img src="/dl-notes/assets/figures/temple.png" alt="Temple Run" /></p>
<h2 id="setup">Setup</h2>
<p>We will see that RL is actually “in-between” supervised and unsupervised learning.</p>
<p>The basis of RL is an environment (modeled by a dynamical system), and a learning module (called an <em>agent</em>) makes <em>actions</em> at each time step over a period of time. Actions have consequences: actions periodically lead to <em>reward</em>, or <em>penalty</em> (equivalently, negative reward). The goal is for the agent to learn the best <em>policy</em> that maximizes the cumulative reward. All fairly intuitive!</p>
<p>Here, the “best policy” is application-specific – it could refer to the best way to win a game of Space Invaders, or the best way to allocate investments across a portfolio of stocks, or the best way to navigate an autonomous vehicle, or the best way to set up a cooling schedule for an Amazon Datacenter.</p>
<p>All this is a bit abstract, so let us put this into concrete mathematical symbols, and interpret them (as an example) in the context of the classic iOS game <em>Temple Run</em>, where your game character is either Guy Dangerous or Scarlett Fox and your goal is to steal a golden idol from an Aztec temple while being chased by demons. (Fun game. See Figure 1.) Here,</p>
<ul>
<li>The environment is the 3D game world, filled with obstacles, coins, etc.</li>
<li>The agent is the player.</li>
<li>The agent receives a sequence of observations (in the form of e.g. image pixels) about the environment.</li>
<li>The state at time $t$, $s_t$, is the instantaneous relevant information of the agent (e.g. the 2D position and velocity of the player).</li>
<li>The agent can choose an action, $a_t$, at each time step $t$ (e.g. go left, go right, go straight). The next state of the game is determined by the current state and the current action:</li>
</ul>
\[s_{t+1} = f(s_t, a_t) .\]
<p>Here, $f$ is the <em>state transition</em> function that is entirely determined by the environment. In control theory, we typically call this a <em>dynamical system</em>.</p>
<ul>
<li>
<p>The agent periodically receives rewards (coins/speed boosts) or penalties (speed bumps, or even death!). Rewards are also modeled as a function of the current state and action, $r(s_t,a_t)$.</p>
</li>
<li>
<p>The agent’s goal is to decide on a strategy (or policy) of choosing the next action based on all past states and actions:</p>
</li>
</ul>
\[s_t, a_{t-1}, s_{t-1}, \ldots, s_1, a_1, s_0, a_0.\]
<ul>
<li>The sequence of state-action pairs $\tau_t = (s_0, a_0, s_1, a_1, \ldots, a_t, s_t)$ is called a <em>trajectory</em> or <em>rollout</em>. Typically, it is impractical to store and process the entire history, so policies are chosen only over a fixed time interval in the past (called the <em>horizon length</em> $L$).</li>
</ul>
<p>So a policy is simply defined as any function $\pi$ that maps $\tau_t$ to $a_t$. Our goal is to figure out the best policy (where “best” is defined in terms of maximizing the rewards).</p>
<p>But as machine learning engineers, we can fearlessly handle minimization/maximization problems! Let us try and apply the ML tools we know here. Pose the cumulative negative reward as a loss function, and minimize this loss as follows:</p>
\[\begin{aligned}
\text{minimize}~&R(\tau) = \sum_{t=0}^{L-1} - r(s_t, a_t), \\
\text{subject to}~&s_{t+1} = f(s_t,a_t) \\
& a_t = \pi(\tau_t) .
\end{aligned}\]
<p>The cumulative reward function $R(\tau)$ is sometimes replaced by the <em>discounted</em> cumulative reward, in which we exponentially decay the reward across time with some factor $0 < \gamma < 1$:</p>
\[R_{\text{discounted}}(\tau) = \sum_{t=0}^{L-1} - \gamma^t r(s_t, a_t) .\]
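<p>As a quick worked example (the reward values and discount factor below are made up), here is the difference between the plain and discounted cumulative negative reward:</p>

```python
# Cumulative vs. discounted cumulative (negative) reward for a toy rollout.
# rewards[t] plays the role of r(s_t, a_t); the values here are made up.
rewards = [1.0, 0.0, 2.0, 1.0]
gamma = 0.9

R = -sum(rewards)                                           # R(tau)
R_disc = -sum(gamma**t * r for t, r in enumerate(rewards))  # R_discounted(tau)
print(R, R_disc)   # -4.0 and -(1 + 0 + 1.62 + 0.729) = -3.349
```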
<p>OK, this looks similar to a loss minimization setting that we are all familiar with. We can begin to apply any of our optimization tools (e.g. SGD) to solve it. Several caveats emerge, however, and we have to be more precise about what we are doing.</p>
<p>First, what are the optimization variables? We are seeking the best among all <em>policies</em> $\pi$ (which, above, are defined as functions from trajectories to actions), so this means that we will have to parameterize these policies somehow. We could imagine $\pi$ to be a linear model that maps trajectories to actions, or kernel model, or a deep neural network. It really does not matter conceptually (although it does matter a lot in practice).</p>
<p>Second, what are the “training samples” provided to us, and what are we trying to learn? The key assumption in RL is that, in the general case, everything is probabilistic:</p>
<ul>
<li>the policy is stochastic. So what $\pi$ is actually predicting from a given trajectory is not a <em>single best</em> action but a <em>distribution</em> over actions. More favorable actions get assigned higher probability and vice versa.</li>
<li>the environment’s dynamics, captured by $f$, can be stochastic.</li>
<li>the reward function itself can be stochastic.</li>
</ul>
<p>The last two assumptions are not critical – for example, in simple games, the dynamics and the reward are deterministic functions, though not so in more complex environments such as the stock market – but the first one (stochastic policies) is fundamental in RL. This also hints at why we are optimizing over probabilistic policies in the first place: if there were no uncertainty and everything were deterministic, an oracle could have designed an optimal sequence of actions for all time beforehand.</p>
<p>(In older Atari-style or Nintendo video games, this could indeed be done and one could play an optimal game pretty much from memory: Youtube has several examples of folks playing games like Super Mario blindfolded.)</p>
<p>Since policies are probabilistic, they induce a probability distribution over trajectories, and hence the cumulative negative reward is also probabilistic. (It’s a bit hard to grasp this, considering that all the loss functions we have talked about until now in deep learning have been deterministic, but the math works out in a similar manner.) So to be more precise, we will need to rewrite the loss in terms of the <em>expected value</em> over the randomness:</p>
\[\begin{aligned}
\text{minimize}~&\mathbb{E}_{\pi(\tau)} \left[ R(\tau) \right] = \mathbb{E}_{\pi(\tau)} \left[ \sum_{t=0}^{L-1} - r(s_t, a_t) \right], \\
\text{subject to}~&s_{t+1} = f(s_t,a_t) \\
& a_t = \pi(\tau_t),~\text{for}~t = 0,\ldots,L-1.
\end{aligned}\]
<p>This probabilistic way of thinking makes the role of ML a bit clearer. Suppose we have a yet-to-be-determined policy $\pi$. We pick a horizon length $L$, and execute this policy in the environment (the game engine, a simulator, the real world, etc.) for $L$ time steps. We get to observe the full trajectory $\tau$ and the sequence of rewards $r(s_t,a_t)$ for $t=0,\ldots,L-1$. This pair is called a <em>training sample</em>. Because of the randomness, we simulate multiple such rollouts, compute the cumulative reward averaged over all of them, and adjust our policy parameters until this expectation is maximized.</p>
<p>We now return to the first sentence of this subsection: why RL is “in-between” supervised and unsupervised learning. In supervised learning we need to build a function that predicts label $y$ from data features $x$. In unsupervised learning there is no separate label $y$; we typically wish to predict some intrinsic property of the dataset of $x$. In RL, the “label” is the action at the next time step, but once taken, this action becomes <em>part of the training data</em> and influences the subsequent action. This issue of intertwined data and labels (due to the possibility of complicated feedback loops across time) makes RL considerably more challenging.</p>
<h2 id="policy-gradients">Policy gradients</h2>
<p>Let us now discuss a technique to numerically solve the above optimization problem. Basically, it will be a smart version of ‘trial-and-error’ – sample a rollout with some actions; if the reward is high then make those actions more probable (i.e., “reinforce” these actions), and if the reward is low then make those actions less probable.</p>
<p>In order to maximize expected cumulative rewards, we will need to figure out how to take gradients of the reward with respect to the policy parameters.</p>
<p>Recall that trajectories/rollouts $\tau$ are a probabilistic function of the policy parameters $\theta$. Our goal is to compute the gradient of the expected reward, $\mathbb{E}_{\pi(\tau)} R(\tau)$ with respect to $\theta$. To do so, we will need to take advantage of the <em>log-derivative trick</em>. Observe the following fact:</p>
\[\begin{aligned}
\frac{\partial}{\partial \theta} \log \pi(\tau) &= \frac{1}{\pi(\tau)} \frac{\partial \pi(\tau)}{\partial \theta},~\text{i.e.} \\
\frac{\partial \pi(\tau)}{\partial \theta} &= \pi(\tau) \frac{\partial}{\partial \theta} \log \pi(\tau) .
\end{aligned}\]
<p>Therefore, the gradient of the expected reward is given by:</p>
\[\begin{aligned}
\frac{\partial}{\partial \theta} \mathbb{E}_{\pi(\tau)} R(\tau) &= \frac{\partial}{\partial \theta} \sum_{\tau} R(\tau) \pi(\tau) \\
&= \sum_\tau R(\tau) \frac{\partial \pi(\tau)}{\partial \theta} \\
&= \sum_\tau R(\tau) \pi(\tau) \frac{\partial}{\partial \theta} \log \pi(\tau) \\
&= \mathbb{E}_{\pi(\tau)} [R(\tau) \frac{\partial}{\partial \theta} \log \pi(\tau)].
\end{aligned}\]
<p>In words: the gradient of an expectation can be converted into an expectation of a closely related quantity. So instead of computing this expectation exactly, we can, as in SGD, <em>sample</em> different rollouts and compute a stochastic approximation to the gradient. The entire pseudocode is as follows.</p>
<p>Repeat:</p>
<ol>
<li>
<p>Sample a trajectory/rollout $\tau = (s_0, a_0, s_1, \ldots, s_L)$.</p>
</li>
<li>
<p>Compute $R(\tau) = \sum_{t=0}^{L-1} - r(s_t, a_t)$</p>
</li>
<li>
<p>$\theta \leftarrow \theta - \eta R(\tau) \frac{\partial}{\partial \theta} \log \pi(\tau)$</p>
</li>
</ol>
<p>There is a slight catch here, since we are reinforcing actions over the entire rollout; however, actions should technically be reinforced only based on future rewards (since they cannot affect past rewards). But this can be adjusted by suitably redefining $R(\tau)$ in Step 2 to sum from the $t$-th time step to the end of the horizon.</p>
<p>That’s it! This form of policy gradient is sometimes called REINFORCE. Since we are sampling rollouts, this is also called <em>Monte Carlo Policy Gradient</em>.</p>
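<p>As a sanity check of the log-derivative trick underlying this estimator, the sketch below (all numbers made up) verifies, for a small 3-action categorical policy $\pi_\theta = \text{softmax}(\theta)$, that $\mathbb{E}[R \cdot \partial_\theta \log \pi]$ matches a numerical gradient of the expected reward:</p>

```python
import numpy as np

# Exact check of the log-derivative trick on a 3-action categorical policy.
theta = np.array([0.2, -0.5, 1.0])   # policy parameters (made up)
R = np.array([1.0, 3.0, -2.0])       # per-action reward (made up)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def expected_reward(theta):
    return softmax(theta) @ R

# Left side: numerical gradient of E[R] via central differences.
eps = 1e-6
num_grad = np.array([
    (expected_reward(theta + eps * np.eye(3)[i]) -
     expected_reward(theta - eps * np.eye(3)[i])) / (2 * eps)
    for i in range(3)])

# Right side: E[R(a) * grad_theta log pi(a)], computed exactly over all
# actions; for a softmax parameterization, grad_theta log pi(a) = e_a - p.
p = softmax(theta)
trick_grad = sum(p[a] * R[a] * (np.eye(3)[a] - p) for a in range(3))

print(np.allclose(num_grad, trick_grad, atol=1e-5))   # True
```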
<p>In the above algorithm, notice that we never require direct access to the environment (or more precisely, the model of the environment, $f$) – only the ability to sample rollouts, and the ability to observe corresponding rewards. This setting is therefore called <em>model-free reinforcement learning</em>. A parallel set of approaches is model-based RL, which we will briefly touch upon next week.</p>
<p>Second, notice that since we don’t require gradients, this works even for non-differentiable reward functions! In fact, the reward can be anything – non-smooth, non-differentiable, even discontinuous (such as a 0-1 loss).</p>
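<p>Putting the pieces together, here is a minimal sketch of REINFORCE on a toy two-armed bandit (horizon $L = 1$); the arm means, step size, and noise level are all made up for illustration. Note that it performs gradient <em>ascent</em> on the reward, which is equivalent to the descent-on-negative-reward update in Step 3 above:</p>

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

theta = np.zeros(2)                  # policy logits (the parameters)
arm_means = np.array([0.2, 0.8])     # arm 1 pays more on average
eta = 0.1                            # step size

for step in range(2000):
    p = softmax(theta)
    a = rng.choice(2, p=p)                   # sample a rollout (one action)
    r = arm_means[a] + 0.1 * rng.normal()    # observe a noisy reward
    grad_log_pi = np.eye(2)[a] - p           # grad_theta log pi(a)
    theta += eta * r * grad_log_pi           # reinforce the sampled action

# Probability mass should typically concentrate on the better arm (arm 1).
print(softmax(theta))
```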
<h2 id="connection-to-random-search">Connection to random search</h2>
<p>In the above algorithm, in order to optimize over rewards, observe that we only needed to access function evaluations of the reward, $R(\tau)$, but <em>never its gradient</em>. This is a departure from the regular gradient-based backpropagation framework we have been using thus far. The REINFORCE algorithm is in fact an example of <em>derivative-free optimization</em>, which involves optimizing functions without gradient calculations.</p>
<p>Another way to do derivative free optimization is simple: just random search! Here is a quick introduction. If we are minimizing any loss function $f(\theta)$, recall that gradient descent updates $\theta$ along the negative direction of the gradient:
\(\theta \leftarrow \theta - \eta \nabla f(\theta) .\)</p>
<p>But in random search, we pick a <em>random</em> direction $v$ to update $\theta$, and instead search for the (scalar) step size that provides the maximum decrease in the loss along that direction. This is a rather inefficient way to minimize a loss function (the intuition: if we are trying to walk to the bottom of a valley, it is much better to follow the direction of steepest descent than to bounce around randomly). But in the long run, random search provably works as well. The pseudocode is as follows:</p>
<ul>
<li>Sample a random direction $v$</li>
<li>Search for the step size (positive or negative) that minimizes $f(\theta + \eta v)$. Let that step size be $\eta_{\text{opt}}$.</li>
<li>Set $\theta \leftarrow \theta + \eta_{\text{opt}} v$.</li>
</ul>
<p>Again, observe that the gradient of $f$ never shows up! The only catch is that we need to do a step size search (also called <em>line search</em>). However, this can be done quickly using a variation of binary search. Notice the similarity of the update rules (at least in form) to REINFORCE.</p>
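<p>Here is a minimal sketch of this random-search-with-line-search loop on a simple quadratic loss; the loss function, iteration count, and the crude grid-based line search are all made up for illustration:</p>

```python
import numpy as np

rng = np.random.default_rng(1)

# Random search with line search on f(theta) = ||theta||^2 (minimum at 0).
def f(theta):
    return np.sum(theta**2)

theta = np.array([3.0, -2.0])
etas = np.linspace(-1.0, 1.0, 201)   # candidate step sizes (crude line search)

for _ in range(100):
    v = rng.normal(size=2)
    v /= np.linalg.norm(v)           # sample a random unit direction
    losses = [f(theta + eta * v) for eta in etas]
    eta_opt = etas[int(np.argmin(losses))]   # best step size along v
    theta = theta + eta_opt * v

print(f(theta))   # should be close to 0
```

<p>(A real implementation would replace the grid with a proper line search, e.g. a variant of binary search, as noted above.)</p>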
<p>Let us apply this idea to policy gradients. Instead of the log-derivative trick, we will simply assume deterministic policies (i.e., a particular choice of policy parameters $\theta$ leads to a deterministic rollout $\tau$) and use the above algorithm, with $f$ being the negative cumulative reward $R$. The overall algorithm for policy gradient now becomes the following.</p>
<p>Repeat:</p>
<ol>
<li>
<p>Sample a new policy update direction $v$.</p>
</li>
<li>
<p>Search for the step size $\eta$ that minimizes $R(\theta + \eta v)$.</p>
</li>
<li>
<p>Update the policy parameters $\theta \leftarrow \theta + \eta v$.</p>
</li>
</ol>
<p>Done!</p>
<h2 id="details-and-extensions">Details and extensions</h2>
<p>We have only touched upon the bare minimum required to understand policy gradients in RL. This is a very vast area of emerging work and we cannot unfortunately do justice to all of it. Let us touch upon some practical aspects/concerns that may be of importance while trying to build RL systems.</p>
<p>First, the problem with REINFORCE is that we are replacing the expected value with a sample average in the gradient calculation; but unlike in standard SGD-type training, the <em>variance</em> of this sample average will typically be far too high. This means that vanilla policy gradients will be far too slow and unreliable.</p>
<p>The standard solution is to perform <em>variance reduction</em>. One way to adjust the variance is via insertion of a quantity called the <em>reward baseline</em>. To understand this, observe that unlike regular gradient descent type training methods (which by definition depend on the slope/gradient of the loss), REINFORCE depends on the <em>absolute value</em>, not the <em>change</em>, of the reward function $R(\tau)$. This does not quite make sense: if a constant bias (of say +1000) is added uniformly to the reward function, the problem does not change fundamentally (we are just rewriting the reward on a different scale) but the algorithm changes quite a bit: in every iteration, every set of weights is likely to be reinforced positively no matter whether the action taken was good or bad.</p>
<p>A simple fix is baseline-adjusted descent: subtract a baseline $b$ from the reward, replacing $R(\tau)$ with $R(\tau) - b$. Here is the method: we <em>learn</em> a baseline such that good actions are always associated with positive (adjusted) reward, and bad actions with negative (adjusted) reward. This is hard to do properly, and it is important to re-fit the baseline estimate each time. In the discounted-reward case, we have to re-adjust the baseline depending on $\gamma$.</p>
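<p>The sketch below (with exaggerated, made-up rewards) illustrates the point numerically: adding a large constant offset to the reward blows up the variance of the REINFORCE gradient estimate, and subtracting a baseline near the mean reward brings it back down, without changing the estimate’s expectation:</p>

```python
import numpy as np

rng = np.random.default_rng(2)

# Effect of a reward baseline on the variance of the REINFORCE gradient
# estimate, for a 2-action softmax policy with a large constant reward offset.
theta = np.array([0.0, 0.0])
R = np.array([1000.0, 1001.0])   # rewards with a big constant bias (made up)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

p = softmax(theta)

def grad_estimates(baseline, n=10000):
    a = rng.choice(2, p=p, size=n)
    # per-sample estimate: (R(a) - b) * grad_theta log pi(a)
    return (R[a] - baseline)[:, None] * (np.eye(2)[a] - p)

var_no_b = grad_estimates(0.0).var(axis=0).sum()
var_b = grad_estimates(R.mean()).var(axis=0).sum()
print(var_no_b > 100 * var_b)   # the baseline cuts variance dramatically
```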
<p>Another point in policy gradients is that we do not require a differentiable reward/loss, but we <em>do</em> require that the mapping $\pi$ from trajectories to actions is differentiable. That’s the only way we can properly define $\partial \log \pi$ in the policy gradient update step (and that’s where standard neural net training methods such as backprop enter the picture).</p>
<p>To fix this, there is a class of techniques in RL called evolutionary search (ES) that removes backprop entirely. The idea is to make the choice of <em>policy</em> itself probabilistic (so $\pi$ itself can be viewed as being drawn from a distribution over functions) and apply the log-derivative trick there. It’s a bit complicated (and the gains over policy gradients are somewhat questionable) so we will not discuss it in detail here.</p>
<h1 id="lecture-8-applications-in-nlp">Lecture 8: Applications in NLP (2021-03-09)</h1>
<p><em>In which we see the power of deep networks in natural language processing.</em></p>
<p>In our discussion on deep learning for text, we have mainly focused on the middle part of this picture:</p>
<p><img src="/dl-notes/assets/figures/nlp.png" alt="NLP overview" /></p>
<p>All neural network models assume <em>real-valued vector</em> inputs, and we have assumed that there is some magical way to convert discrete data (such as text) to a form that neural networks can process.</p>
<p>Today we will focus on the bottom part. Where do the word encodings come from? And how do they interact with the rest of the learning?</p>
<h2 id="word2vec">word2vec</h2>
<p>The easiest way to encode words/tokens into real-valued vectors is one that we have already used for image-classification type applications: <em>one-hot encoding</em>.</p>
<p>Pros: this is dead simple to understand and implement.</p>
<p>Cons: there are two major drawbacks of using one-hot encodings.</p>
<ul>
<li>Each encoding can become very <em>high dimensional</em>. By definition, the encoded vectors are now the size of the vocabulary/dictionary. At the character level this is fine; at the word level it becomes very difficult; and at any higher level the space of symbols becomes combinatorially large.</li>
<li>More than just computation: one-hot encodings do not capture <em>semantic</em> similarities (every word is equally far, in L1/L2/Hamming distance, from every other word). It would be nice to have similar words share similar features (where the meaning of “similar” depends on the language and/or context).</li>
</ul>
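<p>A quick check of the second drawback: with one-hot encodings, every pair of distinct words is exactly equidistant, so no notion of similarity survives. (The vocabulary size below is made up.)</p>

```python
import numpy as np

# One-hot encodings for a toy 5-word vocabulary.
N = 5
onehot = np.eye(N)

# Euclidean distance between every pair of distinct words.
dists = [np.linalg.norm(onehot[i] - onehot[j])
         for i in range(N) for j in range(N) if i != j]

print(set(np.round(dists, 6)))   # a single value: sqrt(2) ~ 1.414214
```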
<p>This was recognized by early NLP researchers, and a host of encoding methods were proposed, including latent semantic analysis (LSA) and singular value decomposition (SVD). All of these were superseded by <em>word2vec</em>, which was introduced in 2013.</p>
<p>Word2vec is a word encoding framework that uses one of two approaches: <em>skip-grams</em> and <em>continuous bag-of-words</em>.</p>
<h3 id="skip-grams">Skip-grams</h3>
<p>In skip-grams, each word has two vector embeddings: $v_i$ and $u_i$. Let us first motivate why we need two such embeddings. As a running example, we will keep in mind a sentence such as:</p>
<blockquote>
<p>“It is raining right now”.</p>
</blockquote>
<p>and imagine it being represented as a sequence of words $x_1, x_2, x_3, x_4, x_5$.</p>
<p>We already discussed $n$-grams briefly before while motivating language models. For $n=2$, these are the joint probabilities</p>
\[P(x_1,x_2), P(x_2,x_3), ...,\]
<p>each of which can be empirically calculated by counting the number of co-occurrences of pairs of words in a database. Equivalently, it is easier to express this in terms of conditional probabilities</p>
\[P(x_2 | x_1), P(x_3 | x_2), ...\]
<p>The term “skip-gram” comes from the fact that we consider conditional probabilities that are not-consecutive, i.e., words can be skipped over. (The reason for exploring relationships between non-consecutive words goes back to the non-local, long-range dependency structure of natural languages.) In this case, the factorization is done with respect to the “center” (or “target”) word of the sequence; the other words are called “context” words. So the above factorization becomes:</p>
\[P(x_1 | x_3) \cdot P(x_2 | x_3) \cdot P(x_4 | x_3) \cdot P(x_5 | x_3)\]
<p>Intuitively, these probabilities tell us: “if a word $x_i$ appears in a sentence, how likely is it that the word $x_j$ will appear in its vicinity?” Here, “vicinity” would mean a window of some fixed size.</p>
<p>Having defined non-local conditional probabilities, the algorithmic question now becomes: how to estimate them? Again, one could just use the frequency of co-occurence counts in some large text corpus. However, we will depart from the standard approach, and instead train a simple neural network that predicts</p>
\[P(x_j | x_i).\]
<p>The network will be two layers deep (i.e., a single hidden layer of neurons with linear activations), followed by a softmax.</p>
<p>Some more details about this network. Say we have a dictionary of $N$ words. The input is a one-hot encoding $x_i$ of any given word $i$ (so, $N$ input neurons). The output is a vector of pre-softmax logits (so, $N$ output neurons). We can imagine (say) a hidden layer of $d$ (linear) neurons. So if we call $V \in \mathbb{R}^{d \times N}$ and $U \in \mathbb{R}^{d \times N}$ the two layers, then the conditional probability of any context word given the center is given by:</p>
\[\begin{aligned}
P(x_j | x_i) &= \text{softmax}(U^T V x_i) \\
&= \text{softmax}(U^T v_i ) \\
&= \text{softmax}([u_1^T v_i; u_2^T v_i; \ldots u_N^T v_i]) .
\end{aligned}\]
<p>So examining the columns of $U$ and $V$ gives us precisely what we want – the word embeddings for the <em>context</em> and the <em>target</em> words respectively. Typically, $d \ll N$, so the embedding dimension is much smaller than the size of the vocabulary.</p>
<p>Using the columns of $U$ and $V$ as embeddings also intuitively makes sense: similar words/synonyms should give us similar output probabilities, and in order for two outputs to be similar, both target and context probabilities must match.</p>
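<p>Here is a minimal numpy sketch of this skip-gram forward pass; the vocabulary size, embedding dimension, and random weights below are purely illustrative:</p>

```python
import numpy as np

rng = np.random.default_rng(3)

# Skip-gram forward pass: P(context j | center i) = softmax(U^T V x_i).
# Toy sizes: vocabulary N = 6, embedding dimension d = 3 (made up).
N, d = 6, 3
V = rng.normal(size=(d, N))   # center/target embeddings (columns v_i)
U = rng.normal(size=(d, N))   # context embeddings (columns u_j)

i = 2                          # center word index
x_i = np.eye(N)[i]             # one-hot input
logits = U.T @ (V @ x_i)       # equals [u_1^T v_i, ..., u_N^T v_i]
probs = np.exp(logits - logits.max())
probs /= probs.sum()

print(probs.sum())             # a valid distribution over the vocabulary
```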
<p>How do we train this network? First, we need to define a loss function. We can just use the standard cross-entropy loss, where the network is fed <em>pairs</em> of words (one-hot encoded) as data-label pairs. So for a particular pair of target-context words $i$ and $j$, we get the loss term:</p>
\[l(i,j) = -u_j^T v_i + \log\left(\sum_{k=1}^N \exp(u_k^T v_i)\right)\]
<p>whose derivative can then be used to update all the weights.</p>
<p>There are a few more issues to consider here. In English, for example, there are about $N = 10\text{K}$ commonly used words, so with an embedding dimension of (say) $d = 300$ we already have approximately $2 \times 300 \times 10\text{K} = 6$M weights to learn. Second, training can be <em>extremely</em> slow, since for every sample pair we have to touch all the weights. The word2vec paper used a few extra hacks (hierarchical softmax, negative sampling) to make this work, which we won’t dive into here – more details in an NLP course perhaps. See Chapter 14 of the textbook if you are interested.</p>
<h3 id="continuous-bag-of-words-cbow">Continuous Bag of Words (CBOW)</h3>
<p>The CBOW model is very similar to the skip-gram model, so we won’t get into too much detail. The main difference is that the CBOW model flips things around: instead of the center word defining the context, the context words are used to predict the target. So the conditional probabilities become:</p>
\[P(x_3 | x_1, x_2, x_4, x_5)\]
<p>which cannot be easily factorized the way we did so above. But the expression remains similar, if we approximate the embedding of the context as the (vector) average of the individual embeddings:</p>
\[P(x_i | x_1, \ldots x_j \ldots) = \frac{\exp(u_i^T \text{Ave}(v_j))}{\sum_i \exp(u_i^T \text{Ave}(v_j))} .\]
<p>Given this approximation of the conditional probabilities, the training is done just the same way as described above using the cross-entropy loss.</p>
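<p>The averaging approximation above is easy to sketch in numpy; again, all the sizes and weights below are made up for illustration:</p>

```python
import numpy as np

rng = np.random.default_rng(4)

# CBOW forward pass: the context embedding is the average of the individual
# context-word embeddings; the target distribution is a softmax against U.
N, d = 6, 3
V = rng.normal(size=(d, N))        # context embeddings (columns v_j)
U = rng.normal(size=(d, N))        # target embeddings (columns u_i)

context = [0, 1, 3, 4]             # indices of the context words
v_avg = V[:, context].mean(axis=1) # Ave(v_j)

logits = U.T @ v_avg
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(probs.argmax())              # predicted target word
```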
<p>Which embedding is better? Both are roughly equivalent and we could use one or the other.</p>
<h2 id="glove">GloVe</h2>
<p>The main problem with the word2vec framework is that both skip-gram and CBOW models rely on predicting output probabilities, and hence have to be trained with the cross-entropy loss.</p>
<p>For very large dictionaries, calculating cross-entropy can be troublesome: each gradient update requires computing softmaxes (and hence calculating all the outputs and marginalizing over them). Global vector (GloVe) embeddings resolve this in a slightly different manner. The idea is to use matrix factorization (a la PCA), and since it is not neural network-based we won’t go into too much detail here: take an NLP class if interested. The main steps are as follows:</p>
<ul>
<li>
<p>We construct a word-context co-occurrence matrix and try to factorize it using PCA (i.e., find the low-rank decomposition that minimizes the reconstruction loss).</p>
</li>
<li>
<p>This is not trivial, since the co-occurrence matrix is very sparse! But it can be trained using SGD-type methods.</p>
</li>
<li>
<p>Word distributions have a long tail, so very common words will dictate the loss function. To make things more equitable, log-probabilities are used.</p>
</li>
<li>
<p>In practice, a modified weighted form of the reconstruction loss is used:</p>
</li>
</ul>
\[L(U,V,b,c) = \sum_{i,j} f(x_{ij}) (u_i^T v_j + b_i + c_j - \log x_{ij})^2\]
<p>where $f(x_{ij}) = 1$ for reasonable $x_{ij}$ but quickly goes to 0 if $x_{ij}$ gets close to zero. This avoids the possibility that large (negative) values in the log-probabilities significantly influence the loss function.</p>
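<p>For concreteness, here is the weighting function used in the original GloVe paper: $f(x) = (x/x_{\max})^{\alpha}$ for $x < x_{\max}$ and $1$ otherwise, with $x_{\max} = 100$ and $\alpha = 0.75$:</p>

```python
import numpy as np

# The GloVe weighting function: f rises from 0 for rare co-occurrences and
# saturates at 1 beyond x_max (x_max = 100, alpha = 0.75 in the paper).
def f(x, x_max=100.0, alpha=0.75):
    return np.where(x < x_max, (x / x_max) ** alpha, 1.0)

print(f(np.array([0.0, 1.0, 100.0, 5000.0])))   # [0., ~0.0316, 1., 1.]
```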
<h2 id="elmo-bert-and-gpt-2">ELMO, BERT, and GPT-2</h2>
<p>While word2vec and GloVe represented step-changes in our ability to build sophisticated language models, they have now largely been surpassed by more modern techniques — ELMo, BERT, and GPT. Fortunately, we now have all the ingredients to understand them. The details are a bit hairy (a lot of engineering has gone into finetuning each of them) so we will only stick to high-level intuition and descriptions, while relegating the specifics to the textbook (Chapter 15).</p>
<h3 id="elmo">ELMo</h3>
<p>The main problem with GloVe/word2vec is that the word embeddings are <em>context-independent</em>. Recall that if we want to get a skip-gram embedding of a word, we one-hot encode it and look at the corresponding input- and output-layer weights in the above two-layer architecture.</p>
<p>However, words (particularly in languages such as English) are <em>context-dependent</em>. E.g. consider a sentence such as “Fish fish fish” — there are three identical words here, but the context shows that each has rather different meanings. Can we somehow get embeddings that not just look at word-level semantics but their usage in a given sentence?</p>
<p>ELMo (Embeddings from Language Models) does this. Just as how we motivated RNNs/LSTMs as possible architectures that can capture context in a sequence of inputs, similarly we can replace the simple feedforward architecture of word2vec with recurrent architectures.</p>
<p>Specifically, ELMo proposes to produce word embeddings by looking at the entire sentence both left-to-right and right-to-left. It achieves this via bi-directional LSTMs: it looks at the hidden layer representations (states) for both the left-to-right and right-to-left LSTMs and takes a weighted linear combination of them as the word embedding for each word in the input. The weights are left as trainable parameters used by downstream tasks (such as classification or sentence prediction) for further fine-tuning.</p>
<p>The choice of loss function is important; ELMo uses <em>next-word-prediction</em> (NWP) as the task of choice using the cross-entropy loss.</p>
<h3 id="bert">BERT</h3>
<p>The next natural progression was to replace the bidirectional LSTM encoding used by ELMo with <em>Transformers</em>. This led to BERT (bidirectional encoder representations from transformers). The main ingredients (over and above those described above) include:</p>
<ul>
<li>Replacing LSTMs with transformer blocks. The output of each encoder layer in each token’s path can be viewed as a feature embedding of that token.</li>
</ul>
<p><img src="/dl-notes/assets/figures/bert.png" alt="BERT overview" /></p>
<ul>
<li>
<p>The loss functions/training tasks used to learn the embeddings are masked language modeling and next-sentence prediction (NSP), which are shown to transfer well to many downstream tasks.</p>
</li>
<li>
<p>To encourage generalizability, random words in a sentence are masked out and the model is trained to predict them (the <em>masked language modeling</em> task). This is similar in spirit to Dropout, which we have seen in the context of training feedforward nets.</p>
</li>
<li>
<p>BERT also uses word <em>piece</em> tokenization, which is somewhere in between character-level and word-level encoding. This is useful for languages like English. For example, the word “walking” is broken into two pieces, “walk” and “ing”, each of which is tokenized.</p>
</li>
<li>
<p>There are two BERT models, which differ in the depth (number of encoder blocks) used in the Transformer architecture.</p>
</li>
<li>
<p>BERT is now adopted by Google Search in most of their supported languages.</p>
</li>
</ul>
<h3 id="gpt-2">GPT-2</h3>
<p>This line of work culminated in GPT-2 (GPT = Generative Pre-Training). Its successor (GPT-3) is possibly the most advanced language model currently available, but it is closed-source.</p>
<p>A key difference with BERT is that GPT-2 uses <em>masked auto-regressive self-attention</em>, so tokens are not allowed to peek at words to the right of them.</p>
<p>GPT-2 also used much deeper architectures than BERT, and was trained on extremely massive datasets (called the <a href="https://skylion007.github.io/OpenWebTextCorpus/">OpenWebText</a> Corpus).</p>
<p>Other hacks: like BERT, GPT-2 uses subword tokenization; specifically, it uses Byte Pair Encoding (BPE), which uses compression-style algorithms to figure out how to chop regular words up into tokens.</p>
<h3 id="summary">Summary</h3>
<p>There you have it: a brief summary of modern neural architectures for NLP (and sequential data more broadly).</p>
<p>Among the many applications they support: apart from regular classification-type problems (such as sentiment analysis or named entity recognition), the above models support:</p>
<ul>
<li>Language synthesis – as used by chatbots and the like.</li>
<li>Summarization: models such as GPT-2 can read a wikipedia article (without the intro paragraph) and be asked to summarize the intro.</li>
<li>Similar architectures can be fine-tuned to perform music synthesis (such as synthetic midi file generation).</li>
</ul>
<h1 id="lecture-7-transformers">Lecture 7: Transformers (2021-03-08)</h1>
<p><em>In which we introduce the Transformer architecture and discuss its benefits.</em></p>
<h1 id="attention-mechanisms-and-the-transformer">Attention Mechanisms and the Transformer</h1>
<h2 id="motivation">Motivation</h2>
<p>Attention models/Transformers are the most exciting models being studied in NLP research today, but they can be a bit challenging to grasp – the pedagogy is all over the place. This is both a bad thing (it can be confusing to hear different versions) and in some ways a good thing (the field is rapidly evolving, there is a lot of space to improve).</p>
<p>I will <a href="http://peterbloem.nl/blog/transformers">deviate</a> a little bit from how it is explained in the textbook, and in other online resources: see Section 10 in the <a href="http://d2l.ai">textbook</a> for an alternative treatment.</p>
<p>Recall where we left off: general RNN models. They look like this:</p>
<p><img src="/dl-notes/assets/figures/multi-layer-rnn.png" alt="Multi-layer RNNs" /></p>
<p>We discussed some NLP applications that are suitable to be solved by RNNs. These include:</p>
<ul>
<li>next symbol/token prediction</li>
<li>sequence classification</li>
</ul>
<p>but there are several NLP applications for which RNN-type models are not the best. These include:</p>
<ul>
<li>neural machine translation (NMT)</li>
<li>sentence generation</li>
</ul>
<p>Consider, for example, the English sentence:</p>
<ul>
<li>“How do you like the weather today”?</li>
</ul>
<p>and its German translation:</p>
<ul>
<li>“Wie finden sie das Wetter heute?”</li>
</ul>
<p>While the two sentences are rather similar (both are Germanic languages), we find some subtle differences here. One is the difference in the number of words: the German version has one fewer word. The second is the order of the words – the pronoun “you” comes before the verb “like” in English, but the pronoun “sie” comes after the verb “finden” in German. Both are examples of <em>misalignment</em>, and language translation has to frequently deal with small/local misalignments of this nature.</p>
<p>RNNs are not amenable to dealing with misalignments. The main reason is that RNNs (fundamentally) are <em>sequence-to-symbol</em> models: they output symbols one after the other based on the sequence seen so far. In NMT the outputs are not single tokens but <em>sequences</em> of tokens, each of which may depend on several parts of input sequence (both forwards and backwards in time) with long-range dependencies. How do we fix this problem? Let us consider a few different solution approaches.</p>
<p><em>Attempt 1</em>. Model tokens as entire sentences, not words (i.e., build the language model at the sentence level, not at the word- or character-levels). This, of course, is not feasible – due to combinatorial explosion, the number of possible sentences becomes extremely large very quickly.</p>
<p><em>Attempt 2</em>. A second approach is to use <em>bidirectional RNNs</em>. The idea is simple: read the input sequence both backwards and forwards in time. This way we will get two sets of hidden states. We can concatenate both states to decode the output. This is fine, but still does not capture very long range dependencies.</p>
<p><em>Attempt 3</em>: Encoder-decoder architectures. Delay producing any output in the beginning. Just compute the states recursively until the last state (which is the “global” context/memory variable that captures the entire sequence). This is called the <em>encoder</em>. Then feed it to the input again to produce outputs. This is called the <em>decoder</em>. This is a fine idea, but it suffers from the same issues: vanishing gradients, the limited ability of the final state to capture the overall context, etc.</p>
<p><em>Attempt 4</em>: Why only the final state? Take all intermediate encoder states and store all of them as context vectors to be used by the decoder. This is getting better, but it is still too complex: there are encoder states, decoder states, decoder inputs, and so on. Also, it would be nice to figure out which parts of the input sequence influenced which other parts, so that we get a better understanding of the context. But how do we assign “influence scores” systematically?</p>
<h2 id="self-attention">Self-Attention</h2>
<p>This is the point where papers-blogs-tweets-slides etc start talking about keys/values and attention mechanisms and everything goes a bit haywire. Let’s just ignore all that for now, and instead talk about something called <em>self-attention</em>. The use of the “self-“ prefix will become clear later on.</p>
<p>Here is how it is defined. We have a <em>set</em> (not sequence, order does not matter right now) of input data points ${x_1, x_2, \ldots, x_n}$. They can all be $d$-dimensional vectors. We will produce a set of outputs ${y_1, y_2, \ldots, y_n}$, also $d$-dimensional vectors:</p>
\[y_i = \sum_{j=1}^n W_{ij} x_j\]
<p>i.e., each output is a weighted average of <em>all</em> inputs where the weights $W_{ij}$ are row-normalized such that they sum to 1.</p>
<p>Crucially, the weights here are <em>not</em> the same as the (learned) parameters in a neural network layer. Instead, they are derived from the inputs. For example, one option is that we choose the weights to be dot-products:</p>
\[w_{ij} = x_i^T x_j\]
<p>and apply the softmax function so that we get row-normalization:</p>
\[W_{ij} = \frac{\exp(w_{ij})}{\sum_{k} \exp(w_{ik})}\]
<p>and use these weights to construct the outputs. That’s basically self-attention in a nutshell. In fact, this is all we will need to understand transformers/BERT/GPT etc.</p>
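<p>To make this concrete, here is a minimal numpy sketch of the (parameter-free) operation above; the function name, shapes, and random inputs are ours, purely for illustration:</p>

```python
import numpy as np

def self_attention(X):
    """Parameter-free self-attention: each output y_i is a weighted
    average of all inputs, with weights softmax(x_i . x_j)."""
    w = X @ X.T                            # raw scores w_ij = x_i^T x_j
    w = w - w.max(axis=1, keepdims=True)   # stabilize the exponentials
    W = np.exp(w)
    W = W / W.sum(axis=1, keepdims=True)   # row-wise softmax: rows sum to 1
    return W @ X                           # y_i = sum_j W_ij x_j

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 3))            # n = 4 inputs of dimension d = 3
Y = self_attention(X)

# permutation-equivariance: permuting the inputs permutes the outputs
perm = [2, 0, 3, 1]
assert np.allclose(self_attention(X[perm]), Y[perm])
```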
<p>Notice a few fundamental differences between regular convnets/RNNs and the operation we discussed above:</p>
<ul>
<li>Convnets map single inputs to single outputs. In self-attention, we map sets of inputs to sets of outputs, and by design, the interaction <em>between</em> data points is captured.</li>
<li>RNNs map inputs seen <em>thus far</em> to single outputs. In self-attention, we are not limited to tokens/symbols only seen in the past.</li>
<li>Until now, nothing is learnable here. This is an entirely deterministic operation with no free parameters. You can think of $x_i$ being features/embeddings that were learned “upstream” before being fed into the self-attention layer. We will add a few learnable parameters to the layer itself shortly.</li>
<li>Observe that the operation is <em>permutation-equivariant</em>: if we permute the order of the inputs $x_i$, the outputs $y_i$ stay exactly the same, just permuted accordingly. This can pose challenges in NLP, where permuting words can completely change the meaning of a sentence. We will fix this shortly.</li>
</ul>
<hr />
<p>Before we proceed, why does this operation even make sense?</p>
<p>One interpretation is as follows: suppose we restrict our attention to linear models (so the output has to be a linear combination of the inputs). Say we were performing an NMT task that was translating “The cat sat on the hat” from English to German. One could represent each word in this sentence with an embedding/token.</p>
<p>However, there is a lot of redundancy in natural languages. Certain words (the, on) are common words that are not informative/correlated. Other words (cat, hat) are similar (both nouns). Words may be grouped according to subject-object relationships or subject-predicate relationships. It would be useful if the model automatically “grouped” similar words together. That would allow both better context and better training. The dot product provides a mechanism for automatically figuring out this kind of grouping.</p>
<hr />
<p>OK, now let’s generalize the self-attention operation a little bit.</p>
<p>In the above definition of the self-attention layer, observe that each data point $x_i$ plays three roles:</p>
<ul>
<li>It is compared with <em>all</em> other data points to construct weights for <em>its</em> own output $y_i$ (i.e., in the dot-product example above, the sequence of weights $w_{i 1} = x_i^T x_1, w_{i 2} = x_i^T x_2, \ldots, w_{i n} = x_i^T x_n $).</li>
<li>It is compared with <em>every</em> other data point $x_j$ to construct weights for <em>their</em> output $y_j$ (i.e., the weight $w_{1i} = x_1^T x_i, w_{2i} = x_2^T x_i$, \ldots).</li>
<li>Once all the weights $w_{ij}$ have been constructed, $x_i$ is used to synthesize each actual output $y_1, y_2, \ldots, y_n$.</li>
</ul>
<p>These three roles are called the <em>query</em>, <em>key</em>, and <em>value</em> respectively. To make these roles distinct, let us add a few dummy variables:</p>
\[\begin{aligned}
q_i &= x_i, \\
k_i &= x_i, \\
v_i &= x_i
\end{aligned}\]
<p>and then write out the output as:</p>
\[w_{ij} = q_i^T k_j, \qquad W_{ij} = \text{softmax}(w_{ij}), \qquad y_i = \sum_j W_{ij} v_j .\]
<p>This is a lot of responsibility for each data point. Let’s make the life of each vector easier by adding learnable parameters (linear weights) for each of these three roles. For numerical reasons, we also scale the dot-product (this does not change the intuition at all).</p>
<p>Therefore, we get:</p>
\[\begin{aligned}
q_i &= W_q x_i, \qquad k_i = W_k x_i, \qquad v_i = W_v x_i \\
w_{ij} &= q_i^T k_j / \sqrt{d}, \qquad W_{ij} = \text{softmax}(w_{ij}), \qquad y_i = \sum_j W_{ij} v_j .
\end{aligned}\]
<p>We can think of each of $W_q$, $W_k$, $W_v$ as a learnable <em>projection</em> matrix that defines one of the roles of each data point.</p>
<p>One last complication. We can concatenate <em>different</em> self-attention mechanisms to give it more flexibility. This is the same analogy as choosing multiple filters in a convnet layer. This is called <em>multi-head</em> self-attention. We can index each head with $r = 1, 2, \ldots$, so that we get learnable parameters $W^r_q$, $W^r_k$, $W^r_v$. We get independent outputs for each head and then combine everything using a linear layer to produce the outputs. So we finally get:</p>
\[\begin{aligned}
q^r_i &= W^r_q x_i, \qquad k^r_i = W^r_k x_i, \qquad v^r_i = W^r_v x_i \\
w^r_{ij} &= \langle q^r_i, k^r_j \rangle / \sqrt{d}, \qquad W^r_{ij} = \text{softmax}(w^r_{ij}), \qquad y^r_i = \sum_j W^r_{ij} v^r_j, \\
y_i &= W_y \text{concat}[y^1_i, y^2_i, \ldots].
\end{aligned}\]
<p>and there we have it. The entire (multi-head) self-attention layer. We will denote the above $x$-to-$y$ mapping as follows:</p>
\[[y_1, y_2, \ldots, y_n] = \text{Att}([x_1, x_2, \ldots, x_n])\]
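<p>Putting the pieces together, here is a hedged numpy sketch of the full multi-head layer; the shapes, head count, and random initialization are all illustrative assumptions:</p>

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(w, axis=-1):
    w = w - w.max(axis=axis, keepdims=True)
    e = np.exp(w)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X, Wq, Wk, Wv, Wy):
    """Multi-head self-attention. Wq/Wk/Wv have shape (heads, d, d);
    Wy maps the concatenated head outputs (heads*d) back to d."""
    d = X.shape[1]
    heads = []
    for Wq_r, Wk_r, Wv_r in zip(Wq, Wk, Wv):
        Q, K, V = X @ Wq_r.T, X @ Wk_r.T, X @ Wv_r.T
        W = softmax(Q @ K.T / np.sqrt(d))   # attention weights for head r
        heads.append(W @ V)                 # per-head outputs y^r
    return np.concatenate(heads, axis=1) @ Wy.T  # combine with W_y

n, d, R = 5, 4, 2                       # 5 tokens, dimension 4, 2 heads
X = rng.standard_normal((n, d))
Wq, Wk, Wv = (rng.standard_normal((R, d, d)) for _ in range(3))
Wy = rng.standard_normal((d, R * d))
Y = multi_head_attention(X, Wq, Wk, Wv, Wy)
```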
<hr />
<p>Quick back-story on the nomenclature. The names <em>query, key, value</em> come from key-value data structures. If we issue a query and match it against a database of available keys, the data structure returns the value corresponding to the matched key. The analogy is similar in attention mechanisms, except that the matching is done via dot-products (and the softmax ensures that it is a <em>soft-matching</em>: every key in the database is matched to the query to some extent).</p>
<p>This also relates to the name “self-attention”. Recall our discussion at the beginning of this lecture, when we first introduced encoder/decoder architectures. We had recurrent neural networks taking the input ${x_i}$ and doing complicated things to get encoder context vectors ${h_i}$ and decoder states $s_i$. Then we were computing “influence scores” to figure out which words were relevant for (or “attend to”) which output. One mechanism proposed for doing this was to compute dynamic context scores:</p>
\[c_i = \sum_{j} \alpha_{ij} h_j\]
<p>where $\alpha$ represented the alignment weights. This was called an <em>attention</em> mechanism, and early NMT papers used a shallow feedforward network (called an <em>attention layer</em>) to compute these alignment weights:</p>
\[\alpha_{ij} = W_1 \text{tanh}(W_2 [h_i, s_j])\]
<p>followed by a softmax. Notice the similarities between what we discussed so far and the above formulation. A seminal paper in 2017 called “Attention is all you need” dramatically simplified things and showed that <em>self-attention</em> is enough: one can capture context quite well in NLP tasks by just letting the input tokens attend to themselves.</p>
<h2 id="transformers">Transformers</h2>
<p>We now use the self-attention layer described above to build a new architecture called the <em>Transformer</em>. The Transformer architecture now forms the backbone of the most powerful language models yet built, including BERT and GPT-2/3.</p>
<p>The key component of a Transformer is the <em>Transformer block</em>: self-attention + residual connection, followed by Layer Normalization, followed by a set of standard MLPs, followed by another Layer Normalization, i.e., something like this:</p>
<p><img src="/dl-notes/assets/figures/transformer-block.png" alt="The Transformer block" /></p>
<p>Observe that this architecture is completely feedforward, with no recurrent units. Therefore, gradients do not vanish/explode (by construction), and the depth of the network is no longer dictated by the length of the input (unlike RNNs). Multiple transformer blocks can then be put together to form the transformer architecture.</p>
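<p>As a rough numpy sketch of such a block (with several simplifying assumptions: a single attention head, layer normalization without learned gain/bias, and a ReLU MLP; all names and shapes are illustrative):</p>

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(w):
    w = w - w.max(axis=-1, keepdims=True)
    e = np.exp(w)
    return e / e.sum(axis=-1, keepdims=True)

def layer_norm(X, eps=1e-5):
    # normalize each token vector to zero mean / unit variance
    mu = X.mean(axis=-1, keepdims=True)
    var = X.var(axis=-1, keepdims=True)
    return (X - mu) / np.sqrt(var + eps)

def transformer_block(X, Wq, Wk, Wv, W1, W2):
    d = X.shape[1]
    # (single-head) self-attention
    Q, K, V = X @ Wq.T, X @ Wk.T, X @ Wv.T
    A = softmax(Q @ K.T / np.sqrt(d)) @ V
    X = layer_norm(X + A)                  # residual + layer norm
    # position-wise MLP, applied independently to each token
    M = np.maximum(0.0, X @ W1.T) @ W2.T
    return layer_norm(X + M)               # residual + layer norm

n, d, hidden = 6, 8, 32
X = rng.standard_normal((n, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
W1 = rng.standard_normal((hidden, d))
W2 = rng.standard_normal((d, hidden))
Y = transformer_block(X, Wq, Wk, Wv, W1, W2)
```

<p>Note that nothing in the block depends on the sequence length $n$: the same parameters process sequences of any length.</p>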
<h2 id="transformers-wrapup">Transformers: Wrapup</h2>
<p>One part that we didn’t emphasize too much in the previous lecture is the fact that unlike sequence models (such as RNNs or LSTMs), self-attention layers are <em>permutation-equivariant</em>. This means that sentences of the form:</p>
<blockquote>
<p>“Jack gave water to Jill”</p>
</blockquote>
<p>and</p>
<blockquote>
<p>“Jill gave water to Jack”</p>
</blockquote>
<p>will learn the exact same features. In order to incorporate positional information, some more effort is needed.</p>
<p>One way to achieve this is via <em>positional embedding</em>, or <em>positional encoding</em>. We create, in addition to the word embedding, a vector that encodes the location of the token. This vector can either be learned (just as word embeddings – see below) or just fixed. The latter is typically used in Transformer architectures.</p>
<p>What kind of positional encodings are useful? One-hot encoding the position is possible (although it quickly becomes cumbersome – can you reason why this is the case?). Just adding an integer feature encoding the position is fine too, although we may run into scale/dynamic-range issues, since the value of the feature can become very large for long sequences. A common approach is to use a <em>sinusoidal encoding</em>:</p>
\[p_t = [\sin(\omega_1 t); \sin(\omega_2 t); \ldots; \sin(\omega_d t)]\]
<p>where $\omega_k = \frac{1}{10000^{k/d}}$ gives a different frequency for each coordinate.
Thus the values of the positional encoding vector are always bounded, and because of the periodic nature of the definition this can be applied for any choice of $d$ and $t$.</p>
<p><em>Lecture 6: Recurrent Neural Nets (2021-03-07)</em></p>
<p><em>In which we introduce deep networks for modeling time series data.</em></p>
<h1 id="recurrent-neural-networks">Recurrent Neural Networks</h1>
<p>Thus far, we have mainly discussed deep learning in the context of image processing and computer vision. Let us now turn our attention to a different set of applications that involve <em>text</em>. For example, consider natural language processing (NLP), where the goal might be to:</p>
<ul>
<li>
<p>perform document retrieval: used in database- and web-search;</p>
</li>
<li>
<p>convert speech (audio waveforms) to text: used in Siri, or Google Assistant;</p>
</li>
<li>
<p>achieve language translation: used in Google Translate,</p>
</li>
<li>
<p>map video to text: used in automatic captioning,</p>
</li>
</ul>
<p>among a host of other applications.</p>
<p>Let us think about trying to use the tools we have developed so far to solve the above types of problems. Recall the kind of tools we have been using: thinking of data as real-valued vectors/arrays; representing entries of this array as nodes in a network; recursively applying arithmetic operations (organized in the form of layers); training the parameters of each layer; and so on.</p>
<p>Immediately we run into problems. For example, a document (or any other type of text object) is a string of characters, so how do we encode it as a real-valued vector? The naive approach would be to perform one-hot encoding of each character, just as we encoded categorical labels in classification; but is this the best we can do? Should we instead try to model words, and if yes, then should we one-hot-encode words instead? Defining how to represent text is the first challenge.</p>
<p>Setting this question aside, a second challenge arises in the context of designing neural architectures for processing text data. If we represent the characters in a sentence as a linear vector/array, notice that the contents of the vector exhibit <em>both</em> short-range as well as <em>long-range</em> dependencies. The short-range dependencies encode relationships between characters in a word, or relationships between adjacent words; it is reasonable to expect that a convnet can capture these.</p>
<p>But the long range dependencies are harder to model, and in a lot of languages the start of a sentence may have relevance to the end of a sentence. (Example: “The cow, in its full glory, jumped over the moon” – the subject and object are at two opposite ends of the sentence.) These kinds of <em>non-local</em> interactions are not easily captured by convnets, and therefore we need a new approach.</p>
<h2 id="markov-and-n-gram-models">Markov and n-gram models</h2>
<p>Before delving into neural nets for text processing, let us first discuss some classical methods. We will assume that text can be represented as a sequence of numerical symbols $w_1, w_2, \ldots$ where the symbols represent characters, words, or whatever model we define.</p>
<p>Classically, the tools to solve NLP problems were <em>probabilistic language models</em>. If we consider any sequence $w = (w_1,w_2,\ldots,w_T)$, then the goal would be to estimate the probability distribution:</p>
\[P(w) = P(\{w_1, w_2, \ldots, w_T\})\]
<p>From basic probability, we can factorize this distribution as:</p>
\[P(w) = \prod_{t=1}^T P(\{w_t | w_{t-1}, w_{t-2}, \ldots w_1\})\]
<p>So the likelihood of any given sequence appearing depends on the conditional probability of a word given the appearance of the previous several words.</p>
<p>These probabilities, in principle, can be empirically estimated given a very large corpus of training data. However, in practice such estimates can be noisy (or even intractable, given the combinatorial explosion in the number of possible word combinations). To alleviate this, it is typical to make the (first-order) <em>Markov model</em> assumption, which states that the likelihood of each word only depends on the previous word in the sentence:</p>
\[P(w) = P((w_1,w_2,\ldots,w_T)) = P(w_1) \cdot P(w_2 | w_1) \cdot \ldots P(w_T | w_{T-1}) .\]
<p>Now the conditional probabilities are relatively easier to estimate: if we have $n$ words in the dictionary then we need to estimate roughly $O(n^2)$ probabilities. This is large but not intractable.</p>
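<p>For instance, a first-order (bigram) model can be estimated by simple counting; the toy corpus here is purely illustrative:</p>

```python
from collections import Counter

corpus = "the cow jumped over the moon and the cow slept".split()

# count single words and adjacent word pairs
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def cond_prob(word, prev):
    """Empirical estimate of P(word | prev) under the first-order
    Markov assumption: count(prev, word) / count(prev)."""
    return bigrams[(prev, word)] / unigrams[prev]

p = cond_prob("cow", "the")   # "the cow" occurs 2x among 3 uses of "the"
```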
<p>The first-order Markov assumption unfortunately <em>ignores</em> dependencies across time beyond a single hop. If we were being brave, we could extend it to two, three, or more previous words – these are called <em>bigram</em>, <em>trigram</em>, or <em>n-gram</em> models. But realize that as we condition on more and more previous words, the number of probabilities to estimate quickly becomes intractable.</p>
<h2 id="recurrent-architectures">Recurrent architectures</h2>
<p>An elegant way to resolve the time dependency issue and introduce long(er) range dependencies is via the notion of a <em>latent variable</em> called the <em>state</em>. We will rely on the following approximation:</p>
\[P(\{w_t | w_{t-1}, w_{t-2}, \ldots w_1\}) \approx P(\{w_t | h_{t-1} \})\]
<p>where $h_t$ is a hidden variable that approximately encodes all history up to the current instant. In general, we can assume that the state $h_t$ is a function of the previous state and the current input: $h_t = f(h_{t-1}, x_t)$.</p>
<p>Let us interpret this in the context of neural nets. Thus far, we have strictly used feedforward connections while discussing neural network architectures. Let us now introduce a new type of neural net with <em>self-loops</em> which acts on time series, called the <em>recurrent neural net</em> (RNN). In reality, the self-loops in the hidden neurons are computed with unit-delay, which really means that the state of the hidden unit at a given time step depends both on the input at that time step, and the state at the previous time step. The mathematical definition of the operations are as follows:</p>
\[\begin{aligned}
h^{t} &= \sigma(U x^{t} + W h^{t-1}) \\
y^{t} &= \text{softmax}(V h^{t}).
\end{aligned}\]
<p>So, historical information is stored in the output of the hidden neurons, across different time steps. We can visualize the flow of information across time by “unrolling” the network across time.</p>
<p><img src="/dl-notes/assets/figures/rnn.png" alt="Structure of RNN" /></p>
<p>Observe that the layer weights $U, W, V$ are <em>constant</em> over different time steps; they do not vary. Therefore, the RNN can be viewed as a special case of deep neural nets with weight sharing.</p>
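<p>A sketch of this unrolled forward pass (we use tanh as the nonlinearity $\sigma$, a common choice; all dimensions and the random initialization are illustrative):</p>

```python
import numpy as np

rng = np.random.default_rng(2)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def rnn_forward(xs, U, W, V, h0):
    """Unrolled forward pass of a vanilla RNN. The same U, W, V are
    reused at every time step (weight sharing across time)."""
    h = h0
    ys = []
    for x in xs:
        h = np.tanh(U @ x + W @ h)     # state update h^t
        ys.append(softmax(V @ h))      # output distribution y^t
    return ys

d_in, d_h, d_out, T = 3, 5, 4, 6
U = rng.standard_normal((d_h, d_in))
W = rng.standard_normal((d_h, d_h))
V = rng.standard_normal((d_out, d_h))
xs = [rng.standard_normal(d_in) for _ in range(T)]
ys = rnn_forward(xs, U, W, V, np.zeros(d_h))
```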
<h3 id="loss-functions-and-metrics">Loss functions and metrics</h3>
<p>Let us recall our three-step recipe for machine learning. Having defined a model (or a representation), we now have to define a goodness of fit. For text, there are a couple of options. The training loss is typically chosen as the cross-entropy (recall that we are trying to approximate the probability of an output symbol/token given previous inputs). So if $y^t$ is the predicted output and $g^t$ is the one-hot encoding of the ground truth, then we can write out:</p>
\[l(y^t, g^t) = - \sum_i g_i^t \log y_i^t = - \log y_{I(g)}^t\]
<p>where $I(g)$ is the index corresponding to the true word, and the overall loss is given by averaging over the entire training corpus:</p>
\[L(\theta) = \frac{1}{T} \sum_{t=1}^T l(y^t, g^t) = - \frac{1}{T} \sum_t \log y_{I(g)}^t (\theta) .\]
<p>In practice, this can be very hard to compute for large datasets, so this is broken down into batches (usually sentences). There are additional complications while computing gradients which we discuss below.</p>
<p>Evaluation of a given model is done via a quantity called <em>perplexity</em>, which happens to be related to the loss that we defined above. Perplexity is an information-theoretic concept that measures how well a probability model predicts a given object/symbol. It is defined as the exponential of the cross-entropy of the final model, measured over the predictions made on a validation dataset:</p>
\[\text{Perplexity} = \exp \left( - \frac{1}{T} \sum_t \log y_{I(g)}^t \right)\]
<p>If there is a lot of certainty about what the model is predicting, then the probability distribution is peaked around the right output, the cross-entropy is 0, and the perplexity is 1. If the model is spitting out random words, the probability distribution is likely going to be uniform and the perplexity is going to be equal to the number of tokens in the vocabulary (<em>exercise: why is this?</em>). A good prediction model achieves lower perplexities.</p>
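<p>A quick numerical check of these two extremes (the prediction probabilities here are made up for illustration):</p>

```python
import numpy as np

def perplexity(p):
    """Exp of the average negative log-probability assigned to the
    true token at each step."""
    return float(np.exp(-np.mean(np.log(p))))

# probability assigned to the *correct* token at each step
probs_confident = np.array([0.9, 0.8, 0.95, 0.85])
probs_uniform = np.full(4, 1.0 / 10)     # uniform over a 10-token vocabulary

ppl_confident = perplexity(probs_confident)  # close to 1
ppl_uniform = perplexity(probs_uniform)      # equals the vocabulary size, 10
```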
<h3 id="backpropagation-through-time">Backpropagation through time</h3>
<p>Again, training an RNN can be done using the same tools as we have discussed before: variants of gradient descent via backpropagation. The twist in this case is the feedback loop, which complicates matters. To simplify this, we simply unroll the feedback loop into $T$ time steps, and perform <em>backpropagation through time</em> for this unrolled (deep) network. We need to be careful when we apply the multivariate chain rules while computing the backward pass, but really it is all about careful book-keeping; conceptually the algorithm is the same.</p>
<p>Here is a more concrete description of the backprop updates. Let’s just ignore all matrix-vector multiplies (the calculus becomes complex) and just pretend that everything (input, output, hidden state) is a scalar. There are three sets of weights we need to figure out: the weights mapping input to the state ($u$), the weights mapping the state to itself ($w$), and the weights mapping the state to the output ($v$).</p>
<p>Remember that these weights are constant across time, so even if we unroll the network out to $T$ steps, there is a massive amount of weight-sharing going on. The chain rule gives us:</p>
\[\begin{aligned}
\frac{\partial L}{\partial w} &= \frac{1}{T} \sum_{t=1}^T \frac{\partial l^t}{\partial w} \\
&= \frac{1}{T} \sum_{t=1}^T \frac{\partial l^t}{\partial y^t} \frac{\partial y^t}{\partial h^t} \frac{\partial h^t}{\partial w}
\end{aligned}\]
<p>The first and second factors above are easy to calculate (it’s just the derivative of the cross-entropy and the soft-max). However, the last term is tricky. By definition,</p>
\[h^t = \sigma(u x^{t} + w h^{t-1}) := f(w, h^{t-1})\]
<p>Therefore, the derivative of $h^t$ with respect to $w$ has two components:</p>
\[\frac{\partial h^t}{\partial w} = \frac{\partial f(w, h^{t-1})}{\partial w} + \frac{\partial f(w, h^{t-1})}{\partial h^{t-1}} \cdot \frac{\partial h^{t-1}}{\partial w} .\]
<p>If we define a sequence $a_t := \frac{\partial h^t}{\partial w}$, then <em>each</em> $a_t$ depends on $a_{t-1}$, which in turn depends on $a_{t-2}$, and so on. This induces a recurrence relation for $a_t$. So to accurately compute gradients with respect to $w$, we need to perform backprop all the way to the start of time. In practice this is far too cumbersome and we usually just truncate after a certain number of time steps.</p>
<p>(Observe that this problem did not come up in regular feed-forward networks – the gradients at any layer only depended on the forward pass activations and the backward pass messages at that layer).</p>
<p>Even more troubling is the fact that there is a <em>multiplicative</em> factor linking the terms $a_t$ and $a_{t-1}$. This has the effect of a geometric series: if the factor is greater than one on average across time, then the gradients <em>explode</em>, while if the factor is less than one on average across time, then the gradients <em>vanish</em>.</p>
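<p>To see the geometric-series effect numerically, here is a toy scalar version of the recurrence for $a_t$, where we take $\partial f / \partial w = 1$ and a constant multiplicative factor $\partial f / \partial h$ (both are simplifying assumptions):</p>

```python
def grad_wrt_w(factor, T):
    """Accumulate a_t = 1 + factor * a_{t-1} over T unrolled steps,
    mimicking the scalar backprop-through-time recurrence."""
    a = 0.0
    history = []
    for _ in range(T):
        a = 1.0 + factor * a
        history.append(a)
    return history

stable = grad_wrt_w(0.5, 100)    # factor < 1: converges (geometric sum -> 2)
explode = grad_wrt_w(1.5, 100)   # factor > 1: blows up geometrically
```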
<h3 id="stabilizing-rnns-training-and-extensions">Stabilizing RNN training, and extensions</h3>
<p>Vanishing/exploding gradients are a major headache in deep learning, and are even more pertinent in RNNs (which, by design, require unrolling over several time steps). One way to solve this problem is called <em>gradient clipping</em> where we simply ignore the magnitude of the gradient and normalize it to some norm $\alpha$ that is kept constant:</p>
\[g \leftarrow \alpha \frac{g}{\|g\|} .\]
<p>As you can imagine this is sub-optimal since it may lead to erroneous gradient updates. But at least the numerics are stable.</p>
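<p>The clipping step itself is a one-liner. Note that this sketch renormalizes unconditionally, as in the formula above; many libraries instead rescale only when the norm exceeds the threshold:</p>

```python
import numpy as np

def clip_gradient(g, alpha):
    """Rescale g to have norm alpha, preserving its direction."""
    return alpha * g / np.linalg.norm(g)

g = np.array([3.0, 4.0])           # norm 5
clipped = clip_gradient(g, 1.0)    # norm 1, same direction
```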
<p>The alternative approach is to redesign the architecture itself.
Notice the above example is for a <em>single-layer</em> RNN (which itself, let us be clear, is a deep network if we imagine the RNN to be unrolled over time). We could make it more complex, and define a <em>multi-layer</em> RNN by computing the mapping from input to state to output <em>itself</em> via several layers. The equations are messy to write down, so let’s just draw a picture:</p>
<p><img src="/dl-notes/assets/figures/multi-layer-rnn.png" alt="Multi-layer RNNs" /></p>
<p>Depending on how we define the structure of the intermediate layers, we get various flavors of RNNs:</p>
<ul>
<li>Gated Recurrent Units (GRU) networks</li>
<li>Long Short-Term Memory (LSTM) networks</li>
<li>Bidirectional RNNs</li>
</ul>
<p>and many others. This gives us a lot of flexibility as to how to ensure that the gradient information propagates across several time steps.</p>
<p>LSTMs are the most well-known among the above architectures, but GRUs are a bit simpler to explain formally, so let’s do that (refer to the textbook for LSTMs if you are interested). The idea is similar: we interpret the state as the <em>memory</em> of a recurrent unit, and hence would also like to somehow decide whether certain units are worth memorizing (in which case the state is <em>updated</em>), and others are worth forgetting (in which case the state is <em>reset</em>). Let us define two <em>gating</em> operations, called “reset” ($r$) and “update” ($z$):</p>
\[r^t = \sigma(U_r x^t + W_r h^{t-1}), z^t = \sigma(U_z x^t + W_z h^{t-1})\]
<p>which both look like a regular state-update equation. Now, ordinarily in an RNN, as we discussed in the beginning of this lecture, we would update the state as:</p>
\[h^{t} = \sigma(U x^{t} + W h^{t-1}) .\]
<p>But in the GRU, we define the <em>candidate</em> state as:</p>
\[\tilde{h}^{t} = \sigma\left(U x^{t} + W (h^{t-1} \odot r^t)\right)\]
<p>with the intuition being that if the reset gate is close to 1, then this looks like a regular RNN unit (i.e., we retain memory), while if the reset gate is close to 0, then this looks like a regular perceptron/dense layer (i.e., we forget).</p>
<p>Now, the update gate tells us how much memory retention versus forgetting needs to happen:</p>
\[h^t = h^{t-1} \odot z^t + \tilde{h}^{t} \odot (1 - z^t) .\]
<p>Whenever the update gate is close to one, we retain the old state; whenever it is close to zero, the state is over-written.</p>
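<p>The full GRU update can be sketched as follows. We use tanh for the candidate-state nonlinearity, a common choice (the equations above write $\sigma$ throughout); all shapes and the random initialization are illustrative:</p>

```python
import numpy as np

rng = np.random.default_rng(4)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x, h, params):
    """One GRU update following the equations above. `params` holds
    the six weight matrices (U_r, W_r, U_z, W_z, U, W)."""
    Ur, Wr, Uz, Wz, U, W = params
    r = sigmoid(Ur @ x + Wr @ h)              # reset gate
    z = sigmoid(Uz @ x + Wz @ h)              # update gate
    h_cand = np.tanh(U @ x + W @ (h * r))     # candidate state
    return h * z + h_cand * (1.0 - z)         # retain old vs. overwrite

d_in, d_h = 3, 4
params = tuple(rng.standard_normal(s)
               for s in [(d_h, d_in), (d_h, d_h)] * 3)
h = np.zeros(d_h)
for _ in range(5):                            # run a few time steps
    h = gru_step(rng.standard_normal(d_in), h, params)
```

<p>Because the new state is a convex combination of the old state and a bounded candidate, the state stays bounded no matter how long we run the recurrence.</p>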