Note on using Apex with multiple models and one optimizer.

This post is a quick note on how to use Nvidia's Apex for PyTorch with multiple models that share a single optimizer.

I am currently working on a classifier that uses a pre-trained backbone feature extractor (which needs to be fine-tuned as well). I could encapsulate both into a single PyTorch nn.Module, but I want to make some ad-hoc modifications to the features in between. My training loop looks like this:
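Roughly like the sketch below, where BackboneNet, ClassifierHead, adhoc_feature_tweak, train_loader and optimizer are placeholders for my actual models and setup:

```python
import torch
import torch.nn as nn

# two separate modules: a pre-trained feature extractor and a classifier head
# (BackboneNet / ClassifierHead stand in for the actual model classes)
backbone = BackboneNet(pretrained=True).cuda()
head = ClassifierHead(num_classes).cuda()
criterion = nn.CrossEntropyLoss()

for images, labels in train_loader:
    images, labels = images.cuda(), labels.cuda()

    features = backbone(images)                 # backbone forward pass
    features = adhoc_feature_tweak(features)    # my ad-hoc modification on the features
    outputs = head(features)                    # classifier forward pass

    loss = criterion(outputs, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```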

So the problem is how to set up one optimizer for these two separate nn.Modules, and how to initialize this two-model, one-optimizer combo with Apex.

The solution is quite simple:
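Build the single optimizer over the parameters of both modules, and pass the models to amp.initialize as a list. A sketch (the optimizer choice, learning rate and opt_level here are just examples):

```python
import torch
from apex import amp

# one optimizer over the parameters of both modules
optimizer = torch.optim.Adam(
    list(backbone.parameters()) + list(head.parameters()), lr=1e-4
)

# amp.initialize accepts a list of models alongside the single optimizer
[backbone, head], optimizer = amp.initialize(
    [backbone, head], optimizer, opt_level="O1"
)

# in the training loop, the only change is the scaled backward pass
with amp.scale_loss(loss, optimizer) as scaled_loss:
    scaled_loss.backward()
```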

Wait... 24GB GPU memory is not enough? How to accumulate gradients in PyTorch

I was training the NASNet-A-Large network on 4-channel 512x512 images using PyTorch. Even with my beast of a GPU, an RTX Titan, I could only use a batch size of 8. Training is very volatile with that batch size, and I believe one way to combat that is to accumulate gradients over a few batches and then do a bigger update. Luckily, with PyTorch, this is very simple.

So, let's say below is your training loop:
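A bare-bones version, with model, criterion, optimizer and train_loader assumed to be set up already:

```python
for inputs, labels in train_loader:
    inputs, labels = inputs.cuda(), labels.cuda()

    outputs = model(inputs)
    loss = criterion(outputs, labels)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```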

We would only need a small modification to accumulate gradients:
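Something along these lines; dividing the loss by the number of accumulation steps keeps the accumulated gradient on the same scale as one big batch:

```python
accumulation_steps = 8   # accumulate gradients over 8 small batches

optimizer.zero_grad()
for i, (inputs, labels) in enumerate(train_loader):
    inputs, labels = inputs.cuda(), labels.cuda()

    outputs = model(inputs)
    # scale the loss so the accumulated gradients match one big batch on average
    loss = criterion(outputs, labels) / accumulation_steps
    loss.backward()                      # gradients accumulate in .grad

    if (i + 1) % accumulation_steps == 0:
        optimizer.step()                 # one big update every 8 batches
        optimizer.zero_grad()
```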

The modified training code accumulates gradients over 8 batches and then does one update. Note that the backward pass is still done on each individual small batch; this is crucial.

I originally implemented it as follows:
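Roughly like this, keeping the losses around and calling backward only once per group of batches:

```python
losses = []
for i, (inputs, labels) in enumerate(train_loader):
    inputs, labels = inputs.cuda(), labels.cuda()

    outputs = model(inputs)
    losses.append(criterion(outputs, labels))   # holds on to every batch's graph

    if (i + 1) % accumulation_steps == 0:
        loss = sum(losses) / len(losses)
        optimizer.zero_grad()
        loss.backward()                          # backward over all graphs at once
        optimizer.step()
        losses = []
```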

which is not only uglier but also does not work: summing the losses keeps the computation graphs of all the accumulated batches alive, so the single backward pass effectively covers batch_size * num_batches images and runs the GPU out of memory.

Writing your own loss function/module for PyTorch

Yes, I am switching to PyTorch, and I am so far very happy with it.

Recently, I have been working on a multilabel classification problem where the evaluation metric is the macro F1 score. Ideally, we want the loss function to be aligned with the evaluation metric, instead of using the standard BCE loss.

Initially, I was using the following function:
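It was a soft macro F1 computed on the predicted probabilities; roughly the following (the sigmoid on the logits and the epsilon are illustrative details):

```python
import torch

def f1_loss(y_pred, y_true, eps=1e-8):
    """Soft macro F1 loss: 1 - mean per-label soft F1, computed on probabilities."""
    y_pred = torch.sigmoid(y_pred)               # logits -> probabilities
    y_true = y_true.float()

    tp = (y_pred * y_true).sum(dim=0)
    fp = (y_pred * (1 - y_true)).sum(dim=0)
    fn = ((1 - y_pred) * y_true).sum(dim=0)

    f1 = 2 * tp / (2 * tp + fp + fn + eps)       # per-label soft F1
    return 1 - f1.mean()                         # macro average, turned into a loss
```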

It is perfectly usable as a loss function in your typical training code:
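For example (model, optimizer and train_loader assumed to be set up as usual):

```python
for inputs, labels in train_loader:
    inputs, labels = inputs.cuda(), labels.cuda()

    outputs = model(inputs)
    loss = f1_loss(outputs, labels)   # drop-in replacement for the BCE criterion

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```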

Better yet, we can make it a PyTorch module, so that the usage looks like your typical PyTorch loss:
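A minimal sketch of the module version:

```python
import torch.nn as nn

class F1Loss(nn.Module):
    def __init__(self, eps=1e-8):
        super().__init__()
        self.eps = eps

    def forward(self, y_pred, y_true):
        # reuse the plain function inside the module's forward pass
        return f1_loss(y_pred, y_true, eps=self.eps)

criterion = F1Loss().cuda()           # the module can be moved to the GPU explicitly
loss = criterion(outputs, labels)
```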

That simply puts the original f1_loss function into the forward pass of a simple module. As a result, I can explicitly move the module to the GPU.


Simple word2vec and doc2vec from PMI matrix decomposition

Earlier, I made a post about decomposing the PMI matrix as a label embedding. It turned the sparse label representation into a dense one in a multilabel classification problem, i.e. it operated on the right-hand side of our AX = Y problem. The same technique works on the left-hand side too. Specifically, for text classification problems, we can use PMI matrix decomposition to get word vectors as well as a simple doc2vec transformation.

Here is a short code snippet demonstrating how it works:
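The sketch below uses scikit-learn's 20 newsgroups data as a stand-in corpus; the dataset, the min_df cutoff (which keeps the dense PMI matrix manageable), and the way never co-occurring word pairs are handled are all illustrative choices:

```python
import numpy as np
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.utils.extmath import randomized_svd

# 1. simplest binary doc-term matrix M
news = fetch_20newsgroups(subset="train", remove=("headers", "footers", "quotes"))
vectorizer = CountVectorizer(binary=True, min_df=20, stop_words="english")
M = vectorizer.fit_transform(news.data)                 # sparse (n_docs x n_words)
vocab = np.array(vectorizer.get_feature_names_out())

# 2. PMI matrix P with P_{i,j} = log( P(i,j) / (P(i) * P(j)) )
n_docs = M.shape[0]
p_ij = (M.T @ M).toarray() / n_docs                     # word co-occurrence probabilities
p_i = np.asarray(M.mean(axis=0)).ravel()                # marginal word probabilities
with np.errstate(divide="ignore"):
    P = np.log(p_ij / np.outer(p_i, p_i))
P[np.isinf(P)] = 0.0                                    # pairs that never co-occur

# 3. decompose P into an embedding matrix E of 300-dim word vectors
U, S, _ = randomized_svd(P, n_components=300, random_state=0)
E = U * np.sqrt(S)

# 4. top-20 words most similar to "boston" by cosine similarity
En = E / (np.linalg.norm(E, axis=1, keepdims=True) + 1e-12)
q = En[np.where(vocab == "boston")[0][0]]
print(vocab[np.argsort(-(En @ q))[1:21]])               # skip "boston" itself

# 5. simple doc2vec: 300-dim document vectors fed into a logistic regression
D = M @ E
X_tr, X_te, y_tr, y_te = train_test_split(D, news.target, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```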

We first construct the simplest binary doc-term matrix M, use it to build the PMI matrix P, and then decompose that into an embedding matrix E consisting of word vectors of dimension 300.

Given the word boston, we can find similar words by sorting the vocabulary by cosine similarity between word vectors. The top 20 words most similar to boston are cities, places, and sports-related words.

We can also transform the original news articles into 300-dimensional vectors by simply taking the dot product between M and E, and feed the resulting matrix into downstream modeling tasks. Here, for example, a logistic regression on top of these vectors achieves 0.82 on both recall and precision (and hence 0.82 on F1 as well).

We can also visualize the word vectors via t-SNE.

It shows that the more frequently two words co-occur, the closer they are in the embedding space.

Label Embedding in Multi-label classification

In the recent Kaggle competition, the Inclusive Images Challenge, I tried out the label embedding technique for training multilabel classifiers, outlined in this paper by François Chollet.

The basic idea is to decompose the pointwise mutual information (PMI) matrix computed from the training labels and use it to guide the training of the neural network model. The steps are as follows:

  1. Encode the training labels as you would in a multilabel classification setting. Let M (of size n by m, i.e. n training examples with m labels) denote the matrix constructed by vertically stacking the label vectors.
  2. The PMI matrix (of size m by m) has entries PMI_{i,j} = \log\frac{P(i,j)}{P(i)P(j)}. It can be implemented with vectorized operations, so it is efficient to compute even on large datasets. See more explanation of PMI here.
  3. The embedding matrix E is obtained by computing the singular value decomposition of the PMI matrix, PMI = U \Sigma V^T, and taking the dot product between U and the first k columns of \sqrt{\Sigma}.
  4. We can then use the embedding matrix to transform the original sparsely encoded labels into dense vectors.
  5. During training of the deep learning model, instead of ending with m sigmoid activations and a BCE loss, we can now use k linear activations with a cosine proximity loss.
  6. At inference time, we take the model prediction, search over the rows of the embedding matrix E, select the most similar vectors, and map them back to their corresponding labels.

Below is a toy example calculation of the label embedding procedure. The two pictures show the pairwise cosine similarity between item labels in the embedding space and a 2D display of the items in the embedding space.
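A minimal sketch of that calculation, on a made-up set of labels (the label names, the counts, and the handling of never co-occurring pairs are all illustrative):

```python
import numpy as np

# toy multilabel data: n = 6 examples, m = 5 labels (multi-hot rows)
labels = ["cat", "dog", "pet", "car", "truck"]
M = np.array([
    [1, 0, 1, 0, 0],   # cat, pet
    [0, 1, 1, 0, 0],   # dog, pet
    [1, 1, 1, 0, 0],   # cat, dog, pet
    [0, 0, 0, 1, 0],   # car
    [0, 0, 0, 1, 1],   # car, truck
    [0, 0, 0, 0, 1],   # truck
], dtype=float)
n, m = M.shape

# PMI matrix: PMI_{i,j} = log( P(i,j) / (P(i) * P(j)) )
p_ij = (M.T @ M) / n                     # label co-occurrence probabilities
p_i = M.mean(axis=0)                     # marginal label probabilities
with np.errstate(divide="ignore"):
    pmi = np.log(p_ij / np.outer(p_i, p_i))
pmi[np.isinf(pmi)] = 0.0                 # never co-occurring pairs set to 0 (a simplification)

# embedding matrix E: SVD of PMI, keep k dims, E = U[:, :k] * sqrt(Sigma[:k])
k = 2
U, S, Vt = np.linalg.svd(pmi)
E = U[:, :k] * np.sqrt(S[:k])            # one k-dim vector per label

# dense training targets: project multi-hot labels into the embedding space and normalize
targets = M @ E
targets /= np.linalg.norm(targets, axis=1, keepdims=True) + 1e-12

# pairwise cosine similarity between label embeddings, and the 2-d label coordinates
En = E / (np.linalg.norm(E, axis=1, keepdims=True) + 1e-12)
print(np.round(En @ En.T, 2))
print(dict(zip(labels, np.round(E, 2).tolist())))
```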

In my own experiments, I find models trained on label embeddings to be a bit more robust to label noise; they converge faster and give higher top-k precision compared with models with logistic outputs.

I believe this is due to the high number of labels in the competition (m ≈ 7000) contrasted with the small batches the model is trained on. Since the label embedding is obtained from a matrix factorization, it is similar to PCA in that we keep the crucial information and throw out some unnecessary detail/noise, except we are doing so on the labels rather than the inputs.