In the recent Kaggle Inclusive Images Challenge, I tried out the label embedding technique for training multilabel classifiers outlined in this paper by François Chollet.

The basic idea is to decompose the pointwise mutual information (PMI) matrix computed from the training labels and use the result to guide the training of the neural network model. The steps are as follows:

- Encode the training labels as you would in a multilabel classification setting. Let M (of size n by m, i.e. n training examples with m labels) denote the matrix constructed by vertically stacking the label vectors.
- The PMI matrix (of size m by m) has entries PMI(i, j) = log(p(i, j) / (p(i) · p(j))), where p(i, j) is the fraction of training examples in which labels i and j co-occur and p(i) is the fraction containing label i. It can be easily implemented via vectorized operations, so it is efficient to compute even on large datasets. See more explanation of the PMI here.
- The embedding matrix E is obtained by computing the singular value decomposition of the PMI matrix and scaling the first k columns of the left singular vectors U by the square roots of the corresponding singular values, i.e. E = U_k · √S_k.
- We can then use the embedding matrix to transform the original sparse encoded labels into dense vectors by taking the dot product M · E.
- During training of the deep learning model, instead of using m sigmoid activations with a BCE loss at the end, we can now use k linear activations with a cosine proximity loss.
- At inference time, we take the model's prediction, search among the rows of the embedding matrix, select the most similar vectors, and return their corresponding labels.
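The inference step above can be sketched in NumPy. The embedding matrix and the prediction vector here are made-up values purely for illustration:

```python
import numpy as np

# Hypothetical embedding matrix: one row per label (m = 4 labels, k = 2 dims)
E = np.array([
    [ 1.0,  0.0],   # label 0
    [ 0.0,  1.0],   # label 1
    [-1.0,  0.0],   # label 2
    [ 0.7,  0.7],   # label 3
])

def top_k_labels(pred, E, k=2):
    """Return the k label indices whose embedding rows are most
    cosine-similar to the model's predicted vector."""
    # Normalize rows and the prediction, then rank by dot product
    E_norm = E / np.linalg.norm(E, axis=1, keepdims=True)
    p_norm = pred / np.linalg.norm(pred)
    sims = E_norm.dot(p_norm)
    return np.argsort(-sims)[:k]

pred = np.array([0.9, 0.8])   # a made-up model output
print(top_k_labels(pred, E))  # indices of the labels closest in embedding space
```

In practice the search over thousands of embedding rows would use an approximate nearest neighbor index rather than a brute-force dot product, but the ranking criterion is the same.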

Below is a toy example calculation of the label embedding procedure. The two pictures show the pairwise cosine similarities between the embedded items and a 2D display of the items in the embedding space.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import svds
from sklearn.metrics.pairwise import cosine_similarity
import seaborn as sns
from matplotlib import pyplot as plt

# Toy multilabel matrix: 7 examples (rows) x 5 labels (columns)
M = csr_matrix([
    [0, 1, 0, 1, 0],
    [1, 1, 0, 0, 0],
    [0, 0, 1, 1, 0],
    [1, 0, 0, 0, 1],
    [0, 1, 1, 1, 0],
    [1, 0, 0, 1, 1],
    [0, 0, 0, 0, 1],
])
n, m = M.shape

# Co-occurrence frequencies p(i, j) and marginal frequencies p(i)
PP = M.T.dot(M) / n
TP = PP.diagonal()
PP.setdiag(0)
PP.eliminate_zeros()

# PMI on the nonzero entries: log(p(i, j) / (p(i) * p(j)))
i, j = PP.nonzero()
PP.data = np.log(PP.data / (TP[i] * TP[j]))

# Truncated SVD; embedding matrix E = U_k * sqrt(S_k)
U, S, V = svds(PP, k=2)
E = np.multiply(U, np.sqrt(S))

# Transform the sparse label vectors into dense embedded vectors
Embedded = M.dot(E)

# Pairwise cosine similarity between items in the embedding space
sns.heatmap(cosine_similarity(Embedded), vmin=-1, vmax=1, annot=True)
plt.show()

# 2D display of the items in the embedding space
plt.scatter(x=Embedded[:, 0], y=Embedded[:, 1])
for i in range(n):
    plt.annotate(str(i + 1), (Embedded[i, 0], Embedded[i, 1]))
plt.show()
```

In my own experiments, I found that models trained on label embeddings are a bit more robust to label noise, converge faster, and return higher top-k precision compared with models with logistic outputs.

I believe this is due to the high number of labels in the competition (m ≈ 7000) contrasted with the small batches the model is trained on. Since the label embedding is obtained from matrix factorization, it is similar to PCA in that we keep the crucial information and throw out unnecessary detail/noise, except we are doing so on the labels instead of the inputs.