Python Matrix Transpose One-liner, Two Ways

I was asked to write some Python code to transpose a matrix (represented as a list of lists) during a recent interview. I quickly came up with a one-liner using a list comprehension, like this:
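Something along these lines (a sketch, using the example matrix that gets unpacked in the explanation further down):

```python
# The example matrix from the zip() explanation below
M = [[0, 1, 0, 0], [1, 1, 1, 0], [0, 1, 0, 0], [1, 1, 0, 0]]

# Transpose: the i-th inner list collects the i-th element of every row
transposed = [[row[i] for row in M] for i in range(len(M[0]))]
```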

It works, but I was asked to make it shorter, and I wasn't able to think of a way to do so in front of the whiteboard. I have been thinking about this little exercise since then, and finally came up with one that is pretty elegant.
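The shorter version, as a sketch, leans on zip() and argument unpacking:

```python
# Unpack the rows of M as arguments to zip(), which groups them column-wise
transposed = list(map(list, zip(*M)))
```

(In Python 2, zip(*M) alone already returns a list of tuples, which is shorter still.)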

The way it works is that *M is unpacked as the *args of zip(), so the call becomes zip([0,1,0,0], [1,1,1,0], [0,1,0,0], [1,1,0,0]); zip then groups the i-th elements of its arguments together, and those groups are exactly the columns of M.
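You can see the column grouping directly in the REPL:

```python
>>> list(zip([0, 1, 0, 0], [1, 1, 1, 0], [0, 1, 0, 0], [1, 1, 0, 0]))
[(0, 1, 0, 1), (1, 1, 1, 1), (0, 1, 0, 0), (0, 0, 0, 0)]
```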


Build TensorFlow from Source with GPU Support After the Meltdown Kernel Patch

So my Google Cloud compute instance with nvidia-docker, which I used to train deep learning models, suddenly stopped working a couple of days ago, and the reason seems to be related to the recent Ubuntu kernel update intended to mitigate the Meltdown vulnerability. The solution I found was to install a different kernel and rebuild TensorFlow from source. As a reminder for myself, here are the steps (a rough command sketch follows the list):

  1. Get a newer kernel.
  2. Install the dependencies for building TensorFlow, and install CUDA if it is not installed already. Remember to reboot after installing CUDA.
  3. You might need some additional setup at this point.
  4. Check that your GPU is recognized.
  5. Install cuDNN: go to the NVIDIA developer site, download the files, scp them to the machine, and install them.
  6. Get libcupti and Bazel.
  7. Get the TensorFlow source, build it with GPU support, and finally install the resulting package.
  8. You are good to go.
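A rough sketch of the commands, assuming Ubuntu 16.04 and a TensorFlow 1.x era build; the kernel package, .deb file names, and paths below are placeholders, not necessarily the exact ones I used:

```bash
# 1. Install a newer kernel (package name is a placeholder) and reboot into it
sudo apt-get update && sudo apt-get install linux-image-generic
sudo reboot

# 2. Build dependencies; install CUDA from NVIDIA's repo if missing, then reboot
sudo apt-get install build-essential python-pip python-dev

# 4. Check that the driver sees the GPU
nvidia-smi

# 5. Install the cuDNN runtime and dev packages copied over with scp
sudo dpkg -i libcudnn7_*.deb libcudnn7-dev_*.deb

# 6. libcupti, and Bazel (Bazel's apt repository must be configured first)
sudo apt-get install libcupti-dev bazel

# 7. Build TensorFlow with CUDA support and install the resulting wheel
git clone https://github.com/tensorflow/tensorflow
cd tensorflow
./configure   # say yes to CUDA when prompted
bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
pip install /tmp/tensorflow_pkg/tensorflow-*.whl
```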

Running TensorFlow with GPU on a GCP VM Through Docker

Recently, I have been working on the speech command recognition competition on Kaggle (hosted by Google) and got $500 of Google Cloud Platform credits. I am writing down rough instructions on how I set up my VM to do experiments with deep learning models on GCP.

  1. First, request a quota increase to use a GPU. I requested one NVIDIA Tesla K80 under zone us-east1.
  2. Create a VM instance in the requested zone, customize the VM to use the GPU, and configure SSH access. Oh, and I used an Ubuntu 16.04 OS.
  3. Log in to the VM and install the CUDA driver.
  4. Install Docker Community Edition.
  5. Install nvidia-docker.
  6. Fire up bash and you are pretty much good to go (see the sketch after this list). Note that notebooks is the default landing directory, and here you want to mount a directory from your GCP VM into the container so that it can access your training data and write results back to your VM disk.
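A sketch of steps 4 through 6, assuming the nvidia-docker v1 era tooling and the tensorflow/tensorflow:latest-gpu image; the mounted path is a placeholder:

```bash
# 4. Install Docker CE via the convenience script
curl -fsSL https://get.docker.com | sudo sh

# 5. Install nvidia-docker (download the .deb from its GitHub releases first)
sudo dpkg -i nvidia-docker*.deb

# 6. Run the GPU image with a VM directory mounted into the container
nvidia-docker run -it \
  -v /home/me/kaggle:/notebooks/kaggle \
  tensorflow/tensorflow:latest-gpu bash
```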

That's pretty much it. On a side note, I found it strange that my code actually ran slower on my GCP VM using Docker than on my home PC with just a GTX 1070 card. Since my GCP VM's CPU is an older Haswell one (I tried provisioning a Skylake one, but the GCP portal kept telling me there were not enough resources to create it...), I suspect the data augmentation in my data generator makes training slow: the more powerful Tesla K80 sits idle waiting for batches to come in, so it is totally the fault of my crappy code...

It has been a really long time since I last posted anything here, but I am thinking about getting back to posting more often.

In case you are curious about the pricing, here is a screenshot of my current billing page. My VM instance has an 8-core CPU, 30 GB of RAM, a 128 GB SSD disk, and of course an NVIDIA Tesla K80.

Kaggle Digit Recognizer Revisited (Using Convolutional NN with Keras)

Almost a year ago, I worked on the Kaggle version of the handwritten digit recognition problem; the link to that post is here. At that time, my go-to language was R, since the majority of the people around me used R as well. This summer, I switched back to Python as my primary language for almost everything, because it is just so efficient.

So, here is a convolutional neural network using Keras to tackle this problem again: in less than 100 lines of code, you can build a convolutional neural network that obtains 99% accuracy on the Kaggle leaderboard.
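A minimal sketch of such a network (the architecture and hyperparameters here are illustrative, not necessarily the exact ones from my script; train.csv is the competition's training file):

```python
import pandas as pd
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from keras.utils import to_categorical

# Load the Kaggle Digit Recognizer training data
train = pd.read_csv("train.csv")
y = to_categorical(train["label"].values, num_classes=10)
X = train.drop("label", axis=1).values.reshape(-1, 28, 28, 1).astype("float32") / 255.0

# A small convolutional network for 28x28 grayscale digits
model = Sequential([
    Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    Conv2D(32, (3, 3), activation="relu"),
    MaxPooling2D(pool_size=(2, 2)),
    Dropout(0.25),
    Flatten(),
    Dense(128, activation="relu"),
    Dropout(0.5),
    Dense(10, activation="softmax"),
])

model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
model.fit(X, y, batch_size=128, epochs=10, validation_split=0.1)
```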

A quick note about training time: it took close to 9 minutes to train on my laptop with a GeForce GTX 970M chip. You can increase the number of epochs and run it yourself; that should lead to better results.


A Logistic Regression Benchmark for the Red Hat Customer Business Value Prediction Problem

Red Hat put out a competition on Kaggle asking people to build models to predict customer potential. It is a simple binary classification problem, and the metric Red Hat chose to rank models is the AUC score.

I am sort of late in joining this competition, and there are only 7 days to go. I sketched a rather simple logistic regression model, and it ranks somewhere in the middle of the roughly 2,200 teams in total. I am kind of surprised to see that a simple logistic regression can beat half of the participants.

My model uses all the features, and I found that the penalty strength parameter C should take the value 10.

Below is my code:
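(Shown here as a minimal sketch: a logistic regression with C=10 on one-hot encoded features. The file names are those from the competition's data page, and the exact preprocessing is simplified.)

```python
import pandas as pd
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Merge the activity table with the people table on people_id
people = pd.read_csv("people.csv")
actions = pd.read_csv("act_train.csv")
data = actions.merge(people, on="people_id", how="left")

y = data["outcome"].values
features = data.drop(["outcome", "people_id", "activity_id"], axis=1)

# One-hot encode everything into a sparse matrix (dates treated as categories)
vec = DictVectorizer()
X = vec.fit_transform(features.fillna("missing").astype(str).to_dict(orient="records"))

# C=10 was the penalty strength that worked best for me
model = LogisticRegression(C=10)
print(cross_val_score(model, X, y, scoring="roc_auc", cv=3).mean())
```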