Running TensorFlow with GPU on a GCP VM through Docker

Recently, I have been working on Google's speech command recognition competition on Kaggle (https://www.kaggle.com/c/tensorflow-speech-recognition-challenge/) and got $500 in Google Cloud Platform credits. I am writing down rough instructions on how I set up my VM to run deep learning experiments on GCP.

  1. First, request a quota increase to use a GPU. I requested one NVIDIA Tesla K80 in zone us-east1.
  2. Create a VM instance in the requested zone, customize it to use the GPU, and configure SSH access. I used Ubuntu 16.04 as the OS.
  3. Log in to the VM and install the CUDA driver.
  4. Install Docker Community Edition.
  5. Install nvidia-docker.
  6. Fire up bash inside the TensorFlow container and you are pretty much good to go. Note that /notebooks is the container's default landing directory, and here you want to mount a directory from your GCP VM into the container so that it can access your training data and write results back to your VM's disk.
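Steps 1 and 2 can also be done from the gcloud CLI instead of the web console. Here is a rough sketch; the instance name, zone, and machine type are placeholders I made up for illustration, so adjust them to whatever your quota request was approved for.

```shell
# Hypothetical example of creating a GPU VM with the gcloud CLI.
# "tf-gpu-vm" and "us-east1-c" are placeholders; GPUs require the
# TERMINATE maintenance policy because they cannot live-migrate.
gcloud compute instances create tf-gpu-vm \
    --zone us-east1-c \
    --machine-type n1-standard-8 \
    --accelerator type=nvidia-tesla-k80,count=1 \
    --image-family ubuntu-1604-lts \
    --image-project ubuntu-os-cloud \
    --boot-disk-size 128GB \
    --boot-disk-type pd-ssd \
    --maintenance-policy TERMINATE
```

You still need the GPU quota from step 1 approved in that zone, or the create call will be rejected.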
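For steps 3 through 5, the commands on Ubuntu 16.04 looked roughly like the following. The exact package versions (CUDA 9.0.176, nvidia-docker 1.0.1) are a snapshot of what was current for me and may well have moved on, so treat this as a starting point rather than a copy-paste script.

```shell
# Step 3: install the CUDA driver from NVIDIA's Ubuntu 16.04 repo.
curl -O https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/cuda-repo-ubuntu1604_9.0.176-1_amd64.deb
sudo dpkg -i cuda-repo-ubuntu1604_9.0.176-1_amd64.deb
sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/7fa2af80.pub
sudo apt-get update && sudo apt-get install -y cuda

# Step 4: install Docker CE from Docker's official apt repository.
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update && sudo apt-get install -y docker-ce

# Step 5: install nvidia-docker, which wires the host's GPU driver
# into containers so the TensorFlow GPU image can see the K80.
wget https://github.com/NVIDIA/nvidia-docker/releases/download/v1.0.1/nvidia-docker_1.0.1-1_amd64.deb
sudo dpkg -i nvidia-docker_1.0.1-1_amd64.deb
```

After the driver install, `nvidia-smi` should list the K80; if it does not, a reboot usually sorts it out.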
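Step 6, with the volume mount mentioned above, looks something like this. The host path `/home/me/speech` is a placeholder for wherever your training data lives on the VM; it appears inside the container under /notebooks.

```shell
# Launch the TensorFlow GPU image with a shared directory and a shell.
# Anything written to /notebooks/speech inside the container lands in
# /home/me/speech on the VM, so results survive the container.
nvidia-docker run -it \
    -p 8888:8888 \
    -v /home/me/speech:/notebooks/speech \
    gcr.io/tensorflow/tensorflow:latest-gpu bash
```

Port 8888 is only needed if you also want to reach the Jupyter notebook server the image ships with.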

That's pretty much it. On a side note, I found it strange that my code actually ran slower on my GCP VM using Docker than on my home PC with just a GTX 1070. I suspect that since my GCP VM's CPU is an older Haswell one (I tried to provision a Skylake one, but the GCP portal kept telling me there were not enough resources to create it...), training is slowed down by the data augmentation in my data generator, so the more powerful Tesla K80 sits idle waiting for batches to come in, and it is totally the fault of my crappy code...
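One quick way to check this theory is to watch GPU utilization during training. This is not something from my original setup, just a standard diagnostic:

```shell
# Refresh nvidia-smi every second while training runs in another shell.
# If "GPU-Util" hovers near 0% with occasional spikes, the CPU-side
# input pipeline (e.g. data augmentation) is the bottleneck, not the GPU.
watch -n 1 nvidia-smi
```

In my case that would confirm the K80 mostly waiting on the Haswell CPU to produce batches.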

It has been a really long time since I last posted anything here, but I am thinking about getting back to posting more often.

In case you are curious about the pricing, here is a screenshot of my current billing page. My VM instance has an 8-core CPU, 30 GB of RAM, a 128 GB SSD disk, and of course an NVIDIA Tesla K80.
