
Random states in multiprocessing: a lesson learned after wasting a week of GPU time

I was recently training a CNN on the Open Images dataset using Keras. I was using a custom batch generator together with the .fit_generator() method in Keras, and observed super slow training progress.

My code looks something like this:
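
The original snippet did not survive the repost; below is a minimal reconstruction of the pattern, with hypothetical names (batch_generator, X_train, y_train), assuming model is a compiled Keras model:

    import numpy as np

    def batch_generator(X, y, batch_size=32):
        # Sample a random batch of indices forever.
        while True:
            idx = np.random.randint(0, len(X), batch_size)
            yield X[idx], y[idx]

    model.fit_generator(
        batch_generator(X_train, y_train),
        steps_per_epoch=1000,
        epochs=10,
        workers=8,
        use_multiprocessing=True,
    )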

I wasted a lot of time debugging the model structure, loss, and optimizer, but the problem was much simpler. I eventually found it by printing out the indices being sampled.

The problem with the code is that when the generator gets duplicated across multiple workers, the random state gets copied too, so all 8 workers share the same random state. As a result, during training, the model sees the exact same batch 8 times before seeing a new one. The fix is easy: just insert an np.random.seed() call before sampling the indices.
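
In the sketch above, that means reseeding inside the generator so each worker process draws different indices:

    def batch_generator(X, y, batch_size=32):
        np.random.seed()  # reseed from OS entropy inside each worker process
        while True:
            idx = np.random.randint(0, len(X), batch_size)
            yield X[idx], y[idx]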


Repost of my KDD Cup 2018 summary

I finished in 30th place at this year's KDD Cup. I still remember back in 2015, when I was very rusty with coding and tried to attempt that year's KDD Cup on my potato laptop, a Lenovo U310. I did not know what I was doing; all I did was throw data at XGBoost, and my performance back then was a joke. I have seen myself become more and more capable of coming up with ideas and implementing them over the years since. Below is a repost of my summary for KDD Cup 2018.

Hooray~! Fellow KDD competitors, I entered this competition on day 1 and very quickly established a reasonable baseline. Due to some things on the personal side, I practically stopped improving my solution at the beginning of May. Even though my methods did not work all that well compared to many top players in phase 2, I think my solution may be worth sharing because of its relative simplicity. I did not touch the meo data at all, and one of my models just calculates medians.

Alternative data source

For new hourly air quality data, as shared in the forum, I am using this for London and this for Beijing instead of the API from the organizer.

Handling missing data

I filled missing values in air quality data with 3 steps:

  1. Fill missing values for a station-measure combo based on the values from other stations.
    To be specific: I trained 131 LightGBM regressors for this. If the PM2.5 reading at 2:00 on May 20th is missing for Beijing's aotizhongxin station, the regressor aotizhongxin_aq-PM2.5 predicts this value based on the known PM2.5 readings at 2:00 on May 20th from the 34 other stations in Beijing.
    I used thresholds to decide whether to do this imputation or not: if more than a threshold number of the other stations are also missing a reading at that time, this step is skipped.
  2. Fill the remaining missing values by looking forward and backward to find known values.
  3. Finally, replace all remaining missing values with the overall mean value (a pandas sketch of steps 2 and 3 follows below).
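
A minimal pandas sketch of steps 2 and 3 (step 1, the LightGBM imputation, is omitted), assuming a hypothetical frame aq with one column per station-measure combo and an hourly index:

    import numpy as np
    import pandas as pd

    # Hypothetical hourly readings, one column per station-measure combo.
    aq = pd.DataFrame(
        {"aotizhongxin_aq-PM2.5": [55.0, np.nan, np.nan, 60.0],
         "dongsi_aq-PM2.5": [50.0, 52.0, np.nan, np.nan]},
        index=pd.date_range("2018-05-20 00:00", periods=4, freq="H"))

    # Step 2: look forward, then backward, for known values.
    aq = aq.fillna(method="ffill").fillna(method="bfill")

    # Step 3: replace anything still missing with the overall mean.
    aq = aq.fillna(aq.mean())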

Approaches

1. median of medians

This is a simple model that worked reasonably well in this Kaggle competition.

To predict the PM2.5 reading at 2:00 on May 20th for aotizhongxin, look back over a window of past days and calculate the median of the 2:00 PM2.5 readings from aotizhongxin in that window. Do this median calculation for a bunch of different window sizes to obtain a bunch of medians. The median of those medians is used as the prediction.

Intuitively, this is just an aggregated "yesterday once more". The more large windows there are in the collection, the better the model memorizes the long-term trend; the more small windows you add, the quicker it responds to recent events.
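
A sketch of the idea; the window sizes here are illustrative:

    import numpy as np

    def median_of_medians(readings, window_sizes=(3, 7, 14, 28)):
        # readings: same-hour values for one station-measure combo,
        # ordered oldest to newest (e.g. all the 2:00 PM2.5 readings).
        medians = [np.nanmedian(readings[-w:]) for w in window_sizes]
        return np.nanmedian(medians)

    history = np.array([60, 55, 70, 80, 75, 65, 62, 58])
    print(median_of_medians(history, window_sizes=(2, 4, 8)))  # 63.5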

2. Facebook's Prophet

This is practically even simpler than the median of medians. I treated the number of days of history I throw at it and the model parameters changepoint_prior_scale and n_changepoints as the main hyperparameters and tweaked them. I did a bit of work to parallelize the fitting process across all the station-measure combos to speed it up; other than that, it is pretty much out of the box.

I tried using a holiday indicator and tweaking other parameters of the model, but they all degraded the performance.
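
A sketch of fitting one station-measure series with the 2018-era fbprophet package; the parameter values here are illustrative, not the ones I submitted with:

    from fbprophet import Prophet

    def forecast_one(df):
        # df: one series with columns ['ds', 'y'] (hourly timestamps, readings).
        m = Prophet(changepoint_prior_scale=0.05, n_changepoints=25)
        m.fit(df)
        future = m.make_future_dataframe(periods=48, freq='H')
        return m.predict(future)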

3. neural network

My neural network is a simple feed-forward network with a single shortcut connection; I shamelessly copied the structure from a senior colleague's Kaggle solution and tweaked the hidden layer sizes.
The model looks like this:
[Figure: diagram of the feed-forward network with the shortcut connection]

The input to the neural network is the concatenation of (1) raw history readings, (2) median summary values over different window sizes, and (3) indicator variables for the city and the type of measure.

The output layer in the network is a dense layer with 48 units, each corresponding to an hourly reading in the next 48 hours.

The model is trained directly with SMAPE as the loss function, using the Adam optimizer. I tried standardizing the inputs to zero mean and unit variance, but that causes a problem when combined with the SMAPE loss, so I tried switching to a clipped version of the MAE loss, which produced results similar to raw inputs with the SMAPE loss.
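
A sketch of the setup in Keras; the layer sizes and the placement of the shortcut are my guesses:

    from keras import layers, models
    import keras.backend as K

    def smape(y_true, y_pred):
        # Differentiable SMAPE; K.epsilon() keeps the denominator positive.
        denom = (K.abs(y_true) + K.abs(y_pred)) / 2.0 + K.epsilon()
        return K.mean(K.abs(y_pred - y_true) / denom, axis=-1)

    input_dim = 256  # hypothetical: history + medians + indicator variables
    inp = layers.Input(shape=(input_dim,))
    h = layers.Dense(512, activation='relu')(inp)
    h = layers.Dense(512, activation='relu')(h)
    h = layers.concatenate([h, inp])  # the single shortcut connection
    out = layers.Dense(48)(h)         # one unit per hour of the next 48

    model = models.Model(inputs=inp, outputs=out)
    model.compile(optimizer='adam', loss=smape)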

The model can be trained on a CPU-only machine in a very short time.

I tried out some CNN and RNN models but couldn't get them to work better than this simple model, so I abandoned them.

Training and validation setup

This is pretty tricky, and I am still not quite sure whether I have done it correctly.

For approaches 1 and 2

I generated predictions for a few historical months and calculated daily SMAPE scores locally. I then sampled 25 days to calculate a mean SMAPE score, repeated this sampling a large number of times, and took the mean of those scores as my local validation score. I used this score to select parameters.
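
In pseudo-numpy, the scoring loop looks something like this (daily_smape is hypothetical):

    import numpy as np

    # daily_smape: one local SMAPE score per day over a few historical months.
    daily_smape = np.random.uniform(0.3, 0.6, size=90)  # placeholder values

    scores = [np.mean(np.random.choice(daily_smape, size=25, replace=False))
              for _ in range(10000)]
    local_validation_score = np.mean(scores)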

For the neural network

I split the history data into (X, y) pairs based on a splitting day, then moved the splitting day backward by 1 day to generate another (X, y) pair. Doing this 60 times and vertically concatenating the pairs forms my training data.

I used a grouped CV split on the concatenated dataset for cross-validation, so that measures from one station don't end up in both the training and validation sets. During training, the batch size is specified so that all the data in a batch is based on the same splitting day. I did both of these to try to prevent information leakage.
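
One way to get such a grouped split with scikit-learn, which I believe matches the intent (the shapes and names here are hypothetical):

    import numpy as np
    from sklearn.model_selection import GroupKFold

    X = np.random.rand(2100, 256)                 # stacked (X, y) pairs
    y = np.random.rand(2100, 48)
    station_ids = np.random.randint(0, 35, 2100)  # group label per row

    for train_idx, valid_idx in GroupKFold(n_splits=5).split(X, groups=station_ids):
        X_tr, y_tr = X[train_idx], y[train_idx]
        X_va, y_va = X[valid_idx], y[valid_idx]
        # fit and evaluate the network on this fold...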

I got average SMAPE scores of 0.40-0.44 for Beijing and 0.30-0.34 for London in my local validation setting, which I think is pretty well aligned with how the score averaged out through May.

Closing

Without utilizing any other weather information or integrating any sort of forecast, all my models failed miserably on events like the sudden peak on May 27th in Beijing.

Pass more than one argument to Pool.map

I have recently been tackling this year's KDD Cup competition, and I was trying to speed up my code that fits Prophet models on multiple time series from a pandas DataFrame using Python's multiprocessing module. Below is an example of how the map function of the Pool class from the multiprocessing module works:
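
The example itself is just a few lines:

    from multiprocessing import Pool

    def square(x):
        return x * x

    if __name__ == '__main__':
        p = Pool(8)
        results = p.map(square, range(16))
        print(results)  # [0, 1, 4, 9, ..., 225]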

Basically, Pool(8) creates a process pool with 8 worker processes. p.map(square, range(16)) chops the iterable into 8 pieces and assigns them to the 8 processes. Each process in the pool applies the function square to every element (in this case 2 in total) of the smaller iterable assigned to it. The results are collected into the results object.

One possible way for me to use this mechanism to fit my models is to prepare the data as smaller DataFrames with columns ['ds', 'y'], collect these smaller DataFrames into a list, and map a Prophet wrapper function over this list using a pool of processes. The wrapper function would look something like this:
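
Roughly like the following; the function name and the 48-hour horizon are my reconstruction:

    from fbprophet import Prophet

    def prophet_wrapper(df):
        # df: one small DataFrame with columns ['ds', 'y'].
        # The future frame (and the changepoint setup) is recomputed
        # here for every series, even though it is identical across them.
        m = Prophet()
        m.fit(df)
        future = m.make_future_dataframe(periods=48, freq='H')
        return m.predict(future)

    # forecasts = Pool(8).map(prophet_wrapper, list_of_small_dfs)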

This would work, but it would duplicate the computation of the future time stamps and the number of changepoints, which are the same for all my time series. Ideally, I want to calculate them once and share the values. One obvious way to do so is to change my run list from a collection of DataFrames to a list of tuples like (df, n_points, holidays, future). Another method, and the one I ended up using, is functools.partial:
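
A sketch of the partial approach; the shared values here are illustrative, and list_of_small_dfs is the run list prepared earlier:

    from functools import partial
    from multiprocessing import Pool

    import pandas as pd
    from fbprophet import Prophet

    def fit_one(df, n_points, holidays, future):
        m = Prophet(n_changepoints=n_points, holidays=holidays)
        m.fit(df)
        return m.predict(future)

    if __name__ == '__main__':
        # Compute the shared pieces once...
        future = pd.DataFrame(
            {'ds': pd.date_range('2018-05-01', periods=48, freq='H')})
        fit = partial(fit_one, n_points=25, holidays=None, future=future)
        # ...then map only the per-series frames.
        forecasts = Pool(8).map(fit, list_of_small_dfs)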


Python matrix transpose one-liner, two ways

I was asked to write some Python code to transpose a matrix (represented as a list of lists) during a recent interview. I quickly came up with a one-liner using a list comprehension like this:
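
Using the example matrix from later in the post, it was something like:

    M = [[0, 1, 0, 0],
         [1, 1, 1, 0],
         [0, 1, 0, 0],
         [1, 1, 0, 0]]

    transposed = [[row[i] for row in M] for i in range(len(M[0]))]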

It works, but I was asked to make it shorter, and I wasn't able to think of a way to do so in front of the whiteboard. I have been thinking about this little exercise since then, and finally came up with a version that is pretty elegant:
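
The rows come back as tuples; wrap with map(list, ...) if lists are needed:

    transposed = list(zip(*M))
    # [(0, 1, 0, 1), (1, 1, 1, 1), (0, 1, 0, 0), (0, 0, 0, 0)]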

The way it works is that *M is taken as the *args of zip() and thus unpacked into zip([0, 1, 0, 0], [1, 1, 1, 0], [0, 1, 0, 0], [1, 1, 0, 0]).


Build TensorFlow from source with GPU support after the Meltdown kernel patch

So my Google Cloud compute instance with nvidia-docker, which I used to train deep learning models, suddenly stopped working a couple of days ago, and the reason seems to be related to the recent Ubuntu kernel update intended to mitigate the Meltdown vulnerability. The solution I found is to install a different kernel and build TensorFlow from source. As a reminder for myself, here are the steps:

  1. Get a newer kernel:
  2. Install the dependencies for building TensorFlow, and install CUDA if it is not installed already. Remember to reboot after installing CUDA.
  3. You might need to do the following
  4. Check your GPU with
  5. Install cuDNN: go to https://developer.nvidia.com/cudnn, download the files, scp them to the machine, and install:
  6. Get libcupti and Bazel
  7. Get the TensorFlow source, build it with GPU support, and finally install it
  8. You are good to go.