Using Google's Quickdraw to create an MNIST-style dataset!

For those running deep learning models, MNIST is ubiquitous. This dataset of handwritten digits serves many purposes, from benchmarking numerous algorithms (it's referenced in thousands of papers) to serving as a visualization, where it's arguably even more prevalent than Napoleon's 1812 march. The digits look like this:

https://www.tensorflow.org/images/MNIST.png

There are many reasons for its enduring use, but much of it comes down to the lack of an alternative. In this post, I want to introduce one: the Google QuickDraw dataset. The QuickDraw dataset was captured in 2017 through Google's drawing game, Quick, Draw!. It consists of 50 million drawings across 345 categories. The drawings look like this:

https://github.com/googlecreativelab/quickdraw-dataset/blob/master/preview.jpg

Build your own Quickdraw dataset

I want to walk through how you can use these drawings to create your own MNIST-like dataset. Google has made available 28x28 grayscale bitmap files of each drawing, which can serve as drop-in replacements for the MNIST 28x28 grayscale bitmap images.

As a starting point, Google has graciously made the dataset publicly available, with documentation. All the data sits in Google's Cloud Console, but for the images you want the numpy_bitmaps link.

[Image: the numpy_bitmaps folder in Google's Cloud Console]

You should arrive on a page that lets you download all the images for any category. This is where you have fun! Go ahead and pick your own categories. I started with eyeglasses, face, pencil, and television. As I learned with the face category, drawings with fine details can be harder to learn. But play around and pick fun categories.

[Image: a sampling of Quickdraw drawings]

The next challenge is taking these .npy files and using them. Here is a short Python gist that I used to read the .npy files and combine them into an 80,000-image dataset that I could use in place of MNIST. The images are saved in HDF5, a cross-platform format often used in deep learning.

[Gist: Python code for combining the .npy files]
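Roughly, the gist does the following. This is a sketch rather than the gist verbatim: random arrays stand in for the downloaded .npy files (the real ones load the same way with np.load), and the HDF5 write is only indicated in comments:

```python
import numpy as np

# In practice each category comes from np.load("eyeglasses.npy") etc., an
# (n_drawings, 784) uint8 array of flattened 28x28 bitmaps. Random stand-ins
# keep this sketch self-contained; counts here are illustrative.
categories = ["eyeglasses", "face", "pencil", "television"]
per_class = 20000  # 4 categories x 20,000 = 80,000 images
arrays = [np.random.randint(0, 256, size=(per_class, 784), dtype=np.uint8)
          for _ in categories]

# Stack the images and assign an integer label (0-3) per category.
x = np.concatenate(arrays, axis=0)
y = np.concatenate([np.full(per_class, i, dtype=np.int64)
                    for i, _ in enumerate(categories)])

# Shuffle, then split 80/20 into train and test, MNIST-style.
rng = np.random.default_rng(0)
idx = rng.permutation(len(x))
x, y = x[idx], y[idx]
split = int(0.8 * len(x))
x_train, x_test = x[:split], x[split:]
y_train, y_test = y[:split], y[split:]
# The gist then writes each array to HDF5 with h5py, e.g.:
# with h5py.File("x_train.h5", "w") as f:
#     f.create_dataset("name-of-dataset", data=x_train)
```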

Using Quickdraw instead of MNIST

The next thing is to go have fun with it. I used this dataset in place of MNIST while playing around with autoencoders in Python, following the Keras tutorials. The picture below shows the original images on top and their reconstructions, produced by an autoencoder, on the bottom.

[Image: original drawings (top) and autoencoder reconstructions (bottom)]

I next used this dataset with a variational autoencoder in R. Here is the code snippet to import the data:

library(rhdf5)
x_test <- t(h5read("x_test.h5", "name-of-dataset"))
x_train <- t(h5read("x_train.h5", "name-of-dataset"))
y_test <- h5read("y_test.h5", "name-of-dataset")
y_train <- h5read("y_train.h5", "name-of-dataset")

Here is a visualization of its latent space using my custom Quickdraw dataset. For me, this was a fresh alternative to staring at MNIST yet again. So next time you see MNIST listed . . . go build your own!

[Image: latent space of the variational autoencoder]

Deep Learning with R

For R users, there hasn't been a production-grade solution for deep learning (sorry, MXNet). This post introduces the Keras interface for R and shows how it can be used to perform image classification. The post ends with some code snippets showing that Keras is intuitive and powerful 💪🏽.

Tensorflow

Last January, TensorFlow for R was released, providing access to the TensorFlow API from R. This was significant, as TensorFlow is the most popular library for deep learning. However, for most R users, the TensorFlow for R interface was not very R-like. 🤢 Take a look at this code chunk for training a model:

cross_entropy <- tf$reduce_mean(-tf$reduce_sum(y_ * tf$log(y_conv), reduction_indices=1L))
train_step <- tf$train$AdamOptimizer(1e-4)$minimize(cross_entropy)
correct_prediction <- tf$equal(tf$argmax(y_conv, 1L), tf$argmax(y_, 1L))
accuracy <- tf$reduce_mean(tf$cast(correct_prediction, tf$float32))
sess$run(tf$global_variables_initializer())

for (i in 1:20000) {
  batch <- mnist$train$next_batch(50L)
  if (i %% 100 == 0) {
    train_accuracy <- accuracy$eval(feed_dict = dict(
        x = batch[[1]], y_ = batch[[2]], keep_prob = 1.0))
    cat(sprintf("step %d, training accuracy %g\n", i, train_accuracy))
  }
  train_step$run(feed_dict = dict(
    x = batch[[1]], y_ = batch[[2]], keep_prob = 0.5))
}

test_accuracy <- accuracy$eval(feed_dict = dict(
     x = mnist$test$images, y_ = mnist$test$labels, keep_prob = 1.0))
cat(sprintf("test accuracy %g", test_accuracy))

Yikes!

Unless you are familiar with TensorFlow, it's not readily apparent what is going on. A quick search on GitHub finds fewer than 100 code results using TensorFlow for R. 😔

Keras

All this is going to change with Keras and R! ☺️

For background, Keras is a high-level neural network API designed for experimentation that can run on top of TensorFlow. Keras is what data scientists like to use. 🤓 Keras has grown in popularity and is supported on a wide set of platforms, including TensorFlow, CNTK, Apple's Core ML, and Theano. It is becoming the de facto language for deep learning.

As a simple example, here is the code to train a model in Keras:

model_top %>% fit(
        x = train_x, y = train_y,
        epochs=epochs, 
        batch_size=batch_size,
        validation_data=valid)

Image Classification with Keras

So if you are still with me, let me show you how to build deep learning models using R, Keras, and TensorFlow together. You will find a GitHub repo at https://github.com/rajshah4/image_keras/ that contains the code and data you will need. Included is an R notebook (and Python notebooks) that walks through building an image classifier (telling 🐱 from 🐶) but can easily be generalized to other images. The walkthrough includes advanced methods commonly used in production deep learning work, including:

  • augmenting data
  • using the bottleneck features of a pre-trained network
  • fine-tuning the top layers of a pre-trained network
  • saving weights of models
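To give a concrete sense of the first item: augmenting data just means generating randomly perturbed copies of the training images so the network never sees exactly the same input twice. Here is a minimal, language-agnostic numpy sketch of the idea (in Keras this is handled for you by image_data_generator):

```python
import numpy as np

def augment(image, rng):
    """Return a randomly flipped and shifted copy of an (H, W, C) image."""
    out = image
    if rng.random() < 0.5:            # random horizontal flip
        out = out[:, ::-1, :]
    shift = int(rng.integers(-2, 3))  # small random horizontal shift
    return np.roll(out, shift, axis=1)

rng = np.random.default_rng(42)
img = np.arange(4 * 4 * 3).reshape(4, 4, 3)  # tiny stand-in for a photo
augmented = [augment(img, rng) for _ in range(8)]
```

Each augmented copy keeps the original shape and pixel content; only the arrangement changes, which is why augmentation effectively enlarges the training set for free.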

Code Snippets of Keras

The R interface to Keras truly makes it easy to build deep learning models in R. Here are some code snippets, based on my image classifier example, that illustrate how intuitive and useful Keras for R is:

To load 🖼 from a folder:

train_generator <- flow_images_from_directory(
  train_directory,
  generator = image_data_generator(),
  target_size = c(img_width, img_height),
  color_mode = "rgb",
  class_mode = "binary",
  batch_size = batch_size,
  shuffle = TRUE,
  seed = 123)

To define a simple convolutional neural network:

model <- keras_model_sequential()

model %>%
  layer_conv_2d(filter = 32, kernel_size = c(3,3), input_shape = c(img_width, img_height, 3)) %>%
  layer_activation("relu") %>%
  layer_max_pooling_2d(pool_size = c(2,2)) %>% 
  
  layer_conv_2d(filter = 32, kernel_size = c(3,3)) %>%
  layer_activation("relu") %>%
  layer_max_pooling_2d(pool_size = c(2,2)) %>%
  
  layer_conv_2d(filter = 64, kernel_size = c(3,3)) %>%
  layer_activation("relu") %>%
  layer_max_pooling_2d(pool_size = c(2,2)) %>%
  
  layer_flatten() %>%
  layer_dense(64) %>%
  layer_activation("relu") %>%
  layer_dropout(0.5) %>%
  layer_dense(1) %>%
  layer_activation("sigmoid")

To augment data:

augment <- image_data_generator(rescale = 1/255,
                                shear_range = 0.2,
                                zoom_range = 0.2,
                                horizontal_flip = TRUE)

To load a pretrained network:

model_vgg <- application_vgg16(include_top = FALSE, weights = "imagenet")

To save model weights:

save_model_weights_hdf5(model_ft, 'finetuning_30epochs_vggR.h5', overwrite = TRUE)

The Keras for R interface makes it much easier for R users to build and refine deep learning models. It's no longer necessary to force everyone to use Python to build, refine, and test deep learning models. I really think this will open up deep learning to a wider audience that was a bit apprehensive about using Python.

To start, you can grab my repo, fire up RStudio (or your IDE of choice), and go build a simple classifier using Keras. There is also a wealth of other examples, such as generating text from Nietzsche's writings, deep dreaming, or creating a variational autoencoder.

So for now, give it a spin!

An earlier version of this post was posted at Datascience+.

Building Worlds for Reinforcement Learning

OpenAI's Gym puts reinforcement learning in the hands of the masses. It comes with a wealth of environments, from the classic cart pole to board games, Atari, and now the new Universe, which adds Flash games and PC games like GTA or Portal. This is great news, but for someone starting out, working on some of these games is overkill. You can learn a lot more in a shorter time by playing around with some of the smaller toy environments.

One area I like within the Gym environments is the classic control problems (besides the fun of eating melon and poop). These are great problems for understanding the basics of reinforcement learning because we intuitively understand the rewards, and they run really fast. It's not like Pong, which can take several days to train; instead, you can train these environments within minutes!

If you aren't happy with the current environments, it is possible to modify them and even add more. In this post, I will highlight other environments and share how I modified the Acrobot-v1 environment.

RLPy

To begin, grab the repo for the OpenAI Gym. Inside the repo, navigate to gym/envs/classic_control, where you will see the scripts that define the classic control environments. If you open one of the scripts, you will see a heading at the top that says:

__copyright__ = "Copyright 2013, RLPy http://acl.mit.edu/RLPy"

Ahh! In the spirit of open source, OpenAI stands on the shoulders of another reinforcement learning library, RLPy. You can learn more about it at the RLPy site or take a look at its GitHub. If you browse to rlpy/rlpy/Domains, you can find the original script that OpenAI used. The interesting thing here is that there are a ton more reinforcement learning problems!

[Image: listing of RLPy domains]

You can run these using RLPy, or you can try to hack them into OpenAI Gym.

Modifying OpenAI Environments

I decided to modify the Acrobot environment. Acrobot is a two-link pendulum with only the second joint actuated (there are three possible actions: torque left, torque right, or no torque). The goal is to swing the end to a height of at least one link above the base. The entries on the leaderboard on OpenAI's site meet that criterion, but it's not very impressive. Here is the current highest-scoring entry:

[Animation: current highest-scoring Acrobot-v1 entry]

This is way boring compared to what hardmaru shows in his demo, where the pendulum is capable of balancing for a short time:

[Animation: hardmaru's Acrobot demo]

So I decided to modify the Acrobot environment to make the task a little more interesting (Acrobot gist here). The main change was to the reward system. I added a variable, steps_beyond_done, to keep track of successes once the end was swung high. I also changed the reward structure so the agent is rewarded gradually as it swings higher. Finally, I changed g to 0, which removes gravity's effect.

self.rewardx = (-np.cos(s[0]) - np.cos(s[1] + s[0]))  # swung height is calculated
if self.rewardx < .5:
    reward = -1.
    self.steps_beyond_done = 0
if (self.rewardx > .5 and self.rewardx < .8):
    reward = -0.8
    self.steps_beyond_done = 0  
if self.rewardx > .8:
    reward = -0.6 
if self.rewardx > 1:
    reward = -0.4
    self.steps_beyond_done += 1 
if self.steps_beyond_done > 4:
    reward = -0.2
if self.steps_beyond_done > 8:
    reward = -0.1
if self.steps_beyond_done > 12:
    reward = 0.
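For testing, the same shaping logic can be restated as a standalone function (my own rewrite, keeping the thresholds above; an elif chain also closes the small gaps the sequential ifs leave at exactly 0.5 and 0.8, where reward would otherwise go unset):

```python
import numpy as np

def shaped_reward(s, steps_beyond_done):
    """Shaped Acrobot reward: higher swings, held longer, approach 0."""
    height = -np.cos(s[0]) - np.cos(s[1] + s[0])  # swung height of the tip
    if height < 0.5:
        reward, steps_beyond_done = -1.0, 0
    elif height < 0.8:
        reward, steps_beyond_done = -0.8, 0
    elif height <= 1.0:
        reward = -0.6
    else:
        reward = -0.4
        steps_beyond_done += 1
    # The longer the tip has stayed high, the closer the reward gets to 0.
    if steps_beyond_done > 12:
        reward = 0.0
    elif steps_beyond_done > 8:
        reward = -0.1
    elif steps_beyond_done > 4:
        reward = -0.2
    return reward, steps_beyond_done
```

For example, a hanging pendulum (both links down) earns -1, while a fully swung-up tip earns -0.4 and starts the success counter ticking toward a reward of 0.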

Another important file to be aware of is where the benchmarks are kept for each environment. You can navigate to it at gym/gym/benchmarks/__init__.py. Within this file, you will see the following:

{'env_id': 'Acrobot-v1',
         'trials': 3,
         'max_timesteps': 100000,
         'reward_floor': -500.0,
         'reward_ceiling': 0.0,
        },

I then ran an implementation of Asynchronous Advantage Actor-Critic (A3C) by Arno Moonens. After running for a half hour, you can see the improvement in the algorithm:

[Animation: Acrobot during A3C training]

Now a half hour later:

[Animation: Acrobot after more A3C training]

The result is a pendulum taught to stay up for an extended time! This is much more interesting and just what I was looking for. I hope this inspires others to build new and interesting environments.

Taking an H2O Model to Production

One of the best feelings as a data scientist is when the model you have poured your heart and soul into moves into production. Your model is now all grown up, and you get to watch it mature.

This post shows how to take an H2O model and move it into a production environment. We will develop a simple H2O-based predictive model, convert it into a Plain Old Java Object (POJO), compile it along with other Java packages, and package the compiled class files into a deployable JAR file so it can readily be deployed onto any Java-based application server. The model will accept an input dataset as a CSV file and return the predicted output in CSV format.

H2O is one of my favorite tools for building models because it is well designed from an algorithm perspective, easy to use, and able to scale to larger datasets. However, H2O's documentation, though voluminous, doesn't have clear instructions for moving a POJO model into production. This post discusses that approach in greater detail and provides the code for doing it. (H2O does have a post on doing real-time predictions with Storm.) Special thanks to Socrates Krishnamurthy, who co-wrote this post with me.

Building the H2O Model

As a starting point, let's use our favorite ice cream dataset to create a toy model in H2O:

library(h2o)
library(Ecdat)
data(Icecream)
h2o.init()
train.h2o <- as.h2o(Icecream)
rf <- h2o.randomForest(x = 2:4, y = 1, ntrees = 2, training_frame = train.h2o)

Once you have developed your model in H2O, then the next step is downloading the POJO:

h2o.download_pojo(rf, getjar=TRUE, path="~/Code/h2o-3.9.1.3459/test/")
# you must give a path to download a file

This will save two files: an H2O support jar (h2o-genmodel.jar) and the actual model file (which begins with DRF and ends with .java). Go ahead and open the model file in a text editor if you want to have a look at it.

Compiling the H2O Model

The next step is to compile the model. Assuming the downloaded model file is named DRF_model_R_1470416938086_15.java, type:

> javac -cp h2o-genmodel.jar -J-Xmx2g DRF_model_R_1470416938086_15.java  

This creates a bunch of Java class files.

Scoring the Input Data

The final step is scoring some input data. Before running the model, you need to create the input file (and decide where the output will go). For the input, the default is to read the first row as a header, and the CSV is assumed to be well formed (this approach does not use the H2O parser). Once that is done, run:

> java -cp .:h2o-genmodel.jar hex.genmodel.tools.PredictCsv --header --model DRF_model_R_1470416938086_15 --input input.csv --output output.csv
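As an aside, here is what a well-formed input.csv looks like for the ice cream model built above: a header row naming the predictor columns (income, price, temp, i.e. columns 2:4 of Icecream), followed by the rows to score. A quick Python sketch, with made-up values:

```python
import csv

# Hypothetical rows to score; column names match the predictors (x = 2:4)
# of the Icecream data the random forest was trained on.
rows = [
    {"income": 78, "price": 0.270, "temp": 41},
    {"income": 79, "price": 0.282, "temp": 56},
]

with open("input.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["income", "price", "temp"])
    writer.writeheader()  # PredictCsv reads this first row as the header
    writer.writerows(rows)
```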

If you open the output.csv file, you will notice the predicted values are in hexadecimal, not decimal, format. For example, the output will be something like this:

0x1.a24dd2p-2
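That string is Java's Double.toHexString rendering of the prediction, a standard IEEE 754 hexadecimal float literal, so you can decode it directly to sanity-check it (here in Python):

```python
# Java's Double.toHexString output is a valid argument to Python's
# float.fromhex, so the hexadecimal prediction can be decoded as-is.
value = float.fromhex("0x1.a24dd2p-2")
print(round(value, 4))  # the same prediction in decimal, ~0.4085
```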

Fixing the Hexadecimal Issue

The model is now predicting, but the predictions are in the wrong format. Yikes! Fixing this requires some hacking of the Java code. The rest of this post shows how to modify PredictCsv, which fixes this issue and lets you handle other quirks of PredictCsv (for example, input that comes tab-separated).

If we take a deeper look at the PredictCsv Java file in the H2O GitHub repo, the myDoubleToString method returns a hexadecimal string. The challenge is that this method is static, so it cannot be overridden in a subclass, and it cannot be updated directly to return a regular numeric string, since it ships inside the H2O jar file.

This can be fixed by creating a new Java file (say, NewPredictCsv.java), copying in the entire content of PredictCsv.java from the location above, and saving it locally. You then need to:

  • comment out the first line, so it should be //package hex.genmodel.tools;
  • change the class name (~line 20) to read: public class NewPredictCsv {
  • correct the hexadecimal issue by changing the return statement of the myDoubleToString method to .toString() in lieu of .toHexString() (~line 131).

After creating NewPredictCsv.java, compile it using the following command:

> javac -cp h2o-genmodel.jar -J-Xmx2g NewPredictCsv.java DRF_model_R_1470416938086_15.java

Run the compiled file, providing the input and output CSV files, using the following command (ensure that input.csv is in the folder where you run this):

> java -cp .:h2o-genmodel.jar NewPredictCsv --header --model DRF_model_R_1470416938086_15 --input input.csv --output output.csv

If you open output.csv now, it will be in the proper numeric format:

0.40849998593330383

Deploying the Solution into Production

At this point, we have a workable flow for using our model to score new data. But we can clean up the process to make it a little friendlier for our data engineers. First, create a jar file from the class files created in the previous steps. To do that, issue the following command:

> jar cf my-RF-model.jar *.class

This places all the class files, including our NewPredictCsv, inside the jar, which is helpful when you have a model with, say, 500 trees. Now all we need are three files to run our scorer. Copy the two jar files along with input.csv into whatever folder the program will be executed from. After copying, the folder should contain the following files:

> my-RF-model.jar  
> h2o-genmodel.jar  
> input.csv  

The above input.csv file contains the dataset for which the dependent variable has to be predicted. To compute the predictions, run:

> java -cp .:my-RF-model.jar:h2o-genmodel.jar NewPredictCsv --header --model DRF_model_R_1470416938086_15 --input input.csv --output output.csv

Note:

Replace : with ; in the above commands if you are working on Windows (yuck).

Using xgbfi for revealing feature interactions

Tree-based methods excel at using feature (or variable) interactions. As a tree is built, it picks up on interactions between features. For example, buying ice cream may not be affected by having extra money unless the weather is hot. It is the interaction of these two features that affects whether ice cream will be consumed.

The traditional way to examine interactions is to rely on measures of variable importance. However, these measures don't provide insight into second- or third-order interactions. Identifying these interactions is important for building better models, especially when looking for features to use within linear models.
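A tiny synthetic example makes the point (my own illustration, not part of the xgbfi walkthrough): if the response is driven purely by the product of two features, each feature on its own looks uninformative to a marginal measure, while the interaction term explains everything.

```python
import numpy as np

rng = np.random.default_rng(0)
money = rng.standard_normal(10_000)  # stand-in for "extra money"
hot = rng.standard_normal(10_000)    # stand-in for "hot weather"
cons = money * hot                   # consumption driven only by the interaction

corr_money = np.corrcoef(money, cons)[0, 1]              # ~0: useless alone
corr_hot = np.corrcoef(hot, cons)[0, 1]                  # ~0: useless alone
corr_interaction = np.corrcoef(money * hot, cons)[0, 1]  # ~1: the real driver
```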

In this post, I show how to find higher-order interactions using XGBoost Feature Interactions & Importance (xgbfi). The tool has been available for a while but, outside of Kagglers, has received relatively little attention.

As a starting point, I used the ice cream dataset to illustrate using xgbfi. This walkthrough is in R, but Python instructions are also available at the repo. I will break the code into three sections: the initial build of the model, exporting the files xgbfi needs, and running xgbfi.

Building the model

Let's start by loading the data:

library(xgboost)
library(Ecdat)
data(Icecream)
train.data <- data.matrix(Icecream[,-1])

The next step is running xgboost:

bst <- xgboost(data = train.data, label = Icecream$cons, max.depth = 3,
               eta = 1, nthread = 2, nround = 2, objective = "reg:linear")

To better understand how the model works, let's go ahead and look at the trees:

xgb.plot.tree(feature_names = names((Icecream[,-1])), model = bst)

[Image: xgboost tree plot]

The results here line up with our intuition: just eyeballing the plot, temperature seems to be the biggest variable. This matches the results of a variable importance calculation:

> xgb.importance(colnames(train.data, do.NULL = TRUE, prefix = "col"), model = bst)
   Feature       Gain      Cover Frequency
1:    temp 0.75047187 0.66896552 0.4444444
2:  income 0.18846270 0.27586207 0.4444444
3:   price 0.06106542 0.05517241 0.1111111

All of this should be very familiar to anyone who has used decision trees for modeling. But what are the second-order interactions? Third-order interactions? Can you rank them?

Exporting the tree

The next step involves saving the tree and moving it outside of R so xgbfi can parse it. The code below creates the two files that are needed: xgb.dump and fmap.txt.

featureList <- names(Icecream[,-1])
featureVector <- c() 
for (i in 1:length(featureList)) { 
  featureVector[i] <- paste(i-1, featureList[i], "q", sep="\t") 
}
write.table(featureVector, "fmap.txt", row.names=FALSE, quote = FALSE, col.names = FALSE)
xgb.dump(model = bst, fname = 'xgb.dump', fmap = "fmap.txt", with.stats = TRUE)
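If you'd rather stay out of R for this step, the same fmap.txt can be produced in a few lines of Python (feature names taken from the Icecream example above):

```python
# xgbfi expects fmap.txt with one line per feature of the form
# <index><tab><name><tab><type>, where "q" marks a quantitative feature.
features = ["income", "price", "temp"]  # predictors from the Icecream example

with open("fmap.txt", "w") as f:
    for i, name in enumerate(features):
        f.write(f"{i}\t{name}\tq\n")
```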

Running xgbfi

The first step is to clone the xgbfi repository onto your computer. Then copy the files xgb.dump and fmap.txt into the bin directory.

Go to your terminal or command line and run the XgbFeatureInteractions.exe application. On a Mac, download Mono and then run: mono XgbFeatureInteractions.exe. There is also an XgbFeatureInteractions.exe.config file in the bin directory that contains configuration settings.

After the application runs, it writes out an Excel spreadsheet titled XgbFeatureInteractions.xlsx. This spreadsheet has the good stuff! Open it up and you should see:

[Image: spreadsheet tab for Interaction Depth 0]

This tab of the spreadsheet shows the first-order interactions. These results are similar to what variable importance showed. The good stuff comes when you click on the tabs for Interaction Depth 1 and Interaction Depth 2.

[Image: spreadsheet tab for Interaction Depth 1]

[Image: spreadsheet tab for Interaction Depth 2]

It is now possible to rank the higher-order interactions. With this simple dataset, you can see that the results out of xgbfi match what is happening in the tree. The real value of the tool is on much larger datasets, where it's difficult to examine the trees for interactions.