Exploring the Google QuickDraw Dataset with SketchRNN (Part 1)

This is the first part in what will hopefully be a series of notes [1] on my exploration of the recently released Google QuickDraw dataset, using the concurrently released SketchRNN model.

The QuickDraw dataset is curated from the millions of drawings contributed by over 15 million people around the world who participated in the "Quick, Draw!" A.I. Experiment, in which they were given the challenge of drawing objects belonging to a particular class (such as "cat") in under 20 seconds.

SketchRNN is a very impressive generative model that was trained to produce vector drawings using this dataset. It was of particular interest to me because it cleverly combines many of the latest tools and techniques recently developed in machine learning, such as Variational Autoencoders, HyperLSTMs (a HyperNetwork for LSTM), Autoregressive models, Layer Normalization, Recurrent Dropout, the Adam optimizer, and others.

This notebook is based on the notebook included with the code release. I've made significant stylistic changes, as well as some minor changes to ensure Python 3 compatibility, since Magenta currently only supports Python 2.


  1. These notes will likely be quite hasty and unpolished, as they are written more for myself than for anyone else. While I've always tried to avoid cluttering up my blog with notebooks on in-progress work, I've decided to make a habit of posting them every time I complete a session of work on something for the day. The aim is to build up a series of short and succinct notebooks, and to avoid further accumulating a collection of long and disorganized notebooks that I never want to touch again, because the sheer effort of going through them, cleaning up the experimental code, and articulating it effectively in a well-thought-out blog article requires too much willpower. Doing it this way will hopefully make it much easier for me to create and share useful content quickly.

Environment Set-up

Some preamble for plotting (I really ought to put these settings in a config file at some point...), and for importing dependencies. I've made the imports explicit so that we know exactly which methods/objects are imported, and can more easily find the module from which each was imported.

In [1]:
%matplotlib inline
%config InlineBackend.figure_format = 'svg'
%load_ext autoreload
%autoreload 2
In [2]:
import matplotlib.pyplot as plt
import matplotlib.patches as patches

import numpy as np
import tensorflow as tf

from matplotlib.path import Path
In [3]:
from magenta.models.sketch_rnn.sketch_rnn_train import \
    (load_env,
     load_checkpoint,
     reset_graph,
     download_pretrained_models,
     PRETRAINED_MODELS_URL)
from magenta.models.sketch_rnn.model import Model, sample
from magenta.models.sketch_rnn.utils import (get_bounds, 
                                             to_big_strokes,
                                             to_normal_strokes)
In [4]:
# set numpy output to something sensible
np.set_printoptions(precision=8, 
                    edgeitems=6, 
                    linewidth=200, 
                    suppress=True)
In [5]:
tf.logging.info("TensorFlow Version: {}".format(tf.__version__))
INFO:tensorflow:TensorFlow Version: 1.1.0

Getting the Pre-Trained Models and Data

In [6]:
DATA_DIR = ('http://github.com/hardmaru/sketch-rnn-datasets/'
            'raw/master/aaron_sheep/')
MODELS_ROOT_DIR = '/tmp/sketch_rnn/models'
In [7]:
DATA_DIR
Out[7]:
'http://github.com/hardmaru/sketch-rnn-datasets/raw/master/aaron_sheep/'
In [8]:
PRETRAINED_MODELS_URL
Out[8]:
'http://download.magenta.tensorflow.org/models/sketch_rnn.zip'
In [9]:
download_pretrained_models(
    models_root_dir=MODELS_ROOT_DIR,
    pretrained_models_url=PRETRAINED_MODELS_URL)
INFO:tensorflow:/tmp/sketch_rnn/models/sketch_rnn.zip already exists, using cached copy
INFO:tensorflow:Unzipping /tmp/sketch_rnn/models/sketch_rnn.zip...
INFO:tensorflow:Unzipping complete.

The directory tree looks like this. There are a few pretrained models for us to explore.

In [10]:
!tree -L 3 /tmp/sketch_rnn/
/tmp/sketch_rnn/
└── models
    ├── aaron_sheep
    │   ├── layer_norm
    │   ├── layer_norm_uncond
    │   ├── lstm
    │   └── lstm_uncond
    ├── catbus
    │   └── lstm
    ├── elephantpig
    │   └── lstm
    ├── flamingo
    │   └── lstm_uncond
    ├── owl
    │   └── lstm
    └── sketch_rnn.zip

14 directories, 1 file

For now, we look at the layer-normalized model trained on the aaron_sheep dataset.

In [11]:
MODEL_DIR = MODELS_ROOT_DIR + '/aaron_sheep/layer_norm'
In [12]:
(train_set, 
 valid_set, 
 test_set, 
 hps_model, 
 eval_hps_model, 
 sample_hps_model) = load_env(DATA_DIR, MODEL_DIR)
INFO:tensorflow:Downloading http://github.com/hardmaru/sketch-rnn-datasets/raw/master/aaron_sheep/aaron_sheep.npz
INFO:tensorflow:Loaded 7400/300/300 from aaron_sheep.npz
INFO:tensorflow:Dataset combined: 8000 (7400/300/300), avg len 125
INFO:tensorflow:model_params.max_seq_len 250.
total images <= max_seq_len is 7400
total images <= max_seq_len is 300
total images <= max_seq_len is 300
INFO:tensorflow:normalizing_scale_factor 18.5198.

Drawing the Dataset

The strokes instance variable is the list of data points. Each data point is a sequence of strokes, represented as a 2D NumPy array whose rows contain the xy-offsets and the pen state.

In [13]:
len(train_set.strokes)
Out[13]:
7400

We can get a random sample from the dataset like so

In [122]:
a = train_set.random_sample()
a.shape
Out[122]:
(122, 3)
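
As a quick sanity check on this format (a small sketch of my own, not from the original notebook), we can count the number of individual strokes in this sample, which is simply the number of times the pen is lifted:

# the last column is a binary pen state; each stroke ends
# with a pen lift, so summing the column counts the strokes
num_strokes = int(a[:, -1].sum())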

In the original notebook, the authors implemented their own function to iterate through a stroke sequence and write out an SVG Path string. I found this a bit inelegant and cumbersome to work with.

Here we simply subclass Path, which underpins all matplotlib.patch classes. This is almost perfect for our data format since it "supports the standard set of moveto, lineto, curveto commands to draw simple and compound outlines consisting of line segments and splines. The Path is instantiated with a (N,2) array of (x,y) vertices, and a N-length array of path codes." We just need to normalize the data slightly as we shall explain later.

In [133]:
class StrokesPath(Path):

    def __init__(self, data, factor=.2, *args, **kwargs):

        # convert the relative offsets to absolute positions,
        # rescaling by the constant factor
        vertices = np.cumsum(data[::, :-1], axis=0) / factor

        # right-shift the codes by one, since the ith code
        # applies to the (i+1)th vertex (see explanation below)
        codes = np.roll(self.to_code(data[::, -1].astype(int)),
                        shift=1)

        super(StrokesPath, self).__init__(vertices,
                                          codes,
                                          *args,
                                          **kwargs)

    @staticmethod
    def to_code(cmd):
        # if cmd == 0, the code is LINETO
        # if cmd == 1, the code is MOVETO (which is LINETO - 1)
        return Path.LINETO - cmd

Now drawing the strokes becomes as simple as

In [134]:
fig, ax = plt.subplots(figsize=(3, 3))

strokes = StrokesPath(a)

patch = patches.PathPatch(strokes, facecolor='none')
ax.add_patch(patch)

x_min, x_max, y_min, y_max = get_bounds(data=a, factor=.2)

ax.set_xlim(x_min-5, x_max+5)
ax.set_ylim(y_max+5, y_min-5)

ax.axis('off')

plt.show()

We define this as a function to maximize modularity and reusability

In [135]:
def draw(stroke, factor=.2, pad=(10, 10), ax=None):

    if ax is None:
        ax = plt.gca()

    x_pad, y_pad = pad
    
    x_pad //= 2
    y_pad //= 2
        
    x_min, x_max, y_min, y_max = get_bounds(data=stroke,
                                            factor=factor)

    ax.set_xlim(x_min-x_pad, x_max+x_pad)
    ax.set_ylim(y_max+y_pad, y_min-y_pad)

    strokes = StrokesPath(stroke)

    patch = patches.PathPatch(strokes, facecolor='none')
    ax.add_patch(patch)
    
    ax.axis('off')

Now it is easy to fully take advantage of the functionality provided by Matplotlib to create more complex plots. For example, to draw the sketches in a grid, we just call our draw function on the grid of axes created with subplots.

In [136]:
fig, ax_arr = plt.subplots(nrows=5, 
                           ncols=10, 
                           figsize=(8, 4),
                           subplot_kw=dict(xticks=[],
                                           yticks=[],
                                           frame_on=False))
fig.tight_layout()

for ax_row in ax_arr:
    for ax in ax_row:
        strokes = train_set.random_sample()
        draw(strokes, ax=ax)

plt.show()

Explanation

The last column of the 2D array is essentially a binary value that specifies the pen action to be taken between the current point and the next point in the sequence.

In [137]:
a[::,-1].astype(int)
Out[137]:
array([1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
       0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1])

A 1 means the pen is to be lifted from the current point before being moved to the next point. In terms of SVG Path commands or Matplotlib Path codes, this corresponds to the next point being preceded by the m command or having the MOVETO code, respectively. Otherwise, the pen just draws a line to the next point, which means it is preceded by the l command or has the LINETO code.

Our to_code static method above simply converts each pen action into the corresponding Matplotlib Path code.

In [138]:
{c: getattr(Path, c) for c in dir(Path) if c.isupper()}
Out[138]:
{'CLOSEPOLY': 79,
 'CURVE3': 3,
 'CURVE4': 4,
 'LINETO': 2,
 'MOVETO': 1,
 'NUM_VERTICES_FOR_CODE': {0: 1, 1: 1, 2: 1, 3: 2, 4: 3, 79: 1},
 'STOP': 0}
In [139]:
StrokesPath.to_code(a[::,-1].astype(int))
Out[139]:
array([1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
       2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 1])

As explained, the $i$th code is meant for the $(i+1)$th vertex, so we are not done just yet: we must right-shift the array by 1. But what do we do with the first and last elements of the sequence?

Note that every stroke sequence terminates with the pen being lifted. This corresponds to the next vertex having a MOVETO code. However, there are no vertices left, so this code is superfluous and may be discarded.

In [140]:
all(StrokesPath.to_code(a[-1,-1]) == Path.MOVETO \
    for a in train_set.strokes)
Out[140]:
True

On the other hand, the first vertex is always required to have a MOVETO code (otherwise, where the drawing actually begins is not well-defined). Therefore, the simplest and most elegant solution is to np.roll the array to the right by 1, so that the first code takes on the value of the last code, which is always a MOVETO.
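
To make the effect of the roll concrete, here is a toy example with a hypothetical code array (not taken from the dataset):

codes = np.array([1, 2, 2, 1, 2, 2, 1])  # always ends in a MOVETO (1)
np.roll(codes, shift=1)                  # array([1, 1, 2, 2, 1, 2, 2])

The superfluous trailing MOVETO wraps around to become the leading MOVETO required by the first vertex.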

Lastly, while SVG paths support both absolute positions and relative offsets (with the M and m commands, respectively), Matplotlib only supports absolute positions. Since the dataset gives the points as relative offsets, we convert them to absolute positions simply by taking the cumulative sum with np.cumsum.
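
For instance, applying np.cumsum to a small, hypothetical array of offsets:

offsets = np.array([[1, 1], [2, 0], [-1, 3]])
np.cumsum(offsets, axis=0)  # array([[1, 1], [3, 1], [2, 4]])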

Trigonometric functions with recursion and higher-order functions in Python

In [1]:
%matplotlib inline
%config InlineBackend.figure_format = 'svg'
In [2]:
import matplotlib.pyplot as plt
import numpy as np

from itertools import count, islice, takewhile
from functools import reduce, partial

The Taylor series expansion for the trigonometric function $\sin{x}$ around the point $a=0$ (also known as the Maclaurin series in this case) is given by:

$$ \sin{x} = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \dotsb \text{ for all } x $$

The $k$th term of the expansion is given by

$$ \frac{(-1)^k}{(2k+1)!} x^{2k+1} $$

It is easy to evaluate this closed-form expression directly. However, it is more elegant and indeed more efficient to compute the terms bottom-up, by iteratively calculating the next term using the value of the previous term. This is just like computing factorials or a sequence of Fibonacci numbers using the bottom-up approach in dynamic programming.
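
As a rough sketch of the idea (my own, ahead of the full implementation after the break): consecutive terms are related by the ratio $-x^2 / \left( (2k+2)(2k+3) \right)$, so each new term costs only a single multiplication.

from itertools import count, islice

def sin_terms(x):
    # generate the Maclaurin series terms of sin(x) bottom-up:
    # each term is the previous one times -x**2 / ((2k+2)*(2k+3))
    term = x
    for k in count():
        yield term
        term *= -x**2 / ((2*k + 2) * (2*k + 3))

# approximate sin(1) by summing the first 10 terms
approx = sum(islice(sin_terms(1.), 10))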

Read more…

Matplotlib Unchained

In a previous post, I outlined how to embed a Matplotlib Animation directly in the Jupyter Notebook as an HTML5 video. In this notebook, we take the same Animation and save it as a GIF using ImageMagick. First, let us reproduce the FuncAnimation object from that notebook.

In [1]:
%matplotlib inline
In [2]:
import numpy as np
import matplotlib.pyplot as plt

from matplotlib import animation, rc
from IPython.display import HTML, Image
In [3]:
# equivalent to rcParams['animation.html'] = 'html5'
rc('animation', html='html5')
In [4]:
# Create new Figure with black background
fig = plt.figure(figsize=(8, 8), facecolor='black')

# Add a subplot with no frame
ax = fig.add_subplot(111, frameon=False)

# Generate random data
data = np.random.uniform(0, 1, (64, 75))
X = np.linspace(-1, 1, data.shape[-1])
G = 1.5 * np.exp(-4 * X * X)

# Set y limit (or first line is cropped because of thickness)
ax.set_ylim(-1, 70)

# No ticks
ax.set_xticks([])
ax.set_yticks([])

# 2 part titles to get different font weights
ax.text(0.5, 1.0, "MATPLOTLIB ", transform=ax.transAxes,
        ha="right", va="bottom", color="w",
        family="sans-serif", fontweight="light", fontsize=16)
ax.text(0.5, 1.0, "UNCHAINED", transform=ax.transAxes,
        ha="left", va="bottom", color="w",
        family="sans-serif", fontweight="bold", fontsize=16)

# Generate line plots
lines = [ax.plot((1-i/200.)*X, i+G*d, color="w", lw=1.5-i/100.)[0]
         for i, d in enumerate(data)]
In [5]:
def animate(*args):
    # Shift all data to the right
    data[:, 1:] = data[:, :-1]

    # Fill-in new values
    data[:, 0] = np.random.uniform(0, 1, len(data))

    # Update data
    for i, line in enumerate(lines):
        line.set_ydata(i + G * data[i])

    # Return modified artists
    return lines
In [6]:
# call the animator. blit=True means only re-draw the parts that have changed.
anim = animation.FuncAnimation(fig, animate, interval=20, blit=True)

Now, we just need to save the animation instance with writer=imagemagick. But before we do that, we first make sure imagemagick has been properly installed on our system.

In [7]:
!brew install imagemagick 
Updating Homebrew...
Warning: imagemagick-7.0.5-3 already installed

Now we can go ahead and save it as a GIF.

In [8]:
anim.save('../../files/unchained.gif', writer='imagemagick', fps=60, savefig_kwargs=dict(facecolor='black'))

Let's read it back in and display it to make sure it saved as expected.

In [9]:
Image(url='../../../unchained.gif')
Out[9]:

Save Matplotlib Animations as GIFs

In a previous post, I outlined how to embed a Matplotlib Animation directly in the Jupyter Notebook as an HTML5 video. In this notebook, we take the same Animation and save it as a GIF using ImageMagick. First, let us reproduce the FuncAnimation object from that notebook.

In [1]:
%matplotlib inline
In [2]:
import numpy as np
import matplotlib.pyplot as plt

from matplotlib import animation, rc
from IPython.display import HTML, Image
In [3]:
# equivalent to rcParams['animation.html'] = 'html5'
rc('animation', html='html5')
In [4]:
# First set up the figure, the axis, and the plot element we want to animate
fig, ax = plt.subplots()

ax.set_xlim(( 0, 2))
ax.set_ylim((-2, 2))

line, = ax.plot([], [], lw=2)
In [5]:
# initialization function: plot the background of each frame
def init():
    line.set_data([], [])
    return (line,)
In [6]:
# animation function. This is called sequentially
def animate(i):
    x = np.linspace(0, 2, 1000)
    y = np.sin(2 * np.pi * (x - 0.01 * i))
    line.set_data(x, y)
    return (line,)
In [7]:
# call the animator. blit=True means only re-draw the parts that 
# have changed.
anim = animation.FuncAnimation(fig, animate, init_func=init,
                               frames=100, interval=20, blit=True)
In [8]:
anim
Out[8]:

Now, we just need to save the animation instance with writer=imagemagick. But before we do that, we first make sure imagemagick has been properly installed on our system.

In [9]:
!brew install imagemagick
Warning: imagemagick-7.0.4-6 already installed

Now we can go ahead and save it as a GIF.

In [10]:
anim.save('../../files/animation.gif', writer='imagemagick', fps=60)

Finally, let's read it back in and display it to make sure it saved as expected.

In [11]:
Image(url='../../../animation.gif')
Out[11]:

Re-implementing the Kubernetes Guestbook Example with Flask and NGINX

The official Kubernetes walkthrough guides often point to the guestbook application as a quintessential example of how a simple but complete multi-tier web application can be deployed with Kubernetes. As described in the README, it consists of a web frontend, a redis master (for storage), and a replicated set of redis 'slaves'.

[Figure: guestbook application (//cloud.google.com/container-engine/images/guestbook.png)]

This seemed like an ideal starting point for deploying my Flask applications, which use a similar stack and also make use of redis master/slaves. The difficulty I found with readily using this example as a starting point is that the frontend is implemented in PHP, which is considerably different from modern paradigms (Node.js, Flask/Django, Rails, etc.). As described in the README:

A frontend pod is a simple PHP server that is configured to talk to either the slave or master services, depending on whether the client request is a read or a write. It exposes a simple AJAX interface, and serves an Angular-based UX. Again we'll create a set of replicated frontend pods instantiated by a Deployment — this time, with three replicas.

I figured that re-implementing the frontend pod with Flask would require minimal changes: the UI would remain mostly the same, and the actual interaction with the redis master/slaves is quite trivial.
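
As a rough sketch of what I had in mind (all names here are hypothetical, assuming the redis-py client and the service hostnames created by the guestbook manifests):

import os

from flask import Flask, jsonify, request
from redis import StrictRedis

app = Flask(__name__)

# service hostnames resolved by Kubernetes DNS (hypothetical defaults)
master = StrictRedis(host=os.getenv('REDIS_MASTER_HOST', 'redis-master'))
slave = StrictRedis(host=os.getenv('REDIS_SLAVE_HOST', 'redis-slave'))

@app.route('/set')
def set_entry():
    # writes always go to the redis master
    master.set(request.args['key'], request.args['value'])
    return jsonify(message='Updated')

@app.route('/get')
def get_entry():
    # reads can be served by any of the replicated slaves
    value = slave.get(request.args['key'])
    return jsonify(data=value.decode('utf-8') if value else None)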

Read more…

A Better Approach For Initializing New Nikola Themes (since v7.7.5)

A few months ago, I wrote a post on Creating a Nikola theme with Sass-compiled Bootstrap. Since then, Nikola 7.7.5 has added several new features which make it less tedious to get started with your custom theme.

Initializing the Theme

First, I initialize a theme named tiao, which automatically creates the necessary directories and files for me.

$ nikola theme --new=tiao --engine=jinja --parent=bootstrap3-jinja
[2016-05-18T02:29:49Z] INFO: theme: Creating theme tiao with parent bootstrap3-jinja and engine jinja in themes/tiao
[2016-05-18T02:29:49Z] INFO: theme: Created directory themes/tiao
[2016-05-18T02:29:49Z] INFO: theme: Created file themes/tiao/parent
[2016-05-18T02:29:49Z] INFO: theme: Created file themes/tiao/engine
[2016-05-18T02:29:49Z] INFO: theme: Theme themes/tiao created successfully.
[2016-05-18T02:29:49Z] NOTICE: theme: Remember to set THEME="tiao" in conf.py to use this theme.

$ tree themes/tiao
themes/tiao
├── engine
└── parent

0 directories, 2 files

Read more…

Visualizing and Animating Optimization Algorithms with Matplotlib

In this series of notebooks, we demonstrate some useful patterns and recipes for visualizing and animating optimization algorithms with Matplotlib.

In [1]:
%matplotlib inline
In [2]:
import matplotlib.pyplot as plt
import autograd.numpy as np

from mpl_toolkits.mplot3d import Axes3D
from matplotlib.colors import LogNorm
from matplotlib import animation
from IPython.display import HTML

from autograd import elementwise_grad, value_and_grad
from scipy.optimize import minimize
from collections import defaultdict
from itertools import zip_longest
from functools import partial

We shall restrict our attention to 3-dimensional problems for now (i.e. optimizing over only 2 parameters), though what follows can be extended to higher dimensions by plotting all pairs of parameters against one another, effectively projecting the problem to 3 dimensions.

The Wikipedia article on Test functions for optimization has a few functions that are useful for evaluating optimization algorithms. In particular, we shall look at Beale's function:

$$ f(x, y) = (1.5 - x + xy)^2 + (2.25 - x + xy^2)^2 + (2.625 - x + xy^3)^2 $$

In [3]:
f = lambda x, y: (1.5 - x + x*y)**2 + (2.25 - x + x*y**2)**2 + (2.625 - x + x*y**3)**2
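
For example, here is a minimal sketch (with an arbitrary starting point of my choosing) of how autograd's value_and_grad can supply exact gradients to scipy.optimize.minimize:

# minimize Beale's function, with autograd supplying the exact gradient
func = value_and_grad(lambda args: f(*args))
res = minimize(func, x0=np.array([1., 1.]), jac=True, method='L-BFGS-B')
# res.x should approach the global minimum at (3, 0.5)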