Rock IT

How to reuse Keras Deep Neural Network using Docker

In this post I'll show how to prepare a Docker container able to run an already trained Neural Network (NN). It can be helpful if you want to redistribute your work to multiple machines or send it to a client, along with a one-line run command. The sample code uses Keras with the TensorFlow backend.


What you'll need:

  1. An already prepared Keras NN. We'll save it to an HDF5 file (here you can find more info).
  2. Docker installed (instructions)


For simplicity, we'll be using a well-known example - CIFAR10 classification.
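For reference, CIFAR10 labels its images with integers 0-9 in a fixed class order. A tiny sketch of that mapping (the same lookup we'll use later to turn predictions into names):

```python
# the 10 CIFAR10 classes, in label order (0-9)
cifar10_classes = [
    'airplane', 'automobile', 'bird', 'cat', 'deer',
    'dog', 'frog', 'horse', 'ship', 'truck',
]

# a numeric label from the dataset maps straight to a name
label = 8
print(cifar10_classes[label])  # ship
```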

Firstly, we have to train our model and save it for later use. Here I'll show just the relevant fragment - how to save the model to an .h5 file - because you probably have your own code that you want to distribute. Note that we probably want to run the training in the cloud or on a computer with a good GPU card, so we don't have to wait long:

# fragment of the training script
# the whole file can be found in the repository
# (links below)
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
model = Sequential()

# layers omitted for clarity
# (the model.compile() and model.fit() calls are omitted as well)

# here we actually save the trained model
model.save('model.h5')

Ok, we saved our model. Because the model is already trained, we don't need GPU support in our container, which simplifies things a lot (it's possible to have GPU support inside Docker using nvidia-docker, but it's more complicated).

Here is very simple code for loading the saved model and running predictions:

import argparse
import sys
import os
import glob
import numpy as np

from keras.models import load_model as load_keras_model
from keras.preprocessing.image import img_to_array, load_img

# disable TF debugging info
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'

# our saved model file
# could be refactored to be taken from the command line
model_filename = 'model.h5'
class_to_name = [
    'airplane', 'automobile', 'bird', 'cat', 'deer',
    'dog', 'frog', 'horse', 'ship', 'truck',
]

def get_filenames():
    parser = argparse.ArgumentParser()
    parser.add_argument('filename', nargs='*', default=['**/*.*'])
    args = parser.parse_args()

    for pattern in args.filename:
        # here we recursively look for input 
        # files using provided glob patterns
        for filename in glob.iglob('data/' + pattern, recursive=True):
            yield filename

def load_model():
    if os.path.exists(model_filename):
        return load_keras_model(model_filename)
    print("File {} not found!".format(model_filename))
    sys.exit(1)

def load_image(filename):
    img_arr = img_to_array(load_img(filename))
    return np.asarray([img_arr])

def predict(image, model):
    result = np.argmax(model.predict(image))
    return class_to_name[result]

if __name__ == '__main__':
    filenames = get_filenames()
    keras_model = load_model()
    for filename in filenames:
        image = load_image(filename)
        image_class = predict(image, keras_model)
        print("{:30}   {}".format(filename, image_class))
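You can exercise the prediction logic of this script without TensorFlow installed by plugging in a stub model. `FakeModel` below is made up for illustration and simply always votes for class 8 ('ship'):

```python
import numpy as np

class_to_name = ['airplane', 'automobile', 'bird', 'cat', 'deer',
                 'dog', 'frog', 'horse', 'ship', 'truck']

class FakeModel:
    """Stand-in for the real Keras model loaded from model.h5."""
    def predict(self, image):
        # one row of fake class scores per image in the batch
        scores = np.zeros((1, 10))
        scores[0, 8] = 1.0
        return scores

def predict(image, model):
    # same logic as in the script above
    result = np.argmax(model.predict(image))
    return class_to_name[result]

# a fake 32x32 RGB image wrapped in a one-element batch,
# just like load_image() produces
image = np.zeros((1, 32, 32, 3))
print(predict(image, FakeModel()))  # ship
```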

Now the actual Dockerfile (sorry smartphone users, it's hard to make this code mobile-friendly). It's based on the official Dockerfile for Keras:

# Dockerfile
FROM ubuntu:16.04

ENV CONDA_DIR /opt/conda

RUN mkdir -p $CONDA_DIR && \
    echo export PATH=$CONDA_DIR/bin:'$PATH' > /etc/profile.d/ && \
    apt-get update && \
    apt-get install -y wget git libhdf5-dev g++ graphviz bzip2 && \
    wget --quiet && \
    echo "c59b3dd3cad550ac7596e0d599b91e75d88826db132e4146030ef471bb434e9a *" | sha256sum -c - && \
    /bin/bash / -f -b -p $CONDA_DIR

ENV NB_USER keras
ENV NB_UID 1000

RUN useradd -m -s /bin/bash -N -u $NB_UID $NB_USER && \
    mkdir -p $CONDA_DIR && \
    chown keras $CONDA_DIR -R && \
    mkdir -p /src && \
    chown keras /src

USER keras

# Python
ARG python_version=3.5

RUN conda install -y python=${python_version} && \
    pip install --upgrade pip && \
    pip install tensorflow h5py Pillow && \
    git clone git:// /src && pip install -e /src[tests] && \
    pip install git+git:// && \
    conda clean -yt


ADD . /srv/
WORKDIR /srv

CMD ["python", "-W", "ignore", ""]


# Firstly, let's build our Docker image
# Remember to put the 'model.h5' file in the build context first
docker build -t cifar .

# We run the created Docker image like this
# ($PWD should be the directory with images)
docker run -it --rm -v $PWD:/srv/data cifar python

# We can also upload image to Docker Registry
# so others can also easily run it
docker tag cifar
docker push

All presented code samples can be found in my repository, along with a Makefile to simplify the whole process even more.


What exactly did we do here? Let's make a 3-step summary:

  1. We trained our model and saved it to the model.h5 file
  2. We created a Docker image with our model, Keras, TensorFlow and all the stuff needed to run our predictions, as well as a script which loads input data (in our case, images) and outputs predictions
  3. Now we can distribute the "executable box" with our Neural Network model, either through a Docker Registry or by giving a link to the repository along with 2-line usage instructions.

If you found this post useful, please share / comment. If not, let me know what was wrong. It's very important to me :)

Warsaw, Poland
Full Stack geek. Likes Docker, Python, and JavaScript, always interested in trying new stuff.