October 29, 2017 · docker tools

Getting started with docker in development

Hey, long time no see! Today I want to tell you about a great tool that I've been using extensively on a daily basis for a few years now. It's called Docker. I've already written a couple of posts about it, but none of them is dedicated to beginners. Time to fix that.

[image: container house]

I've heard about Docker, but what actually is it?

Docker is a tool for creating and running "lightweight virtual machines". They are distributed as packages usually containing a small Linux distribution, all the dependencies your application needs to run, and your application code. We call these packages "Docker images", and you can think of them as "executable black boxes". An image can be used to create an arbitrary number of "containers": isolated processes with access to all the prepared dependencies. One important thing is that each container is an exact copy of the image, even when started on a different operating system. No matter what technology or language your application requires, if you have the image and Docker installed, you can run it.

I want to keep this article short, so I'll move on to the advantages that Docker gives you. If you want to learn more, check the official documentation.

Why should I give Docker a chance?

Remember, you don't have to go "all in".

My advice is to add Docker support to your application progressively. Start with development. Tell others in your team that they can use Docker and show them how. Wait for reactions, and let them become familiar with it. Wait until they ask "Why the hell did we discover this so late?". Only then think about production integration; I'll write about it soon.

In development, Docker allows you to keep the whole infrastructure as code and share it with your team. Do you know dialogues like this?


**you**: "Hey Chris, my development application won't start. Have you changed something?"

**chris**: "Oh yes, I forgot to tell you. We have a new database engine now. Also, you need to install Redis and the drivers for it. It's very easy, take a look at the docs and you'll do it in a few moments."


Of course, it always takes more time. If you work in a bigger team, such situations happen more often and are harder to spot. With Docker, to sync with the changes you just need to run docker-compose up --build, and after a few moments the new version of the development stack will be running, no matter what changed. Moreover, introducing a new developer to the project looks exactly the same! Imagine how much time that can save.

Personally, I work with many different technologies at the same time, using different versions of common dependencies (e.g. databases). Docker lets me isolate them. Also, no matter the technology, I can start working in exactly the same way: docker-compose up and rock.

There is a place called Docker Hub. It's the official registry with a tremendous number of ready-to-use images that you can use in your stack. You need Redis? Postgres? MySQL, Apache, PHP, nginx, Python, Ruby, Node.js? All of them are just one click away!

OK, it looks promising. How do I start?

First: install docker and docker-compose.
You can find the official instructions here and here.


Second: add proper Dockerfile to your application.
A Dockerfile contains the list of instructions needed to prepare the environment your application runs in. For example, here you copy your project files, install all system and application dependencies, set all required settings, etc. Let's take a look at the Dockerfile that I'm using for my Python projects:

FROM python:3.6

ENV PROJECT_ROOT=/srv
WORKDIR $PROJECT_ROOT

COPY requirements.txt .
RUN pip install -r requirements.txt

COPY . .

CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]

Usually, you don't need anything fancier. To find instructions on how to create a Dockerfile for your language, just google it; there are lots of high-quality tutorials.
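One small addition worth considering: since COPY . . sends the whole project directory to Docker as the build context, a .dockerignore file next to the Dockerfile keeps builds fast and images small. A minimal sketch (the entries below are just common examples, adjust them to your project):

```
.git
__pycache__/
*.pyc
.env
```

It works like .gitignore: anything matching these patterns is never sent to the Docker daemon during a build.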

If you want to learn about best practices, my article has already helped thousands of readers. You can also check the official one.


Third: create docker-compose.yml file.
Docker-compose is a brilliant tool that lets us painlessly manage a whole stack of containers at the same time. This special file tells docker-compose which containers should be started.

Example docker-compose.yml that I'm using:

version: '2'
services:
  redis:
    image: redis

  database:
    image: postgres

  app:
    build: .
    environment:
      DEBUG: 1
    ports:
      - 8000:8000
    volumes:
      - .:/srv
    command: python manage.py runserver 0.0.0.0:8000

This file says that we want Redis and Postgres created from the official images, and our application built from the local Dockerfile, reachable from outside the Docker network on port 8000, with all local files mounted into the container for easy development. All the options can be found here (I'm using version 2 because version 3 mainly adds Docker Swarm support, which isn't needed for development).
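If you want Compose to start the database and Redis before the application container, the version 2 format supports a depends_on option. A minimal sketch for the app service above (note that it only orders container startup; it does not wait for Postgres to be ready to accept connections):

```yaml
  app:
    build: .
    depends_on:
      - database
      - redis
```

With this in place, docker-compose up app also brings up both dependencies automatically.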

Now, when we type docker-compose up, the following things happen:

  1. Docker tries to download or create all the required images. If we used the image option, the image is downloaded from the internet, by default from Docker Hub. The build option instructs Docker to build the image using the Dockerfile found in the given directory, in our case . (the current one). If the image already exists, it won't be automatically rebuilt (to force that, we must add the --build flag).
  2. When the images are available, docker-compose checks whether the configuration has changed since the last run. If not, the previously used containers are started; otherwise new containers are created. Settings like command, ports, volumes, variables, networks etc. are taken from docker-compose.yml.
  3. Unless stated otherwise, a special network is created, spanning all created containers. It lets them reference each other by service name instead of IP address; for example, for our application the database will be available at database:5432.
  4. All containers are started at the same time, and we see logs from all of them.
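The service names from point 3 can be handed to the application through environment variables, so the code never hard-codes hostnames. A sketch for the app service (DATABASE_URL and REDIS_URL are just hypothetical variable names your application code would have to read; the credentials match the defaults of the official postgres image):

```yaml
  app:
    environment:
      DATABASE_URL: postgres://postgres@database:5432/postgres
      REDIS_URL: redis://redis:6379/0
```

The same variables can later point at real servers in production, without any code changes.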

That's it! Now the whole stack starts automatically. Of course, you should add things like automatically creating the database if it doesn't exist, or reloading application code on change, to make development really pleasant. Importantly, you only need to prepare this once, and the whole team benefits from it immediately.
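As an example of such polish, the app's command can wait until Postgres accepts connections before starting. A minimal sketch of a command override in docker-compose.yml (it assumes nc (netcat) is available in the image, and migrate is a Django-specific step):

```yaml
  app:
    command: >
      sh -c "while ! nc -z database 5432; do sleep 1; done
             && python manage.py migrate
             && python manage.py runserver 0.0.0.0:8000"
```

This avoids the classic "app crashed because the database wasn't up yet" failure on a cold start.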

Note: during development, you should probably always use the volumes option in your docker-compose.yml file to mount your local files into the container. Without it, you would need to rebuild the image every time something changes.

Typical usage

Here are the most common commands that I use on a daily basis.

Starting work: docker-compose up, or docker-compose up --build if you know that some dependencies have changed

Running a one-off command: it depends on whether we need other containers available, like the database. If yes, start the stack with the previous command and, in another terminal window, run docker-compose exec $SERVICE_NAME $COMMAND. If not, just use docker-compose run --rm $SERVICE_NAME $COMMAND (I recommend the --rm flag to clean up automatically after the command)

Removing everything created by docker-compose: docker-compose down. It destroys all containers and networks; add the --rmi all and -v flags if you also want to remove the images and volumes

Removing one container with all related data: docker-compose rm -v $SERVICE_NAME, after stopping it with docker-compose stop $SERVICE_NAME (very useful if you want to hard-reset the database)

That's all for now!

I hope this post helps you get started with Docker. If you like it and want more, don't hesitate to share it on social media; it really helps to keep the motivation up. Thank you, and have a great day :)
