I’ll be honest: I was pretty trepidatious about using Docker. It wasn’t something we used at my last job and most tutorials felt like this comic by Van Oktop.
This post won’t teach you everything you need to know about Docker. But if you’re getting started with Docker and feeling a little lost, hopefully this will help demystify it a bit.
What is Docker?
But first, some frequently asked (by me to my colleagues) questions:
- So… what’s Docker? Glad you asked! Docker helps you run different projects with different dependencies totally separately from each other, without needing to download a bunch of stuff onto your machine that you may never need again.
- How is that different from virtualenv? A virtual environment does some of this. You can use different versions of Python, Django, etc. in different projects when you run each project in its own virtualenv. Docker has the added benefit of isolating your database, caching service like Redis, and other tools as well. For my current project, I’m running a Postgres database and I didn’t have to download Postgres or configure it locally at all!
- So do you use Docker alongside virtualenv? Not quite. You use Docker containers instead of virtual environments. If you’re committed to Docker, you don’t need to worry about virtualenvs anymore. (They can still be useful… but that’s another post.)
A few Docker definitions
- Docker: a software container platform. In practice, this means that Docker is something you download onto your machine. You will run Docker for your projects the way you used to use virtual environments, but you will write a little extra code to set up your stuff in Docker.
- Image: a “lightweight, stand-alone, executable package that includes everything needed to run a piece of software.” You will set up a specific image for each project you work on that will tell Docker which packages your project needs, where your code lives, etc.
- Container: “a runtime instance of an image.” Containers are running copies of images, and are what your code will actually run in. This part is closest to what used to be the virtual environment.
- Dockerfile: the name of the file that contains the instructions for setting up your image.
- docker-compose.yml: the file where you can set up your database, automatically start your server when you start your container, and cool stuff like that.
I highly recommend working through the Get started with Docker tutorial on the Docker website. It will introduce you to the parts of a Dockerfile and the basics of how Docker works. The rest of this post assumes you’ve done the tutorial and are ready to use Docker with a Django project.
Setting up a new project in Docker
These instructions aren’t set in stone; they’re just what made it easiest for me to get set up and verify that everything was working with Docker.
First, download Docker and complete the Get started with Docker tutorial.
Follow your normal process for starting a new project, including using Cookiecutter and creating a virtual environment. (You’ll discard this virtual environment later.) Create a requirements.txt file and add the packages you need. Inside your virtual environment, run pip install -r requirements.txt. Then run ./manage.py runserver and make sure you see the blue screen of success in your browser. Yay! Make your initial commit.
In the same directory as your manage.py file, create a file called Dockerfile. Remember that a Dockerfile contains the instructions for creating your image. It should look something like this (but yours might not need everything mine does, and yours might include some instructions that mine does not):
FROM python:3.6
ENV PYTHONUNBUFFERED 1
ENV DJANGO_ENV dev
ENV DOCKER_CONTAINER 1
COPY ./requirements.txt /code/requirements.txt
RUN pip install -r /code/requirements.txt
COPY . /code/
WORKDIR /code/
EXPOSE 8000
Let's break this down:
FROM python:3.6

You don’t need to create your Docker image from scratch. You can base your image on an existing image in Docker Hub, a repository of existing Docker images. On this line, I’ve told Docker to base my image on the Python 3.6 image, which (you guessed it) contains Python 3.6. Pointing to python:3.6 rather than a pinned patch release like python:3.6.3 ensures that we get the latest 3.6.x version, which will include bug fixes and security updates for that version of Python.
ENV PYTHONUNBUFFERED 1
ENV creates an environment variable called PYTHONUNBUFFERED and sets it to 1 (which, remember, is “truthy”). All together, this statement means that Docker won’t buffer the output from your application; instead, you will get to see your output in your console the way you’re used to.
ENV DJANGO_ENV dev
This creates an environment variable called DJANGO_ENV and sets it to dev. If you use multiple environment-based settings.py files, you can use this variable to pick the right one. You might call your development environment "test" or "local" or something else.
ENV DOCKER_CONTAINER 1
This creates an environment variable called DOCKER_CONTAINER that you can use in settings.py to load different databases depending on whether you’re running your application inside a Docker container.
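For example, your settings.py could branch on that variable. Here is a minimal sketch of one way to do it; the helper name, database names, user, and the db host are my assumptions (chosen to match the Compose setup later in this post), not code from a real project:

```python
import os

def database_config(env):
    """Pick Django DATABASES settings based on environment variables.

    Hypothetical helper: the database name, user, and the "db" host
    are assumptions matching the docker-compose.yml in this post.
    """
    if env.get("DOCKER_CONTAINER") == "1":
        # Inside Docker, talk to the Postgres container.
        return {
            "default": {
                "ENGINE": "django.db.backends.postgresql",
                "NAME": "postgres",
                "USER": "postgres",
                "HOST": "db",  # the Compose service name, not localhost
                "PORT": 5432,
            }
        }
    # Outside Docker, fall back to a local SQLite file.
    return {
        "default": {
            "ENGINE": "django.db.backends.sqlite3",
            "NAME": "db.sqlite3",
        }
    }

# In settings.py you would then write:
DATABASES = database_config(os.environ)
```

The point is just that a single truthy environment variable is enough to switch the whole database configuration between "inside Docker" and "outside Docker."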
COPY ./requirements.txt /code/requirements.txt
Remember that "." means “the current directory,” so this line copies your project’s requirements.txt file into a new directory in Docker called /code/.
RUN pip install -r /code/requirements.txt
Just like in a regular virtual environment, you need to install your required packages. (Copying requirements.txt and installing packages before copying the rest of your code lets Docker cache this step, so rebuilds are fast when only your code has changed.)
COPY . /code/
This line copies the rest of the code in your current directory "." (your project code) into the /code/ directory.
Each Docker container will already contain some subdirectories, so a good practice is to put your project code into its own directory.
WORKDIR /code/

You’re probably used to running things like ./manage.py runserver. But when you run commands in your Docker container, your code doesn’t live in the current directory (.) anymore; it lives in /code/. This line tells Docker that you want your “working directory” to be /code/, so you can keep running commands relative to the current directory to your heart’s content.
EXPOSE 8000

In order to runserver like a champ, your Docker container will need access to port 8000. This bestows that access.
Huzzah! Your first Dockerfile is ready to go.
Deactivate your virtual environment. In Terminal or your command line, run docker build . from the same directory that contains your Dockerfile. You will see a lot of output in the console.
Your Dockerfile defines the rules and instructions for your image, and docker build . actually creates the image. You can’t run containers until you have a valid image to base them on. Assuming you had no errors when you ran docker build ., you now have a functioning image, ready to run as a container!
If you are not on a Mac, install Docker Compose. (Mac users: Docker Compose ships with Docker, so you’re good to go!)
Docker Compose lets you run more than one container in a Docker application. It’s especially useful if you want to have a database, like Postgres, running in a container alongside your web app. (Docker’s overview of Compose is helpful.) Compose allows you to define several services that will make up your app and run them all together. Examples of services you might define include:
- web: defines your web service
- db: your database
- redis or another caching service
Compose can also help you relate those services to each other. For example, you likely don’t want your web service to start running until your db is ready, right?
Create a new file called docker-compose.yml in the same directory as your Dockerfile. While Dockerfile doesn’t have an extension, the docker-compose file is written in YAML, so it has the extension .yml. Mine defines two services, web and db, and looks like this:
version: '3'

services:
  db:
    image: postgres:9.6.5
    volumes:
      - postgres_data:/var/lib/postgresql/data/
  web:
    build: .
    command: bash -c "python /code/manage.py migrate --noinput && python /code/manage.py runserver 0.0.0.0:8000"
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db

volumes:
  postgres_data:
Just like we did with the Dockerfile, let’s go through the parts of this docker-compose.yml file.
version: '3'

This line defines the version of the Compose file format we want to use. We’re using version 3, the most recent version.

services:

Indented under this line, we define the services we want Compose to run in separate containers when we run our project.
db:
  image: postgres:9.6.5
  volumes:
    - postgres_data:/var/lib/postgresql/data/
This is where Compose gets exciting: this section sets up the db service as a Postgres database and instructs Compose to pull version 9.6.5 of Postgres from the image that already exists in Docker Hub. This means that I don’t need to download Postgres on my computer at all in order to use it as my local database.
Upgrading Postgres from one major version to another (say, 9.5 to 9.6) while keeping your data requires running some extra tools, pg_dump and pg_restore, and can get a little complicated. If you don’t want to mess with this, pin your Postgres image to a specific version (like 9.6.5). You will probably want to upgrade Postgres eventually, but pinning saves you from dealing with an unplanned upgrade every time a new version of the image is released.
volumes tells Compose to mount the named volume postgres_data at /var/lib/postgresql/data/ inside the container, which is where Postgres stores its data. Remember when I said that each container had its own set of subdirectories and that is why you needed to copy your application code into a directory named /code/? /var/ is one of those other subdirectories. A volume also lets your data persist beyond the lifecycle of a specific container.
web:
  build: .
  command: bash -c "python /code/manage.py migrate --noinput && python /code/manage.py runserver 0.0.0.0:8000"
  volumes:
    - .:/code
  ports:
    - "8000:8000"
  depends_on:
    - db
This section sets up the web service, the one that will run my application code. build: . tells Compose to build the image from the Dockerfile in the current directory. command: bash -c "python /code/manage.py migrate --noinput" automatically runs migrations when I run the container; the --noinput flag means Django won’t pause to prompt me for input. && python /code/manage.py runserver 0.0.0.0:8000 then starts the server. (The && lets us chain two commands on one line.)
volumes:
  - .:/code
This maps your current directory (.) on the host to /code/ in the Docker container, so the changes you make to your code locally show up inside the running container right away.
ports:
  - "8000:8000"
Here we map our own port 8000 to the port 8000 in the Docker container. A more technical explanation is, “We map port 8000 to the host’s port 8000, meaning that our app server will be reachable in the host via `127.0.0.1:8000` once it’s running;” thanks to Oliver Eidel for that!
depends_on:
  - db
The depends_on statement declares that our web service depends on our db service, so Compose will get the db service up and running before it tries to run the web service.
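One caveat worth knowing: depends_on only waits for the db container to start, not for Postgres inside it to be ready to accept connections, so the web service can still race ahead of the database. A common workaround is a small wait loop before starting the server. Here’s a minimal sketch; the helper name is my own invention, not part of Compose:

```python
import socket
import time

def wait_for_port(host, port, timeout=30.0):
    """Poll until a TCP connection to (host, port) succeeds, or give up.

    Returns True as soon as the port accepts connections, and False if
    the timeout elapses first. In an entrypoint script you might call
    this with host="db", port=5432 before running migrations.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return True
        except OSError:
            time.sleep(0.5)  # not ready yet; try again shortly
    return False
```

If the database never comes up, the function returns False and your startup script can exit with an error instead of crashing with a confusing connection traceback.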
volumes:
  postgres_data:

Finally, Compose requires you to list your named volumes under a top-level volumes key, so we declare postgres_data there.
Save the docker-compose.yml file.
In Terminal or your console and from the same directory that contains your Dockerfile and docker-compose.yml file, run docker-compose up.
Assuming you have no errors, navigate to http://localhost:8000/ in a browser and see your blue screen of success once again!
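If you’d rather check from a script than a browser, a tiny helper like this works; it’s a hypothetical convenience of mine, not part of Docker or Django:

```python
import urllib.error
import urllib.request

def is_up(url, timeout=2.0):
    """Return True if something is answering HTTP requests at url."""
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return True
    except urllib.error.HTTPError:
        return True   # the server responded, even if with an error status
    except OSError:
        return False  # connection refused, timeout, DNS failure, etc.

# e.g. is_up("http://localhost:8000/") once `docker-compose up` is running
```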
Ready for your next step? Check out Docker: Useful Command Line Stuff next!
- Get started with Docker
- Get started with Docker Compose
- Quickstart: Compose and Django
- Best practices for writing Dockerfiles
- Dockerizing Django, uWSGI and Postgres the Serious Way, Oliver Eidel
Thanks to Frank Wiles and Jeff Triplett for reviewing drafts of this post.