As a software developer who's been around for some time, I've seen many technologies emerge, but nothing has had a bigger impact on my progress than Docker, and not in the way you might think!
Long story short, for me Docker has been the gateway to learning a huge set of technologies that I might otherwise never have tried. With Docker, all you need to try and learn a system, technology, software or solution is to find the right Docker images and, ideally, a compose file.
This technology took me from junior developer to tech lead, then DevOps engineer, and ultimately I ended up starting my own startup, DoTenX.
DoTenX is open-source (https://github.com/dotenx/dotenx), and in this tutorial we use it to explain the common sections of a compose file and how to use them.
In this short tutorial, I'll show you some of the most important concepts and commands for working with docker-compose.
I assume you have Docker Desktop and docker-compose installed on your machine.
Let's start:
First clone the repository we're going to use in our examples:
git clone https://github.com/dotenx/dotenx
cd dotenx #change directory to the folder the project is cloned in
Let's take a look at our compose file, docker-compose.yaml. This is the file that the docker-compose command uses by default.
version: "3"
services:
scheduler_server:
build:
context: ./job_scheduler/server
dockerfile: Dockerfile.dev
ports:
- "9090:9090"
networks:
- default
volumes:
- ./job_scheduler/server:/usr/src/app
- /usr/src/app/node_modules
environment:
- REDIS_HOST=redis
- AO_API_URL=http://ao-api:3004
command: npm run dev
ui:
build:
context: ./ui
dockerfile: Dockerfile.dev
ports:
- "3010:80"
networks:
- default
volumes:
- ./ui:/usr/src/app
- ./ui/nginx.conf:/etc/nginx/nginx.conf
- /usr/src/app/node_modules
env_file:
- ./ui/.env.development
ui_builder:
build:
context: ./ui-builder
dockerfile: Dockerfile
ports:
- '8080:80'
networks:
- default
volumes:
- ./ui-builder:/usr/src/app
- ./ui-builder/nginx.conf:/etc/nginx/nginx.conf
- /usr/src/app/node_modules
redis:
image: redis
hostname: redis
ports:
- "6380:6379"
volumes:
- "redis_data:/data"
psql:
image: postgres:12
hostname: postgres
env_file:
- postgres.env
volumes:
- "postgres_data:/var/lib/postgresql/data/"
ports:
- "5434:5432"
restart: unless-stopped
ao-api:
build:
context: .
dockerfile: ./ao-api/Dockerfile
env_file:
- ./ao-api/.env
environment:
- RUNNING_IN_DOCKER=true
depends_on:
- psql
- scheduler_server
- redis
hostname: ao-api
working_dir: /root/
volumes:
- ao_api_data:/go/src/github.com/dotenx/dotenx/ao-api
- /var/run/docker.sock:/var/run/docker.sock
networks:
- default
ports:
- "3005:3004"
runner:
build:
context: runner
dockerfile: Dockerfile.dev
working_dir: /root/
volumes:
- /tmp/cache:/tmp/cache
- runner_data:/go/src/github.com/dotenx/dotenx/runner
- /var/run/docker.sock:/var/run/docker.sock
networks:
- default
networks:
default:
external:
name: dev
volumes:
redis_data:
postgres_data:
ao_api_data:
runner_data:
At the top level of this yaml file we have services, networks and volumes. In simple terms, services are the containers you want to run, and networks defines the networks that the service containers are attached to. I'll explain volumes shortly.
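Stripped of the DoTenX-specific details, the overall shape of a compose file looks like this (a minimal sketch for orientation, not part of the repository):

```yaml
version: "3"
services:          # the containers you want to run
  app:
    image: nginx   # either an image, or a build section
networks:          # networks the service containers attach to
  default:
volumes:           # named volumes for persistent data
  app_data:
```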
For each service, you can specify an image, like the psql and redis services, or a dockerfile, like the other services in this compose file.
When you don't specify an image for a service, you have to set the build section.
build:
context: ./job_scheduler/server
dockerfile: Dockerfile.dev
In this example, we have specified a context and the dockerfile.
As DoTenX is a complex project, it has multiple sub-projects, each in a separate directory, and context defines the relative path of the files that docker build uses to create the image.
If your dockerfile is named Dockerfile, you don't need to specify the dockerfile parameter in the build section.
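For example, the ui_builder service's dockerfile already has the default name, so its build section could equivalently use the shorthand string form (a sketch, not a change you need to make):

```yaml
ui_builder:
  build: ./ui-builder   # context only; Dockerfile is found by its default name
```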
Another section in the services is ports. This parameter maps a port on your machine to a port that's exposed in the dockerfile.
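The format is "host:container". For instance, the ui service maps port 3010 on your machine to port 80 inside the container:

```yaml
ports:
  - "3010:80"   # host port 3010 -> container port 80
```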
You can pass environment variables to the container for each service using the environment section of the service. Docker Compose also supports env_file. These are files, typically named with the format .env.{environment}, which you use to set multiple environment variables conveniently.
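The two styles look like this; this sketch combines them in one hypothetical service, borrowing the variable from scheduler_server and the file path from the ui service:

```yaml
environment:              # inline key=value pairs
  - REDIS_HOST=redis
env_file:                 # or load many variables from a file
  - ./ui/.env.development
```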
Docker Compose also lets you override several settings of your container that you'd normally specify in the dockerfile, such as command and working_dir (let me know in the comments if you want me to explain these in more detail).
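For example, the scheduler_server service above overrides the image's default command, and ao-api sets its working directory:

```yaml
command: npm run dev   # replaces the CMD from the dockerfile
working_dir: /root/    # replaces the WORKDIR from the dockerfile
```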
You can use the volumes section to let containers persist data. Remember that every time you start a docker container it starts fresh, and when you remove it, all its data is lost unless you specify volumes.
In some services, such as ui_builder, we have used bind mounts to map an exact path on our machine to a path inside the container, while in others we have used named volumes, such as redis_data in the redis service.
The volumes section at the top level of the compose file gives us control over the configuration of those named volumes.
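Side by side, the two forms from this compose file look like this:

```yaml
services:
  redis:
    volumes:
      - "redis_data:/data"          # named volume, managed by Docker
  ui_builder:
    volumes:
      - ./ui-builder:/usr/src/app   # bind mount: host path -> container path
volumes:
  redis_data:                       # declares the named volume
```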
Another section in the services in this compose file is networks. This is a more advanced topic, and docker-compose supports various types of networks. In this compose file an external network is used, which means it must already exist before you start the services. Run this command to create it:
docker network create -d bridge --attachable dev
Last but not least, the depends_on section defines the startup and shutdown order between services.
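In this file, ao-api declares that it should start after its dependencies:

```yaml
ao-api:
  depends_on:   # psql, scheduler_server and redis start first
    - psql
    - scheduler_server
    - redis
```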
Now, you can run this command to start this solution:
docker-compose up
As soon as you run this command, Docker starts building images for the services that have a build section. The next time you run it, if nothing has changed, the build step is skipped.
When the containers are ready, go to localhost:3010 in your browser to see the result.
This is a rather large project with many dependencies, which makes it a good playground for exploring the various options. You can see the live version of this project at dotenx.com.
In the next tutorial, I'll talk about the commands you need to inspect the containers, get the logs, copy files from/to the containers and some other common use cases.