How to Create a CI/CD Pipeline with Docker


CI/CD pipelines automate build, test, and deployment tasks within the software delivery lifecycle (SDLC). CI/CD is a crucial part of DevOps because it helps increase delivery throughput while ensuring consistent quality standards are maintained.

Pipeline configuration often overlaps with the use of containerization platforms like Docker. Containers are isolated, ephemeral environments that have two main benefits for CI/CD: the ability to safely run your pipeline's jobs and to package the applications you create.

In this article, we'll discuss how to combine Docker and CI/CD for maximum effect.

What is CI/CD?

Continuous Integration (CI) and Continuous Delivery (CD) pipelines automate the process of taking code changes from development through to production environments. Manually executing builds, tests, and deployment jobs is time-consuming and error-prone; using CI/CD instead lets you stay focused on development while ensuring all code is subject to required checks.

Successful CI/CD adoption depends on pipelines being fast, simple, secure, and scalable. It's important to architect your job execution environments so they fulfill these requirements, as otherwise, bottlenecks and inefficiencies can occur. Using Docker containers for your jobs is one way to achieve this.

Read more about CI/CD pipelines.

Using Docker for CI/CD

Docker is the most popular containerization platform. The isolation and scalability characteristics of containers make them suitable for a variety of tasks related to app deployment and the SDLC.

In the context of CI/CD, Docker is used in two main ways:

  1. Using Docker to run your CI/CD pipeline jobs --- Your CI/CD platform creates a new Docker container for each job in your pipeline. The job's script is executed inside the container, providing per-job isolation that helps prevent unwanted side effects and security issues from occurring.
  2. Using a CI/CD pipeline to build and deploy your Docker images --- A job within your CI/CD pipeline is used to build an updated Docker image after changes are made to your source code. The built image can then be deployed to production in a later job.

These interactions between Docker and CI/CD servers are not mutually exclusive: many projects will use Docker to run their CI/CD jobs and will build Docker images in those jobs. This workflow is usually achieved using Docker-in-Docker, where an instance of the Docker daemon is started inside the container that runs the CI/CD job. The nested Docker daemon allows you to successfully perform operations like docker build and docker push that your job's script requires. Enabling Docker-in-Docker can require special configuration within your CI/CD jobs, depending on the platform you're using.


Example: How to build a CI/CD pipeline with Docker

To illustrate the two ways in which Docker can be used with CI/CD, we'll create a simple GitLab CI/CD pipeline.

The pipeline will execute a job that runs inside a Docker container; that containerized job will use Docker-in-Docker to build our app's Docker image and push it to the image registry provided by GitLab. This pipeline will ensure that image rebuilds occur automatically and consistently each time new commits are pushed to the repository.

The steps that follow are specific to GitLab, but the overall flow is similar in other CI/CD tools.

1. Prepare GitLab CI/CD

To follow along with this tutorial, you'll need a new project prepared on either GitLab.com or your own GitLab instance. By default, GitLab.com runs your jobs as Docker containers atop ephemeral virtual machine instances, so no additional configuration is required to containerize your pipelines.

If you're using your own GitLab instance, you should ensure you've connected a GitLab Runner that's using the Docker executor --- you can find detailed setup instructions in the documentation. For the purposes of this tutorial, the runner should be configured to accept untagged jobs.
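As an illustrative sketch (the URL and token below are placeholders, and newer GitLab releases use runner authentication tokens via --token in place of the legacy --registration-token flow), registering such a runner from the command line might look like this. The --docker-privileged flag is included because the Docker-in-Docker workflow used later in this tutorial requires privileged mode:

$ gitlab-runner register \
    --non-interactive \
    --url "https://gitlab.example.com/" \
    --registration-token "YOUR_PROJECT_TOKEN" \
    --executor "docker" \
    --docker-image "docker:25.0" \
    --docker-privileged \
    --run-untagged="true"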

2. Create a Dockerfile

Once you've created your project, clone it to your machine, then save the following sample Dockerfile to your project's root directory:

FROM httpd:alpine
RUN echo "<h1>Hello World</h1>" > /usr/local/apache2/htdocs/index.html
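Before committing, you can optionally check that the image builds and serves the page locally, assuming Docker is installed on your machine (the hello-httpd image name is just a placeholder):

$ docker build -t hello-httpd .
$ docker run --rm -d -p 8080:80 hello-httpd
$ curl http://localhost:8080
<h1>Hello World</h1>

The httpd:alpine base image serves on port 80 inside the container, so mapping it to a local port lets you verify the response before the pipeline ever runs.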

Next, use Git to commit your file and push it up to GitLab:

$ git add .
$ git commit -m "Add Dockerfile"
$ git push

This Dockerfile defines the image that will be built for our application within the CI/CD pipeline.

3. Create a GitLab CI/CD pipeline configuration

Now, you can set up your CI/CD pipeline to build your image when new changes are committed to your repository.

GitLab pipelines are configured using a .gitlab-ci.yml YAML file located in your repository's root directory. The following pipeline configuration uses Docker-in-Docker to build your image from your Dockerfile, inside the Docker container that's running the job. The image is then pushed to your project's GitLab Container Registry instance, which comes enabled by default.

stages:
  - build

variables:
  DOCKER_IMAGE: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA

build:
  image: docker:25.0
  stage: build
  services:
    - docker:25.0-dind
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $DOCKER_IMAGE .
    - docker push $DOCKER_IMAGE

There are a few points to note:

  • GitLab CI/CD jobs come with predefined variables that can be used to authenticate to various GitLab components in your project. Here, the automatically generated $CI_REGISTRY_USER and $CI_REGISTRY_PASSWORD short-lived credentials are used to authenticate to your project's Container Registry via the docker login command.
  • The $CI_REGISTRY_IMAGE variable provides the image URL reference that should be used for images stored in your project's Container Registry. In our custom $DOCKER_IMAGE variable, this is combined with the SHA of the commit the pipeline is running for, producing the final tag that will be assigned to the built image. This ensures that each commit produces a distinctly tagged image.
  • The image field within the build job definition defines the Docker image that will be used to run the job --- in this case, docker:25.0, so the Docker CLI is available. Because Docker-in-Docker (DinD) functionality is required, the DinD image is also referenced as a service for the job. This is a GitLab mechanism that allows networked applications (in this case, the Docker daemon) to be started in a different container but accessed from the job container. It's required because GitLab overrides the job container's entrypoint to run your script, so the Docker daemon won't start in the job container. Depending on how your runner is configured, TLS-related variables may also be needed, as sketched below.
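If your jobs fail with errors along the lines of "Cannot connect to the Docker daemon", your runner may need the TLS connection between the job container and the DinD service to be configured. GitLab's documentation covers this via the DOCKER_TLS_CERTDIR variable; treat the following variables block as a sketch to adapt rather than a guaranteed drop-in (it assumes your runner also mounts a /certs/client volume):

variables:
  DOCKER_IMAGE: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
  # Instructs the dind service to generate TLS certificates under /certs;
  # the job container picks these up to authenticate to the daemon
  DOCKER_TLS_CERTDIR: "/certs"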

Copy the pipeline file, save it as .gitlab-ci.yml, and commit it to your repository. After you push the changes to GitLab, head to the Build > Pipelines page in the web UI --- you should see that your first pipeline is running.


Wait for the pipeline to complete, then click the green tick under the Stages column to view the logs from your build job.


You can see from the logs that Docker is being used to execute the job, so your script is running within a container. GitLab selects the docker:25.0 image specified in your pipeline's config file, then starts the DinD service so the Docker daemon is accessible. Your script instructions are then followed to build your image and push it to your project's Container Registry.

4. View your image

Visit the Deploy > Container Registry page of the GitLab web interface to see your pushed image.


The image is tagged with the unique SHA of your last commit. Now, you can make changes to your project, push them to GitLab, and have your CI/CD pipeline automatically build an updated image.
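To consume the image elsewhere, you can authenticate to the registry and pull it by tag. This is a sketch: replace the registry host and repository path with the values shown on your project's Container Registry page, and the tag with a real commit SHA:

$ docker login registry.gitlab.com
$ docker pull registry.gitlab.com/<your-group>/<your-project>:<commit-sha>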

Although this is only a simple example, it shows the most common way to use CI/CD and Docker together. Your CI/CD platform might require a different pipeline configuration from the one shown here, but you should still be able to achieve an equivalent result.
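For instance, a roughly equivalent GitHub Actions workflow could look like the sketch below, which uses the docker/login-action and docker/build-push-action actions to push to GitHub's built-in ghcr.io registry. The action versions and branch name are assumptions to adapt, and GitHub-hosted runners already provide a Docker daemon, so no Docker-in-Docker service is needed:

# .github/workflows/build.yml (sketch)
name: build-image
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - uses: actions/checkout@v4
      # Authenticate to ghcr.io with the workflow's built-in token
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      # Build from the repository's Dockerfile and push a per-commit tag
      # (note: ghcr.io requires lowercase image names)
      - uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ghcr.io/${{ github.repository }}:${{ github.sha }}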

Best practices for CI/CD with Docker

Although CI/CD pipelines and Docker containers complement each other well, there are still several pitfalls you could encounter as you combine them.

Here are a few Docker CI/CD best practices that will improve performance, security, and scalability:

  1. Beware of the risks of using Docker-in-Docker --- Docker-in-Docker requires the use of privileged mode. This means that root in a job container is effectively root on your host too, allowing an attacker with access to your pipeline's config file to define a job that uses sensitive privileges.
  2. Lock down your Dockerized build environments --- Because privileged mode is insecure, you should restrict your CI/CD environments to known users and projects. If this isn't feasible, then instead of using Docker, you could try using a standalone image builder like Buildah to eliminate the risk. Alternatively, configuring rootless Docker-in-Docker can mitigate some --- but not all --- of the security concerns surrounding privileged mode.
  3. Run your CI/CD jobs in parallel to improve performance --- Correctly configuring your CI/CD platform for parallel jobs will reduce pipeline duration, improving throughput. Containerization means all jobs will run in their own isolated environment, so they're less likely to cause side effects for each other when executed concurrently.
  4. Monitor resource utilization and pipeline scaling --- An active CI/CD server that runs many jobs concurrently can experience high resource utilization. Running jobs inside containers makes it easier to scale to additional hosts, as you don't need to manually replicate your build environments on each runner machine.
  5. Correctly configure build caches and persistent storage --- Docker-in-Docker limits the effectiveness of Docker's build cache because each job starts a fresh Docker daemon with an empty cache, so layers built during earlier pipeline runs aren't reused. Configuring your builds to use previously pushed images as a cache source will improve efficiency and performance, as sketched after this list.
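To illustrate the last point, one common approach in GitLab is to pull the most recently pushed image and pass it to docker build with the --cache-from flag. This sketch assumes the same variables as the tutorial pipeline above and that a latest tag is also maintained; the BUILDKIT_INLINE_CACHE build argument embeds cache metadata in pushed images so later builds can reuse their layers:

build:
  image: docker:25.0
  stage: build
  services:
    - docker:25.0-dind
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    # Seed the cache from the last pushed image; tolerate a miss on the first run
    - docker pull $CI_REGISTRY_IMAGE:latest || true
    - docker build --cache-from $CI_REGISTRY_IMAGE:latest --build-arg BUILDKIT_INLINE_CACHE=1 -t $DOCKER_IMAGE -t $CI_REGISTRY_IMAGE:latest .
    - docker push $DOCKER_IMAGE
    - docker push $CI_REGISTRY_IMAGE:latest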

Keeping these tips in mind will ensure you can use CI/CD and containers without the two techniques negatively affecting each other.

Using a managed CI/CD platform

Selecting a managed CI/CD platform removes the hassle of configuring and scaling your pipelines. Spacelift is a specialized CI/CD platform for IaC scenarios that goes above and beyond the capabilities of generic CI/CD systems. Spacelift enables developer freedom by supporting multiple IaC providers, version control systems, and public cloud endpoints with precise guardrails for universal control.

Instead of manually maintaining build servers, you can simply connect the platform to your repositories. You can then test and apply infrastructure changes directly from your pull requests. It eliminates administration overheads and provides simple self-service developer access within policy-defined guardrails.

Read more about why DevOps engineers recommend Spacelift. If you want to learn more about Spacelift, create a free account today, or book a demo with one of our engineers.

Key points

CI/CD and containers are two key technologies in the modern software delivery lifecycle. They each get even better when combined: correctly configured Docker environments provide isolation and security for your CI/CD jobs, while those jobs are also the ideal place to build the Docker images needed by your apps.

In this guide, we've given an overview of how to use Docker containers with GitLab CI/CD, but we've only touched on the broader topic. Check out the other content on our blog to learn more techniques and best practices for CI/CD and Docker, or try using Spacelift to achieve automated CI/CD for your IaC tools.

Written by James Walker
