
One common use case for CI pipelines is building the Docker images you'll use to deploy your application. GitLab CI is a great choice for this: it offers an integrated dependency proxy that caches upstream images for faster pipelines, and a built-in container registry to store the images you build.

In this guide, we'll show you how to set up Docker builds that use both the above features. The steps you need to take vary slightly depending on the GitLab Runner executor type you'll use for your pipeline. We'll cover the Shell and Docker executors below.

Building With the Shell Executor

If you're using the Shell executor, make sure you've got Docker installed on the machine that hosts your runner. The executor works by running regular shell commands using the docker binary on the Runner's host.

Head to the Git repository for the project you want to build images for. Create a .gitlab-ci.yml file at the root of the repository. This file defines the GitLab CI pipeline that will run when you push changes to your project.

Add the following content to the file:

stages:
  - build

docker_build:
  stage: build
  script:
    - docker build -t example.com/example-image:latest .
    - docker push example.com/example-image:latest

This simple configuration is enough to demonstrate the basics of pipeline-powered image builds. GitLab automatically clones your Git repository into the build environment, so running docker build uses your project's Dockerfile and makes the repository's content available as the build context.

After the build completes, you can docker push the image to your registry. Otherwise it would only be available to the local Docker installation that ran the build. If you're using a private registry, run docker login first to supply proper authentication details:

script:
  - docker login -u $DOCKER_REGISTRY_USER -p $DOCKER_REGISTRY_PASSWORD

Define the values of the two credential variables by heading to Settings > CI/CD > Variables in the GitLab web UI. Click the blue "Add variable" button to create a new variable and assign a value. GitLab will make these variables available in the shell environment used to run your job.

Screenshot of defining a GitLab CI variable
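Putting these pieces together, a complete .gitlab-ci.yml for the Shell executor might look like the sketch below. The example.com registry host and image name are placeholders; swap in your own values.

stages:
  - build

docker_build:
  stage: build
  script:
    # Authenticate using the CI/CD variables defined in the project settings
    - docker login -u $DOCKER_REGISTRY_USER -p $DOCKER_REGISTRY_PASSWORD example.com
    # Build from the repository's Dockerfile, then push to the registry
    - docker build -t example.com/example-image:latest .
    - docker push example.com/example-image:latest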

Building With the Docker Executor

GitLab Runner's Docker executor is commonly used to provide a completely clean environment for each job. The job will execute in an isolated container so the docker binary on the Runner host will be inaccessible.

The Docker executor gives you two possible strategies for building your image: either use Docker-in-Docker, or bind the host's Docker socket into the Runner's build environment. You then use the official Docker container image as your job's image, making the docker command available in your CI script.

Docker-in-Docker

Using Docker-in-Docker (DinD) to build your images gives you a fully isolated environment for each job. The Docker daemon that performs the build runs inside the docker:dind service container that GitLab Runner starts alongside your job's container, keeping it separate from the Docker installation on the host.

You need to register your GitLab Runner Docker executor with privileged mode enabled to use DinD. Add the --docker-privileged flag when you register your runner:

sudo gitlab-runner register -n \
  --url https://example.com \
  --registration-token $GITLAB_REGISTRATION_TOKEN \
  --executor docker \
  --description "Docker Runner" \
  --docker-image "docker:20.10" \
  --docker-volumes "/certs/client" \
  --docker-privileged

Within your CI pipeline, add the docker:dind image as a service. This makes Docker available in a separate container that's linked to your job's container. You'll then be able to use the docker command to build images with the Docker instance running in the docker:dind container.

services:
  - docker:dind

docker_build:
  stage: build
  image: docker:latest
  script:
    - docker build -t example-image:latest .
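Depending on the Docker versions in play, you may also need a variables block so the job and the docker:dind service agree on where TLS certificates are generated and shared; this is what the /certs/client volume added during registration is for. A hedged sketch:

variables:
  # Tells docker:dind to generate TLS certificates here and share them with the job container
  DOCKER_TLS_CERTDIR: "/certs"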

Using DinD gives you fully isolated builds that can't impact each other or your host. The major drawback is more complicated caching behavior: each job gets a new environment where previously built layers won't be accessible. You can partially address this by trying to pull the previous version of your image before you build, then using the --cache-from build flag to make the pulled image's layers available as a cache source:

docker_build:
  stage: build
  image: docker:latest
  script:
    - docker pull $CI_REGISTRY_IMAGE:latest || true
    - docker build --cache-from $CI_REGISTRY_IMAGE:latest -t $CI_REGISTRY_IMAGE:latest .
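If your runner's Docker version builds with BuildKit (the default in newer releases), the pulled image only works as a cache source when it was built with inline cache metadata embedded. A sketch of that variant, assuming BuildKit is active:

script:
  - docker pull $CI_REGISTRY_IMAGE:latest || true
  # BUILDKIT_INLINE_CACHE=1 embeds cache metadata so later --cache-from pulls can reuse layers
  - docker build --build-arg BUILDKIT_INLINE_CACHE=1 --cache-from $CI_REGISTRY_IMAGE:latest -t $CI_REGISTRY_IMAGE:latest .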

Socket Bind Mounts

Mounting your host's Docker socket into your job's environment is an alternative option when you're using the Docker executor. This gives you seamless caching and removes the need to add the docker:dind service to your CI config.

To set this up, register your Runner with a --docker-volumes flag that binds the host's Docker socket to /var/run/docker.sock inside job containers:

sudo gitlab-runner register -n \
  --url https://example.com \
  --registration-token $GITLAB_REGISTRATION_TOKEN \
  --executor docker \
  --description "Docker Runner" \
  --docker-image "docker:20.10" \
  --docker-volumes /var/run/docker.sock:/var/run/docker.sock

Now jobs that run with the docker image will be able to use the docker binary as normal. Operations will actually occur on your host machine; containers created by your job become siblings of the job's container instead of children.

This is effectively similar to using the shell executor with your host's Docker installation. Images will reside on the host, facilitating seamless use of regular docker build layer caching.
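Because the job talks straight to the host's daemon, the pipeline itself stays minimal; a sketch of a socket-bound job, which needs no services block at all:

docker_build:
  stage: build
  image: docker:20.10
  script:
    # Runs against the host daemon via the mounted /var/run/docker.sock
    - docker build -t example-image:latest .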

While this approach can lead to higher performance, less configuration, and none of the limitations of DinD, it comes with its own unique issues. Most prominent among these are the security implications: jobs could execute arbitrary Docker commands on your Runner host, so a malicious project in your GitLab instance might run docker run -it malicious-image:latest or docker rm -f $(docker ps -aq) with devastating consequences.

GitLab also cautions that socket binding can cause problems when jobs run concurrently. This occurs when you rely on containers being created with specific names. If two instances of a job run in parallel, the second one will fail as the container name will already exist on your host.
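One way to mitigate this is to include a job-specific value, such as GitLab's predefined $CI_JOB_ID variable, in any container names your script assigns; a hedged sketch:

script:
  # Each job run gets a unique container name, so concurrent pipelines can't collide
  - docker run -d --name test-container-$CI_JOB_ID example-image:latest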

You should consider using DinD instead if you expect either of these issues to be troublesome. Although DinD is no longer generally recommended, it can make more sense for public-facing GitLab instances that run concurrent CI jobs.

Pushing Images to GitLab's Registry

GitLab projects have the option of an integrated registry which you can use to store your images. You can view the registry's content by navigating to Packages & Registries > Container Registry in your project's sidebar. If you don't see this link, enable the registry by going to Settings > General > Visibility, project features, permissions and activating the "Container registry" toggle.

Screenshot of enabling GitLab's container registry for a project

 

GitLab automatically sets environment variables in your CI jobs which let you reference your project's container registry. Adjust the script section to log in to the registry and push your image:

script:
  - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
  - docker build -t $CI_REGISTRY_IMAGE:latest .
  - docker push $CI_REGISTRY_IMAGE:latest

GitLab generates a secure set of credentials for each of your CI jobs. The $CI_JOB_TOKEN environment variable will contain an access token the job can use to connect to the registry as the gitlab-ci-token user. The registry server URL is available as $CI_REGISTRY.

The final variable, $CI_REGISTRY_IMAGE, provides the complete path to your project's container registry. This is a suitable base for your image tags. You can extend this variable to create sub-repositories, such as $CI_REGISTRY_IMAGE/production/api:latest.
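As an illustration, a job pushing to a hypothetical production/api sub-repository might also tag the image with the commit SHA via the predefined $CI_COMMIT_SHORT_SHA variable:

script:
  - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
  - docker build -t $CI_REGISTRY_IMAGE/production/api:latest -t $CI_REGISTRY_IMAGE/production/api:$CI_COMMIT_SHORT_SHA .
  - docker push $CI_REGISTRY_IMAGE/production/api:latest
  - docker push $CI_REGISTRY_IMAGE/production/api:$CI_COMMIT_SHORT_SHA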

 

Other Docker clients can pull images from the registry by authenticating using an access token. You can generate these on your project's Settings > Access Tokens screen. Add the read_registry scope, then use the displayed credentials to docker login to your project's registry.
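On another machine, logging in and pulling might look like the sketch below, where registry.example.com, the project path, and the bracketed values are placeholders for your registry URL, image path, and the token credentials shown in the GitLab UI:

docker login registry.example.com -u <token-name> -p <project-access-token>
docker pull registry.example.com/my-group/my-project/example-image:latest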

Using GitLab's Dependency Proxy

GitLab's Dependency Proxy provides a caching layer for the upstream images you pull from Docker Hub. It helps you stay within Docker Hub's rate limits by only pulling the content of images when they've actually changed. This will also improve the performance of your builds.

Screenshot of GitLab Dependency Proxy group settings

The Dependency Proxy is activated at the GitLab group level by heading to Settings > Packages & Registries > Dependency Proxy. Once it's enabled, prefix image references in your .gitlab-ci.yml file with $CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX to pull them through the proxy:

docker_build:
  stage: build
  image: $CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX/docker:latest
  services:
    - name: $CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX/docker:dind
      alias: docker

That's all there is to it! GitLab Runner automatically logs into the dependency proxy registry so there's no need to manually supply your credentials.

GitLab will now cache your images, giving you improved performance as well as resiliency to network outages. Note that the services definition has been adjusted too: the expanded name form is used so the full proxied image path can be given, and an alias of docker is set so the service stays reachable at the hostname the docker CLI in your script section expects.

While we've now set up the proxy for the images used directly by our job stages, more work is needed to pull the base image in your Dockerfile through it too. A regular instruction like this won't go through the proxy:

FROM ubuntu:latest

To add this final piece, use Docker's build arguments to make the dependency proxy URL available when stepping through the Dockerfile:

ARG GITLAB_DEPENDENCY_PROXY
FROM ${GITLAB_DEPENDENCY_PROXY}/ubuntu:latest

Then modify your docker build command to define the variable's value:

script:
  - >
    docker build
      --build-arg GITLAB_DEPENDENCY_PROXY=${CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX}
      -t example-image:latest .

Now your base image will be pulled through the dependency proxy too.

Summary

Docker image builds are easily integrated into your GitLab CI pipelines. After initial Runner configuration, docker build and docker push commands in your job's script section are all you need to create an image with the Dockerfile in your repository. GitLab's built-in container registry gives you private storage for your project's images.

Screenshot of a GitLab CI job log that built a Docker image

Beyond basic builds, it's worth integrating GitLab's dependency proxy to accelerate performance and avoid hitting Docker Hub rate limits. You should also check the security of your installation by assessing whether your selected method allows untrusted projects to run commands on your Runner host. Although it carries its own issues, Docker-in-Docker is the safest approach when your GitLab instance is publicly accessible or accessed by a large user base.