Running Docker inside Docker lets you build images and start containers within an already containerized environment. There are two possible approaches to achieve this depending on whether you want to start child or sibling containers.
Access to Docker from inside a Docker container is most often desirable in the context of CI and CD systems. It’s common to host the agents that run your pipeline inside a Docker container. You’ll end up using a Docker-in-Docker strategy if one of your pipeline stages then builds an image or interacts with containers.
The Docker-in-Docker Image
Docker is provided as a self-contained image via the docker:dind tag on Docker Hub. Starting this image will give you a functioning Docker daemon installation inside your new container. It'll operate independently of your host's daemon that's running the dind container, so docker ps inside the container will give different results to docker ps on your host.
docker run -d --privileged --name docker \
  -e DOCKER_TLS_CERTDIR=/certs \
  -v docker-certs-ca:/certs/ca \
  -v docker-certs-client:/certs/client \
  docker:dind
Using Docker-in-Docker in this way comes with one big caveat: you need to use privileged mode. This constraint applies even if you're using rootless containers. Privileged mode is activated by the --privileged flag in the command shown above.
Using privileged mode gives the container complete access to your host system. This is necessary in a Docker-in-Docker scenario so your inner Docker is able to create new containers. It may be an unacceptable security risk in some environments though.
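Once a dind daemon is running, other containers on the same Docker network can drive it over TLS. Here's a minimal sketch based on the defaults baked into the official images; the network name dind-net and container name dind are arbitrary choices, and the network alias docker matters because the client image looks for its daemon at that host name by default:

```shell
# Create a user-defined network shared by the daemon and its clients
docker network create dind-net

# Run the dind daemon on that network under the alias "docker", which
# the official client image expects as its default TLS host name
docker run -d --privileged --name dind \
  --network dind-net --network-alias docker \
  -e DOCKER_TLS_CERTDIR=/certs \
  -v docker-certs-ca:/certs/ca \
  -v docker-certs-client:/certs/client \
  docker:dind

# A throwaway client that talks to the inner daemon, not the host's
docker run --rm --network dind-net \
  -e DOCKER_TLS_CERTDIR=/certs \
  -v docker-certs-client:/certs/client:ro \
  docker:latest version
```

The shared docker-certs-client volume is how the client obtains the TLS material the daemon generates on first start.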
There are other issues with dind too. Certain systems may experience conflicts with Linux Security Modules (LSMs) such as AppArmor and SELinux. This occurs when the inner Docker applies LSM policies that the outer daemon can't anticipate.
Another challenge concerns container filesystems. The outer daemon runs atop your host's regular filesystem, such as ext4. All its containers, including the inner Docker daemon, sit on a copy-on-write (CoW) filesystem though. This can create incompatibilities if the inner daemon is configured to use a storage driver that can't operate on top of an existing CoW filesystem.
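One way to sidestep the CoW-on-CoW problem is to give the inner daemon its own storage and a forgiving storage driver. A sketch, assuming the docker:dind entrypoint forwards extra arguments to the inner dockerd (the volume name dind-data is an arbitrary choice):

```shell
# Back the inner daemon's /var/lib/docker with a named volume, which
# lives on the host's real filesystem rather than the container's CoW
# layer, and force the vfs storage driver — slow and disk-hungry, but
# it works on top of any underlying filesystem
docker run -d --privileged --name docker \
  -v dind-data:/var/lib/docker \
  docker:dind --storage-driver=vfs
```

In practice the named volume alone often suffices, letting the inner daemon keep a faster driver such as overlay2.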
Mounting Your Host’s Docker Socket Instead
The challenges associated with dind are best addressed by avoiding its use altogether. In many scenarios, you can achieve the intended effect by mounting your host's Docker socket into a regular container:
docker run -d --name docker \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker:latest
The Docker CLI inside the docker image interacts with the Docker daemon socket it finds at /var/run/docker.sock. Mounting your host's socket to this path means docker commands run inside the container will execute against your existing Docker daemon.
This means containers created by the inner Docker will reside on your host system, alongside the Docker container itself. All containers will exist as siblings, even if it feels like the nested Docker is a child of the parent. Running docker ps will produce the same results, whether it's run on the host or inside your container.
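You can observe the sibling relationship directly. A sketch, assuming Docker is installed on the host; the names docker-cli and inner-test are arbitrary, and tail -f /dev/null simply keeps the socket-equipped container alive in the background:

```shell
# Start a long-lived container with the host's Docker socket mounted in
docker run -d --name docker-cli \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker:latest tail -f /dev/null

# Launch a "nested" container from inside it
docker exec docker-cli docker run -d --name inner-test alpine sleep 60

# Observe it from the host: it appears as a sibling, not a child
docker ps --filter name=inner-test
```

Because inner-test belongs to the host daemon, stopping docker-cli does not stop it.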
This technique mitigates the implementation challenges of dind. It also removes the need to use privileged mode, although mounting the Docker socket is itself a potential security concern. Anything with access to the socket can send instructions to the Docker daemon, providing the ability to start containers on your host, pull images, or delete data.
When to Use Each Approach
dind has historically been widely used in CI environments. It means the “inner” containers have a layer of isolation from the host. A single CI runner container supports every pipeline container without polluting the host’s Docker daemon.
While it often works, this approach is fraught with side effects and isn't the intended use case for dind. It was added to ease the development of Docker itself, not to provide end-user support for nested Docker installations.
According to Jérôme Petazzoni, the creator of the dind implementation, the socket-based approach should be your preferred solution. Bind mounting your host's daemon socket is safer, more flexible, and just as feature-complete as starting a dind daemon.
If your use case means you absolutely require dind, there is a safer way to deploy it. The modern Sysbox project is a dedicated container runtime that can nest other runtimes without using privileged mode. Sysbox containers become VM-like, so they're able to support software that usually runs bare-metal on a physical or virtual machine. This includes Docker and Kubernetes, without any special configuration.
Running Docker within Docker is a relatively common requirement. You’re most likely to see it while setting up CI servers which need to support container image builds from within user-created pipelines.
docker:dind gives you an independent Docker daemon running inside its own container. It effectively creates child containers that aren't directly visible from the host. While it seems to offer strong isolation, dind actually harbors many edge-case issues and security concerns. These are due to Docker's operating system interactions.
Mounting your host's Docker socket into a container that includes the docker binary is a simpler and more predictable alternative. This lets the nested Docker process start containers that become its own siblings. No further settings are needed when you use the socket-based approach.