Docker is a popular development tool because it simplifies starting isolated instances of your application with a reproducible configuration. It can also be used in production, where it helps ensure your live deployments are identical to your development environment.

Getting a container into production isn't always as straightforward as running docker run on your local machine. It's not a great idea to manually push images to a registry, connect to a remote Docker host, and start your containers by hand. That relies on human intervention, so it's time consuming and error prone.

In this guide, we'll look at three strategies you can use to automate Docker deployments and maintain a consistent configuration. These approaches can be scripted as part of a CI pipeline to start new containers each time your code changes. You'll need to build your Docker images and push them to a registry as the first stage in your script, then use one of the techniques below to pull the image and start containers in your production environment.
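That first stage might look something like the sketch below, assuming you've already authenticated to a registry at registry.example.com and your CI system exposes the commit hash as a variable (the image name and variable name here are hypothetical):

# Build the image, tagging it with both the commit hash and "latest"
docker build -t registry.example.com/app:$COMMIT_SHA -t registry.example.com/app:latest .

# Push both tags so your production environment can pull the new version
docker push registry.example.com/app:$COMMIT_SHA
docker push registry.example.com/app:latest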

1. Docker Compose Over SSH

Docker Compose lets you start multiple containers with a single command. Moreover, Compose is configured via a YAML file, which helps you version your changes and guarantee reproducible deployments.

You may have already used Compose as a local development tool. You need to create a docker-compose.yml file in your working directory, then add one or more services that define the containers to start:

version: "3"
services:
  app:
    image: example.com/app:latest
    ports:
      - "80:80"
  database:
    image: mysql:8.0
    expose:
      - 3306

Once you've got a Compose file, use the docker-compose up -d command to launch your containers. If you modify the file, repeat the command to apply your changes. Compose will update or replace containers to achieve the new declared state.

Adding the --pull flag instructs Compose to try to pull updated images before starting containers. You can also use --force-recreate to force the creation of new containers, even if their underlying configuration hasn't changed.

How does all this relate to production deployments? It means you can use Compose as part of your CI pipeline to effortlessly start containers that satisfy the state you declare in your docker-compose.yml file. Running docker-compose up -d --pull in each pipeline run will give you a set of containers that each run the latest version of their image.


There are several ways you can implement this method. The simplest and safest route is to install Docker and Compose on your production host, then connect to it over SSH. You'd need to use your CI provider's settings to store SSH credentials as variables accessible to your pipeline. You'd then configure the SSH client in your pipeline, copy the docker-compose.yml file to your remote host, and run the docker-compose up command.

Here's a sample script:

# Prepare SSH: load the deploy key into an agent and trust the host
mkdir -p ~/.ssh && chmod 700 ~/.ssh
eval $(ssh-agent -s)
echo "$SSH_PRIVATE_KEY" | ssh-add -
echo "$SSH_HOST_KEY" > ~/.ssh/known_hosts

# Copy the Compose file to the server and start the containers remotely
scp docker-compose.yml ci-user@example.com:/home/ci-user/docker-compose.yml
ssh ci-user@example.com "docker-compose up -d"

Alternatively, you could use Docker contexts to run the Compose binary locally, within your pipeline's environment. This would require you to expose the Docker socket on your remote host; as this can be a security risk, the approach is generally less favorable in situations where SSH could also be used.

Following this method would have you install Docker and Compose on the host that runs your pipelines. Within your pipeline script, you'd register and select a Docker context that points to your remote production host. The connection details would need to be supplied as variables set in your CI provider's settings panel. With the context selected, you'd run docker-compose up -d in your pipeline's environment but see the command executed against the remote server.
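A sketch of those pipeline steps, assuming the remote daemon is reachable at example.com:2376 and TLS client certificates have already been provisioned (the context name and certificate paths are hypothetical):

# Register a context that points the local Docker CLI at the remote host
docker context create production \
  --docker "host=tcp://example.com:2376,ca=ca.pem,cert=cert.pem,key=key.pem"

# Select the context; Compose commands now execute against the remote daemon
docker context use production
docker-compose up -d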

2. Using a Platform-as-a-Service (PaaS)

Adopting a Platform-as-a-Service (PaaS) offering is another approach to running Docker containers in production. You can self-host your own with solutions like Dokku or choose a hosted offering such as Amazon ECS, DigitalOcean App Platform, or Heroku.

A PaaS abstracts away the complexity of building images, maintaining detailed configurations, and provisioning your own Docker hosts. You either use Git to push your repository directly to the platform or run a CLI command to upload your changes. The PaaS handles container creation from your source assets, Dockerfiles, or a platform-specific config file.

PaaS solutions are a great way to get online quickly with minimal hands-on Docker interaction. They're easy to integrate into your CI pipeline, and most major providers offer sample scripts to get you started. However, it is possible to outgrow a PaaS, which could mean you need to rethink your infrastructure in the future.

The steps to automate deployment to your chosen platform will vary by provider. If you're using Dokku or a similar PaaS with Git integration, your CI script could be as simple as two lines:

git remote add dokku dokku@example.com:app-name
git push dokku master

The script adds your Dokku server as a Git remote and pushes up the repository's content. Dokku will automatically build an image from your Dockerfile and start container instances. You'd need to add your CI server's SSH public key to Dokku for this to work; otherwise, your CI script would be unable to authenticate to the platform.
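Registering the key is a one-off step performed on the Dokku server itself; here's a sketch, assuming the CI server's public key has been copied there as ci.pub (the key name and path are hypothetical):

# On the Dokku host: register the CI server's public key so it can push deployments
dokku ssh-keys:add ci-server /root/ci.pub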

3. Orchestration With Kubernetes/Docker Swarm

Using an orchestrator such as Kubernetes or Docker Swarm is arguably the most common way of running live container instances. These tools are purpose-built to deploy and scale containers in production environments.

Orchestrators remove the complexities of infrastructure management, letting you focus on your application and its components. Similarly to Docker Compose, they take a declarative approach to state configuration where you define what the end state should look like. The orchestrator determines the correct sequence of actions to achieve that state.

Kubernetes is the most popular orchestrator. One way to interact with Kubernetes clusters is with Kubectl, the official CLI management tool. Kubectl lets you apply manifest files in YAML format that define the container resources to create in your cluster.

Here's a simple manifest that creates a single container instance:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: demo
          image: example.com/image:latest

You can use Kubectl to apply this manifest to a cluster:

kubectl apply -f manifest.yaml

Subsequent changes to the file are applied by repeating the command. Kubernetes automatically takes the necessary actions to achieve the new declared state.

This makes Kubernetes a great option for automated production deployments. You can use kubectl apply within your pipelines to take the manifests in your repository and apply the declared state to your cluster. Creating a new image tag for each commit would see Kubernetes pull that image and start new containers for the deployment.

To set this up, you'd need to supply the contents of a Kubeconfig file as a pipeline variable. This gives Kubectl the credentials to use for your cluster connection. The local Kubectl binary would then operate against your remote cluster.
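Here's a sketch of what that could look like in a pipeline, assuming the Kubeconfig is stored base64-encoded in a variable called KUBECONFIG_DATA and images are tagged with the commit hash (both variable names are hypothetical):

# Reconstruct the Kubeconfig from the pipeline variable and point Kubectl at it
echo "$KUBECONFIG_DATA" | base64 -d > kubeconfig.yaml
export KUBECONFIG=$PWD/kubeconfig.yaml

# Apply the manifests in the repository to the remote cluster
kubectl apply -f manifest.yaml

# Switch the deployment to the image built for this commit and wait for the rollout
kubectl set image deployment/demo demo=example.com/image:$COMMIT_SHA
kubectl rollout status deployment/demo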

Docker Swarm is another orchestration option, which comes integrated with Docker. You can set up a Swarm stack using the same docker-compose.yml file described earlier. Similar deployment approaches can then be used, either connecting to the Swarm host over SSH or using a Docker context to change the target of local Docker binaries.
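Once your shell is pointed at a Swarm manager, deploying or updating the stack is a single command; a sketch with a hypothetical stack name:

# Deploy (or update) the stack described by the Compose file
docker stack deploy --compose-file docker-compose.yml my-app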

Orchestrators are much more complex than plain Compose or a managed PaaS. In the case of Kubernetes, you need to learn new abstractions, terminology, and config file formats before you can deploy your containers. However, clusters also give you extra capabilities that make it easier to maintain applications over the long term. You can easily scale replicas over multiple hosts, build in redundancy, and aggregate logs and metrics.
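Scaling, for instance, becomes a one-line operation once the cluster is running; here's a sketch using the demo deployment from the manifest above:

# Run three replicas of the container; the scheduler places them across the cluster's nodes
kubectl scale deployment/demo --replicas=3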

Orchestration is therefore the best option for larger systems running multiple containers. That doesn't mean the industry attention these tools receive should push you into using Kubernetes for every deployment. Compose or a PaaS will be easier to set up, reason about, and maintain for smaller use cases where you're less concerned about scalability and vendor lock-in.

Summary

We've looked at three different ways of running containers as production workloads. The implementation details will vary depending on your chosen strategy, supporting toolchain, and CI environment, so we've omitted a precise description of how to set up automation as part of your workflow. However, all three can be easily integrated into a CI pipeline that runs each time you merge or push your code.

Orchestration using a tool like Kubernetes has rapidly become the preferred method for scalable deployments of systems running multiple containers. While it can vastly simplify the operation of the services it's designed for, it also brings a significant learning curve and maintenance overhead, so you shouldn't jump in without considering alternatives.

Smaller systems formed from a few components may see better results from using Compose to start containers with a reproducible config on an existing Docker host. This gives some of the benefits of Kubernetes, such as declarative configuration, without the extra complexity. You may "ease in" to orchestration later by adding Docker Swarm support to your existing Compose file, letting you start multiple distributed replicas of containers.

Finally, Platform-as-a-Service options accelerate application deployment without making you think about granular container details. These services offer the prospect of full infrastructure automation from minimal configuration. They can be restrictive in the long term, though, so think about how your solution will grow over time before committing yourself.

When deploying any containers into production, you'll also need to consider image hosting and config injection. You can use a public registry service to make your images available in your production environment. Alternatively, you could run your own private registry and supply credentials as part of your CI pipeline. Config values are usually provided as environment variables, which you can define in your CI provider's settings screen.
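As a final sketch, both concerns can be handled with CI variables; the registry address and variable names below are hypothetical:

# Authenticate to a private registry using credentials stored as CI variables
echo "$REGISTRY_PASSWORD" | docker login registry.example.com -u "$REGISTRY_USER" --password-stdin

# Inject a config value into the container as an environment variable at start-up
docker run -d -e APP_SECRET="$APP_SECRET" registry.example.com/app:latest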