Docker is the best known containerization platform but it doesn’t exist in isolation. An entire ecosystem of complementary tools and spin-off projects has sprung up around the shift to containers.

Here’s a round-up of 10 open-source analyzers, indexers, and orchestrators that make Docker even more convenient and useful. Whether you’re still early in your Docker journey, or you’re a seasoned practitioner using the tech in production, you might find something here that’s worth including alongside your next project.

Docker Compose

Docker Compose is the only tool on this list that’s actually part of Docker. Compose is an accessible way to build “stacks” of Docker containers that you can manage in unison.

The standard Docker CLI lets you interact with individual containers. Compose provides a similar interface for working with containers in aggregate. This makes it possible to easily control systems that require multiple containers, such as an app server, database, and caching layer. You define these components as services in a docker-compose.yml file, then use the docker-compose binary to start them all together:

version: "3"
services:
  app:
    image: example-image:latest
    ports:
      - 80:80
  database:
    image: mysql:latest
    expose:
      - 3306
  cache:
    image: redis:latest
    expose:
      - 6379

Running docker-compose up -d creates three containers, one each for the app, database, and cache services. They're automatically networked together. This is much more manageable than repeating the docker run command for every container.
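Once the stack is up, the same aggregate interface covers the rest of its lifecycle. As a sketch, a few of the everyday commands look like this:

```sh
# View the status of the stack's containers
docker-compose ps

# Stream the combined logs from all three services
docker-compose logs -f

# Stop and remove the stack's containers and network
docker-compose down
```

Each command acts on every service defined in your docker-compose.yml, so you never have to address the containers individually.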


Portainer

Portainer is a GUI for your Docker installation. It’s a browser-based tool that offers a complete interface for viewing, creating, and configuring your containers. You can also interact with other Docker object types such as images, networks, and volumes.

Portainer's dashboard

Portainer is deployed as its own Docker image:

docker run -d -p 9000:9000 --name=portainer \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v portainer_data:/data \
    portainer/portainer-ce:latest

This sets up a Portainer instance which you can access at localhost:9000. It works by mounting your host’s Docker socket into the Portainer container. Portainer can therefore use the socket to manage the containers running on your host.


Kubernetes

Kubernetes is a distributed container orchestration platform. It’s a common way to move Dockerized workloads into production environments. A Kubernetes cluster consists of multiple Nodes (physical or virtual machines) that are each eligible to host container instances.

Kubernetes gives you straightforward scaling and distribution. Whereas plain Docker exposes individual containers on a single machine, Kubernetes manages multiple containers that run seamlessly over several Nodes.

As Kubernetes is OCI-compatible, you can deploy your existing Docker images into your cluster:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
        - name: example
          image: example-image:latest
          ports:
            - containerPort: 80

kubectl apply -f deployment.yml

This example creates a Kubernetes Deployment of the image. The replicas: 3 field means you’ll end up with three container instances, providing redundancy for your system. The Deployment is broadly similar to running docker run -d -p 80:80, although that command would only start a single container on one machine.
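Changing the replica count later doesn't require editing the manifest. Assuming a Deployment named example, scaling is a one-line operation:

```sh
# Scale the Deployment to five replicas
kubectl scale deployment/example --replicas=5

# Watch the new Pods come up across the cluster's Nodes
kubectl get pods --watch
```

Kubernetes reconciles the change automatically, starting or stopping containers until the running count matches the requested one.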


Traefik

Traefik is an HTTP reverse proxy that’s easy to integrate with container workloads. It automatically reconfigures itself with new routes as you create and remove containers.

Traefik lets you attach labels to your containers to define domain names and forwarding behavior. The software will create appropriate proxy routes each time a container with matching labels joins the Traefik network.
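As a sketch of how this looks in practice, a Compose service might carry labels like the following (the domain, router name, and port here are hypothetical, and Traefik v2 label syntax is assumed):

```yaml
services:
  app:
    image: example-image:latest
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.app.rule=Host(`app.example.com`)"
      - "traefik.http.services.app.loadbalancer.server.port=80"
```

When this container starts on the Traefik network, Traefik reads the labels and begins routing requests for app.example.com to the container's port 80, with no proxy restart required.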

The Traefik web UI

Traefik also offers load balancing capabilities, support for WebSockets, a REST API, integrated metrics, and a web-based dashboard so you can monitor your traffic in real-time. It’s a good way to expose multiple public-facing containers via domain names using a single Docker installation.


Trivy

Trivy is a container image scanner that uncovers known vulnerabilities. Scanning your images before you deploy them into production gives you confidence your workloads are safe and secure.

Trivy is available as its own Docker image. You can start a simple scan of the example-image:latest image using the following command:

docker run --rm \
    -v trivy-cache:/root/.cache/ \
    -v /var/run/docker.sock:/var/run/docker.sock \
    aquasec/trivy:latest image example-image:latest

Screenshot of a Trivy report

Trivy identifies the software packages in your image, looks for vulnerabilities, and produces a report containing each issue’s CVE ID, severity, and impacted version range. You should upgrade each package to the FIXED VERSION indicated by Trivy. Running the tool after you build an image is therefore an easy way to boost the security of your deployments.
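Trivy can also emit its report as JSON (via its --format json option), which makes the results easy to post-process in CI. The following sketch filters a report down to the fixable vulnerabilities; the embedded report excerpt is a hypothetical example, much smaller than real output:

```python
import json

# Hypothetical excerpt of a Trivy JSON report; a real report
# contains many Results entries and far more fields per finding.
report = json.loads("""
{
  "Results": [
    {
      "Target": "example-image:latest (debian 11)",
      "Vulnerabilities": [
        {
          "VulnerabilityID": "CVE-2023-0001",
          "PkgName": "openssl",
          "InstalledVersion": "1.1.1n",
          "FixedVersion": "1.1.1t",
          "Severity": "HIGH"
        }
      ]
    }
  ]
}
""")

# Collect every vulnerability that has a fixed version available,
# so you know exactly which packages to upgrade.
fixable = [
    (v["PkgName"], v["VulnerabilityID"], v["Severity"], v["FixedVersion"])
    for result in report.get("Results", [])
    for v in result.get("Vulnerabilities", []) or []
    if v.get("FixedVersion")
]

for pkg, cve, severity, fixed in fixable:
    print(f"{severity}: {pkg} is affected by {cve}, fixed in {fixed}")
```

A script like this can fail a CI pipeline whenever the fixable list is non-empty, blocking vulnerable images from reaching production.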


Syft

Syft generates SBOMs (software bills of materials) from Docker images. These are lists of all the OS packages and programming language dependencies included in the image.


image of the Syft tool for generating SBOM reports

Syft helps you audit your software supply chain. Docker makes it easy to reference remote content and layer up complex filesystems without you necessarily realizing it. It’s even harder for your image’s users to work out what lies inside.

Recent high-profile attacks have demonstrated that overly long software supply chains are a serious threat. Running Syft on your images keeps you informed of their composition, letting you assess whether you can remove some packages or switch to a more minimal base image.
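Syft can output its SBOM as JSON, which lends itself to automated auditing. The sketch below groups an SBOM's packages by ecosystem; the embedded SBOM excerpt is a hypothetical example, and real output lists many more artifacts and fields:

```python
import json

# Hypothetical excerpt of a Syft SBOM in its JSON output format.
sbom = json.loads("""
{
  "artifacts": [
    {"name": "bash", "version": "5.1-2", "type": "deb"},
    {"name": "openssl", "version": "1.1.1n", "type": "deb"},
    {"name": "express", "version": "4.18.2", "type": "npm"}
  ]
}
""")

# Summarize the image's composition by package ecosystem,
# e.g. OS packages (deb) versus language dependencies (npm).
by_type = {}
for artifact in sbom["artifacts"]:
    by_type.setdefault(artifact["type"], []).append(
        f'{artifact["name"]}@{artifact["version"]}'
    )

for pkg_type, packages in sorted(by_type.items()):
    print(f"{pkg_type}: {', '.join(packages)}")
```

A summary like this makes it obvious when an image is dragging in whole ecosystems you don't need, which is exactly the signal for switching to a slimmer base image.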


Dive

On a related theme, Dive simplifies Docker image filesystem inspections. Images are fairly opaque by default so it’s common to start a container to work out what lies inside. This could put you at risk if the image contains a malicious process.

image of using Dive to view a Docker image filesystem

Dive lets you navigate an image’s filesystem using an interactive tree view in your terminal. You can also browse individual layers to see how the image has been constructed. Viewing just the changes in a single layer helps you visualize the changes applied by each build stage, even if you don’t have access to the original Dockerfile.
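Dive can itself run from its official Docker image, so inspecting an image might look like this (example-image:latest stands in for whatever image you want to examine):

```sh
docker run --rm -it \
    -v /var/run/docker.sock:/var/run/docker.sock \
    wagoodman/dive:latest example-image:latest
```

Mounting the Docker socket lets the Dive container read the target image from your host without you ever starting a container from that image.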


Flocker

Flocker is a volume manager that unifies the management of containers and their persistent data. It supports multi-host environments, simplifying the migration of volumes between hosts as containers get rescheduled.

This portability ensures volumes are available wherever containers are. Traditional Docker volumes can’t leave the host they’re created on, which pins your containers to that host too.

Distributed storage support makes it easier to transition containers into production. Flocker is ideal for stateful containers that need to be scaled in distributed environments while maintaining compatibility with varied storage engines. It supports backends including Amazon EBS, Google GCE, and OpenStack Block Storage.


Dokku

Dokku uses Docker to let you self-host your own Platform-as-a-Service (PaaS). It automatically spins up Docker containers when you push code using Git.

As a complete application platform, Dokku lets you map domains, add SSL, deploy multiple environments via Git branches, and configure auxiliary services such as databases. It’s a great alternative to commercial platforms like Heroku and Firebase that lets you keep your production deployments on your own hardware.

Setting up a Dokku server lets you start applications in isolated containers without learning all the intricacies of manual container management. You can concentrate on writing and committing code using established Git-based workflows. Adding your Dokku server as a Git remote means you can git push to deploy your changes, either locally in your terminal or as part of a CI pipeline.
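That workflow might look like the following, where the server hostname and app name are hypothetical:

```sh
# Add your Dokku server as a Git remote for this repository
git remote add dokku dokku@dokku.example.com:my-app

# Deploy by pushing your branch; Dokku builds the image
# and starts the container on the server
git push dokku main
```

Every subsequent push triggers a fresh build and deployment, so releasing becomes indistinguishable from your normal Git routine.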


Hadolint

Hadolint is a Dockerfile linter that checks that your build stages adhere to the recommended best practices. Running Hadolint can uncover common configuration issues that make your builds slower and less secure. Hadolint uses ShellCheck internally to also lint the shell scripts in your Dockerfile RUN instructions.

You can download Hadolint as a precompiled binary, try it on the web, or use its own Docker image, hadolint/hadolint. Start a scan by supplying the path to a Dockerfile to the Hadolint binary:

hadolint Dockerfile

image of the Hadolint Dockerfile linter

Hadolint will scan your Dockerfile for problems and present the results in your terminal. Some of the bundled rules include requiring absolute WORKDIR paths, mandating unique COPY --from aliases, and warning when a Dockerfile doesn’t switch to a non-root user before it ends. Running Hadolint regularly will result in safer and more performant image builds that comply with community standards.
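As an illustrative sketch, here's the kind of pattern Hadolint flags, along with a form that passes (the rule codes shown are Hadolint's own; the image and directory are hypothetical):

```dockerfile
# Flagged: DL3007 warns against the mutable "latest" tag
FROM node:latest
# Flagged: DL3003 prefers WORKDIR over "cd" inside RUN
RUN cd /app && npm install

# Preferred form that satisfies both rules
FROM node:18
WORKDIR /app
RUN npm install
```

Pinning the base image tag makes builds reproducible, while WORKDIR keeps the working directory explicit for every subsequent instruction.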


Summary

Docker is a great developer tool but it gets even better when paired with other popular projects. Community initiatives can boost the security of your images, help you spot issues in your Dockerfiles, and provide versatile GUIs for managing your containers.

New tools are constantly emerging so it’s worth browsing code sharing sites like GitHub to discover upcoming projects. The Docker topic is a good starting point for your exploration.

James Walker
James Walker is a contributor to How-To Geek DevOps. He is the founder of Heron Web, a UK-based digital agency providing bespoke software development services to SMEs. He has experience managing complete end-to-end web development workflows, using technologies including Linux, GitLab, Docker, and Kubernetes.