While Docker is much lighter weight than traditional VMs, too many containers can quickly consume your host’s resources. Here’s how to check hardware utilization and monitor the process counts inside your containers.

The Docker Stats Command

Docker’s built-in mechanism for viewing resource consumption is docker stats. This command gives you a tabulated view of your containers, with a live feed of each one’s critical metrics.

The command’s output includes CPU consumption and a measure of each container’s network and disk I/O accumulated over its lifetime. The memory column shows live memory usage alongside the memory limit configured for the container; when no limit is set, you’ll see the total RAM available on your host. The final column, PIDS, counts the number of processes the container has started.

[Image: docker stats command output]

Stopped containers are excluded by default. You can add them to the table by passing the -a (--all) flag to the command. CPU and memory use will be unavailable but you’ll be able to see the metrics that are aggregated through the container’s life, such as network activity.

You can view the stats of single or multiple containers in the same way as with other common Docker CLI commands. Pass a space-separated list of container IDs or names. The output will show metrics for only the specified containers, removing everything else.

docker stats first-container second-container

docker stats supports custom formatting so you can select just the columns you need. The --format flag accepts a Go placeholder string that lets you create custom data visualizations.

Here’s how to show container names with CPU and memory use metrics:

docker stats --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"

The table formatting type prepends column headers to the output. Omit this if you want the raw data without tabulation. If you use the same formatting string regularly, consider adding it as a shell alias for ease of access.
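For example, the formatting string above could be wrapped in a small shell function instead of an alias, which also lets you pass extra arguments through (the name dstats is arbitrary):

```shell
# Reusable wrapper around the custom format string
# (add this to your .bashrc or .zshrc to make it permanent)
dstats() {
    docker stats --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}" "$@"
}

# Usage: dstats                # all running containers
#        dstats my-container   # a single container
```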

Getting More Info

More detailed information about a container’s resource usage can be acquired by inspecting its control group (cgroup). This kernel mechanism tracks the consumption of a group of processes, exposing collected metrics in a pseudo-filesystem.

Two versions of the cgroup system are available. v2 is only supported on Docker 20.10 or later, running on Linux kernel 4.15 or newer. Older releases use v1. Documentation on v2 is still incomplete, so v1 can be easier to work with.

To find a container’s cgroup, you need to determine which version is active and know the container’s full ID. This must be the complete version, not the truncated form shown in docker ps and docker stats output. You can find it by running docker ps --no-trunc.
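One way to check which cgroup version is active is to look for the cgroup.controllers file, which only exists at the root of a v2 unified hierarchy:

```shell
# cgroup v2 exposes a cgroup.controllers file at the hierarchy root;
# its absence implies the host is using cgroup v1
if [ -f /sys/fs/cgroup/cgroup.controllers ]; then
    cgroup_version=v2
else
    cgroup_version=v1
fi
echo "Active cgroup version: $cgroup_version"
```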

Combine the container ID with the path to your system’s control groups directory. Paths for v1 and v2 are documented by Docker. Then you can inspect the pseudo-filesystem to find detailed resource stats. Here’s the path to find a container’s memory use when using cgroups v1:

cat /sys/fs/cgroup/memory/docker/<full container id>/memory.stat

The memory.stat file provides detailed information on memory consumption, limits, paging, and swap use.

Finding Resource Metrics With the Docker API

A more straightforward way of accessing this information is via the Docker API. This is enabled by default via the Docker daemon’s Unix socket. The /containers/{id}/stats endpoint provides in-depth resource utilization details. Replace {id} with your container’s ID.

curl --unix-socket /var/run/docker.sock "http://localhost/v1.41/containers/{id}/stats" | jq

We’re using curl in this example, instructing it to use the Docker daemon socket via the --unix-socket flag. The Docker API returns data in JSON format; piping it into jq makes it more readable in the terminal. By default the stats endpoint streams a new JSON object every second; append ?stream=false to the URL if you only want a single snapshot.

[Image: Docker container resource use data returned by the Docker API]

Each API response contains detailed information on the container’s current and past resource utilization. It’s numerical data intended for consumption by machine tools. Values are presented “raw” and may not be immediately intelligible without further processing or ingest into a dashboard tool.
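As an example of the processing involved, CPU utilization isn’t reported directly: it has to be derived by comparing the current counters against the previous reading. Here’s a sketch of that delta calculation using hypothetical sample values in place of a real API response:

```shell
# Hypothetical counter values from a stats API response (nanoseconds):
total_usage=400000000        # cpu_stats.cpu_usage.total_usage
pre_total_usage=350000000    # precpu_stats.cpu_usage.total_usage
system_usage=8000000000     # cpu_stats.system_cpu_usage
pre_system_usage=7000000000 # precpu_stats.system_cpu_usage
online_cpus=4               # cpu_stats.online_cpus

cpu_delta=$(( total_usage - pre_total_usage ))
system_delta=$(( system_usage - pre_system_usage ))

# CPU % = (container delta / system delta) * CPU count * 100
cpu_percent=$(awk -v c="$cpu_delta" -v s="$system_delta" -v n="$online_cpus" \
    'BEGIN { printf "%.2f", (c / s) * n * 100 }')
echo "CPU: ${cpu_percent}%"
```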

Viewing Running Processes

A separate command, docker top, lets you see the current process list of a specified container:

docker top my-container

It enumerates the container’s process list at the time the command is run. Unlike stats, it does not provide a live data stream. You can see each process’s ID, the user that started it, and the command it’s running.

[Image: running docker top to view a container's process list]

You can also get this information from the API. Use the same approach as described above, replacing the /containers/{id}/stats endpoint with /containers/{id}/top.
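The top endpoint returns a JSON object with a Titles array (the column headers) and a Processes array (one entry per process). A quick sketch of pulling data out of it with jq, using a hypothetical response in place of a real API call:

```shell
# Hypothetical /containers/{id}/top response (shape per the Docker API):
response='{"Titles":["PID","USER","COMMAND"],"Processes":[["1","root","nginx: master process"],["29","nginx","nginx: worker process"]]}'

# Count the processes running in the container
process_count=$(echo "$response" | jq '.Processes | length')
echo "Processes: $process_count"
```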

Docker doesn’t provide an integrated way of viewing per-process resource utilization. If you want this information, it’s best to attach to the container and install top or htop. These tools will give you a much deeper view of the container’s activity.

docker exec -it my-container sh

# substitute your package manager's commands
apt update && apt install htop -y

The Docker daemon collects and exposes real-time and cumulative resource consumption statistics about your containers. You can access a basic tabular view of the data using docker stats, but for more advanced readouts the Docker API or manual control group inspection is needed.

You can list a container’s running processes too, but the docker top command does not provide any resource metrics. This means it’s of limited use when inspecting why a container is consuming excessive CPU or memory. You’ll need to attach to the container manually and inspect from within.

Docker’s tools target general monitoring and observability, not detailed inspection to facilitate resolution of issues. Most of the time, they’re perfectly adequate but a good knowledge of wider Linux monitoring tools which work inside containers will be more effective when solving problems.

James Walker
James Walker is a contributor to How-To Geek DevOps. He is the founder of Heron Web, a UK-based digital agency providing bespoke software development services to SMEs. He has experience managing complete end-to-end web development workflows, using technologies including Linux, GitLab, Docker, and Kubernetes.