Monitoring the resource usage of your Kubernetes cluster is essential so you can track performance and understand whether your workloads are operating efficiently. The
kubectl top command retrieves current metrics from your cluster, letting you access the basics in your terminal.
This command won’t usually work straightaway in a fresh Kubernetes environment. It depends on the Metrics Server addon being installed in your cluster. This component collects metrics from your Nodes and Pods and provides an API to retrieve the data.
In this article we’ll show how to install Metrics Server and access its measurements using
kubectl top. You’ll be able to view the CPU and memory consumption of each of your Nodes and Pods.
Adding Metrics Server to Kubernetes
Kubernetes distributions don’t normally come with Metrics Server built-in. You can easily check whether your cluster already has support by trying to run
$ kubectl top node
error: Metrics API not available
The error message confirms that the metrics server API is not present in the cluster.
Metrics Server is maintained within the Kubernetes Special Interest Group (SIG) community. It can be added to your cluster using its plain YAML manifest or the project’s Helm chart.
We’ll use the manifest file for this tutorial. Run the following Kubectl command to install the Metrics Server:
$ kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
Metrics Server will now start collecting and exposing Kubernetes resource consumption data. If the installation fails with an error, you should check your cluster meets the project’s requirements. Metrics Server has specific dependencies which may not be supported in some environments.
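One way to confirm the installation succeeded is to wait for the Metrics Server Deployment to finish rolling out. The default manifest installs it into the kube-system namespace; the output shown here is illustrative:

```shell
$ kubectl -n kube-system rollout status deployment/metrics-server
deployment "metrics-server" successfully rolled out
```

If the rollout never completes, inspecting the Pod's logs with kubectl logs is a good next step.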
Many Kubernetes distributions bundle Metrics Server support using their own addons system. You can use this command to easily add Metrics Server to a Minikube cluster, for example:
$ minikube addons enable metrics-server
Using image k8s.gcr.io/metrics-server/metrics-server:v0.4.2
The 'metrics-server' addon is enabled
Retrieving Metrics With Kubectl Top
With Metrics Server installed, you can now run
kubectl top to access the information it collects.
Use the node sub-command to get the current resource utilization of each of the Nodes in your cluster:
$ kubectl top node
NAME       CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
minikube   249m         3%     847Mi           2%
The pod sub-command provides individual metrics for each of your Pods:
$ kubectl top pod
NAME    CPU(cores)   MEMORY(bytes)
nginx   120m         8Mi
This will surface Pods in the
default namespace. Add the
--namespace flag if you’re interested in Pods in a specific namespace:
$ kubectl top pod --namespace demo-app
NAME    CPU(cores)   MEMORY(bytes)
nginx   0m           2Mi
The --all-namespaces flag is also supported to list every Pod in your cluster.
Metrics may take a few minutes to become available after new Pods are created. There’s a delay in the metrics server’s pipeline so it doesn’t become a performance issue itself.
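If you're unsure whether the Metrics API has come up yet, one way to check is to inspect the APIService object registered by the install manifest. The output shown here is illustrative; the AVAILABLE column should read True once metrics can be served:

```shell
$ kubectl get apiservice v1beta1.metrics.k8s.io
NAME                     SERVICE                      AVAILABLE   AGE
v1beta1.metrics.k8s.io   kube-system/metrics-server   True        2m
```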
The kubectl top command doesn’t overwhelm you with dozens of metrics. It focuses on the bare essentials of CPU and memory usage. This minimal output can be adequate when you simply need data fast, such as identifying the Pod that’s caused a spike in overall utilization.
One source of confusion can be the
100m values reported in the
CPU(cores) field. The command displays CPU usage in millicores. A measurement of
1000m always means 100% consumption of a single CPU core.
500m indicates 50% consumption of one core, while
2000m means two cores are being occupied.
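As a quick sanity check on this arithmetic, the conversion can be reproduced in the terminal. The 4-core (4000m) Node used here is a made-up example:

```shell
# 500m is half of one core (1000m), and an eighth of a
# hypothetical 4-core (4000m) Node's total capacity.
awk 'BEGIN { printf "%.1f%% of one core\n", (500 / 1000) * 100 }'       # 50.0% of one core
awk 'BEGIN { printf "%.1f%% of node capacity\n", (500 / 4000) * 100 }'  # 12.5% of node capacity
```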
Changing the Object Sort Order
The kubectl top command can optionally sort the emitted object list by CPU or memory consumption. This makes it easier to quickly spot the Nodes or Pods that are exerting the highest pressure on cluster resources.
Use the --sort-by flag with either cpu or memory as its value to activate this behavior:
$ kubectl top pod --sort-by=memory
NAME      CPU(cores)   MEMORY(bytes)
nginx-1   249m         1790Mi
nginx-2   150m         847Mi
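kubectl top only prints a point-in-time snapshot. For a crude continuously refreshing view, you can wrap the sorted query in the common watch utility, assuming it's installed on your machine:

```shell
# Re-run the sorted query every five seconds; press Ctrl+C to stop.
$ watch -n 5 kubectl top pod --sort-by=memory
```

For anything beyond ad-hoc inspection, a dedicated monitoring stack is a better fit.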
Filtering the Object List
In common with other Kubectl commands, the
--selector flag lets you filter the object list to items with specific labels:
$ kubectl top pod --selector application=demo-app
NAME      CPU(cores)   MEMORY(bytes)
nginx-1   249m         1790Mi
nginx-2   150m         847Mi
In this example, only Pods that have the
application: demo-app label will be included in the output.
The = and != operators are supported. Multiple constraints can be applied by stringing them together as a comma-separated string, such as
application=demo-app,version!=1. Objects will only show up if they match all of the label filters in your query.
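Putting this together, a query using these example labels might look like the following. The quotes stop the shell from interpreting the ! character; the label names and values are hypothetical:

```shell
$ kubectl top pod --selector 'application=demo-app,version!=1'
```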
Getting the Utilization of a Specific Resource
The top node and
top pod sub-commands can both be passed the name of a specific Node or Pod to fetch. The current metrics associated with that item will be displayed in isolation.
Supply the object’s name as a plain argument to the command, straight after the sub-command:
$ kubectl top node minikube
NAME       CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
minikube   245m         3%     714Mi           2%
The kubectl top command surfaces essential resource consumption metrics for Nodes and Pods in your Kubernetes cluster. You can use it to quickly check the CPU and memory usage associated with each of your workloads. This information can be helpful to diagnose performance issues and identify when it’s time to add another Node.
Before using the command, you need to install the Kubernetes Metrics Server in your cluster. This provides the API that exposes resource utilization data. Enabling Metrics Server incurs a performance overhead, but it’s usually negligible: typically around 1m of CPU and 2MiB of memory per monitored Node, although this may vary with the workloads running in your specific environment.