Kubernetes clusters can encounter several issues while trying to pull your container images. When an error occurs, your Pods will enter an ImagePullBackOff state. Here's how to debug this common but cryptic message so you can get your services online.

How Image Pulls Work

Kubernetes needs to fetch an image when you create a new deployment or update an existing one with a different tag reference. Responsibility for pulling images lies with the Kubelet process on each worker node. Every image referenced by a Pod's manifest needs to be accessible to all the nodes in the cluster so that any of them could fulfil a container scheduling request.

The download could fail if the image path is incorrect, you're improperly authenticated, or the network goes down. When this happens, Kubernetes "pulls back" and schedules another download attempt. The delay before the next pull increases exponentially each time an attempt fails, up to a limit of five minutes.

If your Pod shows the ImagePullBackOff state, Kubernetes has had multiple successive image pull failures and is now waiting before it retries. The container won't be able to start until the image is available.
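
A quick way to spot the problem is the STATUS column of kubectl get pods. The namespace and Pod names below are placeholders, and the commented output is only indicative of what a failing Pod looks like.

# List Pods and check the STATUS column for ErrImagePull or ImagePullBackOff
kubectl get pods --namespace my-namespace

# NAME     READY   STATUS             RESTARTS   AGE
# my-pod   0/1     ImagePullBackOff   0          3m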

You can leave the Pod in this state if you know the issue is due to network conditions or another transient error. Kubernetes will eventually complete another retry and successfully acquire the image. If that's not the case, here's how to start debugging so you can bring your Pod up.

Check The Basics

First and foremost, it's worth checking the very basics. Is your resource manifest referencing a valid image which actually exists? Check the registry path and image tag for simple typos.
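
As a reminder of where that reference lives, it's the image field in each container spec. This is a minimal sketch using placeholder names; the registry host, repository path, and tag all need to match something that's actually published.

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - name: my-container
      # Format: [registry-host[:port]/]repository[:tag]
      # A typo in any part of this string will leave the Pod in ImagePullBackOff
      image: registry.example.com/my-image:latest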

You can inspect the internal Kubernetes state with the describe pod command in Kubectl. This gives you more information than get pod and the Kubernetes dashboard provide.

kubectl describe pod my-pod --namespace my-namespace

Changes in the Pod's lifecycle are displayed under the "Events" heading. The first event will be Scheduled; it should be followed by a Pulling event for the first pull attempt. After this, you'll see a Failed or BackOff event if the pull couldn't succeed. These will be repeated later in the list if Kubernetes is still in a back-off and retry cycle.

Screenshot showing a Kubernetes ImagePullBackOff event
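
In text form, the Events section looks roughly like the excerpt below. The ages, repeat counts, and message details will differ on your cluster; this is only an indicative sample for the placeholder image used throughout this article.

Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  3m                 default-scheduler  Successfully assigned my-namespace/my-pod to node-1
  Normal   Pulling    2m (x3 over 3m)    kubelet            Pulling image "registry.example.com/my-image:latest"
  Warning  Failed     2m (x3 over 3m)    kubelet            Error: ErrImagePull
  Normal   BackOff    30s (x5 over 3m)   kubelet            Back-off pulling image "registry.example.com/my-image:latest"
  Warning  Failed     30s (x5 over 3m)   kubelet            Error: ImagePullBackOff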

Reading the Message associated with these events often reveals the root cause of the problem. A manifest for image:tag not found message means the image name is valid but the tag you've specified isn't. If you see does not exist or no pull access, check that the registry and image paths are correct. When you're sure they're right, the issue is most likely incorrect authentication.
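
You can also list the same events on their own, without the rest of the describe output. This sketch assumes the Pod is called my-pod and lives in my-namespace.

# Show only the events attached to this Pod, oldest first
kubectl get events --namespace my-namespace \
    --field-selector involvedObject.name=my-pod \
    --sort-by=.metadata.creationTimestamp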

Managing Registry Logins

You need to be logged in before you pull private images. In Kubernetes, it's a two-step mechanism: create a secret containing credentials, then reference that secret in your Pod definitions.

The Pod field is called imagePullSecrets. It needs to reference a Kubernetes secret that provides login credentials for the registry. This secret should store a Docker-compatible JSON value.

apiVersion: v1
kind: Secret
type: kubernetes.io/dockerconfigjson
metadata:
  name: image-pull-secret
data:
  .dockerconfigjson: {{ "{\"auths\": {\"registry.example.com\": {\"username\": \"demo-user\", \"password\": \"my-password\"}}}" | b64enc }}
---
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - name: my-container
      image: registry.example.com/my-image:latest
  imagePullSecrets:
    - name: image-pull-secret

These manifests show how to create a secret that logs you into registry.example.com as demo-user with the password my-password. The Pod references the secret by its name. The Kubelet process on each of your cluster's nodes will use the credentials in the Docker config.json snippet when it pulls images from that registry.

The snippet needs to be Base64-encoded to be a valid Kubernetes secret value. You can use a pre-encoded value or, if you template your manifests with a tool like Helm, pipe the plain text through its b64enc function, as shown in the manifest above.
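
If you'd rather not assemble and encode the JSON by hand, Kubectl can generate an equivalent secret for you. The server, username, and password here are the same placeholder values used above.

# Create a kubernetes.io/dockerconfigjson secret directly from credentials
kubectl create secret docker-registry image-pull-secret \
    --namespace my-namespace \
    --docker-server=registry.example.com \
    --docker-username=demo-user \
    --docker-password=my-password

The resulting secret has the same structure as the manifest version, so the Pod's imagePullSecrets reference doesn't change.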

The type of credentials you use will depend on your registry. In many cases, password will actually be a personal access token or API key. Docker Hub requires an access token generated in your account settings if you've got two-factor authentication enabled.

Registry Rate Limits

If you've checked your registry URL, image tag name, and login credentials, you might be seeing ImagePullBackOff because of registry rate limits. Docker Hub now restricts you to 100 container pulls every six hours. This increases to 200 pulls per six hours if you supply your login credentials. That cap could be reached quickly in an active cluster with many frequently deployed Pods.

A pull failure due to a rate limit manifests in the same way as an authentication issue. You'll need to wait until enough of the six-hour window has elapsed for your allowance to reset. Kubernetes should then successfully pull the image, bringing your Pods up.
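
Docker documents a way to check where you stand by reading the rate limit headers returned for its ratelimitpreview/test image. The sketch below assumes curl and jq are available; per Docker's documentation, the HEAD request itself doesn't count against your allowance.

# Request an anonymous pull token for Docker Hub's rate-limit test image
TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
# Read the ratelimit-limit and ratelimit-remaining response headers
curl -s --head -H "Authorization: Bearer $TOKEN" \
    https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit

An anonymous token reflects the anonymous limit; supplying your Docker Hub credentials to the token request checks your authenticated allowance instead.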

For longer-term mitigation, consider running your own in-cluster registry or proxy to cache your images. This can significantly reduce how often you hit Docker's servers, helping you stay within the rate limits.
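
One lightweight option is the open source Distribution registry running as a pull-through cache for Docker Hub. This is a sketch of the idea; how you point your nodes or image references at the mirror depends on your container runtime and setup.

# Run the registry image as a caching proxy that forwards misses to Docker Hub
docker run -d --name registry-mirror -p 5000:5000 \
    -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
    registry:2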

Summary

Kubernetes Pods enter an ImagePullBackOff state when a node fails to pull an image. Kubelet will periodically retry the pull, so transient errors don't require any manual intervention.

When you're sure an ImagePullBackOff isn't just a temporary blip, begin by making sure the Pod's image path is valid. If that checks out, suspect incorrect login credentials or an exhausted rate limit allowance. Using kubectl describe will expose the sequence of events that led to the failure.

As a final option, you can try pulling the image yourself from another machine to make sure the remote registry server is actually up. If you can pull the image but your cluster can't, you might have more general networking issues preventing your nodes from reaching the registry.
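
Assuming the machine has Docker installed, pulling the same reference with the same credentials is enough to prove the registry is reachable and the login works. The registry, username, and image names here are the placeholders used throughout this article.

# Log in with the credentials your cluster uses, then pull the image manually
docker login registry.example.com --username demo-user
docker pull registry.example.com/my-image:latest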