Kubernetes Pods should operate without intervention, but sometimes you might hit a problem where a container isn't working the way it should. Restarting the Pod can help restore operations to normal.

Kubectl doesn't have a direct way of restarting individual Pods. Pods are meant to stay running until they're replaced as part of your deployment routine. This is usually when you release a new version of your container image.

Here are a few techniques you can use when you want to restart Pods without building a new image or running your CI pipeline. They can help when you think a fresh set of containers will get your workload running again.

Scaling the Replica Count

Although there's no kubectl restart command, you can achieve something similar by scaling the number of container replicas you're running. This works when your Pod is part of a Deployment, StatefulSet, ReplicaSet, or ReplicationController.

kubectl scale deployment my-deployment --replicas=0

kubectl scale deployment my-deployment --replicas=3

Scaling your Deployment down to 0 will remove all your existing Pods. Wait until the Pods have been terminated, using kubectl get pods to check their status, then rescale the Deployment back to your intended replica count. Kubernetes will create new Pods with fresh container instances.
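
If you're scripting this sequence, you can wait for the old Pods to terminate automatically instead of polling by hand. Here's a minimal sketch that assumes your Pods carry an app=my-deployment label; adjust the selector to match your own manifests:

kubectl scale deployment my-deployment --replicas=0

kubectl wait --for=delete pod -l app=my-deployment --timeout=120s

kubectl scale deployment my-deployment --replicas=3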

Downtimeless Restarts With Rollouts

Manual replica count adjustment comes with a limitation: scaling down to 0 creates a period of downtime where there are no Pods available to serve your users. An alternative is to initiate a rolling restart, which lets you replace a set of Pods without downtime. It's available with Kubernetes v1.15 and later.

kubectl rollout restart deployment my-deployment

When you run this command, Kubernetes will gradually terminate and replace your Pods while ensuring some containers stay operational throughout. The rollout's phased nature lets you keep serving customers while effectively "restarting" your Pods behind the scenes.

After the rollout completes, you'll have the same number of replicas as before, but each container will be a fresh instance. You can check the rollout's progress by using kubectl get pods to list the Pods and watch as they get replaced, or with kubectl rollout status, which reports the overall state.
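
Both commands are useful while a restart is in flight:

kubectl get pods --watch

kubectl rollout status deployment/my-deployment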

kubectl rollout works with Deployments, DaemonSets, and StatefulSets. Most of the time this should be your go-to option when you want to terminate your containers and immediately start new ones.
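
The syntax is the same for the other controller types. These examples assume a DaemonSet named my-daemonset and a StatefulSet named my-statefulset:

kubectl rollout restart daemonset my-daemonset

kubectl rollout restart statefulset my-statefulset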

(Ab)using ReplicaSet Monitoring

When your Pod's part of a ReplicaSet or Deployment, you can initiate a replacement by simply deleting it. The ReplicaSet will notice the Pod has vanished as the number of container instances will drop below the target replica count.

kubectl delete pod my-pod

The ReplicaSet will intervene to restore the desired replica count. It'll automatically create a new Pod, starting a fresh container to replace the old one.

This is technically a side effect: it's better to use the scale or rollout commands, which are more explicit and designed for this use case. Nonetheless, manual deletions can be a useful technique when you know the identity of a single misbehaving Pod inside a ReplicaSet or Deployment. A rollout would replace all the managed Pods, not just the one presenting a fault.
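
If you're not sure which Pod is at fault, sorting by restart count is one way to spot a crash-looping container. This sketch assumes single-container Pods, since it only inspects the first container's status:

kubectl get pods --sort-by='.status.containerStatuses[0].restartCount'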

You can expand upon the technique to replace all failed Pods using a single command:

kubectl delete pods --field-selector=status.phase=Failed

Any Pods in the Failed state will be terminated and removed. The ReplicaSet will notice the discrepancy and add new Pods to move the state back to the configured replica count. If you're confident the old Pods failed due to a transient error, the new ones should stay running in a healthy state.
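
By default this only affects the current namespace. On recent kubectl versions, delete also accepts the --all-namespaces flag if you want to clean up failed Pods across your whole cluster:

kubectl delete pods --field-selector=status.phase=Failed --all-namespaces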

Changing Pod Annotations

Another way of forcing Pods to be replaced is to add or modify an annotation in your Deployment's Pod template. Because the template changes, Kubernetes rolls out new Pods to apply it. Note that annotating a running Pod directly only updates its metadata in place, so it won't trigger a restart on its own.

You can use the kubectl patch command to set a template annotation:

kubectl patch deployment my-deployment -p '{"spec": {"template": {"metadata": {"annotations": {"app-version": "2"}}}}}'

This command updates the app-version annotation on the Pods created by my-deployment. Any change to the template, however small, is enough to start a new rollout.

Updating a Deployment's environment variables has a similar effect: the Pod template changes, so a rollout begins. This is ideal when you're already exposing an app version number, build ID, or deploy date in your environment.

kubectl set env deployment my-deployment APP_VERSION="2"
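
You can confirm the change with the --list flag, which prints a Deployment's environment variables without modifying anything:

kubectl set env deployment my-deployment --list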

Conclusion

Kubernetes Pods should usually run until they're replaced by a new deployment. As a result, there's no direct way to "restart" a single Pod. If one of your containers experiences an issue, aim to replace it instead of restarting. The subtle change in terminology better matches the stateless operating model of Kubernetes Pods.

Scale your replica count, initiate a rollout, or manually delete Pods from a ReplicaSet to terminate old containers and start fresh new instances. Rollouts are the preferred solution for modern Kubernetes releases but the other approaches work too and can be more suited to specific scenarios.

Foremost in your mind should be two questions: do you want all the Pods in your Deployment or ReplicaSet to be replaced, and is any downtime acceptable? Manual Pod deletions can be ideal if you want to "restart" an individual Pod without downtime, provided you're running more than one replica, whereas scaling is an option when the rollout command can't be used and you're not concerned about a brief period of unavailability.