Kubeadm is the official tool for installing and maintaining a cluster that's based on the default Kubernetes distribution. Clusters it creates don't automatically upgrade themselves, and disabling package updates for the Kubernetes components is part of the setup process. This means you have to manually migrate your cluster whenever a new Kubernetes release arrives.

In this article you'll learn the steps involved in a Kubernetes upgrade by walking through a transition from v1.24 to v1.25 on Ubuntu 22.04. The process is broadly similar for any Kubernetes minor release, but you should always refer to the official documentation before you start, in case a new release carries special requirements.

Identifying the Precise Version to Install

The first step is determining the version you're going to upgrade to. You can't skip minor versions - going directly from v1.23 to v1.25 is unsupported, for example - so you should pick the most recent patch release for the minor version that follows your cluster's current release.
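
If you're not sure which minor release your cluster is currently on, check what the nodes report first. Here's the check on this walkthrough's single-node cluster (your node names, ages, and versions will differ):

$ kubectl get nodes
NAME       STATUS   ROLES           AGE   VERSION
ubuntu22   Ready    control-plane   65m   v1.24.5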

You can discover the latest patch version with the following command:

$ apt-cache policy kubeadm | grep 1.25
     1.25.1-00 500
     1.25.0-00 500

This shows that 1.25.1-00 is the newest release of Kubernetes v1.25. Replace 1.25 in the command with the minor version that you're going to be moving to.

Upgrading the Control Plane

Complete this section on the machine that's running your control plane. Don't touch the worker nodes yet - they can continue using their current Kubernetes release while the control plane is updated. If you have multiple control plane nodes, run this sequence on the first one and follow the worker node procedure in the next section on the others.

Update Kubeadm

First, release the hold on the Kubeadm package and install the new version. Specify the exact release you identified earlier so that apt doesn't automatically grab the latest one, which could be an unsupported minor version bump.

$ sudo apt update
$ sudo apt-mark unhold kubeadm
$ sudo apt install -y kubeadm=1.25.1-00

Now reapply the hold so that apt upgrade doesn't deliver unwanted releases in the future:

$ sudo apt-mark hold kubeadm
kubeadm set on hold
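
To double-check which packages apt is holding back, you can list the holds (kubectl and kubelet will typically appear too, since the setup process holds them as well):

$ apt-mark showhold
kubeadm
kubectl
kubelet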

Verify that Kubeadm is now the expected version:

$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.1"...

Create the Upgrade Plan

Kubeadm automates the control plane upgrade process. First use the upgrade plan command to establish which versions you can migrate to. This checks your cluster to make sure it can accept the new release.

$ sudo kubeadm upgrade plan

The output is quite long but it's worth closely inspecting. The first section should report that all the Kubernetes components will upgrade to the version number you selected earlier. New versions may also be displayed for CoreDNS and etcd.

COMPONENT                 CURRENT   TARGET
kube-apiserver            v1.24.5   v1.25.1
kube-controller-manager   v1.24.5   v1.25.1
kube-scheduler            v1.24.5   v1.25.1
kube-proxy                v1.24.5   v1.25.1
CoreDNS                   v1.8.6    v1.9.3
etcd                      3.5.3-0   3.5.4-0

The end of the output includes a table that surfaces any required config changes. You may occasionally need to take manual action to adjust these config files and supply them to the cluster. Refer to the documentation for your release if you get a "yes" in the "Manual Upgrade Required" column.

API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no

This cluster is now ready to upgrade. The plan has confirmed that Kubernetes v1.25.1 is available and no manual actions are required. If no plan is produced or errors appear, check that you've installed the correct Kubeadm version - you might be trying to jump across more than one minor version.
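
If the plan does demand a manual config upgrade, it helps to inspect the current cluster-level settings before changing anything. As a sketch, assuming a recent kubeadm release where the kubelet configuration is stored in the kubelet-config ConfigMap (older releases used versioned names like kubelet-config-1.24):

$ kubectl get configmap kubelet-config -n kube-system -o yaml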

Applying the Upgrade Plan

Now you can instruct Kubeadm to proceed with applying the upgrade plan by running upgrade apply with the correct version number:

$ sudo kubeadm upgrade apply v1.25.1

A confirmation prompt will appear:

[upgrade/version] You have chosen to change the cluster version to "v1.25.1"
[upgrade/versions] Cluster version: v1.24.5
[upgrade/versions] kubeadm version: v1.25.1
[upgrade] Are you sure you want to proceed? [y/N]:

Press y to continue with the upgrade. The process may take several minutes while it pulls the images for the new components and restarts your control plane. You won't be able to reliably interact with your cluster's API during this time but any running Pods should remain operational on your Nodes.

Eventually you should see a success message:

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.25.1". Enjoy!

The control plane has now been upgraded.
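
Before moving on, you can spot-check that the restarted components are healthy; every Pod in the kube-system namespace should report a Running status (Pod names embed your node's hostname, so they'll differ from cluster to cluster):

$ kubectl get pods -n kube-system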

Upgrading Worker Nodes

Now you can upgrade your worker nodes. These steps also need to be performed on your control plane nodes, since they run the kubelet and kubectl packages too. Upgrade each node in sequence to minimize the impact of taking capacity out of your cluster. Pods will be rescheduled onto other nodes while each one is upgraded.

First cordon the node, then drain it of its existing Pods. Substitute your node's name for node-1 in the following commands.

$ kubectl cordon node-1
$ kubectl drain node-1 --ignore-daemonsets

This evicts the node's Pods and prevents new ones from being scheduled. (DaemonSet-managed Pods can't be evicted, so the --ignore-daemonsets flag tells kubectl to leave them in place.) The node is now inactive in your cluster.
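
You can confirm the node is out of scheduling rotation by checking its status column (names, roles, ages, and versions will vary):

$ kubectl get nodes
NAME     STATUS                     ROLES    AGE   VERSION
node-1   Ready,SchedulingDisabled   <none>   70m   v1.24.5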

Next, release the package manager hold on the kubeadm, kubectl, and kubelet packages, then install the new version of each one. The versions of all three packages should match exactly. Remember to reapply the holds once you've got the new releases.

$ sudo apt update
$ sudo apt-mark unhold kubeadm kubectl kubelet
$ sudo apt install -y kubeadm=1.25.1-00 kubectl=1.25.1-00 kubelet=1.25.1-00
$ sudo apt-mark hold kubeadm kubectl kubelet
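
A quick way to confirm the new binaries are in place is to ask kubelet for its version:

$ kubelet --version
Kubernetes v1.25.1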

Next use Kubeadm's upgrade node command to apply the upgrade and update your node's configuration:

$ sudo kubeadm upgrade node

Finally restart the Kubelet service and uncordon the node. It should rejoin the cluster and start accepting new Pods.

$ sudo systemctl daemon-reload
$ sudo systemctl restart kubelet
$ kubectl uncordon node-1

Checking Your Cluster

Once you've finished your upgrade, run kubectl version to check the active release matches your expectations:

$ kubectl version --short
Client Version: v1.25.1
...
Server Version: v1.25.1

Next check that all your nodes are reporting their new version and have entered the Ready state:

$ kubectl get nodes -o wide
NAME       STATUS   ROLES           AGE   VERSION   ...
ubuntu22   Ready    control-plane   70m   v1.25.1   ...

The upgrade is now complete.

Recovering From an Upgrade Failure

Occasionally an upgrade could fail even though Kubeadm successfully plans a pathway and verifies your cluster's health. Problems can occur if the upgrade gets interrupted or a Kubernetes component stops responding. Kubeadm should automatically roll back to the previous version if this happens.

The upgrade apply command can be safely repeated to retry a failed upgrade. It will detect the ways in which your cluster differs from the expected version, allowing it to attempt recovery from both total failures and partial upgrades.

When repeating the command doesn't work, you can try forcing the upgrade by adding the --force flag to the command:

$ sudo kubeadm upgrade apply v1.25.1 --force

This will allow the upgrade to continue in situations where requirements are missing or can no longer be fulfilled.

When disaster strikes and your cluster seems to be totally broken, you should be able to restore it using the backup files that Kubeadm writes automatically:

  • Copy the contents of /etc/kubernetes/tmp/kubeadm-backup-etcd-<date>-<time> into your /var/lib/etcd directory.
  • Copy the contents of /etc/kubernetes/tmp/kubeadm-backup-manifests-<date>-<time> into your /etc/kubernetes/manifests directory.

These backups can be used to manually restore the previous Kubernetes version to a working state.
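
As a sketch, assuming a single control plane node and the default paths above, the restore amounts to stopping kubelet, copying the backups into place, and starting kubelet again so the static Pods relaunch (substitute the <date>-<time> suffix of the backup you're restoring):

$ sudo systemctl stop kubelet
$ sudo cp -r /etc/kubernetes/tmp/kubeadm-backup-etcd-<date>-<time>/* /var/lib/etcd/
$ sudo cp -r /etc/kubernetes/tmp/kubeadm-backup-manifests-<date>-<time>/* /etc/kubernetes/manifests/
$ sudo systemctl start kubelet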

Summary

Upgrading Kubernetes with Kubeadm shouldn't be too stressful. Most of the process is automated, with your involvement limited to installing the new packages and checking the upgrade plan.

Before upgrading you should always consult the Kubernetes changelog and any documentation published by components you use in your cluster. Pod networking interfaces, Ingress controllers, storage providers, and other addons may all have incompatibilities with a new Kubernetes release or require their own upgrade routines.