Kubeadm is the official tool for installing and maintaining a cluster based on the default Kubernetes distribution. Created clusters don’t automatically upgrade themselves, and disabling package updates for Kubernetes components is part of the setup process. This means you have to manually migrate your cluster when a new Kubernetes release arrives.
In this article you’ll learn the steps involved in a Kubernetes upgrade by walking through a transition from v1.24 to v1.25 on Ubuntu 22.04. The process is usually similar for any Kubernetes minor release, but you should always refer to the official documentation before you start, in case a new release carries specialist requirements.
Identifying the Precise Version to Install
The first step is determining the version you’re going to upgrade to. You can’t skip minor versions - going directly from v1.23 to v1.25 is unsupported, for example - so you should pick the most recent patch release for the minor version that follows your cluster’s current release.
You can discover the latest patch version with the following command:
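The exact command isn’t preserved in this copy of the article; one way to list the kubeadm versions available from the Kubernetes Apt repository (assuming the repository is already configured on your machine) is apt-cache policy:

```shell
# Refresh the package index, then list available kubeadm versions,
# filtering to the 1.25 series
$ sudo apt update
$ apt-cache policy kubeadm | grep 1.25
```

Each line of the version table shows a package version alongside its Apt pin priority, which is where the 500 in the output below comes from.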
1.25.1-00 500
1.25.0-00 500
This shows that 1.25.1-00 is the newest release of Kubernetes v1.25. Replace 1.25 in the command with the minor version that you’re going to be moving to.
Upgrading the Control Plane
Complete this section on the machine that’s running your control plane. Don’t touch the worker nodes yet - they can continue using their current Kubernetes release while the control plane is updated. If you have multiple control plane nodes, run this sequence on the first one and follow the worker node procedure in the next section on the others.
Update Kubeadm
First release the hold on the Kubeadm package and install the new version. Specify the exact release identified earlier so that Apt doesn’t automatically grab the latest one, which could be an unsupported minor version bump.
$ sudo apt-mark unhold kubeadm
$ sudo apt install -y kubeadm=1.25.1-00
Now reapply the hold so that apt upgrade doesn’t deliver unwanted releases in the future:
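Reapplying the hold is a single apt-mark call:

```shell
# Prevent Apt from upgrading kubeadm until the hold is lifted again
$ sudo apt-mark hold kubeadm
```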
kubeadm set on hold
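You can then confirm that the upgraded package is active by checking its reported version:

```shell
# Print the version of the kubeadm binary now on your PATH
$ kubeadm version
```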
kubeadm version: &version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.1"…
Create the Upgrade Plan
Kubeadm automates the control plane upgrade process. First use the upgrade plan command to establish which versions you can migrate to. This checks your cluster to make sure it can accept the new release.
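The plan is generated without making any changes to the cluster:

```shell
# Check upgrade eligibility and show the available target versions
$ sudo kubeadm upgrade plan
```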
The output is quite long but it’s worth closely inspecting. The first section should report that all the Kubernetes components will upgrade to the version number you selected earlier. New versions may also be displayed for CoreDNS and etcd.
COMPONENT                 CURRENT   TARGET
kube-apiserver            v1.24.5   v1.25.1
kube-controller-manager   v1.24.5   v1.25.1
kube-scheduler            v1.24.5   v1.25.1
kube-proxy                v1.24.5   v1.25.1
CoreDNS                   v1.8.6    v1.9.3
etcd                      3.5.3-0   3.5.4-0
The end of the output includes a table that surfaces any required config changes. You may occasionally need to take manual action to adjust these config files and supply them to the cluster. Refer to the documentation for your release if you get a “yes” in the “Manual Upgrade Required” column.
API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
This cluster is now ready to upgrade. The plan has confirmed that Kubernetes v1.25.1 is available and no manual actions are required. If no plan is produced or errors appear, check that you’ve installed the correct Kubeadm version - you might be trying to jump more than one minor version at once.
Applying the Upgrade Plan
Now you can instruct Kubeadm to proceed with applying the upgrade plan by running upgrade apply with the correct version number:
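Supply the target version exactly as it was reported by the plan:

```shell
# Apply the upgrade to the control plane components on this node
$ sudo kubeadm upgrade apply v1.25.1
```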
[upgrade/versions] Cluster version: v1.24.5
[upgrade/versions] kubeadm version: v1.25.1
[upgrade] Are you sure you want to proceed? [y/N]:
Press y to continue with the upgrade. The process may take several minutes while it pulls the images for the new components and restarts your control plane. You won’t be able to reliably interact with your cluster’s API during this time, but any running Pods should remain operational on your Nodes.
The control plane has now been upgraded.
Upgrading Worker Nodes
Now you can upgrade your worker nodes. These steps also need to be performed on any additional control plane nodes. Upgrade each node in sequence to minimize the effect of capacity being removed from your cluster. Pods will be rescheduled onto other nodes while each one is upgraded.
First drain the node of its existing Pods and place a cordon around it. Substitute the name of your node for node-1 in the following commands.
$ kubectl drain node-1
This evicts the node’s Pods and prevents any new ones from being scheduled. The node is now inactive in your cluster.
Next release the package manager hold on the kubeadm, kubectl, and kubelet packages, then install the new version of each one. The versions of all three packages should match exactly. Remember to set the hold status again after you’ve got the new releases.
$ sudo apt-mark unhold kubeadm kubectl kubelet
$ sudo apt install -y kubeadm=1.25.1-00 kubectl=1.25.1-00 kubelet=1.25.1-00
$ sudo apt-mark hold kubeadm kubectl kubelet
Next use Kubeadm’s upgrade node command to apply the upgrade and update your node’s configuration:
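Unlike on the first control plane node, no version argument is needed here - the node fetches the target version from the cluster:

```shell
# Upgrade this node's kubelet configuration to match the cluster version
$ sudo kubeadm upgrade node
```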
Finally restart the Kubelet service and uncordon the node. It should rejoin the cluster and start accepting new Pods.
$ sudo systemctl restart kubelet
$ kubectl uncordon node-1
Checking Your Cluster
Once you’ve finished your upgrade, run kubectl version to check that the active release matches your expectations:
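The Server Version line in the output reports the release your control plane is actually running:

```shell
# Show the client and server versions
$ kubectl version
```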
…
Server Version: v1.25.1
Next check that all your nodes are reporting their new version and have entered the Ready state:
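A node listing shows each node’s status and kubelet version:

```shell
# List every node with its status, role, age, and Kubernetes version
$ kubectl get nodes
```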
NAME       STATUS   ROLES           AGE   VERSION
ubuntu22   Ready    control-plane   70m   v1.25.1
The upgrade is now complete.
Recovering From an Upgrade Failure
Occasionally an upgrade could fail even though Kubeadm successfully plans a pathway and verifies your cluster’s health. Problems can occur if the upgrade gets interrupted or a Kubernetes component stops responding. Kubeadm should automatically roll back to the previous version if this happens.
The upgrade apply command can be safely repeated to retry a failed upgrade. It will detect the ways in which your cluster differs from the expected version, allowing it to attempt a recovery of both total failures and partial upgrades.
When repeating the command doesn’t work, you can try forcing the upgrade by adding the --force flag to the command:
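Forcing should be a last resort, as it tells Kubeadm to proceed even when some of its preflight requirements aren’t met:

```shell
# Retry the upgrade, ignoring unmet requirements
$ sudo kubeadm upgrade apply v1.25.1 --force
```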
This will allow the upgrade to continue in situations where requirements are missing or can no longer be fulfilled.
When disaster strikes and your cluster seems to be totally broken, you should be able to restore it using the backup files that Kubeadm writes automatically:
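The backup paths aren’t preserved in this copy of the article; Kubeadm normally writes timestamped backups of the etcd data and static Pod manifests under /etc/kubernetes/tmp, which you can inspect with:

```shell
# List the backup directories Kubeadm created during the upgrade
$ sudo ls /etc/kubernetes/tmp
```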
These backups can be used to manually restore the previous Kubernetes version to a working state.
Summary
Upgrading Kubernetes with Kubeadm shouldn’t be too stressful. Most of the process is automated with your involvement limited to installing the new packages and checking the upgrade plan.
Before upgrading you should always consult the Kubernetes changelog and any documentation published by components you use in your cluster. Pod networking interfaces, Ingress controllers, storage providers, and other addons may all have incompatibilities with a new Kubernetes release or require their own upgrade routines.