This page explains how to upgrade a Kubernetes cluster created with kubeadm from version
1.15.x to version 1.16.x, and from version 1.16.x to 1.16.y (where y > x).
To see information about upgrading clusters created using older versions of kubeadm, please refer to the following pages instead:
The upgrade workflow at a high level is the following:

1. Upgrade the primary control plane node.
2. Upgrade additional control plane nodes.
3. Upgrade worker nodes.
kubeadm upgrade does not touch your workloads, only components internal to Kubernetes, but backups are always a best practice (see the etcd snapshot sketch below).

apt update
apt-cache policy kubeadm
# find the latest 1.16 version in the list
# it should look like 1.16.x-00, where x is the latest patch
yum list --showduplicates kubeadm --disableexcludes=kubernetes
# find the latest 1.16 version in the list
# it should look like 1.16.x-0, where x is the latest patch
# replace x in 1.16.x-00 with the latest patch version
apt-mark unhold kubeadm && \
apt-get update && apt-get install -y kubeadm=1.16.x-00 && \
apt-mark hold kubeadm
# replace x in 1.16.x-0 with the latest patch version
yum install -y kubeadm-1.16.x-0 --disableexcludes=kubernetes
Verify that the download works and has the expected version:
kubeadm version
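As noted above, backups are a best practice before upgrading. The following is a minimal sketch of snapshotting etcd and copying the kubeadm certificates and manifests; it assumes a stacked etcd managed by kubeadm and the default certificate paths, and the /var/backups destination is purely illustrative. Adjust endpoints and paths for your environment.

# Minimal sketch: snapshot etcd (assumes stacked etcd and default kubeadm PKI paths;
# /var/backups is an illustrative destination)
sudo ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd-pre-1.16-upgrade.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key

# Also keep a copy of the kubeadm certificates and static Pod manifests
sudo cp -a /etc/kubernetes /var/backups/kubernetes-pre-1.16-upgrade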
On the control plane node, run:
sudo kubeadm upgrade plan
You should see output similar to this:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.15.2
[upgrade/versions] kubeadm version: v1.16.0
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     1 x v1.15.2   v1.16.0
Upgrade to the latest version in the v1.16 series:
COMPONENT            CURRENT   AVAILABLE
API Server           v1.15.2   v1.16.0
Controller Manager   v1.15.2   v1.16.0
Scheduler            v1.15.2   v1.16.0
Kube Proxy           v1.15.2   v1.16.0
CoreDNS              1.3.1     1.6.2
Etcd                 3.3.10    3.3.15
You can now apply the upgrade by executing the following command:
kubeadm upgrade apply v1.16.0
_____________________________________________________________________
This command checks that your cluster can be upgraded, and fetches the versions you can upgrade to.
Note: kubeadm upgrade also automatically renews the certificates that it manages on this node. To opt out of certificate renewal, the flag --certificate-renewal=false can be used. For more information, see the certificate management guide.
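For example, to opt out of automatic certificate renewal, you would later invoke the apply step like this (replace x with the patch version you pick below):

sudo kubeadm upgrade apply v1.16.x --certificate-renewal=false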
Choose a version to upgrade to, and run the appropriate command. For example:
sudo kubeadm upgrade apply v1.16.x
Replace x with the patch version you picked for this upgrade.

You should see output similar to this:
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade/version] You have chosen to change the cluster version to "v1.16.0"
[upgrade/versions] Cluster version: v1.15.2
[upgrade/versions] kubeadm version: v1.16.0
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.16.0"...
Static pod: kube-apiserver-luboitvbox hash: 8d931c2296a38951e95684cbcbe3b923
Static pod: kube-controller-manager-luboitvbox hash: 2480bf6982ad2103c05f6764e20f2787
Static pod: kube-scheduler-luboitvbox hash: 9b290132363a92652555896288ca3f88
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests446257614"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing "apiserver-etcd-client" certificate
[upgrade/staticpods] Renewing "apiserver" certificate
[upgrade/staticpods] Renewing "apiserver-kubelet-client" certificate
[upgrade/staticpods] Renewing "front-proxy-client" certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-06-05-23-38-03/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-luboitvbox hash: 8d931c2296a38951e95684cbcbe3b923
Static pod: kube-apiserver-luboitvbox hash: 1b4e2b09a408c844f9d7b535e593ead9
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing certificate embedded in "controller-manager.conf"
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-06-05-23-38-03/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-luboitvbox hash: 2480bf6982ad2103c05f6764e20f2787
Static pod: kube-controller-manager-luboitvbox hash: 6617d53423348aa619f1d6e568bb894a
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing certificate embedded in "scheduler.conf"
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-06-05-23-38-03/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-luboitvbox hash: 9b290132363a92652555896288ca3f88
Static pod: kube-scheduler-luboitvbox hash: edf58ab819741a5d1eb9c33de756e3ca
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade/staticpods] Renewing certificate embedded in "admin.conf"
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.16.0". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
Manually upgrade your CNI provider plugin.
Your Container Network Interface (CNI) provider may have its own upgrade instructions to follow. Check the addons page to find your CNI provider and see whether additional upgrade steps are required.
This step is not required on additional control plane nodes if the CNI provider runs as a DaemonSet.
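If you are not sure whether your CNI provider runs as a DaemonSet, one quick (provider-dependent) check is to list the DaemonSets in the kube-system namespace, where most CNI providers install their agents:

kubectl get daemonsets --namespace kube-system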
One common issue you may encounter if you are using the Flannel network is that after the upgrade, the cluster nodes remain in the NotReady state.
If you run kubectl get nodes -o yaml, you may see the following line in the output:

message: docker: network plugin is not ready: cni config uninitialized
In this case, you will need to update the /etc/cni/net.d/10-flannel.conflist file to include the cniVersion line, as shown below:
"name": "cbr0",
"cniVersion": "0.2.0",
"plugins": [ ...
Same as on the first control plane node, but use:
sudo kubeadm upgrade node
instead of:
sudo kubeadm upgrade apply
Also, sudo kubeadm upgrade plan is not needed.
Note: If you're using the kubeadm support for certificate management, please add --certificate-renewal=true to the kubeadm upgrade node command line (see the related bug issue for details).
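Putting the note above together with the upgrade command, the invocation on an additional control plane node would look like this:

sudo kubeadm upgrade node --certificate-renewal=true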
Prepare the node for maintenance by marking it unschedulable and evicting the workloads:
kubectl drain $MASTER --ignore-daemonsets
# replace x in 1.16.x-00 with the latest patch version
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.16.x-00 kubectl=1.16.x-00 && \
apt-mark hold kubelet kubectl
# replace x in 1.16.x-0 with the latest patch version
yum install -y kubelet-1.16.x-0 kubectl-1.16.x-0 --disableexcludes=kubernetes
Restart the kubelet:
sudo systemctl restart kubelet
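Optionally, before uncordoning the node, confirm that the kubelet came back up and now reports the new version:

sudo systemctl status kubelet
kubelet --version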
Bring the node back online by marking it schedulable:
kubectl uncordon $MASTER
The upgrade procedure on worker nodes should be executed one node at a time or a few nodes at a time, without compromising the minimum required capacity for running your workloads.
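Draining respects PodDisruptionBudgets, so if your workloads define them you can review them ahead of time to judge how many nodes can safely be taken down at once:

kubectl get poddisruptionbudgets --all-namespaces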
# replace x in 1.16.x-00 with the latest patch version
apt-mark unhold kubeadm && \
apt-get update && apt-get install -y kubeadm=1.16.x-00 && \
apt-mark hold kubeadm
# replace x in 1.16.x-0 with the latest patch version
yum install -y kubeadm-1.16.x-0 --disableexcludes=kubernetes
Call the following command:
sudo kubeadm upgrade node
Prepare the node for maintenance by marking it unschedulable and evicting the workloads. Run:
kubectl drain $NODE --ignore-daemonsets
You should see output similar to this:
node/ip-172-31-85-18 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-proxy-dj7d7, kube-system/weave-net-z65qx
node/ip-172-31-85-18 drained
# replace x in 1.16.x-00 with the latest patch version
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.16.x-00 kubectl=1.16.x-00 && \
apt-mark hold kubelet kubectl
# replace x in 1.16.x-0 with the latest patch version
yum install -y kubelet-1.16.x-0 kubectl-1.16.x-0 --disableexcludes=kubernetes
Restart the kubelet:
sudo systemctl restart kubelet
Bring the node back online by marking it schedulable:
kubectl uncordon $NODE
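Before moving on to the next node, you can check that this node rejoined the cluster and reports the new kubelet version:

kubectl get node $NODE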
After the kubelet is upgraded on all nodes, verify that all nodes are available again by running the following command from anywhere kubectl can access the cluster:
kubectl get nodes
The STATUS column should show Ready for all your nodes, and the version number should be updated.
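For additional detail, including the exact kubelet and container runtime versions reported by each node, you can use the wide output:

kubectl get nodes -o wide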
If kubeadm upgrade fails and does not roll back, for example because of an unexpected shutdown during execution, you can run kubeadm upgrade again.
This command is idempotent and eventually makes sure that the actual state is the desired state you declare.
To recover from a bad state, you can also run kubeadm upgrade apply --force without changing the version that your cluster is running.
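For example, to re-run the upgrade against the version the cluster is already on (replace x with that patch version):

sudo kubeadm upgrade apply v1.16.x --force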
kubeadm upgrade apply does the following:

- Checks that your cluster is in an upgradeable state, including that all nodes are in the Ready state.
- Upgrades the control plane components, rolling back if any of them fails to come up.
- Applies the new kube-dns and kube-proxy manifests and makes sure that all necessary RBAC rules are created.

kubeadm upgrade node does the following on additional control plane nodes:

- Fetches the kubeadm ClusterConfiguration from the cluster.
- Upgrades the static Pod manifests for the control plane components on this node.

kubeadm upgrade node does the following on worker nodes:

- Fetches the kubeadm ClusterConfiguration from the cluster.
- Upgrades the kubelet configuration for this node.