How to Upgrade Your VKE Cluster

Updated on July 25, 2024

Introduction

This article explains how to upgrade your Vultr Kubernetes Engine (VKE) cluster to the latest version. VKE follows the upstream Kubernetes project, which typically releases a new minor version every three months. VKE supports the latest three releases, which means around nine months of support for each version. Upgrade regularly, as part of a healthy, planned lifecycle, to receive the latest security patches and features.

Before You Begin

First, read the VKE changelog and the upstream Kubernetes release notes carefully. After familiarizing yourself with the changes, back up your essential data, especially any application-level state stored in a database. We only update the internal components of Kubernetes and do not touch your workloads, but as a best practice you should always make backups.
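
If you also want a point-in-time copy of your cluster's manifests before upgrading, a quick sketch using kubectl is shown below. Note that this captures resource definitions only, not the data inside volumes or databases; the file names are just examples.

$ kubectl get deployments,statefulsets,daemonsets,services,configmaps,ingresses --all-namespaces -o yaml > manifests-backup.yaml
$ kubectl get pvc,pv --all-namespaces -o yaml > volumes-backup.yaml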

Upgrade Considerations

If your workloads follow cloud-native best practices, you can expect your cluster upgrade to complete in a few minutes without downtime. Consider the following points to ensure your services remain available during the upgrade.

Use a PodDisruptionBudget

You should configure a PodDisruptionBudget (PDB), which limits how many replicas of an application can be unavailable at the same time due to a voluntary disruption, such as the node drains that occur during an upgrade. Refer to Specifying a Disruption Budget for your Application in the Kubernetes documentation for details.
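
As a minimal sketch, the following PDB (the name and label selector are placeholders for illustration) keeps at least two replicas of a hypothetical web Deployment available during voluntary disruptions:

$ kubectl apply -f - <<EOF
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb
spec:
  # Never allow voluntary disruptions to take the app below two ready replicas
  minAvailable: 2
  selector:
    matchLabels:
      app: web
EOF

You can also express the budget as maxUnavailable instead of minAvailable, whichever maps more naturally to your capacity requirements.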

Implement Graceful Shutdowns

Your workloads should implement graceful shutdowns, meaning they can stop cleanly on request and recover when restarted. This is particularly important for long-running workloads, such as web servers. Refer to Termination of Pods in the Kubernetes Pod Lifecycle documentation for details, and consider implementing a preStop hook for your workloads, as shown in the example after the lifecycle steps below.

During a VKE upgrade, Nodes are replaced following the standard Kubernetes termination lifecycle:

  1. Pods are set to the Terminating state.
  2. If a preStop hook exists, it is executed.
  3. A SIGTERM signal is sent to the Pod, warning the containers that they will be shut down.
  4. Kubernetes waits for the termination grace period, which is 30 seconds by default.
  5. Any containers that haven't shut down are sent a SIGKILL signal.
  6. The Pod is removed.
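
For example, a Pod spec along these lines (the image, sleep duration, and grace period are illustrative values) gives a web server a short window to finish in-flight requests before SIGTERM arrives in step 3, and extends the default 30-second grace period from step 4:

$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: graceful-web-example
spec:
  # Extend the default 30-second grace period (step 4 above)
  terminationGracePeriodSeconds: 60
  containers:
    - name: web
      image: nginx:1.25
      lifecycle:
        preStop:
          exec:
            # Pause briefly so load balancers stop sending traffic before SIGTERM
            command: ["sh", "-c", "sleep 10"]
EOF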

Use Readiness and Liveness Probes

Readiness probes ensure a container does not receive traffic until it is ready, while liveness probes restart containers that have become unresponsive. You should consider implementing both probes in your application. See the Kubernetes documentation for more details.
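
A minimal sketch of both probes on a single container (the image, port, and timing values are illustrative; point the probes at whatever endpoints your application exposes):

$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: probe-example
spec:
  containers:
    - name: web
      image: nginx:1.25
      ports:
        - containerPort: 80
      # Gate traffic until the container responds successfully
      readinessProbe:
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
      # Restart the container if it stops responding
      livenessProbe:
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 15
        periodSeconds: 20
EOF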

The Upgrade Process

If your application follows best practices, we can upgrade your cluster without downtime or loss of capacity.

First, we replace the control plane with a new control plane running the new version of Kubernetes. During this process, kubectl commands and other access to the cluster are unavailable, but your workloads are not affected. We then perform a rolling upgrade of each node pool. For stability and to ensure no loss of capacity, we create replacement nodes before draining and removing the old nodes. This process is sometimes called a surge upgrade.

For each node, the control plane reschedules its workloads, replaces the node with a new node running the new version, and reattaches any block storage volumes. New worker nodes receive new IP addresses.
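
If you want to observe this from the workload side during an upgrade, one simple check (purely illustrative) is to list Pods with their node assignments; as old nodes drain, the NODE column shifts to the newly created nodes:

$ kubectl get pods --all-namespaces -o wide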

NOTE: You should use Block Storage volumes for persistent data. The local disk storage on worker nodes is transient and will be lost during the upgrade process.
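
As an illustration, persistent data can be requested through a PersistentVolumeClaim. The storage class name below is an assumption based on Vultr's CSI driver; verify the exact name on your cluster first:

$ kubectl get storageclass
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-example
spec:
  accessModes:
    - ReadWriteOnce
  # Assumed storage class name; confirm with 'kubectl get storageclass'
  storageClassName: vultr-block-storage
  resources:
    requests:
      storage: 10Gi
EOF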

Getting Started

To begin the upgrade process, log in to the Vultr customer portal with a web browser.

  1. Navigate to the Kubernetes tab.
  2. Select the cluster that you want to upgrade from the list. You'll see an alert banner and an Upgrade Available alert next to your current version number if an upgrade is available.
  3. Select the Manage Upgrades tab.
  4. Choose an upgrade option and click the Start Upgrade button.
  5. The cluster status will change to Upgrading.

You can monitor the upgrade progress for individual nodes on the Nodes tab. After the upgrade completes, the cluster status changes to Running.

About VKE Version Numbers

We follow the same Semantic Versioning terminology as Kubernetes, with an added build number suffix. VKE versions are formatted as:

w.x.y+z

where:

  • w is the major version
  • x is the minor version
  • y is the patch version
  • +z is the build number

Frequently Asked Questions

Can I upgrade to the latest version in one step, or do I need to upgrade one version at a time?

It depends on which part of the version number changes.

  • You can upgrade from one minor version to the next minor version but cannot skip a minor version.
  • You can upgrade to any available patch version or build number within a minor version.

For example, if you are performing upgrades on a fictional major version 99:

  • You can upgrade from 99.20.1+1 to 99.20.8+2, skipping patch versions 2 through 7, because minor version 20 is the same. The build number in this case does not matter.
  • You can upgrade from 99.20.3+5 to 99.21.1+2, regardless of patch version and build number, because minor version 21 is only one version newer than version 20.
  • However, you cannot upgrade from 99.20.1+1 to 99.23.1+1 because minor version 23 is more than one version newer than version 20. In this case, you must upgrade from minor version 20 to 21, then 22, then 23.

How can I find the VKE version for my cluster?

You can find this information in the Customer Portal on the Overview tab of your cluster's information page.

Vultr also adds a version label with the vke.vultr.com prefix to every node. You can view the version label by running the following command:

$ kubectl get nodes -L vke.vultr.com/node-pool,vke.vultr.com/version

NAME                       STATUS   ROLES    AGE   VERSION   NODE-POOL     VERSION
cluster-one-0197188b1f84   Ready    <none>   99m   v1.22.8   cluster-one   v1.22.8-2
cluster-one-16ca2242c777   Ready    <none>   99m   v1.22.8   cluster-one   v1.22.8-2
cluster-one-930fb1751cdf   Ready    <none>   99m   v1.22.8   cluster-one   v1.22.8-2
cluster-one-96c8b07f8a5a   Ready    <none>   99m   v1.22.8   cluster-one   v1.22.8-2
cluster-one-c99d50f8ae68   Ready    <none>   99m   v1.22.8   cluster-one   v1.22.8-2

The VKE version shown in this example is v1.22.8-2.

There are two VERSION columns: the first is the Kubernetes version, and the second is the label set by Vultr, which includes the VKE build number. The usual + sign is replaced with a - sign due to label character restrictions.
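
If you prefer to inspect the raw labels themselves (including the - substitution), you can also run:

$ kubectl get nodes --show-labels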

Where can I find information about Vultr's VKE versions?

You'll find information about each VKE version in our changelog.

Where can I find information about Kubernetes versions?

Kubernetes release history is available on their website.

Should I upgrade my VKE cluster to a newer version?

Yes, you should stay current. Upstream Kubernetes maintains only the most recent minor releases, and VKE supports the latest three. Because a new minor version is typically released every three months, each version has around nine months of support on VKE.

Can I monitor the progress?

Yes. You can monitor the progress of your cluster upgrade by visiting the Vultr customer portal or by periodically running the kubectl command:

$ kubectl get nodes -o wide

If you have watch installed, you may find it a convenient way to monitor the progress of your cluster upgrade. Running kubectl under watch re-runs the command every few seconds and refreshes the output.

$ watch kubectl get nodes -o wide
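
Alternatively, kubectl has a built-in --watch flag, which streams changes as nodes are replaced instead of redrawing the whole table:

$ kubectl get nodes -o wide --watch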

More Information

VKE is a managed product that handles the upgrade tasks for you. However, if you'd like to know more about Kubernetes upgrades and administration in general, see the upstream documentation at kubernetes.io. To learn about Kubernetes at Vultr and VKE, see our documentation library.