Vultr Kubernetes Engine (VKE) Changelog

Updated on August 20, 2024

Vultr Kubernetes Engine (VKE) is a fully managed Kubernetes product with predictable pricing. When you deploy VKE, you get a managed Kubernetes control plane that includes our Cloud Controller Manager (CCM) and the Container Storage Interface (CSI).

This changelog lists important changes for each Kubernetes version we support in VKE. Changelogs are also referred to as release notes. Historical changelogs for archived versions are available below.

Current Versions

VKE on v1.31.x

1.31.0+1 (2024-08-20)

  • Initial release of v1.31 support

VKE on v1.30.x

1.30.3+1 (2024-08-20)

  • calico -> v3.28.0
  • etcd -> v3.5.15
  • vultr-ccm -> v0.13.1
  • vultr-csi -> v0.13.2
  • cni -> v1.45.1
  • crictl -> v1.30.1
  • runc -> v1.1.13
  • containerd -> v1.7.20
  • Fixed hairpinning issues in the new CCM and resolved the issue of load balancers being created multiple times
  • Updated workers to use the new Nvidia packages
  • Integrated the VCR auth plugin into workers, eliminating the need for customers to add VCR credentials for repositories within the same account
  • Allowed UDP traffic for workers in the firewall

1.30.0+1 (2024-05-14)

  • Initial release of v1.30 support

VKE on v1.29.x

1.29.7+1 (2024-08-20)

  • calico -> v3.28.0
  • etcd -> v3.5.15
  • vultr-ccm -> v0.13.1
  • vultr-csi -> v0.13.2
  • cni -> v1.45.1
  • crictl -> v1.30.1
  • runc -> v1.1.13
  • containerd -> v1.7.20
  • Fixed hairpinning issues in the new CCM and resolved the issue of load balancers being created multiple times
  • Updated workers to use the new Nvidia packages
  • Integrated the VCR auth plugin into workers, eliminating the need for customers to add VCR credentials for repositories within the same account
  • Allowed UDP traffic for workers in the firewall

1.29.4+1 (2024-05-14)

  • calico -> v3.27.3
  • etcd -> v3.5.13
  • vultr-ccm -> v0.12.0
  • vultr-csi -> v0.12.4
  • cni -> v1.4.0
  • crictl -> v1.30.0
  • runc -> v1.1.12
  • containerd -> v1.7.15
  • Added VKE Bare Metal support
  • Replaced the kube-proxy service with a DaemonSet
  • Increased payload size to accommodate larger etcd databases
  • XFS Support
    • Ensure the appropriate storage class is properly configured
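The XFS option above is consumed through a storage class's filesystem-type parameter. A minimal sketch of such a StorageClass, assuming the Vultr CSI provisioner is named block.csi.vultr.com (the class and provisioner names here are assumptions; verify them against the driver installed in your cluster):

```shell
# Hypothetical StorageClass that formats Vultr block volumes with XFS.
# The provisioner name must match the CSI driver deployed in the cluster.
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vultr-block-storage-xfs
provisioner: block.csi.vultr.com
parameters:
  csi.storage.k8s.io/fstype: xfs
EOF
```

PersistentVolumeClaims that reference this class get their volumes formatted as XFS on first mount.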

1.29.2+1 (2024-02-27)

  • calico -> v3.27.2
  • etcd -> v3.5.12
  • vultr-csi -> v0.12.3
  • containerd -> v1.7.13
  • Switched VKE Worker OS: Moved from Debian 11 to Ubuntu 22.04 LTS

1.29.2-1 (2024-02-27)

  • calico -> v3.27.2
  • etcd -> v3.5.12
  • vultr-csi -> v0.12.3
  • containerd -> v1.7.13
  • Switched VKE Worker OS: Moved from Debian 11 to Ubuntu 22.04 LTS
  • BYO CNI Feature
  • Arkenstone support

1.29.1+1 (2024-01-30)

  • Initial release of v1.29 support

VKE on v1.28.x

1.28.12+1 (2024-08-20)

  • calico -> v3.28.0
  • etcd -> v3.5.15
  • vultr-ccm -> v0.13.1
  • vultr-csi -> v0.13.2
  • cni -> v1.45.1
  • crictl -> v1.30.1
  • runc -> v1.1.13
  • containerd -> v1.7.20
  • Fixed hairpinning issues in the new CCM and resolved the issue of load balancers being created multiple times
  • Updated workers to use the new Nvidia packages
  • Integrated the VCR auth plugin into workers, eliminating the need for customers to add VCR credentials for repositories within the same account
  • Allowed UDP traffic for workers in the firewall

1.28.9+1 (2024-05-14)

  • calico -> v3.27.3
  • etcd -> v3.5.13
  • vultr-ccm -> v0.12.0
  • vultr-csi -> v0.12.4
  • cni -> v1.4.0
  • crictl -> v1.30.0
  • runc -> v1.1.12
  • containerd -> v1.7.15
  • Added VKE Bare Metal support
  • Replaced the kube-proxy service with a DaemonSet
  • Utilized the GPU Operator for Nvidia package management instead of preloading drivers on the machine
  • Increased payload size to accommodate larger etcd databases
  • XFS Support
    • Ensure the appropriate storage class is properly configured

1.28.7+1 (2024-02-27)

  • calico -> v3.27.2
  • etcd -> v3.5.12
  • vultr-csi -> v0.12.3
  • containerd -> v1.7.13
  • Switched VKE Worker OS: Moved from Debian 11 to Ubuntu 22.04 LTS

1.28.7-1 (2024-02-27)

  • calico -> v3.27.2
  • etcd -> v3.5.12
  • vultr-csi -> v0.12.3
  • containerd -> v1.7.13
  • Switched VKE Worker OS: Moved from Debian 11 to Ubuntu 22.04 LTS
  • BYO CNI Feature
  • Arkenstone support

1.28.6+1 (2024-01-30)

  • calico -> v3.27.0
  • etcd -> v3.5.11
  • vultr-ccm -> v0.11.0
  • vultr-csi -> v0.12.0
  • cni -> v1.4.0
  • crictl -> v1.29.0
  • runc -> v1.1.11
  • containerd -> v1.7.12
  • Resolved issue causing excessive resource usage on HA clusters with large etcd databases
  • Upgraded VKE to kernel 6.x

1.28.3+2 (2023-11-10)

  • vultr-csi -> v0.10.1
  • Resolved max CPU usage issue with Nvidia packages caused by unattended-upgrades

1.28.3+1 (2023-11-06)

  • calico -> v3.26.3
  • etcd -> v3.5.10
  • runc -> v1.1.9
  • containerd -> v1.7.7
  • High Availability (HA) control plane support

1.28.2+1 (2023-09-25)

  • Initial release of v1.28 support

VKE on v1.27.x

1.27.11+1 (2024-02-27)

  • calico -> v3.27.2
  • etcd -> v3.5.12
  • vultr-csi -> v0.12.3
  • containerd -> v1.7.13
  • Switched VKE Worker OS: Moved from Debian 11 to Ubuntu 22.04 LTS

1.27.11-1 (2024-02-27)

  • calico -> v3.27.2
  • etcd -> v3.5.12
  • vultr-csi -> v0.12.3
  • containerd -> v1.7.13
  • Switched VKE Worker OS: Moved from Debian 11 to Ubuntu 22.04 LTS
  • BYO CNI Feature
  • Arkenstone support

1.27.10+1 (2024-01-30)

  • calico -> v3.27.0
  • etcd -> v3.5.11
  • vultr-ccm -> v0.11.0
  • vultr-csi -> v0.12.0
  • cni -> v1.4.0
  • crictl -> v1.29.0
  • runc -> v1.1.11
  • containerd -> v1.7.12
  • Resolved issue causing excessive resource usage on HA clusters with large etcd databases
  • Upgraded VKE to kernel 6.x

1.27.7+2 (2023-11-10)

  • vultr-csi -> v0.10.1
  • Resolved max CPU usage issue with Nvidia packages caused by unattended-upgrades

1.27.7+1 (2023-11-06)

  • calico -> v3.26.3
  • etcd -> v3.5.10
  • runc -> v1.1.9
  • containerd -> v1.7.7
  • High Availability (HA) control plane support

1.27.6+1 (2023-09-25)

  • calico -> v3.26.1
  • coredns -> v1.11.1
  • konnectivity -> v0.0.37
  • etcd -> v3.5.9
  • vultr-ccm -> v0.10.0
  • vultr-csi -> v0.9.0
  • cni -> v1.3.0
  • crictl -> v1.28.0
  • runc -> v1.1.8
  • containerd -> v1.7.6
  • Implemented hard-eviction thresholds on worker nodes

1.27.2+1 (2023-06-23)

  • Initial release of v1.27 support

VKE on v1.26.x

1.26.10+2 (2023-11-10)

  • vultr-csi -> v0.10.1
  • Resolved max CPU usage issue with Nvidia packages caused by unattended-upgrades

1.26.10+1 (2023-11-06)

  • calico -> v3.26.3
  • etcd -> v3.5.10
  • runc -> v1.1.9
  • containerd -> v1.7.7
  • High Availability (HA) control plane support

1.26.9+1 (2023-09-25)

  • calico -> v3.26.1
  • coredns -> v1.11.1
  • konnectivity -> v0.0.37
  • etcd -> v3.5.9
  • vultr-ccm -> v0.10.0
  • vultr-csi -> v0.9.0
  • cni -> v1.3.0
  • crictl -> v1.28.0
  • runc -> v1.1.8
  • containerd -> v1.7.6
  • Implemented hard-eviction thresholds on worker nodes

1.26.5+1 (2023-06-23)

  • calico -> v3.26.0
  • coredns -> v1.10.1
  • konnectivity -> v0.0.37
  • etcd -> v3.5.9
  • vultr-ccm -> v0.9.0
  • vultr-csi -> v0.9.0
  • cni -> v1.3.0
  • crictl -> v1.27.0
  • runc -> v1.1.7
  • containerd -> v1.7.2
  • VKE is now available in São Paulo
  • VKE is now available in Tel Aviv

1.26.2+2 (2023-03-22)

  • Resolved an issue with the RFC 3849 IPv6 space by switching to ULA space
  • Implemented soft-eviction thresholds on worker nodes
    • Memory available <=250Mi for 1 minute
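The soft-eviction threshold above maps onto the kubelet's eviction settings. VKE manages these on worker nodes automatically; the following sketch only shows the equivalent kubelet flags for reference:

```shell
# Equivalent kubelet flags for the soft-eviction threshold above:
# evict pods when memory.available stays below 250Mi for 1 minute.
kubelet \
  --eviction-soft='memory.available<250Mi' \
  --eviction-soft-grace-period='memory.available=1m'
```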

1.26.2+1 (2023-03-14)

  • Initial release of v1.26 support

VKE on v1.25.x

1.25.10+1 (2023-06-23)

  • calico -> v3.26.0
  • coredns -> v1.10.1
  • konnectivity -> v0.0.37
  • etcd -> v3.5.9
  • vultr-ccm -> v0.9.0
  • vultr-csi -> v0.9.0
  • cni -> v1.3.0
  • crictl -> v1.27.0
  • runc -> v1.1.7
  • containerd -> v1.7.2
  • VKE is now available in São Paulo
  • VKE is now available in Tel Aviv

1.25.7+2 (2023-03-22)

  • Resolved an issue with the RFC 3849 IPv6 space by switching to ULA space
  • Implemented soft-eviction thresholds on worker nodes
    • Memory available <=250Mi for 1 minute

1.25.7+1 (2023-03-14)

  • Vultr CSI -> v0.9.0
    • Block stats now available
    • Block resizing introduced
  • Vultr CCM -> v0.9.0
    • Dual-stack load balancers implemented
  • konnectivity -> v0.0.37
  • Initial IPv6 dual-stack support
    • All worker nodes are now provisioned with a public IPv6 address
    • Cluster networking is set up to use IPv6 by default
  • Konnectivity fixes implemented to resolve issues with the socket being unavailable
  • Added stricter memory and CPU accounting for resource management on nodes
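The dual-stack load balancer and IPv6 features above can be exercised with a standard Kubernetes dual-stack Service. A sketch (the Service name, selector, and ports are illustrative, not taken from the release notes):

```shell
# Hypothetical dual-stack LoadBalancer Service; requires a cluster with
# IPv4/IPv6 dual-stack networking enabled.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: web-dualstack        # illustrative name
spec:
  type: LoadBalancer
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
    - IPv4
    - IPv6
  selector:
    app: web                 # illustrative selector
  ports:
    - port: 80
      targetPort: 8080
EOF
```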

1.25.6+1 (2023-01-25)

  • calico -> v3.25.0
  • coredns -> v1.10.0
  • konnectivity -> v0.0.36
  • etcd -> v3.5.7
  • vultr-ccm -> v0.7.0
  • vultr-csi -> v0.7.0
  • cni -> v1.2.0
  • crictl -> v1.26.0
  • runc -> v1.1.4
  • containerd -> v1.6.15

  • Upgraded to Debian 11 for worker and controller nodes
  • The kubelet and the container runtime are configured to use the systemd cgroup driver
  • Cluster-autoscaler bumped to version 1.26.1

1.25.3+1 (2022-11-16)

  • calico -> v3.24.5
  • coredns -> v1.10.0
  • konnectivity -> v0.0.33
  • etcd -> v3.5.5
  • vultr-ccm -> v0.7.0
  • vultr-csi -> v0.7.0
  • cni -> v1.1.1
  • crictl -> v1.25.0
  • runc -> v1.1.4
  • containerd -> v1.6.9
  • Fixed bug that affected the VKE labels on worker nodes
  • Initial Release for v1.25 support

VKE on v1.24.x

1.24.11+2 (2023-03-22)

  • Resolved an issue with the RFC 3849 IPv6 space by switching to ULA space
  • Implemented soft-eviction thresholds on worker nodes
    • Memory available <=250Mi for 1 minute

1.24.11+1 (2023-03-14)

  • Vultr CSI -> v0.9.0
    • Block stats now available
    • Block resizing introduced
  • Vultr CCM -> v0.9.0
    • Dual-stack load balancers implemented
  • konnectivity -> v0.0.37
  • Initial IPv6 dual-stack support
    • All worker nodes are now provisioned with a public IPv6 address
    • Cluster networking is set up to use IPv6 by default
  • Konnectivity fixes implemented to resolve issues with the socket being unavailable
  • Added stricter memory and CPU accounting for resource management on nodes

1.24.10+2 (2023-01-26)

  • Fixed a bug where cloud-init failed to complete, preventing VMs from booting properly

1.24.10+1 (2023-01-25)

  • calico -> v3.25.0
  • coredns -> v1.10.0
  • konnectivity -> v0.0.36
  • etcd -> v3.5.7
  • vultr-ccm -> v0.7.0
  • vultr-csi -> v0.7.0
  • cni -> v1.2.0
  • crictl -> v1.26.0
  • runc -> v1.1.4
  • containerd -> v1.6.15

  • Upgraded to Debian 11 for worker and controller nodes
  • The kubelet and the container runtime are configured to use the systemd cgroup driver
  • Cluster-autoscaler bumped to version 1.26.1

1.24.8+1 (2022-11-16)

  • calico -> v3.24.5
  • coredns -> v1.10.0
  • konnectivity -> v0.0.33
  • etcd -> v3.5.5
  • vultr-ccm -> v0.7.0
  • vultr-csi -> v0.7.0
  • cni -> v1.1.1
  • crictl -> v1.25.0
  • runc -> v1.1.4
  • containerd -> v1.6.9
  • Fixed bug that affected the VKE labels on worker nodes

1.24.4+1 (2022-09-26)

  • calico -> v3.24.1
  • coredns -> v1.10.0
  • konnectivity -> v0.0.33
  • etcd -> v3.5.5
  • vultr-ccm -> v0.7.0
  • vultr-csi -> v0.7.0
  • cni -> v1.1.1
  • crictl -> v1.25.0
  • runc -> v1.1.4
  • containerd -> v1.6.8

1.24.3+3 (2022-08-30)

  • calico -> v3.23.1
  • coredns -> v1.9.3
  • konnectivity -> v0.0.32
  • etcd -> v3.5.4
  • vultr-ccm -> v0.6.0
  • vultr-csi -> v0.7.0
  • cni -> v1.1.1
  • crictl -> v1.24.2
  • runc -> v1.1.3
  • containerd -> v1.6.6
  • Adjusted support RBAC rules to check for PDB issues prior to initiating upgrades
  • Added resolv-conf flag to kubelet

1.24.3+1 (2022-08-05)

  • Initial release of v1.24 support

Archived Versions

Versions listed here are no longer available for deployment. Changelogs are available for historical information.

VKE on v1.23.x

1.23.16+2 (2023-01-26)

  • Fixed a bug where cloud-init failed to complete, preventing VMs from booting properly

1.23.16+1 (2023-01-25)

  • calico -> v3.25.0
  • coredns -> v1.10.0
  • konnectivity -> v0.0.36
  • etcd -> v3.5.7
  • vultr-ccm -> v0.7.0
  • vultr-csi -> v0.7.0
  • cni -> v1.2.0
  • crictl -> v1.26.0
  • runc -> v1.1.4
  • containerd -> v1.6.15

  • Upgraded to Debian 11 for worker and controller nodes
  • The kubelet and the container runtime are configured to use the systemd cgroup driver
  • Cluster-autoscaler bumped to version 1.26.1

1.23.14+1 (2022-11-16)

  • calico -> v3.24.5
  • coredns -> v1.10.0
  • konnectivity -> v0.0.33
  • etcd -> v3.5.5
  • vultr-ccm -> v0.7.0
  • vultr-csi -> v0.7.0
  • cni -> v1.1.1
  • crictl -> v1.25.0
  • runc -> v1.1.4
  • containerd -> v1.6.9
  • Fixed bug that affected the VKE labels on worker nodes

1.23.10+1 (2022-09-26)

  • calico -> v3.24.1
  • coredns -> v1.10.0
  • konnectivity -> v0.0.33
  • etcd -> v3.5.5
  • vultr-ccm -> v0.7.0
  • vultr-csi -> v0.7.0
  • cni -> v1.1.1
  • crictl -> v1.25.0
  • runc -> v1.1.4
  • containerd -> v1.6.8

1.23.9+1 (2022-08-30)

  • calico -> v3.23.1
  • coredns -> v1.9.3
  • konnectivity -> v0.0.32
  • etcd -> v3.5.4
  • vultr-ccm -> v0.6.0
  • vultr-csi -> v0.7.0
  • cni -> v1.1.1
  • crictl -> v1.24.2
  • runc -> v1.1.3
  • containerd -> v1.6.6
  • Adjusted support RBAC rules to check for PDB issues prior to initiating upgrades
  • Added resolv-conf flag to kubelet

1.23.7+1 (2022-06-13)

  • Implemented autoscaler support
  • calico -> v3.23.1
  • coredns -> v1.9.2
  • konnectivity -> v0.0.31
  • etcd -> v3.5.4
  • vultr-ccm -> v0.6.0
  • vultr-csi -> v0.7.0
  • Added Open-iSCSI support to worker nodes
  • cni -> v1.1.1
  • crictl -> v1.24.1
  • runc -> v1.1.2
  • containerd -> v1.6.4
  • Implemented reserved limits on worker nodes to prevent resource starvation to essential components
  • Resolved DNS issues on control-plane nodes

1.23.5+3 (2022-04-20)

  • CSI updated to v0.6.0 to support new block storage types
  • Updates to support more regions

1.23.5+2 (2022-04-07)

  • Added fix to disable swap memory on worker nodes
  • Added crictl config to worker nodes
  • Disabled resolvconf and set systemd-resolve as primary resolver

1.23.5+1 (2022-03-23)

  • Initial release of v1.23 support

VKE on v1.22.x

1.22.13+1 (2022-09-26)

  • calico -> v3.24.1
  • coredns -> v1.10.0
  • konnectivity -> v0.0.33
  • etcd -> v3.5.5
  • vultr-ccm -> v0.7.0
  • vultr-csi -> v0.7.0
  • cni -> v1.1.1
  • crictl -> v1.25.0
  • runc -> v1.1.4
  • containerd -> v1.6.8

1.22.12+1 (2022-08-30)

  • calico -> v3.23.1
  • coredns -> v1.9.3
  • konnectivity -> v0.0.32
  • etcd -> v3.5.4
  • vultr-ccm -> v0.6.0
  • vultr-csi -> v0.7.0
  • cni -> v1.1.1
  • crictl -> v1.24.2
  • runc -> v1.1.3
  • containerd -> v1.6.6
  • Adjusted support RBAC rules to check for PDB issues prior to initiating upgrades
  • Added resolv-conf flag to kubelet

1.22.10+1 (2022-06-13)

  • Implemented autoscaler support
  • calico -> v3.23.1
  • coredns -> v1.9.2
  • konnectivity -> v0.0.31
  • etcd -> v3.5.4
  • vultr-ccm -> v0.6.0
  • vultr-csi -> v0.7.0
  • Added Open-iSCSI support to worker nodes
  • cni -> v1.1.1
  • crictl -> v1.24.1
  • runc -> v1.1.2
  • containerd -> v1.6.4
  • Implemented reserved limits on worker nodes to prevent resource starvation to essential components
  • Resolved DNS issues on control-plane nodes

1.22.8+3 (2022-04-20)

  • CSI updated to v0.6.0 to support new block storage types
  • Updates to support more regions

1.22.8+2 (2022-04-07)

  • Added fix to disable swap memory on worker nodes
  • Added crictl config to worker nodes
  • Disabled resolvconf and set systemd-resolve as primary resolver

1.22.8+1 (2022-03-23)

  • K8s components updated to 1.22.8 (CP + Worker nodes)
  • Updated dependencies
    • ContainerD -> 1.6.1
    • Runc -> 1.1.0
    • Crictl -> 1.23.0
  • Vultr CCM updated to v0.5.0
  • Vultr CSI updated to v0.5.0
    • csi-provisioner -> v3.1.0
    • csi-attacher -> v3.4.0
    • csi-node-driver-registrar -> v2.5.0
  • Konnectivity updated to v0.0.30

1.22.6+1 (2022-01-26)

  • Initial release of v1.22 support

VKE on v1.21.x

1.21.13+1 (2022-06-13)

  • Implemented autoscaler support
  • calico -> v3.23.1
  • coredns -> v1.9.2
  • konnectivity -> v0.0.31
  • etcd -> v3.5.4
  • vultr-ccm -> v0.6.0
  • vultr-csi -> v0.7.0
  • Added Open-iSCSI support to worker nodes
  • cni -> v1.1.1
  • crictl -> v1.24.1
  • runc -> v1.1.2
  • containerd -> v1.6.4
  • Implemented reserved limits on worker nodes to prevent resource starvation to essential components
  • Resolved DNS issues on control-plane nodes

1.21.11+3 (2022-04-20)

  • CSI updated to v0.6.0 to support new block storage types
  • Updates to support more regions

1.21.11+2 (2022-04-07)

  • Added fix to disable swap memory on worker nodes
  • Added crictl config to worker nodes
  • Disabled resolvconf and set systemd-resolve as primary resolver

1.21.11+1 (2022-03-23)

  • K8s components updated to 1.21.11 (CP + Worker nodes)
  • Updated dependencies
    • ContainerD -> 1.6.1
    • Runc -> 1.1.0
    • Crictl -> 1.23.0
  • Vultr CCM updated to v0.5.0
  • Vultr CSI updated to v0.5.0
    • csi-provisioner -> v3.1.0
    • csi-attacher -> v3.4.0
    • csi-node-driver-registrar -> v2.5.0
  • Konnectivity updated to v0.0.30

1.21.9+1 (2022-01-26)

  • K8s components updated to 1.21.9 (CP + Worker nodes)
  • Vultr CSI updated to v0.4.0

1.21.7+2 (2022-01-11)

  • Konnectivity updated to v0.0.27
  • Improvements to the Kubernetes control plane for further stability and security

1.21.7+1 (2021-11-30)

  • K8s components updated to 1.21.7
  • CCM update to v0.4.0. This fixes an issue with LB + SSL deploys
  • Bump CoreDNS to 1.8.6
  • Improvements to the Kubernetes control plane for further stability and security

VKE on v1.20.x

1.20.13+2 (2022-01-11)

  • Konnectivity updated to v0.0.27
  • Improvements to the Kubernetes control plane for further stability and security

1.20.13+1 (2021-11-30)

  • K8s components updated to 1.20.13
  • CCM update to v0.4.0. Fixes an issue with LB + SSL deploys
  • Bump CoreDNS to 1.8.6
  • Improvements to the Kubernetes control plane for further stability and security

1.20.11+2 (2021-10-22)

  • Improvements to the Kubernetes control plane for further stability and security.

1.20.11+1 (2021-09-22)

  • Bumped Konnectivity to v0.0.24 (server + agent)
  • Bumped CCM to v0.0.3
  • Bumped all K8s services from 1.20.0 to 1.20.11
  • Konnectivity and Kube API Server performance tuning

1.20.0+1 (2021-08-19)

  • Konnectivity support: provides a TCP-level proxy for control plane to cluster communication
  • Aggregation Layer support: allows Kubernetes to be extended with additional APIs beyond those offered by the core Kubernetes APIs
  • Added NFS and CIFS support
  • Added a new storage class with the "WaitForFirstConsumer" volume binding mode
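A WaitForFirstConsumer storage class delays volume provisioning until a pod using the claim is scheduled, so the volume can be placed alongside the consuming pod. A sketch of a class using that binding mode (the class and provisioner names here are assumptions, not from the release notes):

```shell
# Hypothetical StorageClass that defers provisioning until pod scheduling.
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vultr-block-storage-wait   # assumed name
provisioner: block.csi.vultr.com   # verify against the installed CSI driver
volumeBindingMode: WaitForFirstConsumer
EOF
```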

1.20.0

  • Initial release of v1.20 support