Vultr Kubernetes Engine (VKE) Reference Guide
Introduction
Vultr Kubernetes Engine (VKE) is a fully managed Kubernetes product with predictable pricing. When you deploy VKE, you'll get a managed Kubernetes control plane that includes our Cloud Controller Manager (CCM) and the Container Storage Interface (CSI). In addition, you can configure block storage and load balancers or install add-ons such as Vultr's ExternalDNS and Cert Manager. We've made Kubernetes hosting easy, so you can focus on scaling your application.
Audience
This reference guide explains how to deploy a VKE cluster and assumes you have experience using Kubernetes. If you have comments about this guide, please use the Suggest an Update button at the bottom of the page.
Please see our changelog for information about supported versions of Kubernetes.
How to Deploy a VKE Cluster
You can deploy a new VKE cluster in a few clicks. Here's how to get started.
Navigate to the Kubernetes page in the Customer Portal.
Click Add Cluster.
Enter a descriptive label for the Cluster Name.
Select the Kubernetes version.
Choose a deployment location.
Create a Node Pool.
About Node Pools
When creating a VKE cluster, you can assign one or more Node Pools with multiple nodes per pool. For each Node Pool, you'll need to make a few selections.
- Node Pool Name: Enter a descriptive label for the node pool.
- Node Pool Type: Multiple node pool types are available; Optimized Cloud is the default. The available types are:
- Optimized Cloud
- Regular Cloud Compute
- High Frequency
- Intel High Performance
- AMD High Performance
- Plan: Choose a size appropriate for your workload. All nodes in a pool are the same plan, meaning the same amount of RAM, CPU, and so on. You can create more than one node pool if you need nodes with a different plan.
- Amount of Nodes: Choose how many nodes should be in this pool. It's strongly recommended to use more than one node.
The monthly rate for the node pool is calculated as you make your selections. If you want to deploy more than one, click Add Another Node Pool.
When ready, click Deploy Now.
Kubernetes requires some time to inventory and configure the nodes. When complete, the cluster status changes to Running.
How to Manage a VKE Cluster
After deploying your VKE cluster, you need to gather some information and manage it.
Navigate to the Kubernetes section of the customer portal, then click the cluster's name to open the management section. You'll find several tabs in this area.
Overview Tab
In the Overview tab, you'll see important information about your cluster, such as the IP address, endpoint URL, available subnets, and other configuration items. You can verify your cluster configuration and download the kubeconfig file here.

Click the Download Configuration button in the upper-right corner to download your kubeconfig file, a YAML file with the credentials and endpoint information you need to control your cluster. The file is named with the cluster's UUID, like this:

vke-8478867d-ffff-ffff-ffff-example00000.yaml
Install `kubectl` on your local workstation and then test your access with:

kubectl --kubeconfig=/PATH/TO/KUBECONFIG get nodes
About kubeconfig
`kubectl` uses a configuration file, known as the kubeconfig, to access your Kubernetes cluster. A kubeconfig file has information about the cluster, such as users, namespaces, and authentication mechanisms. The `kubectl` command uses the kubeconfig to find and communicate with a cluster. The default kubeconfig is `~/.kube/config` unless you override that location on the command line or with an environment variable. You can have multiple kubeconfigs, and kubectl can use one or more merged together. The order of precedence is:
- If you set the `--kubeconfig` flag, kubectl loads only that file. You may use the flag only once, and no merging occurs.
- If you set the `$KUBECONFIG` environment variable, it is parsed as a list of filesystem paths according to the normal path-delimiting rules for your system.
- Otherwise, kubectl uses the `~/.kube/config` file, and no merging occurs.
Please see this section of the Kubernetes documentation for more details about merging. Also, note the stern warning found there:
Warning: Only use kubeconfig files from trusted sources. Using a specially-crafted kubeconfig file could result in malicious code execution or file exposure. If you must use an untrusted kubeconfig file, inspect it carefully first, much as you would a shell script.
Nodes Tab
The Nodes tab is where you'll manage nodes and node pools.
You have several controls available:
- Click the Node Pool name to expand the pool and view the individual nodes. You can replace or remove nodes individually.
- Click Add Node Pool to add another pool.
- Click Resize Pool to decrease or increase the number of nodes. Keep reading to learn about the autoscaler feature.
- Click the trash icon to destroy the pool.
Important: You must use the VKE dashboard or the Kubernetes endpoints of the Vultr API to delete VKE worker nodes. If you delete a worker node elsewhere in the customer portal, or with the Instance endpoints of the Vultr API, Vultr will redeploy it to preserve the defined VKE cluster node pool configuration.
About the Autoscaler
When you adjust the pool size, you can configure the size manually or use the autoscaler feature. Here's a little about both options.
The autoscaler, shown below, allows you to specify upper and lower limits on your pool's size. The example below has a minimum node count of 3 and a maximum of 6. As the name suggests, VKE will automatically scale the number of nodes between these values to keep your workload responsive. You cannot select specific nodes to be deleted when autoscaler downsizes a node pool. VKE will randomly select nodes to be deleted to reach the defined size. Nodes are considered disposable; they can be deleted and recreated anytime. If you'd like to learn more about the Cluster Autoscaler for Vultr, see the GitHub repository.
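If you prefer to script this instead of using the portal, the Vultr API exposes node-pool updates. The sketch below assumes a v2 API key and placeholder UUIDs; the `auto_scaler`, `min_nodes`, and `max_nodes` field names follow the Vultr API v2 node-pool schema, so verify them against the current API reference before scripting:

```shell
# Sketch: enable the autoscaler on an existing node pool via the Vultr API.
# CLUSTER_ID and POOL_ID are placeholders for your cluster and node-pool UUIDs.
curl -X PATCH \
  "https://api.vultr.com/v2/kubernetes/clusters/$CLUSTER_ID/node-pools/$POOL_ID" \
  -H "Authorization: Bearer $VULTR_API_KEY" \
  -H "Content-Type: application/json" \
  --data '{"auto_scaler": true, "min_nodes": 3, "max_nodes": 6}'
```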
The manual option allows you to set a static value for the number of nodes, which is a good choice if your workloads are well-defined and predictable.
Linked Resources Tab
To manage the resources linked to VKE, such as Block Storage and Load Balancers, click the Linked Resources tab on the Manage Cluster page.
Manage Upgrades Tab
Use this tab to manage cluster upgrades. This is a complex subject, so we've written a separate guide about how to upgrade your VKE cluster that you should read first.
How to Delete a VKE Cluster
Navigate to the Kubernetes page in the Customer Portal.
Select the target Cluster.
Click the delete icon at the top-right of the Overview page.
Click Destroy VKE Cluster on the confirmation prompt to permanently delete the target Kubernetes Cluster.
VPC Networks
When you create a VKE cluster, it automatically creates its own Virtual Private Cloud (VPC) for private communication between the nodes. You'll find this VPC in the Network section of the customer portal, in the VPC Networks menu, named `VKE_Network_[UUID]`, where [UUID] is the same as the VKE cluster's UUID. You can attach other non-Kubernetes instances to this VPC by following the steps in our VPC guide.
When you destroy your VKE cluster, it also destroys the associated VPC, unless other non-Kubernetes instances are still attached to it. In that case, the VPC is preserved for their use.
Features of the Managed Control Plane
When you deploy VKE, you automatically get several managed components. Although you don't need to deploy or configure them yourself, here are brief descriptions with links to more information.
Cloud Controller Manager
Vultr Cloud Controller Manager (CCM) is part of the managed control plane that connects Vultr features to your Kubernetes cluster. The CCM monitors the node's state, assigns their IP addresses, and automatically deploys managed Load Balancers as needed for your Kubernetes Load Balancer/Ingress services. Learn more about the CCM on GitHub.
Container Storage Interface
If your application persists data, you'll need storage. VKE's managed control plane automatically deploys Vultr Container Storage Interface (CSI) to connect your Kubernetes cluster with Vultr's high-speed block storage by default. Learn more about the CSI on GitHub.
- Note: `ReadWriteOnce` is the only allowable access mode for Vultr Block Storage.
- Important: You should use Block Storage volumes for persistent data. The local disk storage on worker nodes is transient and will be lost during Kubernetes upgrades.
Vultr offers two block storage technologies: HDD and NVMe.
HDD Block Storage
HDD is an affordable option that uses traditional rotating hard drives and supports volumes larger than 10 TB.
- CSI Storage Class: `vultr-block-storage-hdd`
- Minimum volume size: 40 GB
- Maximum volume size: 40 TB
- Technology: Rotating hard disk drive
- Availability: Most Vultr locations
- Key Feature: Affordable storage and largest volumes
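For reference, requesting an HDD-backed volume only requires naming this storage class in a PersistentVolumeClaim. The claim name and size below are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hdd-pvc              # illustrative name
spec:
  accessModes:
    - ReadWriteOnce          # the only access mode Vultr Block Storage supports
  resources:
    requests:
      storage: 100Gi         # HDD volumes must be at least 40 GB
  storageClassName: vultr-block-storage-hdd
```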
NVMe Block Storage
NVMe is a higher-performance option for workloads that require rapid I/O.
- CSI Storage Class: `vultr-block-storage`
- Minimum volume size: 10 GB
- Maximum volume size: 10 TB
- Technology: Solid-state NVMe
- Availability: Many Vultr locations
- Key Feature: Highest performance I/O
Block Storage Availability
Use the `/v2/regions` API endpoint to discover which storage classes are available at your location.

- `block_storage_storage_opt` indicates HDD storage is available.
- `block_storage_high_perf` indicates NVMe storage is available.
Some locations support both storage classes. If NVMe block storage is available in a location, our CSI uses that class by default.
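For example, you can check a single location's storage options from the command line. This sketch assumes `jq` is installed and that the regions response lists each location's features in an `options` array; the `ams` location ID is only an example:

```shell
# Show the storage-related options reported for the Amsterdam (ams) location.
curl -s "https://api.vultr.com/v2/regions" | \
  jq -r '.regions[] | select(.id == "ams") | .options[]' | \
  grep block_storage
```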
Block Storage Usage
To use block storage with VKE, deploy a Persistent Volume Claim (PVC). For example, to deploy a 10Gi block on your account for VKE with NVMe-backed storage, use a PersistentVolumeClaim template like this:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: vultr-block-storage
To attach this PVC to a Pod, define a `volumes` entry in your Pod template. Note the `claimName` below is csi-pvc, referencing the PersistentVolumeClaim in the example above.
kind: Pod
apiVersion: v1
metadata:
  name: readme-app
spec:
  containers:
    - name: readme-app
      image: busybox
      volumeMounts:
        - mountPath: "/data"
          name: vultr-volume
      command: [ "sleep", "1000000" ]
  volumes:
    - name: vultr-volume
      persistentVolumeClaim:
        claimName: csi-pvc
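Assuming the two manifests above are saved locally (the pvc.yaml and pod.yaml file names are assumptions), you can apply them and confirm the volume is provisioned and mounted:

```shell
kubectl apply -f pvc.yaml
kubectl apply -f pod.yaml

# The claim shows a STATUS of Bound once the Vultr volume is provisioned.
kubectl get pvc csi-pvc

# Verify the volume is mounted inside the running pod.
kubectl exec readme-app -- df -h /data
```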
To learn more about Persistent Volumes, see the Kubernetes documentation. If you'd like to learn more about Vultr CSI, see our GitHub repository.
VKE Load Balancer
Load Balancers in VKE offer all the same features and capabilities as standalone managed Load Balancers. To deploy a VKE load balancer for your application, set `type: LoadBalancer` in your service configuration file and use metadata annotations to tell the CCM how to configure the load balancer. VKE will deploy the Kubernetes service load balancer according to your service configuration and attach it to the cluster.

Here's an example service configuration file that declares a load balancer for HTTP traffic on port 80. The app selector `app-name` matches an existing set of pods on your cluster.
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/vultr-loadbalancer-protocol: "http"
  name: vultr-lb-http
spec:
  type: LoadBalancer
  selector:
    app: app-name
  ports:
    - port: 80
      name: "http"
Notice the annotations in the metadata section. Annotations are how you configure the load balancer, and you'll find the complete list of available annotations in our GitHub repository.
Here is another load balancer example that listens on HTTP port 80 and HTTPS port 443. The SSL certificate is declared as a Kubernetes TLS secret named `ssl-secret`, which this example assumes was already deployed. See the TLS Secrets documentation to learn how to deploy a TLS secret.
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/vultr-loadbalancer-protocol: "http"
    service.beta.kubernetes.io/vultr-loadbalancer-https-ports: "443"
    # You will need to have created a TLS Secret and pass in the name as the value
    service.beta.kubernetes.io/vultr-loadbalancer-ssl: "ssl-secret"
  name: vultr-lb-https
spec:
  type: LoadBalancer
  selector:
    app: app-name
  ports:
    - port: 80
      name: "http"
    - port: 443
      name: "https"
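If you still need to create the `ssl-secret` referenced above, one common way is from a certificate and key pair on disk. The tls.crt and tls.key file names are assumptions:

```shell
# Create a TLS secret named ssl-secret from a local certificate/key pair.
kubectl create secret tls ssl-secret \
  --cert=tls.crt \
  --key=tls.key
```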
As you increase or decrease the number of cluster worker nodes, VKE manages their attachment to the load balancer. If you'd like to learn general information about Kubernetes load balancers, see the documentation at kubernetes.io.
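After applying a service manifest such as the HTTP example above, you can watch for the load balancer's public address to be assigned:

```shell
# The EXTERNAL-IP column changes from <pending> to a public IP once the
# Vultr load balancer is deployed and attached to the service.
kubectl get service vultr-lb-http --watch
```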
VKE Cert Manager
VKE Cert Manager adds certificates and certificate issuers as resource types in VKE and simplifies the process of obtaining, renewing, and using those certificates. Our Cert Manager documentation is on GitHub, and you can use Vultr's Helm chart to install Cert Manager.
VKE ExternalDNS
ExternalDNS makes Kubernetes resources discoverable via public DNS servers. For more information, see our tutorial to set up ExternalDNS with Vultr DNS.
Frequently Asked Questions
What is Vultr Kubernetes Engine?
Vultr Kubernetes Engine is a fully-managed product with predictable pricing that makes Kubernetes easy to use. Vultr manages the control plane and worker nodes and provides integration with other managed services such as Load Balancers, Block Storage, and DNS.
What versions of Kubernetes does VKE Support?
Please see our changelog for information about supported versions of Kubernetes.
How much does Vultr Kubernetes Engine cost?
Vultr Kubernetes Engine includes the managed control plane free of charge. You pay for the Worker Nodes, Load Balancers, and Block Storage resources you deploy. Worker nodes and Load Balancers run on Vultr cloud server instances of your choice with 2 GB of RAM or more. See our hourly rates.
Is there a minimum size for Block Storage volumes?
Yes. The minimum size is 10 GB for NVMe-backed volumes and 40 GB for HDD-backed volumes.
Can I deploy a Bare Metal server to my Kubernetes cluster?
Yes, Vultr Kubernetes Engine supports both Cloud and Bare Metal servers.
Does VKE come with an ingress controller?
No, VKE does not come with an ingress controller preconfigured. Vultr Load Balancers will work with any ingress controller you deploy. Popular ingress controllers include Nginx, HAProxy, and Traefik.
What Container Network Interface (CNI) does VKE use?
VKE uses Calico.
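You can confirm this on a running cluster; the `k8s-app=calico-node` label below is the one used by Calico's standard manifests, so verify it matches your cluster:

```shell
# List the Calico node agents, one per worker node.
kubectl get pods -n kube-system -l k8s-app=calico-node
```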