Deploy Serverless Workloads on Vultr Kubernetes Engine with Knative Serving
Introduction
Knative is an open-source project that offers a set of components to simplify the configuration of services on Kubernetes. It allows developers to deploy, run, and manage serverless cloud-native applications quickly without managing the underlying infrastructure directly. This reduces the time spent on routine tasks such as creating pods, load balancing, auto-scaling, and routing traffic, among other cluster operations.
Knative Serving offers a set of components that allow you to deploy and manage serverless workloads on Kubernetes. You can deploy an application and enable automatic scaling based on the incoming user traffic load. Below are the key components of Knative Serving:
- Service: A top-level resource that defines and manages serverless workloads. You can define container images, environment variables, and scaling settings in service resources
- Route: Maps external network traffic to a specific Knative Service. A route can distribute incoming requests among different revisions of the same service
- Revision: A snapshot of a specific Knative service version. A new revision is created after every deployment which enables versioning and allows you to roll back changes
- Configuration: Maintains the desired state of a service and is associated with one or more revisions
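These components come together in a Service manifest. Below is a minimal sketch, assuming a hypothetical `hello` service and container image, that shows where the image, environment variables, and revision template live:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service               # top-level Knative Service resource
metadata:
  name: hello               # hypothetical service name
spec:
  template:                 # every change to this template creates a new Revision
    spec:
      containers:
        - image: docker.io/example/hello-app   # hypothetical image
          env:
            - name: TARGET                     # example environment variable
              value: "World"
```

When you apply a Service manifest, Knative creates the matching Route and Configuration resources for you.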
In this tutorial, deploy serverless workloads on a Vultr Kubernetes Engine (VKE) cluster with Knative Serving. You will deploy an Express application as a serverless workload using Knative Serving.
Prerequisites
Before you begin:
- Deploy a Vultr Kubernetes Engine (VKE) cluster with at least 5 nodes
- Deploy an Ubuntu server to work as your management machine
- Using SSH, access the server as a non-root sudo user
- Install Kubectl on your local machine to access the cluster
Install the Knative CLI Tool
Download the latest Knative CLI release for Linux systems
$ wget https://github.com/knative/client/releases/download/knative-v1.11.0/kn-linux-amd64
When using a different operating system, visit the Knative CLI releases page to download the latest version
Move the downloaded binary to the `/usr/local/bin/` directory to enable it as a system-wide command

$ sudo mv kn-linux-amd64 /usr/local/bin/kn
Make the `kn` binary executable

$ sudo chmod +x /usr/local/bin/kn
Verify the Knative CLI version
$ kn version
Output:
Version:      v1.11.0
Build Date:   2023-07-27 07:42:56
Git Revision: b7508e67
Supported APIs:
* Serving
  - serving.knative.dev/v1 (knative-serving v1.11.0)
* Eventing
Install Knative Serving
To deploy and manage serverless applications in your Vultr Kubernetes Engine (VKE) cluster, install Knative Serving as described in the steps below.
- This article uses Knative Serving version `1.11.0`. Visit the Knative Serving releases page to verify the latest version to install in your cluster.
Install the Knative Custom Resource Definitions (CRDs) that define and control how your serverless workloads behave in the cluster
$ kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.11.0/serving-crds.yaml
Install the Knative Serving core components
$ kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.11.0/serving-core.yaml
Install the Knative Kourier controller and enable its Knative integration
$ kubectl apply -f https://github.com/knative/net-kourier/releases/download/knative-v1.11.0/kourier.yaml
The above command installs the Kourier controller that works as a networking layer to expose Knative applications to external networks. Knative Serving also supports other networking layers such as Istio and Contour.
Using Kubectl, edit the `config-network` ConfigMap and configure Knative Serving to use Kourier as the networking layer

$ kubectl patch configmap/config-network --namespace knative-serving --type merge --patch '{"data":{"ingress-class":"kourier.ingress.networking.knative.dev"}}'
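The patch merges the `ingress-class` key into the existing ConfigMap data without replacing its other keys. As a sketch, the resulting `config-network` ConfigMap contains an entry like the following:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-network
  namespace: knative-serving
data:
  # Tells Knative Serving to program Kourier for ingress
  ingress-class: "kourier.ingress.networking.knative.dev"
```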
Wait at least 3 minutes for the load balancer to provision, then view the external address assigned to the Kourier controller
$ kubectl --namespace kourier-system get service kourier
Output:
NAME      TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
kourier   LoadBalancer   10.105.102.208   172.20.2.1    80:32638/TCP,443:31165/TCP   2m41s
Install the Knative Serving DNS configuration to use the default `sslip.io` domain
$ kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.11.0/serving-default-domain.yaml
To replace the default `sslip.io` domain, point a domain record to your load balancer IP address to access the Knative services

Verify that the Knative Serving components are active and running
$ kubectl get pods -n knative-serving
Output:
NAME                                      READY   STATUS      RESTARTS   AGE
activator-5c48bb4df9-btsx2                1/1     Running     0          4m58s
autoscaler-85b4ddb94b-hdfgx               1/1     Running     0          4m56s
controller-575457d5c-swzr4                1/1     Running     0          4m53s
default-domain-pl9fp                      0/1     Completed   0          87s
net-kourier-controller-7c7f588b78-n9m8k   1/1     Running     0          4m29s
webhook-6859dd7cbf-9764l                  1/1     Running     0          4m48s
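If you later point a custom domain at the load balancer, you can register it in the `config-domain` ConfigMap instead of using `sslip.io`. A minimal sketch, assuming `example.com` as a placeholder domain:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-domain
  namespace: knative-serving
data:
  # Replace example.com with your domain; an empty value makes it the default
  example.com: ""
```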
Deploy a Serverless Application with Knative Serving
To implement Knative Serving in your cluster, deploy a serverless application and verify access using the default domain record. There are two methods you can apply to deploy a serverless application with Knative Serving: using the Knative CLI or Kubernetes manifest files. In this section, deploy an Express application using either of the methods described below.
Deploy a Serverless Application Using Knative CLI (Recommended)
The Knative CLI interacts with Knative components installed in your cluster. The tool enables the fast deployment and management of applications in a cluster. Deploy an Express application using Knative CLI as described below.
Create a new `knative-app` namespace for your application

$ kubectl create namespace knative-app
Using the Knative CLI, deploy your Express application to the `knative-app` namespace. Replace `karnadocker/express-app` with your desired Docker image source

$ kn service create express-service --image karnadocker/express-app --port 8080 --namespace knative-app
Your output should look like the one below:
Creating service 'express-service' in namespace 'knative-app':

  4.821s Configuration "express-service" is waiting for a Revision to become ready.
  4.821s Ingress has not yet been reconciled.
  4.966s Waiting for load balancer to be ready
  5.099s Ready to serve.

Service 'express-service' created to latest revision 'express-service-00001' is available at URL:
http://express-service.knative-app.172.20.2.1.sslip.io
List Knative services and verify that the application is successfully created
$ kn service list --namespace knative-app
Output:
NAME              URL                                                       LATEST                  AGE   CONDITIONS   READY   REASON
express-service   http://express-service.knative-app.172.20.2.1.sslip.io   express-service-00001   27s   3 OK / 3     True
Describe the service to view information about the Express application
$ kn service describe express-service --namespace knative-app
Output:
Name:       express-service
Namespace:  knative-app
Age:        2m
URL:        http://express-service.knative-app.172.20.2.1.sslip.io

Revisions:
  100%  @latest (express-service-00001) [1] (2m)
        Image:     karnadocker/express-app (pinned to 8ab6c7)
        Replicas:  0/0

Conditions:
  OK TYPE                   AGE REASON
  ++ Ready                   1m
  ++ ConfigurationsReady     1m
  ++ RoutesReady             1m
View the application URL
$ kn route list --namespace knative-app
Output:
NAME              URL                                                       READY
express-service   http://express-service.knative-app.172.20.2.1.sslip.io   True
Using Curl, query the application URL and verify that it displays a result
$ curl http://express-service.knative-app.172.20.2.1.sslip.io
Output:
Express Hello World Application!
To further test the application status, use a web browser and visit the application URL
To delete the application, run `kn service` with the `delete` option

$ kn service delete express-service --namespace knative-app
Deploy a Serverless Application Using a YAML File
You can deploy serverless applications to your cluster using YAML files. This method allows you to implement version control for your application workloads. Deploy an Express application to your cluster as described below.
Create a new `knative-app` namespace

$ kubectl create namespace knative-app
Using a text editor such as Nano, create a new YAML resource file `knative-service.yaml`

$ nano knative-service.yaml
Add the following configurations to the file. Replace `karnadocker/express-app` with your desired Docker image source

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: knative-express-service
  namespace: knative-app
spec:
  template:
    metadata:
      name: knative-express-service-v1
    spec:
      containers:
        - image: docker.io/karnadocker/express-app
          ports:
            - containerPort: 8081
Save and close the file
Apply the resource to your cluster
$ kubectl apply -f knative-service.yaml
Verify that the service is available in the `knative-app` namespace

$ kubectl get ksvc --namespace knative-app
Output:
NAME                      URL                                                               LATESTCREATED                LATESTREADY                  READY   REASON
express-service           http://express-service.knative-app.172.20.2.1.sslip.io           express-service-00001        express-service-00001        True
knative-express-service   http://knative-express-service.knative-app.172.20.2.1.sslip.io   knative-express-service-v1   knative-express-service-v1   True
View the application URL
$ kn route list --namespace knative-app
Output:
NAME                      URL                                                               READY
express-service           http://express-service.knative-app.172.20.2.1.sslip.io           True
knative-express-service   http://knative-express-service.knative-app.172.20.2.1.sslip.io   True
Using Curl, visit the application URL
$ curl http://knative-express-service.knative-app.172.20.2.1.sslip.io
Output:
Express Hello World Application!
The above output verifies that the application is running correctly in your cluster
Scaling a Knative Service in Kubernetes
Knative can scale services automatically based on incoming traffic and the configured scaling policies. It uses the Knative Pod Autoscaler (KPA) to scale the number of pods automatically. When incoming traffic increases, KPA scales up by creating new pods based on the available configuration. When there is no incoming traffic, KPA scales down by deleting pods to save cluster resources.
The KPA offers many configuration options to control the autoscaling behavior as implemented in this section.
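Cluster-wide KPA defaults live in the `config-autoscaler` ConfigMap in the `knative-serving` namespace. The sketch below shows a few commonly tuned keys with their documented default values; per-revision annotations and CLI options, as used in the following steps, override these defaults:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-autoscaler
  namespace: knative-serving
data:
  container-concurrency-target-default: "100"  # soft target of concurrent requests per pod
  stable-window: "60s"                         # window the autoscaler averages metrics over
  scale-to-zero-grace-period: "30s"            # how long to wait before removing the last pod
```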
Set Scaling Limits
Knative Serving allows you to set scaling limits for your application to control the number of pods created for a revision to handle requests. This allows you to manage cluster resource utilization and prevent unexpected resource consumption. You can set scaling limits using the `--scale-min` and `--scale-max` options as described below.
Set your `express-service` application minimum and maximum scaling limit

$ kn service update express-service --scale-min 1 --scale-max 20 --namespace knative-app
Output:
Updating Service 'express-service' in namespace 'knative-app':

  1.447s Traffic is not yet migrated to the latest revision.
  1.447s Ingress has not yet been reconciled.
  1.448s Waiting for load balancer to be ready
  1.449s Ready to serve.

Service 'express-service' updated to latest revision 'express-service-00002' is available at URL:
http://express-service.knative-app.172.20.2.1.sslip.io
Verify the application scaling limit
$ kn revision describe express-service-00002 --namespace knative-app
Output:
Name:         express-service-00002
Namespace:    knative-app
Annotations:  autoscaling.knative.dev/max-scale=20, autoscaling.knative.dev/min-scale=1
Age:          1m
Image:        index.docker.io/karnadocker/express-app@sha256:8ab6c7d772bdc8f8121b88754a4b82ba5083660d2a66cd8a2c3b93d6a1c660a3 (at 8ab6c7)
Replicas:     1/1
Port:         8080
Scale:        1 ... 20
Service:      express-service

Conditions:
  OK TYPE                  AGE REASON
  ++ Ready                  1m
  ++ ContainerHealthy       1m
  ++ ResourcesAvailable     1m
  ++ Active                 1m
As displayed in the above output, the max scale and min scale annotations match your scaling limits
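The same limits can be declared in a Service manifest instead of set through the CLI. A sketch of the equivalent revision template annotations, assuming the `knative-express-service` manifest from the previous section:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: knative-express-service
  namespace: knative-app
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/min-scale: "1"   # keep at least one pod running
        autoscaling.knative.dev/max-scale: "20"  # never exceed twenty pods
    spec:
      containers:
        - image: docker.io/karnadocker/express-app
```

Because the annotations sit on the revision template, applying this change creates a new revision that carries the scaling limits.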
Set Concurrency Limits
Using concurrency limits, you can control the number of concurrent requests each pod can process. When the concurrency rate exceeds the defined limit, Knative scales up the application by creating additional pods to handle the load. If the concurrency drops below the given limit, the application scales down by deleting unused pods. In this section, set the concurrency limits as described below.
To set the `express-service` application concurrency limit to `10`, update the service with the `--concurrency-limit` value
$ kn service update express-service --concurrency-limit 10 --namespace knative-app
Output:
Updating Service 'express-service' in namespace 'knative-app':
2.360s Traffic is not yet migrated to the latest revision.
2.360s Ingress has not yet been reconciled.
2.361s Waiting for load balancer to be ready
2.420s Ready to serve.
Service 'express-service' updated to latest revision 'express-service-00003' is available at URL:
http://express-service.knative-app.172.20.2.1.sslip.io
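In a manifest, the same hard limit is expressed with the `containerConcurrency` field on the revision template. A sketch, again assuming the `knative-express-service` manifest:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: knative-express-service
  namespace: knative-app
spec:
  template:
    spec:
      containerConcurrency: 10   # each pod serves at most 10 requests at a time
      containers:
        - image: docker.io/karnadocker/express-app
```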
Verify Auto Scaling
To test auto scaling on your application, use a load-generation tool such as Hey to send continuous requests to your application as described below.
Install the hey CLI tool.
$ sudo apt install hey -y
View the list of running pods in the `knative-app` namespace

$ kubectl get pods -n knative-app
Output:
NAME                                                READY   STATUS    RESTARTS   AGE
express-service-00002-deployment-6fbdc5b5b8-4nvb6   1/1     Running   0          3m39s
express-service-00003-deployment-5776757969-l5tbk   1/1     Running   0          34m
Send 300 seconds of traffic while maintaining 5000 concurrent requests to your application using the following command

$ hey -z 300s -c 5000 "http://express-service.knative-app.172.20.2.1.sslip.io?sleep=100&prime=10000&bloat=500"
Verify that the pods are scaling up
$ kubectl get pods -n knative-app
Your output should look like the one below:
NAME                                                READY   STATUS    RESTARTS   AGE
express-service-00001-deployment-7dcd484688-kwnjx   2/2     Running   0          9s
express-service-00001-deployment-7dcd484688-qst6c   2/2     Running   0          12s
express-service-00002-deployment-6fbdc5b5b8-4nvb6   2/2     Running   0          3m39s
express-service-00003-deployment-5776757969-l5tbk   2/2     Running   0          34m
Route Traffic to Revisions
Each time you create or update a service, Knative creates a new revision. Route incoming traffic to revisions as described in the steps below.
List all `express-service` application revisions

$ kn revision list --namespace knative-app
Output:
NAME                    SERVICE           TRAFFIC   TAGS   GENERATION   AGE     CONDITIONS   READY   REASON
express-service-00003   express-service   100%             3            82s     4 OK / 4     True
express-service-00002   express-service                    2            5m25s   3 OK / 4     True
express-service-00001   express-service                    1            34m     3 OK / 4     True
Verify the percentage of requests routed to specific revisions in the `TRAFFIC` column. In the above output, Knative routes 100% of requests to only the `express-service-00003` revision.

View detailed information about the `express-service-00003` revision

$ kn revision describe express-service-00003 --namespace knative-app
Output:
Name:         express-service-00003
Namespace:    knative-app
Annotations:  autoscaling.knative.dev/max-scale=20, autoscaling.knative.dev/min-scale=1
Age:          25s
Image:        index.docker.io/karnadocker/express-app@sha256:8ab6c7d772bdc8f8121b88754a4b82ba5083660d2a66cd8a2c3b93d6a1c660a3 (at 8ab6c7)
Replicas:     1/1
Port:         8080
Scale:        1 ... 20
Concurrency:
  Limit:      10
Service:      express-service

Conditions:
  OK TYPE                  AGE REASON
  ++ Ready                 23s
  ++ ContainerHealthy      23s
  ++ ResourcesAvailable    23s
  ++ Active                23s
To distribute traffic among all three service revisions, apply the `--traffic` option to update the service using the following command

$ kn service update express-service --traffic express-service-00001=25 --traffic express-service-00002=35 --traffic express-service-00003=40 --namespace knative-app
Verify that the traffic is distributed among all revisions
$ kn revision list --namespace knative-app
Output:
NAME                    SERVICE           TRAFFIC   TAGS   GENERATION   AGE   CONDITIONS   READY   REASON
express-service-00003   express-service   40%              3            23m   4 OK / 4     True
express-service-00002   express-service   35%              2            27m   4 OK / 4     True
express-service-00001   express-service   25%              1            56m   3 OK / 4     True
Verify the traffic distribution values displayed in the `TRAFFIC` column
Conclusion
You have deployed a serverless application on a Vultr Kubernetes Engine (VKE) cluster. You implemented the different deployment methods and scaled the serverless application using concurrency and scaling limits. By using Knative Serving, you can streamline the process of building, deploying, and scaling serverless workloads in your cluster. For more information about Knative Serving, visit the Knative Serving documentation.