How to Deploy an Express.js Application on Vultr Kubernetes Engine
Introduction
Express is a web application framework for Node.js that provides features for creating robust, fast, and scalable web applications. It simplifies routing, request and response handling, and middleware management, and it integrates with various databases and template engines, streamlining the development of efficient web applications.
Deploying Express on Kubernetes offers several benefits, such as scalability, high availability, and support for multiple environments. You can scale your Express application on the Vultr Kubernetes Engine to ensure it can handle increased traffic. A Kubernetes deployment also keeps your application highly available and fault-tolerant, and it lets you deploy and manage multiple environments such as development, staging, and production.
This article demonstrates the steps to create an example application using Express, containerize it and deploy it on the Vultr Kubernetes Engine.
Prerequisites
Before you begin, you should:
- Deploy a Vultr Kubernetes Engine cluster.
- Have access to the DNS settings of a domain name. This article uses express.example.com for demonstration.
- Deploy a Cloud Compute instance with the Docker marketplace application to use as a management workstation. On the management workstation:
- Install Kubectl.
- Download your VKE configuration and configure Kubectl.
Create a Web Application
This section walks you through the steps to create a basic application using Express for demonstration. You can skip to the next section to containerize your existing application.
Install the nodejs and the npm packages.
# apt install nodejs npm
The above command installs the nodejs and the npm packages on the server.
Create and enter a new directory named express-demo.
# mkdir express-demo
# cd express-demo
The above commands create and enter the express-demo directory. You use this directory to store all the files related to this application.
Initialize the npm package.
# npm init
The above command initializes the package and creates a new file named package.json in the directory that contains all the information about the project, such as name, version, dependencies, and so on.
Install the Express library using npm.
# npm install express
The above command installs the Express library. It stores the library files in the node_modules directory and updates the dependencies section in the package.json file.
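If you want to confirm the installation, you can display the package.json file and check that express appears under the dependencies section:
# cat package.json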
Create a new file named app.js.
# nano app.js
Add the following contents to the file.
const express = require('express')
const app = express()

app.get('/', (req, res) => {
    res.send('<h1>Hello World, Greetings from Vultr</h1>')
})

app.listen(3000, () => {
    console.log('Server Listening on Port 3000')
})
The above code creates a single endpoint /, which responds with Hello World, Greetings from Vultr to incoming GET requests on port 3000.
Disable the firewall.
# ufw disable
The above command disables the firewall to allow incoming connections on port 3000.
Run the Express application.
# node app.js
The above command starts the Express server. You can verify access by opening http://PUBLIC_IP:3000 in your web browser. Stop the server using Ctrl + C.
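If you prefer the command line, you can also test the endpoint with curl from another terminal session while the server is running. Replace PUBLIC_IP with your server's public IP address:
# curl http://PUBLIC_IP:3000/
The command should print the Hello World HTML response.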
Containerize the Express Application
You must build a container image for your Express application to deploy it on the Kubernetes cluster. A container image contains all the dependencies and application files essential for running the service. To use the container image in the Kubernetes cluster, you must ensure the cluster can pull the image. This section demonstrates the steps to containerize an Express application and push it to a private repository on DockerHub.
Create a new file named Dockerfile.
# nano Dockerfile
The Dockerfile declares the steps to build the container image.
Add the following contents to the file.
FROM node:latest
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "app.js"]
The above instructions use the official Node.js container image as the base image. The build sets /app as the working directory and copies the package.json and the package-lock.json files using a wildcard pattern before running the npm install command to install all the dependencies. It then copies all the other application files, exposes port 3000, and starts the Express server using the node app.js command.
Create a new file named .dockerignore.
# nano .dockerignore
The .dockerignore file lists the files and directories to exclude while building the image, preventing the inclusion of unnecessary data.
Add the following content to the file.
node_modules/
This prevents copying the libraries installed on the host machine into the image and forces Docker to install all the libraries listed under the dependencies section of package.json. You can add any other file or directory to this file to exclude it from the container image.
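For example, assuming your project also contains a local Git directory, environment files, or npm debug logs, you could extend the file like this:
.git/
.env
npm-debug.log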
Create a new private repository on DockerHub.
- Go to the DockerHub website and log in to your account.
- Navigate to the "Repositories" tab in the top menu.
- Click the "Create Repository" button.
- Enter a name for your repository and select "Private" from the visibility options.
- Click the "Create" button to create your new private repository.
Log in to the DockerHub account.
# docker login
The above command prompts you to enter your DockerHub credentials, which are required to push the image to DockerHub.
Build & push the image.
# docker build -t DOCKERHUB_USERNAME/REPO_NAME:latest .
# docker push DOCKERHUB_USERNAME/REPO_NAME:latest
The above commands build the container image and push it to DockerHub. You must push the image to a container registry so that the Kubernetes cluster can pull it with the proper authorization. You use this DockerHub repository to fetch the container image in a Kubernetes manifest in the next sections.
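Optionally, you can verify that the image works by running it locally and opening port 3000 in your browser, as in the earlier test:
# docker run --rm -p 3000:3000 DOCKERHUB_USERNAME/REPO_NAME:latest
Stop the container with Ctrl + C, or with docker stop from another terminal, when you finish testing.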
Prepare the Kubernetes Cluster
You must prepare the Kubernetes cluster for deploying the Express application by installing the required plugins and creating a few resources. This section demonstrates the steps to install the ingress-nginx controller, install the cert-manager plugin, and create a ClusterIssuer resource for issuing Let's Encrypt certificates.
Install the ingress-nginx controller and the cert-manager plugin.
# kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.5.1/deploy/static/provider/cloud/deploy.yaml
# kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.10.0/cert-manager.yaml
The above commands install the ingress-nginx controller and the cert-manager plugin on the Kubernetes cluster using the official manifest files. The ingress-nginx controller provisions a load balancer to handle incoming HTTP requests for the Ingress resources.
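Both manifests create their own namespaces. You can verify that all pods are running before you continue:
# kubectl get pods -n ingress-nginx
# kubectl get pods -n cert-manager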
Fetch the load balancer IP address.
# kubectl get services/ingress-nginx-controller -n ingress-nginx
It may take up to 5 minutes before the load balancer is ready. You can confirm the deployment by going to the customer portal, opening the cluster page, and navigating to the Linked Resources tab. You should see a new load balancer resource linked to the cluster. You must point the A record for express.example.com to the IP address of the load balancer.
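After you update the DNS record, you can optionally confirm that the hostname resolves to the load balancer IP address, for example with the dig utility:
# dig +short express.example.com
DNS propagation may take several minutes depending on your provider.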
Create a new file named le_clusterissuer.yaml.
# nano le_clusterissuer.yaml
Add the following contents to the file.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: "YOUR_EMAIL"
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: nginx
The above manifest creates a ClusterIssuer resource for issuing Let's Encrypt certificates. It uses the HTTP01 challenge solver to verify domain ownership. You must change the spec.acme.email value to your email address.
Apply the manifest file.
# kubectl apply -f le_clusterissuer.yaml
Verify the deployment.
# kubectl get clusterissuer letsencrypt-prod
Create the secret resource for DockerHub credentials.
# kubectl create secret docker-registry regcred --docker-username=DOCKERHUB_USER --docker-password=DOCKERHUB_PASS --docker-email=DOCKERHUB_EMAIL
The above command creates a new secret resource with your DockerHub credentials, which the deployment uses in the next section to pull the image you built in the previous section.
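You can confirm that the secret exists on the cluster:
# kubectl get secret regcred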
Deploy the Express Application
You installed the required plugins and add-ons in the previous section, such as the cert-manager plugin and the ingress-nginx controller. You also created a new secret resource containing the DockerHub credentials. This section demonstrates the steps to create a new Deployment resource which uses the container image to spawn the Express server inside the pods. Additionally, you create a Service resource to expose the connections within the cluster and an Ingress resource for setting up external access.
Create a new file named express-deployment.yaml.
# nano express-deployment.yaml
Declare the Deployment resource.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: express-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      name: express-app
  template:
    metadata:
      labels:
        name: express-app
    spec:
      imagePullSecrets:
        - name: regcred
      containers:
        - name: express
          image: DOCKERHUB_USERNAME/REPO_NAME:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 3000
This part creates a new deployment resource named express-deployment that creates 3 initial pods using the container image you built in the previous sections. It uses the regcred secret resource to authenticate against DockerHub and pull the private image. You must replace the DOCKERHUB_USERNAME and the REPO_NAME values with your details.
Declare the Service resource.
---
apiVersion: v1
kind: Service
metadata:
  name: express-service
spec:
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 3000
  selector:
    name: express-app
This part creates a new service resource named express-service that exposes the pods running the Express server inside the cluster using the name: express-app selector. The service detects new pods carrying the same label even when the deployment scales in the future.
Declare the Ingress resource.
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: express-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
    - secretName: express-tls
      hosts:
        - express.example.com
  rules:
    - host: express.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: express-service
                port:
                  number: 80
This part creates a new ingress resource named express-ingress that enables external access to the express-service resource. It uses the letsencrypt-prod cluster issuer resource to issue a new SSL certificate for express.example.com and store it as the express-tls secret resource.
Save the file using Ctrl + X then Enter.
Apply the manifest file.
# kubectl apply -f express-deployment.yaml
The above command creates the deployment, service and ingress resources on the cluster.
Verify the deployment.
# kubectl get deployment express-deployment
# kubectl get svc express-service
# kubectl get ingress express-ingress
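Because certificate issuance can take a few minutes, you can also check the status of the certificate that cert-manager creates for the express-tls secret:
# kubectl get certificate express-tls
The READY column should show True once the Let's Encrypt challenge completes.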
You can now access the Express application by opening https://express.example.com in your web browser. The Let's Encrypt certificate stored in the express-tls secret resource secures the application. The cert-manager plugin automatically renews the certificate when it comes close to the expiry date.
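You can also confirm the HTTPS response from the management workstation, for example:
# curl -I https://express.example.com
The output should report an HTTP 200 status.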
Scale the Express Deployment
This section demonstrates the steps to increase or decrease the number of replicas running for the Express deployment resource. You can scale your deployment to efficiently handle the incoming traffic and avoid interruptions.
Increase the number of replicas.
# kubectl scale deployment/express-deployment --replicas=6
The above command increases the number of deployment/express-deployment replicas to 6.
Verify the change.
# kubectl get deployment express-deployment
# kubectl get pods
Decrease the number of replicas.
# kubectl scale deployment/express-deployment --replicas=2
The above command decreases the number of deployment/express-deployment replicas to 2.
Verify the change.
# kubectl get deployment express-deployment
# kubectl get pods
You can also test the fault tolerance of your deployment by deleting any running pod using the kubectl delete pod command. Kubernetes detects the state change and instantly creates a new pod to keep the deployment in a healthy state. The service resource detects the new pods and starts routing incoming requests to them in no time.
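For example, you can pick any pod from the list and delete it (the pod name below is only a placeholder), then watch the deployment recreate it:
# kubectl get pods
# kubectl delete pod POD_NAME
# kubectl get pods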
Conclusion
This article demonstrated the steps to create an example application using Express, containerize it, and deploy it on the Vultr Kubernetes Engine. It also walked you through installing the cert-manager plugin for issuing Let's Encrypt certificates and the ingress-nginx controller for setting up external access.
Express is primarily focused on handling HTTP requests. It does not include any specific database functionality because database handling is a separate concern that varies with the project requirements and the database used. This article did not cover the deployment of a database server for the same reason. You can refer to the MongoDB on Vultr Kubernetes Engine article or use the Vultr Managed Database service as the database backend for your application.