Use Karmada to Orchestrate Vultr Kubernetes Engine in Multiple Locations
Introduction
Karmada, which stands for "Kubernetes Armada," is an open-source, multi-cloud, multi-cluster Kubernetes orchestration system that lets you run cloud-native applications across multiple Kubernetes clusters and regions. It offers advanced scheduling, high availability, centralized multi-cloud management, failure recovery, and traffic scheduling. Karmada supports deployments on-premises, at the edge, and on public clouds.
This guide explains how to use Karmada with Vultr Kubernetes Engine (VKE). When you follow this guide, you will:
- Create three VKE clusters in Vultr's Mumbai, Paris, and New York regions.
- Create a fourth VKE cluster as the cluster manager for Karmada to manage the other clusters.
- Install the Karmada control plane on the cluster manager and the Karmada agent on the other clusters.
- Deploy an example application and distribute the workload across all clusters.
Prerequisites
Before beginning this guide, you should:
- Have the `kubectl` CLI installed and configured on your local machine.
- Have the Helm client installed on your local machine.
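You can confirm that both tools are installed by checking their versions:
# kubectl version --client
# helm version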
Install Karmada API Server
Deploy four VKE clusters at Vultr with at least three nodes each, and download each cluster's kubeconfig file.
- Deploy one cluster in Mumbai. Name the kubeconfig `kubeconfig-mumbai`.
- Deploy one cluster in Paris. Name the kubeconfig `kubeconfig-paris`.
- Deploy one cluster in New York. Name the kubeconfig `kubeconfig-newyork`.
- Deploy one cluster in any other location to serve as the cluster manager. Name the kubeconfig `kubeconfig-cluster-manager`.
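Before you continue, you can confirm that each cluster is reachable by listing its nodes with the matching kubeconfig:
# kubectl get nodes --kubeconfig=kubeconfig-mumbai
# kubectl get nodes --kubeconfig=kubeconfig-paris
# kubectl get nodes --kubeconfig=kubeconfig-newyork
# kubectl get nodes --kubeconfig=kubeconfig-cluster-manager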
Add the Helm repository to your local machine.
# helm repo add karmada-charts https://raw.githubusercontent.com/karmada-io/karmada/master/charts
Next, verify the added repository.
# helm repo list
Sample output:
NAME             URL
karmada-charts   https://raw.githubusercontent.com/karmada-io/karmada/master/charts
The Karmada API server must be reachable from the other clusters, so you need the public IP address of a cluster manager node to expose it for external access. First, retrieve the external IP address of the cluster manager.
# kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="ExternalIP")].address}' --kubeconfig=kubeconfig-cluster-manager
The example output is shown below. Note that `192.0.2.100` is an example IP address for documentation; substitute the actual address of your cluster manager throughout this guide.

192.0.2.100
Install the Karmada control plane on the cluster manager node.
# helm install karmada karmada-charts/karmada --kubeconfig=kubeconfig-cluster-manager --create-namespace --namespace karmada-system --version=1.2.0 --set apiServer.hostNetwork=false --set apiServer.serviceType=NodePort --set apiServer.nodePort=32443 --set certs.auto.hosts[0]="kubernetes.default.svc" --set certs.auto.hosts[1]="*.etcd.karmada-system.svc.cluster.local" --set certs.auto.hosts[2]="*.karmada-system.svc.cluster.local" --set certs.auto.hosts[3]="*.karmada-system.svc" --set certs.auto.hosts[4]="localhost" --set certs.auto.hosts[5]="127.0.0.1" --set certs.auto.hosts[6]="192.0.2.100"
> Note: Replace the IP `192.0.2.100` with the IP address of the cluster manager.

Sample output:
NAME: karmada
LAST DEPLOYED: Sat Nov 19 22:18:54 2022
NAMESPACE: karmada-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
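The control-plane pods may take a minute or two to start. You can watch them come up on the cluster manager before continuing:
# kubectl get pods -n karmada-system --kubeconfig=kubeconfig-cluster-manager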
Download the kubeconfig configuration file from the cluster manager to connect to the Karmada API:
# kubectl get secret karmada-kubeconfig --kubeconfig=kubeconfig-cluster-manager -n karmada-system -o jsonpath={.data.kubeconfig} | base64 -d > karmada-config
Edit the downloaded kubeconfig file:
# nano karmada-config
Find the following line:
server: https://karmada-apiserver.karmada-system.svc.cluster.local:5443 # <- this works only in the cluster
Replace that line with the cluster manager node's public IP address and the NodePort `32443` you set during installation, so that the kubeconfig works from outside the cluster network.
server: https://192.0.2.100:32443
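To confirm the Karmada API server is now reachable from outside the cluster, run any read-only command against it with the edited kubeconfig, for example:
# kubectl get namespaces --kubeconfig=karmada-config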
Install Karmada Agent on Three Clusters
In this section, you will use Helm to install the Karmada agent on the other three clusters and join them to the Karmada control plane.
First, verify the active status of all clusters.
# kubectl get pods -A --kubeconfig=kubeconfig-mumbai
# kubectl get pods -A --kubeconfig=kubeconfig-paris
# kubectl get pods -A --kubeconfig=kubeconfig-newyork
Next, copy the `certificate-authority-data`, `client-certificate-data`, and `client-key-data` values from the `karmada-config` file, decode each value using base64, and store the decoded values in shell variables named `ca`, `crt`, and `key`. These variables supply the `agent.kubeconfig.caCrt`, `agent.kubeconfig.crt`, and `agent.kubeconfig.key` chart values in the commands below.
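For example, assuming each of those fields sits on a single line in `karmada-config`, you can extract and decode the values like this:
# ca=$(grep certificate-authority-data karmada-config | awk '{print $2}' | base64 -d)
# crt=$(grep client-certificate-data karmada-config | awk '{print $2}' | base64 -d)
# key=$(grep client-key-data karmada-config | awk '{print $2}' | base64 -d)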
Run the following commands to install the Karmada agent on each cluster and link it to the cluster manager.
# helm install karmada karmada-charts/karmada --kubeconfig=kubeconfig-mumbai --create-namespace --namespace karmada-system --version=1.2.0 --set installMode=agent --set agent.clusterName=mumbai --set agent.kubeconfig.caCrt="$ca" --set agent.kubeconfig.crt="$crt" --set agent.kubeconfig.key="$key" --set agent.kubeconfig.server=https://192.0.2.100:32443
# helm install karmada karmada-charts/karmada --kubeconfig=kubeconfig-paris --create-namespace --namespace karmada-system --version=1.2.0 --set installMode=agent --set agent.clusterName=paris --set agent.kubeconfig.caCrt="$ca" --set agent.kubeconfig.crt="$crt" --set agent.kubeconfig.key="$key" --set agent.kubeconfig.server=https://192.0.2.100:32443
# helm install karmada karmada-charts/karmada --kubeconfig=kubeconfig-newyork --create-namespace --namespace karmada-system --version=1.2.0 --set installMode=agent --set agent.clusterName=newyork --set agent.kubeconfig.caCrt="$ca" --set agent.kubeconfig.crt="$crt" --set agent.kubeconfig.key="$key" --set agent.kubeconfig.server=https://192.0.2.100:32443
> Note: Replace the `--kubeconfig` value with the kubeconfig file of each cluster, the `agent.clusterName` value with each cluster's name, and `192.0.2.100` with the IP address of the cluster manager.

Verify the Karmada agent installation.
# kubectl get clusters --kubeconfig=karmada-config
Sample output:
NAME      VERSION   MODE   READY   AGE
mumbai    v1.25.4   Pull   True    95s
newyork   v1.25.4   Pull   True    7s
paris     v1.25.4   Pull   True    31s
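You can also confirm that the agent pods are running on each member cluster. For example, on the Mumbai cluster:
# kubectl get pods -n karmada-system --kubeconfig=kubeconfig-mumbai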
Create Karmada Policy for Orchestrating Multicluster Deployment
In this section, you will create a test deployment, define a policy that assigns the workload, submit the workload to Karmada, and distribute it across all clusters.
First, create a test deployment with three replicas to distribute equally across the three clusters.
# nano deployment.yaml
Add the following configurations.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - image: stefanprodan/podinfo
          name: hello
---
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  ports:
    - port: 5000
      targetPort: 9898
  selector:
    app: hello
Apply the deployment to the Karmada API server.
# kubectl apply -f deployment.yaml --kubeconfig=karmada-config
Sample output:
deployment.apps/hello created service/hello created
Run the following command to verify the deployment.
# kubectl get deployments --kubeconfig=karmada-config
Sample output:
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
hello   0/3     0            0           54s
Let's describe the deployment with detailed information.
# kubectl describe deployment hello --kubeconfig=karmada-config
Sample output:
Name:                   hello
Namespace:              default
CreationTimestamp:      Wed, 23 Nov 2022 14:25:56 +0530
Labels:                 <none>
Annotations:            <none>
Selector:               app=hello
Replicas:               3 desired | 0 updated | 0 total | 0 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=hello
  Containers:
   hello:
    Image:        stefanprodan/podinfo
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type     Reason             Age   From               Message
  ----     ------             ----  ----               -------
  Warning  ApplyPolicyFailed  106s  resource-detector  No policy match for resource
As the ApplyPolicyFailed event shows, no policy matches the resource, so Karmada doesn't know what to do with the deployment.
Create a policy that assigns a replica to each cluster and allocates the workload across them.
# nano policy.yaml
Add the following configuration to define an equal weight for each cluster.
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: hello-propagation
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: hello
    - apiVersion: v1
      kind: Service
      name: hello
  placement:
    clusterAffinity:
      clusterNames:
        - mumbai
        - paris
        - newyork
    replicaScheduling:
      replicaDivisionPreference: Weighted
      replicaSchedulingType: Divided
      weightPreference:
        staticWeightList:
          - targetCluster:
              clusterNames:
                - newyork
            weight: 1
          - targetCluster:
              clusterNames:
                - paris
            weight: 1
          - targetCluster:
              clusterNames:
                - mumbai
            weight: 1
Apply the policy to the Karmada cluster.
# kubectl apply -f policy.yaml --kubeconfig=karmada-config
Sample output:
propagationpolicy.policy.karmada.io/hello-propagation created
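You can confirm that the policy object exists on the Karmada API server:
# kubectl get propagationpolicy --kubeconfig=karmada-config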
Verify that Karmada assigned a pod to each cluster using the following commands.
# kubectl get deployments --kubeconfig=karmada-config
Sample output:
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
hello   3/3     3            3           6m20s
To verify the Mumbai cluster, run:
# kubectl get pods --kubeconfig=kubeconfig-mumbai
Sample output:
NAME                     READY   STATUS    RESTARTS   AGE
hello-588ddff5d8-xlw5b   1/1     Running   0          75s
To verify the Paris cluster, run:
# kubectl get pods --kubeconfig=kubeconfig-paris
Sample output:
NAME                     READY   STATUS    RESTARTS   AGE
hello-588ddff5d8-qntt5   1/1     Running   0          90s
To verify the New York cluster, run:
# kubectl get pods --kubeconfig=kubeconfig-newyork
Sample output:
NAME                     READY   STATUS    RESTARTS   AGE
hello-588ddff5d8-d9pb8   1/1     Running   0          104s
As you can see, Karmada assigned an equal number of pods to each cluster.
Next, run the following command to scale the deployment to 10 replicas.
# kubectl scale deployment/hello --replicas=10 --kubeconfig=karmada-config
Verify the deployment.
# kubectl get deployments --kubeconfig=karmada-config
Sample output:
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
hello   10/10   10           10          12m
Verify the pod distribution on each cluster.
For the Mumbai cluster, run the following:
# kubectl get pods --kubeconfig=kubeconfig-mumbai
Sample output:
NAME                     READY   STATUS    RESTARTS   AGE
hello-588ddff5d8-2xtlb   1/1     Running   0          83s
hello-588ddff5d8-65qdx   1/1     Running   0          83s
hello-588ddff5d8-p4gdr   1/1     Running   0          83s
hello-588ddff5d8-xlw5b   1/1     Running   0          7m32s
For the Paris cluster, run the following:
# kubectl get pods --kubeconfig=kubeconfig-paris
Sample output:
NAME                     READY   STATUS    RESTARTS   AGE
hello-588ddff5d8-bc7ct   1/1     Running   0          97s
hello-588ddff5d8-qntt5   1/1     Running   0          7m46s
hello-588ddff5d8-vkcf8   1/1     Running   0          97s
For the New York cluster, run the following:
# kubectl get pods --kubeconfig=kubeconfig-newyork
Sample output:
NAME                     READY   STATUS    RESTARTS   AGE
hello-588ddff5d8-d9pb8   1/1     Running   0          8m1s
hello-588ddff5d8-fm7df   1/1     Running   0          112s
hello-588ddff5d8-lxk2j   1/1     Running   0          112s
Now, edit the `policy.yaml` file so that the Mumbai and Paris clusters each hold 40% of the pods, leaving only 20% for the New York cluster.
# nano policy.yaml
Change the configuration as shown below.
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: hello-propagation
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: hello
    - apiVersion: v1
      kind: Service
      name: hello
  placement:
    clusterAffinity:
      clusterNames:
        - mumbai
        - paris
        - newyork
    replicaScheduling:
      replicaDivisionPreference: Weighted
      replicaSchedulingType: Divided
      weightPreference:
        staticWeightList:
          - targetCluster:
              clusterNames:
                - newyork
            weight: 1
          - targetCluster:
              clusterNames:
                - paris
            weight: 2
          - targetCluster:
              clusterNames:
                - mumbai
            weight: 2
Apply the policy to the Karmada cluster.
# kubectl apply -f policy.yaml --kubeconfig=karmada-config
Verify the pod distribution again using the following commands.
For the Mumbai cluster, run the following:
# kubectl get pods --kubeconfig=kubeconfig-mumbai
Sample output:
NAME                     READY   STATUS    RESTARTS   AGE
hello-588ddff5d8-2xtlb   1/1     Running   0          4m14s
hello-588ddff5d8-65qdx   1/1     Running   0          4m14s
hello-588ddff5d8-p4gdr   1/1     Running   0          4m14s
hello-588ddff5d8-xlw5b   1/1     Running   0          10m
For the Paris cluster, run the following:
# kubectl get pods --kubeconfig=kubeconfig-paris
Sample output:
NAME                     READY   STATUS    RESTARTS   AGE
hello-588ddff5d8-qntt5   1/1     Running   0          10m
hello-588ddff5d8-vkcf8   1/1     Running   0          4m30s
For the New York cluster, run the following:
# kubectl get pods --kubeconfig=kubeconfig-newyork
Sample output:
NAME                     READY   STATUS    RESTARTS   AGE
hello-588ddff5d8-d9pb8   1/1     Running   0          10m
hello-588ddff5d8-fm7df   1/1     Running   0          4m42s
hello-588ddff5d8-lxk2j   1/1     Running   0          4m42s
hello-588ddff5d8-vd9lv   1/1     Running   0          89s
At this point, pods are running and distributed across the three clusters. You can inspect the service in Karmada using the following command.
# kubectl describe service hello --kubeconfig=karmada-config
Sample output:
Name:              hello
Namespace:         default
Labels:            propagationpolicy.karmada.io/name=hello-propagation
                   propagationpolicy.karmada.io/namespace=default
Annotations:       <none>
Selector:          app=hello
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.101.68.149
IPs:               10.101.68.149
Port:              <unset>  5000/TCP
TargetPort:        9898/TCP
Endpoints:         <none>
Session Affinity:  None
Events:
  Type     Reason                  Age                  From                  Message
  ----     ------                  ----                 ----                  -------
  Warning  ApplyPolicyFailed       16m                  resource-detector     No policy match for resource
  Normal   ApplyPolicySucceed      11m                  resource-detector     Apply policy(default/hello-propagation) succeed
  Normal   SyncSucceed             11m                  execution-controller  Successfully applied resource(default/hello) to cluster mumbai
  Normal   SyncSucceed             11m                  execution-controller  Successfully applied resource(default/hello) to cluster newyork
  Normal   SyncSucceed             11m                  execution-controller  Successfully applied resource(default/hello) to cluster paris
  Normal   SyncWorkSucceed         2m4s (x9 over 11m)   binding-controller    Sync work of resourceBinding(default/hello-service) successful.
  Normal   AggregateStatusSucceed  2m4s (x9 over 11m)   binding-controller    Update resourceBinding(default/hello-service) with AggregatedStatus successfully.
  Normal   ScheduleBindingSucceed  2m4s (x13 over 11m)  karmada-scheduler     Binding has been scheduled
Conclusion
You've finished deploying multi-location, multi-cluster Kubernetes orchestration with Karmada. You can now use Karmada to automate multi-cluster application management in multi-cloud and hybrid-cloud scenarios. For more information, check out the official Karmada documentation.