How to Apply Kubernetes Policies Using Kyverno
Kyverno is a policy engine built for Kubernetes that validates and mutates resources. It is essential because it blocks objects that do not meet the set policy requirements from being applied to the cluster. In addition, it enables you to mutate objects that are missing metadata, such as labels.
In this tutorial, you will learn how to install Kyverno and apply Kubernetes policies and rules using Kyverno.
Prerequisites
You need:
- A running Kubernetes cluster
- Krew
- Kubectl
What Are Kubernetes Policies and Rules?
Kubernetes policies are a group of rules that enforce cluster security compliance and best practices. Policies add the governance layer in Kubernetes as they enable Kubernetes administrators to choose what end users can apply to the cluster. They block manifests that are not compliant when someone tries to apply them to the cluster. Policies can be applied at the cluster level or namespace level. There are different kinds of policies in Kubernetes. Here are examples of Kubernetes policies:
- Network policies
- Volume policies
- Resource usage and consumption policies
- Access control policies
- Security policies
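Network policies, for instance, are standard Kubernetes objects in their own right. As a minimal illustration (the namespace name is illustrative), the following manifest denies all ingress traffic to every pod in a namespace:

```yaml
# Deny all ingress traffic to every pod in the "earth" namespace.
# The empty podSelector matches all pods; listing "Ingress" in
# policyTypes with no ingress rules means no inbound traffic is allowed.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
  namespace: earth
spec:
  podSelector: {}
  policyTypes:
    - Ingress
```

Kyverno complements such built-in policy objects by letting you validate and mutate arbitrary resources with its own policy kinds, as shown below.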
How to Install and Set Up Kyverno
Use the following command to install Kyverno using Kubectl:
$ kubectl create -f https://raw.githubusercontent.com/kyverno/kyverno/release-1.5/definitions/release/install.yaml
After successfully installing Kyverno, you apply Kyverno policies using the kubectl apply command. Kyverno also provides a CLI; with it, instead of the kubectl apply command you use the kubectl kyverno apply command. This tutorial will detail how to use these commands in the next sections.

Use the following command to install the Kyverno CLI:

$ kubectl krew install kyverno
You will get the following output:
Updated the local copy of plugin index.
Installing plugin: kyverno
Installed plugin: kyverno
\
 | Use this plugin:
 | 	kubectl kyverno
 | Documentation:
 | 	https://github.com/kyverno/kyverno
 | Caveats:
 | \
 |  | The plugin requires access to create Policy and CustomResources
 | /
/
Use the following command to check if Kyverno was installed successfully and also to see the details of the Kyverno namespace:
$ kubectl get all -n kyverno
You will get the following output that shows you the details of the Kyverno namespace created when installing Kyverno:
NAME                           READY   STATUS    RESTARTS   AGE
pod/kyverno-56646d79c4-sgp99   0/1     Running   0          19s

NAME                          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/kyverno-svc           ClusterIP   10.107.200.47    <none>        443/TCP   22s
service/kyverno-svc-metrics   ClusterIP   10.101.157.245   <none>        8000/TCP  23s

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/kyverno   0/1     1            0           22s

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/kyverno-56646d79c4   1         1         0       21s
Use the following command to see more about the Kyverno pod's configuration and details:
$ kubectl describe pod -l app=kyverno -n kyverno
How to Apply Kubernetes Policies Using Kyverno
Kyverno can be used to check whether a resource being applied to the cluster stays within the resource limits set in the policy. Here is a Kyverno policy that validates pods based on resource limits: every pod created in the cluster must have a CPU limit of at most 200m and a memory limit of at most 1Gi. Any pod that exceeds these limits will be blocked. Create a YAML file and add the following contents:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-pod-requests-and-limits
spec:
  validationFailureAction: enforce
  rules:
    - name: validate-resource-limits
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "Please make sure that you have resource limits with a max of 200m of CPU and 1Gi of memory."
        pattern:
          spec:
            containers:
              - resources:
                  limits:
                    memory: "<=1Gi"
                    cpu: "<=200m"
Apply the above policy using the following command:
$ kubectl apply -f kyverno-policy.yaml
You will get the following output:
clusterpolicy.kyverno.io/require-pod-requests-and-limits created
The Kyverno policy you just created sets the validationFailureAction field to enforce, which blocks a resource from being applied to the cluster if Kyverno validation fails. The resources map states which resource kinds the policy validates; in this case, the require-pod-requests-and-limits policy will be applied to pods only.
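Besides the cluster-wide ClusterPolicy kind, Kyverno also supports a namespaced Policy kind that enforces rules only within the namespace it lives in. As a sketch, the same validation rule scoped to a single namespace (the earth namespace is reused from the examples in this tutorial) could look like this:

```yaml
# A namespaced Kyverno Policy: unlike ClusterPolicy, it only
# validates pods created in the namespace it is deployed to.
apiVersion: kyverno.io/v1
kind: Policy
metadata:
  name: require-limits-in-earth
  namespace: earth
spec:
  validationFailureAction: enforce
  rules:
    - name: validate-resource-limits
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "Pods in this namespace need CPU and memory limits."
        pattern:
          spec:
            containers:
              - resources:
                  limits:
                    memory: "<=1Gi"
                    cpu: "<=200m"
```

List namespaced policies with kubectl get policies -n earth rather than kubectl get clusterpolicies.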
The message field carries the message that will be displayed when a resource has been blocked from being applied to the cluster:
message: "Please make sure that you have resource limits with a max of 200m of CPU and 1Gi of memory."
Use the following command to see all policies that have been created successfully:
$ kubectl get clusterpolicies
You will get the following output if the policy has been created successfully:
NAME BACKGROUND ACTION READY
require-pod-requests-and-limits true enforce true
Because the require-pod-requests-and-limits policy has been created successfully, test it by creating a pod whose resource limits exceed the limits set in the previous policy:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: earth
spec:
  containers:
    - name: nginx
      image: nginx:1.14.2
      ports:
        - containerPort: 80
      resources:
        requests:
          memory: "64Mi"
          cpu: "100m"
        limits:
          memory: "2Gi"
          cpu: "100m"
Apply the above pod to your cluster using the following command:
$ kubectl apply -f new-pod.yaml
You will get the following error message, which shows that the pod exceeded the resource limits set by Kyverno; the policy requires a memory limit of at most 1Gi, but the pod declared a memory limit of 2Gi:
Error from server: error when creating "new-pod.yaml": admission webhook "validate.kyverno.svc-fail" denied the request:
resource Pod/earth/nginx was blocked due to the following policies
require-pod-requests-and-limits:
validate-resource-limits: 'validation error: Please make sure that you have resource
limits with a max of 200m of CPU and 1Gi of memory. Rule validate-resource-limits
failed at path /spec/containers/0/resources/limits/memory/'
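For comparison, a pod that stays within the policy's limits is admitted without complaint. A sketch of a compliant pod (the name nginx-compliant is illustrative; the limits were chosen to satisfy the policy above):

```yaml
# This pod's limits (512Mi <= 1Gi, 150m <= 200m) satisfy the
# require-pod-requests-and-limits policy, so admission succeeds.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-compliant
  namespace: earth
spec:
  containers:
    - name: nginx
      image: nginx:1.14.2
      resources:
        requests:
          memory: "64Mi"
          cpu: "100m"
        limits:
          memory: "512Mi"
          cpu: "150m"
```

Applying this manifest with kubectl apply should create the pod normally, confirming that the policy only blocks non-compliant resources.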
Kyverno Mutations
Suppose a resource applied to the cluster is missing necessary labels or an image policy. You can patch the resource using a Kyverno mutation. In this section, you will learn how to add labels to objects being applied to the cluster using Kyverno mutations.
Create a YAML file for a policy and add the following contents that will add labels to objects that do not have the author and state labels:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-labels
spec:
  rules:
    - name: add-labels
      match:
        resources:
          kinds:
            - Pod
            - Service
            - ConfigMap
            - Secret
      mutate:
        patchStrategicMerge:
          metadata:
            labels:
              author: boemo
              state: average
The above policy will be applied at the cluster level because it uses the ClusterPolicy kind. The labels will be added to objects of the following kinds that do not already have them:
- Pod
- Service
- ConfigMap
- Secret
The patchStrategicMerge map contains the labels you want to add. You can add any labels you like, such as color or deadline. In this case, you add the author label, which states the name of the author who wrote the manifest, and the state label, which specifies the state and quality of the resource.
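As written, the patchStrategicMerge above overwrites any existing author or state labels. Kyverno's mutation patterns support an add-if-not-present anchor, +(...), that only adds a field when the object does not already define it. A sketch of the same mutate block using this anchor:

```yaml
# Only the mutate block changes; the rest of the policy is as above.
mutate:
  patchStrategicMerge:
    metadata:
      labels:
        # The +( ) anchor means: add this label only if the
        # object does not already define it.
        +(author): boemo
        +(state): average
```

This variant preserves labels that manifest authors set themselves, while still filling in defaults for objects that omit them.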
Apply the above policy using the following command:
$ kubectl apply -f labels.yaml
You will get the following output:
clusterpolicy.kyverno.io/add-labels created
Create a service that will be mutated by the policy you just created:
apiVersion: v1
kind: Service
metadata:
  name: boemo-app
  namespace: default
spec:
  type: LoadBalancer
  ports:
    - name: boemo-app-http
      port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: boemo-app
Apply the above service, then describe it using the following command:
$ kubectl describe service boemo-app
You will get the following output, which shows that the service you created has the new labels added by the add-labels policy:
Name: boemo-app
Namespace: default
Labels: author=boemo
state=average
Annotations: policies.kyverno.io/last-applied-patches: add-labels.add-labels.kyverno.io: added /metadata/labels
Selector: app=boemo-app
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.104.227.203
IPs: 10.104.227.203
Port: boemo-app-http 80/TCP
TargetPort: 80/TCP
NodePort: boemo-app-http 31850/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
How to Use the Kyverno CLI
The Kyverno CLI validates and tests resources outside of the cluster. Use the following command to validate a resource against a Kyverno policy:
$ kubectl kyverno apply [add kyverno policy yaml here] --resource [add the yaml file you want to validate here]
For example, test the previous pod against the require-pod-requests-and-limits policy you created earlier:
$ kubectl kyverno apply kyverno-policy.yaml --resource new-pod.yaml
You will get the following output if the resource does not violate the Kyverno policy:
Applying 1 policy to 1 resource...
(Total number of result count may vary as the policy is mutated by Kyverno. To check the mutated policy please try with log level 5)
pass: 1, fail: 0, warn: 0, error: 0, skip: 2
If your resource violates the Kyverno policy, you will get the following output:
(Total number of result count may vary as the policy is mutated by Kyverno. To check the mutated policy please try with log level 5)
policy require-pod-requests-and-limits -> resource earth/Pod/nginx failed:
1. validate-resource-limits: validation error: Please make sure that you have resource limits with a max of 200m of CPU and 1Gi of memory. Rule validate-resource-limits failed at path /spec/containers/0/resources/limits/memory/
pass: 0, fail: 1, warn: 0, error: 0, skip: 2
Error: exit status 1
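The Kyverno CLI can also run declarative policy tests, so you can keep expected outcomes in version control. One approach (a sketch following the kyverno test convention; the test name and file paths below assume the files created earlier in this tutorial) is to write a kyverno-test.yaml alongside your policy and resource files:

```yaml
# kyverno-test.yaml: declares which policies run against which
# resources, and the result each rule is expected to produce.
name: require-limits-test
policies:
  - kyverno-policy.yaml
resources:
  - new-pod.yaml
results:
  - policy: require-pod-requests-and-limits
    rule: validate-resource-limits
    resource: nginx
    kind: Pod
    result: fail
```

Running kubectl kyverno test . in that directory then reports whether each rule produced its expected result, which makes policy changes safe to review like any other code change.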
Learn More
To learn more about Kyverno, see the project documentation.