How to Automate DNS/TLS with External DNS and Let’s Encrypt on Vultr Kubernetes Engine

Updated on September 23, 2022

By default, Kubernetes requires you to manage some portions of your application by hand, such as DNS records and TLS certificates. But wouldn't it be better to define your desired DNS and TLS alongside the application manifests?

The good news is that you can use two open-source Kubernetes plugins to automate the process. This guide explains how to install and configure ExternalDNS for DNS management in your manifests and cert-manager to handle certificate management.

Things you'll need

To follow this guide, you'll need:

  • A Vultr account with an API key
  • A Vultr Kubernetes Engine (VKE) cluster
  • A registered domain name
  • vultr-cli installed and configured

You should also know how to:

  • Manage a VKE cluster with Kubectl
  • Use Helm
  • Create YAML definition files

Step 1: Set up the Domain

Log in to the registrar where you purchased your domain and set the nameserver (NS) records to Vultr's name servers:

  • ns1.vultr.com
  • ns2.vultr.com

Use vultr-cli to create a new DNS zone at Vultr.

$ vultr-cli dns domain create -d example.com

DOMAIN      DATE CREATED              DNS SEC
example.com 2022-03-19T19:12:00+00:00 disabled

Verify the zone with vultr-cli.

$ vultr-cli dns record list example.com

ID                                    TYPE  NAME DATA      PRIORITY  TTL
87be33b9-24fb-4502-9559-7eace63da9f7  NS    ns1.vultr.com  -1        300
de8edb75-7061-4c50-be79-4b67535aeb92  NS    ns2.vultr.com  -1        300

About Cert-manager

At a high level, cert-manager is a Kubernetes add-on that introduces custom resources for managing certificates natively within Kubernetes.


Cert-manager can:

  • Add certificates and certificate issuers as resource types in Kubernetes clusters
  • Simplify the process of obtaining, renewing, and using certificates
  • Issue certificates from a variety of sources, like Let's Encrypt, HashiCorp Vault, and Venafi, as well as private PKI
  • Automatically renew certificates

Vultr offers a custom cert-manager webhook so users can issue certificates through YAML manifests. This guide uses the Vultr cert-manager-webhook plugin to handle TLS certificates for your domains.

Step 2: Cert-manager Installation

In this step, you'll install the base cert-manager and the Vultr-specific cert-manager-webhook.

First, install the base cert-manager with kubectl apply as described in the cert-manager documentation.

$ kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.7.1/cert-manager.yaml

After applying the YAML, you can inspect the related resources in the cert-manager namespace.
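
For example, listing the pods shows the base cert-manager components:

$ kubectl get pods --namespace cert-manager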

Next, create a secret that contains your Vultr API key. Cert-manager uses this secret to create the DNS entries required for domain validation.

$ kubectl create secret generic "vultr-credentials" --from-literal=apiKey=<VULTR API KEY> --namespace=cert-manager

Now install the Vultr-specific cert-manager-webhook with Helm.
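
The Helm chart in the next command is referenced by a local path, so clone the Vultr webhook repository first (this assumes the chart ships in the repository's deploy directory, as the install command below implies):

$ git clone https://github.com/vultr/cert-manager-webhook-vultr.git
$ cd cert-manager-webhook-vultr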

$ helm install --namespace cert-manager cert-manager-webhook-vultr ./deploy/cert-manager-webhook-vultr

Verify that the Vultr webhook is running by inspecting the cert-manager namespace.
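
For example (the pod name follows the Helm release name, so adjust the grep if you named the release differently):

$ kubectl get pods --namespace cert-manager | grep webhook-vultr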

Step 3: Create YAML Definitions

To issue certificates, you must create YAML definitions for a ClusterIssuer and grant permissions to the service account.

ClusterIssuer YAML

A ClusterIssuer represents the Certificate Authority (CA) used to create the signed certificates.

This example uses the Let's Encrypt staging environment. For production, use https://acme-v02.api.letsencrypt.org/directory.

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # You must replace this email address with your own.
    # Let's Encrypt will use this to contact you about expiring
    # certificates, and issues related to your account.
    email: {YOUR EMAIL ADDRESS}
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource that will be used to store the account's private key.
      name: letsencrypt-staging
    solvers:
    - dns01:
        webhook:
          groupName: acme.vultr.com
          solverName: vultr
          config:
            apiKeySecretRef:
              key: apiKey
              name: vultr-credentials

Grant permissions to the service account

The webhook's service account needs permission to read the API key secret. Grant it with role-based access control (RBAC), like this:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cert-manager-webhook-vultr:secret-reader
  namespace: cert-manager
rules:
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["vultr-credentials"]
  verbs: ["get", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cert-manager-webhook-vultr:secret-reader
  namespace: cert-manager
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cert-manager-webhook-vultr:secret-reader
subjects:
  - apiGroup:""
    kind: ServiceAccount
    name: cert-manager-webhook-vultr

Apply both manifests as shown below. After you deploy the ClusterIssuer and RBAC, you can request TLS certificates from Let's Encrypt for domains hosted on Vultr.
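
For example, assuming you saved them as cluster-issuer.yaml and webhook-rbac.yaml (placeholder filenames):

$ kubectl apply -f cluster-issuer.yaml -f webhook-rbac.yaml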

Step 4: Request a Certificate

The Certificate resource is a human-readable definition of a certificate request that an issuer honors and keeps up to date. Here's an example:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: staging-cert-example-com
spec:
  commonName: example.com # REPLACE THIS WITH YOUR DOMAIN
  dnsNames:
  - '*.example.com' # REPLACE THIS WITH YOUR DOMAIN
  - example.com # REPLACE THIS WITH YOUR DOMAIN
  issuerRef:
    name: letsencrypt-staging
    kind: ClusterIssuer
  secretName: example-com-staging-tls # REPLACE THIS TO MATCH YOUR DOMAIN

Here's a description of the key fields:

  • commonName: Your base domain.
  • dnsNames: The names the certificate covers. You must wrap wildcard (*) entries in single quotes ('').
  • issuerRef: The name of the ClusterIssuer you defined in the previous step's YAML. This example uses the Let's Encrypt staging environment; yours might differ if you defined the production Let's Encrypt service in your YAML.
  • secretName: The secret where you store the TLS certificates.
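
Apply the Certificate manifest like any other resource (the filename is a placeholder):

$ kubectl apply -f staging-cert.yaml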

Cert-manager creates a few more resources after you apply the Certificate. They are:

  • CertificateRequests: Namespaced resources used to request X.509 certificates from an Issuer.
  • Orders: Resources used by the ACME issuer to manage the lifecycle of an ACME order for a signed TLS certificate.
  • Challenges: Resources used by the ACME issuer to manage the lifecycle of an ACME challenge, which must be completed for an authorization for a single DNS name.

Here is an example kubectl output:

$ kubectl get certificates

NAME                       READY   SECRET                    AGE
staging-cert-example-com   False   example-com-staging-tls   47s

$ kubectl get certificateRequests

NAME                             APPROVED   DENIED   READY   ISSUER                REQUESTOR                                         AGE
staging-cert-example-com-qvjvj   True                False   letsencrypt-staging   system:serviceaccount:cert-manager:cert-manager   55s

$ kubectl get orders

NAME                                        STATE     AGE
staging-cert-example-com-qvjvj-3598131141   pending   59s

$ kubectl get challenges

NAME                                                   STATE     DOMAIN          AGE
staging-cert-example-com-qvjvj-3598131141-1598866100   pending   example.com     61s

See the Concepts section of the cert-manager documentation to learn more about these resources.

The validation of the certificate takes a few minutes. You can check the status with Kubectl:

$ kubectl get certificates

NAME                        READY   SECRET                    AGE
staging-cert-example-com    True    example-com-staging-tls   5m11s

Check the Ready state of the certificate. When it returns True, your valid TLS certificate from Let's Encrypt is stored in the secret name you defined in the certificate YAML.

$ kubectl get secrets | grep "example-com-staging-tls"

example-com-staging-tls    kubernetes.io/tls                     2      33m

Remember that this example ClusterIssuer points to the Let's Encrypt staging environment. For production, use https://acme-v02.api.letsencrypt.org/directory.

You have automated TLS for your domain on Kubernetes!

Step 5: External DNS

Next, you'll set up ExternalDNS to automatically create DNS entries that point at the IP addresses of your Kubernetes Ingress or LoadBalancer services.

The ExternalDNS installation at Vultr is straightforward: install it with the following YAML manifest.

apiVersion: v1    
kind: ServiceAccount
metadata:
  name: external-dns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: external-dns
rules:
- apiGroups: [""]
  resources: ["services","endpoints","pods"]
  verbs: ["get","watch","list"]
- apiGroups: ["extensions","networking.k8s.io"]
  resources: ["ingresses"]
  verbs: ["get","watch","list"]
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
- kind: ServiceAccount
  name: external-dns
  namespace: default
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: external-dns
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      serviceAccountName: external-dns
      containers:
      - name: external-dns
        image: k8s.gcr.io/external-dns/external-dns:v0.10.2
        args:
        - --source=ingress # service is also possible
        - --domain-filter=example.com # (optional) limit to only example.com domains; change to match the zone created above.
        - --provider=vultr
        - --registry=txt
        - --txt-owner-id=your-user-id
        env:
        - name: VULTR_API_KEY
          value: "{API KEY}" # Enter your Vultr API Key

Here's a description of the key fields in the Deployment Spec args section:

  • source: The resource type ExternalDNS watches. Use ingress here, or service if you pair ExternalDNS with a regular LoadBalancer service.
  • domain-filter: This filter limits ExternalDNS to the supplied domain.
  • provider: If you use a provider other than Vultr, set it here.
  • registry: Use txt to create a TXT record that accompanies each record created by external-dns.
  • txt-owner-id: A unique value that doesn't change for the lifetime of your cluster.

Apply the external-dns YAML and verify it's running correctly by inspecting the pod.
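
For example (again, the filename is a placeholder):

$ kubectl apply -f external-dns.yaml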

$ kubectl get pods | grep "external-dns"

external-dns-8cb7f649f-bg8m5   1/1     Running   0          10m

After ExternalDNS is running, you can add annotations to your service manifests for DNS entries.
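
For example, a LoadBalancer Service annotated like this gets an A record automatically (the hostname and app name are placeholders, and this assumes you run ExternalDNS with --source=service):

apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    # ExternalDNS creates an A record for this hostname that
    # points at the load balancer's external IP.
    external-dns.alpha.kubernetes.io/hostname: app.example.com
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80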

Tying it together

Deployment and Service

Together, ExternalDNS and cert-manager ensure your application always has matching DNS records and valid TLS certificates. To test this, deploy a simple application and expose it to the public internet over HTTPS. For example, here is a single-replica Nginx deployment and a ClusterIP Service that routes to it.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  type: ClusterIP
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80

Ingress

To expose the Nginx deployment to the internet, use ingress-nginx, or use a LoadBalancer Service type instead of an Ingress. If you use a LoadBalancer, change the --source argument from ingress to service in your ExternalDNS YAML.

To use Kubernetes Nginx ingress, apply the prepared manifests from the ingress controller quick start guide.

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/cloud/deploy.yaml

This creates a new namespace, ingress-nginx, for the ingress resources. Here's an example Ingress entry that exposes the Nginx app.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-nginx
  annotations:
    # use the shared ingress-nginx
    external-dns.alpha.kubernetes.io/hostname: www.example.com
spec:
  tls:
    - hosts:
      - example.com
      - www.example.com
      secretName: example-com-prod-tls
  ingressClassName: nginx
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80

  • The external-dns.alpha.kubernetes.io/hostname: www.example.com annotation defines what entry ExternalDNS should create. In this case, it creates an A record for www that points to the load balancer deployed by the ingress.
  • The tls.hosts section defines which hosts the ingress serves over HTTPS.
  • secretName holds the issued TLS certificate. This example references a production certificate (example-com-prod-tls); if you're still using the staging issuer, reference the staging secret you created earlier (example-com-staging-tls) instead.
  • The rules.host section defines what URL should route to which service. In this example, www.example.com/ should go to this Nginx service deployment.

After you deploy the ingress, inspect it with Kubectl.

$ kubectl get ingress

NAME            CLASS   HOSTS             ADDRESS           PORTS     AGE
ingress-nginx   nginx   www.example.com   192.0.2.123       80, 443   13h

Kubernetes and the DNS system take a few minutes to propagate the records, and then you should have a domain backed by HTTPS.
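
To confirm, request the site with curl. Note that staging certificates aren't publicly trusted, so add --insecure if you're still using the staging issuer:

$ curl -I https://www.example.com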

Wrapping up

To recap what you've accomplished:

  • You created TLS certificates for your application with cert-manager.
  • You made DNS entries and record updates part of your application's manifests, so you no longer adjust records by hand.
  • Finally, to expose these applications, you created an ingress resource that ties it all together.

With these three tools at your disposal, you can define your application's entire state in YAML manifests and let Kubernetes handle the rest.

For more information, see these useful resources:

  • cert-manager documentation: https://cert-manager.io/docs/
  • ExternalDNS: https://github.com/kubernetes-sigs/external-dns
  • Vultr cert-manager webhook: https://github.com/vultr/cert-manager-webhook-vultr