How to Monitor Application Logs in Kubernetes using Loki

Updated on July 25, 2024

Introduction

Loki is a log aggregation system. The Loki stack comprises three components, Loki, Promtail, and Grafana, that together enable application log monitoring within a Kubernetes cluster.

Loki is the core component responsible for log ingestion, storage, and query processing, while Promtail is the agent that collects log data from cluster sources and ships it to Loki. Grafana serves as the main interface for querying, visualizing, and exploring the log data stored in Loki.

This article explains how to monitor application logs in Kubernetes using Loki. You deploy the Loki stack to a Vultr Kubernetes Engine (VKE) cluster, deploy a sample application, and monitor its logs using the Grafana dashboard.

Prerequisites

Before you begin:

Install the Loki stack

  1. Add the grafana Helm repository to your sources.

    console
    $ helm repo add grafana https://grafana.github.io/helm-charts
    
  2. Install the Loki stack to your cluster using the Helm chart.

    console
    $ helm install loki grafana/loki-stack -n loki-stack --set grafana.enabled=true --set grafana.service.type=LoadBalancer --create-namespace
    

    The above command installs the Loki stack with Grafana enabled and exposes it with a LoadBalancer Service, which in turn creates a Vultr Load Balancer for external access. In addition, a new loki-stack namespace is created to centralize all Loki resources.

    When successful, your output should be similar to the one below:

    The Loki stack has been deployed to your cluster. Loki can now be added as a datasource in Grafana.
    See http://docs.grafana.org/features/datasources/loki/ for more detail.
  3. Wait for at least 2 minutes for the deployment process to complete. Then, view all Pods in the loki-stack namespace.

    console
    $ kubectl get pods -n loki-stack
    

    Output:

    NAME                            READY   STATUS    RESTARTS   AGE
    loki-0                          1/1     Running   0          2m
    loki-grafana-84d9c8dd87-sqtjr   2/2     Running   0          2m
    loki-promtail-mft9z             1/1     Running   0          2m
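    Optionally, confirm the Helm release status to verify that the Loki stack installed successfully. This check uses the loki release name and loki-stack namespace from the installation command above.

    console
    $ helm list -n loki-stack
    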

Create a Sample Cluster Application

To test the Loki stack functionality, follow the steps below to create a sample Go application to run and monitor within your Kubernetes cluster.

  1. Create a new application directory.

    console
    $ mkdir vke-loki
    
  2. Switch to the directory.

    console
    $ cd vke-loki
    
  3. Create a new main application file main.go using a text editor such as Nano.

    console
    $ nano main.go
    
  4. Add the following contents to the file.

    go
    package main
    
    import (
        "log"
        "net/http"
    )
    
    func main() {
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            log.Println("Received request for path:", r.URL.Path)
            w.Write([]byte("Hello, World!"))
        })
    
        http.HandleFunc("/error", func(w http.ResponseWriter, r *http.Request) {
            log.Println("Simulating an error...")
            http.Error(w, "simulated error", http.StatusInternalServerError)
        })
    
        log.Println("Starting server on :8080")
        if err := http.ListenAndServe(":8080", nil); err != nil {
            log.Fatal(err)
        }
    }
    

    Save and close the file.

    The above application code defines an HTTP server using the net/http standard library package. The main function registers two HTTP handlers for the root (/) and /error paths. The root handler logs information about each received request and responds with a "Hello, World!" message, while the /error handler simulates a failure by returning an internal server error (HTTP status code 500). The server listens for connections on port 8080.

  5. Create a new dependency file go.mod.

    console
    $ nano go.mod
    
  6. Add the following contents to the file.

    go
    module go-web-app
    
    go 1.21.4
    

    Save and close the file.
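    Optionally, you can run the application locally to confirm it starts and serves requests before containerizing it. This assumes the Go toolchain is installed on your workstation.

    console
    $ go run main.go
    

    The command blocks while the server runs. In a second terminal session, send a test request, then stop the server with Ctrl + C.

    console
    $ curl http://localhost:8080/
    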

Containerize the Application

  1. Create a Dockerfile to define the application container environment.

    console
    $ nano Dockerfile
    
  2. Add the following contents to the file.

    dockerfile
    FROM golang AS build
    
    WORKDIR /app
    COPY go.mod ./
    
    COPY main.go ./
    RUN CGO_ENABLED=0 go build -o /go-app
    
    FROM gcr.io/distroless/base-debian10
    WORKDIR /
    COPY --from=build /go-app /go-app
    EXPOSE 8080
    USER nonroot:nonroot
    ENTRYPOINT ["/go-app"]
    

    Save and close the file.

    The above configuration defines a multi-stage build for the Go application. The first stage uses the official Golang base image, sets /app as the working directory, copies the go.mod and main.go files, and builds the application with CGO disabled to produce the go-app binary.

    The second stage uses the minimal gcr.io/distroless/base-debian10 image with / as the working directory, copies the compiled binary from the build stage to /go-app, sets it as the container entry point, and exposes port 8080 for incoming connections.

  3. Build the Docker image.

    console
    $ docker build -t go-web-app .
    
  4. Export your Vultr Container Registry access credentials as environment variables to use in the following commands.

    console
    $ export VULTR_CONTAINER_REGISTRY_USERNAME=<enter the Vultr Container Registry username>
    $ export VULTR_CONTAINER_REGISTRY_API_KEY=<enter the Vultr Container Registry API key>
    $ export VULTR_CONTAINER_REGISTRY_NAME=<enter the Vultr Container Registry name>
    
  5. Log in to your Vultr Container Registry.

    console
    $ docker login https://sjc.vultrcr.com/$VULTR_CONTAINER_REGISTRY_NAME -u $VULTR_CONTAINER_REGISTRY_USERNAME -p $VULTR_CONTAINER_REGISTRY_API_KEY
    

    Output:

    Login Succeeded
  6. Tag the local Docker image with the Vultr Container Registry repository name.

    console
    $ docker tag go-web-app:latest sjc.vultrcr.com/$VULTR_CONTAINER_REGISTRY_NAME/go-web-app:latest
    
  7. Push the image to your registry.

    console
    $ docker push sjc.vultrcr.com/$VULTR_CONTAINER_REGISTRY_NAME/go-web-app:latest
    
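    Optionally, run the image locally to confirm that the container starts and listens on port 8080 before deploying it to the cluster. This assumes port 8080 is free on your workstation.

    console
    $ docker run --rm -d -p 8080:8080 --name go-web-app-test go-web-app
    $ curl http://localhost:8080/
    $ docker stop go-web-app-test
    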

Apply Vultr Container Registry Credentials to the Kubernetes Cluster

  1. Access your Vultr Container Registry control panel in the Vultr Customer Portal.

  2. Navigate to the Docker/Kubernetes tab.

    Download the Vultr Container Registry Secret YAML file

  3. Click Generate Kubernetes YAML within the Docker Credentials For Kubernetes section to generate a new YAML file with your access details.

  4. Open the downloaded file and copy all contents to your clipboard.

  5. Create a new Secret resource file secret.yaml.

    console
    $ nano secret.yaml
    
  6. Paste your generated Kubernetes YAML contents to the file. For example:

    yaml
    apiVersion: v1
    kind: Secret
    metadata:
      name: vultr-cr-credentials
    data:
      .dockerconfigjson: eyJhifX19
    type: kubernetes.io/dockerconfigjson
    
  7. Apply the Secret to your VKE cluster.

    console
    $ kubectl apply -f secret.yaml
    

    When successful, cluster workloads that reference the Secret can authenticate with and pull images from your Vultr Container Registry repositories.
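    Optionally, verify that the Secret is available in the default namespace before referencing it in your Deployment.

    console
    $ kubectl get secret vultr-cr-credentials
    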

Deploy the Sample Application to your VKE cluster

  1. Create a new Deployment resource file app.yaml.

    console
    $ nano app.yaml
    
  2. Add the following contents to the file. Replace testreg with your actual Vultr Container Registry name.

    yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: go-web-app
      labels:
        app: go-web-app
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: go-web-app
      template:
        metadata:
          labels:
            app: go-web-app
        spec:
          containers:
          - name: go-web-app
            image: sjc.vultrcr.com/testreg/go-web-app:latest
            imagePullPolicy: Always
          imagePullSecrets:
          - name: vultr-cr-credentials
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: go-web-app-service
      labels:
        app: go-web-app-service
    spec:
      type: LoadBalancer
      ports:
      - port: 8080
      selector:
        app: go-web-app
    

    Save and close the file.

    The above configuration creates a new Deployment and Service resource for the Go web application with the following specifications:

    • The Deployment specifies a container within a Pod that uses the Go web application image from your Vultr Container Registry.
    • The Service uses the LoadBalancer type to expose the application port 8080 and direct traffic to Pods with the go-web-app label.
    • The imagePullSecrets section references the vultr-cr-credentials Secret to authenticate and pull the container image from your Vultr Container Registry repository.
  3. Deploy the application to your cluster.

    console
    $ kubectl apply -f app.yaml
    
  4. Wait for at least 1 minute for the deployment process to complete. Then, view all Pods with the go-web-app label.

    console
    $ kubectl get pods -l=app=go-web-app
    

    Your output should be similar to the one below:

    NAME                          READY   STATUS             RESTARTS   AGE
    go-web-app-76c9ffdf67-dgh86   1/1     Running            0          20s
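    You can also view the raw container logs with kubectl. These are the same log streams that Promtail collects from the cluster nodes and ships to Loki.

    console
    $ kubectl logs -l app=go-web-app --tail=10
    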

Access the Grafana Dashboard

  1. Retrieve the default Grafana login password from the loki-grafana Secret.

    console
    $ kubectl get secret -n loki-stack loki-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
    

    Copy the generated administrator password similar to the one below:

    AABBHHS662SHH
  2. View the loki-grafana Service and retrieve the assigned Vultr Load Balancer IP Address for external access.

    console
    $ kubectl get svc loki-grafana -n loki-stack -o=jsonpath='{.status.loadBalancer.ingress[0].ip}'
    

    Your output should be similar to the one below:

    192.168.0.240
  3. Visit your cluster Load Balancer IP to access the Grafana web interface using a web browser such as Chrome.

    http://192.168.0.240/login

    When prompted, enter the following details in the respective fields to log in.

    • Username: admin
    • Password: Your Generated Password

    When successful, navigate to Explore within the Grafana dashboard.

    Access the Grafana Explore Tab
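    If you prefer not to expose Grafana through a public Load Balancer, you can instead forward the loki-grafana Service to your workstation and browse to http://localhost:3000. The example below assumes the Service exposes port 80, which is the chart default.

    console
    $ kubectl port-forward -n loki-stack svc/loki-grafana 3000:80
    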

Invoke the Application

  1. Switch back to your terminal session and retrieve the go-web-app-service Load Balancer IP address to access the application endpoint through the Vultr Load Balancer.

    console
    $ kubectl get svc go-web-app-service -o=jsonpath='{.status.loadBalancer.ingress[0].ip}'
    

    Your output should be similar to the one below:

    192.168.0.200
  2. Test access to the application on port 8080 using the Curl utility.

    console
    $ curl http://192.168.0.200:8080
    

    Output:

    Hello, World!
  3. To generate more HTTP traffic, invoke the application endpoint using a for loop.

    console
    $ for i in {1..100}; do curl http://192.168.0.200:8080/ & done; wait
    
  4. To simulate an error scenario, invoke the /error endpoint.

    console
    $ curl http://192.168.0.200:8080/error
    
  5. Generate more HTTP traffic for the error scenario using a for loop.

    console
    $ for i in {1..50}; do curl http://192.168.0.200:8080/error & done; wait
    
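    To watch log lines arrive in near real time from the Grafana Explore view, you can also generate a steady stream of requests. The loop below is a minimal sketch that sends one request per second; stop it with Ctrl + C when you finish.

    console
    $ while true; do curl -s http://192.168.0.200:8080/ > /dev/null; sleep 1; done
    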

Query the Application Logs with Grafana

  1. Switch to your web browser session, and access the Grafana dashboard.

  2. Within the Explore section, enter the following query in the Log browser field.

    {app="go-web-app", namespace="default"}

    Run a Grafana Query

  3. Click Run query to execute the statement and view all application log entries.

    Grafana Dashboard
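    LogQL also supports line filters and metric queries. For example, the following queries narrow the results to the simulated error requests and calculate their per-second rate over a 5-minute window. They assume the same app and namespace labels used in the selector above.

    {app="go-web-app", namespace="default"} |= "Simulating an error"

    sum(rate({app="go-web-app", namespace="default"} |= "Simulating an error" [5m]))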

Conclusion

You have deployed the Loki stack to a VKE cluster and monitored application logs using Grafana. In the process, you containerized a sample application, pushed its image to the Vultr Container Registry, and deployed it to experiment with the Loki stack functionality. For more information and implementation samples, visit the Loki documentation.

More Information

For more information, visit the following resources.