Kubernetes Service: A Deeper Look

Updated on November 21, 2023

Introduction

A Service in Kubernetes is an abstraction that provides a stable network interface for accessing a set of Pods. These Pods can be a single component of an application or the whole application itself.

The Service can expose the set of Pods:

  • Internally within the cluster, to enable communication between Pods and other cluster components.
  • Externally, to accept incoming traffic from outside the cluster.

Different components of an application may need to communicate with each other. These components live in Pods that have internal IP addresses assigned to them. In Kubernetes, Pods are not permanent entities: they may fail and get replaced by new ones with different IP addresses. To keep the application functioning, every component that relies on these Pods would have to be updated with the new IP addresses, and updating them manually is not practical. Instead, you can use a network entity with a fixed IP address, such as a Service, that acts as a proxy for the non-permanent Pods. Furthermore, an application that serves requests to external clients, like an Nginx web server, has to be exposed as a single endpoint, even if the cluster runs multiple instances of that application.

Prerequisites

To test out the Kubernetes manifest files of this article, you will need:

  • A Kubernetes cluster. You can use Vultr Kubernetes Engine to deploy a Kubernetes cluster.
  • A kubectl client on your local workstation that is configured to work with your Kubernetes cluster.

Understanding a Kubernetes Service

The following manifest file creates a Pod and a Kubernetes Service object:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: web-server
spec:
  containers:
  - name: nginx
    image: nginx:stable
    ports:
      - containerPort: 80
        name: pod-port

---
apiVersion: v1
kind: Service
metadata:
  name: first-service
spec:
  selector:
    app: web-server
  ports:
  - name: service-port
    protocol: TCP
    port: 80
    targetPort: pod-port

This manifest file creates two resources:

  • A Pod named nginx.
  • A Service named first-service.

The .spec.selector field specifies a list of labels to look for in a Pod. Pods that have all the labels of the Service become its endpoints and are added to the Endpoints object of the Service. The Endpoints object represents the set of Pods that a Service exposes. The Service forwards each incoming request to one of its endpoints. Kubernetes automatically adds any new Pod that has all the matching labels as an endpoint of the Service.

The .spec.ports field defines the network port of the Service, Service protocol, and other necessary network-related information. It contains the following sub-fields:

  • name: This defines the name of the port through which the Service accepts the incoming network requests.
  • protocol: This defines the network protocol of the Service. The default protocol is TCP. The other two supported protocols are SCTP and UDP.
  • port: This specifies the port number through which the Service accepts the incoming network requests.
  • targetPort: This specifies the port of the endpoints to which the Service forwards the incoming network traffic. You can specify either the port number of the endpoint or its port name (see the sketch below for the numeric form). In the example above, the value of this field matches the name of the Pod's container port (pod-port). By default, targetPort has the same value as port.
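
A minimal variation of the ports section above that references the Pod's port by number instead of by name; because pod-port maps to containerPort 80, the two forms are equivalent here:

  ports:
  - name: service-port
    protocol: TCP
    port: 80
    targetPort: 80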

Note: The --- separator inside the above manifest file, between the definition of the Pod and the Service, allows you to group multiple resource definitions in a single file.

To apply this manifest file, saved as first-service.yaml, to the Kubernetes API, use:

$ kubectl apply -f first-service.yaml

Expected output:

pod/nginx created
service/first-service created

Once a Service is created, it is assigned an IP address. To check the IP address of a service, use:

$ kubectl get services

Expected output:

NAME            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
first-service   ClusterIP   10.107.13.79   <none>        80/TCP    52m
kubernetes      ClusterIP   10.96.0.1      <none>        443/TCP   101d

This lists all the Services in your cluster. Under the CLUSTER-IP field, you can see the assigned IP address of a Service.
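
To verify which Pods the Service selected as its endpoints, inspect the Endpoints object that shares the Service's name. The Pod IP address and age below are sample values and will differ in your cluster:

$ kubectl get endpoints first-service

Sample output:

NAME            ENDPOINTS       AGE
first-service   10.244.1.5:80   52m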

Types of Kubernetes Services

A Service can expose its endpoints to other internal components of the cluster, or it can expose them to traffic from outside the cluster. This section discusses how you can do that and more using the different Service types.

The Service manifest has a .spec.type field. The value of this field determines how the Service works.

The type property can have one of the following values:

  1. ClusterIP
  2. NodePort
  3. LoadBalancer
  4. ExternalName

ClusterIP

This is the default value of the .spec.type field. The Service is assigned an internal IP address of the cluster and is only reachable from within the cluster. The example Service definition shown earlier in this article (first-service) is of type ClusterIP by default.
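
As a quick check, you can reach a ClusterIP Service from a temporary Pod inside the cluster. The following sketch uses the cluster IP assigned to first-service in the earlier output; substitute the address from your own cluster:

$ kubectl run curl-test --rm -it --image=busybox:1.36 --restart=Never -- wget -qO- http://10.107.13.79

If the Service and its endpoint Pod are healthy, the command prints the default Nginx welcome page.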

NodePort

A NodePort type Service is an extension of the ClusterIP type, meaning that the Service is still assigned an internal IP address that exposes the endpoints within the cluster. In addition, the Service is assigned a port from the --service-node-port-range, and that port is opened on every node of the cluster. The default range is 30000 through 32767.

Note: You can specify a custom port number from this range in the .spec.ports.nodePort field of a NodePort Service. However, it is recommended to let Kubernetes assign the nodePort value to avoid collisions with existing Services.

The nodes are directly accessible from outside the cluster through their public IP addresses.

How a NodePort Service works:

  • The assigned port number is opened on all the nodes.
  • This port number is then proxied to the Service.
  • Any request made to a node's IP address on that port is then proxied by the Service to the targetPort of an endpoint Pod.
  • For example, suppose a node with IP address 192.168.1.101 hosts an endpoint Pod of a Service, the NodePort Service is assigned port 30200, and the targetPort of the endpoint Pod is 80. Any external request made to 192.168.1.101:30200 is then directed to port 80 of the Pod that resides on that node.

Example:

apiVersion: v1
kind: Service
metadata:
  name: nodeport-service
spec:
  type: NodePort
  selector:
    app: webapp
  ports:
  - port: 8080
    targetPort: 80
    nodePort: 30007

This Service accepts internal traffic at port 8080 and forwards it to the endpoints at port 80. The optional .spec.ports.nodePort field specifies the port on each node at which the Service accepts external traffic.
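
Assuming one of your nodes has the public IP address 192.0.2.10 (a placeholder; run kubectl get nodes -o wide to find the real addresses), you could reach the endpoints from outside the cluster with:

$ curl http://192.0.2.10:30007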

Note: A NodePort Service is less secure than other types because it exposes the nodes directly through their IP addresses.

LoadBalancer

Using type LoadBalancer creates a load balancer in the infrastructure that hosts the cluster. For example, deploying a LoadBalancer type Service in Vultr creates a Vultr Load Balancer that distributes the traffic to different endpoints. The distribution mechanism depends on the cloud service provider.

The control plane of the Kubernetes cluster has a component called cloud-controller-manager. This component connects the cluster to the cloud service provider's API and implements the provider's specific mechanism for creating and handling objects. It includes a service controller that is responsible for implementing components like load balancers.

How a LoadBalancer Service works:

  1. Kubernetes creates a Service that is similar to a NodePort type Service, meaning that the Service has an internal cluster IP address that is accessible locally and each node in the cluster is assigned a nodePort value.
  2. The service controller component configures the load balancer to send traffic to individual nodes on their assigned nodePort.
  3. The cloud provider provisions an external IP address for the load balancer and publishes it in the Service's .status.loadBalancer.ingress field. External clients use this address to reach the Service.

Example:

apiVersion: v1
kind: Service
metadata:
  name: load-balancer-service
spec:
  selector:
    app: web-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9350
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 192.0.2.127

Refer to the Vultr Load Balancer with VKE article for more information.
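
Provisioning the cloud load balancer usually takes a few minutes. One way to wait for the address is to watch the Service; the EXTERNAL-IP column changes from <pending> to the provisioned address once the load balancer is ready:

$ kubectl get service load-balancer-service --watch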

ExternalName

This type of Service maps incoming traffic to an external DNS name instead of a set of Pods. The traditional .spec.selector field is not used to select endpoints. Instead, the Service points to an external endpoint specified as a DNS name in the .spec.externalName field, and the cluster DNS returns a CNAME record for that name. This Service type is generally used to abstract a component that is not hosted inside the cluster, such as an external database.

Example:

apiVersion: v1
kind: Service
metadata:
  name: external-service
spec:
  type: ExternalName
  externalName: endpoint.database.site.com
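
Inside the cluster, Pods can then address the external endpoint through the Service name. A quick way to confirm the mapping is a DNS lookup from a temporary Pod; the lookup should return a CNAME record pointing at endpoint.database.site.com:

$ kubectl run dns-test --rm -it --image=busybox:1.36 --restart=Never -- nslookup external-service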

Basic Operations

The operations described in this section are performed on a Service named test-service for demonstration purposes. You can replace it with any Service object's name.

  1. List all Service objects

     $ kubectl get services

    Expected output:

     NAME                    TYPE           CLUSTER-IP       EXTERNAL-IP                  PORT(S)          AGE
     external-service        ExternalName   <none>           endpoint.database.site.com   <none>           26h
     kubernetes              ClusterIP      10.96.0.1        <none>                       443/TCP          103d
     load-balancer-service   LoadBalancer   10.106.55.89     <pending>                    80:32075/TCP     6h21m
     nodeport-service        NodePort       10.97.164.108    <none>                       8080:30007/TCP   23h
     test-service            LoadBalancer   10.105.111.220   <pending>                    80:31297/TCP     2m29s
  2. Describe a Service object

     $ kubectl describe service test-service

    Expected output:

     Name:                     test-service
     Namespace:                default
     Labels:                   <none>
     Annotations:              <none>
     Selector:                 app=web-app
     Type:                     LoadBalancer
     IP Family Policy:         SingleStack
     IP Families:              IPv4
     IP:                       10.105.111.220
     IPs:                      10.105.111.220
     Port:                     <unset>  80/TCP
     TargetPort:               9350/TCP
     NodePort:                 <unset>  31297/TCP
     Endpoints:                <none>
     Session Affinity:         None
     External Traffic Policy:  Cluster
     Events:                   <none>
  3. Delete a Service object

     $ kubectl delete service test-service

    Expected output:

     service "test-service" deleted

Expose Multiple Ports

A Service can accept traffic on multiple ports. To do this, add multiple port definitions under the .spec.ports field.

apiVersion: v1
kind: Service
metadata:
  name: multi-port-service
spec:
  selector:
    app: nginx
  ports:
  - name: http-port
    protocol: TCP
    port: 80
    targetPort: 9376
  - name: https-port
    protocol: TCP
    port: 443
    targetPort: 9377

Note: You have to define the name property for every port if your Service has multiple port definitions.
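
After applying the manifest, the PORT(S) column of the Service lists both ports. The cluster IP below is a sample value and will differ in your cluster:

$ kubectl get service multi-port-service

Sample output:

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
multi-port-service   ClusterIP   10.104.20.88   <none>        80/TCP,443/TCP   15s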

Set a Custom IP Address

Kubernetes allows you to set a custom IP Address for your Service. You can specify the IP address in the .spec.clusterIP field.

The value should:

  • Lie within the service-cluster-ip-range
  • Be a valid IPv4 or IPv6 address.

Not following these rules will cause the API server to throw an error.

To check the default service-cluster-ip-range, use:

$ kubectl cluster-info dump | grep -m 1 service-cluster-ip-range | sed -e 's/^[ \t]*//'

Sample output:

"--service-cluster-ip-range=10.96.0.0/12",

You may want to use a custom IP address in certain situations like:

  • You have a preconfigured client that relies on a specific IP address and reconfiguring it may turn out to be a tedious task.
  • You have the IP address set as a DNS entry somewhere and you wish to reuse it.
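
For example, a minimal sketch of a Service that requests a specific address from the 10.96.0.0/12 range shown above might look like the following; the address 10.96.100.50 is an arbitrary example, so pick one that is unused in your cluster:

apiVersion: v1
kind: Service
metadata:
  name: custom-ip-service
spec:
  clusterIP: 10.96.100.50
  selector:
    app: web-server
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80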

Service Discovery

Service discovery is the process of discovering the network location of a Service. Kubernetes supports two mechanisms for it: environment variables and cluster DNS.

Environment Variables

Kubernetes adds a set of environment variables to each Pod when it starts. These variables contain information about every Service that is active at that time, which lets the Pod communicate with those Services without hard-coding their IP addresses. The disadvantage of this method is that a Pod is not updated with information about Services created after the Pod itself. Using DNS for Service discovery eliminates this issue.
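
For example, for the first-service Service created earlier, Kubernetes injects variables such as FIRST_SERVICE_SERVICE_HOST (the Service's cluster IP) and FIRST_SERVICE_SERVICE_PORT into Pods started after the Service exists. You can inspect them from such a Pod; the values below are sample values based on the earlier output:

$ kubectl exec nginx -- printenv | grep FIRST_SERVICE

Sample output (abbreviated):

FIRST_SERVICE_SERVICE_HOST=10.107.13.79
FIRST_SERVICE_SERVICE_PORT=80

Note that a Pod only sees these variables if it was started after the Service; if the Pod and the Service were created together, as in the earlier example, recreate the Pod first.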

DNS

In Kubernetes, Service and Pod objects get DNS records. A DNS server such as CoreDNS creates a set of DNS records for each new Service and Pod object. To use DNS-based discovery, the cluster must run a DNS service; refer to the Kubernetes documentation for setting one up. Pods can then use these DNS records to discover Services.
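
For example, a Pod in the default namespace can reach the first-service Service created earlier through the short name first-service or the fully qualified name first-service.default.svc.cluster.local. A quick test from a temporary Pod might look like this; if DNS is working, the command prints the default Nginx welcome page served by the endpoint Pod:

$ kubectl run dns-client --rm -it --image=busybox:1.36 --restart=Never -- wget -qO- http://first-service.default.svc.cluster.local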

Session Stickiness

In Kubernetes, you can route the network traffic from a specific client to the same Pod for the duration of its session. To set this up, set the .spec.sessionAffinity field of the Service to ClientIP; the default value is None.
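
A minimal sketch of a Service with client-IP based session stickiness; the selector and ports reuse values from the earlier examples, and the timeout of 10800 seconds (three hours) matches the Kubernetes default:

apiVersion: v1
kind: Service
metadata:
  name: sticky-service
spec:
  selector:
    app: web-server
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80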

Conclusion

The Kubernetes Service is a vital component of a cluster, enabling efficient communication and load balancing between Pods. With its different types and service discovery options, it exposes applications both within and outside the cluster, making it a powerful tool for building scalable and reliable applications.