Architecting Kubernetes: How to Choose a Cluster Deployment Strategy

Updated on December 23, 2022

Introduction

This guide walks you through some architectural decisions you should consider when choosing a cluster deployment strategy for Vultr Kubernetes Engine (VKE) and the advantages and disadvantages of each approach.

A VKE cluster is a group of servers that provide an infrastructure for running applications and services in a highly available, distributed, and scalable manner. A cluster consists of one control plane, which is responsible for managing the cluster, and one or more worker nodes, which run the actual applications and services. The control plane provides core scheduling, networking, and storage services; manages the underlying infrastructure, such as virtual machines and containers; and supervises communication between the worker nodes to keep them functioning optimally. Worker nodes run the containers, referred to as pods, that host the cluster's various workloads.

There are different ways to configure the cluster architecture to host your applications, each offering benefits such as flexible scaling, cost efficiency, application and environment isolation, and ease of management. Fortunately, VKE offers a free control plane and only charges for the worker nodes, so the number of clusters itself adds nothing to your bill when choosing the best architecture for your application. However, cost is not the only consideration when selecting a deployment strategy. This guide explores each approach's most essential pros and cons to help you choose wisely.

Size-based Architectural Patterns

One way to organize and group your clusters is by size. Kubernetes clusters can be scaled up or down by adding and removing worker nodes, or by deploying worker nodes with different amounts of compute resources such as CPU, memory, and storage. These worker nodes are the physical or virtual servers that run the applications. With VKE, you can easily add or remove nodes to adjust the cluster's size.

The cluster's size is the combination of the total number of worker nodes and the compute resources available to each node. Extra capacity lets you either deploy more pods (horizontal scaling) or assign more compute resources to each pod (vertical scaling), as the sketch below illustrates.
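As a minimal sketch of the two scaling dimensions, the Deployment below scales horizontally through its replicas count and vertically through per-pod resource requests and limits. The application name, image, and values are illustrative, not part of this guide's prescribed setup.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app            # hypothetical application name
spec:
  replicas: 3              # horizontal scaling: run three identical pods
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: nginx:1.25    # illustrative image
          resources:
            requests:          # vertical scaling: compute assigned per pod
              cpu: 250m
              memory: 256Mi
            limits:
              cpu: 500m
              memory: 512Mi
```

Raising replicas spreads more pods across the worker nodes, while raising the requests and limits gives each pod a larger share of a node.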

A Few Large Clusters

In this architectural configuration, you provision a few clusters, or even a single cluster, to host your workloads. Each cluster is relatively large, with many worker nodes and plenty of compute resources available to run workloads in many pods.

There are benefits to using a few large clusters:

  • You can optimize resource utilization. When you host applications on a few large clusters, the worker nodes efficiently share compute resources with the pods, minimizing wasted computing power.
  • You can efficiently manage the infrastructure because you do not need to interact with multiple clusters to carry out administrative or routine tasks.
  • You can reuse cluster-wide resources like load balancers, ingress controllers, and more, which makes their management simpler and more efficient (a minimal sketch follows this list).
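For example, a single ingress controller can route traffic to many applications in the same cluster through one Ingress resource. The sketch below assumes an NGINX ingress controller is installed; the hostnames and service names are hypothetical.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: shared-ingress
spec:
  ingressClassName: nginx        # assumes an NGINX ingress controller is installed
  rules:
    - host: app1.example.com     # hypothetical hostname for the first application
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app1-service
                port:
                  number: 80
    - host: app2.example.com     # a second application reuses the same controller
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app2-service
                port:
                  number: 80
```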

However, having a few large clusters has some drawbacks:

  • If you need to separate different applications, a single control plane offers only soft multitenancy. Kubernetes namespaces and role-based access control separate tenants (see the sketch after this list), but every tenant still shares the same control plane components, such as the DNS server for service discovery.
  • With fewer clusters, you have less fault tolerance because more services are concentrated on each cluster. A single service interruption or malfunction can take a large share of your capacity offline.
  • It's more challenging to rebuild a large cluster if it breaks, because it may host a variety of different applications, each requiring a complex configuration process.
  • Having too many different tenant applications on a single cluster can put a strain on the control plane components, leading to unexpected errors. If you plan to host many applications, it may be better to spread them across multiple clusters.
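As a minimal sketch of soft multitenancy, the manifests below give a hypothetical tenant its own namespace and restrict a team's permissions to that namespace through role-based access control. The namespace and group names are illustrative.

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-a                 # hypothetical tenant namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tenant-a-admin
  namespace: tenant-a            # permissions apply only inside this namespace
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "services", "deployments"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tenant-a-admin-binding
  namespace: tenant-a
subjects:
  - kind: Group
    name: tenant-a-team          # hypothetical group name
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: tenant-a-admin
  apiGroup: rbac.authorization.k8s.io
```

Even with this separation in place, all tenants still rely on the same control plane components, which is why this remains soft rather than hard multitenancy.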

Many Small Clusters

In this architectural pattern, you spread your workloads across a larger group of clusters built from smaller worker nodes. This allows flexible scaling, cost efficiency, application and environment isolation, and ease of management. If a cluster becomes overloaded, you can redeploy some workloads to another cluster, making this a more dynamic and flexible solution.

Using many small clusters is good if you need:

  • Hard multitenancy: You can completely isolate your applications by spreading them across different clusters to prevent sharing of the control plane components.
  • Fault tolerance: This architecture is more fault tolerant. You retain a significant chunk of capacity in case of a single cluster failure.
  • Lower complexity: Rebuilding a broken small cluster is less complex because it hosts fewer applications.

There are some drawbacks to this pattern:

  • You must repeatedly interact with many different clusters to perform administrative or routine tasks such as monitoring and updates (a kubeconfig sketch follows this list).
  • Spreading applications across many clusters under-utilizes cluster-wide resources such as load balancers and ingress controllers, each of which could otherwise serve many applications concurrently.
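One way to tame the overhead of juggling clusters is a single kubeconfig file with one context per cluster. The sketch below is illustrative only; the cluster names and server URLs are hypothetical, and the credentials are omitted.

```yaml
apiVersion: v1
kind: Config
clusters:
  - name: cluster-a                              # hypothetical cluster entries
    cluster:
      server: https://cluster-a.example.com:6443
  - name: cluster-b
    cluster:
      server: https://cluster-b.example.com:6443
contexts:
  - name: cluster-a                              # one context per cluster
    context:
      cluster: cluster-a
      user: admin-a
  - name: cluster-b
    context:
      cluster: cluster-b
      user: admin-b
current-context: cluster-a
users:
  - name: admin-a
    user: {}                                     # credentials omitted for brevity
  - name: admin-b
    user: {}
```

You can then switch clusters with kubectl config use-context cluster-b, but every routine task still has to be repeated once per context.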

Utility-based Architectural Patterns

An application may consist of several components for the front end, database, business logic, and so on. You might find yourself in a situation where you need to deploy more than one instance of an application to create different environments such as production, development, testing, staging, and so on. Each of these environments may have very different requirements to meet the application's needs, which can lead to inefficient resource use.

Fortunately, several other architectural patterns can be used to address this issue. These focus on the utility of the Kubernetes cluster instead of its size. For example, you might find that one application is more CPU intensive while another requires a large amount of memory. Additionally, your production environment might require an ingress controller, while development does not, and so on. By choosing the right configurations, you can ensure that your applications are running as efficiently as possible while meeting each environment's needs.

Cluster Per Application

The cluster-per-application approach runs each application, together with all its environments such as development, testing, and production, in its own dedicated cluster. This configuration offers many advantages, such as application isolation and ease of administration, because all the related components of an application live together. It also allows for greater scalability, as each cluster can be provisioned to match the application's exact requirements, including different compute resources and Kubernetes versions.

However, because all environments share the same cluster, this approach can have an adverse effect on performance and reliability. If a problem arises in one environment, such as faulty code executed in the test environment, it can cause service interruptions for the other environments, most notably production. It is therefore important to take extra precautions when hosting multiple environments on the same cluster, such as the node separation sketched below.
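One such precaution is to pin each environment to its own set of worker nodes so that a misbehaving test workload cannot starve production of resources. The sketch below assumes you have labeled the nodes yourself, for example with kubectl label node <node-name> environment=test; the names and image are hypothetical.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app-test
  namespace: test                  # hypothetical per-environment namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web-app
      environment: test
  template:
    metadata:
      labels:
        app: web-app
        environment: test
    spec:
      nodeSelector:
        environment: test          # schedule only on nodes labeled for testing
      containers:
        - name: web-app
          image: nginx:1.25        # illustrative image
```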

Cluster Per Environment

The cluster-per-environment approach allows you to host multiple applications that share an environment, such as testing or production, on a single cluster. The benefits of this approach are:

  • Efficient resource use: You can provision each cluster's compute resources according to the environment's requirements. Your development and testing environments may be smaller than production.
  • Access isolation: It's easier to restrict access to production clusters or others hosting sensitive environments (a minimal RBAC sketch follows this list).
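For example, on a production cluster you might grant developers read-only access across the whole cluster and reserve write permissions for operators. The sketch below is illustrative; the group name is hypothetical.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: read-only
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "services", "deployments", "replicasets"]
    verbs: ["get", "list", "watch"]    # no create, update, or delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: developers-read-only
subjects:
  - kind: Group
    name: developers                   # hypothetical group name
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: read-only
  apiGroup: rbac.authorization.k8s.io
```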

However, managing a single application is more complicated than in the cluster-per-application approach because its environments are spread across different clusters. This pattern is also a poor fit for applications with highly variable requirements; for instance, applications requiring different Kubernetes versions cannot be grouped together.

Furthermore, with this approach, you must manage multiple clusters and their resources, which can become a laborious task. Each cluster must be provisioned and maintained separately, and any change to an application must be replicated across all the clusters that host its environments. Additionally, if a cluster goes down, every application sharing that environment is affected, leading to potential downtime.

Fortunately, there are ways to minimize the complexity of this approach. For instance, you can use automated tools to manage clusters and resources, simplifying the process of managing multiple clusters (see the sketch below). Additionally, because the workloads are containerized, you can deploy the same application manifests to several clusters, providing a degree of redundancy that helps minimize downtime risk.
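One such automation option, offered here as an assumption rather than this guide's prescribed tool, is a GitOps controller like Argo CD, which keeps each cluster in sync with manifests stored in Git. The repository URL, path, and cluster address below are hypothetical.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: web-app-staging
  namespace: argocd                # assumes Argo CD is installed in this namespace
spec:
  project: default
  source:
    repoURL: https://github.com/example/web-app-manifests   # hypothetical repo
    targetRevision: main
    path: overlays/staging         # hypothetical per-environment manifest path
  destination:
    server: https://staging-cluster.example.com:6443        # hypothetical cluster
    namespace: web-app
  syncPolicy:
    automated:
      prune: true                  # remove resources deleted from Git
      selfHeal: true               # revert manual drift on the cluster
```

Declaring one such Application per cluster and environment lets a change committed to Git roll out everywhere without touching each cluster by hand.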

Select the Right Architecture

All the architectures discussed above are valid ways to host your infrastructure; there is no invalid approach. The right choice among these configurations depends on how you prioritize the following factors.

  • Scaling
  • Cost Efficiency
  • Tenant Isolation
  • High Availability
  • Ease of Management

Scaling a group of large clusters is expensive because, as resource requirements grow over time, the cost per cluster is significantly higher than the cost of spinning up additional small clusters. With a group of small clusters, you can easily scale the infrastructure and grow your capacity almost without limit simply by adding more clusters.

Cost efficiency depends on how well you utilize your resources. Hosting your infrastructure on a few large clusters is more efficient because you can reuse cluster-wide resources such as ingress controllers and load balancers. Having many clusters may leave resources underutilized and cost you more than you actually use.

Tenant isolation comes in two forms: soft and hard. You implement soft tenant isolation on a large cluster with Kubernetes namespaces, role-based access control, and similar mechanisms, but because the control plane components are shared, an application can still discover other applications running on the cluster. You achieve hard tenant isolation by hosting chunks of your infrastructure on many small clusters, eliminating the shared control plane components.

Kubernetes has security and resource limitation features, such as the NetworkPolicy resource to control traffic flow, Pod Security Admission for isolation at the namespace level, and resource limitations with the ResourceQuota object or the LimitRange resource. However, these methods require additional configuration, provide soft rather than hard isolation, and cannot protect your infrastructure against every security breach.
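As a minimal sketch of two of these features, the manifests below restrict a hypothetical tenant namespace to traffic from its own pods and cap its total resource consumption. The values are illustrative, and the NetworkPolicy only takes effect if the cluster's network plugin enforces it.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: same-namespace-only
  namespace: tenant-a              # hypothetical tenant namespace
spec:
  podSelector: {}                  # applies to every pod in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}          # allow traffic only from pods in this namespace
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-a-quota
  namespace: tenant-a
spec:
  hard:
    requests.cpu: "4"              # illustrative caps on the namespace's totals
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"
```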

High availability is harder to achieve with a few large clusters because they are less fault tolerant than many small clusters. A service interruption in a large cluster can take a significant share of your capacity offline, and a broken small cluster is far easier to rebuild than a large one.

Ease of management is an appealing feature of a few large clusters. Managing a group of many small clusters is more complex because you must interact with each cluster individually, repeating routine tasks such as upgrading the Kubernetes version and monitoring cluster health once per cluster.

Conclusion

This guide walked you through the different architectural configurations for hosting your applications. It also compared the different configurations with reference to scaling, cost efficiency, application/environment isolation, ease of management, and more. You can select the right cluster architecture for your application by analyzing and evaluating the requirements against the discussed pointers. Refer to the Kubernetes Components overview to learn more about the individual components in a Kubernetes cluster.