As more and more aspects of human life continue to move online, the need to dramatically scale the internet is only increasing. This trend began many years ago (we could say during the dotcom boom) and has seen many iterations of technological advancement.
AWS, launched in 2002 as the first public cloud offering, opened the door for businesses to outsource IT operations and scale resource consumption up and down as needed. Virtual machines began abstracting application software away from physical hardware, and new patterns of deployment were soon needed.
Microservices are collections of isolated and loosely coupled services that can be maintained and configured independently from their surroundings. They can be deployed at scale when packaged into containers (commoditized in 2014 by Docker), which have become the building blocks for a new, distributed generation of infrastructures.
Different technologies, such as Rancher, Docker Swarm, and Mesos, competed to take the lead in container orchestration. But it was ultimately Kubernetes (open sourced by Google in 2014) that became the champion of containerized microservices.
While businesses clearly saw the benefits of Kubernetes, its innate complexity and steep learning curve have always been barriers to entry. Smaller companies lacked the operational expertise and resources to successfully manage the behemoth technology. Larger enterprises struggled to integrate cloud-native tools and processes into legacy infrastructures.
Grappling with Kubernetes complexity
Over the years, several solutions have appeared in the industry with the goal of helping organizations adopt Kubernetes and optimize container orchestration. Rancher, OpenShift, and public cloud managed services such as Azure Kubernetes Service, Elastic Kubernetes Service, and Google Kubernetes Engine are a few examples. These solutions have dramatically simplified the deployment and management of Kubernetes clusters, accelerating the shift to cloud-native applications while making them more scalable and resilient.
For that reason, Kubernetes has achieved massive adoption. In 2021, Traefik Labs surveyed more than 1,000 IT professionals about their use of the technology. Over 70% of respondents reported using Kubernetes for a business project. Yet, businesses that have only just overcome the challenges of adopting container technologies now face hurdles in scaling their deployments.
As Kubernetes adoption continues, new challenges are starting to appear. Businesses are now supporting more and larger Kubernetes clusters to meet the needs of an increasing number of containerized applications. More clusters, however, mean more components to manage and keep up to date. Problems that are relatively straightforward to solve within a single Kubernetes deployment are exponentially more difficult in larger, multi-cluster environments. The complexity of Kubernetes compounds as it scales. Yet, multi-cluster orchestration is inevitably the next frontier for engineers to tackle.
Kubernetes multi-cluster requirements
Developers need the proper tools to manage multi-cluster challenges, from contextual alerting to new deployment strategies and beyond. Let’s break it down:
- Federation tools provide mechanisms for expressing which clusters have their configuration managed and what that configuration should look like. A single set of APIs in a hosting cluster coordinates the configuration of multiple Kubernetes clusters across distributed environments. Federated cloud technologies bolster the interconnection of two or more geographically separate computing clouds, making complex multi-cluster use cases easier for engineering teams to address.
- Connectivity tooling makes it possible for multiple clusters to work together as one unit, which is otherwise extremely complex to achieve and maintain. The right tools can help you handle interconnections between clusters, control routing to clusters, load balance across geographically distributed pools (with global server load balancing, or GSLB), and manage application updates across multiple clusters.
- Security challenges are compounded in complex, distributed IT environments but can be resolved when cloud-native security tools and processes are adopted. This means asking new questions. How do you handle security in zero-trust environments? How do you manage the end-to-end encryption of connections? How do you control access to your applications? How do you maintain TLS certificate management in distributed infrastructures? When security is integrated into the cluster, distributed applications become more secure.
- Observability allows you to quickly see the big picture of a distributed infrastructure, so you can quickly and easily diagnose issues. Grafana and Prometheus are examples of widely used tools to this end. As you scale the number of clusters deployed, observability and contextual alerting become even more important, because there are more ways things can go wrong. Having the right tools in place to enable developers to see exactly where issues are will not only keep apps running smoothly but also eliminate significant guesswork and save valuable time.
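To make the GSLB idea above concrete, here is a minimal sketch of geographically aware routing across clusters: send each request to the nearest healthy cluster, and fail over to the next-closest one when a cluster goes down. The cluster names, coordinates, and the great-circle-distance heuristic are illustrative assumptions, not the behavior of any particular GSLB product, which would typically also weigh latency measurements, capacity, and DNS TTLs.

```python
# Illustrative GSLB-style routing sketch: nearest healthy cluster wins.
# Cluster names and locations below are hypothetical examples.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Cluster:
    name: str
    lat: float
    lon: float
    healthy: bool = True  # in practice, driven by periodic health checks

def distance_km(lat1, lon1, lat2, lon2):
    # Haversine great-circle distance between two points on Earth.
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def route(clusters, user_lat, user_lon):
    # Pick the closest cluster that currently passes its health check.
    candidates = [c for c in clusters if c.healthy]
    if not candidates:
        raise RuntimeError("no healthy clusters available")
    return min(candidates, key=lambda c: distance_km(c.lat, c.lon, user_lat, user_lon))

clusters = [
    Cluster("eu-west", 53.3, -6.2),   # Dublin
    Cluster("us-east", 39.0, -77.5),  # Virginia
    Cluster("ap-south", 19.1, 72.9),  # Mumbai
]

# A user near Paris is served by the nearest cluster...
print(route(clusters, 48.9, 2.4).name)  # eu-west
# ...and traffic fails over automatically if that cluster becomes unhealthy.
clusters[0].healthy = False
print(route(clusters, 48.9, 2.4).name)  # us-east
```

The failover behavior falls out of the same selection rule: removing unhealthy clusters from the candidate pool before choosing the minimum means no separate failover code path is needed.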
The Kubernetes multi-cluster future
Ensuring clusters, services, and network traffic work seamlessly together in the cloud-native world is a major challenge. Kubernetes has won the orchestration war and continues to be widely adopted by organizations around the world, but the technology is also naturally maturing. With that maturity comes new problems and new challenges that are compounded in multi-cluster deployments.
Development, engineering, and operations teams (of all skill levels) who build and operate applications on Kubernetes need easier ways to achieve visibility, scalability, and security for their clusters and networks. When looking for tools to manage standard microservices architectures, developers must prioritize solutions that provide capabilities such as instant observability, out-of-the-box contextual alerting, geographically aware content delivery, and built-in service meshing.
The challenges of multi-cluster orchestration are becoming increasingly prevalent, but by adapting to the cloud-native world with the right tools, development and operations teams will be able to wrangle multi-cluster Kubernetes complexity and realize the immense benefits of Kubernetes like never before.
Emile Vauge is founder and CEO of Traefik Labs.
New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to email@example.com.
Copyright © 2021 IDG Communications, Inc.