Kubernetes for engineering managers

If your organization is modernizing how it develops software, you are probably going to need a container orchestration platform like Kubernetes. Here's why.
March 10, 2023

The days of huge, monolithic applications are numbered. Today’s applications consist of numerous container-based microservices that interact via APIs. This journey from monoliths to microservices is the technical yin to the process yang of DevOps and CI/CD practices, which ideally work hand in hand to make fast, iterative software development possible.

But with software, there’s always a tradeoff. While individual microservices are easier to maintain and update than features within a tangled monolith, orchestrating those microservices to work together smoothly has long proved a challenge for anyone running a large, distributed application.

What is Kubernetes?

That’s where Kubernetes comes in. The open source project emerged out of Google in 2014, born of the company’s experience orchestrating containers internally. Then, in 2015, Google joined forces with the Linux Foundation to form the Cloud Native Computing Foundation (CNCF), which became the home and sponsor of the Kubernetes project. Since then, it has emerged as the leading container orchestration platform, largely beating out other attempts to solve the problem, such as Docker Swarm and Apache Mesos.

Nearly a decade later, Kubernetes (or k8s for short) is the de facto container orchestration platform, designed specifically to make containerized microservices work together.

Kubernetes not only handles these tasks (more on this later), but also helps organizations use distributed architectures to their full potential, building failover and load-balancing features directly into the environment where your application runs, so your team doesn’t have to.

What are containers, and what is container orchestration?

Before we get into how Kubernetes works its magic, let’s take a step back and define some of these key terms, starting with containers. 

In software, a container is an executable package that includes an application and all its libraries and dependencies. A container can run in isolation, be ported across physical, virtual, or cloud infrastructure, and is more lightweight than a virtual machine because it shares the host’s operating system kernel rather than bundling a full OS of its own.

Containers are a neat way to package up microservices – small, discrete units of system functionality. The microservices can then communicate with one another through application programming interfaces (APIs), which makes it possible to update or debug parts of an application without breaking the whole service.

While that sounds relatively simple in theory, large, complex applications create many opportunities for failure. What happens if one of the containers crashes, or a segment of the network can’t be reached? How can you figure out which container is causing an error?

A container orchestration platform like Kubernetes aims to solve these kinds of problems. Once configured, Kubernetes can deploy and manage your containers, scale things up or down as needed, and handle the underlying networking and availability issues. 
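
To make that concrete, here’s roughly how you might chase down a failing container with kubectl, the standard Kubernetes command-line tool (the pod and container names below are hypothetical):

    # List pods and look for crash loops or error states
    kubectl get pods

    # Inspect recent events and status for a misbehaving pod
    kubectl describe pod checkout-7d4b9c

    # Read the logs of one specific container inside that pod
    kubectl logs checkout-7d4b9c -c payment-service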

Of course, no platform magically takes all the administrative burden off your shoulders, and Kubernetes is somewhat notorious for its steep learning curve. But once you’re up to speed with it, it becomes responsible for a lot of the plumbing of your application.

How does Kubernetes work?

That notoriously steep learning curve is due in large part to the need for developers to describe their application’s architecture in a series of configuration files, typically written in a markup language like YAML.

These configuration files describe how the various containers interact, how many instances of each container should be running, and how you want the application to react to problems it might encounter, like too many users logging in at once. The container images themselves are stored in a local or remote registry.
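
As a sketch of what such a file looks like, here is a minimal Deployment manifest asking Kubernetes to keep three copies of a web container running (the names, labels, and image below are placeholders, not from any real project):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web-frontend                 # hypothetical application name
    spec:
      replicas: 3                        # how many copies should be running at all times
      selector:
        matchLabels:
          app: web-frontend
      template:
        metadata:
          labels:
            app: web-frontend
        spec:
          containers:
          - name: web
            image: registry.example.com/web-frontend:1.4.2   # pulled from your registry
            ports:
            - containerPort: 8080

Handing this file to the cluster is then a one-liner: kubectl apply -f deployment.yaml.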

The Kubernetes platform uses those configuration documents to launch and run your application, pulling container images from the registry and striving to align the real-world state of the system with the ideal expressed in the configuration. It allocates resources across the hosts in your infrastructure as needed, coordinating the containers with one another, as well as with any other container-based applications it may be managing.

If real events diverge, or drift, from the desired state, Kubernetes works to fix things automatically – restarting crashed containers, for instance, or finding more server resources.
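
How does Kubernetes decide a container needs restarting? One common mechanism is a liveness probe declared in the configuration. In this sketch (names, paths, and ports are all hypothetical), Kubernetes polls an HTTP health endpoint and restarts the container if it stops responding:

    apiVersion: v1
    kind: Pod
    metadata:
      name: web-frontend-demo            # hypothetical pod name
    spec:
      containers:
      - name: web
        image: registry.example.com/web-frontend:1.4.2   # placeholder image
        livenessProbe:
          httpGet:
            path: /healthz               # health endpoint your app must serve
            port: 8080
          initialDelaySeconds: 5         # give the app time to start
          periodSeconds: 10              # probe every 10 seconds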

Kubernetes architecture and components

How does all that work under the hood? While this isn’t the place to take a deep dive into the technical underpinnings of Kubernetes, we can take a look at some of the most important components:

  • Pod. A pod is the basic unit of Kubernetes: a logical grouping of one or more containers that share storage and network resources. If some of the containerized microservices in your application are more tightly coupled than others, they probably belong together in the same pod (see the sketch after this list).
  • Node. A node is a host (either physical or virtual) that runs one or more pods. The collection of nodes under Kubernetes’ purview is called a cluster.
  • Deployment. A Deployment is one of the configuration objects we discussed earlier. It describes the desired state for a set of pods – which container image to run, for instance, and how many replicas to keep alive – and Kubernetes continuously works to match reality to it.
  • Control plane. The collective name for the set of components that do the heavy lifting in Kubernetes. These include the scheduler, which matches pods to available nodes; the API server, through which administrators, external services, and the cluster’s own components communicate; and etcd, the key-value store that holds the cluster’s configuration and state.
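
To tie the first two concepts together, here is a hedged sketch of a pod in which two tightly coupled containers – a web server and a log-shipping sidecar, both with placeholder image names – share a scratch volume:

    apiVersion: v1
    kind: Pod
    metadata:
      name: web-with-log-shipper         # hypothetical pod name
    spec:
      volumes:
      - name: shared-logs
        emptyDir: {}                     # scratch space shared by both containers
      containers:
      - name: web
        image: registry.example.com/web-frontend:1.4.2   # placeholder image
        volumeMounts:
        - name: shared-logs
          mountPath: /var/log/app        # the web server writes logs here
      - name: log-shipper
        image: registry.example.com/log-shipper:0.9      # placeholder sidecar image
        volumeMounts:
        - name: shared-logs
          mountPath: /logs               # the sidecar reads the same files here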

Why use Kubernetes?

At this point, you should be getting a sense of what Kubernetes is and what it’s used for. If you’re looking to roll out a big containerized application, Kubernetes provides a number of advantages that have made it the de facto standard:

  • Kubernetes automates much of your application’s plumbing. The control plane monitors application health, replicates containers and nodes, performs load balancing, and allocates hardware resources for you. Kubernetes can even pull this off across a distributed architecture in a multicloud or hybrid cloud environment.
  • Kubernetes simplifies management of application-related resources. One of the trickiest aspects of dealing with containerized applications is that containers don’t store state between sessions. Kubernetes manages the storage of data and other important stateful information for the whole application, including security secrets like API keys and service passwords (see the sketch after this list).
  • Kubernetes is open source. That isn’t just an abstract philosophical advantage; it means you can deploy Kubernetes in whichever way fits your team’s size, skill set, and needs. You could download the open source code and deploy it yourself, for instance, or adopt a managed enterprise Kubernetes platform like Red Hat OpenShift or VMware Tanzu, which come with additional features and support services. All of the major public cloud providers also offer managed Kubernetes, including Amazon’s Elastic Kubernetes Service (EKS), Microsoft’s Azure Kubernetes Service (AKS), and Google Kubernetes Engine (GKE).
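
As an illustration of the secrets point above (every name and value below is a placeholder), you can define a Secret object and inject it into a container as an environment variable, so the credential never has to be baked into a container image:

    apiVersion: v1
    kind: Secret
    metadata:
      name: payment-api-credentials      # hypothetical secret name
    type: Opaque
    stringData:
      api-key: "replace-me"              # placeholder; never commit real keys to source control
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: payment-worker               # hypothetical pod that consumes the secret
    spec:
      containers:
      - name: worker
        image: registry.example.com/payment-worker:2.0   # placeholder image
        env:
        - name: PAYMENT_API_KEY          # exposed to the app as an environment variable
          valueFrom:
            secretKeyRef:
              name: payment-api-credentials
              key: api-key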

Kubernetes challenges

Ready to embark on your Kubernetes journey? Don’t go in looking at things through rose-tinted glasses. A number of challenges await you:

  • The Kubernetes learning curve is real. Once your application is up and running, Kubernetes takes care of much of the heavy lifting for you. But fine-tuning the Kubernetes configuration for your application is an art, and it takes practice to get good at it. Those skills are also in high demand: in VMware’s State of Kubernetes 2022 report, half of respondents cited a lack of internal experience and expertise as a challenge in running Kubernetes.
  • Kubernetes isn’t a magic bullet. Even the best Kubernetes configuration won’t magically turn a broken application into a winner – and in fact, Kubernetes is less forgiving of a “patch on the fly” attitude than other platforms. Kubernetes works best in the context of a team already using best development practices, including CI/CD and automated testing. 
  • Kubernetes platforms aren’t all created equal. As we noted above, you have a lot of options for your Kubernetes rollout, from roll-your-own to full-service offerings from a vendor, but they can’t simply be swapped for one another midstream. You’ll need to research the different features offered by each, and decide where you want to land on the spectrum from totally self-hosted to vendor-managed offerings.

With all these challenges in mind, as well as the overhead involved in rolling out any complex platform, you need to make sure the effort is worth it before you deploy.

In many cases, especially for enterprise-class applications, it absolutely is. But your particular use case may well not fit that description, in which case a lighter-weight option (such as Docker Compose) may be worth considering.

Making that assessment may be your most important decision as an engineering leader overseeing a distributed, containerized application project.
