Simplify Software Delivery Using Containers and the Cloud, Part 2: Kubernetes

Last time, we talked about the benefits of packaging your applications in containers. Today, we’ll discuss how Kubernetes builds on top of Docker.

Initially, the kinds of technologies that underpin the cloud focused on two fundamental benefits:

  1. Freeing up your IT team from having to maintain infrastructure (computation, storage, databases, and networking resources).
  2. Making your infrastructure (and therefore your applications) flexible and scalable on demand.

Think virtual machines, storage, and networking resources hosted outside of your data center, with tools that allow you to remotely create and operate these resources as easily as if they were on premises. Add a billing model based on consumption of those resources and you have the fundamentals of today's big cloud players.

 

 


 

 

With the advent of containers and container orchestration solutions, all the major cloud players added cloud platform services around Docker and Kubernetes to help customers move their applications to the cloud.

In short, Kubernetes is a container orchestration platform that builds on top of Docker, allowing large numbers of containers to scale and work together to reduce operational burden.

We install Kubernetes on a group of machines (either bare metal or virtual) known as a cluster of nodes. One of the nodes acts as the master, coordinating the cluster; the others are workers, which is where your containers will run.

The fundamental unit in Docker at runtime is a container. The equivalent in Kubernetes is a pod. A pod is made up of a set of Docker containers that are deployed together, started and stopped together, and share storage and network.

Let's get a feel for what our Kubernetes cluster allows us to do. Kubernetes has a powerful UI (Dashboard), but here we’ll explore its features using its command-line tool – kubectl.

 

 

kubectl get nodes

provides you with an overview of the nodes in your cluster, displaying their status and how long they have been running for.
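
For a small cluster with one master and two workers, the output will look something like the listing below (the node names and version numbers are purely illustrative, and yours will differ):

NAME       STATUS   ROLES    AGE   VERSION
master-1   Ready    master   30d   v1.18.2
worker-1   Ready    <none>   30d   v1.18.2
worker-2   Ready    <none>   12d   v1.18.2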

 

 

kubectl describe node <node-name>

lets you zoom in on a specific node and provides a full report on its configuration, role, and capacity; how it is doing in terms of resources (disk, memory, processes); what kind of host it is running on; and its current workload of pods, with their CPU and memory usage and limits.

 

 

kubectl create -f pod.yaml

lets you instantiate a pod using a simple text file description of its containers (in YAML format) and the resources they need. kubectl get pods lets you see all the pods in your cluster (across all nodes).
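
As a sketch, a minimal pod description might look like the following, where the pod name, container name, image, and port are all hypothetical stand-ins for your own application:

apiVersion: v1
kind: Pod
metadata:
  name: my-app-pod        # hypothetical pod name
  labels:
    app: my-app
spec:
  containers:
  - name: my-app          # hypothetical container name
    image: my-app:1.0     # hypothetical image from your registry
    ports:
    - containerPort: 8080

Saved as pod.yaml, the command above has Kubernetes schedule this pod onto one of the worker nodes.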

 

Let's pause briefly to note that pods can be scaled to multiple instances using a replica set.

 

 

kubectl create -f replicaset.yaml

will start up a specified number of instances of a pod, distributed across your cluster based on where it makes most sense. You can then update the number of pods by adjusting the replica set.
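
A replica set definition wraps a pod template and adds a replica count. A minimal sketch (names and image are again hypothetical):

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-app-rs
spec:
  replicas: 3               # run three instances of the pod
  selector:
    matchLabels:
      app: my-app
  template:                 # the pod template to replicate
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:1.0

You can then change the number of instances with kubectl scale replicaset my-app-rs --replicas=5, or by editing the file and re-applying it.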

 

Let’s further understand that a deployment will allow us to do a rolling update of replica sets.

 

 

kubectl create -f deployment.yaml

creates a deployment in your cluster that will start up a set of pods/replica sets. Adjusting the deployment will allow you to achieve rolling updates, i.e. it will coordinate spinning down the old pods/replica sets and spinning up new replacement pods/replica sets.
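
A deployment definition looks almost identical to a replica set, with an optional strategy section controlling how the rollout happens (names and image are again hypothetical):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deployment
spec:
  replicas: 3
  strategy:
    type: RollingUpdate     # replace pods gradually rather than all at once
    rollingUpdate:
      maxUnavailable: 1     # keep at most one pod down during the update
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:1.0

Changing the image tag (say, to my-app:1.1) and re-applying the file with kubectl apply -f deployment.yaml is enough to trigger a rolling update, and kubectl rollout status deployment/my-app-deployment lets you watch it complete.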

 

 


 

 

So, now that we have looked at a few of the Kubernetes commands, we can see that Kubernetes gives us simple-to-use tools to manage our cluster of machines, as well as to manage and scale the application components deployed to that cluster. Let's understand the benefits this brings us.

Let’s start by considering that you no longer need to worry about which physical machine your application components end up running on.

This is a big deal! Applications and their load are constantly changing, but now the platform can decide for us how to best utilize our resources across all the applications and their changing workload requirements.

Now let's understand that we have a common way of scaling any and all of our application components, irrespective of the development language, frameworks, and application platform that each of them has been implemented with.

This removes a lot of potential complexity when considering your systems end-to-end. This consistency also allows the platform to make trade-offs between different types of application components (say, front end versus back end) when it comes to scaling and provisioning instances of each component.
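
Those trade-offs are driven by the resource requests and limits you declare on each container, using the same notation no matter what technology the container runs. A sketch of the relevant fragment of a pod spec (names, image, and numbers are illustrative):

spec:
  containers:
  - name: my-app
    image: my-app:1.0
    resources:
      requests:
        cpu: "250m"       # a quarter of a core reserved when scheduling the pod
        memory: "256Mi"
      limits:
        cpu: "500m"       # hard ceiling before the container is throttled
        memory: "512Mi"   # exceeding this gets the container killed and restarted

The scheduler uses the requests to decide which node has room for the pod; the limits stop any one component from starving the others.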

 

 


 

 

Each pod has a mechanism for reporting its "health" back to the platform. This allows the platform to detect when a process has died or the pod is misbehaving in any way, and allows the platform to un-provision the bad pod and spin up a replacement.
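
This health reporting is configured per container with liveness and readiness probes: the liveness probe tells the platform when to restart a container, and the readiness probe tells it when the container is ready to receive traffic. A minimal sketch, assuming your container exposes an HTTP health endpoint at /healthz on port 8080 (both hypothetical):

spec:
  containers:
  - name: my-app
    image: my-app:1.0
    livenessProbe:             # restart the container if this starts failing
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 10  # give the process time to start up first
      periodSeconds: 15
    readinessProbe:            # hold back traffic until this succeeds
      httpGet:
        path: /healthz
        port: 8080
      periodSeconds: 5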

It is at this point that you will appreciate how much more robust your applications have become. And again, because the platform takes care of this in a standardized way, we haven't introduced a lot of additional complexity for our developers.

When a cluster does start running short on resources, extending your infrastructure is dead simple. Install the worker node software on a new machine (bare metal or virtual), register it with the cluster, and it is immediately available to start taking on some of the load from the existing nodes. Many cloud providers offer the underlying services that allow the cluster to do this automatically.
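
If you built the cluster yourself with kubeadm, for example, registering the new worker comes down to one command run on the new machine, using a token and certificate hash issued by the master (both are placeholders below):

kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>

Managed cloud offerings hide even this step behind a node pool or auto-scaling setting.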

Importantly, let's note that the cluster can do this without any service outage. Pods are removed from existing nodes and moved to the new node without any downtime for the users. This makes life easier for our operations teams whose reason for existence is to keep the application up and provide a high quality of service.

Pods, replica sets, and deployments (and all other resources in the cluster) are all created from simple text file definitions.

This has many benefits. It is easier to understand each application component and all of its resource needs. We can put the definitions into source control and include them in CI/CD pipelines. And Kubernetes allows these CI/CD pipelines to easily achieve advanced deployment practices such as blue/green deploys, canary releases, and deploys to support A/B testing.
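
In practice, that means the deploy step of a pipeline can be as simple as applying the checked-in definitions (the directory name here is illustrative):

kubectl apply -f k8s/

kubectl apply is declarative: it creates the resources that don't exist yet and updates the ones whose definitions have changed, which is exactly the repeatable behavior you want from a pipeline step.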

In addition to being a fantastic platform for running your application containers, Kubernetes is also a great place to run the operations tools that manage your applications. You'll often have containers running for services such as logging (for example, using the Elastic Stack) and monitoring (commonly using Prometheus).

This makes it easier to create richer application platforms, where the additional platform components benefit from the same scalability, robustness, and flexibility as your application components, while using common mechanisms.

 

 


 

 

As a reminder of where we started: we all want to be able to focus on innovation and writing great software.

Containers provide us with a simple model for removing the hassle of reliably building and shipping our software components.

Cloud-enabling platform services such as Kubernetes greatly simplify scaling applications, achieving near-zero downtime, hosting supporting services such as logging and monitoring, and getting the best utilization out of your resources, whether they are in the cloud or in your own data center.

Given all these benefits, why would you want to do things any other way? Welcome to the future—it is now!

 

Greg Hodgkinson
Practice Director

Gregory Hodgkinson is the Lifecycle Tools and Methodology Practice Director at Prolifics and an IBM Champion for Rational. Prior to that he was a Founder, Director, and the SOA Lead at 7irene, a visionary software solutions company in the United Kingdom. He has 16 years of experience in software architecture, initially specializing in the field of component-based development (CBD), then moving seamlessly into service-oriented architecture (SOA).

His extended area of expertise is the Software Development Lifecycle (SDLC), and he assists Prolifics and IBM customers in adopting agile development processes and SOA methods. He is still very much a practitioner, and has been responsible for service architectures for a number of FTSE 100 companies. He presents on agile SOA process and methods at both IBM (Rational and WebSphere) and other events, has also co-authored a Redbook on SOA solutions, and contributes to DeveloperWorks.