Stay relevant in today's data center and cloud services environments.

Kubernetes (sometimes referred to as K8s) is a Google-sponsored open source project for orchestrating containers at scale. Launched in 2014 as a modern approach to managing and operating microservices-based applications, Kubernetes has become the de facto container orchestration engine across organizations.

Kubernetes market

So why is it that roughly 70 percent of organizations looking for a container orchestration framework prefer Kubernetes-based solutions?

From a startup and OEM perspective, it is easy to see how the versatile and extensible features of Kubernetes played a major role in its adoption. The extensibility inherent in Kubernetes allows any OEM to offer its existing products and services portfolio as container-aware services that are easily consumable by microservices running in Kubernetes.

From the startup's perspective, Kubernetes provides an innovative approach for developers to build new products and services, eliminating the need to worry about underlying infrastructure limitations or the other particulars we had to deal with in the past.

With Kubernetes, it does not matter whether the services run on-premises, in the cloud or in a hybrid cloud architecture. It is exactly the same service and the same containers across all environments. Developers don't have to worry whether the customer wants to deploy the service over VMware, OpenStack, Azure Stack, AWS or Google Cloud; from the application and developer perspective, it is completely transparent.

From the enterprise service provider and SMB perspective, Kubernetes provides a truly polyglot infrastructure, giving developers the freedom to choose the tools and programming languages they find best suited to a particular job. They can change those selections as often as they want.

Platform administrators manage the same framework on-premises or in the cloud, with native hybrid cloud and multi-data center capabilities. As long as they can run Linux, they can run Kubernetes.

Container limitations at scale

Developers love having the ability to run the exact same Docker image on a laptop, on servers or in the cloud. A simple "docker run …" command is all it takes to keep launching containers as often as desired. This easy-to-run capability is also what makes running containers at scale an operational nightmare.

To understand some of the limitations, you should also understand what happens behind the scenes with containers. Start with this post: Deep Dive into Linux & Docker Containers.

When you launch a container, you can either specify the ports or let the Docker engine dynamically allocate the ports it will use to publish a particular service provided by the container.

For example, if we have an NGINX container serving a web page, we could manually specify port 8000 for the first instance. Because port 8000 is already in use by the first instance, we need a different port for the second instance, for example, port 8001. The next one might be 8002, the one after that 8003, and so on. If we assume 100 instances, they will consume ports 8000–8099 on the Docker node.

This scenario presents various challenges.

If all the instances provide the same service, then we can set up a load balancer to distribute the load among them. However, that does not offer node failure protection, so operators have to make sure copies are distributed among multiple nodes. Think about this: you're deploying 100 copies among multiple nodes, with each container instance using its own port, and now we also need to keep the load balancer configuration updated. If a single container fails, how do we track which one failed so we can replace it with a new one? If a new container instance is deployed, how do we determine which node to use, and how do we track which port it should use?

All of those complications arise with a single containerized service. Now consider a small enterprise with a microservices-based app running outside a lab or test environment, with five to eight microservices.

Sample microservices application (implementation)

Take the sample microservices application in the above diagram. Deploying 100 copies of that app means a minimum of 500 to 800 containers, and that is before any kind of node or geo redundancy enters the picture.

As you can see, even the simplest containerized app is extremely susceptible to container sprawl, and that does not even account for the redundancy or scheduling challenges, much less maintaining the load balancer configurations.

Going back to a single-container app, say we run 10 copies on a single node, consuming ports 8000–8009. That means we have node:8000, node:8001, node:8002 and so on up to node:8009, each representing a different copy of the app, which could be serving different customers or business units. This breaks the friendly name and address resolution our traditional DNS understands. Traditional DNS only knows how to resolve IP addresses to and from names; if example.com maps to node:8000, example2.com to node:8001, and so on, traditional DNS cannot handle these cases. A reverse proxy or load balancer is needed to handle this type of translation.

These are just two examples of how complex it is to operate and manage containers at scale with traditional or manual approaches. This is where container orchestration platforms come into the picture to abstract and simplify these complexities.

There are various container orchestration solutions. Each improves on the others, and some specialize in certain workload types or target a particular team. For modern app containers, the main options are Docker Swarm, Apache Mesos and Kubernetes. According to research published by CoreOS in May 2017, 71 percent of organizations are looking into Kubernetes-based solutions as their container orchestration and management platform.

As of August 2017, there are more than 70 Kubernetes-based distributions. A non-exhaustive list of 67 of them is here, and that list does not include distributions that have since hit the market.

Kubernetes architecture

Kubernetes is a microservices architecture (MSA) platform for automating the deployment, operation and scaling of containerized applications. It can run anywhere Linux runs and supports on-premises, hybrid and public cloud deployments.

Kubernetes provides features like storage orchestration, self-healing of failed containers, automated rollouts and rollbacks, and service discovery and load balancing, among others.

The Kubernetes architecture has what can be considered control nodes and worker nodes.

Kubernetes architecture (control nodes and worker nodes)

Under the control nodes we have the Kubernetes masters and the etcd cluster nodes.

The Kubernetes masters host the scheduler and API servers. The scheduler selects and designates the nodes where a particular group of containers will run; in case of node failure, it redeploys them to a new node. The API servers provide the API entry points for managing, operating and consuming the infrastructure. There can be one or more master nodes.

The etcd cluster contains the configuration and state of all services running in the Kubernetes cluster. It is used by other internal Kubernetes services (e.g. SkyDNS) for the auto discovery of services running in the infrastructure.

Kubernetes platform concepts

Kubernetes introduces some concepts and abstractions on top of containers that help define a modular, extensible framework.

Kubernetes platform concepts

Kubernetes concepts are built around microservices architectures, and I'll explain them using the above image as a reference, moving from right to left.

The first concept on the right is not a Kubernetes concept, but rather the architectural development model Kubernetes was designed to support. An application following a microservices architecture consists of a set of extremely focused services that are independently deployable and independently scalable (e.g., supporting horizontal scalability). In other words, what you see or experience as an application is the result of a well-orchestrated composition of microservices. One of the outcomes of a properly designed microservices application is that even if one or more of the microservices fail, we can still use the rest of the application.

A good example of a well-designed microservices app is Netflix. We might not be able to read the reviews, see the movie thumbnails or get our list, but we can probably still watch a movie. Likewise, when one of their API gateways fails, we might not be able to watch a movie on one platform (e.g., tablets), but it still works on other platforms.

The first Kubernetes-specific concept is the Service. We can map a Service to a microservice in the microservices model. A Service is a logical abstraction of pods. A pod comprises one or more strictly dependent containers sharing storage, network (e.g., IP address) and runtime options. Pods are the minimum deployable unit in Kubernetes, and a pod runs on a single node. Kubernetes restarts pods in case of pod failure and reschedules them to other nodes in case of node failure.
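To make the pod abstraction concrete, here is a minimal sketch of a pod definition with a single NGINX container; the name, label and image tag are illustrative assumptions, not taken from the diagrams in this article.

```yaml
# Minimal, illustrative pod definition: one NGINX container.
# The pod name, label and image tag are hypothetical examples.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.13
    ports:
    - containerPort: 80
```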

Now, going back to Services: a Service is what is exposed outside the Kubernetes cluster. It has an IP address that does not change, no matter how many pods are created or re-instantiated to support it.

A series of Kubernetes capabilities supports each Service, taking care of tracking pods, adding or removing pods to auto-scale the Service, and even enabling deployment models like rolling upgrades, canary deployments and blue/green deployments.

So, all these Services and the abstraction of pods start addressing the issues of managing and operating containers at scale. Going back to our previous microservices app example, it will look something like the following diagram.

Sample microservices app in Kubernetes

In contrast to managing individual containers, where the operator has to track and distribute every single instance, with Kubernetes you describe what you want and it takes care of the rest.

Creating a service in Kubernetes

The above illustration shows how a Service is created. In this case, from line 17, we can see this Service will front pods with the label "app=nginx"; as long as a pod has that label, it will be among the targets during load balancing. Notice this is just a label, so I can have pods running NGINX, Apache or Tomcat, but as long as they carry the "app=nginx" label, this Service will load balance the traffic.
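For readers who cannot see the illustration, a comparable Service definition might look like the following minimal sketch; the Service name and port numbers are assumptions, and only the selector mirrors the "app=nginx" label discussed above.

```yaml
# Illustrative Service definition; the name and ports are hypothetical.
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx        # pods carrying this label become load-balancing targets
  ports:
  - protocol: TCP
    port: 80          # port exposed by the Service
    targetPort: 80    # port the containers listen on
```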

We can define the actual pods and labels so the Service discovers them, or we can create a ReplicationController, which will create the number of pods needed and make sure the minimum requested number of pods is always available.

ReplicationController example

In the above illustration, line 25 specifies the minimum number of pods required, and line 32 specifies the container image (a Docker image in this example) to use for the pod. In this definition, we can also identify features like the livenessProbe capability on line 37, where we tell Kubernetes the type of test to perform and pass to validate that the pod is still operational. Should this test fail, the system can automatically create a new pod and destroy the bad one.

On line 43 you can see another Kubernetes capability, the readinessProbe. This specifies a test to perform and pass to validate that the pod is ready to service requests. The system won't associate the pod with the Service until the readinessProbe succeeds, and it will similarly remove the pod's association with the Service if it fails the test at any point during its lifetime.
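Since the illustration itself is not reproduced here, the following is a minimal sketch of a ReplicationController with the capabilities described above; the replica count, image and probe endpoints are assumptions for illustration.

```yaml
# Illustrative ReplicationController; replica count, image and probe paths are hypothetical.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-rc
spec:
  replicas: 3               # minimum number of pods to keep running
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.13   # container (Docker) image for the pod
        ports:
        - containerPort: 80
        livenessProbe:      # restart the container if this check fails
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 15
          periodSeconds: 10
        readinessProbe:     # route Service traffic only after this check passes
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 5
```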

These are just some of the basic features and capabilities of Kubernetes.

Kubernetes extensible architecture

As mentioned at the beginning of this blog, Kubernetes has an extensible architecture. The illustration below highlights, in red dotted circles, some of the modules that are commonly extended or replaced by OEMs.

Kubernetes extensible architecture

For example, network OEMs can extend kube-proxy and the Kubernetes networking modules to provide additional networking capabilities or integration with their existing products. This is the case with VMware NSX and Cisco ACI.

The Kubernetes load balancing services can be replaced or extended by the cloud provider. This is why you see software-based load balancers like HAProxy, Traefik, F5 and others integrated with it, or, in the case of providers like Google, replaced by their own offering.

Storage OEMs usually extend the kubelet to identify persistent or ephemeral storage requests and properly map the storage to the correct nodes. Example OEMs here include NetApp Trident, DriveScale, Flocker and others.

In the end, Kubernetes provides a strong and robust container orchestration platform for anyone supporting microservices-based applications and containers at scale. Even when OEMs create their own distributions, applications deployed to Kubernetes are not tied to a specific OEM. For example, you can create an application in your local Kubernetes cluster and use exactly the same YAML files to deploy it to Google GKE or Azure Container Service.

Even better, with Kubernetes, worker nodes can exist anywhere (in another data center, across clouds, etc.) and you can control the application across all locations. I'll dive deeper into these advanced topics in the future.

Kubernetes ecosystem

Kubernetes provides a robust container orchestration platform, but that is not everything developers, operations or infrastructure teams need to successfully use and operate such an environment. New container-related solutions keep improving on the ones that came before.

To help organizations successfully adopt Kubernetes and integrate it with the emerging ecosystem of tools and solutions that complement its capabilities, the Cloud Native Computing Foundation (CNCF) was created under the Linux Foundation.

The CNCF was created to support open source technologies that enable cloud portability without vendor lock-in. This includes projects covering containers, microservices, programmable infrastructure, CI/CD, networking, storage, logging, nodes, services, monitoring, visualization and more. The CNCF's official container orchestration project is Kubernetes.
