The Tale of Two Kubernetes
During 2017 we saw Kubernetes adopted by every major cloud and technology provider. Today, Kubernetes is used for all kinds of use cases, from infrastructure to applications, to specialized appliances and platforms for distributed services in embedded products. In fact, at this point we can say Kubernetes is the de facto container orchestration platform across the board, because the ecosystem has largely consolidated around it.
This is not to say that Kubernetes is the only option or the best option for every use case, but it is the best container orchestration option for the most common use cases. This is both a blessing and a curse.
Kubernetes is just one part of the story of application and infrastructure modernization. What we've seen so far is focus and excitement from dozens of startups around installing Kubernetes clusters, while the rest of the larger solution is ignored.
This has led to where I see the industry today: dozens of startups providing yet another way to deploy Kubernetes and, in many cases, even deploying straight from upstream repos. This is effective for teams working to extend Kubernetes or to build service appliances that use Kubernetes as their core, but it doesn't provide the developer experience or day 2 operations.
Think about all the tools, and their integrations, in the toolset supporting a modern agile development environment. Take, for example, the need for source control systems to trigger CI/CD pipelines; for the pipelines to provision identical environments for each stage; for each stage to maintain and report real-time metrics on the application and its impact on the infrastructure; and the requirement to support development driven by A/B testing, or deployment techniques like canary releases and rolling upgrades, among others. Now, think about all the day 2 operations tasks required to maintain management and visibility of such an environment.
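To make one of those techniques concrete: rolling upgrades are something Kubernetes does handle natively, expressed declaratively in a Deployment spec. The manifest below is a minimal sketch; the application name, labels and image are hypothetical placeholders, not from any real product.

```yaml
# Hypothetical Deployment illustrating Kubernetes' built-in rolling upgrade.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app            # hypothetical application name
spec:
  replicas: 4
  selector:
    matchLabels:
      app: example-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1              # at most one extra pod during the rollout
      maxUnavailable: 0        # never drop below the desired replica count
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: web
        image: example.com/web:1.2.0   # bumping this tag triggers a rolling upgrade
```

Changing the image tag and re-applying the manifest rolls pods over gradually, and `kubectl rollout undo deployment/example-app` reverts a bad release. The point of the article stands, though: everything around this step (the pipeline that produces the image, the metrics, the A/B decision) still has to be assembled from other tools.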
Kubernetes provides the platform to support modern agile development techniques and modern microservices architectures, but it falls short of being a complete solution. Someone has to put all the tools together and integrate them with Kubernetes to deliver the developers' experience and provide day 2 operations visibility and manageability. We can either dedicate teams to accomplish this, or we can adopt a Platform as a Service (PaaS) built on top of Kubernetes and focus our efforts on what differentiates us. I have adopted a simple mantra I tell customers when they ask what they should build or buy: build only what differentiates you.
This is the first Kubernetes, the one commonly described as the enabler of new developer and delivery experiences. If this is what you need, consider what I call a Kubernetes superset: products or services built on top of Kubernetes that focus on providing the developer and day 2 operations experiences. This is where I classify products like Amazon Fargate, OpenShift 3.x and PCF 2.0.
Now, as the title of this post alludes, there is more than one Kubernetes. We've been seeing a new trend of traditional hardware OEMs and service providers using Kubernetes as an infrastructure automation tool for service delivery platforms. The use cases vary from edge computing and the Internet of Things (IoT) to service providers' NFV infrastructure (NFVi) and many more. The requirements for these use cases are somewhat different, however: a long-term (5-10+ years) stable core; very light but extensible platforms; strict controls on service deployments and rollbacks; a guarantee of the integrity of microservices in hostile infrastructures; and support for modern techniques.
What do I mean by "hostile infrastructures"? Consider the number of attacks an IoT device is exposed to during its lifetime, or the types of attacks that devices delivering services on customer premises are subject to every day.
Industrial IoT devices are deployed with the vision that they will be low maintenance and will keep working years after their components are considered legacy. A Kubernetes (or container) infrastructure for these environments has to be supported well beyond the buzz of the current moment.
In the case of service delivery, say a service provider uses Kubernetes-based appliances or infrastructure to render and provision value-added services like VoIP, data replication, SD-WAN, traffic filtering or any kind of VNFs. The service provider demands long-term assurance that the core delivery platform won't change from one day to the next, and that it will be supported for many years to come. These requirements must be met while maintaining the security and integrity of the deployed services, in order to maintain a certain level of revenue assurance.
This kind of infrastructure is reflected in actual designs and experiments we are seeing, built around long-term support and safeguarding microservices in hostile infrastructures. Yet, after studying over 70 Kubernetes offerings and products (so far), I haven't found one willing to meet these demands.
The common pattern is that organizations requiring these characteristics are building their own "distributions" by taking existing tools, platforms and practices and using them in completely unexpected ways. This is what the "second Kubernetes" is all about: modern tools and platforms trickling through the organization, enabling innovative approaches in unexpected places.
The main issue with this approach is that it ends up creating "snowflake" Kubernetes, for which long term support is still not viable.
If we learned anything from OpenStack, it's that being able to customize the core to fit an organization's needs is great, but without a larger community or a long-term support strategy, the result becomes another snowflake that, in a few years, turns into the main obstacle to innovation.
I still remember the first reaction to OpenStack: "I don't want to be tied to a vendor." Many forgot to ask the follow-up question: "How many of those 'Jedi masters' working here today will still be working here in five years?" That approach gave OpenStack a bad reputation. If we look closely at the really successful OpenStack environments, we see a balance of long-term support and ease of day 2 operations, with OpenStack acting as one of several solution components instead of attempting to be the end goal. This is what I see missing in the Kubernetes community of providers.
If you are reading this post, you might find yourself in one of these two Kubernetes. They share the same core, but serve two different target markets with different needs.
If you need to support application transformation and the developer experience, consider the "first Kubernetes" with the supersets, and build only what differentiates you. Do not spend too much time building your own platform; there is no better training than assembling one, so use it as a training vehicle to learn how everything works and how the pieces fit together. Then, identify an integrated solution that accelerates your work by providing most of what you need, and build the few missing parts.
If you need infrastructure automation or service appliances based on Kubernetes, keep in mind that you lock in either to a vendor or to your own people. Vendors come and go, but so do people. Identify a vendor willing to work with you and co-develop a solution that fits your needs while providing sustainability.
To the vendors and niche players focusing on how to deploy Kubernetes: remember, this is like the Linux kernel – the value is in creating the whole experience, not just deploying a platform. If you want to focus only on the Kubernetes stack, consider the "second Kubernetes", a niche market that needs exactly that, but with long-term support.
Remember, innovation won't happen everywhere, but it can happen anywhere.