
Observations from KubeCon + CloudNativeCon 2019: A Network Architect's Journey to the Land of Phippy and Friends

A Network Architect provides perspectives from KubeCon + CloudNativeCon 2019.

December 18, 2019

Once upon a time, in a land far away, there was a PHP app named Phippy who set out on a journey in a container to discover an environment where she could live safely. Along the way she met a kindly whale and a captain (Captain Kube) piloting a large ship.

Phippy in a container encountering a kindly whale and Captain Kube -  phippy.io

You may be wondering, what does this have to do with networking?

Phippy and her friends represent a shift in application development toward microservices-based applications in containers. This shift in application architecture is having a big impact on the way networks support those applications.

These types of applications are referred to as cloud native applications and are orchestrated using Kubernetes software (as portrayed by Captain Kube). Phippy’s container is set up by the kindly whale (Docker).

Further details are simply and clearly explained in “The Illustrated Children’s Guide to Kubernetes.”

On to the observations

KubeCon + CloudNativeCon is a 12,000+ strong combined conference, held this year in San Diego, that highlights Kubernetes and the Cloud Native Computing Foundation’s open-source software projects.

Attending as a network architect, I expected that companies represented there would be common cloud native players such as Rancher, Red Hat, Pivotal, Canonical, Sysdig and Mirantis. This was certainly the case, but what was surprising was the presence of traditional IT and networking vendors such as Cisco, VMware, Arista and F5 (NGINX).

Why were they at the conference? I intend to answer that question and discuss the implications for the design and deployment of network infrastructure in the future.

In my discussions with session presenters and vendors, I observed three themes:

  1. Network services for cloud native applications are an integral part of the application deployment pipeline.
  2. Service meshes will be the method for implementing network services in the near future.
  3. Virtual network functions are evolving to a container-based architecture known as Cloud Native Network Functions.

Let’s take a look at those observations and their implications.

Network services for cloud native applications are an integral part of the application deployment pipeline.

Networking for applications is changing significantly with the advent of cloud native applications, Kubernetes and containers.

As network architects, we have observed the impact that hypervisors (ESXi, KVM, Hyper-V) and VMs have had on networking over the past 15 years. VMs moved us from physical switches to virtual distributed switches in a hypervisor. Application containers, in many ways, are the next evolution of the virtual machine.

You may wonder, how different could the networking be? The answer is that the protocols of networking have not changed (IP addresses, TCP or UDP ports, etc.), but where the networking exists and, more importantly, how it is deployed have changed significantly.

To understand this change, let’s examine how container-based applications are deployed. The applications are deployed by the Kubernetes orchestrator, which consumes a manifest file that details how the application should be deployed.

This manifest file (in YAML format) specifies the application name, the network details, the Docker image and other pertinent information. Kubernetes processes this information to instantiate the application container in a Kubernetes pod. The use of manifest files provides for the rapid deployment of an application (typically, in a few minutes or less).
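As a sketch of what such a manifest looks like, here is a minimal Kubernetes Deployment for a hypothetical web application (the names, image and port are illustrative, not from a real deployment):

```yaml
# Illustrative Kubernetes Deployment manifest (names and image are hypothetical)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: phippy-web              # the application name
spec:
  replicas: 2                   # how many pod copies Kubernetes should run
  selector:
    matchLabels:
      app: phippy-web
  template:
    metadata:
      labels:
        app: phippy-web
    spec:
      containers:
      - name: web
        image: example/phippy-web:1.0   # the Docker image to run
        ports:
        - containerPort: 8080           # a network detail: the port the container listens on
```

Applying this file (for example, with `kubectl apply -f deployment.yaml`) is all it takes to instantiate the application, networking details included.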

The simplified diagram below illustrates the process.

application deployment process
Application deployment process

You may be asking: so what? As a network architect, why do I care?

In the past, the network services for an application server were provisioned separately using a combination of the command line interface (CLI), a user interface (UI) or, most recently, an application programming interface (API) with an automation tool such as Ansible or Terraform.
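For comparison, that older style of provisioning might look like the following hedged Ansible sketch, which pushes an access VLAN to a switch port for an application server (the host group, interface and VLAN number are all hypothetical):

```yaml
# Illustrative Ansible playbook: traditional per-device network provisioning
# (host group, interface and VLAN values are hypothetical)
- name: Configure access VLAN for the app server port
  hosts: access_switches
  gather_facts: false
  tasks:
    - name: Set the access VLAN on the server-facing interface
      cisco.ios.ios_config:
        parents:
          - interface GigabitEthernet1/0/10
        lines:
          - switchport access vlan 42
```

Note that this workflow is entirely separate from the application deployment itself, which is exactly what changes with Kubernetes.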

With cloud native applications and Kubernetes, the definition of the networking between containers and Kubernetes clusters is contained in the manifest file. The ability to design, define and troubleshoot network issues between containers requires an understanding of not only the manifest file parameters, but also the network architecture of Kubernetes.

When an application is deployed in Kubernetes, it is dynamically assigned to a worker node in the Kubernetes cluster. The IP addresses of the application containers are also dynamically assigned.  This means that the application will be deployed across multiple servers with ephemeral IP addresses. There is network communication between the containers on a worker node and between worker nodes.
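Because pod IP addresses are ephemeral, Kubernetes provides the Service abstraction to give an application a stable address. A hedged sketch, with hypothetical names matching a web application’s pod labels:

```yaml
# Illustrative Kubernetes Service: a stable front for ephemeral pod IPs
# (names and ports are hypothetical)
apiVersion: v1
kind: Service
metadata:
  name: phippy-web
spec:
  selector:
    app: phippy-web      # matches the pods' labels, wherever they land in the cluster
  ports:
  - port: 80             # stable port clients connect to
    targetPort: 8080     # forwarded to the container port on whichever pods exist
```

The Service keeps a stable virtual IP and DNS name while the pods behind it come and go across worker nodes.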

Compared with the application environments of the past, where we knew the server, the port and the switch for troubleshooting purposes, the new cloud native application environment requires networking professionals to have a solid grasp of the Kubernetes deployment methodology, the application manifest file and Kubernetes networking as a whole.

For the networking professional of today and the future, understanding Kubernetes networking and application deployment methodologies is critical for career growth and competency.

Service Mesh is the upcoming method for implementing network services.

Service mesh 

During the course of the conference, there was a great deal of conversation regarding service mesh, but what is a service mesh?  

From the Istio.io site, we find this definition: “The term service mesh is used to describe the network of microservices that make up such applications and the interactions between them.” The Red Hat developer site lists the functions of a service mesh: “A service mesh provides traffic monitoring, access control, discovery, security, resiliency, and other useful things to a group of services.”

Essentially, a service mesh is a framework that provides discovery, security, access control and resiliency to a group of applications. Although the term service mesh is typically applied to a group of microservices (containers), it could also apply to applications that are deployed across both containers and VMs. 

One of the more prevalent service mesh frameworks is Istio. Although Istio is the most widely known framework, there are others, such as HashiCorp Consul Connect and Linkerd. Refer to this link for a more detailed comparison of these three service meshes.

Service mesh frameworks

As shown in the diagram below, the main components of Istio are Pilot, Mixer and Citadel. These components function with proxy “sidecars” to deliver the Istio functions. Read a further description of Istio.
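To make the control-plane/sidecar relationship concrete, here is a hedged sketch of an Istio VirtualService, the kind of manifest the control plane pushes to the sidecar proxies to steer traffic (the service name and subsets follow Istio’s well-known “reviews” sample and are illustrative):

```yaml
# Illustrative Istio VirtualService: weighted routing enforced by the sidecars
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews-route
spec:
  hosts:
  - reviews              # the in-mesh service this rule applies to
  http:
  - route:
    - destination:
        host: reviews
        subset: v1       # 90% of traffic stays on the stable version
      weight: 90
    - destination:
        host: reviews
        subset: v2       # 10% canary to the new version
      weight: 10
```

Note that this traffic policy lives in a manifest, not in a network appliance configuration.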

Istio architecture
Network service mesh

In the course of conversations on the show floor, there was talk of the “network service mesh.” Conversations with an engineer from Cisco shed light on this term. A network service mesh defines how services are provided between clusters of Kubernetes nodes, or between Kubernetes clusters and endpoints external to the cluster.

The service mesh functions that Istio, Consul Connect and Linkerd provide are for applications (microservices) in a single Kubernetes cluster. For clarity, we could refer to those as application service meshes. Network Service Mesh is a Cloud Native Computing Foundation (CNCF) sandbox project that is sponsored by Cisco, VMware, Juniper and doc.ai.

As shown in the diagram below, the network service mesh (NSM) allows applications in different Kubernetes clusters to communicate seamlessly. This can be accomplished via NSM alone (as shown with DB replication) or in conjunction with Istio instances.

Application communication NSM
CNCF Network Service Mesh Webinar, 2019-10-02, page 19

While application service meshes (Istio, Consul, Linkerd) are currently utilized by production customers, network service mesh is a developing effort. Currently NSM can provide inter-domain (AWS EKS, Azure AKS, Google GKE) connectivity with auto-healing and DNS, but much work is yet to be completed.

You might ask, are organizations deploying service meshes today in production?

During the conference there were presentations on the use of service mesh by Yahoo, Lyft, Freddie Mac and the US Air Force. So, yes, organizations are using service mesh today. That being said, the companies would be considered early adopters as service mesh is maturing as a technology.

The implication for the network professional is that the delivery and monitoring of network services for applications is changing from configurations applied to a network appliance to manifest file definitions used at the time of application deployment. The access controls, encryption and monitoring are handled by the service mesh that exists within the Kubernetes host nodes. Tighter integration and cooperation with application teams will be required for the successful deployment of essential business functions, whether in an on-premises data center or at a cloud provider.

As was the case with networking for cloud native applications, an understanding of modern application architectures (e.g., microservices), source code repositories (e.g., Git) and application deployment pipeline tools such as Jenkins will be invaluable to the networking team in supporting applications with service meshes (either application service meshes or network service meshes).

Virtual network functions are evolving to a container-based architecture known as Cloud Native Network Functions.

A virtual network function (VNF) is a software implementation of a network function traditionally performed on a physical device. Examples include IPv4/v6 routing, VPN gateways, L2 bridges/switches, firewalls, NATs and tunnel encap/decap. For some time now, we have seen virtual routers (e.g., Cisco CSR, VyOS) and virtual firewalls.

Cloud Native Network Functions (CNFs) are containerized instances of classic physical or virtual network functions (VNFs), as explained by Ligato.

These “containerized” network functions allow the creation of entire network services completely in software, instantiated in a matter of minutes. The promise of this technology is that an entire application infrastructure, including networking services, could be defined in a manifest in Git. This allows for quick and agile deployment of the entire application, and ongoing maintenance is simplified by the use of Git.
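As a small, hedged illustration of networking defined declaratively alongside the application, here is a Kubernetes NetworkPolicy — a simpler, Kubernetes-native example than a full CNF, with hypothetical labels and port — that would live in the same Git repository as the application manifests:

```yaml
# Illustrative NetworkPolicy: a firewall-like rule expressed as a manifest
# (labels and port are hypothetical)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-to-db
spec:
  podSelector:
    matchLabels:
      app: db              # the policy protects the database pods
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: web         # only web pods may connect...
    ports:
    - protocol: TCP
      port: 5432           # ...and only to the database port
```

A change to this access rule is a Git commit and a redeploy, not a change window on a firewall appliance.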

At the conference, the keynote featured a prototype demo of a carrier 5G network implemented using CNFs. The demonstration involved making a phone call over the 5G network from three separate sites in the United States, Canada and Europe. Although just a prototype, it illustrated the capabilities of CNF technology.

Summary

With the impact of cloud native applications, the world of networking is changing at a rapid pace. It will require network and security professionals to understand new application technologies such as Kubernetes, Git, Jenkins and continuous integration / continuous deployment (CI/CD) pipelines. At first it may appear overwhelming, but it can be learned step by step.

As my colleague, Joel King, stated: “It is an exciting time to be in networking.”  

For further information on how to get started with container platforms, get an overview of the technology and how we can help achieve your business outcomes.
