OpenShift 4 Platform Features: Automation, Upgrades, and Lifecycle Management
A look at the OpenShift 4 release and its new features, including automation, upgrades, lifecycle management, and the full integration of CoreOS.
Now that the dust has settled and OpenShift 4 has been released to the public, I'd like to highlight some of the key features of the platform!
The first major highlight is the full integration of CoreOS into OpenShift. Red Hat acquired CoreOS in January 2018, and that integration is now complete and shipping in OpenShift 4, bringing the best of both worlds into one product. Dubbed "Red Hat Enterprise Linux CoreOS," or "RHCOS" for short, it brings the following capabilities to OpenShift:
Automated installation, upgrades and lifecycle management for every part of your container stack
The installation is done with a single command that asks you a few questions about your environment, and you're off to the races. Terraform takes care of the infrastructure provisioning, and your cluster configuration is managed by Kubernetes Operators. Everything from logging to monitoring is installed with a single command, and updating OpenShift has never been easier.
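As a rough sketch, that single command looks something like this (the directory name is a placeholder, and the exact prompts vary by platform):

```shell
# Run the installer interactively. It prompts for platform, region,
# base domain, cluster name, and a pull secret, then drives Terraform
# to provision the infrastructure for you.
./openshift-install create cluster --dir=./my-cluster

# Wait for installation to finish and print the console URL and
# kubeadmin credentials.
./openshift-install wait-for install-complete --dir=./my-cluster
```

Everything the installer creates is recorded in the `--dir` directory, so the same directory is used later to destroy or inspect the cluster.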
Navigating to "Cluster Settings" and clicking "Update now" is all that's required to get your cluster to the latest version.
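The same update can also be driven from the command line with the `oc` client; a minimal sketch:

```shell
# Show the cluster's current version and any updates available
# in its update channel.
oc adm upgrade

# Move the cluster to the newest version available in that channel.
oc adm upgrade --to-latest=true
```

The cluster version Operator then rolls the update out across the control plane and worker nodes for you.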
Machine sets are another cool feature of OpenShift 4: your nodes are treated the same way pods are treated in Kubernetes. Need more compute? Create a MachineSet, configure the replicas and labels you want, and essentially press go. The infrastructure provisioning fires away!
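A minimal MachineSet sketch, assuming an AWS cluster; the `<infra-id>` placeholders, instance type, and region are examples you would replace with values from your own cluster, and most provider-specific fields are omitted for brevity:

```shell
# Hypothetical example -- substitute your cluster's infrastructure ID
# and provider details before applying.
oc apply -f - <<'EOF'
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: <infra-id>-worker-extra
  namespace: openshift-machine-api
  labels:
    machine.openshift.io/cluster-api-cluster: <infra-id>
spec:
  replicas: 3
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machineset: <infra-id>-worker-extra
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infra-id>
        machine.openshift.io/cluster-api-machineset: <infra-id>-worker-extra
    spec:
      providerSpec:
        value:
          # AMI, subnet, IAM profile, and other AWS fields omitted here.
          instanceType: m5.large
          placement:
            region: us-east-1
EOF

# Scale it later the same way you would scale a Deployment:
oc scale machineset <infra-id>-worker-extra -n openshift-machine-api --replicas=5
```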
Support for Developer Productivity
Built on the open Eclipse Che project, Red Hat CodeReady Workspaces provides developer workspaces, which include all the tools and the dependencies that are needed to code, build, test, run, and debug applications. The entire product runs in an OpenShift cluster hosted on-premises or in the cloud and eliminates the need to install anything on a local machine.
CodeReady integrates with your enterprise SSO, and all the source code is stored within OpenShift, so there's no need to worry about work being lost when someone's laptop gets run over by a bus.
In a microservices architecture, communication between services can become complicated to implement and manage. OpenShift Service Mesh abstracts the logic of interservice communication into a dedicated infrastructure layer, so communication is more efficient and distributed applications are more resilient.
The biggest highlight of OpenShift Service Mesh is that it brings together the most popular pieces of a service mesh deployment and, of course, installs with one click. OpenShift Service Mesh contains three parts, installed as one unit: Istio for the mesh itself, Jaeger for distributed tracing, and Kiali for the dashboard.
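Once the Service Mesh operator is installed, the control plane itself is declared as a single resource. A rough sketch is below; note the API version and field names here are assumptions and may differ between releases:

```shell
# Hypothetical minimal control plane -- Istio, Jaeger, and Kiali
# are all enabled from one resource.
oc apply -f - <<'EOF'
apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
metadata:
  name: basic-install
  namespace: istio-system
spec:
  istio:
    tracing:
      enabled: true   # Jaeger
    kiali:
      enabled: true   # Kiali dashboard
EOF
```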
What's an Operator? An Operator is a method of packaging, deploying and managing a Kubernetes application. A Kubernetes application is an application that is both deployed on Kubernetes and managed using the Kubernetes APIs and kubectl tooling.
Operators were developed by CoreOS and gained mainstream traction in the Kubernetes project. With OpenShift 4, everything is deployed as an Operator. To make things simple, Red Hat has integrated OperatorHub into OpenShift 4. I encourage you to go have a look at OperatorHub and see all the different applications you can deploy onto your OpenShift cluster.
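Under the hood, installing something from OperatorHub boils down to creating a Subscription resource. A sketch, using the community etcd operator as an example (the channel and source names are assumptions to verify against your catalog):

```shell
# Subscribe the cluster to an operator from the catalog; the Operator
# Lifecycle Manager then installs it and keeps it up to date.
oc apply -f - <<'EOF'
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: etcd
  namespace: openshift-operators
spec:
  channel: singlenamespace-alpha
  name: etcd
  source: community-operators
  sourceNamespace: openshift-marketplace
EOF
```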
Need help getting OpenShift installed?
We're here to help. Whether it's on-prem or in the cloud, we can get you up and running on OpenShift. We also have labs built out to teach you the basics of containerizing applications on OpenShift, to help speed you along in your journey.