
Securing Container Infrastructure with Red Hat OpenShift

This article explores how an enterprise can get started on the journey to securing its container application deployments by creating a secure container infrastructure architecture based on the Red Hat OpenShift container platform. It is the second in a series of articles on container security.

Security requirements, and hence the container security solution architecture, are unique to every enterprise. The nature of the applications, the business needs, the container orchestration, the external and internal access frameworks and the DevOps models all differ from one enterprise to the next.

Red Hat OpenShift considers all needs 

The first step towards developing a comprehensive container security solution is to review all of the above factors and requirements to understand where potential threats and vulnerabilities may arise. This article focuses on building a container security solution covering the container infrastructure with the Red Hat OpenShift container platform.

The Red Hat OpenShift platform provides default security policies and hardening out of the box, giving enterprises a head start in securing their container deployments. Red Hat OpenShift-certified ecosystem products such as Sysdig and Twistlock can extend the security capabilities of OpenShift clusters.

Reference architectures

The SP 800-190 container application security guide published by the National Institute of Standards and Technology (NIST) is a great starting point for building a container security architecture.

The following diagram, taken from a recent OpenShift Commons presentation, shows the five pillars of NIST SP 800-190 and how they map to Red Hat OpenShift and Quay features.

OpenShift and NIST SP 800-190

Securing external connectivity

The next diagram shows the network security layers above the container infrastructure that a typical enterprise uses to secure access into the environment where application containers are hosted. This is not specific to Red Hat OpenShift and applies to any container platform.

External access to Kubernetes

A web application firewall (WAF) can be used to protect against threats such as SQL injection and cross-site scripting (XSS). DDoS prevention devices can rate-limit access to application endpoints hosted in the Kubernetes cluster. These are traditional security measures that address network-based threats to both containerized and non-containerized applications.

Securing access to the container platform

As mentioned in a previous article, securing access to the Kubernetes console and API is an essential part of any security architecture.

Authentication for the console and API should be integrated, via an OAuth or LDAP provider, with the enterprise identity and access management backend. Red Hat OpenShift 4 enforces console admin passwords; setting them up is a mandatory part of the cluster installation process. OpenShift 4 comes with a built-in OAuth server and supports integration with:

  • Keystone
  • LDAP
  • GitHub
  • GitLab
  • GitHub Enterprise (new with 3.11)
  • Google
  • OpenID Connect
  • Security Support Provider Interface (SSPI) to support SSO flows on Windows (Kerberos)
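As an illustrative sketch, an LDAP identity provider can be added to the OpenShift 4 cluster OAuth configuration roughly as follows; the host name, bind DN and the secret and config map names are placeholders:

```yaml
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: corp-ldap                 # display name, hypothetical
    mappingMethod: claim
    type: LDAP
    ldap:
      url: "ldaps://ldap.example.com/ou=users,dc=example,dc=com?uid"
      insecure: false               # require TLS to the LDAP server
      attributes:
        id: ["dn"]
        preferredUsername: ["uid"]
        name: ["cn"]
        email: ["mail"]
      bindDN: "cn=svc-openshift,ou=service,dc=example,dc=com"
      bindPassword:
        name: ldap-bind-password    # secret in the openshift-config namespace
      ca:
        name: ldap-ca               # config map holding the LDAP CA bundle
```

Using `ldaps` with a CA bundle, rather than `insecure: true`, keeps credentials encrypted between the OAuth server and the directory.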

SSL/TLS should be enabled for the dashboard and API endpoints, preferably with certificates that are not self-signed. Care should be taken to renew and rotate certificates so that they do not expire. In addition to securing console access, all internal communications between the masters, app nodes and the internal registry should be encrypted.

Red Hat OpenShift 4 encrypts all internal control plane communications by default and rotates the corresponding certificates automatically.

Securing container host OS

It is very important to address the threat of a compromised container gaining access to the container host OS, or to other containers residing on the same host. To minimize the attack surface, the container host OS should ideally be a container-focused OS that is:

  • secure;
  • minimal;
  • immutable; and
  • easy to manage and to keep up to date.

A good example of this type of OS, streamlined for container deployment, is Red Hat CoreOS: an immutable, read-only, locked-down, minimal OS that serves as the base for Red Hat OpenShift 4.

The following diagram shows a Kubernetes container host running on RHEL/RHCOS with numerous security features enabled. 

Container host security stacks

SELinux, turned on by default, provides an additional layer of security for the host OS.

SElinux context for container runtime

The above screen capture from a Red Hat OpenShift 4.2 app node shows SELinux Multi-Category Security (MCS) enforced on the CRI-O container runtime for Kubernetes pods.

Seccomp is a sandboxing facility in the Linux kernel that acts as a firewall for system calls (syscalls). It uses Berkeley Packet Filter (BPF) rules to filter syscalls and control how they are handled. These filters can significantly limit a container's access to the container host's Linux kernel. As an additional security measure, custom seccomp profiles can be defined to filter out various undesirable system calls within the containers.
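As an illustrative sketch, on newer Kubernetes releases (1.19 and later, where the `seccompProfile` field replaced the earlier annotation-based approach) a pod can opt into the runtime's default seccomp filter like this; the pod name and image are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app                # placeholder name
spec:
  securityContext:
    # Apply the container runtime's default seccomp filter to every
    # container in the pod, blocking rarely used syscalls.
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: app
    image: registry.example.com/app:latest   # placeholder image
    securityContext:
      allowPrivilegeEscalation: false
```

A custom profile stored on the node can be referenced instead with `type: Localhost` and a `localhostProfile` path.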

Multi-tenancy and RBAC

It is best practice to deploy applications with different security sensitivity levels to different namespaces or projects for better security and isolation in a multi-tenant environment. 

A project is a Kubernetes namespace with additional annotations and is the central vehicle by which access to resources for regular users is managed. A project allows a community of users to organize and manage their content in isolation from other communities. 

Users must be given access to projects by administrators or, if allowed to create projects, automatically have access to their own projects. OpenShift 4 projects, coupled with the network isolation provided by the OpenShift SDN, provide complete isolation for applications.

Projects can be designed based on teams, groups, enterprise departments or other considerations to provide isolation.

In addition, applications with sensitive security levels can be scheduled on their own worker nodes if further isolation is desired. This can be done using labels during application deployment.
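As a sketch, assuming a hypothetical label such as `security-zone: restricted` has been applied to the dedicated worker nodes, a deployment can pin its pods to those nodes with a `nodeSelector`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sensitive-app               # placeholder name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sensitive-app
  template:
    metadata:
      labels:
        app: sensitive-app
    spec:
      # Schedule these pods only on worker nodes carrying the label.
      nodeSelector:
        security-zone: restricted
      containers:
      - name: app
        image: registry.example.com/app:latest   # placeholder image
```

Note that a `nodeSelector` only attracts these pods to the labeled nodes; to keep other workloads off the same nodes, the nodes would also need to be tainted and these pods given a matching toleration.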

OpenShift RBAC restricts access on a need-to-know basis. OpenShift roles can be project-scoped or cluster-scoped. The graphic below shows the role bindings for project-scoped and cluster-scoped users and groups.

OpenShift RBAC model
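As an illustrative sketch (the project and group names here are hypothetical), a project-scoped role granting read-only access to pods might be bound to an enterprise group as follows:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-viewer
  namespace: project-a              # hypothetical project
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "watch"]   # read-only: no create/update/delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-viewer-binding
  namespace: project-a
subjects:
- kind: Group
  name: dev-team                    # hypothetical enterprise group
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-viewer
  apiGroup: rbac.authorization.k8s.io
```

Binding a `ClusterRole` with a `ClusterRoleBinding` instead would grant the same verbs across every project, which is why cluster-scoped roles should be handed out sparingly.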


Protecting resources

It is important to protect compute, memory and storage resources during an attack so that container pods are not starved of their required resources. This is evident in cryptocurrency-mining exploits, where attackers run resource-intensive mining operations on hijacked hosts.

Using quotas and limit ranges, cluster administrators can set constraints limiting the number of objects or the amount of compute resources used in each project. This helps cluster administrators better manage and allocate resources across all projects, and ensures that no project consumes more than is appropriate for the cluster size.
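As a sketch, assuming a hypothetical project named `project-a`, a quota capping the project's total consumption can be paired with a limit range that supplies per-container defaults:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: project-quota
  namespace: project-a
spec:
  hard:
    pods: "20"                      # cap on the number of pods
    requests.cpu: "4"               # total CPU the project may request
    requests.memory: 8Gi
    limits.cpu: "8"                 # total CPU the project may burst to
    limits.memory: 16Gi
---
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: project-a
spec:
  limits:
  - type: Container
    default:                        # applied when a container sets no limit
      cpu: 500m
      memory: 512Mi
    defaultRequest:                 # applied when a container sets no request
      cpu: 100m
      memory: 128Mi
```

The limit range matters because, once a quota constrains `requests` or `limits`, pods that omit those fields are rejected unless defaults are injected.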

Securing pod connectivity

Inter-pod connectivity in a container deployment is as important to secure as external connectivity to the containers. In the event of a compromise, attackers may probe adjacent container pods to infiltrate them and expand the attack surface.

Therefore, it is important to secure and control inter-pod connectivity. Network policies can be used to have fine-grained control over the inter-pod communication with respect to the ports and pods they communicate with.

Pods are non-isolated by default on vanilla Kubernetes; they accept traffic from any source. Pods become isolated by having a NetworkPolicy that selects them. Once there is any NetworkPolicy in a namespace selecting a particular pod, that pod will reject any connections that are not allowed by any NetworkPolicy. (Other pods in the namespace that are not selected by any NetworkPolicy will continue to accept all traffic.)

The OpenShift 4 multitenant mode provides project-level isolation for pods and services. Pods from different projects cannot send packets to or receive packets from pods and services of a different project. You can disable isolation for a project, allowing it to send network traffic to all pods and services in the entire cluster and receive network traffic from those pods and services.

The NetworkPolicy mode allows project administrators to configure their own isolation policies using NetworkPolicy objects. NetworkPolicy is the default mode in OpenShift Container Platform 4.2.

Below is an example of a simple network policy in OpenShift SDN that allows traffic to pod blue only on TCP port 443.

A sample network policy
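As a sketch of what such a policy could look like (assuming pod blue carries the hypothetical label `app: blue`), the following NetworkPolicy admits only TCP 443 ingress to the selected pods and drops everything else:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-443-to-blue
spec:
  podSelector:
    matchLabels:
      app: blue                     # applies to pod blue
  policyTypes:
  - Ingress
  ingress:
  - ports:
    - protocol: TCP
      port: 443                     # only HTTPS traffic is admitted
```

Because no `from` clause is given, any source may connect, but only on TCP 443; all other inbound traffic to the selected pods is rejected once the policy is in place.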

The diagram below illustrates the enforcement of the above network policy for two Kubernetes projects, A and B. 

Scenarios based on network policy enforcement


Cloud native firewalls

There are next-generation cloud-native firewalls that provide similar functionality to traditional firewalls, but run natively in a Kubernetes cluster. They provide extra filtering and monitoring over the network policies discussed above.

Istio/Service mesh

Istio is a platform framework that allows operators to run a distributed microservice architecture successfully and efficiently, while providing a uniform way to secure, connect and monitor microservices. An outline of Istio's core features (traffic management, security, observability) is available on its website.

While Istio is platform independent, using it with Kubernetes (or infrastructure) network policies increases your benefits, including the ability to secure pod-to-pod or service-to-service communication at the network and application layers. The policy enforcement component of Istio can be extended and customized to integrate with existing solutions for ACLs, logging, monitoring, quotas, auditing and more.
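For example, in Istio 1.5 and later, mutual TLS can be required for all workloads in a namespace with a PeerAuthentication resource; the namespace name here is hypothetical:

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: project-a              # hypothetical namespace
spec:
  mtls:
    mode: STRICT                    # reject plaintext pod-to-pod traffic
```

With `STRICT` mode, sidecar-injected workloads in the namespace accept only mutually authenticated TLS connections, complementing the layer 3/4 isolation that network policies provide.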

Securing registries

It is recommended to restrict access to external container registries for better control of the images being deployed. A proxy or firewall rules can be used to control access to untrusted container registries such as Docker Hub.

For internal enterprise registries, it is recommended to implement the following security features:

  • Secure API/dashboard endpoints access with SSL/TLS certificates
  • Enterprise LDAP/AD or OAuth authentication mechanisms enabled to enforce authentication, especially write access (push)
  • Ensuring hosts can only connect to the registry over encrypted channels
  • All write access audited and read actions logged
  • Automated pruning of outdated and vulnerable images
  • Integrated automated scanning of images

How WWT can help

WWT security experts can sit down with you to evaluate your existing container security processes, infrastructure, applications and security requirements. Whether it is a greenfield or brownfield deployment, we'll work with you to architect container security solutions that meet your application and compliance security requirements.

Take a look at the last article in this series on container security: Securing Container Images and Builds with Red Hat OpenShift and Quay.

References

[1] https://csrc.nist.gov/publications/detail/sp/800-190/final

[2] https://blog.openshift.com/openshift-protects-against-nasty-container-exploit/

[3] https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html/container_security_guide/linux_capabilities_and_seccomp

[4] https://www.twistlock.com/platform/cloud-native-firewall/

[5] https://neuvector.com/network-security/next-generation-firewall-vs-container-firewall/

[6] https://istio.io/docs/concepts/what-is-istio/

[7] https://www.twistlock.com/container-security/

[8] https://sysdig.com/use-cases/continuous-security/