Red Hat’s OpenShift Container Platform (OCP) is a Kubernetes platform for operationalizing container workloads on your own infrastructure or as a hosted service. OpenShift enables consistent security, built-in monitoring, centralized policy management, and compatibility with Kubernetes workloads. The rapid adoption of open source projects can introduce vulnerabilities in standard Kubernetes environments. OCP supports these projects internally, allowing users to gain open source advantages with a managed product’s stability and security. OpenShift offerings include five managed and two hosted options.

This blog post is part one of a four-part OpenShift security blog series focusing on the Red Hat OpenShift Container Platform (OCP) version 4.5, which is designed to be self-managed within your infrastructure environment and supports a variety of deployment options.

OpenShift Architecture

OpenShift is built on top of Kubernetes, and while Kubernetes provides container orchestration capabilities, pod resiliency, service definitions, and deployment constructs, many other components are required to make it work. For example, Kubernetes does not provide a default Container Network Interface (CNI) implementation or a default monitoring stack. It is up to the cluster administrator to bring additional tools to operate and manage the Kubernetes cluster and any applications running on it. For security teams, this presents new challenges: for example, these teams need to create new policies and vet images, configurations, and account access for any new applications that will be deployed into the cluster.

These additional, necessary operational capabilities are provided out of the box with OCP and are pluggable so that administrators can customize components and services to meet their infrastructure needs.

OCP’s architecture requires three different types of nodes within each cluster to ensure highly available deployments.

Control Plane Nodes

These nodes run the core Kubernetes control plane functions and provide additional services such as a self-service web console and developer- and operations-focused dashboards.

In most cloud environments, the control plane nodes are hidden from end users and managed by the provider for high availability, regular upgrades, and security updates. With OCP, administrators manage, view, and interact with the control plane nodes directly, which means they will need to set up their clusters for high availability and adequate security themselves. To follow industry-standard best practices, configure a minimum of three control plane nodes so that the control plane remains accessible even if a node goes down.
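The control plane replica count is set in the install-config.yaml consumed by the OpenShift installer. The fragment below is a minimal sketch; the domain, cluster name, platform, and region are illustrative assumptions rather than a drop-in configuration:

    apiVersion: v1
    baseDomain: example.com          # hypothetical base domain
    metadata:
      name: secure-cluster           # hypothetical cluster name
    controlPlane:
      name: master
      replicas: 3                    # three control plane nodes for high availability
    compute:
      - name: worker
        replicas: 3                  # app node count; size this for your workloads
    platform:
      aws:
        region: us-east-1            # example provider and region
    pullSecret: '...'                # pull secret from your Red Hat account
    sshKey: '...'                    # public key used for node access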

Infrastructure Nodes

These are nodes dedicated to hosting additional functionality such as OpenShift Routes and the OpenShift internal registry. Infrastructure nodes host administrator and network-focused services that are managed separately from your containerized applications.
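As one illustration, the default Ingress Controller (which backs OpenShift Routes) can be pinned to infrastructure nodes with a node selector. The sketch below assumes your infrastructure nodes carry the conventional node-role.kubernetes.io/infra label:

    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      name: default
      namespace: openshift-ingress-operator
    spec:
      nodePlacement:
        nodeSelector:
          matchLabels:
            node-role.kubernetes.io/infra: ""   # schedule router pods onto infra nodes only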

App Nodes or Nodes

These are the OCP nodes that run your containerized applications. They are similar to Kubernetes worker nodes and also run various monitoring and networking services required across a cluster.

Cloud IAM, Accounts and Limits

When using a cloud provider, you will want to enforce tight control of individual clusters and other cloud resources sharing a project. Limit access to resources by applying the principle of least privilege. Understand each provider’s account roles and limitations before setting up access to any OCP clusters.

OCP eases this process by providing in-depth installation documentation, including guides for AWS, Azure, GCP, and IBM Z.

Private Clusters

Strict network isolation, which prevents unauthorized external ingress to the OpenShift cluster’s API endpoints, nodes, and pod containers, is a critical piece of cluster security. By default, the OpenShift Container Platform is provisioned with publicly accessible DNS, API endpoints, and node IP addresses. The DNS, Ingress Controller, and API server can be set to private after installing the cluster. Additionally, OpenShift may expose operations-focused dashboards for admins and developers; ideally, these dashboards will run on infrastructure nodes, away from your high-priority workloads.

The private cluster options vary based on the infrastructure environment. However, there are in-depth guides for setting up a private cluster through various providers. OpenShift outlines the installation methods and network setup options that are currently supported here.
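For provider-based installations, a private cluster is typically requested at install time. The fragment below sketches the relevant install-config.yaml settings for an AWS deployment into existing private subnets; the subnet IDs are placeholders, and the exact fields vary by provider:

    publish: Internal                # keep DNS, Ingress, and the API server off the public internet
    platform:
      aws:
        region: us-east-1            # example region
        subnets:                     # pre-existing private subnets the cluster should use
          - subnet-0aaaaaaaaaaaaaaaa # placeholder IDs
          - subnet-0bbbbbbbbbbbbbbbb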

After creating your private cluster, you may need to perform extra configuration steps to ensure your cluster’s components are configured correctly. Also, cluster upgrades may require Internet access and extra considerations.

Setting up a Bastion Host

A bastion host provides controlled access to a private network from an external network and is a simple way to add an extra layer of security to your OpenShift cluster. By funneling administrative access through a single, tightly controlled entry point, a bastion host minimizes the chances of unauthorized access to your OCP cluster. Benefits of a bastion host include:

  • Separate login accounts for everyone accessing the bastion host
  • Auditing of user access and time
  • Specific node access

A bastion host is a useful way to augment your cluster’s security. Restricting access to specific nodes through the bastion’s .ssh/config allows for private network access while keeping users from tampering with nodes deemed off-limits, as shown below.
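A minimal sketch of such a configuration on the bastion host follows; the host names, IP addresses, and key path are hypothetical and would be replaced with your cluster’s values:

    # ~/.ssh/config on the bastion host (illustrative values)
    Host worker-1
        HostName 10.0.32.11          # private IP of an approved app node
        User core                    # default user on RHCOS nodes
        IdentityFile ~/.ssh/ocp_node_key

    Host worker-2
        HostName 10.0.32.12
        User core
        IdentityFile ~/.ssh/ocp_node_key

    # No entries exist for control plane nodes, so they cannot be reached
    # through this configuration.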

Note: When using a cloud provider for deployment, utilize software-defined networks that are available. The proper implementation of cloud IAM accounts, firewall rules, and private networking will significantly reduce the attack surface.

VPC Networks

When deploying your OpenShift cluster, you will want to take advantage of the various cloud providers’ built-in networking and security protections. This will vary depending on the environment; however, there are defaults and best practices to keep in mind during setup.

  1. Create a single VPC network for each cluster and allow access accordingly.
  2. Set up firewall rules that allow only the required ports (a sketch follows this list). Provider-specific guides include:
    1. AWS restricted network setup
    2. Azure private network setup
    3. GCP restricted network setup
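As a rough sketch of the second point, the CloudFormation fragment below defines an AWS security group that admits API traffic only from an internal CIDR range; the resource names and network ranges are assumptions for illustration:

    Resources:
      OpenShiftApiSecurityGroup:
        Type: AWS::EC2::SecurityGroup
        Properties:
          GroupDescription: Allow OpenShift API access from the internal network only
          VpcId: !Ref ClusterVpc            # hypothetical VPC resource
          SecurityGroupIngress:
            - IpProtocol: tcp
              FromPort: 6443                # Kubernetes/OpenShift API server
              ToPort: 6443
              CidrIp: 10.0.0.0/16           # example internal CIDR; avoid 0.0.0.0/0
            - IpProtocol: tcp
              FromPort: 22623               # machine config server, cluster-internal only
              ToPort: 22623
              CidrIp: 10.0.0.0/16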

Securing etcd

By default, data stored in etcd is not encrypted at rest in the OpenShift Container Platform. Etcd encryption can be enabled in the cluster to provide an additional layer of data security and to help protect against the loss of sensitive data if an etcd backup is exposed to the wrong parties. Since OpenShift recommends taking an etcd backup during any upgrade, encrypting etcd should be a standard practice in your organization.
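Enabling encryption amounts to updating the cluster’s APIServer configuration (for example, with oc edit apiserver) to use the aescbc encryption type; a minimal sketch follows, and the rollout takes some time to complete once applied:

    apiVersion: config.openshift.io/v1
    kind: APIServer
    metadata:
      name: cluster
    spec:
      encryption:
        type: aescbc                 # AES-CBC encryption for resources stored in etcd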

When you enable etcd encryption, the following server resources are encrypted:

  • Secrets
  • ConfigMaps
  • Routes
  • OAuth access tokens
  • OAuth authorize tokens

When etcd encryption is enabled, encryption keys are created. These keys are rotated every week, and the admin must have these keys to restore from an etcd backup.

Node Images

Compromised nodes pose a danger to your entire cluster and its workloads. Using minimal base operating system (OS) images and configuring read-only file systems provides two critical ways to protect your nodes against many attacks and to limit the potential blast radius. With minimal images, attackers have limited tools to leverage, and if they cannot write or overwrite configuration files and binaries on the node’s root file system, they cannot hijack the system as easily or install their own malicious tools.

Providers are increasingly making available minimal, container-optimized OS images such as AWS Bottlerocket and GCP’s Container-Optimized OS (COS). However, it is best to leverage OpenShift’s relationship with the cloud providers and use the most recent Red Hat Enterprise Linux CoreOS (RHCOS) for all of your OCP cluster’s nodes. RHCOS is the default operating system for all cluster machines; however, you can create worker machines that use RHEL as their operating system.

RHCOS is designed to be as immutable as possible, allowing only a few system settings to be changed. These settings are configured remotely with the help of an operator built into OpenShift, which means no user needs to access a node directly, and any change to a node must be authorized through the Machine Config Operator.
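As an illustrative sketch, a node-level setting is expressed as a MachineConfig object that the Machine Config Operator rolls out to a node pool; the name and kernel argument below are assumptions, not a recommendation:

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      name: 99-worker-enable-audit                        # hypothetical name; the numeric prefix controls ordering
      labels:
        machineconfiguration.openshift.io/role: worker    # target the worker node pool
    spec:
      kernelArguments:
        - audit=1                                         # example setting: enable the kernel audit subsystem

The operator drains and reboots nodes in the pool one at a time to apply the change, so no direct SSH access to a node is required.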

CRI-O

RHCOS also uses CRI-O as its default container runtime. CRI-O focuses only on the features needed by Kubernetes platforms, giving it a smaller footprint and a reduced attack surface compared with container engines that include a superset of functionality beyond Kubernetes-centric features. Since OCP is based on Kubernetes, it benefits from these properties as well. By not including extra features for direct command-line use or other orchestration facilities, CRI-O keeps its footprint small and reduces its potential vulnerabilities.