This blog post is part two of a four-part blog series where we discuss various OpenShift security best practices for

  • Designing secure clusters
  • Securing the network and cluster access (topic of this blog)
  • Building secure images (future blog)
  • Protecting workloads at runtime (future blog)

OpenShift Networking Best Practices for Security

The concept of zero-trust security has emerged to address the new security challenges of cloud-native architecture. These challenges include:

  • The sharing of cloud infrastructure among workloads with different levels of trust
  • Smaller microservices increasing complexity and enlarging the attack surface of applications

Microservice architectures create a more extensive network attack surface. To address this, administrators and developers must ensure that both external networks and internal software-defined networks are securely configured.

Secure Service Load Balancers

OpenShift, at a minimum, requires two load balancers: one for the control plane (the control plane API endpoints) and one for the data plane (the application routers). If a load balancer is created through a cloud provider, it will be Internet-facing and may have no firewall restrictions. Most on-premises deployments use appliance-based load balancers (such as F5 or Netscaler). Both types of load balancers need to be configured by the administrator.

If the load balancer needs to be Internet-facing but should not be open to all IP addresses, you can add the loadBalancerSourceRanges field to the Service specification to limit the IP address blocks allowed to connect. Verify that your load balancer supports this functionality; AWS, GCP, and Azure all support source IP blocks.
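
For illustration, the sketch below shows a minimal Service of type LoadBalancer that accepts connections only from one trusted CIDR block; the service name, labels, ports, and address range are placeholders:

    apiVersion: v1
    kind: Service
    metadata:
      name: example-router            # placeholder name
    spec:
      type: LoadBalancer
      selector:
        app: example-router           # placeholder label
      ports:
        - port: 443
          targetPort: 8443
      # Only clients in these CIDR blocks may reach the cloud load balancer
      loadBalancerSourceRanges:
        - 203.0.113.0/24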

Enable Network Policy

By default, network traffic in an OpenShift cluster is allowed between pods and can leave the cluster network altogether. Creating restrictions that allow only necessary service-to-service and cluster ingress and egress connections decreases the number of potential targets for malicious or misconfigured pods and limits their ability to exploit cluster resources.

The OpenShift Software Defined Network (OpenShift SDN) can control network traffic to and from the cluster’s pods by implementing the standard Kubernetes Network Policy API. Network Policies can control ingress traffic and can block or allow individual IP blocks. NetworkPolicy objects are additive, which means you can combine multiple NetworkPolicy objects to satisfy complex network requirements. Other Container Network Interface (CNI) implementations also allow egress rules to be set; OpenShift SDN does not currently support that functionality.
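
As an example of the ingress and IP block controls, a policy along the following lines (namespace, labels, and CIDR are placeholders) admits traffic to a set of pods only from one trusted address range:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-from-trusted-cidr   # placeholder name
      namespace: example-app          # placeholder namespace
    spec:
      podSelector:
        matchLabels:
          app: web                    # placeholder label
      policyTypes:
        - Ingress
      ingress:
        - from:
            - ipBlock:
                cidr: 203.0.113.0/24  # placeholder trusted range
          ports:
            - protocol: TCP
              port: 8080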

All supported versions of OpenShift come with Network Policy support enabled by default. However, without any policies in place, the cluster still accepts all pod traffic. Make sure to deny all traffic by default and create additive rules that limit pod traffic to only what is required. Test the policies to make sure they block unwanted traffic while allowing required traffic.
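
A common starting point is a per-namespace deny-all policy that selects every pod and allows no ingress; additional policies then permit only the required flows. A minimal sketch (the namespace is a placeholder):

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: default-deny-all
      namespace: example-app          # placeholder namespace
    spec:
      # An empty podSelector matches every pod in the namespace
      podSelector: {}
      # No ingress rules are listed, so all inbound pod traffic is denied
      policyTypes:
        - Ingress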

Check out our blog posts below to learn more about Kubernetes Network Policies:

Guide to Kubernetes Ingress Network Policies

Guide to Kubernetes Egress Network Policies

Master Authorized Networks

To protect against future vulnerabilities in the OpenShift API server and Kubernetes API server, limit network access to API endpoints to trusted IP addresses. Regardless of where OCP clusters are deployed, administrators need to create rules that restrict access to the cluster’s API endpoints.

Authentication and Authorization

Control Access to Cluster Resources

In addition to utilizing cloud provider IAM roles and authorization, the OCP control plane includes a built-in OAuth server. This server allows administrators to secure API access control via authentication and authorization regardless of the cluster’s deployment location. OAuth 2.0 is the industry-standard protocol for authorization; it works over HTTPS and authorizes devices, servers, and other clients with access tokens rather than credentials.

Administrators can configure OAuth to authenticate using an identity provider, such as LDAP, GitHub, or Google. Administrators can also obtain OAuth access tokens to authenticate themselves to the API. This feature can be enabled at cluster creation or afterward.
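
As one illustration, identity providers are configured through the cluster-scoped OAuth resource; the sketch below wires up a GitHub provider, where the client ID, the Secret holding the client secret, and the organization name are placeholders:

    apiVersion: config.openshift.io/v1
    kind: OAuth
    metadata:
      name: cluster
    spec:
      identityProviders:
        - name: github
          mappingMethod: claim
          type: GitHub
          github:
            clientID: <github-oauth-app-client-id>   # placeholder
            clientSecret:
              name: github-client-secret             # Secret in openshift-config
            organizations:
              - example-org                          # placeholder organization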

Manage API Access

Applications can have multiple, independent API services with different endpoints. Always aim to restrict access to endpoints and services and only grant the minimal access required. In addition to the OAuth server, OCP includes a containerized version of its 3scale API gateway. Red Hat describes this service succinctly:

“3scale gives you a variety of standard options for API authentication and security, which can be used alone or in combination to issue credentials and control access: standard API keys, application ID and key pair, and OAuth 2.0.”

Utilizing the 3scale API gateway allows for fine-grained control over authorization and exposure for the cluster’s API services. For example, quotas and throttling can help mitigate Denial-of-Service (DoS) attacks.

Rotate Cluster Certificates

Kubernetes and OpenShift clusters rely on several secure certificate chains and credentials for security. If sensitive keys or certificates are compromised, the integrity and safety of the entire cluster and its workloads may be placed at risk. Additionally, many security policies and compliance certifications require regular rotation of encryption keys and credentials.

OCP leverages REST-based HTTPS communication with encryption via TLS certificates. These certificates are configured during installation for the components that require HTTPS traffic:

  • API server and controllers
  • Etcd
  • Nodes
  • Registry
  • Router

OCP manages these certificates on behalf of administrators while still allowing finer control, such as the ability to rotate certificates manually.
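
As one example of manual rotation, the router’s serving certificate can be replaced by creating a new TLS secret in the openshift-ingress namespace and pointing the default IngressController at it; the secret name below is a placeholder:

    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      name: default
      namespace: openshift-ingress-operator
    spec:
      defaultCertificate:
        # TLS secret in the openshift-ingress namespace holding the new key pair
        name: custom-router-cert      # placeholder secret name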

Remove the kubeadmin Account

By default, a service account token is mounted into every pod in an OCP cluster, allowing containers to send requests to the Kubernetes API server. An attacker who gains access to a pod can obtain the corresponding service account token. With RBAC enabled by default in an OCP cluster, a service account’s privileges are determined by its role bindings. If these grant elevated privileges, an attacker could send requests to the Kubernetes API server to compromise cluster resources.

Organizations can mitigate this threat vector by configuring Kubernetes RBAC and adopting a least-privilege model for service accounts and their role bindings. A core example of this model is removing the default kubeadmin user; OpenShift itself recommends removing it to improve cluster security.
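
As a complementary least-privilege measure, pods that never need to call the Kubernetes API can opt out of token mounting entirely; a minimal sketch with placeholder names and image:

    apiVersion: v1
    kind: Pod
    metadata:
      name: example-app               # placeholder name
      namespace: example-app          # placeholder namespace
    spec:
      # Do not mount the service account token into this pod's containers
      automountServiceAccountToken: false
      containers:
        - name: app
          image: registry.example.com/example-app:latest   # placeholder image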