Container and Kubernetes compliance considerations

If you’re running containers, you’ve likely thought about the potential security risks. Whether you’re just adopting a DevOps approach to your workflows or you have a well-established CI/CD pipeline, you need to protect your sensitive data.

Role-Based Access Control (RBAC) provides the standard method for managing authorization for the Kubernetes API endpoints. Your Kubernetes cluster’s RBAC configuration controls which subjects can execute which verbs on which resource types in which namespaces, but RBAC doesn’t prescribe how you should configure your roles. That’s where compliance frameworks come in.
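For example, a least-privilege RBAC configuration might grant a subject read-only access to Pods in a single namespace. A minimal sketch (the namespace, role, and user names here are hypothetical):

```yaml
# Role: read-only access to Pods in the "payments" namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: payments
  name: pod-reader
rules:
- apiGroups: [""]          # "" is the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
# RoleBinding: grant the Role to a single user, in that namespace only.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: payments
  name: read-pods
subjects:
- kind: User
  name: auditor
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Because the binding is namespaced, the same user has no access outside the payments namespace unless another binding grants it.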

The National Institute of Standards and Technology Special Publication 800-190 (NIST 800-190) offers a framework for understanding the specific challenges of securing containerized applications, as well as what organizations need to do to improve the security profile of those applications.

NIST guidelines are targeted toward U.S. government agencies and government contractors, but any company can benefit from following NIST 800-190, both to improve overall security and because doing so can make it easier to meet other compliance frameworks such as PCI DSS and HIPAA.


The risks

NIST 800-190 highlights common sources of security vulnerabilities in containerized applications, including:

  • Compromised images

  • Misconfigurations in container images

  • Untrusted container images

  • Poorly managed secrets

  • Misconfigured access controls

  • OS vulnerabilities

  • Unnecessarily large attack surfaces

Just as importantly, NIST 800-190 stresses that organizations must approach security for containerized applications differently than they did for traditional applications. Containerized applications have different risk factors than virtual machines and require a different set of security practices.

NIST 800-190 requires organizations to:

  • Use purpose-built tools to manage image vulnerabilities throughout the entire image lifecycle, from build through deploy and runtime.

  • Ensure that images comply with configuration best practices.

  • Protect secrets by storing them outside the image, using Kubernetes to manage them, restricting access to only those containers that need them, and encrypting them at rest and in transit.

  • Use a secure connection when pushing or pulling from the registry.

  • Ensure that the container always uses the latest image version.

  • Segment network traffic, at the very least to isolate sensitive from non-sensitive networks.

  • Use Kubernetes to securely introduce nodes and keep an inventory of nodes and their connectivity states.

  • Control outbound traffic from containers.

  • Ensure continual compliance with container runtime configuration standards such as the CIS benchmarks.

  • Use security controls to detect threats and potential intrusions at the container and infrastructure level.

  • Use a hardened, container-specific operating system with an attack surface that is as small as possible.

  • Prevent host file system tampering by ensuring containers have as few permissions as possible to function as designed.
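Several of these requirements converge in the pod specification itself. The sketch below (the image, secret, and pod names are hypothetical) keeps secrets out of the image, pins a specific image version, and drops the permissions the container doesn't need:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: payment-api
spec:
  automountServiceAccountToken: false      # don't expose API credentials the app never uses
  containers:
  - name: app
    image: registry.example.com/payment-api:1.4.2   # pin a specific, current version, not a mutable tag
    securityContext:
      runAsNonRoot: true
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true         # helps prevent filesystem tampering
      capabilities:
        drop: ["ALL"]                      # as few permissions as possible
    volumeMounts:
    - name: db-creds
      mountPath: /etc/secrets
      readOnly: true
  volumes:
  - name: db-creds
    secret:
      secretName: db-credentials           # secret stored outside the image, managed by Kubernetes
```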

Even organizations that don’t need to comply with the NIST 800-190 requirements should consider them a useful framework for improving their security posture. They ensure organizations are thinking about security throughout the build, deploy, and runtime phases, addressing the unique security requirements of each stage.

PCI DSS

The Payment Card Industry Data Security Standard (PCI DSS) was created in 2004 by Visa, MasterCard, American Express, Discover, and JCB to establish an industry-wide standard for security and data protection. The standards have been updated many times since they were first released to keep up with changes in technology. They apply to everything in the cardholder data environment: the people, processes, and technologies that store, process, or transmit cardholder data. In terms of technology, this includes both hardware and software.

Complying with PCI requirements isn’t easy, and costs an average of $5.5 million annually for companies. However, non-compliance is much more expensive, with an average annual cost from penalties of $14.8 million. With the right processes and tools in place, PCI compliance doesn’t have to be a major challenge.

PCI DSS has 12 requirements that are mapped to 6 more general goals. They are:

Build and maintain a secure network system

  1. Install and maintain a firewall configuration to protect cardholder data

  2. Do not use vendor-supplied defaults for system passwords and other security parameters

Protect cardholder data

  1. Protect stored cardholder data

  2. Encrypt transmission of cardholder data across open, public networks

Ensure the maintenance of vulnerability management programs

  1. Protect all systems against malware and exploits, and regularly update anti-virus software

  2. Develop and maintain secure systems and applications

Implement strong access control measures

  1. Restrict access to cardholder data by business need-to-know

  2. Identify and authenticate access to system components

  3. Restrict physical access to cardholder data

Regularly monitor and test networks

  1. Track and monitor all access to network resources and cardholder data

  2. Regularly test security systems and processes

Ensure the maintenance of information security policies

  1. Maintain a policy that addresses information security for all personnel

PCI compliance for containerized applications

Several requirements under the six goals outlined by PCI DSS are directly relevant to container and Kubernetes environments. Evaluate your container and Kubernetes security tooling to ensure it can address the following:

1.1.2 Current network diagram that identifies all connections between the cardholder data environment (CDE) and other networks, including any wireless networks

1.1.4 Requirements for a firewall at each internet connection and between any demilitarized zone (DMZ) and the internal network zone.

1.2 Build firewall and router configurations that restrict connections between untrusted networks and any system components in the cardholder data environment.

1.2.1 Restrict inbound and outbound traffic to that which is necessary for the cardholder data environment and specifically deny all other traffic.

1.3.2 Limit inbound Internet traffic to IP addresses within the DMZ.

1.3.4 Do not allow unauthorized outbound traffic from the cardholder data environment to the internet.

1.3.5 Permit only "established" connections into the network.
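Requirements 1.2.1 through 1.3.5 map naturally onto Kubernetes NetworkPolicy objects, which deny any traffic not explicitly allowed once a policy selects a pod. A sketch for a CDE namespace (the namespace, label, and port values are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: cde-restrict
  namespace: cardholder-data
spec:
  podSelector: {}                  # applies to every pod in the namespace
  policyTypes: ["Ingress", "Egress"]
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          zone: dmz                # only the DMZ tier may connect inbound
    ports:
    - protocol: TCP
      port: 8443
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          zone: internal           # all other outbound traffic is denied by omission
```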

2.1 Always change vendor-supplied defaults and remove or disable unnecessary default accounts before installing a system on the network.

2.2 Develop configuration standards for all system components. Assure that these standards address all known security vulnerabilities and are consistent with industry-accepted system hardening standards.

2.2.1 Implement only one primary function per server to prevent functions that require different security levels from coexisting on the same server. (For example, web servers, database servers, and DNS should be implemented on separate servers.)

2.2.2 Enable only necessary services, protocols, daemons, etc., as required for the function of the system.

2.2.3 Implement additional security features for any required services, protocols, or daemons that are considered to be insecure.

2.2.5 Remove all unnecessary functionality, such as scripts, drivers, features, subsystems, file systems, and unnecessary web servers.

2.3 Encrypt all non-console administrative access using strong cryptography.

2.4 Maintain an inventory of system components that are in scope for PCI DSS.

3.6.2 Secure cryptographic key distribution.

6.1 Establish a process to identify security vulnerabilities, using reputable outside sources for security vulnerability information, and assign a risk ranking (for example, as "high," "medium," or "low") to newly discovered security vulnerabilities.

6.2 Ensure that all system components and software are protected from known vulnerabilities by installing applicable vendor-supplied security patches. Install critical security patches within one month of release.

6.4.1 Separate development/test environments from production environments and enforce the separation with access controls.

6.4.2 Separation of duties between development/test and production environments.

6.5.1 Injection flaws, particularly SQL injection. Also consider OS Command Injection, LDAP and XPath injection flaws as well as other injection flaws.

6.5.3 Insecure cryptographic storage.

6.5.4 Insecure communications.

6.5.6 All "high risk" vulnerabilities identified in the vulnerability identification process (as defined in PCI DSS Requirement 6.1).

10.2.5 Implement automated audit trails for all system components to reconstruct use of and changes to identification and authentication mechanisms—including but not limited to creation of new accounts and elevation of privileges—and all changes, additions, or deletions to accounts with root or administrative privileges.
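In Kubernetes, requirement 10.2.5 can be supported with API server audit logging. A sketch of an audit policy that records changes to RBAC objects in full and access to Secrets at the metadata level (the policy file is passed to the API server via --audit-policy-file):

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# Record full request and response bodies for changes to RBAC objects
- level: RequestResponse
  verbs: ["create", "update", "patch", "delete"]
  resources:
  - group: "rbac.authorization.k8s.io"
    resources: ["roles", "rolebindings", "clusterroles", "clusterrolebindings"]
# Record who touched Secrets, without logging the secret payloads themselves
- level: Metadata
  resources:
  - group: ""
    resources: ["secrets"]
```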

11.2.1 Perform quarterly internal vulnerability scans. Address vulnerabilities and perform rescans to verify that all "high risk" vulnerabilities are resolved in accordance with the entity’s vulnerability ranking (per Requirement 6.1). Scans must be performed by qualified personnel.

11.5 Deploy a change-detection mechanism (for example, file-integrity monitoring tools) to alert personnel to unauthorized modification (including changes, additions, and deletions) of critical system files, configuration files, or content files; and configure the software to perform critical file comparisons at least weekly.

11.5.1 Implement a process to respond to any alerts generated by the change-detection solution.

HIPAA

The Health Insurance Portability and Accountability Act of 1996 (HIPAA) created the compliance framework that governs patient privacy for any and all health records. The Security Rule, added in 2003, governs digital health records. Any organization that handles individually identifiable electronic protected health information (ePHI) has to comply with HIPAA requirements. This includes applications used directly by healthcare providers for care, communications, or billing.

The primary challenge for HIPAA compliance is that the security framework provides only high-level guidance rather than specifics on how organizations should meet those guidelines in containers and Kubernetes. In addition, the difference between what is and what is not protected health information is often less obvious than, for example, what is and is not credit card information that must be protected under PCI compliance.

In addition to healthcare providers themselves, any organization that provides services such as storage or billing to healthcare providers must meet HIPAA requirements if those services involve handling ePHI.

The HIPAA Security Rule standards are broken into administrative, physical, and technical safeguards. The technical safeguards, which relate to the IT infrastructure, include the following standards:

  • Access control

  • Audit controls

  • Integrity

  • Authentication

  • Transmission security

The HIPAA Security Rule doesn’t provide specifics on how organizations should secure ePHI, and it is not specific to containerized applications. Often, the best place to start working toward HIPAA compliance is the NIST SP 800-190 framework, which provides guidelines and best practices for container security. Unlike HIPAA, NIST SP 800-190 offers a framework that is specific to containers, which makes compliance easier to demonstrate. However, meeting HIPAA requirements also involves implementing additional data segregation controls to protect ePHI and keep it separate from other types of data.
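One concrete control that supports protecting sensitive data under both frameworks is encrypting Kubernetes Secrets at rest in etcd, via an EncryptionConfiguration file passed to the API server with --encryption-provider-config. A sketch (the key material shown is a placeholder, not a real key):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources: ["secrets"]
  providers:
  - aescbc:                                     # AES-CBC encryption for Secret data
      keys:
      - name: key1
        secret: <base64-encoded 32-byte key>    # placeholder
  - identity: {}                                # fallback so pre-existing unencrypted data stays readable
```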

HIPAA also requires that organizations keep backups of not just data but also configuration files, so that the application can be fully recovered to demonstrate continual compliance.
