
Kubernetes Security 101: Top challenges, risks, best practices

Kubernetes is by far the most widely used container orchestrator in the market, and Kubernetes adoption – especially in production environments – is taking off. According to Gartner, “by 2022, more than 75% of global organizations will be running containerized applications in production.” The explosion in Kubernetes adoption hasn’t been without its share of security concerns. Earlier this year, the runC vulnerability, which allows an attacker to gain host-level code execution by breaking out of a running container, was discovered. Within a couple of months, the Kubernetes API denial-of-service (DoS) vulnerability was uncovered, followed closely by a pair of high- and medium-severity vulnerabilities.

Given that security concerns remain one of the leading constraints for using containers and Kubernetes, organizations can’t afford to treat security as an afterthought if they want to unlock the benefits of cloud-native technologies while maintaining strong security of their critical application development infrastructure. In this article, we will take a deep dive into different areas of Kubernetes security and provide practical recommendations to help you build a resilient cloud-native infrastructure. We will discuss:

  • What are the key components of Kubernetes?
  • How does Kubernetes change the security paradigm?
  • What are the most important security features already built into Kubernetes, and how do you leverage them?
  • Which 12 security questions should your DevOps and security teams be able to answer about your container and Kubernetes environment?

What is Kubernetes?

Originally developed by Google and now maintained by the Cloud Native Computing Foundation, Kubernetes is a portable, extensible, and open-source platform to manage containerized workloads. Kubernetes leverages declarative data, automation, and configuration settings to orchestrate containers, with functionality including service discovery and load balancing, storage management, CPU and memory (RAM) management, and secrets management.


Basic components of Kubernetes

Kubernetes architecture is complex and beyond the scope of this article. At a high level, Kubernetes includes the following basic components (you can find a more detailed write-up on k8s design and architecture here):

Nodes

Nodes are the basic unit of computing in Kubernetes, also known as worker machines. A node can be a physical machine or a VM, and it contains the services necessary to run pods, including the container runtime, the kubelet, and kube-proxy. Each worker node, and the pods deployed on it, is managed by the master node.

Clusters

Within Kubernetes, a grouping of nodes whose resources are pooled together is known as a cluster. The cluster will utilize the resources from the nodes to distribute and run program workloads as necessary.

Persistent volumes

Storage on a node is ephemeral, so Kubernetes provides what are called Persistent Volumes, which behave like an external hard drive mounted not to any individual node but to the cluster as a whole.

Containers

Containers are self-contained Linux execution environments that run on Kubernetes; they bundle all the dependencies needed for execution and are highly portable. Although multiple processes can run inside a single container, it’s a best practice to limit the number of processes per container to keep updating and troubleshooting simple and to make anomaly detection more accurate.

Pods

Kubernetes doesn’t run containers directly but instead packages containers into groups known as pods, the atomic unit on the Kubernetes platform. Containers within a pod share the same resources and local network and can readily communicate with other containers within the same pod. Pods are replicable and Kubernetes can create copies of the pod as needed to scale capacity up and down.

Deployments

Deployments are a higher-level abstraction for managing a replicated set of pods on a cluster. Essentially, a deployment is where you declare the desired configuration of the application you wish to run.
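As an illustration, a minimal Deployment manifest might look like the following sketch (the application name and image are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend          # hypothetical application name
spec:
  replicas: 3                 # Kubernetes keeps three pod copies running
  selector:
    matchLabels:
      app: web-frontend
  template:                   # pod template: what each replica runs
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
      - name: web
        image: example.registry.io/web-frontend:1.2.3  # hypothetical image
        ports:
        - containerPort: 8080
```

Applying this manifest with kubectl tells Kubernetes the desired state; the platform then creates or replaces pods as needed to maintain three running replicas.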

Kubernetes includes many other components, including networking, admission controllers, RBAC, and other features. Some of these elements will be covered in the following sections related to Kubernetes security.

Next we’ll look at how containers and Kubernetes have changed the security paradigm and demand a new approach to security.

New security considerations and risks for the cloud-native stack

While the advent of containers and Kubernetes has revolutionized cloud-native application development and deployment by enabling faster delivery, superior portability, and a more inherently secure infrastructure – because of their immutable and declarative nature – the security mission in Kubernetes environments remains the same: keep the bad guys out and find and stop them if they do break in.

Containers and Kubernetes introduce several new security considerations that make that security mission more challenging.

Containers are numerous and everywhere - although the microservices design pattern that powers containerized applications underlies many of the benefits of containerization, it also creates security blind spots and increases your attack surface. As more containers are deployed, maintaining adequate visibility into your critical cloud-native infrastructure components becomes more difficult. The distributed nature of containerized apps makes it difficult to quickly investigate which containers might have a newly discovered zero-day vulnerability, which ones are running as privileged, or other factors.

Images and image registries, when misused, can pose security issues - organizations need a strong governance policy around using trusted image registries. Ensuring that only images from whitelisted image registries are pulled into your environment can be challenging, but it must be part of any container and Kubernetes security strategy, along with more advanced security best practices such as scanning images for vulnerabilities frequently and rejecting any image not scanned within the last X days.

Containers talk to each other and to other endpoints - containers and pods will need to talk to each other within the deployment as well as to other endpoints to properly function. As a result, you must monitor both north-south and east-west traffic. If a container is breached, the attack surface is directly related to how broadly it can communicate with other containers and pods. In a sprawling container environment, implementing network segmentation can be prohibitively difficult given the complexity of configuring such policies in YAML files.

Kubernetes offers rich configuration options, but default settings are usually the least secure - in keeping with DevOps principles, Kubernetes is designed to speed application development, not to isolate its components. Kubernetes network policies, for example, behave like firewall rules that control how pods communicate with each other and with other endpoints. When a network policy is associated with a pod, that pod is allowed to communicate only with the assets defined in that network policy. By default, however, every asset can talk to every other asset in a Kubernetes environment. Another configuration risk relates to how secrets such as cryptographic keys are stored and accessed, a discipline called secrets management. You must ensure that secrets are not being loaded as environment variables but are instead mounted into read-only volumes in your containers, for example.
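The secrets-handling point above can be sketched as a pod spec that mounts a Secret into a read-only volume rather than exposing it through environment variables (the pod, image, and Secret names are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
  - name: app
    image: example.registry.io/app:1.0   # hypothetical image
    volumeMounts:
    - name: api-credentials
      mountPath: /etc/secrets
      readOnly: true                     # secret exposed as read-only files, not env vars
  volumes:
  - name: api-credentials
    secret:
      secretName: api-credentials        # hypothetical Secret object
```

Environment variables can leak through logs, crash dumps, or child processes, which is why the read-only volume mount is generally preferred.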

Containers pose compliance challenges - while the continuous integration and continuous delivery (CI/CD) model enabled by containers and Kubernetes is a core benefit of the cloud-native stack, this model also introduces challenges in complying with internal policies, best practices, and external policy frameworks. Beyond remaining compliant, organizations also must show proof of that compliance. Many of the traditional components that helped demonstrate compliance, such as firewall rules, take a very different form in container and Kubernetes environments. Also, the distributed and dynamic nature of containerized application environments means compliance assurance and audit must be fully automated to scale.

Containers create both familiar and new runtime security challenges - one of the security advantages of containers and Kubernetes is their immutable design – what’s running should never be patched or changed but rather destroyed and recreated from a common template when new updates are needed. Other innate properties of containerization pose unique challenges, including the difficulty in monitoring containers due to their ephemeral nature. And when a potential threat is detected in a running container, such as an active breach or vulnerability, you must be able to not only kill that container and replace it with a non-compromised version but also integrate that information into your CI/CD pipeline to inform future build and deploy cycles. Other runtime security risks include a compromised container running malicious processes. Although crypto mining has gained popular attention thanks to the infamous Tesla hack, other malicious processes can also be executed from a compromised container, such as network port scanning to look for open paths to attractive resources.

Understanding Kubernetes’ built-in security capabilities

As an open-source project, Kubernetes is enriched by the combined brain-power of thousands of developers, researchers, and security professionals. As such, Kubernetes has enjoyed rapid enhancement of its built-in security features. We’ll highlight some of the most important Kubernetes security capabilities and provide recommendations around best practices.

Role-Based Access Control (RBAC)

Kubernetes RBAC configuration is a critical control for the security of containerized workloads and was preceded by the older – and ultimately not recommended – Attribute-Based Access Control (ABAC). Kubernetes RBAC regulates access to compute and network resources based on the roles of individual users within an organization.

RBAC supports two types of roles: “Role” grants access to resources within a single namespace, such as pods, while “ClusterRole” grants permissions to resources across all namespaces as well as to cluster-scoped resources such as nodes and non-resource endpoints.

RoleBindings and ClusterRoleBindings are used to grant the permissions defined by Roles and ClusterRoles to users, sets of users or groups, and service accounts, also known as subjects.
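Putting these pieces together, a namespaced Role plus a RoleBinding might look like the following sketch (the namespace and user name are hypothetical):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: staging          # Roles are namespace-scoped
  name: pod-reader
rules:
- apiGroups: [""]             # "" is the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]   # read-only access, nothing more
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: staging
subjects:
- kind: User
  name: jane                  # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

The subject here receives only read access to pods in the staging namespace – a far narrower grant than cluster-admin, in keeping with the least-privilege recommendations that follow.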

The following RBAC best practices can help you improve security in your Kubernetes environment.

Assign Roles and ClusterRoles to specific users or groups of users - one of the first steps you should take is to ensure you’re not granting unnecessary permissions to subjects, even if it takes some time to think through the minimum permissions required. While it might be tempting to grant cluster-admin privileges widely, since doing so saves time upfront, any account compromise or mistake can have damaging consequences. As a best practice, avoid granting cluster-admin privileges to any subject, including service accounts, if possible.

Avoid duplication of permissions - while it might sometimes be useful to create overlap amongst Roles, doing so can create operational issues as well as blind spots when removing permissions. If multiple RoleBindings give the same privileges, then administrators will need to remove or update all of those RoleBindings to revoke access.

Remove unused or inactive roles - you should perform frequent and ongoing house cleaning for your Roles to stay on top of your RBAC management. Removing unused or inactive roles is typically safe and will allow you to focus attention on the active roles when troubleshooting or investigating security incidents.

Kubernetes Network Policies

One of the Kubernetes configurations that demands the most attention is Network Policies. These policies enable network segmentation and provide the Kubernetes equivalent of firewall rules, controlling both pod-to-pod and pod-to-external-endpoint communications.

Network policies include a field called podSelector, which determines which pods are affected by the policy. A pod that is associated with a policy can communicate only in the ways allowed by that policy. If no policy is associated with a pod, however, all network communication to and from that pod is allowed.

Adopt the following Kubernetes Network Policy best practices to secure your environment.

Isolate your pods - you should apply at least one network policy to every pod to ensure they’re isolated. Many times, eager developers looking to get a cluster up and running fast will forego this step, which will expose the related application to both lateral and north-south threats. One way to prevent this threat is to apply a “deny all” policy to all pods as a default first step.
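A “deny all” default can be sketched as a policy whose empty podSelector matches every pod in a namespace (the namespace name is hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production       # hypothetical namespace
spec:
  podSelector: {}             # empty selector matches every pod in the namespace
  policyTypes:
  - Ingress
  - Egress                    # listing both types with no rules denies all traffic
```

Because the policy lists both Ingress and Egress but defines no allow rules, every pod in the namespace becomes isolated until more specific policies open the paths it needs.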

Explicitly allow Internet access for pods that need it by using labels - upon creating a “deny all” policy, which will isolate all pods by default, you will typically need to allow some assets to communicate with the Internet for your application to operate. In these cases, you can create labels, associate the labels with the pods that require Internet reachability, and create a network policy that targets those labels.
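A label-based egress allowance might look like the following sketch, assuming pods that need Internet access carry a hypothetical internet-access label:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-internet-egress
  namespace: production         # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      internet-access: "true"   # hypothetical label on pods needing egress
  policyTypes:
  - Egress
  egress:
  - {}                          # empty rule allows all outbound traffic for selected pods
```

Layered on top of a default deny-all policy, this re-opens outbound connectivity only for the explicitly labeled pods.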

Explicitly allow necessary pod-to-pod communication - if you don’t know which pods need to talk to each other, you can limit security risk by allowing just those pods within the same namespace to freely communicate with each other. To get more information on securing pod-to-pod communications, see this blog post, authored by our own Kubernetes Networking expert, Viswa Venugopal.

Admission Controllers

One of the recent security features added to Kubernetes is a set of plugins called admission controllers. You can learn more about admission controllers in this in-depth blog authored by our own Kubernetes expert, Malte Isberner. Some of the use cases that can be addressed by admission controllers include:

Prevent risky configuration with PodSecurityPolicy - this admission controller is arguably the most important one from a security perspective. It defines a set of conditions a pod must run with to be accepted in the system. Pod security policies can be used to prevent containers from running as root or to make sure the container’s root filesystem is mounted read-only.
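Those two conditions can be sketched in a PodSecurityPolicy like the following (the policy name is hypothetical; the seLinux, supplementalGroups, and fsGroup fields are included because the PSP spec requires them):

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted              # hypothetical policy name
spec:
  privileged: false             # disallow privileged containers
  runAsUser:
    rule: MustRunAsNonRoot      # reject containers that run as root
  readOnlyRootFilesystem: true  # require a read-only root filesystem
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - configMap
  - secret
  - emptyDir
```

Any pod that fails these conditions is rejected at admission time, before it ever runs on a node.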

Enforce image registry governance - admission controllers can be used to allow images to be pulled from trusted registries while denying all untrusted image registries.

Ensure adherence to good DevOps practices - admission controllers can be used to enforce internal policies for use of labels on various objects or consistently adding annotations to objects.

Final thoughts – ensure you can answer these 12 questions about your container and Kubernetes environment

The rise of Kubernetes has been dramatic as organizations have rapidly embraced its power for simplifying application development with containers, but it hasn’t been without its security risks. To help you quickly assess your security posture, we’ve compiled a list of questions your security, DevSecOps, or DevOps teams should readily be able to answer if your cloud-native stack has been architected with appropriate security measures.

  1. Where are images used in containers coming from? You need to ensure only trusted assets are deployed.
  2. How long ago were the images scanned for vulnerabilities? Frequent scanning is essential to container security.
  3. Which of your containers are impacted by known vulnerabilities, and what’s their severity? Once new vulnerabilities are found, your team needs a way to find them not just in images but in running deployments – and fast.
  4. Are any of these containers in production impacted by a known vulnerability? Beyond finding where you have a vulnerability, you must be able to determine whether it’s been exploited.
  5. Which vulnerable running containers or deployments should be prioritized first for remediation? Knowing you have 68 instances of a critical vulnerability won’t help your team. Knowing which three or five instances present the biggest risk – based on attributes such as Internet reachability, app criticality, or running as root – will ensure those get fixed.
  6. Which deployments are using privileged containers, meaning they have full access to the host operating system? Find these developer shortcuts to keep your environment safe.
  7. Which container application services are exposed outside of the Kubernetes cluster? With “default allow” network connectivity native to Kubernetes, you need to make sure your team has turned off unnecessary communication paths.
  8. Can we tell which processes are running in any container in any cluster? Identifying suspicious processes will highlight compromised containers.
  9. Which network communication pathways are active but are not being used in production? These insights will help your team reduce the attack surface.
  10. Which running deployments have had an adversary attempt a specific runtime exploit? Finding these operations, even when they failed, will highlight areas of compromise.
  11. What team in the organization owns a particular running application? Ensuring remediation will require the help of this team.
  12. How many of my clusters, namespaces, and nodes adhere to CIS benchmarks for containers and Kubernetes, and to what extent (or to PCI, HIPAA, or NIST, depending on relevance)? You need compliance insights by asset to align with different groups’ responsibility levels.
