As the container ecosystem has matured, Kubernetes has emerged as the de facto orchestrator for running containerized applications. The advent of declarative, immutable workloads has paved the way for an entirely new operational model for detection and response. The rich set of workload metadata augments and elevates traditional detection approaches.
One such approach is anomaly detection. Anomaly detection consists of first creating an activity baseline for an application and then measuring future events against that baseline. Anything that falls too far outside the baseline is considered anomalous and should be investigated. Application activity includes file reads and writes, network requests, and process executions. One challenge of traditional anomaly detection is knowing which activity is malicious and which is benign. When an activity is blocked by the environment, you can be sure that some aspect of that event is undesirable. For example, when an application attempts to reach the Internet but is blocked by a firewall, that activity indicates either a malicious actor, a misconfiguration of the application, or a misconfiguration of the firewall. Any of those scenarios yields valuable information. The more declarative and locked down the environment is, the more accurate anomaly detection can be.
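The baseline-then-compare loop can be sketched in a few lines. This is a minimal illustration, not a real detection system: it assumes events are reduced to simple (type, detail) pairs, and every name, path, and address below is hypothetical.

```python
# Minimal sketch of baseline-based anomaly detection. Events are modeled as
# (event_type, detail) tuples; all values are illustrative.

def build_baseline(events):
    """Collect the set of activity observed during a trusted learning window."""
    return set(events)

def find_anomalies(baseline, events):
    """Return events that fall outside the learned baseline."""
    return [e for e in events if e not in baseline]

# Learning window: normal application activity.
baseline = build_baseline([
    ("exec", "/usr/bin/python3"),
    ("connect", "10.0.0.5:5432"),       # internal database
    ("read", "/etc/app/config.yaml"),
])

# Later activity: one event deviates from the baseline.
observed = [
    ("connect", "10.0.0.5:5432"),
    ("connect", "203.0.113.7:443"),     # unexpected egress to the Internet
]

print(find_anomalies(baseline, observed))  # → [('connect', '203.0.113.7:443')]
```

Real systems replace the exact-match set with statistical or learned models, but the declarative controls discussed below shrink the space of activity enough that even simple comparisons become useful.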
Anomaly Detection in Traditional Infrastructure
Virtual Machines (VMs) provide security primarily through isolation, but they do so at the cost of running a full operating system (OS). To amortize the cost of running the OS, organizations often run several core applications inside each one. From an operator’s perspective, tasks such as creating network firewalls and slim VM images are difficult, because the superset of all application activities must be considered. With this infrastructure, the broader spectrum of possible activity makes anomaly detection more reliant on complex models, algorithms, and machine learning. As a result, anomaly detection in VMs suffers two significant limitations: it requires more expertise to tune and is far more prone to false positives.
Anomaly Detection in Containers and Kubernetes
Unlike VMs, containers are lightweight enough to run a single application, which frequently consists of a single process. This form factor and the declarative nature of Kubernetes increase the efficacy of anomaly detection by providing context around the applications that are running. The following diagram illustrates a richer model for baselining that leverages declarative information, as opposed to solely modeling runtime data. Each layer beneath runtime is declared by developers or operators and constitutes constraints for anomaly detection.
Immutable images provide a foundation for baselining by defining the set of binaries and packages installed in a specific version of an application. A Dockerfile is a manifest of the required application dependencies crafted by the application developer. This architecture relies on a significantly smaller set of packages and binaries compared to a VM, since containers don’t need to support a full operating system. With a reduced number of known binaries and packages for an application, this architecture makes it practical to use a simple form of anomaly detection, one that verifies that only pre-existing binaries are executed. This approach will catch attacks where a malicious actor inserts binaries and executes them.
Concrete actions to take:
- Remove all unneeded dependencies and binaries
- Scan for vulnerabilities
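The simple detection described above can be sketched directly: flag any process execution whose binary was not present in the immutable image. The set of image binaries and the paths below are hypothetical stand-ins for a file list extracted from the image layers.

```python
# Sketch of image-based execution allowlisting. The image_binaries set stands
# in for the file list extracted from an immutable image; paths are illustrative.

image_binaries = {"/usr/local/bin/app", "/bin/sh", "/usr/bin/env"}

def is_anomalous_exec(path):
    """An executed binary that never shipped in the image is suspect."""
    return path not in image_binaries

assert not is_anomalous_exec("/usr/local/bin/app")  # shipped in the image
assert is_anomalous_exec("/tmp/xmrig")              # dropped-in binary: flag it
```

Because a slim container image declares only a handful of binaries, this exact-match check is practical in a way it never could be for a general-purpose VM image.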
PodSpecs also allow developers to define security contexts for their Pods, declaring configurations such as privileges, Linux capabilities, and whether the filesystem is read-only. These configurations provide guardrails to the Pod’s activity and define aspects of the baseline that do not need to be inferred at runtime. For example, an attempted payload drop and execution on a Pod with a read-only filesystem would be denied, and the event can be fed into an anomaly detection system. Often, these events can be individually flagged as they indicate unanticipated behavior. In a VM world, such tight controls are not feasible, because all applications on the host would need to be compatible with such a change.
Concrete actions to take:
- Utilize Pod Security Policies
- Configure your pods’ filesystems to be read-only
- Drop unneeded Linux Capabilities
- Use Admission Controllers for custom rule enforcement
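The actions above map directly onto fields in a PodSpec. As an illustrative sketch (the Pod name and image are hypothetical), a container with a read-only root filesystem, no privilege escalation, and all Linux capabilities dropped might be declared as:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app            # illustrative name
spec:
  containers:
    - name: app
      image: example.com/app:1.2.3   # illustrative image
      securityContext:
        readOnlyRootFilesystem: true    # payload drops to disk are denied
        allowPrivilegeEscalation: false
        privileged: false
        capabilities:
          drop:
            - ALL                       # drop unneeded Linux capabilities
```

Each of these fields is a declared constraint: any runtime event that violates one (for example, a write to the root filesystem) is blocked and can be fed straight into anomaly detection.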
Analogous to firewalls but at a much more granular level, Kubernetes Network Policies allow developers to describe required ingress/egress in terms of Pods and IP subnets. This shift is critical, because developers in a microservices environment have a good understanding of their application’s network interactions and can scope access solely to the known dependencies. Kubernetes abstracts away IP addresses in application-to-application communication and provides logical segmentation constructs such as namespaces and labels. Carefully defined L3/L4 segmentation augments anomaly detection by narrowing the network activity to analyze and directly exposing blocked connections.
Concrete actions to take:
- At a minimum, enable namespace-level segmentation; ideally, define finer-grained Network Policies
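As an illustrative sketch (the namespace, labels, and port are hypothetical), a Network Policy scoping a service's egress solely to its known database dependency might look like:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow-db-egress    # illustrative name
  namespace: prod              # illustrative namespace
spec:
  podSelector:
    matchLabels:
      app: api                 # applies to the api Pods
  policyTypes:
    - Egress
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: db          # only the known database dependency
      ports:
        - protocol: TCP
          port: 5432
```

Note that once egress is restricted, any connection outside this declaration is dropped and surfaces as a high-signal event; in practice such a policy also needs an additional rule permitting DNS egress.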
In a future blog, we’ll dig into anomaly detection approaches for containers and Kubernetes.
Kubernetes and containers create a unique opportunity for developers and operators to explicitly describe the environment in which their applications should run. Traditionally, it’s been difficult to effectively define an application’s expected activity; single-application containers, however, enable users to define a minimal set of privileges, and Kubernetes provides high-level abstractions around service-to-service interactions. These fine-grained controls augment anomaly detection by distinguishing malicious from benign activity and highlighting behavior that violates user policies. As a result, the application’s attack surface is much smaller, making it less likely that a bad actor can gain a foothold in the infrastructure.