How containers and DevOps could have left Equifax’s attackers empty-handed

By now, details of the massive Equifax breach, which saw 143 million personal records compromised, have made their way around the global news as well as the broader security and enterprise IT communities. Within these circles, you can bet that anyone responsible for resolving application vulnerabilities is worried about becoming the next headline.

There’s little argument that patching applications is a big deal, both in its criticality to an organization’s security posture and in how onerous the process can be in traditional application environments.

Yet the Equifax attackers succeeded because they exploited an Apache Struts web application vulnerability (CVE-2017-5638) that Equifax hadn’t patched, even though the fix had been available for two months before the breach. That is a sizeable window for any system to remain vulnerable, and one that organizations struggle to shrink when traditional application environments are involved. On average, financial companies take 176 days to patch vulnerabilities.

Furthermore, attackers are well aware that many large organizations are slow to patch, and they count on it: hacking efforts ramp up the moment a vendor or developer releases a patch.

The key objective, then, is to accelerate the patching process and narrow the window of vulnerability. Monolithic application environments, however, aren’t conducive to speedy, reliable patching.

An entirely new approach to hardening applications is in order: one that leverages the ephemeral nature of containers, along with the mechanisms for building and maintaining immutable infrastructure.

The DevOps methodology for rapidly discovering and resolving vulnerabilities

Resolving a vulnerability through a container-based DevOps framework involves three primary phases:

  1. Scan & Identify

  2. Patch & Test

  3. Deploy

Containers offer highly effective package management: an immutable image is built once and then deployed as many times as necessary. Each application can have its own container image that installs exactly the packages that application needs, freeing engineers from installing those packages on each host. The drawback is a growing number of packages and versions, each of which potentially carries its own vulnerabilities.
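
As a minimal sketch of that build-once, deploy-many model, the Kubernetes Deployment below runs three identical replicas of a hypothetical registry.example.com/payments image (the image name, version, and labels are placeholders, not a prescribed setup):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments
spec:
  replicas: 3            # the same immutable image, deployed three times
  selector:
    matchLabels:
      app: payments
  template:
    metadata:
      labels:
        app: payments
    spec:
      containers:
      - name: payments
        # Hypothetical image; pinning an explicit version (or, stricter
        # still, a sha256 digest) ensures every replica runs exactly the
        # bits that were built and scanned. Mutable tags like :latest
        # defeat the immutability guarantee.
        image: registry.example.com/payments:1.4.2
```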

How can we prevent known vulnerable packages from entering the data center, and how can vulnerable applications (e.g., applications that relied on Struts) be updated and redeployed?

Preventing vulnerabilities involves a new take on a classic method, one that works well for containers: scanning for common vulnerabilities and exposures (CVEs). Automating the creation of container images lets DevSecOps teams enforce criteria an image must satisfy before it reaches the image registries for consumption. Software like Docker Security Scanning or CoreOS Clair essentially functions as a gatekeeper to the container image registry, preventing tainted images from becoming available in the data center.

Figure 1: Docker Security Scanner (source: https://blog.docker.com/2017/02/docker-datacenter-1-13/)
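
To make the gatekeeping concrete, here’s a rough sketch of a Drone-style CI pipeline in which a Clair scan sits between the build and push steps, so a failing scan keeps the image out of the registry entirely. The registry name and the clair-scanner invocation are illustrative assumptions rather than verbatim configuration; check your scanner’s documentation for the exact image and flags:

```yaml
# .drone.yml (sketch): a failed CVE scan aborts the pipeline
# before the image can ever be pushed to the registry.
pipeline:
  build:
    image: docker:17.06
    commands:
      - docker build -t registry.example.com/payments:${DRONE_COMMIT} .
  scan:
    image: docker:17.06
    commands:
      # Illustrative invocation: run a Clair-based scanner against the
      # fresh image; a non-zero exit code fails the pipeline here.
      - clair-scanner --clair=http://clair:6060 --ip=$(hostname -i) registry.example.com/payments:${DRONE_COMMIT}
  publish:
    image: docker:17.06
    commands:
      # Reached only if the build and scan steps both succeeded.
      - docker push registry.example.com/payments:${DRONE_COMMIT}
```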

Continuous integration (CI) and testing

In the world of monolithic applications, a discovered vulnerability sends engineers scrambling to patch it and update the application runtime environment. On a host- or VM-based architecture, that means upgrading packages or a particular service on every affected machine.

With containers, only the individual vulnerable containers need to be updated. However, every software update risks introducing incompatibilities that cause the application to malfunction. Automated test suites built with tools like Jenkins or Drone, or even sandbox environments (created through Kubernetes, which can label specific nodes for testing, as sketched below), allow for quick quality assurance (QA) that can identify application regressions.
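
A minimal sketch of that node-labeling approach, assuming a node has already been labeled for testing (for example, with kubectl label nodes node-3 env=qa; the node and image names here are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: payments-qa
  labels:
    app: payments
    track: qa              # distinguishes QA pods from production pods
spec:
  # Schedule only onto nodes labeled env=qa, keeping test workloads on
  # designated hosts even though they share the production cluster.
  nodeSelector:
    env: qa
  containers:
  - name: payments
    image: registry.example.com/payments:1.4.3-rc1   # candidate build under test
```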

Containers and orchestrators such as Kubernetes allow for rapid QA on separate container networks within the same production cluster or environment. Here, the underlying kernel and hosts are part of the actual production infrastructure, but the containers undergoing QA on them serve no actual customer traffic. Once automated testing is complete, the new application is scanned, signed, and pushed to the registry. At that point the application is both patched and tested, and is ready for rollout.

Developing containerized applications within a CI framework gives enterprises a substantial leg up on security risk. The process creates ongoing opportunities to evaluate vulnerabilities of varying severity and to produce updated images frequently.

To that point, Docker security lead Diogo Monica offers an interesting perspective in his recent blog post: image freshness should be regarded as a key security risk metric.

Continuous deployment (CD)

Orchestrators (Kubernetes, Docker Swarm, etc.) make rolling out patches remarkably simple through their rolling-upgrade features: application instances are taken down and brought back up with the new image, a few at a time. Configurable health checks ensure each updated instance is healthy before the orchestrator continues rolling the fix out to the remaining instances. The result is an easy, safe rollout: update a configuration, click a button.
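
Sketched below is what such a rollout might look like in a Kubernetes Deployment. The batch sizes, probe endpoint, and port are assumptions chosen to illustrate the mechanism, not recommended values:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments
spec:
  replicas: 6
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1    # take down at most one instance at a time
      maxSurge: 1          # allow one extra instance during the rollout
  selector:
    matchLabels:
      app: payments
  template:
    metadata:
      labels:
        app: payments
    spec:
      containers:
      - name: payments
        image: registry.example.com/payments:1.4.3   # the patched image
        # The health check: if an updated instance never becomes ready,
        # the rollout stalls there instead of replacing the whole fleet.
        readinessProbe:
          httpGet:
            path: /healthz   # hypothetical health endpoint
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 10
```

Changing the image field, for instance with kubectl set image deployment/payments payments=registry.example.com/payments:1.4.3, is the “click of a button” that triggers the rollout, and kubectl rollout status deployment/payments reports whether it converged.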

Next, blue/green deployments come into play as a means of mitigating the risk of disruption if a newly built application isn’t functioning properly. Depending on the load-balancing technology in use, it is possible to route a percentage of traffic to the newly spawned instances of the application. (Note: a blue/green deployment should not use the rolling-update feature; instead, the new version is started as a separate, new application.) Once the new version is verified, using the rolling-update feature to upgrade the vulnerable application closes the vulnerability.
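
One common way to implement this on Kubernetes, sketched here with the same hypothetical names as above, is to run the patched version as a separate “green” Deployment and flip a Service selector once it checks out:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: payments
spec:
  # Initially routes to the existing "blue" Deployment. Once the
  # separately deployed "green" (patched) version is verified, switch
  # track to green to cut traffic over; switching back is the rollback.
  selector:
    app: payments
    track: blue
  ports:
  - port: 80
    targetPort: 8080
```

The cutover itself can then be a single command, e.g. kubectl patch service payments -p '{"spec":{"selector":{"app":"payments","track":"green"}}}'.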

Systematically replacing vulnerable and out-of-date workloads with newer ones on a regular basis is a fundamental tenet of maintaining immutable infrastructure, now an established best practice among leading organizations, according to Gartner Research analyst Neil MacDonald.

The practice of maintaining immutable infrastructure gives rise to another key metric that Diogo Monica emphasizes: reverse uptime. From a security standpoint, the newest workloads, running the latest images, carry the least risk.

The bottom line is that immutable infrastructure is bad news for attackers: container images are constantly updated with the latest fixes and vulnerability patches, and production workloads don’t live long enough for a threat to establish persistence anywhere in the environment.

The plot thickens: enter container runtime protection

Even with the ability to address vulnerabilities quickly, it’s important to keep in mind that it’s not possible to catch them all upfront. The ability to effectively detect, prevent, and respond to threats is absolutely key. So, for those enterprises deploying containerized applications, a dedicated container security solution is essential for protecting these environments.

Stay tuned for a follow-up blog post from us where we dive deep into container runtime protection, and demonstrate how a purpose-built container security platform such as StackRox can surface malicious behavior and attacker techniques like the ones characteristic of the Equifax breach.

