Posts under Kubernetes Security
Last week we published part one of our five-part security blog series on Amazon’s Elastic Kubernetes Service (EKS), discussing how to securely design your EKS clusters. This blog post expands on that discussion and identifies security best practices for your critical cluster add-ons. EKS leaves the task of installing and managing most AWS service integrations and common Kubernetes extensions to the user. These optional features, often called add-ons, require heightened privileges or present other challenges addressed below.
I’ve always said the best part of my job is talking to customers – especially happy customers! – and I got that chance a couple of weeks ago when I interviewed George Gerchow, the chief security officer at Sumo Logic. George is one of those “no BS, move fast, lead by serving, and do it all with a smile” guys. And he’s unflinching about the criticality of security to the company he serves.
When it comes to cloud services like AWS, customers need to understand which features and tools their cloud provider makes available, as well as which pieces of the management role fall on the user. That share of the workload becomes even more critical when securing the Kubernetes cluster, the workloads deployed to it, and its underlying infrastructure. Customers share responsibility with AWS for the security and compliance of their use of its services.
Welcome to the final post in our four-part series on security best practices for Azure Kubernetes Service. In the first three installments, we covered how to create secure AKS clusters and container images (part 1), how to lock down cluster networking (part 2), and how to plan and enforce application runtime safeguards (part 3). This post will close out the series by covering the routine maintenance and operational tasks required to keep your AKS clusters and infrastructure secured.
We recently asked IT and security professionals working at organizations that have adopted containers to rate the importance of several container security capabilities and use cases for their environments. We found that respondents put a premium on addressing those security use cases that allow them to shift security left and apply best practices earlier in the container life cycle, with vulnerability management and configuration management taking two of the top three spots.
Today, StackRox published its State of Kubernetes and Container Security Report, Winter 2020 edition (download your full copy here), the first of its kind. Based on responses from more than 540 Kubernetes and container users across IT security, DevOps, engineering, and product roles, the report provides insights into how organizations are adopting containers and Kubernetes and the security impact of that adoption. Of all the survey responses, five findings stand out as the biggest surprises.
Azure Kubernetes (AKS) Security Best Practices Part 1 of 4: Designing Secure Clusters and Container Images
Microsoft’s Azure Kubernetes Service (AKS), launched in June 2018, has become one of the most popular managed Kubernetes services. Like any infrastructure platform or Kubernetes service, though, AKS requires the Azure customer to make important decisions and formulate a plan for creating and maintaining secure clusters. Many of these requirements and responsibilities apply to all Kubernetes clusters, regardless of where they are hosted. AKS also has specific requirements that platform users must consider and act on to ensure that their AKS clusters, and the workloads their organization runs on them, are safeguarded from breaches and other malicious attacks.
As 2018 was coming to a close, and the blistering pace of Kubernetes adoption showed no signs of slowing, the first major Kubernetes security vulnerability was discovered in the container orchestrator (CVE-2018-1002105), with a CVSS criticality score of 9.8. The vulnerability enabled attackers to compromise clusters via the Kubernetes API server. Increasing the impact, this vulnerability had existed in every version of Kubernetes since v1.0 – so everyone using Kubernetes at that point was potentially affected.
Just in time for KubeCon next week, we’re announcing today the 3.0 version of our StackRox Kubernetes Security Platform. We’re really proud of the industry-first capabilities we’re introducing with this upgrade, enabling our customers to better harden their Kubernetes and container environments. Every time we build new functionality into our platform, we keep a relentless focus on the staff responsible for operationalizing container and Kubernetes security. This lens informs everything about how we design new capabilities.
SOC (System and Organization Controls) 2 is a set of compliance requirements that applies to companies that store, process, or transmit customer data. A broad range of companies, including SaaS providers, may need to comply with SOC 2 to be competitive in the market and keep customer data secure. Public cloud providers such as Amazon Web Services, Google Cloud Platform, and Microsoft Azure are subject to SOC 2 and make their audit reports publicly available.
I recently joined Alan Shimel, editor-in-chief of DevOps.com for a chat about what it means to be a Kubernetes-native security platform and why we believe it’s the most effective way to secure containers and Kubernetes. You can watch our conversation in the video below, or you can read through the transcript of our talk that follows, condensed and modified for clarity.
The Kubernetes team has released patches for the recently disclosed “Billion Laughs” vulnerability, which allowed an attacker to perform a Denial-of-Service (DoS) attack on the Kubernetes API server by uploading a maliciously crafted YAML file. With those patches comes the disclosure that the vulnerability was more severe than previously announced: it could be triggered even by unauthenticated users (in Kubernetes 1.13) or by any authenticated user, even one granted only read access via RBAC (Kubernetes 1.
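To see why a tiny YAML file can exhaust an API server’s memory, here is a deliberately scaled-down sketch of the expansion trick using PyYAML (an illustrative stand-in, not the Kubernetes parser itself). Each line defines an anchor that references the previous anchor several times, so the parsed document grows exponentially while the source stays a few dozen bytes.

```python
import yaml

# Scaled-down "Billion Laughs" payload: 3 levels of 3 aliases each.
# The real attack uses ~10 levels of ~10 aliases, expanding a few
# hundred bytes of YAML into roughly a billion entries in memory.
payload = """
a: &a ["lol", "lol", "lol"]
b: &b [*a, *a, *a]
c: &c [*b, *b, *b]
"""

doc = yaml.safe_load(payload)

def count_leaves(node):
    """Count the leaf strings produced by alias expansion."""
    if isinstance(node, list):
        return sum(count_leaves(x) for x in node)
    return 1

# 3 levels of 3 aliases -> 3**3 = 27 strings under key "c"
print(count_leaves(doc["c"]))  # 27
```

The patched Kubernetes API server mitigates this class of attack by limiting how far YAML aliases may expand during parsing.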
When you’re focused on revolutionizing the Accounts Receivable (AR) market, feature innovation and delivery are your lifeblood, and containers and Kubernetes become your currency. Protecting customer data on that cloud-native infrastructure is essential to successfully disrupting this FinTech market. YayPay is proud of its digital disruptor status, and StackRox is proud to have enabled the security and data protection YayPay needs to fuel customer growth. It’s always fun to work with “born in the cloud” companies like YayPay.
As the container ecosystem has matured, Kubernetes has emerged as the de facto orchestrator for running applications. The advent of declarative and immutable workloads has paved the way for an entirely new operational model for detection and response. The rich set of workload metadata augments and elevates traditional detection approaches. One such detection approach is anomaly detection. Anomaly detection consists of first creating an activity baseline for an application and then measuring future events against that baseline.
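The baseline-then-compare approach described above can be sketched in a few lines of Python. This is a minimal illustration of the general technique, not any specific product’s implementation; the class and event names are hypothetical.

```python
from collections import Counter

class ProcessBaseline:
    """Learns which process names a workload normally runs during a
    baseline window, then flags events that fall outside that baseline
    (hypothetical example of anomaly detection for containers)."""

    def __init__(self):
        self.seen = Counter()
        self.learning = True

    def observe(self, process_name):
        # Record activity only while the baseline is being built.
        if self.learning:
            self.seen[process_name] += 1

    def finalize(self):
        # Declarative, immutable workloads mean the baseline can be
        # locked once the application's normal behavior is captured.
        self.learning = False

    def is_anomalous(self, process_name):
        return process_name not in self.seen

baseline = ProcessBaseline()
for proc in ["nginx", "nginx", "sh"]:  # activity in the baseline window
    baseline.observe(proc)
baseline.finalize()

print(baseline.is_anomalous("nginx"))  # False: part of the baseline
print(baseline.is_anomalous("curl"))   # True: never seen before
```

Real systems enrich this with the Kubernetes workload metadata mentioned above (deployment, namespace, image), so a baseline is scoped per workload rather than per machine.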
Containers, along with orchestrators such as Kubernetes, have ushered in a new era of application development methodology, enabling microservices architectures as well as continuous development and delivery. Docker is by far the most dominant container runtime engine, with 91% penetration according to our latest State of Container and Kubernetes Security Report. Containerization has many benefits and as a result has seen wide adoption. According to Gartner, by 2020, more than 50% of global organizations will be running containerized applications in production.
Following security best practices for AWS EKS clusters is just as critical as for any Kubernetes cluster. In a talk I gave at the Bay Area AWS Community Day, I shared lessons learned and best practices for engineers running workloads on EKS clusters. This overview recaps my talk and includes links to instructions and further reading.
About EKS
Amazon Elastic Kubernetes Service (EKS) is AWS’ managed Kubernetes service. AWS hosts and manages the Kubernetes masters, and the user is responsible for creating the worker nodes, which run on EC2 instances.