We recently asked IT and security professionals working at organizations that have adopted containers to rate the importance of several container security capabilities and use cases for their environments. We found that respondents put a premium on addressing those security use cases that allow them to shift security left and apply best practices earlier in the container life cycle, with vulnerability management and configuration management taking two of the top three spots.
Welcome to part three of our four-part series on best practices and recommendations for Azure Kubernetes Service (AKS) cluster security. Previous posts have discussed how to plan and create secure AKS clusters and container images, and how to lock down AKS cluster networking infrastructure. This post will cover the critical topic of securing the application runtimes for AKS cluster workloads, and the tools and controls available to help enforce best practices in multi-tenant AKS clusters.
Today, StackRox published its State of Kubernetes and Container Security Report, Winter 2020 edition (download your full copy here), the first report of its kind. Based on responses from more than 540 Kubernetes and container users across IT security, DevOps, engineering, and product roles, the report provides insight into how organizations are adopting containers and Kubernetes, and into the security impact of that adoption. Of all the survey responses, five findings stand out as the biggest surprises.
In part one of this series on Azure Kubernetes Service (AKS) security best practices, we covered how to plan and create AKS clusters to enable crucial Kubernetes security features like RBAC and network policies. We also discussed best practices for creating secure images to deploy to your AKS cluster and the need for performing regular vulnerability scans on those images. This post will cover topics related to the networking infrastructure of AKS clusters and suggestions for locking those networks down to protect against external attacks and internal misconfigurations of a cluster’s workloads.
Now that Kubernetes has won the container orchestration wars, all the major cloud service providers offer managed Kubernetes services for their customers. A managed Kubernetes service provides and administers the Kubernetes control plane, the set of services that would otherwise run on the master nodes of a cluster created directly on virtual or physical machines. While dozens of vendors have received Certified Kubernetes status from the Cloud Native Computing Foundation (CNCF), meaning their Kubernetes offerings conform to a consistent interface, the details of each offering can differ.
Azure Kubernetes (AKS) Security Best Practices Part 1 of 4: Designing Secure Clusters and Container Images
Microsoft’s Azure Kubernetes Service (AKS), launched in June 2018, has become one of the most popular managed Kubernetes services. As with any infrastructure platform or Kubernetes service, though, Azure customers must make important decisions and formulate a plan for creating and maintaining secure AKS clusters. While many of these requirements and responsibilities apply to all Kubernetes clusters regardless of where they are hosted, AKS also has platform-specific requirements that users must consider and act on to safeguard their clusters, and the workloads their organization runs on them, from breaches and other malicious attacks.
Today we shared the news that StackRox supports the Anthos platform (download joint solution brief), extending the reach of our hybrid and multicloud security approach. Anthos and the StackRox Kubernetes Security Platform share many common principles in delivering consistency across different environments – enabling both the infrastructure itself and the security policies and controls to bridge these worlds makes for a powerful combination. Hybrid and multicloud adoption is on the rise, as demonstrated in StackRox research and other reports.
A few months ago, we published a guide to setting up Kubernetes network policies, which focused exclusively on ingress network policies. This follow-up post explains how to enhance your network policies to also control allowed egress. A Brief Recap: What are Network Policies? Network policies are used in Kubernetes to specify how groups of pods are allowed to communicate with each other and with external network endpoints. They can be thought of as the Kubernetes equivalent of a firewall.
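To illustrate the kind of egress control that follow-up post covers, here is a minimal sketch of a NetworkPolicy that limits what a group of pods can reach; the namespace, pod label, and policy name are hypothetical, chosen only for this example:

```yaml
# Hypothetical example: pods labeled app=web in the default namespace
# may only make DNS queries (UDP 53) and HTTPS connections (TCP 443);
# all other egress traffic from those pods is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-restrict-egress
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Egress
  egress:
  # No "to" selector means these ports are allowed to any destination.
  - ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 443
```

Because the policy selects the pods and lists Egress under policyTypes, any egress traffic not matched by the rules above is dropped for the selected pods, while ingress remains unaffected.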
Kubernetes cluster networking can be more than a bit confusing, even for engineers with hands-on experience working with virtual networks and request routing. In this post, we will introduce the complexities of Kubernetes networking by following the journey of an HTTP request to a service running on a basic Kubernetes cluster. We will use a standard Google Kubernetes Engine (GKE) cluster with two Linux nodes for our examples, with notes on where the details might differ on other platforms.
As 2018 came to a close and the blistering pace of Kubernetes adoption showed no signs of slowing, the first major security vulnerability in the container orchestrator was discovered (CVE-2018-1002105), with a CVSS criticality score of 9.8. The vulnerability enabled attackers to compromise clusters via the Kubernetes API server. Compounding the impact, the flaw had existed in every version of Kubernetes since v1.0, so everyone using Kubernetes at that point was potentially affected.
The release of Kubernetes 1.17 introduces several powerful new features and sees others maturing toward or into general availability. This recap provides a rundown of some of the most notable changes, which include: major improvements in cluster network and routing controls and scalability; new capabilities in cluster storage, pod scheduling, and runtime options; and better custom resource support. Note that to try out these features, you will need access to a cluster running Kubernetes 1.17.