A few months ago, we published a guide to setting up Kubernetes network policies, which focused exclusively on ingress network policies. This follow-up post explains how to extend your network policies to also control allowed egress.

A Brief Recap: What Are Network Policies?

Network policies are used in Kubernetes to specify how groups of pods are allowed to communicate with each other and with external network endpoints. They can be thought of as the Kubernetes equivalent of a firewall.
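To make the egress side concrete, here is a minimal sketch of a NetworkPolicy that restricts outbound traffic. The `app: web` label and the `default` namespace are hypothetical placeholders, not taken from the original guide:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-allow-dns-egress
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: web            # hypothetical label selecting the pods to restrict
  policyTypes:
    - Egress              # this policy governs outbound traffic only
  egress:
    # Allow DNS lookups to kube-dns pods in any namespace;
    # all other egress from the selected pods is denied.
    - to:
        - namespaceSelector: {}
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
```

Because listing `Egress` in `policyTypes` switches the selected pods to a default-deny posture for outbound traffic, each allowed destination must be enumerated explicitly, as with the DNS rule above.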
Kubernetes cluster networking can be more than a bit confusing, even for engineers with hands-on experience working with virtual networks and request routing. In this post, we will present an introduction to the complexities of Kubernetes networking by following the journey of an HTTP request to a service running on a basic Kubernetes cluster. We will use a standard Google Kubernetes Engine (GKE) cluster with two Linux nodes for our examples, with notes on where the details might differ on other platforms.
As 2018 was coming to a close, and the blistering pace of Kubernetes adoption showed no signs of slowing, the first major security vulnerability in the container orchestrator was discovered (CVE-2018-1002105), with a CVSS severity score of 9.8. The vulnerability enabled attackers to compromise clusters via the Kubernetes API server. Increasing the impact, this vulnerability had existed in every version of Kubernetes since v1.0, so everyone using Kubernetes at that point was potentially affected.
The release of Kubernetes 1.17 introduces several powerful new features and sees others maturing toward or into general availability. This recap provides a rundown of some of the most notable changes, which include: major improvements in cluster network and routing controls and scalability; new capabilities in cluster storage, pod scheduling, and runtime options; and better custom resource support. Note that to try out these features, you will need access to a cluster running Kubernetes 1.17.
This post is a companion to the talk I gave at Cloud Native Rejekts NA ’19 in San Diego on how to work around common issues when deploying applications with the Istio service mesh in a Kubernetes cluster.

The Istio Service Mesh

The rise of microservices, powered by Kubernetes, brings new challenges. One of the biggest changes with distributed applications is the need to understand and control the network traffic these microservices generate.
The Istio working group just released Istio 1.4.0 ahead of KubeCon + CloudNativeCon North America in San Diego this week. This post summarizes how this latest version continues the project’s recent focus on improving the operability and performance of Istio for production users.

Highlights

- Continued work on performance improvements, with alpha support for Mixer-less telemetry
- A complete update to the service authorization system with the new AuthorizationPolicy
- Support for Istio installation, control plane configuration, and upgrades in the istioctl command
- More troubleshooting support in istioctl
- Proxy sidecar stability and feature improvements

Laying the Groundwork for Performance Improvements

Istio 1.
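As a rough sketch of what the new AuthorizationPolicy looks like, the manifest below allows only GET requests from one service account to a selected workload. The workload label `app: httpbin` and the `sleep` service account are hypothetical examples, not taken from the release notes:

```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: httpbin-viewer
  namespace: default
spec:
  selector:
    matchLabels:
      app: httpbin        # hypothetical workload this policy applies to
  rules:
    - from:
        - source:
            # hypothetical caller identity (SPIFFE-style principal)
            principals: ["cluster.local/ns/default/sa/sleep"]
      to:
        - operation:
            methods: ["GET"]   # only read requests are permitted
```

Once any AuthorizationPolicy selects a workload, requests that match no rule are denied, so policies like this one double as an allow-list.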
StackRox has pioneered Kubernetes-native container security, bringing rich context and infrastructure-native enforcement to protecting Kubernetes and containers across build, deploy, and runtime. We recognize the importance of getting critical alerts about this cloud-native stack to the right team at the right moment. By integrating with PagerDuty, we have broadened the choices for how to do so. To effectively protect the cloud-native stack, DevOps and security teams must be able to operationalize the security technologies designed to protect this new infrastructure.
Just in time for KubeCon next week, today we’re announcing version 3.0 of our StackRox Kubernetes Security Platform. We’re proud of the industry-first capabilities we’re introducing with this upgrade, enabling our customers to better harden their Kubernetes and container environments. Every time we build new functionality into our platform, we keep a relentless focus on the staff responsible for operationalizing container and Kubernetes security. This lens informs everything about how we design new capabilities.
SOC (System and Organization Controls) 2 is a set of compliance requirements that applies to companies that store, process, or transmit customer data. A broad range of companies, including SaaS providers, may need to comply with SOC 2 to be competitive in the market and keep customer data secure. Public cloud providers such as Amazon Web Services, Google Cloud Platform, and Microsoft Azure are subject to SOC 2 and make their audit reports publicly available.
I recently joined Alan Shimel, editor-in-chief of DevOps.com for a chat about what it means to be a Kubernetes-native security platform and why we believe it’s the most effective way to secure containers and Kubernetes. You can watch our conversation in the video below, or you can read through the transcript of our talk that follows, condensed and modified for clarity.
The Kubernetes team has released patches for the recently disclosed “Billion Laughs” vulnerability, which allowed an attacker to perform a denial-of-service (DoS) attack on the Kubernetes API server by uploading a maliciously crafted YAML file. With those patches comes the disclosure that the vulnerability was more severe than previously announced: it could be triggered even by unauthenticated users (in Kubernetes 1.13) or by any authenticated user, even one granted only read access via RBAC (Kubernetes 1.
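To illustrate the mechanism behind the attack (do not submit this to a cluster): a “billion laughs” document abuses YAML anchors (`&`) and aliases (`*`) so that each level of nesting expands every entry of the previous level, causing the parsed representation to grow exponentially while the file itself stays tiny. This is a generic sketch of the technique, not the actual payload from the disclosure:

```yaml
# Each list below references the previous one nine times via aliases,
# so every additional level multiplies the expanded size ninefold.
a: &a ["lol", "lol", "lol", "lol", "lol", "lol", "lol", "lol", "lol"]
b: &b [*a, *a, *a, *a, *a, *a, *a, *a, *a]
c: &c [*b, *b, *b, *b, *b, *b, *b, *b, *b]
d: &d [*c, *c, *c, *c, *c, *c, *c, *c, *c]
e: &e [*d, *d, *d, *d, *d, *d, *d, *d, *d]
# A handful more levels is enough to exhaust memory in a naive parser.
```

The fix in the patched Kubernetes releases limits how far such alias expansion is allowed to grow before the API server rejects the document.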