In the three and a half years since its release, Kubernetes has become one of the most popular container management systems on the market. A survey by 451 Research found that 71% of enterprise organizations running containers are using Kubernetes. Likewise, Google Kubernetes Engine (GKE) has emerged as one of the leading managed services for Kubernetes deployments, attracting customers like Niantic, Philips, Meetup, and Evernote. GKE extends the baseline benefits of Kubernetes, including automated cluster deployment, managed container networking, autoscaling, and a managed master node with guaranteed uptime and automated Kubernetes upgrades.
GKE-native security features
GKE offers inherent security-related benefits, including a minimalist, container-optimized operating system (read: an OS with an arguably smaller attack surface than its multi-purpose counterparts) and management of master cluster resources by Google engineers. In particular, GKE’s management of the master node ensures that your master is configured correctly and updated regularly, and that the main cluster API (which controls cluster authentication and authorization) remains available and secure.
GKE also enables enterprises using containers to easily take advantage of a number of security features developed by Google and the broader Kubernetes community. These include projects like Grafeas and Kritis, which secure the container software supply chain, and enhancements in recent Kubernetes releases such as:
Role-Based Access Control (RBAC);
built-in secrets management that automatically generates secrets containing API access credentials and modifies your pods to use those secrets;
the Network Policy API, which allows users to restrict which pods can communicate with each other;
the Node Authorizer, which restricts each kubelet’s API access to only the secrets, pods and other objects associated with its own node;
TLS bootstrapping that supports server and client certificate rotation; and
HTTPS re-encryption, which allows users to leverage HTTPS between the Google Cloud Load Balancer and service backends.
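As a concrete illustration of the Network Policy API mentioned above, a minimal NetworkPolicy that restricts which pods can talk to a backend might look like the sketch below. The labels, namespace and port are hypothetical placeholders, not values from any particular deployment:

```yaml
# Hypothetical example: only pods labeled app=frontend may reach
# pods labeled app=backend on TCP port 8080; all other ingress to
# the backend pods is denied once this policy selects them.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend        # pods this policy applies to
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend   # only these pods are allowed in
      ports:
        - protocol: TCP
          port: 8080
```

Note that NetworkPolicy objects only take effect when the cluster’s network plugin enforces them, which GKE supports via its network policy option.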
StackRox for runtime security
StackRox augments GKE’s built-in security functions with a deep focus on securing the container runtime environment. When a Fortune 500 financial services firm deployed StackRox to secure its containers running on GKE earlier this year, it detected a number of potential threat vectors related to vulnerabilities and misconfigurations within its applications. These gave malicious actors the opportunity to establish persistence, escalate privileges and gain access to other systems. StackRox’s core capabilities include:
fine-grained network discovery and visualization at the application, service and container levels;
detection of adversarial actions such as attempts to establish persistence, escalate privileges or pivot through your network;
detection of new attacks via a robust machine-learning capability which builds and adapts models according to the behaviors of your containerized applications to pinpoint malicious behavior; and
a rich framework for enabling preventive and responsive actions on both pre-defined and customized policies and attack patterns.
StackRox’s GKE integration
StackRox runs alongside your containerized applications and is deployed and upgraded the same way they are. It is also designed to auto-scale based on the volume of data it analyzes, and it works with your existing configurations. StackRox ships as a set of container images used to launch our services into a GKE cluster, and the deployment experience is streamlined by our native GKE integration.
We have a Kubernetes-aware bootstrapping process for all our services, meaning that standing up StackRox in your environment is a matter of running a single script and takes less than 20 minutes to complete. You can:
deploy StackRox with the kubectl CLI and leverage your existing orchestrator toolchain to manage the platform;
store StackRox images in Google Container Registry using private repositories;
easily administer privileges on StackRox services within your cluster (our services run in a dedicated namespace); and
leverage node selectors to assign StackRox services to whichever nodes you prefer throughout your cluster.
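The node selector point above uses the standard Kubernetes `nodeSelector` mechanism. The sketch below shows the general pattern; the deployment name, namespace, label and image path are illustrative assumptions, not StackRox’s actual manifests:

```yaml
# Illustrative snippet: schedule a (hypothetical) StackRox service only
# onto nodes labeled for security workloads. Nodes would be labeled
# beforehand, e.g.: kubectl label nodes <node-name> workload=security
apiVersion: apps/v1
kind: Deployment
metadata:
  name: stackrox-sensor      # hypothetical service name
  namespace: stackrox        # dedicated namespace, as noted above
spec:
  replicas: 1
  selector:
    matchLabels:
      app: stackrox-sensor
  template:
    metadata:
      labels:
        app: stackrox-sensor
    spec:
      nodeSelector:
        workload: security   # only nodes carrying this label are eligible
      containers:
        - name: sensor
          # hypothetical private Google Container Registry path
          image: gcr.io/my-project/stackrox-sensor:latest
```

Because this is plain Kubernetes scheduling configuration, it can be applied and managed with the same kubectl toolchain used for the rest of your workloads.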
Additionally, you can use the platform to view and monitor Kubernetes system services such as kube-proxy, kube-dns, heapster and kubernetes-dashboard for potential threats to the cluster. This is a critical capability in the event of orchestrator compromise, since an attacker with orchestrator access could take a number of privileged actions. These services are discovered automatically and populate the Kubernetes application in the StackRox interface.
Are you looking to secure your GKE environment at runtime? Contact us if you’re interested in trying StackRox.