In February, we published an article with a side-by-side comparison of the managed Kubernetes offerings from the three largest cloud providers: Amazon Elastic Kubernetes Service (EKS), Azure Kubernetes Service (AKS), and Google Kubernetes Engine (GKE). The Kubernetes ecosystem changes rapidly, as do the feature sets of these managed platforms. This post covers important updates to these services since our original comparison.
Azure Kubernetes Service (AKS)
AKS has made Kubernetes 1.15 the default for new clusters, promoted version 1.16 to general availability (GA), and removed support for Kubernetes 1.13. Azure is the first of the three providers to release version 1.16 as GA and make version 1.15 the default.
AKS now monitors for and repairs faulty nodes by default. Currently, this feature appears to support only nodes deployed in Virtual Machine Scale Sets, not VM Availability Sets.
Node auto-repair is a major step forward for AKS cluster reliability and automation. Previously, of the top three cloud providers, only GKE offered this ability.
Azure Security Center Support
Azure Security Center support for AKS has graduated to GA. Security Center can automatically discover, monitor, and suggest improvements to your AKS clusters and nodes. This integration comes as part of the standard tier of Security Center, which requires opt-in and incurs usage charges.
Azure Monitor Support for GPU Nodes
The Azure Monitor service now supports monitoring metrics for containers that use GPU resources. Currently, this enhancement covers only NVIDIA and AMD GPUs. Azure Monitor requires opt-in and incurs usage charges.
Elastic Kubernetes Service (EKS)
EKS now supports Kubernetes version 1.15. Support for version 1.12 is now deprecated.
AWS KMS Envelope Encryption for EKS Secrets
EKS now offers the ability to encrypt Kubernetes secrets at rest in the cluster’s etcd data store using a data encryption key (DEK); the DEK itself is stored encrypted at rest within the cluster. The AWS Key Management Service (KMS) decrypts the DEK when the cluster’s Kubernetes API server needs to encrypt or decrypt Secret objects. The decrypted DEK remains only in memory, which makes exfiltration or exploitation of this key more difficult. This practice of using an external key or service to encrypt an intermediate key is called envelope encryption.
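As a sketch of what enabling this looks like at cluster creation time with eksctl, a cluster config might reference a customer-managed KMS key as follows (the cluster name, region, and key ARN below are placeholders, and the `secretsEncryption` field assumes a recent eksctl version):

```yaml
# Hypothetical eksctl cluster config enabling envelope encryption of secrets.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: example-cluster        # placeholder
  region: us-east-1            # placeholder
secretsEncryption:
  # ARN of a customer-managed KMS key used to encrypt the DEK
  keyARN: arn:aws:kms:us-east-1:111122223333:key/example-key-id
```

Note that secrets encryption can only be configured when the cluster is created, so plan for it up front.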
Bottlerocket Container-Optimized Linux (Preview)
Amazon has released Bottlerocket, a new open-source Linux-based operating system optimized for running container workloads. Bottlerocket ships with a minimal set of installed software and an update mechanism that allows one-step OS updates and rollbacks. Users can try it out on their EKS clusters with self-managed node groups.
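One way to experiment with Bottlerocket, assuming a recent eksctl version with Bottlerocket support, is a self-managed node group config like the following sketch (all names and sizes are placeholders):

```yaml
# Hypothetical eksctl node group running the Bottlerocket AMI family.
nodeGroups:
  - name: bottlerocket-ng      # placeholder
    instanceType: m5.large
    desiredCapacity: 2
    # Selects Bottlerocket instead of the default Amazon Linux 2 AMI
    amiFamily: Bottlerocket
```

Because the feature is in preview, expect the exact configuration surface to change as Bottlerocket matures.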
Service Level Agreement
Amazon has increased its SLA for EKS cluster control planes from 99.9% to 99.95% uptime.
Google Kubernetes Engine (GKE)
New GKE clusters now use Kubernetes version 1.14 by default. GKE now offers Kubernetes 1.17 in preview, which requires requesting access from Google Cloud.
Users can now control the rate at which GKE performs cluster node upgrades.
Google Compute Engine Persistent Disk CSI Driver (Beta)
Kubernetes persistent volumes in GKE currently use the in-tree (built into the Kubernetes source code) driver for GCE Persistent Disk. Support for in-tree cloud provider drivers is deprecated, with the expectation that the functionality will move to plugins that use the Container Storage Interface (CSI).
GKE now offers GCE Persistent Disk support as a Container Storage Interface (CSI) plugin as a beta feature. The CSI plugin offers some additional features over the in-tree driver, which will stop receiving major improvements and will eventually be removed. GKE manages the installation and upgrades of the CSI plugin in clusters that use it.
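To route volume provisioning through the CSI plugin rather than the in-tree driver, a StorageClass can reference the CSI provisioner name, roughly as follows (the class name and disk type are illustrative choices):

```yaml
# StorageClass backed by the GCE Persistent Disk CSI driver.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: pd-ssd-csi             # placeholder name
provisioner: pd.csi.storage.gke.io   # CSI driver, vs. in-tree kubernetes.io/gce-pd
parameters:
  type: pd-ssd
# Delay binding until a pod is scheduled, so the disk lands in the right zone
volumeBindingMode: WaitForFirstConsumer
```

PersistentVolumeClaims that name this StorageClass will then be provisioned by the CSI driver instead of the in-tree one.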
Ingress for Internal Load Balancing (Beta)
The GKE Ingress Controller now supports the creation of internal HTTP(S) load balancers, which reside within the cluster’s VPC.
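Selecting the internal load balancer is done with an ingress class annotation; a minimal sketch (service name and port are placeholders) might look like:

```yaml
# Ingress provisioning an internal HTTP(S) load balancer in the cluster's VPC.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: internal-ingress       # placeholder
  annotations:
    # "gce-internal" selects the internal load balancer; the default is external
    kubernetes.io/ingress.class: "gce-internal"
spec:
  backend:
    serviceName: web           # placeholder backend Service
    servicePort: 80
```

The resulting load balancer gets a private IP reachable only from within the VPC (or networks connected to it).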
Network Endpoint Groups for Kubernetes Services
GCP now offers Network Endpoint Groups (NEGs) for use as container-native endpoints for Kubernetes Service objects. These NEGs exist in the GCE VPC but outside the GKE cluster. They open up support for a number of use cases, including using a combination of GKE pods and GCE VMs as back ends for a service, customizing Kubernetes cluster ingress, and using GCP network tools that do not directly support GKE workloads.
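Opting a Service into NEG-backed, container-native endpoints is done with an annotation on the Service; a sketch (names and ports are placeholders):

```yaml
# Service exposing its pods as a Network Endpoint Group (NEG).
apiVersion: v1
kind: Service
metadata:
  name: web                    # placeholder
  annotations:
    # Ask GKE to create a NEG and use it for ingress load balancing
    cloud.google.com/neg: '{"ingress": true}'
spec:
  type: ClusterIP
  selector:
    app: web                   # placeholder pod label
  ports:
    - port: 80
      targetPort: 8080
```

With this annotation, load balancers send traffic directly to pod IPs rather than hopping through node ports.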
Node Pool Auto-Provisioning
GKE adds optional support for node pool auto-provisioning. If a cluster with this feature enabled cannot schedule a workload on its existing node pools because of incompatible node taints, node affinities, resource availability, or GPU requirements, GKE will automatically create a node pool with the appropriate configuration and manage its auto-scaling. GKE then removes these node pools when they no longer have dedicated workloads.
This feature needs to be used with care, because in some environments only workloads with specific configurations should be scheduled at all. A validating admission controller could reject workloads with unapproved configurations while still allowing valid workloads to take advantage of this feature.
Workload Identity
Workload Identity is now generally available and is the recommended way to control access to GCP service APIs from GKE cluster workloads. This feature supports fine-grained access controls for cluster workloads that consume GCP services, without requiring administrators to manage cloud credentials for pods or grant blanket permissions to the cluster nodes, the previously supported options. Users can associate Kubernetes service accounts in their cluster with GCP service accounts that hold the appropriate privileges.
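The Kubernetes-side half of that association is an annotation on the Kubernetes service account; a sketch, where the service account names and project ID are placeholders:

```yaml
# Kubernetes service account mapped to a GCP service account via Workload Identity.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-ksa                # placeholder
  namespace: default
  annotations:
    # GCP service account whose permissions pods using this KSA will assume
    iam.gke.io/gcp-service-account: app-gsa@example-project.iam.gserviceaccount.com
```

Completing the link also requires an IAM policy binding on the GCP service account granting the `roles/iam.workloadIdentityUser` role to the Kubernetes service account’s identity.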