
EKS vs GKE vs AKS - Evaluating Kubernetes in the Cloud

We are now six years past the initial release of Kubernetes, and it continues to be one of the fastest-growing open-source projects to date. Its rapid development and adoption have resulted in many different implementations of the platform. The Cloud Native Computing Foundation (CNCF) currently lists over 100 certified Kubernetes distributions and platforms. To ensure some consistency between platforms, the CNCF focuses on three core tenets:

  1. Consistency: The ability to interact consistently with any Kubernetes installation.
  2. Timely updates: Vendors are required to keep versions updated, at least yearly.
  3. Confirmability: Any end-user can verify conformance using Sonobuoy.

These are the baseline requirements for the CNCF when it comes to Kubernetes, but cloud providers have such rich ecosystems that there are bound to be more significant discrepancies. We took a wide-ranging look at the current features and limitations of managed Kubernetes services from the three largest cloud providers.

  • Amazon’s Elastic Kubernetes Service (EKS)
  • Microsoft’s Azure Kubernetes Service (AKS)
  • Google’s Kubernetes Engine (GKE)

We hope that by presenting this information side-by-side, both current Kubernetes users and prospective adopters can better understand their options or get an overview of the current state of managed Kubernetes offerings.

This comparison aims to cover concepts such as version availability, network and security options, and container image services. This overview will not go into detail regarding pricing or topics outside of a platform’s technical capabilities for Kubernetes. All information was current as of October 2020, and you can find more caveats in the “Notes on Data and Sources” at the end of this post.


General information

Currently supported Kubernetes version(s)

  • EKS: 1.17 (default), 1.16, 1.15, 1.14
  • AKS: 1.19 (preview), 1.18, 1.17 (default), 1.16
  • GKE: 1.17, 1.16, 1.15 (default), 1.14
  • Kubernetes: 1.19, 1.18, 1.17

# of supported minor version releases

  • EKS: ≥3 + 1 deprecated
  • AKS: 3
  • GKE: 4
  • Kubernetes: 3

Original GA release date

  • EKS: June 2018
  • AKS: June 2018
  • GKE: August 2015
  • Kubernetes: July 2015 (Kubernetes 1.0)

CNCF Kubernetes conformance

  • EKS: Yes
  • AKS: Yes
  • GKE: Yes
  • Kubernetes: Yes

Latest CNCF-certified version

  • EKS: 1.17
  • AKS: 1.18
  • GKE: 1.17

Control-plane upgrade process

  • EKS: User initiated; the user must also manually update the system services that run on nodes (e.g., kube-proxy, CoreDNS, AWS VPC CNI)
  • AKS: User initiated
  • GKE: Automatically upgraded during the cluster maintenance window; can be user-initiated

Node upgrade process

  • EKS: User initiated
  • AKS: User initiated; AKS drains and replaces nodes
  • GKE: Automatically upgraded during the cluster maintenance window (default; can be turned off); can be user-initiated; GKE drains and replaces nodes

Node OS

  • EKS: Linux: Amazon Linux 2 (default), Ubuntu (partner AMI); Windows: Windows Server 2019
  • AKS: Linux: Ubuntu; Windows: Windows Server 2019
  • GKE: Linux: Container-Optimized OS (COS) (default), Ubuntu; Windows: Windows Server

Container runtime

  • EKS: Docker (default)
  • AKS: Docker (Moby)
  • GKE: Docker (default), containerd, gVisor
  • Kubernetes: Linux: Docker, containerd, CRI-O, rktlet, or any runtime that implements the Kubernetes CRI (Container Runtime Interface); Windows: Docker EE-basic 18.09

Control plane high availability options

  • EKS: Control plane is deployed across multiple Availability Zones (default)
  • AKS: Control plane components are spread across the number of zones defined by the admin
  • GKE: Zonal clusters: single control plane; regional clusters: three Kubernetes control planes in quorum
  • Kubernetes: Supported

Control plane SLA

  • EKS: 99.95%
  • AKS: 99.95% (SLA-backed); 99.9% (non-SLA-backed)
  • GKE: Zonal clusters: 99.5%; regional clusters: 99.95%

SLA financially backed

  • EKS: Yes
  • AKS: Yes
  • GKE: Yes

Pricing

  • EKS: $0.10/hour (USD) per cluster + standard costs of EC2 instances and other resources
  • AKS: Pay-as-you-go: standard costs of node VMs and other resources
  • GKE: $0.10/hour (USD) per cluster + standard costs of GCE machines and other resources

GPU support

  • EKS: Yes (NVIDIA); user must install the device plugin in the cluster
  • AKS: Yes (NVIDIA); user must install the device plugin in the cluster
  • GKE: Yes (NVIDIA); user must install the device plugin in the cluster
  • Kubernetes: Supported with device plugins

Control plane: log collection

  • EKS: Optional (default: off); logs are sent to AWS CloudWatch
  • AKS: Optional (default: off); logs are sent to Azure Monitor
  • GKE: Optional (default: off); logs are sent to Stackdriver

Container performance metrics

  • EKS: Optional (default: off); metrics are sent to AWS CloudWatch Container Insights
  • AKS: Optional (default: off); metrics are sent to Azure Monitor
  • GKE: Optional (default: off); metrics are sent to Stackdriver

Node health monitoring

  • EKS: No Kubernetes-aware support; if a node instance fails, the AWS autoscaling group of the node pool replaces it
  • AKS: Node auto-repair now available; node status monitoring available; use autoscaling rules to shift workloads
  • GKE: Node auto-repair enabled by default

Comments

Starting with the supported versions, AKS has been quicker to support newer Kubernetes releases and has also announced support for more minor patches. AKS takes a very structured approach to its supported versions, while customers may find more flexibility in GKE’s larger number of supported versions: GKE maintains four minor versions, with around 12 total versions supported between 1.14 and 1.17. EKS supports the same number of minor versions but a total of only four versions.

One significant difference between the cloud provider options concerns the amount of management that each provides for clusters, particularly control plane components. GKE still maintains the lead here, offering automated upgrades for the control plane and nodes, in addition to detecting and fixing unhealthy nodes. GKE also offers release channels, which let teams automatically receive and test new versions. Upgrades in EKS and AKS require at least some degree of manual work: AKS has attempted to simplify the process, and user-initiated upgrades are handled relatively easily, while EKS requires manual upgrades of the core Kubernetes components as well as the add-ons.

EKS does not offer any specialized node health monitoring or repair, although EKS customers can create custom health checks to achieve some degree of node health monitoring and automated replacement. AKS has announced support for a node auto-repair feature and, when paired with its auto-scaling node pools, this should suffice for most organizations’ HA requirements. GKE remains the clear leader in cluster health maintenance, with auto-repair enabled by default.
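
As an illustration, here is a rough sketch of what such a custom health check could look like, using the official Kubernetes and AWS Python clients. The region and the replace-on-NotReady policy are our own assumptions, not an EKS feature:

```python
# A rough sketch of a customer-automated node replacement loop, assuming
# the official kubernetes and boto3 Python clients and a kubeconfig for
# the cluster. The region is a placeholder; a production version would
# cordon and drain nodes before terminating the underlying instance.
import boto3
from kubernetes import client, config

REGION = "us-west-2"  # placeholder

def not_ready_nodes(v1):
    """Yield nodes whose Ready condition is anything other than True."""
    for node in v1.list_node().items:
        for cond in node.status.conditions or []:
            if cond.type == "Ready" and cond.status != "True":
                yield node

def main():
    config.load_kube_config()  # or config.load_incluster_config()
    v1 = client.CoreV1Api()
    ec2 = boto3.client("ec2", region_name=REGION)
    for node in not_ready_nodes(v1):
        # EKS nodes carry "aws:///<zone>/<instance-id>" in spec.providerID.
        instance_id = node.spec.provider_id.rsplit("/", 1)[-1]
        print(f"Replacing unhealthy node {node.metadata.name} ({instance_id})")
        # Terminating the instance lets the node group's autoscaling group
        # launch a replacement.
        ec2.terminate_instances(InstanceIds=[instance_id])

if __name__ == "__main__":
    main()
```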

There has been some leveling off between providers when it comes to service level agreements. All three offer an uptime of 99.95%; however, EKS provides this by default, while AKS and GKE require additional costs or regional clusters to achieve the same uptime. EKS, and now GKE, charge for control plane usage at $0.10/cluster/hour. That amount will make up a negligible part of the total cost for all but the smallest clusters, and all three providers now offer financially-backed SLAs and refund SLA penalties. Although those penalties rarely compare to the loss of potential productivity or revenue suffered during a provider outage, publishing them can bring a greater degree of confidence, real or perceived, in the seriousness of a provider’s commitment to reliability and uptime.

While pods and nodes running in a Kubernetes cluster can survive outages of the control plane and its components, even short-lived interruptions can be problematic for some workloads. Depending on the affected control plane components, failed pods may not get rescheduled, or clients may not connect to the cluster API to perform queries or manage resources in the cluster. If the etcd database loses quorum (assuming it has been deployed as a highly-available cluster) or experiences severe data corruption or loss, the Kubernetes cluster may become unrecoverable.

Lastly, GKE supports a variety of operating systems (OS) and container runtimes. Along with Windows and Linux node support, GKE offers Container-Optimized OS (COS), a slimmed-down but hardened Linux distribution that allows for quicker container deployments and scaling. GKE also supports Docker, containerd, and gVisor as container runtime options.

Service Limits

Limits are per account (AWS), subscription (Azure), or project (GCP) unless otherwise noted. Limits for which the customer can request an increase are indicated with an asterisk (*).

Max clusters

  • EKS: 100/region*
  • AKS: 100
  • GKE: 50/zone + 50 regional clusters

Max nodes per cluster

  • EKS: 30 (managed node groups) × 100 (max nodes per group) = 3,000*
  • AKS: 1,000
  • GKE: 5,000
  • Kubernetes (as of v1.19): 5,000

Max nodes per node pool/group

  • EKS: Managed node groups: 100*
  • AKS: 100
  • GKE: 1,000

Max node pools/groups per cluster

  • EKS: Managed node groups: 30*
  • AKS: 10
  • GKE: Not documented

Max pods per node

  • EKS: Varies by instance type (see comments below)
  • AKS: 30 (Azure CNI default); 110 (kubenet default)
  • GKE: 110 (default)
  • Kubernetes (as of v1.19): 100 (recommended value; configurable)

Comments

While most of these limits are relatively straightforward, a couple are not.

In AKS, the absolute maximum number of nodes that a cluster can have depends on a few configurations, including whether the nodes are in a VM Scale Set or an Availability Set, and whether cluster networking uses kubenet or the Azure CNI. Even then, it is still unclear which number takes absolute precedence for specific configurations.

Meanwhile, in EKS, planning for the maximum number of pods scheduled on a Linux node requires some research and math. EKS clusters use the AWS VPC CNI for cluster networking. This CNI puts pods directly on the VPC network by using ENIs (Elastic Network Interfaces), virtual network devices attached to EC2 instances. Different EC2 instance types support different numbers of ENIs and different numbers of IP addresses (one is needed per pod) per ENI. Therefore, to determine how many pods a particular EC2 instance type can run in an EKS cluster, you would get the values from this table and plug them into this formula: ((# of IPs per Elastic Network Interface - 1) × # of ENIs) + 2. A c5.12xlarge EC2 instance, which can support 8 ENIs with 30 IPv4 addresses each, can therefore accommodate up to ((30 - 1) × 8) + 2 = 234 pods. Note that large nodes with the maximum number of scheduled pods will eat up the /16 IPv4 CIDR block of the cluster’s VPC very quickly. Pod limits for Windows nodes in EKS are easier to compute and much lower. Here, use the formula: # of IP addresses per ENI - 1. The same c5.12xlarge instance that could run as many as 234 pods as a Linux node could only run 29 pods as a Windows node.
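
The formulas are easy to encode; here is a small Python sketch (the function names are ours, and the ENI and IP counts must be looked up per instance type in AWS’s limits table):

```python
# The two max-pods formulas from the text as Python functions. The
# c5.12xlarge values below are the ones used in the example above.

def eks_max_pods_linux(enis, ips_per_eni):
    # One IP on each ENI is reserved for the ENI's own primary address;
    # the +2 is part of AWS's published formula.
    return (ips_per_eni - 1) * enis + 2

def eks_max_pods_windows(ips_per_eni):
    # Windows nodes can only use IP addresses from the primary ENI.
    return ips_per_eni - 1

print(eks_max_pods_linux(enis=8, ips_per_eni=30))  # c5.12xlarge: 234
print(eks_max_pods_windows(ips_per_eni=30))        # c5.12xlarge: 29
```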

GKE sizes the pod IP range based on the addresses allocatable on a worker node. Each node receives a /24 range, giving 256 allocatable addresses; capping the node at 110 pods leaves more than twice as many addresses as pods, which allows for quick and more reliable scaling.

Networking + Security

Network plugin/CNI

  • EKS: Amazon VPC Container Network Interface (CNI)
  • AKS: Azure CNI or kubenet
  • GKE: kubenet (default); external CNIs can be added

Kubernetes RBAC

  • EKS: Required; immutable after cluster creation
  • AKS: Enabled by default; immutable after cluster creation
  • GKE: Enabled by default; mutable after cluster creation
  • Kubernetes: Supported since 2017

Kubernetes Network Policy

  • EKS: Not enabled by default; Calico can be manually installed at any time
  • AKS: Not enabled by default; must be enabled at cluster creation time (kubenet: Calico; Azure CNI: Calico or Azure Policy)
  • GKE: Not enabled by default; Calico can be enabled at any time
  • Kubernetes: Not enabled by default; a CNI implementing the Network Policy API can be installed manually

PodSecurityPolicy support

  • EKS: PSP controller installed in all clusters with a permissive default policy (v1.13+)
  • AKS: PSP can be installed at any time; will be deprecated in favor of Azure Policy
  • GKE: PSP can be installed at any time
  • Kubernetes: PSP admission controller needs to be enabled via a kube-apiserver flag

Private or public IP address for cluster Kubernetes API

  • EKS: Public by default; private-only address optional
  • AKS: Public by default; private-only address optional
  • GKE: Public by default; private-only address optional

Public IP addresses for nodes

  • EKS: Unmanaged node groups: optional; managed node groups: required
  • AKS: No
  • GKE: No

Pod-to-pod traffic encrypted by cloud

  • EKS: No
  • AKS: No
  • GKE: No
  • Kubernetes: No

Firewall for cluster Kubernetes API

  • EKS: CIDR allow list option
  • AKS: CIDR allow list option
  • GKE: CIDR allow list option

Read-only root filesystem on node

  • EKS: Pod security policy required
  • AKS: Azure policy required
  • GKE: COS: default; pod security policy required
  • Kubernetes: Supported

Comments

All three providers now deploy with Kubernetes RBAC enabled by default, a big win in the security column. By making RBAC mandatory, EKS maintains its strategy of implementing core Kubernetes security controls as standard in every cluster. EKS also ships Pod Security Policy support with a permissive policy by default. Conversely, AKS makes security harder to manage by requiring network policies to be enabled at cluster creation time: users who decide to adopt these Kubernetes-native security controls later must migrate their workloads to a new cluster to take advantage of them.

EKS requires customers to install Calico themselves and to manage its upgrades. AKS provides two options for Network Policy support, depending on the cluster network type, but only allows enabling support at cluster creation time. AKS also provides additional policy management features via Azure Policy, which seems promising for hardening AKS clusters.

All three cloud providers now offer a few options for limiting network access to the Kubernetes API endpoint of a cluster. However, even with Kubernetes RBAC and a secure authentication method enabled, leaving the API server open to the world gives attackers a much larger surface for gaining access to the cluster. Applying a CIDR allowlist, or giving the API a private, internal IP address rather than a public address, also protects against scenarios such as compromised cluster credentials.
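
As an example, here is a minimal boto3 sketch that applies such a CIDR allowlist to an EKS cluster’s public endpoint. The cluster name, region, and CIDR are placeholders; AKS (API server authorized IP ranges) and GKE (master authorized networks) offer equivalent controls:

```python
# A minimal sketch, using boto3's EKS client, of restricting a cluster's
# public API endpoint to an allowlisted CIDR while also enabling the
# private (in-VPC) endpoint. All names and values are placeholders.
import boto3

eks = boto3.client("eks", region_name="us-west-2")  # placeholder region

eks.update_cluster_config(
    name="my-cluster",  # placeholder cluster name
    resourcesVpcConfig={
        "endpointPublicAccess": True,
        "publicAccessCidrs": ["203.0.113.0/24"],  # e.g., office/VPN range
        "endpointPrivateAccess": True,            # in-VPC access to the API
    },
)
```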

EKS introduced managed node groups at re:Invent in December 2019. While managed node groups remove a fair bit of the work previously required to create and maintain an EKS cluster, they come with a distinct disadvantage for node network security: all nodes in a managed node group must have a public IP address and must be able to send traffic out of the VPC, which makes effectively restricting egress traffic from the nodes more difficult. While external access to these public addresses can be protected with proper security group rules and network ACLs, they still pose a severe risk if the customer misconfigures or does not restrict the network controls of a cluster’s VPC. This risk can be mitigated somewhat by placing the nodes only on private subnets.

Container Image Services

Image repository service

  • EKS: ECR (Elastic Container Registry)
  • AKS: ACR (Azure Container Registry)
  • GKE: GCR (Google Container Registry)

Supported formats

  • ECR: Docker Image Manifest V2, OCI
  • ACR: Docker Image Manifest V2, OCI artifacts, Helm charts
  • GCR: Docker Image Manifest V2, OCI

Access security

  • ECR: AWS IAM policies
  • ACR: Azure Active Directory identities and role-based access control
  • GCR: Cloud IAM roles

Supports image signing

  • ECR: No
  • ACR: Yes
  • GCR: No; Google is working to implement Binary Authorization with Artifact Registry instead

Supports immutable image tags

  • ECR: Yes
  • ACR: Yes; also supports locking of images and repositories
  • GCR: No

Image scanning service

  • ECR: Yes; free service, OS packages only
  • ACR: Yes; paid service, uses the Qualys scanner in a sandbox to check for vulnerabilities
  • GCR: Yes; paid service, OS packages only

Registry SLA

  • ECR: 99.9%; financially backed
  • ACR: 99.9%; financially backed
  • GCR: None

Geo-redundancy

  • ECR: No; ECR is a regional service
  • ACR: Yes; configurable as part of the premium tier
  • GCR: Yes, by default

Comments

All three cloud providers offer similar container image registry services at the moment. This year, we have seen massive outages from some third-party hosted services, so it is always useful to assess your dependencies and security strategies.

Amazon’s and Azure’s container registry services are relatively similar. Amazon’s Elastic Container Registry (ECR) is a tiered, paid service that provides a financially backed SLA, a free image scanning service, and features such as immutable image tags. Azure Container Registry (ACR) is also a tiered, paid service that provides a financially backed SLA, a paid image scanning service based on the Qualys scanner, and features such as image signing, immutable tags, and the locking of images and repositories.

The pricing models differ between the two: ECR charges for data leaving the region, which can become an issue when companies use multi-regional clusters. ECR is not geo-redundant, so users have to set up repositories in their respective regions and automate syncing between them. This approach cuts down on cost, since no data leaves the region, but significantly increases complexity. ACR is priced at a daily rate based on the amount of storage required but does not charge for network bandwidth. Microsoft also offers geo-redundancy as part of its premium plan.
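
As a sketch of what checking that sync might look like, the following boto3 snippet compares image digests between two regional ECR repositories. The repository name and regions are placeholders:

```python
# A rough sketch, using boto3, of auditing whether two manually synced
# regional ECR repositories have drifted apart.
import boto3

REPO = "my-app"  # placeholder repository name

def image_digests(region):
    """Return the set of image digests stored in REPO in one region."""
    ecr = boto3.client("ecr", region_name=region)
    digests = set()
    for page in ecr.get_paginator("list_images").paginate(repositoryName=REPO):
        digests.update(img["imageDigest"] for img in page["imageIds"])
    return digests

primary = image_digests("us-east-1")
replica = image_digests("us-west-2")
missing = primary - replica
print(f"{len(missing)} image(s) have not been synced to the replica region")
```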

Google is currently moving away from its existing registry, Google Container Registry, toward a complete Artifact Registry product. Google has chosen to focus on more supported image formats, integrated image scanning, and Binary Authorization for a more secure offering.

Notes on Data and Sources

This post’s information should be considered a snapshot of these Kubernetes services at the time of publication. Supported Kubernetes versions, in particular, will change regularly. Features currently in preview (EKS and AKS terminology) or beta (GKE terminology) at this time are marked as such and may change before becoming generally available.

All data in the tables comes from the official provider online documentation (kubernetes.io in the case of open-source Kubernetes), supplemented in some cases by inspection of running clusters and service API queries. (Cloud Native Computing Foundation conformance data is an exception.) This information, particularly for supported Kubernetes versions, may be specific to regions in the US; availability may vary in other regions. Values for open-source Kubernetes are omitted where they are either specific to a managed service or depend on how and where a self-managed cluster is deployed.

We also do not attempt to make comparisons of pricing in most cases. Even for a single provider, pricing of resources can vary wildly between regions, and even if we came up with a standard sample cluster size and workload, the ratios of the costs might not be proportional for a different configuration. In particular, note that some optional features like logging, private network endpoints for services, and container image scanning may incur additional costs in some clouds.

We also do not address performance differences between providers; too many variables come into play for meaningful benchmarking here. If you need accurate numbers, run your own tests comparing the various compute, storage, and network options of each provider, together with your application stack; that will provide the most accurate data for your needs.

All attempts have been made to ensure the completeness and accuracy of this information. However, errors or omissions may exist due to unclear or missing provider documentation or due to errors on our part.

