
What’s New in Kubernetes 1.17: A Deeper Look at New Features

The release of Kubernetes 1.17 introduces several powerful new features and sees others maturing toward or into general availability. This recap provides a rundown of some of the most notable changes, which include:

  • major improvements in cluster network and routing controls and scalability;
  • new capabilities in cluster storage, pod scheduling and runtime options; and
  • better custom resource support.

Note that to try out these features, you will need to have access to a cluster running Kubernetes 1.17 and, in some cases, the ability to set feature gates for Kubernetes components. Managed Kubernetes clusters typically do not support the latest Kubernetes release or allow users to enable alpha or beta features. You can use tools such as kubeadm or kops to install specific versions of Kubernetes on bare metal or your cloud provider’s virtual machine service.

EndpointSlice API

  • Graduating Status: Beta
  • Kubernetes API Group/Component: discovery.k8s.io
  • Expected GA Release: 1.19+
  • Kubernetes Enhancement Proposal or Design Doc: KEP
  • How to Try It: Instructions for enabling.

The EndpointSlice API joins the existing Endpoints API to allow management of and interaction with service destinations on the cluster network. The venerable Endpoints API and its controller hit severe scalability issues in very large or complex clusters. The new EndpointSlice API was designed to run alongside the existing API, with the potential to replace it eventually. It also provides an extensible framework that can adapt to evolving Kubernetes use cases and ecosystems: service meshes, multi-cluster topologies, and other frameworks that need to be tightly coupled to cluster service discovery and network topology. The new API also supports dual-stack IPv4/IPv6 addressing for endpoints in preparation for wider cluster dual-stack support.
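As a rough sketch of the new resource shape, an EndpointSlice groups a subset of a service's endpoints with a shared address type and port list. The names (`example`, `node-1`, the address) below are hypothetical; the slice is normally created and maintained by the EndpointSlice controller rather than by hand:

```yaml
# Hypothetical EndpointSlice for a Service named "example" (discovery.k8s.io/v1beta1 in 1.17)
apiVersion: discovery.k8s.io/v1beta1
kind: EndpointSlice
metadata:
  name: example-abc
  labels:
    # Links the slice back to its owning Service
    kubernetes.io/service-name: example
addressType: IPv4
ports:
  - name: http
    protocol: TCP
    port: 80
endpoints:
  - addresses:
      - "10.1.2.3"
    conditions:
      ready: true
    topology:
      kubernetes.io/hostname: node-1
```

Because each slice holds only a bounded subset of a service's endpoints, updates touch a small object instead of one monolithic Endpoints resource, which is the core of the scalability improvement.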


IPv4/IPv6 Dual-Stack Support

  • Status: Alpha (ongoing major change)
  • Kubernetes API Group/Component: multiple
  • Expected GA Release: unknown
  • Kubernetes Enhancement Proposal or Design Doc: KEP
  • How to Try It: Instructions for enabling. Dual-stack support also requires any Container Network Interface (CNI) plugins in use to support and honor the changes. (Support in the Kubenet plugin ships with this release.)

While Kubernetes added support to use IPv6 in pod networks starting in v1.9, cluster networks still had to use either IPv4 or IPv6. However, many production environments have a real need to be addressable by IPv4 and IPv6 simultaneously, requiring NAT gateways or proxies to bridge the gap between networks using different protocols. This enhancement moves to add true dual-stack support to the Kubernetes pod and node network layers, meaning nodes and pods can now be addressable by both an IPv4 address and an IPv6 address for the same resource. (Note that the Kubernetes Service networks still support only single-stack networking, although the protocol can be either IPv4 or IPv6. Dual-stack support for Kubernetes Services is not in scope for this enhancement.)

Dual-stack support was initially introduced in Kubernetes 1.16, but this enhancement is considered a major change to core Kubernetes functionality, so the incremental introduction of the underlying changes will span multiple releases, working toward graduation to beta status. Updates for the 1.17 release include the ability to set IPv6 netmasks and pod IP address validation support.
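On a cluster with the dual-stack feature gate enabled and a supporting CNI plugin, the effect is visible in the pod status: the singular podIP field is joined by a podIPs list carrying one address per family. The addresses below are illustrative only:

```yaml
# Illustrative pod status excerpt on a dual-stack cluster
# (requires the IPv6DualStack feature gate and a CNI plugin that supports it)
status:
  podIP: 10.244.1.4          # kept for backward compatibility; matches podIPs[0]
  podIPs:
    - ip: 10.244.1.4         # IPv4 address
    - ip: fd00:10:244:1::4   # IPv6 address for the same pod
```

Existing consumers of podIP keep working, while dual-stack-aware components can read both families from podIPs.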

PersistentVolume Snapshot Backup/Restore Support in CSI

  • Graduating Status: Beta
  • Kubernetes API Group/Component: snapshot.storage.k8s.io
  • Expected GA Release: 1.19+
  • Kubernetes Enhancement Proposal or Design Doc: KEP
  • How to Try It: Volume Snapshots require using a Container Storage Interface (CSI) plugin that implements this API.

While volume snapshots, particularly in highly virtualized environments, are not always a foolproof and reliable method for data backup, they are widely used because of a common industry need for zero downtime backups of increasingly large data sets. (For explanations on why snapshots make unreliable backups, try searching for “snapshots are not backups.”) This enhancement adds API support for Kubernetes Container Storage Interface plugins to create snapshots of PersistentVolumes and to restore them.

To use volume snapshots on any platform as a reliable medium for critical data restores, the user must take a number of steps, from the application level through the host server’s operating system and down to the actual storage hardware, to ensure data consistency. Snapshots taken before all in-memory data is flushed to the storage medium can cause unrecoverable corruption of the snapshot’s file system.
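Assuming a CSI driver that implements the snapshot API, taking and restoring a snapshot is expressed with two objects: a VolumeSnapshot pointing at an existing PersistentVolumeClaim, and a new PVC using that snapshot as its dataSource. The class, claim, and size names below are hypothetical:

```yaml
# Snapshot an existing PVC (snapshot.storage.k8s.io/v1beta1 in 1.17)
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: data-snapshot
spec:
  volumeSnapshotClassName: csi-example-snapclass   # hypothetical snapshot class
  source:
    persistentVolumeClaimName: data-pvc            # hypothetical existing claim
---
# Restore: a new PVC provisioned from the snapshot contents
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-restore
spec:
  dataSource:
    name: data-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```

Note that this only covers the Kubernetes API plumbing; the application-level quiescing described above is still the user's responsibility.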

Topology-Aware Service Routing

  • Graduating Status: Alpha
  • Kubernetes API Group/Component: k8s.io (core)
  • Expected GA Release: unknown
  • Kubernetes Enhancement Proposal or Design Doc: KEP
  • How to Try It: Instructions for enabling.

Running large-scale production clusters brings a number of challenges, including balancing the need for redundant, highly available services against minimizing request latency. Whether the service runs on a cloud provider, which encourages users to spread application instances across multiple zones to avoid single points of failure, or an on-premises data center, the network distance and response time between application instances can vary wildly. Add the fact that cloud providers such as AWS charge for network traffic between availability zones, and cluster administrators have a great deal of motivation to try to keep requests within their distributed application stacks as “local” as possible.

The addition of support for topology-aware routing in the Kubernetes ServiceSpec addresses that need for network topology routing controls. This feature will allow users to specify node labels to use for prioritizing selection for routing a request among the pods backing the target service.
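In the 1.17 alpha, this takes the form of an ordered topologyKeys list on the ServiceSpec: kube-proxy tries each key in order and routes only to endpoints matching the client node's label value, falling back to the next key, with "*" meaning "any endpoint". A sketch, assuming the ServiceTopology feature gate is enabled:

```yaml
# Prefer same-node endpoints, then same-zone, then any (alpha in 1.17)
apiVersion: v1
kind: Service
metadata:
  name: my-service        # hypothetical service name
spec:
  selector:
    app: my-app           # hypothetical label selector
  ports:
    - port: 80
  topologyKeys:
    - "kubernetes.io/hostname"        # 1. endpoints on the same node
    - "topology.kubernetes.io/zone"   # 2. endpoints in the same zone
    - "*"                             # 3. fall back to any endpoint
```

Omitting the final "*" makes the routing strict: if no endpoint matches any listed key, the request fails rather than crossing the topology boundary.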

Node Taints by Condition

  • Graduating Status: Stable/GA
  • Kubernetes API Group/Component: scheduler, node controllers
  • Kubernetes Enhancement Proposal or Design Doc: KEP
  • How to Try It: This feature is enabled by default in 1.17.

Automatic tainting of nodes by node condition graduates to a stable release. These taints allow users to decide which node conditions the Kubernetes scheduler should honor or ignore when placing application pods.
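For example, a node under memory pressure is automatically tainted with node.kubernetes.io/memory-pressure, and the scheduler will avoid it unless a pod explicitly tolerates that taint. A sketch of a pod that opts in (the image name is hypothetical):

```yaml
# A pod that tolerates the automatic memory-pressure taint,
# so the scheduler may still place it on affected nodes
apiVersion: v1
kind: Pod
metadata:
  name: monitoring-agent
spec:
  tolerations:
    - key: "node.kubernetes.io/memory-pressure"
      operator: "Exists"
      effect: "NoSchedule"
  containers:
    - name: agent
      image: example.com/agent:latest   # hypothetical image
```

This pattern is common for monitoring and node-management daemons that must run even on degraded nodes.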

API Defaulting for Custom Resources

  • Graduating Status: Stable/GA
  • Kubernetes API Group/Component: apiextensions.k8s.io
  • Kubernetes Enhancement Proposal or Design Doc: KEP
  • How to Try It: This feature is enabled by default in 1.17.

This feature adds support for supplying default values for Kubernetes Custom Resource Definitions (CRDs) through the OpenAPI v3 validation schema. Support for handling default values during processing of API requests already existed for core Kubernetes APIs, but the lack of defaulting support in Custom Resources made CRD API version changes more cumbersome for developers and users.
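Defaults are declared with the default keyword inside the openAPIV3Schema of a served version. A minimal sketch, using a hypothetical Widget resource:

```yaml
# Hypothetical CRD whose spec.replicas field defaults to 1 when omitted
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  names:
    kind: Widget
    plural: widgets
    singular: widget
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                replicas:
                  type: integer
                  default: 1   # applied server-side if the field is omitted
```

The API server applies the default when persisting the object, so controllers reading the resource always see a concrete value rather than having to special-case missing fields.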

Finalizer Protection for Service Load Balancers

  • Graduating Status: Stable/GA
  • Kubernetes API Group/Component: service controller
  • Kubernetes Enhancement Proposal or Design Doc: KEP
  • How to Try It: This feature is enabled by default in 1.17.

Service load balancer finalizer protection ensures that any load balancer resources allocated for a Kubernetes Service object will be destroyed or released when the service is deleted. In case of a failure to release a load balancer resource, the deletion of the Service object will also fail. This change addresses a frequent issue users were seeing with load balancers being silently left over even after the Service they were attached to was deleted from the Kubernetes cluster, translating to higher cloud provider bills or depletion of finite resources in on-premises clusters.
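Under the hood this uses the standard Kubernetes finalizer mechanism: the service controller attaches a finalizer to LoadBalancer-type Services, and deletion of the object blocks until the controller removes it after cleaning up the cloud resources. An illustrative excerpt (the finalizer is added automatically, not by the user):

```yaml
# Excerpt of a LoadBalancer Service as the controller maintains it;
# the finalizer delays deletion until cloud resources are released
apiVersion: v1
kind: Service
metadata:
  name: web                  # hypothetical service name
  finalizers:
    - service.kubernetes.io/load-balancer-cleanup
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
```

A deleted Service therefore lingers in a Terminating state until the load balancer is confirmed gone, instead of disappearing while the cloud resource quietly survives.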

Process Namespace Sharing in Pods

  • Graduating Status: Stable/GA
  • Kubernetes API Group/Component: core (pod), Container Runtime Interface (CRI)
  • Kubernetes Enhancement Proposal or Design Doc: KEP
  • How to Try It: This feature is enabled by default in 1.17.

This feature allows the containers in a pod to share a single process namespace. When the field shareProcessNamespace is set to true for a pod, all containers in that pod operate in a shared (Linux) process namespace, simplifying interprocess signaling and application debugging.
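A common use is attaching a debugging sidecar that can see and signal the main application's processes. A sketch (container images are illustrative):

```yaml
# Pod whose containers share one PID namespace; the "debug" sidecar
# can list and signal processes from the "app" container
apiVersion: v1
kind: Pod
metadata:
  name: shared-pid-demo
spec:
  shareProcessNamespace: true
  containers:
    - name: app
      image: nginx
    - name: debug
      image: busybox
      command: ["sleep", "3600"]
      securityContext:
        capabilities:
          add: ["SYS_PTRACE"]   # allows tools like gdb/strace across containers
```

From a shell in the debug container, `ps ax` would show the nginx processes alongside its own, since all containers now see the same PID namespace.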


As adoption of Kubernetes increases, so does the diversity of workloads and user requirements and environments. A number of these enhancements address the needs of users in large-scale, critical production environments. The Kubernetes steering committee and the community groups are continuing to balance giving users the controls they need while still maintaining a flexible, customizable platform that can be adapted to many different organizations and workloads.

For insights on best practices for securing Kubernetes, please see our Definitive Guide to Kubernetes Security.

