Mar 23, 2020

What's New in Kubernetes 1.18? New Features and Updates

By: Karen Bruner

The release of Kubernetes version 1.18 comes at an interesting time, to say the least. The Kubernetes release team has done an amazing job of pushing out the new version despite all the turmoil and uncertainty caused by the spread of COVID-19, which affects the global Kubernetes developer community just like everyone else.

The release features a number of new enhancements and changes. New and maturing features include enhanced security options, improved support for Windows, multiple extensions to the Container Storage Interface, and more. We will cover a few highlights from these changes and enhancements.

Breaking Changes

Version 1.18 includes several backwards-incompatible changes that users and developers need to know about before upgrading.

kubectl Endpoint

  • Kubernetes Enhancement Proposal, Design Doc, or Pull Request: PR

kubectl no longer defaults to http://localhost:8080 as the Kubernetes API server endpoint, a change meant to encourage secure HTTPS connections. Users must now set their cluster endpoint explicitly.
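
For example, the endpoint can be set in a kubeconfig file (or per invocation with kubectl's --server flag). A minimal kubeconfig sketch, with placeholder names, address, and credential paths:

    apiVersion: v1
    kind: Config
    clusters:
      - name: example-cluster
        cluster:
          server: https://203.0.113.10:6443        # explicit HTTPS endpoint
          certificate-authority: /etc/kubernetes/pki/ca.crt
    users:
      - name: example-admin
        user:
          client-certificate: /etc/kubernetes/pki/admin.crt
          client-key: /etc/kubernetes/pki/admin.key
    contexts:
      - name: example-context
        context:
          cluster: example-cluster
          user: example-admin
    current-context: example-context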

KMS Configuration

  • Kubernetes Enhancement Proposal, Design Doc, or Pull Request: PR

Cluster administrators can choose to use a third-party Key Management Service (KMS) provider as one option for encrypting Kubernetes secrets at rest in the etcd data store backing the cluster. The KMS provider uses envelope encryption: a data encryption key (DEK) encrypts the secrets, and Kubernetes stores a KMS-encrypted copy of the DEK locally. When the kube-apiserver needs to encrypt or decrypt a Secret object, it sends the encrypted DEK to the KMS provider for decryption. Kubernetes never persists the decrypted DEK to storage.

Release 1.18 makes several changes to the KMS provider interface used for EncryptionConfiguration resources. The CacheSize field no longer accepts 0 as a valid value; the CacheSize type changes from int32 to *int32; and validation of the Unix domain socket for the KMS provider endpoint now happens when the EncryptionConfiguration is loaded.
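
For reference, here is a sketch of an EncryptionConfiguration that uses a KMS provider; the plugin name and socket path are placeholders:

    apiVersion: apiserver.config.k8s.io/v1
    kind: EncryptionConfiguration
    resources:
      - resources:
          - secrets
        providers:
          - kms:
              name: example-kms-plugin             # placeholder plugin name
              endpoint: unix:///var/run/kms.sock   # must be a valid Unix domain socket, now checked at load time
              cachesize: 1000                      # 0 is no longer accepted in 1.18
              timeout: 3s
          - identity: {}                           # fallback for reading unencrypted data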

Streaming Node Endpoints

  • Kubernetes Enhancement Proposal, Design Doc, or Pull Request: KEP

To simplify the configuration and security of Kubernetes API calls that involve streaming connections to containers, this change deprecates two streaming configurations.

  • The kubelet --redirect-container-streaming flag, which determines whether the kubelet should proxy container connection requests to the container runtime itself or pass the CRI endpoint back to the API server for direct connections.
  • The feature gate StreamingProxyRedirects, which determines the behavior of the API server when it gets a redirect from the kubelet.

Enhancements

Raw Block Support for Persistent Volumes

  • Release Status: Stable/GA
  • Kubernetes API Group or Component: core, Container Storage Interface (CSI)
  • Kubernetes Enhancement Proposal or Design Doc: original design doc, KEP, KEP
  • How to Try It: This feature is enabled by default in 1.18. Also requires CSI driver support.

Kubernetes persistent volumes default to giving containers in a pod access to the volume by mounting the filesystem, a suitable method for the majority of applications and use cases. However, some applications require direct access to the storage block device, notably certain databases that use their own storage format for increased performance.

This enhancement allows users to request a persistent volume as a block device where supported by the CSI and underlying storage provider. In the corresponding pod’s container specification, users can set the device path which the container’s application can use to access the block device.
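
As a sketch, with illustrative names: the claim requests volumeMode: Block, and the pod maps the volume to a device path with volumeDevices instead of volumeMounts:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: raw-block-pvc              # illustrative name
    spec:
      accessModes: ["ReadWriteOnce"]
      volumeMode: Block                # request a raw block device, not a filesystem
      resources:
        requests:
          storage: 10Gi
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: block-consumer
    spec:
      containers:
        - name: app
          image: example.com/db:latest   # placeholder image
          volumeDevices:                 # device mapping instead of volumeMounts
            - name: data
              devicePath: /dev/xvda      # path the application uses for raw access
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: raw-block-pvc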

Horizontal Pod Autoscaling Rate Controls

  • Release Status: alpha
  • Kubernetes API Group or Component: autoscaling
  • Expected GA Release: Unknown
  • Kubernetes Enhancement Proposal or Design Doc: KEP
  • How to Try It: Ungated

The horizontal pod autoscaling (HPA) API allows users to configure the automatic addition and removal of pods in a replica set based on various metric values. This enhancement adds an optional behavior field to the HorizontalPodAutoscaler resource type. Users can set the scale-up and scale-down rates, enabling them to customize the HPA behavior for different applications. For example, an application like a web server which sometimes gets sudden spikes in traffic may require adding new pods very quickly.

Because web servers are generally stateless, pods could also be removed quickly when the traffic subsides. On the other hand, users may want to slow the scale-down for deployments with a higher initialization overhead, e.g., containers running Java.
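
A sketch of the new behavior field for a hypothetical web Deployment; the policy values are arbitrary, chosen to illustrate fast scale-up and conservative scale-down:

    apiVersion: autoscaling/v2beta2
    kind: HorizontalPodAutoscaler
    metadata:
      name: web-hpa                        # illustrative name
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: web
      minReplicas: 2
      maxReplicas: 20
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 60
      behavior:
        scaleUp:
          policies:                        # allow doubling every 15 seconds on traffic spikes
            - type: Percent
              value: 100
              periodSeconds: 15
        scaleDown:
          stabilizationWindowSeconds: 300  # wait five minutes before scaling down
          policies:                        # then remove at most two pods per minute
            - type: Pods
              value: 2
              periodSeconds: 60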

Pod Topology Spread

  • Release Status: beta
  • Kubernetes API Group or Component: core
  • Expected GA Release: Unknown
  • Kubernetes Enhancement Proposal or Design Doc: KEP
  • How to Try It: Prerequisites

Cloud providers and many on-premises environments offer multiple zones or other topological divisions that provide redundancy in case of a localized failure. For applications to benefit from the independent availability of multiple failure zones, replicas need to be deployed to multiple zones. However, the default Kubernetes scheduler had no awareness or options for spreading a replica set’s pods across zones.

This feature adds an optional topologySpreadConstraints field to the pod specification. Users can select node labels to use for identifying these domains and configure the tolerance and evenness for replica placement.
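
A sketch of a pod spec using the new field, with an illustrative app: web label; topology.kubernetes.io/zone is the well-known node label for zones:

    apiVersion: v1
    kind: Pod
    metadata:
      name: web-pod
      labels:
        app: web
    spec:
      topologySpreadConstraints:
        - maxSkew: 1                                 # zones may differ by at most one matching pod
          topologyKey: topology.kubernetes.io/zone   # node label that defines the spread domains
          whenUnsatisfiable: DoNotSchedule           # hard constraint; ScheduleAnyway makes it soft
          labelSelector:
            matchLabels:
              app: web                               # which pods to count when balancing
      containers:
        - name: web
          image: nginx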

Immutable Secrets and ConfigMaps

  • Release Status: alpha
  • Kubernetes API Group or Component: core, kubelet
  • Expected GA Release: 1.21
  • Kubernetes Enhancement Proposal or Design Doc:
  • How to Try It: ImmutableEphemeralVolumes feature gate

Currently, Secret and ConfigMap objects mounted in a container periodically get updated with the new object value if the associated Kubernetes resource gets changed. In most cases, that behavior is desirable. Pods do not need to be restarted to see the new value, and if a workload only needs the startup value, it can read it once and ignore future changes.

Some use cases may benefit from preserving the secret or config map data as it was at the pod’s start time. Making the data available in the mounted volume immutable protects applications from potential errors in updates to the underlying Kubernetes object. It also reduces the load on the kubelet and the kube-apiserver, because the kubelet no longer has to poll the Kubernetes API for changes to immutable objects.

This change adds the optional ability to make Secret and ConfigMap objects immutable through the new immutable field in their specifications. A resource created as immutable can no longer be updated, except for metadata fields. Users will need to delete an existing resource and recreate it with new data to make changes. If users do replace an object with new values, they will need to replace all running pods using those mounts, because existing pods will not get updates for the new data.
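
A sketch of an immutable ConfigMap; the name and data are placeholders, and the same top-level immutable field works for Secret objects (behind the ImmutableEphemeralVolumes feature gate in 1.18):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: app-config      # illustrative name
    data:
      mode: production      # placeholder data
    immutable: true         # only metadata can change; updates to data are rejected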

PVC Cloning

  • Release Status: Stable/GA
  • Kubernetes API Group or Component: CSI
  • Kubernetes Enhancement Proposal or Design Doc: KEP
  • How to Try It: This feature is enabled by default in 1.18. Also requires CSI driver support.

The ability to create a persistent volume cloned with the data from an existing persistent volume claim as source graduates to generally available. This feature is supported only via the Container Storage Interface, not by in-tree drivers. In addition, the back-end storage provider and the CSI plugin in use must support creating a volume from an existing volume’s image. Specify a dataSource in a PersistentVolumeClaim to clone from an existing PVC.
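
A sketch of such a clone request, with placeholder names and sizes; the new claim generally must live in the same namespace as its source and request at least as much storage:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: cloned-pvc                # illustrative name
    spec:
      storageClassName: csi-storage   # placeholder CSI-backed storage class
      accessModes: ["ReadWriteOnce"]
      dataSource:
        kind: PersistentVolumeClaim
        name: source-pvc              # existing claim to clone from
      resources:
        requests:
          storage: 10Gi               # must be at least the size of the source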

Note that the exact method of cloning depends on the storage provider. Some providers may not support cloning mounted volumes or volumes attached to a virtual machine. In addition, cloning active volumes creates the possibility of data corruption in the copy.

Kubernetes API Server Egress Proxy

Currently, the kube-apiserver in most Kubernetes clusters uses one of two methods to connect to nodes, pods, and service endpoints in the cluster. In most cases, the server makes a direct connection to the target, but this ability requires a flat network with no overlap between the IP CIDR blocks of the control plane, the nodes, and the cluster’s pod and service network.

The other method, largely used only in Google Kubernetes Engine, creates SSH tunnels from the control plane network to the cluster. The reliability and security of the SSH tunnel method have not held up well. SSH tunnel support in Kubernetes has been deprecated and will be removed altogether in the future.

As a replacement, this feature creates an extensible TCP proxy system for connections from the control plane to endpoints in the cluster. It uses the new Konnectivity service, with a server component in the control plane network and clients deployed as a DaemonSet on the cluster nodes. This architecture simplifies the API server’s code base and opens up the possibility of using a VPN to secure and monitor traffic between the control plane and the nodes, along with other opportunities for customization.
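
As a rough sketch, the API server is pointed at an egress selector configuration (via its --egress-selector-config-file flag) that routes cluster-bound traffic through the Konnectivity server. The schema was still evolving around this release, so the apiVersion and field names below are assumptions based on later versions and may differ in 1.18:

    apiVersion: apiserver.k8s.io/v1beta1        # alpha/beta versions of this schema vary
    kind: EgressSelectorConfiguration
    egressSelections:
      - name: cluster                           # applies to control-plane-to-cluster traffic
        connection:
          proxyProtocol: GRPC                   # talk to the Konnectivity server over gRPC
          transport:
            uds:
              udsName: /etc/kubernetes/konnectivity-server.socket   # placeholder socket path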

We just covered a handful of the enhancements in the 1.18 release, focusing on new features that may be extremely useful to some users, along with changes that highlight the ongoing work to improve the security posture of Kubernetes and to address the complexity of the code base, which raised issues and questions during last year’s security audit. Check out the (soon to be published) official release notes for a complete list of changes. Also, in case you missed it, you can find a great interactive tool for searching Kubernetes release notes at https://relnotes.k8s.io/.