Kubernetes 1.20 has been released! Congratulations to the Kubernetes release team on this newest version, which returned to a shorter release cycle of 11 weeks following the extended cycle for 1.19 earlier this year.

Amid an extremely challenging year and coming on the heels of KubeCon + CloudNativeCon North America, this latest release highlights the rapid pace of innovation and development in the Kubernetes community, with a total of 42 feature enhancements across various stages of maturity advanced in 1.20. In particular, these changes reflect the evolution of the Kubernetes platform with regard to certain technologies, such as the container runtime, as well as a continued focus on improving the end user and operator experience. This post covers notable features and changes in this version of Kubernetes.

Notable Changes

Before providing a rundown on new features in 1.20, I think it is worth pointing out some significant changes that have been made or are coming, namely Docker runtime deprecation and a fix for kubelet exec probe timeouts.

Docker runtime deprecation is coming

With 1.20, the Kubernetes release team is giving the community a heads up that Docker runtime support is going to be removed within the next few releases (currently planned for version 1.22 next year). This is not necessarily surprising given that Docker Engine does not implement the Container Runtime Interface (CRI) for Kubernetes, and some Kubernetes platforms such as Azure Kubernetes Service (AKS) and Red Hat OpenShift have already moved to default to other container runtimes, namely containerd and CRI-O, respectively. Practically, this means that dockershim, the compatibility shim the kubelet currently uses to communicate with the Docker runtime, will be removed in either 1.22 or another release next year. Related to this, the CRI API itself, which has to date been in alpha, continues to be advanced as well.

It is also critical to clarify that Docker images, which are separate from the Docker runtime, can continue to be used with any CRI-compliant runtime - this is because Docker images adhere to the standardized Open Container Initiative (OCI) image format.

If you currently use a managed Kubernetes service or a distribution like OpenShift, your provider will likely help you ensure there is no impact on your environment when moving off the Docker runtime. If you are running open source Kubernetes on your own clusters, you will need to plan for the change in the coming months. Starting in 1.20, the kubelet logs a deprecation warning at startup if it is configured to use the Docker runtime.

For more information, please see this FAQ put out by the Kubernetes community.

Fix for kubelet exec probe timeouts

A notable fix for a longstanding bug is included in 1.20: exec probes now respect the timeoutSeconds field and default to one second if no value is specified. Previously, exec probes ignored the configured timeout and would simply run until the command completed, so this change may require updates to your existing pod specifications.

Note that you do not necessarily have to audit your pod definitions right away: you can revert to the previous behavior by setting the new ExecProbeTimeout feature gate to false. Keep in mind, however, that this feature gate will be removed in a future Kubernetes release.
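
To make the change concrete, here is a minimal sketch of an exec probe using the 1.20-era Go types from k8s.io/api/core/v1 (in later releases the embedded Handler type was renamed ProbeHandler); the health-check command and the five-second timeout are illustrative, not taken from the release itself.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// An exec-style probe. From 1.20 on (with ExecProbeTimeout at its
	// default of true), the probe command is bounded by timeoutSeconds
	// instead of being allowed to run indefinitely.
	probe := corev1.Probe{
		Handler: corev1.Handler{
			Exec: &corev1.ExecAction{
				// Placeholder health-check command.
				Command: []string{"cat", "/tmp/healthy"},
			},
		},
		InitialDelaySeconds: 5,
		PeriodSeconds:       10,
		// Set this explicitly if the command legitimately needs more
		// than the one-second default.
		TimeoutSeconds: 5,
	}

	fmt.Printf("exec probe timeout: %d second(s)\n", probe.TimeoutSeconds)
}
```

If a probe command genuinely takes longer than a second, setting timeoutSeconds explicitly as above is the forward-compatible fix, rather than relying on the feature gate.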

For more information, please see this KEP for details.

Feature Highlights

Now, on to new feature enhancements! 16 have been released in alpha, 15 have graduated to beta, and 11 have graduated to stable.

Graceful node shutdown in alpha

As one example of how this latest release improves the experience for end users, this feature makes the kubelet aware of impending node system shutdowns. Previously, when a node was shut down, running pods did not go through the normal termination lifecycle and could be killed abruptly, leaving workloads and resources in an unexpected state. With this feature, the kubelet can delay the shutdown and terminate pods gracefully, which should leave operators with less to troubleshoot and debug.
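
As a rough sketch of what enabling it might look like, the KEP describes two kubelet configuration fields, shutdownGracePeriod and shutdownGracePeriodCriticalPods; the snippet below uses the Go types from k8s.io/kubelet/config/v1beta1, and the 30- and 10-second windows are arbitrary example values.

```go
package main

import (
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	kubeletconfig "k8s.io/kubelet/config/v1beta1"
)

func main() {
	// Kubelet configuration for graceful node shutdown: the kubelet delays
	// the system shutdown for up to shutdownGracePeriod and reserves the
	// last shutdownGracePeriodCriticalPods of that window for critical pods.
	cfg := kubeletconfig.KubeletConfiguration{
		FeatureGates: map[string]bool{
			// Alpha in 1.20, so the feature gate must be enabled explicitly.
			"GracefulNodeShutdown": true,
		},
		ShutdownGracePeriod:             metav1.Duration{Duration: 30 * time.Second},
		ShutdownGracePeriodCriticalPods: metav1.Duration{Duration: 10 * time.Second},
	}

	fmt.Printf("total shutdown grace period: %s\n", cfg.ShutdownGracePeriod.Duration)
}
```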

For more details, see this PR and KEP.

kubectl debugging graduated to beta

I covered kubectl alpha debug in Kubernetes 1.19, which primarily focused on node debugging. The feature has graduated to beta and now supports several debugging scenarios directly from kubectl: attaching an ephemeral debug container to a running pod, creating a modified copy of a pod for troubleshooting, and opening an interactive shell on a node. You should now use kubectl debug instead of kubectl alpha debug, which is being deprecated.
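
For the running-pod case, kubectl debug works by attaching an ephemeral container to the pod; the sketch below builds such an object with the core/v1 Go types, where the container names and busybox image are placeholders (ephemeral containers themselves are still gated behind the alpha EphemeralContainers feature gate).

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// The kind of ephemeral container kubectl debug attaches to a running
	// pod: an interactive busybox shell aimed at a hypothetical "app"
	// container in the same pod.
	debugContainer := corev1.EphemeralContainer{
		EphemeralContainerCommon: corev1.EphemeralContainerCommon{
			Name:  "debugger",
			Image: "busybox",
			Stdin: true,
			TTY:   true,
		},
		TargetContainerName: "app",
	}

	fmt.Printf("debug container %q targets %q\n",
		debugContainer.Name, debugContainer.TargetContainerName)
}
```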

For more information on using this feature to debug your environment, see here.

Process ID (PID) limiting goes GA

PIDs are a fundamental resource on Linux hosts, and mechanisms are needed to ensure that application pods cannot cause PID exhaustion that prevents host daemons, such as the container runtime or the kubelet, from running. This feature adds support for configuring the kubelet to limit the number of PIDs an individual pod can consume and to reserve PIDs for node system daemons, limiting the potential impact of any one pod on the rest of the node. PID limiting graduates to GA in 1.20 and no longer requires users to enable the SupportNodePidsLimit and SupportPodPidsLimit feature gates.
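
For illustration, the per-pod cap is set through the kubelet's podPidsLimit configuration field; the sketch below uses the Go types from k8s.io/kubelet/config/v1beta1, and the limit of 1024 is an arbitrary example rather than a recommended value.

```go
package main

import (
	"fmt"

	kubeletconfig "k8s.io/kubelet/config/v1beta1"
)

func main() {
	// Cap every pod on this node at 1024 process IDs so a fork bomb in one
	// pod cannot exhaust the PIDs needed by the kubelet, the container
	// runtime, or neighboring pods.
	podPidsLimit := int64(1024)
	cfg := kubeletconfig.KubeletConfiguration{
		PodPidsLimit: &podPidsLimit,
	}

	fmt.Printf("podPidsLimit: %d\n", *cfg.PodPidsLimit)
}
```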

More details on this feature can be found in this PR and KEP.

CronJob updates

CronJobs were originally introduced all the way back in Kubernetes version 1.4 but have not been advanced to stable status despite being widely used, even in production environments. Work has recently been undertaken to make changes that address scalability and other issues, with the goal of graduating this to stable in either version 1.21 or 1.22. Note that to try out the new implementation you will need to enable the CronJobControllerV2 feature flag.
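
For readers who have not used them, a minimal CronJob built with the batch/v1beta1 Go types looks like the sketch below; the name, schedule, and busybox container are placeholders, and the new controller only takes effect when the CronJobControllerV2 gate is enabled on the kube-controller-manager.

```go
package main

import (
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	batchv1beta1 "k8s.io/api/batch/v1beta1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A minimal CronJob that runs a single container every five minutes.
	// In 1.20 the resource is still served from batch/v1beta1.
	cj := batchv1beta1.CronJob{
		ObjectMeta: metav1.ObjectMeta{Name: "example-report"},
		Spec: batchv1beta1.CronJobSpec{
			Schedule: "*/5 * * * *",
			JobTemplate: batchv1beta1.JobTemplateSpec{
				Spec: batchv1.JobSpec{
					Template: corev1.PodTemplateSpec{
						Spec: corev1.PodSpec{
							RestartPolicy: corev1.RestartPolicyOnFailure,
							Containers: []corev1.Container{{
								Name:    "report",
								Image:   "busybox",
								Command: []string{"date"},
							}},
						},
					},
				},
			},
		},
	}

	fmt.Printf("cronjob %q schedule: %s\n", cj.Name, cj.Spec.Schedule)
}
```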

For more details, see the relevant PR and KEP.

IPv4/IPv6 dual stack support reimplemented, still in alpha

I first wrote about Kubernetes adding dual-stack support, which allows both IPv4 and IPv6 addresses to be assigned to pods and services rather than having to choose a single address family for your entire cluster, in version 1.16. We also continued to track this in our updates about subsequent releases. Major work, which has resulted in breaking changes, continues to take place on this feature, which means it still remains in alpha. Natively supporting dual-stack mode is a big deal (with a number of benefits to accommodate different types of Kubernetes workloads), so I expect it will still be some time before we see this graduate to beta and, eventually, stable.
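
To make that concrete, the reimplemented API expresses dual-stack intent through new ipFamilyPolicy and ipFamilies fields on Services; the sketch below uses the core/v1 Go types, with a placeholder name, selector, and port, and since the feature is alpha these fields should be treated as subject to change.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A Service that requests both address families where the cluster
	// supports them, falling back to single-stack otherwise.
	policy := corev1.IPFamilyPolicyPreferDualStack
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "example-web"},
		Spec: corev1.ServiceSpec{
			Selector:       map[string]string{"app": "example-web"},
			IPFamilyPolicy: &policy,
			IPFamilies:     []corev1.IPFamily{corev1.IPv4Protocol, corev1.IPv6Protocol},
			Ports:          []corev1.ServicePort{{Port: 80}},
		},
	}

	fmt.Printf("service %q ipFamilyPolicy: %s\n", svc.Name, *svc.Spec.IPFamilyPolicy)
}
```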

More details are here.

Volume snapshot operations go to stable

We wrote about volume snapshots back when Kubernetes 1.16 was released and how they involve a number of operations that must work reliably to serve as a dependable means of restoring data. Kubernetes has been working for some time to provide the primitives needed for more advanced storage use cases that build on snapshots. This feature provides a standard way to create volume snapshots and handle the associated operations. In 1.20, it finally moved to stable.
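
As a quick illustration of the now-stable snapshot.storage.k8s.io/v1 API, the sketch below builds a VolumeSnapshot as unstructured data, since the resource is served by a CRD rather than the core API; the snapshot class and PVC names are placeholders.

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

func main() {
	// A VolumeSnapshot that captures a point-in-time copy of an existing
	// PersistentVolumeClaim. Class and claim names are placeholders.
	snap := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "snapshot.storage.k8s.io/v1",
		"kind":       "VolumeSnapshot",
		"metadata": map[string]interface{}{
			"name": "example-snapshot",
		},
		"spec": map[string]interface{}{
			"volumeSnapshotClassName": "example-snapclass",
			"source": map[string]interface{}{
				"persistentVolumeClaimName": "example-pvc",
			},
		},
	}}

	pvc, _, _ := unstructured.NestedString(snap.Object, "spec", "source", "persistentVolumeClaimName")
	fmt.Printf("%s %s snapshots PVC %q\n", snap.GetAPIVersion(), snap.GetKind(), pvc)
}
```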

For more details, see the PR and KEP.

Other notable enhancements

API priority and fairness is now enabled by default as part of graduating to beta; it lets the API server classify and throttle incoming requests by priority so that an overload of lower-priority traffic cannot starve critical requests.

The EndpointSlice API, which we covered starting in Kubernetes 1.17, has undergone additional changes.

Version 1.20 also graduates support for third-party device monitoring plug-ins to stable, which unlocks container-level metrics for devices provided by device plug-ins.

Looking Ahead

According to the latest CNCF survey, 83% of respondents are now using Kubernetes in production. I expect that these newly stable and innovative feature enhancements in 1.20 will only continue to accelerate that trend. For more details on this latest Kubernetes 1.20 release, please check out the official release notes for a complete list of changes.

Editor's note: This post was migrated from the StackRox blog.

About the author

Wei Lien Dang is Senior Director of Product and Marketing for Red Hat Advanced Cluster Security for Kubernetes. He was a co-founder at StackRox, which was acquired by Red Hat. Before his time at StackRox, Dang was Head of Product at CoreOS and held senior product management roles for security and cloud infrastructure at Amazon Web Services, Splunk, and Bracket Computing. He was also part of the investment team at the venture capital firm Andreessen Horowitz.
