Another quarter, another Kubernetes release! On June 19, the Kubernetes Release Team announced the delivery of Kubernetes 1.15.
The first thing that jumps out about Kubernetes 1.15 is that, in contrast to previous releases, it introduces relatively few new features. This is actually exciting! It is a sign that the project has reached a certain level of stability and maturity. Organizations can now more easily hop on the Kubernetes train, without having to worry about keeping up with the same flurry of feature additions and deprecations (along with rapidly-changing best-practices) that has been the norm until now.
Without further ado, here are some of the things to look out for this release.
Stability and test coverage
As we hinted at above, a big focus for this release has been “continuous improvement”. The Kubernetes team has improved test coverage for several parts of the codebase. They have also invested in cleaning up their backlog and maturing existing features, with a focus on stability. We believe that this is a great investment, and sets the stage for Kubernetes to become more stable and, by extension, secure.
The Kubernetes team has introduced many new developments around features that permit extensibility of the Kubernetes API - in particular, Custom Resource Definitions.
For the uninitiated, custom resource definitions allow users to define their own custom objects, and interact with them almost like they are native Kubernetes objects like deployments and pods. Istio, for example, is implemented using dozens of custom resources.
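As a minimal sketch of how this looks in practice, here is a CustomResourceDefinition and a matching custom object, using the apiextensions.k8s.io/v1beta1 API available in 1.15 (the group, kind, and field names are illustrative, not from any real project):

```yaml
# A hypothetical CRD: once applied, the cluster accepts CronTab objects.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  # Name must be <plural>.<group>
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  versions:
    - name: v1
      served: true
      storage: true
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
---
# An instance of the custom resource, usable with kubectl like any
# native object (e.g. "kubectl get crontabs").
apiVersion: stable.example.com/v1
kind: CronTab
metadata:
  name: my-crontab
spec:
  cronSpec: "* * * * */5"
```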
In 1.15, Kubernetes is trying to make custom resources even more like native resources, with the stated goal that users should not notice whether they are interacting with a custom resource or a native resource. To this end, they have introduced the notion of a “structural schema”, which is essentially a set of restrictions on fields of CustomResources that allow for better standardization. Also, the kubectl get and kubectl describe commands have been improved to work better with custom types.
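To illustrate what “structural” means, here is a sketch of a validation schema fragment for the CRD spec above: every node in the schema declares an explicit type, rather than leaving fields untyped and free-form (field names are illustrative):

```yaml
# Fragment of a CRD spec. A structural schema gives every field an
# explicit type; schemas built only from untyped free-form content
# do not qualify as structural.
validation:
  openAPIV3Schema:
    type: object
    properties:
      spec:
        type: object
        properties:
          cronSpec:
            type: string
          replicas:
            type: integer
```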
Admission controller improvements
Admission Controllers are essentially gatekeepers that intercept requests to the Kubernetes API, and can modify them (in the case of “mutating webhooks”) or reject them if they fail validation (in the case of “validating webhooks”). Admission controllers are a powerful feature – among other things, they can be leveraged to improve security.
As admission controllers become more widely adopted, Kubernetes has continued to make improvements to them. Two major enhancements in the 1.15 release are:
- Mutating webhooks can opt into reinvocation by specifying the newly available reinvocationPolicy field. This means that a mutating webhook will get a second chance if another webhook mutates the object after the first time the mutating webhook ran. Note that validating webhooks will still be called only after all rounds of mutating webhooks.
- The admission webhook configuration now has an objectSelector field, which enables excluding objects with certain labels from admission. This is a big step forward, because it will enable operators to gradually roll out new admission controllers across a cluster, applying labels on objects to exempt them from an admission controller until each object is updated to work with it.
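Both enhancements can be sketched in a single webhook configuration. This is an illustrative example, not a real webhook: the names, label key, and service details are assumptions.

```yaml
apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingWebhookConfiguration
metadata:
  name: example-webhook
webhooks:
  - name: sidecar-injector.example.com
    # Re-run this webhook if a later mutating webhook changes the object.
    reinvocationPolicy: IfNeeded
    # Skip any object carrying the (hypothetical) exemption label,
    # allowing a gradual rollout of the webhook.
    objectSelector:
      matchExpressions:
        - key: webhook.example.com/exclude
          operator: NotIn
          values: ["true"]
    clientConfig:
      service:
        name: example-webhook
        namespace: default
        path: /mutate
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
```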
Node PID limiting is in beta
In the Kubernetes 1.14 release, we discussed that Process ID (PID) limiting for pods was in beta. In Kubernetes 1.15, PID limiting for nodes has moved to beta, and is enabled by default. PID limits are important because process IDs are a shared resource on the host; without PID limits, a misbehaving or buggy pod that forks new processes indiscriminately can exhaust all available PIDs on the host, effectively crippling it and impacting all other workloads running on it. Setting a PID limit at the node level adds an additional layer of protection over setting it at the pod level - the latter is insufficient to protect against an attack where a large number of small pods are launched on a node and collectively fork-bomb it.
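Both layers of protection are configured on the kubelet. The sketch below shows one way this could look in a KubeletConfiguration file; the specific numbers are illustrative, and node-level PID reservation is expressed by reserving PIDs for the system and for Kubernetes daemons:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Pod-level limit: cap the number of PIDs any single pod may use.
podPidsLimit: 1024
# Node-level protection: reserve PIDs for OS system daemons and for
# Kubernetes daemons, so pods cannot exhaust the host's PID space.
systemReserved:
  pid: "1000"
kubeReserved:
  pid: "1000"
```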
Other miscellaneous updates
- Kubernetes core now supports Go modules, so published Kubernetes components (like client-go) can be consumed as Go modules.
- Several resources will no longer be served through the deprecated extensions/v1beta1, apps/v1beta1, and apps/v1beta2 API versions from Kubernetes 1.16, which means this release is the last window to update our YAMLs! Make sure you use apps/v1 for Deployments, DaemonSets, and ReplicaSets, and networking.k8s.io/v1beta1 for Ingress resources.
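As a reference point for the migration, here is a minimal Deployment manifest already on the current API version (name and image are illustrative):

```yaml
apiVersion: apps/v1   # not extensions/v1beta1, apps/v1beta1, or apps/v1beta2
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  # selector is required in apps/v1 and must match the pod template labels.
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.17
```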
And there’s a lot more! Check out the official release notes for full details.