Wilson Husin co-authored this post.
Each Kubernetes release is driven by the community. The features that are delivered, along with the release cadence, are guided by Kubernetes contributors. This year presented a number of challenges that impacted the community and, as a result, the Kubernetes 1.19 release was prolonged and saw a large focus on stability, both in terms of testing and features delivered. After this prolonged release, there was just enough time to get one more release out for the year!
As in previous years, this end-of-year release coincided with a virtual KubeCon + CloudNativeCon North America and the start of the holiday season. Normally, this last release of the year would contain a smaller number of features and would largely focus on stability; Kubernetes 1.13, for example, delivered 23 enhancements, and Kubernetes 1.17 had only 22. Keeping with the unusual nature of 2020, Kubernetes contributors delivered an incredible 44 enhancements in this release, making it the largest release in quite some time. The enhancement list consists of 16 in alpha stage, 15 in beta stage, 11 in stable, and two deprecations.
Below are some notable enhancements delivered in the 1.20 release. For a rundown of all of the new features, check out the Kubernetes Enhancement Tracking spreadsheet for 1.20.
Here are a few interesting alpha enhancements in Kubernetes 1.20:
Alpha enhancements are new, somewhat experimental features. They are controlled by feature gates that are disabled by default. They generally are not recommended for production use but highlight new features that will become generally available in later Kubernetes releases.
Originally introduced in Kubernetes 1.16, this enhancement enables you to assign both IPv4 and IPv6 addresses to your pods. In the 1.20 release, the implementation has undergone a major rewrite and remains as an alpha feature to allow more time for testing and evaluation.
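As a minimal sketch of how a workload opts in, a Service on a dual-stack-enabled cluster (one with the IPv6DualStack feature gate turned on) can request both address families via the ipFamilyPolicy field introduced by the 1.20 rewrite. The Service and selector names below are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service        # example name
spec:
  # Ask for both IPv4 and IPv6 ClusterIPs when the cluster supports them
  ipFamilyPolicy: PreferDualStack
  selector:
    app: my-app           # example selector
  ports:
    - port: 80
```

With PreferDualStack, the Service falls back to a single address family on clusters that are not dual-stack capable.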
Kubelet can now signal your workloads that a shutdown is happening, giving them a chance to gracefully prepare for the event. This feature currently depends on systemd being present on your cluster nodes.
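On systemd-based nodes, the behavior is enabled through the kubelet configuration. Here is a hedged sketch; the grace-period values below are illustrative, not recommendations:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  GracefulNodeShutdown: true      # alpha in 1.20, off by default
# Total time the node waits before shutting down
shutdownGracePeriod: 30s
# Portion of that time reserved for critical pods
shutdownGracePeriodCriticalPods: 10s
```

In this configuration, regular pods get roughly the first 20 seconds to terminate, and critical pods get the final 10 seconds.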
This new enhancement enables the creation of a LoadBalancer Service that has different port definitions with different protocols. Previously, when more than one port was defined, all ports needed to have the same protocol.
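For example, a DNS workload could expose both TCP and UDP on port 53 through a single LoadBalancer Service once the MixedProtocolLBService feature gate is enabled. This is a sketch with placeholder names:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: dns-lb            # example name
spec:
  type: LoadBalancer
  selector:
    app: my-dns           # example selector
  ports:
    - name: dns-tcp
      protocol: TCP
      port: 53
    - name: dns-udp       # previously this had to match the protocol above
      protocol: UDP
      port: 53
```

Note that your cloud provider's load balancer implementation must also support mixed protocols for this to work end to end.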
This enhancement will be pretty useful for cluster administrators. Starting in 1.20, a new metrics endpoint has been added to the scheduler that allows cluster operators to view requested pod resources and the imposed pod limits as metrics. This new metrics endpoint will help to better illustrate actual cluster utilization and capacity.
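Assuming you have network access to the scheduler's secure port (10259 by default) and a token with permission to scrape it, the new endpoint can be queried directly. Host and authentication details vary by cluster, so treat this as a sketch:

```shell
# Scrape the kube-scheduler's new resource metrics endpoint, which exposes
# per-pod resource requests and limits as Prometheus-style metrics:
curl -k -H "Authorization: Bearer ${TOKEN}" \
  https://<scheduler-host>:10259/metrics/resources
```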
Beta enhancements are features that were previously in alpha and are now enabled by default. In Kubernetes 1.20, a number of enhancements have graduated from alpha to beta, and a few others saw major updates while remaining in beta:
CronJob was originally introduced in Kubernetes 1.4 as ScheduledJob and graduated to beta in 1.8, when it was renamed CronJob. An exciting change in Kubernetes 1.20 is the introduction of a new CronJob controller that addresses performance issues and prepares CronJobs for graduation to stable in an upcoming release. This implementation will become the default in future releases, so you can begin drafting your upgrade strategy now.
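A CronJob manifest itself is unchanged by the new controller; in 1.20 you still use the beta API, and the new controller is opted into with the CronJobControllerV2 feature gate. A minimal example with placeholder names:

```yaml
apiVersion: batch/v1beta1   # CronJob is still beta in 1.20
kind: CronJob
metadata:
  name: hello               # example name
spec:
  schedule: "*/5 * * * *"   # run every five minutes
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: hello
              image: busybox
              command: ["echo", "Hello from CronJob"]
          restartPolicy: OnFailure
```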
Container Runtime Interface (CRI) is a plugin interface that enables Kubelet to use a wide variety of container runtimes beyond just Docker. It has been available for use since Kubernetes 1.5, but has remained an alpha feature. In Kubernetes 1.20, CRI support moves to beta. Along with this graduation, an important deprecation is occurring: using Docker directly as an underlying runtime is being phased out. This means that you should eventually move to something like containerd or CRI-O. You can read more about that in this blog post.
First introduced in Kubernetes 1.18, Priority and Fairness for API Server Requests has graduated to beta. This feature allows you to define priority levels for incoming requests to ensure that important requests are still handled during times of high load.
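As a sketch of how priorities are assigned, a FlowSchema matches incoming requests and maps them to a priority level. The example below routes requests from a hypothetical controller's service account to the workload-high priority level that ships with the API server defaults; the subject names are placeholders:

```yaml
apiVersion: flowcontrol.apiserver.k8s.io/v1beta1
kind: FlowSchema
metadata:
  name: my-critical-controller   # example name
spec:
  priorityLevelConfiguration:
    name: workload-high          # one of the default priority levels
  matchingPrecedence: 1000       # lower numbers are evaluated first
  distinguisherMethod:
    type: ByUser
  rules:
    - subjects:
        - kind: ServiceAccount
          serviceAccount:
            name: my-controller      # example service account
            namespace: kube-system
      resourceRules:
        - verbs: ["*"]
          apiGroups: ["*"]
          resources: ["*"]
```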
Originally introduced in Kubernetes 1.18, the kubectl debug command graduates to beta in Kubernetes 1.20. A new change coming with this feature is the ability to make a copy of the pod being debugged and change the image being used.
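For instance, you could debug a crashing pod by making a copy of it with a different image, leaving the original untouched. The pod, container, and image names here are examples:

```shell
# Create a copy of "mypod" named "mypod-debug", replacing the image of
# its "app" container with one that has a shell, and attach to it:
kubectl debug mypod -it \
  --copy-to=mypod-debug \
  --set-image=app=busybox \
  -- sh
```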
Stable enhancements are those that have existed in beta form for at least one release and have been deemed ready to promote to stable. Here are a few useful enhancements that have graduated to stable in Kubernetes 1.20:
Over the last few Kubernetes releases, a number of long-standing beta features have graduated to stable. First introduced in Kubernetes 1.12 and promoted to beta in Kubernetes 1.14, the ability to specify different runtime classes within a PodSpec has now graduated to stable.
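Selecting a runtime class is a one-line addition to the PodSpec. In this sketch, the gvisor RuntimeClass is assumed to have been defined by a cluster administrator; the names are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-pod       # example name
spec:
  # Must match the name of a RuntimeClass object defined in the cluster
  runtimeClassName: gvisor
  containers:
    - name: app
      image: nginx
```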
Some workloads can take a long time to start. Traditionally, these slow-starting workloads could cause issues with readiness and liveness probes. First introduced in Kubernetes 1.16, the startupProbe allows you to disable health-checking probes until the pod has successfully started. This feature has graduated to stable in 1.20.
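As a sketch, a startupProbe can hold off the liveness probe for a slow-starting container; the image, path, and timing values below are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: slow-start          # example name
spec:
  containers:
    - name: app
      image: my-slow-app    # placeholder image
      # Probed first; liveness checks don't begin until this succeeds.
      # failureThreshold * periodSeconds = up to 5 minutes to start.
      startupProbe:
        httpGet:
          path: /healthz
          port: 8080
        failureThreshold: 30
        periodSeconds: 10
      # Takes over once the startup probe has passed
      livenessProbe:
        httpGet:
          path: /healthz
          port: 8080
        periodSeconds: 10
```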
CSI Volume Snapshot
After staying in beta since Kubernetes 1.17, CSI Volume Snapshot is now generally available, providing a standard workflow for triggering volume snapshot operations that is portable across any Kubernetes environment and any compatible storage provider.
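Taking a snapshot of an existing PersistentVolumeClaim is a small manifest against the now-GA v1 API. The snapshot class name depends on the CSI driver installed in your cluster, and the other names here are placeholders:

```yaml
apiVersion: snapshot.storage.k8s.io/v1   # v1 API is GA as of 1.20
kind: VolumeSnapshot
metadata:
  name: pvc-snapshot            # example name
spec:
  volumeSnapshotClassName: csi-snapclass   # depends on your CSI driver
  source:
    persistentVolumeClaimName: my-pvc      # the PVC to snapshot
```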
This enhancement is a little different but is equally important to mention. It is not graduating or adding new features but is intended to remove existing terminology. The Kubernetes project is moving away from wording that is noninclusive, and this enhancement brings this effort to kubeadm. Currently, kubeadm applies a node-role.kubernetes.io/master label and taint to nodes. This enhancement introduces a new label and taint, node-role.kubernetes.io/control-plane, with the end goal of removing the noninclusive term "master." Existing labels and taints will continue working for a deprecation period but will eventually be removed.
Kubernetes has normally followed a quarterly release pattern. However, 2020 was a year full of challenges and change. This year, we saw an extended 1.19 release bookended by “normal” 1.18 and 1.20 releases. What does the future hold for the Kubernetes release cadence going forward? There is discussion about moving to three releases per year, which you can follow on this GitHub issue. For now, the 1.21 release is planned to start in January and will likely follow the normal cadence and release sometime in March.
VMware remains committed to being a leader in the upstream Kubernetes community and would love for you to contribute as well. Becoming a member of the Kubernetes Release Team is a great way to contribute to the project, even if you are new to it. There are multiple roles on the team, many of which require no prior development experience; learn more about the various volunteering opportunities available. A call for shadow applications will go out soon to the kubernetes-dev mailing list. You can also subscribe to this issue on GitHub to follow the process of building the 1.21 release team.
If you’re looking for other places to contribute, check out the shiny new k8s.dev, the Kubernetes site for contributors. There you will find the contributor guide, the community calendar, as well as upcoming news and events. Keep an eye on @k8scontributors for more information.
About the Author: Jeremy Rickard