Tanzu Kubernetes Grid 2.1 Enhances Lifecycle Management and Extends Kubernetes to the Edge

February 2, 2023 Kendrick Coleman

VMware Tanzu Kubernetes Grid has become a trusted tool to automate the lifecycle of Kubernetes clusters. The simplicity of management has quickly turned it into a valuable asset in many organizations. The release of Tanzu Kubernetes Grid 2.1 adds new features that enhance lifecycle management and extends Kubernetes to the edge. 

The previous version, Tanzu Kubernetes Grid 2.0, established a unified framework for cluster creation using a new API called ClusterClass. With the latest version, this capability has been extended to management clusters on VMware vSphere as well as all supported cloud platforms. Using management clusters on vSphere provides flexible architecture models that lend themselves to many use cases.
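To give a sense of what this looks like in practice, a class-based cluster is simply a Cluster API Cluster object whose topology references a ClusterClass. The sketch below uses illustrative values; the class name, Kubernetes version string, and worker class are assumptions, and the ClusterClass shipped with your Tanzu Kubernetes Grid release may differ.

```yaml
# Minimal sketch of a class-based workload cluster definition.
# Class name, version string, and worker class are placeholders;
# the ClusterClass bundled with your TKG release may differ.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: edge-workload-01
  namespace: default
spec:
  topology:
    class: tkg-vsphere-default        # assumed ClusterClass name
    version: v1.24.9+vmware.1         # assumed Kubernetes version string
    controlPlane:
      replicas: 1
    workers:
      machineDeployments:
        - class: tkg-worker           # assumed worker machine class
          name: md-0
          replicas: 2
```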

The ability to keep management clusters and worker nodes highly available gets a boost in Tanzu Kubernetes Grid 2.1 with node anti-affinity for VMware vSphere 7.0 and later. Node anti-affinity is implemented using the cluster module API in vCenter and is turned on by default, with the option to disable it through feature flags. The vSphereVM object has been augmented to store the Cluster Module Universally Unique ID (UUID) as well as the VMware ESXi host that the virtual machine (VM) is placed on, and the node object gets a label identifying the ESXi host it runs on. This allows pods to be deployed to specific ESXi hosts when specialized hardware is required and also ensures the nodes themselves are spread across multiple ESXi hosts.
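As a rough illustration of how that host label can be used, the snippet below pins a pod to nodes that landed on a particular ESXi host. The label key and value shown are assumptions for illustration; verify the actual key applied to your nodes (for example, with kubectl get nodes --show-labels) before relying on it.

```yaml
# Sketch: schedule a pod onto worker nodes placed on a specific ESXi host,
# using the host label applied to the node object.
# The label key and value below are assumptions; check your nodes' labels.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload
spec:
  nodeSelector:
    node.cluster.x-k8s.io/esxi-host: esxi-host-01.example.com  # assumed label key/value
  containers:
    - name: app
      image: registry.example.com/gpu-app:1.0   # placeholder image
```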

Offered as an experimental feature in technical preview is the ability to back up and restore management clusters on vSphere. In many cases, the workload clusters that run the applications are the ones that require daily or hourly backups, achieved through Velero, which is included in Tanzu Kubernetes Grid. The same level of resiliency can now be achieved at the management cluster layer by backing up the objects that represent the workload clusters. It’s a good practice to take backups regularly, especially after any changes are made to a workload cluster. These objects are stored in an Amazon Simple Storage Service (Amazon S3)-compatible location and can be restored to another management cluster.
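As a sketch of what regular backups might look like with the bundled Velero, the Schedule resource below periodically backs up the namespaces where workload cluster objects live to a pre-configured S3-compatible storage location. The namespace list, cadence, and storage location name are assumptions for illustration only; follow the technical preview documentation for the exact resources to include.

```yaml
# Sketch: a Velero Schedule on the management cluster that backs up the
# namespaces holding workload cluster objects every six hours.
# Namespaces, schedule, and storage location name are illustrative assumptions.
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: mgmt-cluster-objects
  namespace: velero
spec:
  schedule: "0 */6 * * *"        # every six hours
  template:
    includedNamespaces:
      - default                  # namespace(s) where workload clusters were created
    storageLocation: default     # assumed S3-compatible backup storage location
```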

Pushing Kubernetes to the edge is one of the hottest trends to pay attention to in 2023. Many industries are looking at running Kubernetes across retail locations, manufacturing facilities, and connected cars. VMware is ready to enable edge computing with new features in Tanzu Kubernetes Grid 2.1.

Also provided as an experimental feature are single node cluster deployments with minimal operating systems for vSphere. A standard VMware Tanzu Kubernetes cluster has a control plane node and a number of worker nodes sized to the applications it runs. In an edge environment where resources are not as abundant, consolidation is needed. A single node cluster creates one Tanzu Kubernetes control plane node and removes the taint so that applications can be deployed on it as if it were a worker node. To help reduce resource usage further, a new Tanzu Kubernetes release with an edge-optimized runtime has been added. This minimal footprint packages Kubernetes as a single binary and reduces the open virtual appliance (OVA) to ~700MB for Photon and ~900MB for Ubuntu. This edge optimization runs only core packages such as a Container Network Interface (CNI) (e.g., Antrea or Calico), the vSphere Cloud Provider Interface (CPI), the vSphere Container Storage Interface (CSI), cert-manager, kapp-controller, and secretgen-controller.
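Conceptually, a single node cluster boils down to a cluster configuration with one control plane machine and zero workers. The sketch below follows the familiar Tanzu Kubernetes Grid cluster configuration file format, but treat the variable names and values as assumptions and confirm the exact settings (and any feature flags required by the technical preview) in the documentation.

```yaml
# Sketch of a single node cluster configuration file (technical preview).
# Variable names follow the usual TKG cluster config format; the exact
# settings needed to enable this preview feature may differ. Values are placeholders.
CLUSTER_NAME: edge-store-042
INFRASTRUCTURE_PROVIDER: vsphere
CONTROL_PLANE_MACHINE_COUNT: 1     # single control plane node...
WORKER_MACHINE_COUNT: 0            # ...and no workers; the taint is removed so apps can run here
VSPHERE_CONTROL_PLANE_ENDPOINT: 10.20.30.40   # placeholder endpoint IP
```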


The edge also requires load-balancing solutions to provide connectivity to the clusters’ API servers and the running workloads. Tanzu Kubernetes Grid already uses kube-vip to provide control plane virtual IPs; it has now been extended to provide Services of type LoadBalancer. This is important because edge workload clusters are typically located far from the data center, and kube-vip offers lower latency and tolerates disconnects better than other load balancer options. Load balancer IPs are allocated from a range in the same network as the node IPs, supplied as a parameter in the cluster configuration.
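The workload side of this is just a standard Kubernetes Service of type LoadBalancer; when kube-vip’s load balancer support is enabled for the cluster, it assigns the external IP from the configured range. The manifest below is a generic sketch with placeholder names, and the cluster configuration variable that holds the IP range is referred to only loosely, since its exact name is not given here.

```yaml
# Sketch: a plain Service of type LoadBalancer. With kube-vip load balancing
# enabled, the external IP is assigned from the range supplied in the
# cluster configuration (the exact variable name is an assumption, not shown).
apiVersion: v1
kind: Service
metadata:
  name: storefront
spec:
  type: LoadBalancer
  selector:
    app: storefront          # matches the pods backing this service
  ports:
    - port: 80
      targetPort: 8080
```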

DHCP often becomes a point of frustration because edge environments may not have access to a DHCP server, run into lease expirations, or exhaust addresses during upgrades. Cluster API node IP address management (IPAM) integration disables DHCP for Tanzu Kubernetes clusters and instead assigns VM addresses from a deployed IPAM solution. The management cluster is given a range of IP addresses that it issues to workload cluster VMs deployed in a given namespace.
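Conceptually, that range resembles the pool resource from the upstream Cluster API in-cluster IPAM provider. The sketch below is an assumption about the shape (the API version, kind, and field names may differ in the Tanzu Kubernetes Grid implementation), but it illustrates handing the management cluster a range to issue to workload cluster VMs in one namespace.

```yaml
# Sketch of an IP pool for node IPAM, modeled on the upstream Cluster API
# in-cluster IPAM provider. API version, kind, and field names are assumptions;
# the addresses, prefix, and gateway are placeholder values.
apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: InClusterIPPool
metadata:
  name: edge-node-pool
  namespace: edge-clusters        # namespace whose workload clusters draw from this pool
spec:
  addresses:
    - 10.20.30.50-10.20.30.99     # range issued to workload cluster VMs
  prefix: 24
  gateway: 10.20.30.1
```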

There are a lot of new features, especially for VMware vSphere! If you’re excited about Tanzu Kubernetes Grid 2.1, get started by going to VMware Customer Connect to download the latest release, and check out Tanzu Tech Zone for new articles on Tanzu Kubernetes Grid.

About the Author

Kendrick Coleman is a reformed sysadmin and virtualization junkie. His attention has shifted from hypervisors to cloud native platforms focused on containers. In his role as an Open Source Technical Product Manager, he figures out new and interesting ways to run open source cloud native infrastructure tools with VMware products. He's involved with the Kubernetes SIG community and frequently blogs about all the things he's learning. He has been a speaker at DockerCon, OpenSource Summit, ContainerCon, CloudNativeCon, and many more. His free time is spent sharing bourbon industry knowledge hosting the Bourbon Pursuit Podcast.
