The pace of open source Kubernetes releases has slowed, and the community has put a greater focus on stability; Kubernetes is finally ready for mainstream consumption. VMware is known for implementing and operationalizing enterprise-ready software, and there have been several iterations of Kubernetes in the VMware Tanzu portfolio. During evaluation periods, customers and partners have often asked, “Which VMware Tanzu Kubernetes solution is right for me?” Now, with VMware Tanzu Kubernetes Grid 2.0, there is a unified experience on vSphere 8.
Tanzu Kubernetes Grid builds on upstream and community projects to deliver an engineered and supported Kubernetes solution that includes Kubernetes itself plus new application lifecycle management capabilities. Tanzu Kubernetes Grid 2.0 introduces a unified framework that delivers a consistent user experience when deploying Kubernetes clusters on vSphere 8.0, using two supported deployment models. Each model will be familiar, and both now support a single, unified way of creating clusters through a new API called ClusterClass.
The first deployment model uses the Supervisor as the management cluster, providing a native, deeply integrated vSphere experience. The second, which will follow in an upcoming release of Tanzu Kubernetes Grid, uses virtual machines (VMs) as the management cluster. These are the same deployment models as in previous releases, and the prior primitives for deploying Tanzu Kubernetes Clusters remain, but there is now an additional API specifically for ClusterClass.
Tanzu Kubernetes Grid 2.0 with the Supervisor-based management cluster, also known as the Workload Control Plane, is enabled once the underlying vSphere version is at 8.0 and the Supervisor Cluster completes a simple upgrade from vSphere 7.0U3. The Supervisor Cluster is the recommended approach for most general-purpose workloads: it is the easiest to operate and has the tightest integration with vSphere capabilities.
The next version of Tanzu Kubernetes Grid will bring VM-based management cluster creation to vSphere 7U3 and 8 using the ClusterClass API. It packages the Tanzu CLI, along with Carvel, for lifecycle management of applications such as Harbor, Contour, Fluent Bit, and more. Tanzu Kubernetes Grid with a VM-based management cluster will remain the mechanism for managing clusters in hyperscaler cloud deployments. On vSphere, however, this architecture is recommended only for specific workload types until the Supervisor reaches feature parity, so choosing this deployment model will require a migration to the Supervisor later.
As mentioned previously, the earlier Tanzu Kubernetes Cluster primitives still exist for Tanzu Kubernetes Grid 1.x and vSphere with Tanzu. ClusterClass, a new feature introduced as part of Cluster API, reduces the need for redundant templating and enables powerful customization of clusters. The overall process for creating a cluster with ClusterClass is the same as before, just with slightly different parameters. Let's take a look at those features.
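Before building a cluster, it can help to see which ClusterClasses are available to you. A quick sketch, assuming you are already logged in to the Supervisor and targeting a vSphere Namespace (the namespace name here is a placeholder):

```shell
# List the ClusterClass definitions visible in your vSphere Namespace.
# The built-in class used in the examples below is "tanzukubernetescluster".
kubectl get clusterclass -n <vsphere-namespace>

# Inspect the class to see which variables (vmClass, storageClass, etc.)
# it exposes for customization
kubectl describe clusterclass tanzukubernetescluster -n <vsphere-namespace>
```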
Below is a minimal cluster example. At the very top is the new v1beta1 API endpoint and the cluster name, ‘cc01’. In the spec, you can see where the Kubernetes version is set, along with the number of control plane nodes and workers. The variables at the bottom select the virtual machine class that has been enabled in vSphere, as well as the storage class used by the virtual machines' backing disks. Using variables allows further extensibility as new features are added.
```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: cc01
spec:
  topology:
    class: tanzukubernetescluster
    version: v1.23.8
    controlPlane:
      replicas: 1
    workers:
      # node pools
      machineDeployments:
        - class: node-pool
          name: node-pool-01
          replicas: 3
    variables:
      - name: vmClass
        value: best-effort-small
      # default storageclass for control plane and node pool
      - name: storageClass
        value: wcpglobal-storage-profile
```
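Creating the cluster from this manifest is a standard kubectl workflow. A minimal sketch, assuming the manifest above is saved as cc01.yaml and that the server address and namespace shown are placeholders for your environment:

```shell
# Log in to the Supervisor with the vSphere Plugin for kubectl
kubectl vsphere login --server=<supervisor-address> \
  --vsphere-username administrator@vsphere.local

# Switch to the vSphere Namespace where the cluster should live
kubectl config use-context <vsphere-namespace>

# Create the cluster from the manifest above
kubectl apply -f cc01.yaml

# Watch provisioning progress
kubectl get cluster cc01
```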
Another new feature of Tanzu Kubernetes Grid 2.0 is the introduction of vSphere-managed topologies, which let an administrator segment failure domains based on hosts, clusters, and even racks! This information is captured in a VSphereFaultDomain custom resource, which can include topology details such as the vSphere datacenter, cluster, host, and datastore.
Here's an example of a cluster using the new ClusterClass model with this topology, spreading the workers across three different failure domains.
```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: cc-a01
  namespace: namespace-a01
spec:
  clusterNetwork:
    services:
      cidrBlocks: ["198.51.100.0/12"]
    pods:
      cidrBlocks: ["192.0.2.0/16"]
    serviceDomain: "cluster.local"
  topology:
    class: tanzukubernetescluster
    version: v1.23.8---vmware.2-tkg.1-zshippable
    controlPlane:
      replicas: 3
    workers:
      machineDeployments:
        - class: node-pool
          failureDomain: zone1
          name: node-pool-1
          replicas: 1
        - class: node-pool
          failureDomain: zone2
          name: node-pool-2
          replicas: 1
        - class: node-pool
          failureDomain: zone3
          name: node-pool-3
          replicas: 1
    variables:
      - name: vmClass
        value: best-effort-small
      - name: storageClass
        value: wcpglobal-storage-profile
```
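Once the cluster is running, you can confirm that the node pools actually landed in separate failure domains. A quick check, assuming you have logged in to the workload cluster itself and that your environment stamps nodes with the standard Kubernetes zone label:

```shell
# Show each node alongside its zone; the three worker nodes should
# report zone1, zone2, and zone3 respectively
kubectl get nodes -L topology.kubernetes.io/zone

# Or check a specific worker's labels directly
kubectl get node <worker-node-name> --show-labels
```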
To get started using ClusterClass, upgrade your environment to vSphere 8.0. Always refer to the documentation for the latest updates and information.
About the Author: Kendrick Coleman