Simply Scaling a Tanzu Kubernetes Cluster with the TKG Service for vSphere

July 1, 2020 Kendrick Coleman

The previous two posts in this series walked through both the architecture of the Tanzu Kubernetes Grid (TKG) Service for vSphere and how to use it to deploy Tanzu Kubernetes clusters. In this post, we’ll walk through how to take a cluster and scale it on demand. The examples shown are consistent with the same tag-demo-cluster-01 cluster spec used previously.

Inside the Workload Management menu, there is a list of Tanzu Kubernetes clusters and a running list of all the associated virtual machines (VMs). Take note of the VM class, as it will change for this cluster.

After authenticating using vSphere SSO, set the context to the namespace where the Tanzu Kubernetes cluster is registered. The kubectl command line will then show all of the TanzuKubernetesCluster resources in that namespace. The Tanzu Kubernetes cluster has already been deployed using the custom specification. Editing and reapplying the cluster spec will use the declarative nature of Kubernetes to achieve the desired state.
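As a sketch, the login and context switch look like the following. The server address, username, and namespace are placeholders for your environment, and the commands require a reachable Supervisor Cluster:

```shell
# Log in with vSphere SSO via the kubectl-vsphere plugin
# (server, username, and namespace below are placeholders)
kubectl vsphere login --server=supervisor.example.com \
  --vsphere-username administrator@vsphere.local

# Switch to the vSphere Namespace where the cluster is registered
kubectl config use-context demo-namespace

# List the Tanzu Kubernetes clusters in this namespace
kubectl get tanzukubernetesclusters
```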

For this demonstration, the control plane will scale out, from one node to three, while the worker nodes will scale in, from three to two. To add more vCPU and RAM capacity to the worker nodes, the type of VM will change from extra small to small. This part, however, will be done in a rolling fashion.
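As a hedged illustration, the relevant portion of the edited spec might look like the excerpt below. The namespace and storage class are placeholders for your environment; only the node counts and the worker VM class change from the original spec, and all other fields stay as they were:

```yaml
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tag-demo-cluster-01
  namespace: demo-namespace        # placeholder
spec:
  topology:
    controlPlane:
      count: 3                     # scaled out from 1
      class: best-effort-xsmall
      storageClass: demo-storage-policy   # placeholder
    workers:
      count: 2                     # scaled in from 3
      class: best-effort-small     # changed from extra small to small
      storageClass: demo-storage-policy   # placeholder
  # remaining fields (distribution, settings, etc.) unchanged
```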

After applying the updated configuration to the Supervisor Cluster, Cluster API will be responsible for reconciling the cluster state.
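Applying the change and watching the reconciliation can be sketched as follows, run from the Supervisor Cluster namespace context (filename is a placeholder):

```shell
# Reapply the edited cluster spec to the Supervisor Cluster
kubectl apply -f tag-demo-cluster-01.yaml

# Check the cluster object as Cluster API reconciles it
kubectl get tanzukubernetescluster tag-demo-cluster-01

# Watch the underlying Cluster API machines being created and deleted
kubectl get machines
```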

As that reconciliation is under way, a number of other things will be taking place. The third worker will be removed from the cluster, but it will still take some time for the VM itself to be deleted from vSphere. The second and third control plane nodes will be added to the cluster, which will require a leader election.

Then the workers will reconcile by way of a rolling upgrade. First, a new node with the best-effort-small VM class is created. The rolling upgrade then taints the node being decommissioned so no new workloads are scheduled on it, and drains its containers. Kubernetes deployments will reconcile the pods and run them on the new, untainted node that has joined the cluster. After the node has been completely drained, it is removed from the cluster. Every worker node goes through the same process in a serial fashion to make sure there is always an appropriate amount of resources available.
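One way to observe the rolling replacement, assuming a kubectl context pointed at the Tanzu Kubernetes cluster itself, is to watch the nodes cycle through cordoning and removal:

```shell
# Watch nodes during the rolling replacement; a node being drained
# shows SchedulingDisabled in its STATUS before it disappears
kubectl get nodes -o wide -w
```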

Back in the vCenter UI, the final worker node of the extra small size has been deleted from inventory, and all of the VMs are now at the desired end state.

Interested in seeing it happen on video? Watch it here:

For more information, check out the Tanzu Kubernetes Grid site. 

About the Author

Kendrick Coleman is a reformed sysadmin and virtualization junkie. His attention has shifted from hypervisors to cloud-native platforms focused on containers. In his role as an Open Source Technical Product Manager, he figures out new and interesting ways to run open source cloud native infrastructure tools with VMware products. He's involved with the Kubernetes SIG community and frequently blogs about all the things he's learning. He has been a speaker at DockerCon, OpenSource Summit, ContainerCon, CloudNativeCon, and many more. His free time is spent sharing bourbon industry knowledge hosting the Bourbon Pursuit Podcast.
