Getting Started with VMware Tanzu Kubernetes Grid CLI

April 23, 2020 Kendrick Coleman

With the announcement of VMware Tanzu Kubernetes Grid, anyone can see how easy it is to deploy and manage Kubernetes clusters. The new Tanzu Kubernetes Grid CLI becomes the tool to make it all happen.

Background

Kubernetes cluster deployment has gone through multiple iterations over the years, with a variety of solutions: kubespray, kops, a multitude of vendor products, and even “the hard way.” All of these have served organizations well, but problems arise when Kubernetes needs to be tailored to a new IaaS, or when you need to build not just one or two clusters but tens or hundreds. And then there’s the issue of managing them all at scale.

Cluster API is a technology that was born out of SIG-Cluster Lifecycle as a way to support any IaaS and make Kubernetes cluster deployments utilize Kubernetes’ declarative API. Let’s quickly explain it.

One of the premier tools to come from SIG-Cluster Lifecycle is kubeadm. Its main function is to take a machine and automatically apply all the steps required for it to become a Kubernetes control plane or worker node. That includes generating certificates, initializing the control plane, setting up and configuring etcd, applying add-ons like CoreDNS and kube-proxy, and joining the nodes together into a conformant cluster. kubeadm also handles upgrades of Kubernetes versions within the cluster. The tool is infrastructure-agnostic, so it doesn’t matter whether your nodes run on AWS, GCE, vSphere, or bare metal. It just works.
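The kubeadm workflow described above boils down to a handful of commands. Here is a minimal sketch; the control plane address, token, hash, and version below are placeholders, not values from this walkthrough:

```shell
# On the first control plane node: generate certificates, configure etcd,
# start the API server, and install add-ons like CoreDNS and kube-proxy.
kubeadm init

# On each worker node: join the cluster using the token and CA cert hash
# printed by `kubeadm init` (placeholder values shown).
kubeadm join 10.0.0.10:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>

# Later, upgrade the cluster in place to a newer Kubernetes version.
kubeadm upgrade apply v1.18.2
```

These commands are infrastructure-agnostic, which is exactly what lets Cluster API reuse kubeadm under the hood on any IaaS.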

Cluster API, which uses kubeadm under the hood, is IaaS-aware and can provision machines based on the API-driven infrastructure. These technologies, coupled together, create a best-in-class solution for automatically deploying and configuring as many Kubernetes clusters as you want. No longer do you have to worry about building Kubernetes clusters on your own!

Introducing VMware Tanzu Kubernetes Grid CLI

With the VMware Tanzu Kubernetes Grid CLI, all of the open-source components for natively building enterprise-ready Kubernetes clusters are packaged together for a tightly integrated solution. Tanzu Kubernetes Grid CLI provides easier installation, automated multicluster operation, high availability, and open-source alignment to the upstream Kubernetes community.

Tanzu Kubernetes Grid can be used in multiple ways:

  1. Tanzu Kubernetes Grid can be deployed on any infrastructure, including at the edge, in the cloud, and on vSphere.

  2. Tanzu Kubernetes Grid is integrated in vSphere 7 with Kubernetes as the Tanzu Kubernetes Grid Service for vSphere.

  3. Tanzu Kubernetes Grid can be consumed as a service with Tanzu Mission Control.

Before getting started, let’s go over some of the core concepts and the architecture so it's easy to visualize the process.

Cluster API uses the Kubernetes declarative API to build Kubernetes clusters with a desired state. New custom resources enable the use of custom controllers and objects. These controllers run loops to remediate objects to a desired state. One example of a new object is a “MachineSet” that is controlled by a “MachineDeployment.” A “MachineSet” manages “Machine” objects that are represented as replicas. In Kubernetes, this is analogous to deployments and pod replicas. Infrastructure providers associate compute resources to a “Machine” that allows Kubernetes to provision and delete resources as needed so as to achieve a desired state.
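To make the analogy concrete, here is a hedged sketch of a Cluster API “MachineDeployment” object. The names and the vSphere template reference are illustrative, and the exact field layout varies by Cluster API API version:

```yaml
apiVersion: cluster.x-k8s.io/v1alpha3
kind: MachineDeployment
metadata:
  name: my-cluster-md-0            # illustrative name
spec:
  clusterName: my-cluster
  replicas: 3                      # desired number of worker Machines,
                                   # analogous to pod replicas in a Deployment
  selector:
    matchLabels:
      cluster.x-k8s.io/cluster-name: my-cluster
  template:
    spec:
      clusterName: my-cluster
      bootstrap:
        configRef:                 # kubeadm bootstrap config for each node
          apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
          kind: KubeadmConfigTemplate
          name: my-cluster-md-0
      infrastructureRef:           # IaaS provider resource (vSphere here)
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
        kind: VSphereMachineTemplate
        name: my-cluster
```

The MachineDeployment’s controller reconciles an underlying MachineSet toward the declared replica count, and the infrastructure provider turns each resulting Machine into an actual VM.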

It all starts with the Tanzu Kubernetes Grid CLI and the bootstrap cluster, which requires an existing Kubernetes Cluster or, for ease of use, kind. The bootstrap cluster will provision a management cluster on the destination IaaS and then copy all the Cluster API resources needed to it. At this point, the bootstrap cluster has done its job and can be retired or deleted.

The management cluster becomes the new endpoint for the Tanzu Kubernetes Grid CLI, as well as where workload clusters can be created. Using the Tanzu Kubernetes Grid CLI, specify the cluster parameters using flags and let the management cluster begin the process of deploying and configuring the Kubernetes cluster. Do this as many times as you want and the management cluster will keep the workload clusters in a desired state through its controller.
 

Use the Tanzu Kubernetes Grid CLI

Let’s put it to action! This example will demonstrate how to use Tanzu Kubernetes Grid on vSphere 6.7U3 (not using vSphere 7 with Kubernetes). You can also skip ahead and watch the video at the bottom of this post.

As a prerequisite, import the VMware-signed and -supplied Photon and HAProxy .ova images into vSphere and mark them as templates. The Photon image comes with all the core Kubernetes components pre-installed and is used to instantiate cluster nodes. The HAProxy image functions as a load balancer that front-ends Kubernetes API server requests for the control plane, allowing the control plane to scale to multiple hosts.

On the local machine, the only prerequisites are having Docker and kubectl installed.

Using the Tanzu Kubernetes Grid CLI, kickstart the process with tkg init --ui. This will launch the Tanzu Kubernetes Grid Installer interface.

Click “Deploy on vSphere” and fill out all the required fields.

Hint: Generate an SSH public key by following these GitHub instructions.

Hint: The network name setting requires a port group with a routable network and DHCP.

Follow the logs and after a few minutes, the local control plane (management cluster) will be provisioned.

Now, create the first Tanzu Kubernetes Grid cluster with

tkg create cluster [name] --plan [dev/prod]
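For example, assuming the plan names from TKG 1.0 (the cluster name here is illustrative):

```shell
# "dev" plan: a minimal cluster with a single control plane node.
# "prod" plan: multiple control plane nodes behind the HAProxy load balancer.
tkg create cluster demo-cluster --plan dev
```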

This provisioning process will only take a few minutes.

Access the new cluster by appending the KUBECONFIG to the local machine with

tkg get credentials [name]

The configuration can also be exported with

tkg get credentials [name] --export-file [string]
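For instance, a sketch of exporting the kubeconfig to a standalone file and pointing kubectl at it directly (the cluster name and file path are illustrative):

```shell
# Export the workload cluster's kubeconfig to its own file...
tkg get credentials demo-cluster --export-file ./demo-cluster.kubeconfig

# ...then use it without touching the default ~/.kube/config.
export KUBECONFIG=./demo-cluster.kubeconfig
kubectl get nodes
```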

Switch the context of kubectl with

kubectl config use-context [clustername]-admin@[clustername]

Using kubectl get nodes, the cluster is now ready!

That’s the first Kubernetes cluster provisioned! And it was simple, without any of the headaches that come with generating certificates, managing etcd, or manually applying add-ons.

Running an App on Tanzu Kubernetes Grid

Now that a Kubernetes workload cluster has been deployed, how can it be used? For that, let’s turn to Helm as a simple way to get an application running.

To harness the power of persistent volumes, a StorageClass needs to be added.

tee defaultstorageclass.yaml >/dev/null <<EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi.vsphere.vmware.com
parameters:
  storagepolicyname: "vSAN Default Storage Policy"
EOF

kubectl apply -f defaultstorageclass.yaml
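To confirm the class was registered as the cluster default, list the StorageClasses; the is-default-class annotation should surface as a “(default)” marker next to the name:

```shell
kubectl get storageclass
# Expect a line roughly like (column widths vary):
#   standard (default)   csi.vsphere.vmware.com   ...
```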

Install the Helm CLI using any of the methods in the documentation. Helm uses the current KUBECONFIG context to determine which Kubernetes cluster to perform its actions against. Add the stable Helm chart repo with

helm repo add stable https://kubernetes-charts.storage.googleapis.com/

Test MySQL using

helm install mysql-test stable/mysql

After the deployment is complete, go to vCenter and look for the MySQL persistent volume located under Cluster -> Monitor -> Cloud Native Storage -> Container Volumes.
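The same volume can be verified from the kubectl side. A hedged sketch; the label value follows the stable/mysql chart’s convention of naming resources after the release, so it may differ in your environment:

```shell
# The chart creates a PersistentVolumeClaim; it should move to Bound
# once the vSphere CSI driver provisions the backing volume.
kubectl get pvc

# Check that the MySQL pod is up (label assumes the release name mysql-test).
kubectl get pods -l app=mysql-test
```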

Watch the video:
 

What’s next?

Look for more demos, along with a Tanzu Kubernetes Grid Hands-on Lab, coming soon. In the meantime, learn how Tanzu Mission Control can attach Tanzu Kubernetes Grid clusters to gain global visibility, provide real-time conformance checks, and drive consistent policy management. Check out the Hands-on Lab with Tanzu Mission Control (TMC) to get started.


About the Author

Kendrick Coleman is a reformed sysadmin and virtualization junkie. His attention has shifted from hypervisors to cloud native platforms focused on containers. In his role as an Open Source Technical Product Manager, he figures out new and interesting ways to run open source cloud native infrastructure tools with VMware products. He's involved with the Kubernetes SIG community and frequently blogs about all the things he's learning. He has been a speaker at DockerCon, Open Source Summit, ContainerCon, CloudNativeCon, and many more. His free time is spent sharing bourbon industry knowledge while hosting the Bourbon Pursuit Podcast.
