New Proxy Support, Registry Service Trust, and Separate Disks on vSphere with Tanzu

December 18, 2020 Kendrick Coleman

As 2020 comes to an end, we are capping it off with a new patch release of vSphere. In this post, we will examine the new functionality in vSphere with Tanzu as it relates to the Tanzu Kubernetes clusters.

So what’s new, exactly? I’m glad you asked. Let’s dive in.

HTTP/HTTPS proxy support

Many organizations that run in highly regulated environments, like financial institutions, require all internet access to go through a corporate proxy. That requirement adds friction, because every component that reaches the internet, from pushing and pulling container images to any Tanzu Mission Control attach operations, must route through the proxy. Now a new feature allows a global proxy setting to be applied to the Supervisor Cluster.

This global proxy setting will apply to all newly provisioned Tanzu Kubernetes clusters; it will not propagate to any that currently exist. Since this setting is applied at the Supervisor Cluster level, no additional configuration is required when deploying a Tanzu Kubernetes cluster.

There is also an additional `noProxy` parameter that accepts an array of IP addresses or CIDR blocks that should bypass the proxy. You must include in this list the pod, ingress, and egress CIDR ranges in your environment, or those that have been allocated to vSphere with Tanzu, so that internal cluster traffic is not sent through the proxy. Here is an example `TkgServiceConfiguration` with the proxy settings in place:

```
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TkgServiceConfiguration
metadata:
  name: tkg-service-configuration
spec:
  defaultCNI: antrea
  proxy:
    httpProxy: http://user:password@10.182.49.15:8888
    httpsProxy: http://user:password@10.182.49.15:8888
    noProxy: [172.26.0.0/16,192.168.124.0/24,192.168.123.0/24]
```
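Once saved, you can apply the manifest while logged in to the Supervisor Cluster, for example with `kubectl apply -f tkg-service-configuration.yaml` (the file name here is just an example). Tanzu Kubernetes clusters provisioned afterward will pick up the proxy settings automatically.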

Read the latest docs on Provision Tanzu Kubernetes clusters with a Proxy Server.

Native Registry Service trust

The Registry Service feature, which is enabled by the use of NSX-T, automatically deploys a Harbor container registry and integrates role-based access control and projects based on vSphere Namespaces. This registry is created using a series of self-signed certificates to make automation a breeze.

The problem with self-signed certificates is that they are not inherently trusted by anyone, and until now that included Tanzu Kubernetes clusters. The certificates had to be manually imported and trusted on every Kubernetes node, and those settings would not persist through an upgrade.

This was a pain, and we're happy it has been resolved. Now, through the use of Kubernetes secrets, a Tanzu Kubernetes cluster checks whether a Harbor Registry secret exists. If it does, the cluster imports the certificate through the bootstrap process (KubeadmConfig) and adds it to the appropriate file path on each node.

As a result, the Registry Service will now be trusted automatically with any new Tanzu Kubernetes cluster once it is enabled. For any existing Tanzu Kubernetes cluster, the new nodes will natively trust the Registry Service when an update is performed.
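To give a sense of the mechanism, here is a minimal sketch of how a registry CA can be written onto nodes through a Cluster API bootstrap config. The `files` field comes from Cluster API's `KubeadmConfig` resource; the object name, certificate path, and content shown are illustrative assumptions, not the exact objects vSphere with Tanzu creates.

```
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
kind: KubeadmConfig
metadata:
  name: tkgs-cluster-5-bootstrap      # hypothetical name
spec:
  files:
    # Illustrative: write the registry CA onto the node so the
    # container runtime trusts the Harbor Registry Service.
    - path: /etc/ssl/certs/tkg-registry-ca.crt   # assumed path
      encoding: base64
      content: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0t...   # truncated CA data
```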

Separate disk creation on nodes

We never really know what our Kubernetes nodes will be used for; they could host anything from small applications to highly transactional ones. Since the Tanzu Kubernetes cluster needs to adapt to whatever use case you may have, a new feature lets you set user-defined storage parameters for the Kubernetes nodes.

Perhaps you would like to put important filesystem paths, like `/var/lib/docker` or `/var/lib/kubelet`, on volumes with more storage. By default, a Tanzu Kubernetes cluster node is deployed with 16GB of storage, which becomes important when a node needs to run container images totaling more than 16GB, because images have to be stored locally. Or maybe you're running a very large Kubernetes cluster and would like etcd to live on a volume that can handle lots of transactions. In both of these cases, we want to specify a volume separate from the primarily read-only root partition.

Here’s an example taken directly from the documentation:

```
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkgs-cluster-5
  namespace: tkgs-cluster-ns
spec:
  distribution:
    version: v1.18
  topology:
    controlPlane:
      count: 3
      class: best-effort-small
      storageClass: tkgs-storage-policy
      volumes:
        - name: etcd
          mountPath: /var/lib/etcd
          capacity:
            storage: 4Gi
    workers:
      count: 3
      class: best-effort-small
      storageClass: tkgs-storage-policy
      volumes:
        - name: containerd
          mountPath: /var/lib/containerd
          capacity:
            storage: 16Gi
```
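
Each entry under `volumes` becomes a separate disk attached to the node, apart from the root volume, so etcd writes or container image storage no longer compete with the root filesystem for space.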

Worker node remediation

A MachineHealthCheck is a Cluster API resource that allows a user to define what an unhealthy machine looks like. A healthy Tanzu Kubernetes cluster is one in which all cluster nodes are powered on and communicating. If a node is powered off or disappears, it needs to be remediated.

With this patch, new remediation code was added that uses MachineHealthCheck to gauge the health of a cluster. If a node is powered off, it will automatically be powered back on through the vCenter API once the time threshold is reached. If a node is forcefully removed and deleted from the cluster, as well as from vCenter, MachineHealthCheck will invoke Cluster API to create a new virtual machine and perform the bootstrap process to remediate the cluster. The result is a better experience of reaching the desired state with Tanzu Kubernetes clusters, without any upgrade to the cluster itself.
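vSphere with Tanzu manages these health checks for you, but for reference, here is a minimal sketch of what a Cluster API MachineHealthCheck resource looks like in general. The name, label selector, and timeouts below are illustrative assumptions, not the objects the Supervisor Cluster actually creates.

```
apiVersion: cluster.x-k8s.io/v1alpha3
kind: MachineHealthCheck
metadata:
  name: tkgs-cluster-5-worker-hc      # hypothetical name
  namespace: tkgs-cluster-ns
spec:
  clusterName: tkgs-cluster-5
  # Only watch worker machines carrying this label (illustrative).
  selector:
    matchLabels:
      cluster.x-k8s.io/deployment-name: tkgs-cluster-5-workers
  unhealthyConditions:
    # A node whose Ready condition has been False or Unknown for
    # five minutes is considered unhealthy and gets remediated.
    - type: Ready
      status: Unknown
      timeout: 5m
    - type: Ready
      status: "False"
      timeout: 5m
```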

For all of the updates in this release, view the release notes. To get started using vSphere with Tanzu, take VMware's Hands-on Lab HOL-2113-01-SDC or watch How to Get Started Using vSphere with Tanzu for Tanzu Basic and Tanzu Standard.

About the Author

Kendrick Coleman is a reformed sysadmin and virtualization junkie. His attention has shifted from hypervisors to cloud native platforms focused on containers. In his role as an Open Source Technical Product Manager, he figures out new and interesting ways to run open source cloud native infrastructure tools with VMware products. He's involved with the Kubernetes SIG community and frequently blogs about all the things he's learning. He has been a speaker at DockerCon, OpenSource Summit, ContainerCon, CloudNativeCon, and many more. He spends his free time sharing bourbon industry knowledge as the host of the Bourbon Pursuit Podcast.
