Tanzu Kubernetes Grid Integrated Edition Management Console v.1.7: Improved Support for Multi-cluster Kubernetes Deployments

April 27, 2020 Robert Guske

Note: VMware Enterprise PKS has been renamed VMware Tanzu Kubernetes Grid Integrated Edition. The change has not yet been applied to all related products, so screenshots will not reflect the new name.

VMware recently announced version 1.7 of its production-grade, Kubernetes-based container solution, VMware Tanzu Kubernetes Grid Integrated Edition. This release introduces many enhancements that provide more flexibility and greater operational efficiency for Kubernetes cluster lifecycle management. Take a look at this recently published blog post by Donna Lee, Tanzu Kubernetes Grid Integrated Edition product marketing manager, to find out more about the release.

Tanzu Kubernetes Grid Integrated Edition includes the Tanzu Kubernetes Grid Integrated Edition Management Console for simple, integrated management of vSphere-based Tanzu Kubernetes Grid Integrated Edition clusters. The console is provided as a virtual appliance and contains all required, compatibility-checked packages for deploying Tanzu Kubernetes Grid Integrated Edition to both Internet-connected and air-gapped environments. It also ships with all required CLI tools, offering a straightforward experience for deploying and managing Tanzu Kubernetes Grid Integrated Edition and its provisioned Kubernetes clusters.

Latest versions

The Tanzu Kubernetes Grid Integrated Edition Management Console includes the latest versions of the following components:

●  Tanzu Kubernetes Grid Integrated Edition v.1.7.0 itself, for full Kubernetes cluster lifecycle management

●  Ops Manager v.2.8.5, which manages the lifecycle of Tanzu Kubernetes Grid Integrated Edition (with health checks, scaling, auto-healing, and rolling upgrades)

●  Harbor v.1.10.1, the cloud-native container registry

●  Stemcells for vSphere (VM templates), used as needed to deploy Kubernetes v.1.16.7

 

New component versions of Tanzu Kubernetes Grid Integrated Edition

New features at a glance

The Tanzu Kubernetes Grid Integrated Edition Management Console contains a number of new features, among them role-based access control (RBAC), which means only users with the appropriate roles can perform certain operations. A user with enhanced privileges can manage all existing Kubernetes clusters, whereas a user with fewer privileges will have limited access to managing clusters based on their role.

 The Cluster Administrator view (all K8s clusters) is shown here on the left; on the right is the Cluster Manager view (only one cluster).

During initial configuration, the Tanzu Kubernetes Grid Integrated Edition Management Console can validate an LDAP identity provider (IDP) endpoint; a colored banner appears when you run the endpoint connection test.

 LDAP verification check

Through a new section in the Tanzu Kubernetes Grid Integrated Edition Management Console, you can also configure fine-grained resource quotas (beta) for memory and CPU allocations (vSphere only) and limit the number of Kubernetes clusters a user can provision.

A cluster admin can create network profiles via the Tanzu Kubernetes Grid Integrated Edition Management Console. They can also apply a network profile while creating the cluster instead of using the default values configured during initial installation.

A cluster admin can also apply Kubernetes profiles to Kubernetes clusters, helping enterprises meet specific security and configuration requirements for those clusters.

Operational enhancements

Operating a multi-cluster Kubernetes platform across different teams can be challenging and requires an appropriate set of tools and features. With the new role-based access control feature in the Tanzu Kubernetes Grid Integrated Edition Management Console, a platform operator can assign roles to different users and groups, giving each the appropriate privileges to monitor and manage the lifecycle of their Kubernetes clusters.

Roles include:

  • Cluster Admin — Can create clusters, as well as read, update, and delete all clusters, regardless of who created them.
  • Cluster Admin read-only — Can observe the Tanzu Kubernetes Grid Integrated Edition deployment and all its clusters but cannot deploy or modify clusters.
  • Cluster Manager — Can create clusters, but can only read, update, and delete the clusters they have created.
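
These console roles correspond to the pks.clusters.* UAA scopes on the Tanzu Kubernetes Grid Integrated Edition API (pks.clusters.admin and pks.clusters.manage). As a point of reference, here is a minimal sketch of how an operator could grant the Cluster Manager scope outside the console, assuming the uaac CLI is installed; the API endpoint, user name, and credentials are placeholders.

```python
# Minimal sketch (not the console's own implementation): granting the
# Cluster Manager scope to a user via UAA on the TKGI API.
# The endpoint, user, and secrets below are placeholders.
import subprocess

PKS_API = "api.pks.example.com"             # hypothetical API FQDN
ADMIN_SECRET = "<uaa-admin-client-secret>"  # from Ops Manager credentials

def uaac(*args: str) -> None:
    """Run a uaac command and raise if it fails."""
    subprocess.run(["uaac", *args], check=True)

# Point uaac at the UAA instance backing the TKGI API and authenticate.
uaac("target", f"https://{PKS_API}:8443", "--skip-ssl-validation")
uaac("token", "client", "get", "admin", "-s", ADMIN_SECRET)

# Create a user and grant the Cluster Manager scope; this user can create
# clusters but only manage the ones they created.
uaac("user", "add", "dev-user", "--emails", "dev-user@example.com", "-p", "<password>")
uaac("member", "add", "pks.clusters.manage", "dev-user")
```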

The Tanzu Kubernetes Grid Integrated Edition Management Console also provides Kubernetes cluster lifecycle management, including create, edit (e.g., scaling, profile assignment), and delete functions, all from a single pane of glass for Day 1 and Day 2 Kubernetes cluster operations.

New Kubernetes clusters can be easily created via the clusters tab within the Tanzu Kubernetes Grid Integrated Edition Management Console. The size of the master/etcd and worker nodes is determined when plans are defined in Step 6 of initial configuration. Plans are configuration sets that define the sizes as well as the deployment target (availability zones) of the Kubernetes clusters.

A network profile and/or a Kubernetes profile can also be optionally configured for the Kubernetes cluster.

Create a Kubernetes cluster
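
The console drives the same cluster-creation workflow that the pks command-line interface exposes, so a cluster created in the UI is equivalent to a CLI call such as the minimal sketch below. The cluster name, hostname, plan, and node count are placeholders; it assumes the pks CLI is installed and you are logged in as a user allowed to create clusters.

```python
# Minimal sketch: creating a cluster from a script with the pks CLI.
# The plan name must match one of the plans defined in Step 6 of the
# initial configuration; all values below are placeholders.
import subprocess

subprocess.run(
    [
        "pks", "create-cluster", "k8s-prod",
        "--external-hostname", "k8s-prod.example.com",
        "--plan", "small",
        "--num-nodes", "3",
    ],
    check=True,
)

# Show the cluster's details and last action state while provisioning runs.
subprocess.run(["pks", "cluster", "k8s-prod"], check=True)
```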

And because being able to react quickly to new demands is important, scaling out and scaling in existing Kubernetes clusters is possible with just a few clicks using the update function. Network and Kubernetes profiles can also be changed using this function, as can the node drain and pod shutdown grace period settings.

Update a Kubernetes cluster
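
The same scale-out operation can also be scripted against the CLI; a minimal sketch, assuming the pks CLI is installed and using a placeholder cluster name:

```python
# Minimal sketch: scaling an existing cluster to five worker nodes.
# Other update options (profiles, drain settings) exist, but their exact
# CLI flags are not shown here.
import subprocess

subprocess.run(
    ["pks", "update-cluster", "k8s-prod", "--num-nodes", "5"],
    check=True,
)
```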

Further enhancements 

To increase flexibility for Day 1 and Day 2 operations, network and Kubernetes profiles are now available when creating or updating Kubernetes clusters.

Network profiles allow the platform operator to customize the network settings for a Kubernetes cluster beyond the default values that were configured initially. They are defined via a JSON configuration file and must conform to a specific format. This configuration will be handed over to Tanzu Kubernetes Grid Integrated Edition via the command-line interface and be subsequently made available for selection.
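
As an illustration, a network profile could be defined and registered along these lines. This is a minimal sketch: the profile name is a placeholder, the parameters shown (lb_size, pod_ip_block_ids) are examples of commonly documented fields, and the UUID is a dummy value; check the network-profile documentation for the exact schema that applies to your NSX-T setup.

```python
# Minimal sketch: writing a network-profile JSON file and registering it
# with the CLI. Parameter names and the IP-block UUID are illustrative
# placeholders; consult the network-profile docs for the full schema.
import json
import subprocess

profile = {
    "name": "np-medium-lb",
    "description": "Medium load balancer and a dedicated pod IP block",
    "parameters": {
        "lb_size": "medium",
        "pod_ip_block_ids": ["00000000-0000-0000-0000-000000000000"],
    },
}

with open("np-medium-lb.json", "w") as f:
    json.dump(profile, f, indent=2)

# Registering the profile requires the cluster administrator role.
subprocess.run(["pks", "create-network-profile", "np-medium-lb.json"], check=True)
```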

To learn more about the creation and implementation of network profiles, see this post.

The Tanzu Kubernetes Grid Integrated Edition Management Console simplifies the creation of network profiles by letting you specify parameters such as the load balancer size, the pod and node networks, and the container network through the graphical interface. Only cluster administrators (role: pks.clusters.admin) can create network profiles.

Create a network profile

The newly created profile is available to users who have been assigned the cluster manager role (pks.clusters.manage). Cluster managers can then assign that profile to new or existing Kubernetes clusters.

K8s cluster specification (left) and cluster creation in progress (right)
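
Outside the console, a cluster manager could apply the registered profile at creation time with the --network-profile flag; a minimal sketch with placeholder names:

```python
# Minimal sketch: creating a cluster with the previously registered
# network profile applied instead of the defaults (names are placeholders).
import subprocess

subprocess.run(
    [
        "pks", "create-cluster", "k8s-team-a",
        "--external-hostname", "k8s-team-a.example.com",
        "--plan", "small",
        "--network-profile", "np-medium-lb",
    ],
    check=True,
)
```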

Additional capabilities

Operators who have previously used the Tanzu Kubernetes Grid Integrated Edition Management Console will notice a number of new capabilities with this new version.

Version 1.7 (left) and version 1.6 (right)

Resource quotas were introduced as a new, experimental feature in Tanzu Kubernetes Grid Integrated Edition v.1.6; they give cluster administrators the ability to set memory and CPU allocation limits for users across their provisioned Kubernetes clusters. It’s also possible to limit the number of clusters a user can provision.

With version 1.7, resource quotas can be easily configured with just a few clicks.

 Setting resource quotas for a user

It is important to mention that only the root user (the default login) currently has permission to configure resource quotas for users who hold the pks.clusters.manage role. Accounts with this role can create and access their own clusters.
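
For comparison, the beta quota feature can also be driven through the TKGI API directly. The sketch below assumes a v1beta1 quotas endpoint on the API (port 9021) with an owner/limit payload; treat the path, port, field names, and units as assumptions to verify against the release notes for your version, and supply a UAA bearer token for an admin (root) user.

```python
# Minimal sketch: setting a CPU/memory quota for a user via the beta quota
# API instead of the console. Endpoint path, port, payload fields, and
# units are assumptions to verify against your version's documentation.
import requests

PKS_API = "api.pks.example.com"   # hypothetical API FQDN
TOKEN = "<uaa-bearer-token>"      # token for an admin (root) user

resp = requests.post(
    f"https://{PKS_API}:9021/v1beta1/quotas",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"owner": "dev-user", "limit": {"cpu": 4, "memory": 5}},
    verify=False,  # mirrors `curl -k`; prefer proper CA verification
)
resp.raise_for_status()
```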

Tanzu Kubernetes Grid Integrated Edition Management Console integration with Tanzu Mission Control

Tanzu Mission Control is VMware’s centralized management platform for consistently operating and helping secure Kubernetes infrastructures and modern applications across multiple teams and clouds. Simply specify your Tanzu Mission Control parameters during the initial configuration of your Tanzu Kubernetes Grid Integrated Edition deployment, and every newly created Kubernetes cluster will be automatically attached to your Tanzu Mission Control cluster group.

Tanzu Mission Control integration

For example, a freshly deployed Kubernetes cluster named “k8s-prod” was immediately added to the cluster group “rguske-tmc” and is available for monitoring.

New K8s cluster in TMC cluster group (left) and Kubernetes cluster details in TMC (right)
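
To double-check the attachment from the command line, the Tanzu Mission Control CLI can list the clusters visible to your account; a minimal sketch, assuming the tmc CLI is installed and logged in:

```python
# Minimal sketch: listing clusters in Tanzu Mission Control; the newly
# attached "k8s-prod" cluster should appear in its cluster group.
import subprocess

subprocess.run(["tmc", "cluster", "list"], check=True)
```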

Please find more information about the Tanzu Kubernetes Grid Integrated Edition Management Console in the resources below.

Resources:

This article may contain hyperlinks to non-VMware websites that are created and maintained by third parties who are solely responsible for the content on such websites.

 

About the Author

Robert is a Senior Technical Account Manager working for VMware’s Professional Service Organization (PSO). He’s also a Cloud-Native Apps field SME.
