Managing VMware Enterprise PKS Clusters in a Hybrid Cloud with vRealize Automation Cloud

October 29, 2019 Pranay Bakre

Contributions from Pranay Bakre and Alka Gupta

Many organizations are leveraging Kubernetes to streamline and orchestrate containerized applications, and to reduce costs, in a consistent manner across their on-premises environments and public clouds. Kubernetes, however, can be difficult to manage and maintain. This blog describes the use of VMware vRealize Automation Cloud, along with Wavefront by VMware and VMware vRealize Log Insight Cloud, to deliver and operate an enterprise-grade Kubernetes experience with VMware Enterprise PKS running on multiple clouds, both on premises and in public clouds.

Integrating vRealize Automation Cloud with VMware Enterprise PKS allows you to bring your public and private cloud Kubernetes clusters into one platform for lifecycle management and governance. Cloud administrators can now manage the end-to-end lifecycle of Kubernetes clusters across multiple clouds directly from their vRealize Automation Cloud services portal. Developers have the ability to spin up Kubernetes clusters across multiple clouds on demand through the same portal. The result is a hybrid cloud experience.

VMware Enterprise PKS is a turnkey solution to deploy, run, and manage Kubernetes with integrated components that build on your existing VMware SDDC footprint. The management solutions mentioned above support VMware Enterprise PKS.

The key components of vRealize Automation Cloud are VMware Cloud Assembly, VMware Service Broker, and VMware Code Stream. The focus of this blog is VMware Cloud Assembly, which serves as a centralized interface for deploying and working with VMware Enterprise PKS clusters. We will describe the configuration steps for two IaaS targets: VMware vSphere for an on-premises private cloud deployment and AWS for a public cloud deployment. We will assume that VMware Enterprise PKS is already deployed on vSphere and AWS and ready to be configured and integrated with VMware Cloud Assembly for automation and management.

Getting Started

In order to get started, you add the two cloud accounts for vSphere and AWS inside VMware Cloud Assembly, so it can connect to them. Under the Infrastructure tab, in the left-hand navigation pane, go to Connections – Cloud Accounts.

For the AWS account inside VMware Cloud Assembly, specify the access key and validate it before adding the account. For the vSphere environment, set up a cloud proxy before adding the account in VMware Cloud Assembly. A cloud proxy routes data between VMware Cloud Assembly and an on-premises application, such as VMware vCenter, collecting information from the on-premises environment and relaying it to the cloud service. Once configured, both accounts should be visible in the portal, as shown in Figure 1.  

Next, define a project. A project in VMware Cloud Assembly is a logical group that controls which users can use which cloud resources. You create a project by going to the Configure – Projects section of the navigation pane, as shown in Figure 2:  

VMware Cloud Assembly supports existing VMware Enterprise PKS deployments with its many native and seamless integrations. The current integrations available can be seen by going to the Integrations section under Connections in the navigation pane, as shown in Figure 3:  

Integrating VMware Enterprise PKS on AWS and vSphere in VMware Cloud Assembly

The next steps outline how to configure the integration of VMware Enterprise PKS running on AWS and on premises on vSphere inside the Cloud Assembly portal. The integration steps in this blog assume that VMware Enterprise PKS is already deployed on AWS and vSphere.

To integrate VMware Enterprise PKS running on vSphere in VMware Cloud Assembly, click the Add Integration button, select the VMware Enterprise PKS tile, and then provide the necessary information: the UAA and PKS addresses, the location of the deployment, an existing cloud proxy (or create a new one), the PKS platform user credentials, and a name for this endpoint. Finally, save the changes, as shown in Figure 4:  

For a VMware Enterprise PKS deployment on AWS, follow the same steps as above for vSphere but set the location as Public Cloud.

Once successfully validated, both endpoints are visible in the Infrastructure tab, as shown in Figure 5:  

These steps complete the integration between VMware Cloud Assembly and VMware Enterprise PKS running on vSphere and AWS.

Defining Compute Resources

After adding the endpoints, define a set of compute resources that can be used for provisioning Kubernetes clusters. Navigate to the ‘Kubernetes Zone’ tab and create a new zone. In the ‘Account’ field, select one of the PKS endpoints added before. Click the ‘On-demand’ tab and select cluster deployment plans that need to be enabled for provisioning Kubernetes clusters. Make sure that ‘Allow Provisioning’ is enabled for the selected plan. Save the settings and add this Kubernetes zone to the project.  

Deploying Kubernetes Clusters from VMware Cloud Assembly

Now let’s deploy Kubernetes clusters from VMware Cloud Assembly to these VMware Enterprise PKS endpoints. In the Resources section under Kubernetes on the left, click the Deploy button under the Clusters tab, and then specify the details for the Kubernetes cluster to be deployed. These details include selecting the PKS endpoint and project name configured earlier, specifying names for the cluster and master, selecting the PKS Plan, and choosing the number of worker nodes, as shown in Figure 7:  

Once successfully completed, the clusters should be present in the Kubernetes tab, as shown in Figure 8:  

We can also add a Kubernetes cluster running externally on any public or private cloud. Choose the ‘Add External’ button in the Kubernetes tab and enter the necessary information about this cluster. To authenticate with the cluster, use a Kubernetes service account to create a bearer token. Once successfully validated, the cluster is visible in the Kubernetes tab in VMware Cloud Assembly, as shown in Figure 9:  
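The bearer token typically comes from a dedicated service account on the external cluster. Below is a minimal sketch of the Kubernetes manifests involved; the account name, binding name, and the use of the cluster-admin role are illustrative choices for this example, not requirements of the product.

```yaml
# Service account that VMware Cloud Assembly can use to authenticate with the cluster.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cloud-assembly-sa        # illustrative name
  namespace: kube-system
---
# Grant the service account access; cluster-admin is used here only to keep the
# example short. Scope the role to whatever your security policy allows.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cloud-assembly-crb       # illustrative name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: cloud-assembly-sa
  namespace: kube-system
```

On the Kubernetes versions current at the time of writing, creating the service account also creates a token-bearing secret; decoding the token field of that secret yields the bearer token to paste into the Add External dialog.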

Segregating Cluster Resources by Using Namespaces

We can also segregate the Kubernetes cluster resources by creating namespaces through VMware Cloud Assembly. A cloud administrator can add namespaces in Cloud Assembly and provide role-based access control to Kubernetes clusters. When a namespace is deployed with VMware Cloud Assembly, it includes a link to a kubeconfig file that grants users, such as developers, access to interact with and manage some aspects of the namespace.

You can add namespaces in VMware Cloud Assembly by navigating to the Namespaces tab in the Kubernetes section and clicking the ‘New Namespace’ button. All that is required is providing a name for the namespace, selecting the Kubernetes cluster, and clicking Create. Once submitted, it should look something like the namespace in Figure 10:  

This namespace will now be visible in the Kubernetes cluster. If you click the Clusters tab, click the Kubernetes cluster you chose, and finally click the Namespaces tab, you will see that the namespace has been created and is now being managed by VMware Cloud Assembly. In Figure 11, we can see our ‘test’ namespace and a ‘Download’ link for the kubeconfig file that can be used to interact with it.  
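For reference, the downloaded kubeconfig is a standard Kubernetes configuration file scoped to the namespace. Roughly, it looks like the sketch below; the cluster name, server address, and credentials shown here are placeholders, not values generated by the product.

```yaml
# Illustrative kubeconfig structure; the file downloaded from VMware Cloud Assembly
# will contain your cluster's actual address and credentials.
apiVersion: v1
kind: Config
clusters:
- name: pks-cluster-01                                  # placeholder cluster name
  cluster:
    server: https://pks-cluster-01.example.com:8443     # placeholder API endpoint
    certificate-authority-data: <base64-encoded CA certificate>
contexts:
- name: test                                            # context tied to the 'test' namespace
  context:
    cluster: pks-cluster-01
    namespace: test
    user: test-user
current-context: test
users:
- name: test-user
  user:
    token: <bearer token for this namespace>
```

A developer can point kubectl at this file (for example, through the KUBECONFIG environment variable) and work within the namespace the administrator granted.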

Provisioning Clusters through an On-Demand Blueprint

Kubernetes clusters can also be provisioned through an on-demand blueprint in VMware Cloud Assembly. Navigate to the ‘Blueprints’ tab and create a new blueprint. Drag a ‘K8s cluster’ component from the left pane and complete its configuration in the code editor as demonstrated in Figure 12 below. Sample Kubernetes cluster YAML code can be found here.  
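As an illustration only, a blueprint with a single Kubernetes cluster resource might look roughly like the sketch below. The resource type Cloud.K8S.Cluster and the property names are assumptions based on the Cloud Assembly blueprint schema; rely on the code editor's autocomplete and the linked sample for the exact syntax in your environment.

```yaml
# Hypothetical Cloud Assembly blueprint sketch; the resource type and property
# names are assumptions and may differ in your vRealize Automation Cloud organization.
formatVersion: 1
inputs:
  workers:
    type: integer
    default: 3
    title: Number of worker nodes
resources:
  k8s_cluster:
    type: Cloud.K8S.Cluster
    properties:
      name: demo-cluster            # hypothetical cluster name
      plan: small                   # a PKS plan enabled in the Kubernetes zone
      workers: ${input.workers}     # worker count supplied at deployment time
```

Because the blueprint targets a Kubernetes zone rather than a specific endpoint, which cloud the cluster lands on is decided by the project and zone configuration rather than by the YAML itself.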

Click ‘Version’ to capture the blueprint version and release it to the catalog. Now, deploy the blueprint by specifying the deployment name and blueprint version. Once the deployment completes successfully, the Kubernetes cluster’s status should be ‘On’, and it can be seen in the Kubernetes tab. The same YAML file can be used to deploy clusters to multiple endpoints, such as vSphere, AWS, Microsoft Azure, and Google Cloud Platform (GCP).

Getting Operational Insights into Your Hybrid Cloud

Thus far, we have seen how VMware Cloud Assembly can deploy and operate Kubernetes clusters across multiple clouds from a single console.

For a successful hybrid cloud experience, you also need operational insights into the Kubernetes platform and applications across multiple clouds. As an administrator, you need to collect metrics on the applications running on the platform as well as the underlying platform logs. Integration of VMware Enterprise PKS and Wavefront lets you visualize metrics of multiple clusters running on different IaaS platforms from a single dashboard. The integration is straightforward, as you can see from the Wavefront documentation. You just need to add the Wavefront account URL in the VMware Enterprise PKS deployment and confirm the changes. The integration is the same across all deployments, both on premises and in public clouds.

Metrics for Kubernetes clusters from multiple endpoints are visible on a single dashboard in Wavefront, as shown in Figure 13:  

Additionally, we can analyze the logs generated by our Kubernetes clusters deployed across various platforms. To do so, forward the logs to VMware vRealize Log Insight Cloud — a log aggregation and alerting tool that provides a fully managed log analytics and troubleshooting service. The logs from the Kubernetes clusters can be viewed on the vRealize Log Insight Cloud (Log Intelligence) portal, as shown in Figure 14:

Wrapping Up

In this blog post, we covered how VMware Cloud services such as vRealize Automation Cloud, along with Wavefront and vRealize Log Insight Cloud, bring a consistent governance and operational experience to VMware Enterprise PKS deployed across multiple cloud platforms. This architecture gives you a way to expand your data center into a multi-cloud strategy for next-generation workloads while improving flexibility and agility. On the horizon are additional VMware Cloud services, such as VMware NSX Service Mesh, vRealize Network Insight Cloud, and VMware Tanzu Mission Control, that will help improve governance, security, and policy control across multiple cloud deployments of Kubernetes clusters.

About the Author

Pranay works as a Consultant at VMware. He is passionate about container orchestration in the cloud, the infusion of Kubernetes into the VMware stack, and solving complex problems. He enjoys working with partners and customers to help them build their cloud-native solutions.
