Modern microservices-based cloud native applications often consume application runtimes and backing services such as caching, databases, logging, monitoring, messaging, and so on from open source software. The key goals behind this practice are standardization, community leverage, and time to market. But the use of open source in enterprise software development has its own challenges. While one might achieve speed and agility, security and compliance posture can be compromised if open source dependencies have vulnerabilities and are not scanned in advance.
This post provides guidance for how to customize, deploy, and manage open source software at scale in a secure, reliable, and consistent way in Kubernetes-based environments with the use of Helm charts. Helm is a Kubernetes package manager that facilitates the packaging, deployment, and lifecycle management of Kubernetes artifacts using Helm charts. The open source Helm project is supported by the Cloud Native Computing Foundation.
In this post, we’ll use the Apache Kafka Helm chart, though the procedure we’ll explain can be applied to any Helm chart. Apache Kafka is a community-distributed event streaming platform capable of handling trillions of events a day at scale. Popular use cases for Apache Kafka include messaging, website activity tracking, metrics, log aggregation, stream processing, and event sourcing.
To demonstrate how to customize, deploy, and manage an open source Kafka Helm chart, we’ll use the following:
VMware Tanzu Application Catalog to configure and customize the Helm catalog
Helm Command Line Interface (CLI) to deploy the Helm chart
Helm CLI to manage the lifecycle of the deployed workload
Observability integration to provide logging, monitoring, and scaling for the workload
For the illustration below, we are using two key personas: developer and operator. The developer persona is a typical microservices developer responsible for design, implementation of business logic, and all corresponding continuous integration (CI) activities. The operator persona is responsible for deploying, standardizing, and managing the lifecycle of Kubernetes environments along with the microservices that run on top of them. In practice, there are many finer-grained personas in every organization, but for the sake of simplicity we are using these two.
A typical workflow for deploying Helm-based workloads in a Kubernetes cluster using the Tanzu Application Catalog service looks like this:
The operator understands the development needs of the team. She sets up a private catalog in Tanzu Application Catalog using the steps described in this doc. It is possible to create a custom catalog that can follow security and isolation rules for the enterprise. For example, operators might decide to create an application catalog for front-end components used by various business applications, another catalog for integration components, and yet another one for backend components. Alternatively, different custom catalogs can be created for applications deployed for different business units. Operators typically seek to choose the right balance between isolation and standardization.
The operator populates the catalog with curated Helm charts. In order to create these curated Helm charts, the operator specifies a base OS image and the required open source software that needs to be deployed in Kubernetes. As shown in the following diagram, the Tanzu Application Catalog service ships with standard Linux distros for base OS images. Or, as commonly found in enterprise scenarios, operators can import a custom base OS image built to their enterprise standards to deploy across all workloads.
Once the base OS and corresponding software packages are selected, the Tanzu Application Catalog service builds and packages the curated Helm charts. For example, below is the digest information for an Apache Kafka Helm chart.
The Tanzu Application Catalog service also generates validation, build, and test reports, which are packaged with the image. This test and validation automation increases developer productivity by giving back the time otherwise spent packaging compliant software, so it can be redirected to creating business value. For example, the Kafka Helm chart produces:
Validation reports, including functional and integration tests
Build reports, including detailed information about the asset’s contents
The asset-spec.json file for Kafka is shown below. As you can see, it enumerates the software assets and their corresponding auditable details, which can be used to keep track of deployed software inventory as well as to standardize software component usage across a given scope within the enterprise.
Integration test results for the Kafka release are shown below. The integration tests are run across all supported platforms, including Azure Kubernetes Service (AKS), Google Kubernetes Engine (GKE), and on-premises VMware Tanzu Kubernetes Grid.
You can view Kafka Helm chart metadata with the Tanzu Application Catalog CLI. For example, you can get Kafka metadata by the Kafka application ID, as shown below. The Tanzu Application Catalog CLI can be downloaded by following this doc.
Helm chart deployment
We’ll use the Helm CLI to deploy this vetted, tested, and curated Kafka Helm chart created by the Tanzu Application Catalog service to a standard Kubernetes cluster.
We’ll start by adding the Helm chart repo.
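The step above can be sketched as follows. The repository name and URL here are placeholders; use the repository details shown for your own private catalog in the Tanzu Application Catalog UI.

```shell
# Add the private chart repository generated by Tanzu Application Catalog.
# The URL below is illustrative -- substitute your catalog's repository URL.
helm repo add tac https://charts.example.com/demo

# Refresh the local repository index so the latest chart versions are visible.
helm repo update
```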
Next, we deploy the Kafka Helm chart.
Then we create the Helm release kafka-my-demo.
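A minimal sketch of the deployment, assuming the catalog repository was added locally under the name tac:

```shell
# Deploy the curated Kafka chart and create the Helm release kafka-my-demo.
helm install kafka-my-demo tac/kafka

# Confirm the release was created.
helm list
```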
To get the Kafka client deployed and running, we’ll deploy the Kafka client pod.
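One way to launch the client pod is shown below. The image reference is a placeholder; use the Kafka image from your own catalog's registry so the client tooling matches the deployed broker version.

```shell
# Run a throwaway pod containing the Kafka client tools.
# The image reference is illustrative -- substitute your catalog registry.
kubectl run kafka-client --restart='Never' \
  --image registry.example.com/demo/kafka:latest \
  --command -- sleep infinity
```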
Once the Kafka client is deployed and running, the following pods and persistent volume claims are created. The Helm chart also deploys ZooKeeper, which Kafka uses for cluster coordination.
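You can inspect these resources as follows; the label selector assumes the chart applies the standard app.kubernetes.io/instance label to the release's resources.

```shell
# List the pods and persistent volume claims created by the release.
kubectl get pods,pvc -l app.kubernetes.io/instance=kafka-my-demo
```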
Let’s exec into the Kafka client pod and start the Kafka producer and consumer.
To ensure our deployment is working, we’ll test publishing messages to the producer and ensure the consumer receives those messages by subscribing to the topic.
The Kafka consumer then subscribes to this topic and retrieves messages posted on it.
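The steps above can be sketched as follows. The broker address assumes the release kafka-my-demo in the default namespace; adjust the service name, namespace, and topic for your environment.

```shell
# Open a shell in the Kafka client pod.
kubectl exec -it kafka-client -- bash

# Inside the pod: publish messages to a test topic.
kafka-console-producer.sh \
  --broker-list kafka-my-demo.default.svc.cluster.local:9092 \
  --topic test

# In a second shell inside the pod: subscribe to the topic
# and read the published messages back.
kafka-console-consumer.sh \
  --bootstrap-server kafka-my-demo.default.svc.cluster.local:9092 \
  --topic test --from-beginning
```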
We have now successfully deployed the Apache Kafka enterprise-grade Helm chart.
Enterprise-grade Helm chart deployment best practices
To ensure successful enterprise-grade Helm chart deployment, we recommend taking the following steps:
Create appropriate role-based access control for managing Kubernetes artifacts including Helm charts
Parameterize the Helm charts (via values.yaml) to ensure that containers are run as non-root
Use built-in Kubernetes practices for liveness and readiness probes to check the health of the pods before sending traffic to them
Ensure that pod-to-pod traffic is encrypted with TLS
Integrate the deployed charts with logging and monitoring tools
These best practices are incorporated in this open source Apache Kafka chart. For the appropriate parameterization, refer to the values.yaml file for the Helm chart. The container created by the Kafka Helm chart runs as non-root by default. Enable serviceAccount.create to create a dedicated service account for the Kafka pods, which can in turn be used to enforce appropriate role-based access control.
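A minimal values.yaml fragment illustrating these two settings is shown below. The security context keys (and the 1001 user ID) follow common Bitnami-style chart conventions and may differ across chart versions, so check your chart's values.yaml for the exact names.

```yaml
# values.yaml (fragment) -- create a dedicated service account
# and run the Kafka container as a non-root user.
serviceAccount:
  create: true
containerSecurityContext:
  runAsNonRoot: true
  runAsUser: 1001
```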
You can also configure liveness and readiness probes. To check whether the Kafka deployment is ready to serve client requests, set livenessProbe.enabled and readinessProbe.enabled. Set auth.tls.autoGenerated: true to enable TLS encryption for Kafka pod communication with other services.
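In values.yaml form, these settings look like the following fragment (exact parameter layout may vary by chart version):

```yaml
# values.yaml (fragment) -- health probes and auto-generated TLS certificates.
livenessProbe:
  enabled: true
readinessProbe:
  enabled: true
auth:
  tls:
    autoGenerated: true
```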
To deploy a Kafka cluster with three Kafka brokers and TLS authentication for both inter-broker and client communications, use the following parameters.
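One way to express this is sketched below. The auth.clientProtocol and auth.interBrokerProtocol parameter names follow older Bitnami-style chart conventions and have changed across chart versions, so treat this as an assumption and verify the names against your chart's values.yaml.

```shell
# Deploy three brokers with TLS for both client and inter-broker traffic.
helm install kafka-my-demo tac/kafka \
  --set replicaCount=3 \
  --set auth.clientProtocol=tls \
  --set auth.interBrokerProtocol=tls \
  --set auth.tls.autoGenerated=true
```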
Use the metrics.kafka.enabled: true parameter to create a standalone Kafka Prometheus exporter that exposes Kafka metrics, which can then be scraped by Prometheus. To expose JMX metrics to Prometheus, use the parameter metrics.jmx.enabled: true. To enable ZooKeeper chart metrics, use the corresponding metrics parameter of the ZooKeeper subchart.
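The two Kafka metrics settings can be combined in a values.yaml fragment like this:

```yaml
# values.yaml (fragment) -- expose Kafka and JMX metrics for Prometheus.
metrics:
  kafka:
    enabled: true   # standalone Kafka exporter
  jmx:
    enabled: true   # JMX exporter sidecar
```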
Lifecycle management of deployed Helm charts
The Helm CLI supports upgrading and deleting deployed Helm instances (called Helm releases). As Tanzu Application Catalog builds newer versions of the Helm charts by watching upstream revisions of chart dependencies, operators can deploy newer versions of Helm charts by upgrading the existing deployment.
You can also perform these lifecycle management activities, such as upgrading and deleting releases, for Kafka.
Upgrading a release
Upgrade a release using the following Helm CLI command.
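For example (assuming the repository alias tac from earlier):

```shell
# Refresh the repository index, then upgrade the release
# to the latest available chart version.
helm repo update
helm upgrade kafka-my-demo tac/kafka
```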
Deleting a release
Delete a release by using the following Helm command. If you want to permanently delete the deployment and all of its data, you’ll also need to delete the persistent volume claims; by default, Helm does not delete persistent volume claims, so as to prevent accidental data loss.
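For example (the PVC label selector assumes the chart applies the standard app.kubernetes.io/instance label):

```shell
# Delete the Helm release.
helm delete kafka-my-demo

# Persistent volume claims survive the delete; remove them explicitly
# only if you want to permanently discard the data.
kubectl delete pvc -l app.kubernetes.io/instance=kafka-my-demo
```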
While Tanzu Application Catalog allows you to build curated Helm charts with audit-ready capabilities, standard Helm tooling allows you to deploy and manage the lifecycle of the Helm charts—and by extension, the open source software that they package.
Learn more about VMware Tanzu Application Catalog.