How to Increase Developer Productivity with a Local Kubernetes Cluster

December 13, 2021 Victor Agung

At VMware Tanzu, we firmly believe that a deployment platform should improve developer productivity and drive the DevSecOps model of working. A recent VMware Tanzu Labs engagement in the Australia–New Zealand (ANZ) region illustrates this in the real world. 

Since 2020, a global medical diagnostics organization headquartered in the ANZ region has been working with VMware Tanzu Labs to transform into a modern digital business. One key aspect of this transformation has been using Kubernetes clusters to improve their developer productivity. Early in the Tanzu Labs engagement, this customer deployed VMware Tanzu Kubernetes Grid in their infrastructure, and we’ve helped them discover how Kubernetes can drive efficiency in the end-to-end value stream, from a developer’s laptop to code running in production. This article shares some of the best practices we used to enable these developers to deploy and run an application on a Kubernetes cluster.

An alternate development workflow to increase confidence 

One of the first areas we focused on during this engagement was establishing a streamlined development workflow with robust automation, aiming to decrease time to production and reduce deployment failures. Automated tests give engineers confidence that their code is sound before it runs in production; the goal is to extend that same confidence to the deployment step, so that deployment failures become a thing of the past.

Automation through a CI/CD pipeline enables rapid development of the software product, whereas manual deployment is error-prone and time-consuming. To gain complete control over the process, though, it’s important to understand how to handle an issue in the deployment to your Kubernetes cluster.

For most of us, deployment to Kubernetes happens inside the CI/CD pipeline. But what if you could push to CI/CD already confident that the deployment will not fail?

A typical development workflow would look something like this:

  1. Build the application

  2. Test the application code locally (typically 5–10 minutes)

  3. Push to CI 

  4. CI runs tests (typically 10–15 minutes)

  5. CI deploys to the Kubernetes environment when tests pass (typically 2–3 minutes)
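In a pipeline like this, the final deploy step usually boils down to a few commands run by the CI job. A minimal sketch, assuming a container registry and a Deployment named `myapp` (all names, namespaces, and the registry host here are illustrative, not from the engagement):

```shell
#!/usr/bin/env sh
# Sketch of a CI deploy stage that runs only after tests pass.
# Registry, image, namespace, and deployment names are placeholders.
set -eu

IMAGE="registry.example.com/myapp:${CI_COMMIT_SHA}"

docker build -t "$IMAGE" .
docker push "$IMAGE"

# Roll the new image out and fail the pipeline if it doesn't become ready.
kubectl --namespace staging set image deployment/myapp app="$IMAGE"
kubectl --namespace staging rollout status deployment/myapp --timeout=180s
```

If that final `rollout status` check times out, the deployment step has failed and the pipeline goes red.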

With this workflow, what happens when there is an issue with the deployment step? The options available to the developer would be:

  • Get the logs from CI to figure out what went wrong and, if the fix is obvious, apply and commit the new change to re-run the entire CI pipeline. 

  • Gain shell access on the node or container that performs the deployment step and debug the deployment there. 

  • Take a guess at what went wrong, apply a potential fix, and re-run the entire pipeline.
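All three options are workarounds for the same underlying need: basic kubectl introspection against the cluster the deployment targeted. The commands a developer typically reaches for look like this (resource names are illustrative):

```shell
# Standard introspection on a failed Kubernetes deployment.
kubectl get pods                              # which pods are unhealthy?
kubectl describe deployment/myapp             # rollout conditions and replica counts
kubectl describe pod -l app=myapp             # image pull errors, scheduling, failing probes
kubectl logs deployment/myapp                 # output from a pod behind the deployment
kubectl get events --sort-by=.lastTimestamp   # recent cluster-wide events
```

From a CI runner, each of these requires shipping credentials and shell access to the runner; on a cluster you control, they are one terminal away.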

Having an application deployed to a Kubernetes environment allows easy replication of the same infrastructure. With tools such as Minikube, Docker Desktop (Mac), and many others, developers can easily replicate the production environment on a local development machine. For this engagement we used Minikube, as we were on a Linux machine and setting up Minikube there was easier. The following summarizes the challenges we faced when debugging the deployment process and how a local Kubernetes environment resolves them.

Challenges with the typical development workflow, and how a local Kubernetes deployment resolves them:

  • Challenge: SSH access is not available on most CI platforms, because by the time you need it the process has terminated and released its resources.
    Resolution: Shell access is readily available, since everything runs on your local machine.

  • Challenge: CI is not built for introspection, and its log output may not be sufficient.
    Resolution: You have full access to the Kubernetes cluster; introspection and modifications can be done at will.

  • Challenge: Debugging your CI pipeline can block other team members from committing their changes.
    Resolution: Debugging locally frees up the CI pipeline for the rest of the team.

  • Challenge: Confirming that a fix worked requires re-running the entire CI pipeline, which increases the turnaround time for a fix.
    Resolution: Feedback is much quicker, enabling many small changes and a faster fix.
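Standing up Minikube on a Linux workstation is short work. A minimal sketch, assuming the Docker driver is available (the download URL is Minikube's official release location):

```shell
# Install the Minikube binary (Linux x86_64) and start a single-node cluster.
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube

minikube start --driver=docker   # creates a local single-node Kubernetes cluster
kubectl get nodes                # verify the node reports Ready
```

`minikube start` also sets kubectl's current context to the new cluster, so existing manifests apply unchanged.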

Here’s the basic workflow we implemented for this customer: 

  1. Build the application

  2. Test the application code locally (typically 5–10 minutes)

  3. Deploy the app to a local Kubernetes cluster (typically 2–3 minutes)

  4. Push to CI

  5. CI runs tests (typically 10–15 minutes)

  6. CI deploys to the Kubernetes environment when tests pass (typically 2–3 minutes) 
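Step 3, the local deployment, becomes the developer's inner loop. A sketch of one iteration against Minikube (the image tag, manifest directory, and deployment and service names are illustrative):

```shell
# One iteration of the local deploy loop: build straight into the cluster,
# apply the same manifests CI uses, and wait for the rollout.
minikube image build -t myapp:dev .        # builds inside Minikube; no registry push
kubectl apply -f k8s/                      # the same manifests the CI job applies
kubectl rollout status deployment/myapp --timeout=120s
kubectl port-forward service/myapp 8080:80 # smoke-test at localhost:8080
```

Because the manifests are identical to what CI applies, a rollout that succeeds here is strong evidence that the CI deployment step will succeed too.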

Benefits of local deployments 

Using the above workflow significantly reduced the customer's waiting time from 11–13 minutes to 2–3 minutes, allowing developers to diagnose a problem quickly and arrive at a solution. The table below shows how this affects the time to resolution.


                                    CI pipeline           Local deployment
  Time spent waiting for feedback   12 mins (0.2 hours)   3 mins (0.05 hours)
  Changes per hour                  4–5 changes           20–30 changes
  Typical time to resolution*       2 hours               30 mins
  Time saved in a month*            —                     2 developer days (15 hours)

*The assumption is that the fix required 10 tries.

To summarize: if the development team needed to apply fixes or changes to their deployment 10 times in a month, it would take, on average, 20 development hours (2.5 days) without a local deployment, but just 5 hours with one. That saved 2 developer days per month, or approximately 10 percent in productivity. 

ROI analysis of local deployment  

Setting up a local deployment requires an up-front cost. When working with customers, we must always do a cost-benefit analysis to determine whether this is a worthwhile investment based on their particular circumstances. 

To answer that, we first need to answer another question: How long does it take to deploy a local Kubernetes cluster? 

In this engagement we stood up a local end-to-end deployment, with tests, within one business day. This was done on a vanilla RHEL 8 (Red Hat Enterprise Linux 8.2) image in a secured environment behind a corporate proxy. The actual time to deploy the local Kubernetes cluster was only a fraction of the total time to achieve an end-to-end deployment. We can therefore conclude that a simple local Kubernetes environment can be achieved within hours.
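The corporate proxy was the main wrinkle. Minikube honors the standard proxy environment variables, but cluster-internal traffic must be excluded from the proxy or the node cannot reach its own services. A sketch with placeholder proxy details (the subnets are Minikube's defaults for the Kubernetes service CIDR and the Docker driver):

```shell
# Proxy settings exported before `minikube start` in a proxied environment.
# proxy.corp.example:3128 is a placeholder for your corporate proxy.
export HTTP_PROXY=http://proxy.corp.example:3128
export HTTPS_PROXY=http://proxy.corp.example:3128
# Keep in-cluster traffic off the proxy: the service CIDR and the Docker
# driver's default node subnet.
export NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.49.0/24

minikube start --driver=docker
```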

Factoring in 0.5 days for the initial setup, plus the waiting times from the table above (0.2 hours per change on the CI pipeline versus 0.05 hours with a local deployment), we can calculate how much total time is spent as the number of deployments grows.

Plotting this on a graph made the results obvious: even after factoring in the time invested in deploying a local Kubernetes cluster, we achieved a positive ROI after the third deployment fix or change. After 10 changes, the team had saved a whole day.

For some software projects, you may need to deploy to your Kubernetes infrastructure only once or twice. Most of the time, however, changes in application deployments are a matter of when—not if—they will happen. Using local Kubernetes clusters can help development teams adjust their deployment process efficiently in order to maximize productivity.
