Zero Cost, No Sign-up: Introducing Tanzu Observability for Spring Boot Applications

May 6, 2020 Pontus Rydin

Java application performance monitoring has traditionally been considered difficult. You had the choice between two undesirable options: overhauling your application to inject instrumentation, or using a bytecode instrumentation agent that increased the footprint of your application—as well as the risk of hard-to-debug application crashes.

Luckily, Spring Boot offers tools for automatically extracting metrics and traces from any application without writing any code or injecting bytecode at runtime. This article describes how to use those techniques in conjunction with Tanzu Observability by Wavefront.

What is Tanzu Observability by Wavefront?

Tanzu Observability by Wavefront (formerly called Wavefront) is an enterprise-grade observability solution delivered in a SaaS form factor and capable of handling millions of data points per second while offering advanced analytics in real time.

Micrometer and Spring Boot

Micrometer is a common data collection API and framework for “dimensional metrics,” those metrics that are arranged in a flat, non-hierarchical structure using arbitrary tags. Such metrics are easily ingested and analyzed by modern observability solutions. Micrometer is completely vendor-independent and can be applied on top of virtually any metric source.
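To make "dimensional" concrete, here is a minimal, stdlib-only sketch (deliberately not Micrometer's actual API): a metric is identified by a name plus arbitrary key/value tags, so each distinct tag combination becomes its own flat time series rather than a node in a hierarchy.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

public class DimensionalMetricSketch {
    // A dimensional metric = name + arbitrary tags,
    // e.g. http.server.requests{status=200, uri=/owners} -> count
    static Map<String, Double> metrics = new HashMap<>();

    static String key(String name, Map<String, String> tags) {
        // TreeMap gives a stable tag ordering, so equal tag sets map to one series
        return name + new TreeMap<>(tags);
    }

    static void increment(String name, Map<String, String> tags) {
        metrics.merge(key(name, tags), 1.0, Double::sum);
    }

    public static void main(String[] args) {
        Map<String, String> ok  = Map.of("uri", "/owners", "status", "200");
        Map<String, String> err = Map.of("uri", "/owners", "status", "500");
        increment("http.server.requests", ok);
        increment("http.server.requests", ok);
        increment("http.server.requests", err);
        // The same metric name yields two distinct series, one per tag combination:
        System.out.println(metrics.get(key("http.server.requests", ok)));  // 2.0
        System.out.println(metrics.get(key("http.server.requests", err))); // 1.0
    }
}
```

Because the tags are flat rather than nested, an observability backend can aggregate or filter on any tag ("all requests with status=500") without caring how the metric name is spelled.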

Read more about dimensional metrics with Micrometer and Spring Boot in this post.

By default, Spring Boot auto-configures a set of metrics consisting of, among others, JVM, CPU, memory, and file descriptor metrics, as well as metrics for common frameworks and utilities, such as Spring MVC, Tomcat, and RabbitMQ.

Micrometer is independent of metric source, as well as metric consumer. In this article, we will focus on Tanzu Observability by Wavefront.

You can find more information about Micrometer at micrometer.io.

Spring Boot auto configuration

Auto configuration is, in essence, a mechanism to automatically configure an application based on its dependencies. Put another way, it allows components to provide their share of configuration, thereby relieving the application programmer from having to implement large amounts of boilerplate code.
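The mechanism can be sketched without any Spring code at all. A simplified, stdlib-only illustration of the core trick (roughly what Spring Boot's @ConditionalOnClass does under the hood) is to probe the classpath and only configure a component when its dependencies are actually present:

```java
public class AutoConfigSketch {
    // Greatly simplified auto configuration: a component is configured only if
    // the classes it depends on are present on the classpath.
    static boolean classPresent(String className) {
        try {
            Class.forName(className, false, AutoConfigSketch.class.getClassLoader());
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // java.sql.Connection ships with the JDK, so a JDBC-related
        // auto configuration would activate here
        System.out.println(classPresent("java.sql.Connection"));      // true
        // A missing dependency means its auto configuration silently backs off
        System.out.println(classPresent("com.example.NotOnClasspath")); // false
    }
}
```

This is why adding a single dependency to the build, as we do below, is enough to activate an entire monitoring pipeline: the starter's auto configuration notices its own classes on the classpath and wires itself up.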

In this article, we’re demonstrating how Spring Boot auto configuration can be used to configure Micrometer and link it to an observability solution, in this case Tanzu Observability.

Trying it out

To demonstrate how the observability auto configuration works, we picked a simple test application: Spring Pet Clinic. Our goal is to set up application monitoring along with transaction tracing. The procedure is surprisingly simple!

Downloading the Pet Clinic application

To download the demo application used in this example, follow these instructions

Enabling Tanzu Observability 

To enable Tanzu Observability, add it as a dependency in the POM file. In the <dependencies> section, add the following:
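The dependency looks like this (coordinates taken from the Wavefront for Spring Boot starter; check the Wavefront documentation for the current version number):

```xml
<dependency>
  <groupId>com.wavefront</groupId>
  <artifactId>wavefront-spring-boot-starter</artifactId>
</dependency>
```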


This pulls in the Wavefront for Spring Boot Starter module, which automatically sets up monitoring through Micrometer and links it to Tanzu Observability. Since Tanzu Observability is delivered as a SaaS, no other software needs to be installed.

Enabling distributed tracing

We can also enable optional distributed tracing. This is typically done using the Sleuth framework and is automatically enabled by the auto configuration. In order for distributed tracing to work with our application, we need to add the following dependency:
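Assuming the Spring Cloud Sleuth starter (match the artifact version to your Spring Cloud release train), the dependency would look like:

```xml
<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-sleuth</artifactId>
</dependency>
```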


In addition to Sleuth, we also support OpenTracing as a tracing framework.

Setting application properties

Unless we explicitly disable it, the auto configuration will create a free Tanzu Observability account for us (more on this below). At that point, we could simply start the application and watch metrics and traces flow. The application would work just fine, but it would show up as “unknown_application”, because the observability platform doesn’t yet know the application or service name. Let’s fix that!

We can provide additional configuration for Tanzu Observability by editing the application properties. Typically, they live in a file called application.properties. In the Pet Clinic example, you would add properties to src/main/resources/application.properties.

Let’s add a couple of configuration items to make things look a bit nicer.

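The first item (the property name comes from the Wavefront starter; “demo” is the application name used in the rest of this walkthrough):

```properties
wavefront.application.name=demo
```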
This sets the display name of the application, which is what’s going to be shown in labels in the Tanzu Observability UI. The best practice is to provide unique and descriptive application names.


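Next, a property identifying the individual service within the application (again, the property name is the one defined by the Wavefront starter):

```properties
wavefront.application.service=spring-petclinic
```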
Modern applications are typically made up of several microservices. These services tend to share the same application name, but have different service layers. In our case, we have only a single service. Since it’s the code that handles the application UI, we can call it “spring-petclinic”.

Starting the application

First, let’s make sure the dependencies of the project are properly resolved. We can do that by simply running a clean build:

mvn clean package

Now we can run it using the built-in Maven goal for starting Spring Boot applications.

mvn spring-boot:run

You should now see output similar to this:

Connect to your Wavefront dashboard using this one-time use link:

That output tells us that a free Tanzu Observability tenant has been provisioned for us; it also provides a one-time URL that can be used to access it.

Bring your own account

Some of you may already have a paid or active trial account and would like to use it for Spring Boot observability. We can do that by configuring the application properties as described above. To configure your Tanzu Observability account, simply add the following to application.properties:

management.metrics.export.wavefront.api-token=<your API token>
management.metrics.export.wavefront.uri=https://<your cluster>

Be sure to edit the API token to match the API token of your account! Also, make sure the URI matches what you use to access the observability UI.

Once you start the application again, you should see data flowing to the Tanzu Observability tenant linked to your account.

A tour of Spring Boot Observability

Before we take the tour, we need to generate some traffic to the application. You can do that by navigating to http://localhost:8080 and clicking around in the user interface. A handful of clicks should be enough, but for more interesting data, try adding some owners and pets!

Once you have some traffic, copy and paste the URL from the application log (as described above) into your browser. You should see a summary of your application performance. If not enough data has been collected yet, you might instead see a sample application called “beachshirts”. If this happens, just reload the screen. You might also have to generate some more traffic to the application. Also notice that, by default, metrics are sent in 1-minute batches, so you might have to wait a short while before you see metrics.


Shortly after starting the application, you should see metrics and traces (if enabled) flowing into your tenant. Open the UI by navigating to the temporary URL printed to the application log or the URI you configured if using an existing account. From there, navigate to Browse->Metric and select, for example, jvm.memory.used. You should see data points appearing in a time series diagram.

But there’s much more to it than just JVM metrics. Everything that exposes metrics to Micrometer will automatically be discovered and its metrics collected.

Let’s select the http category by typing “http” in the search field of the metric browser and navigate to the maximum request time, “http.server.requests.max”. This will give us some basic performance numbers broken down by URL and response code. Feel free to click some other metrics to get an idea of what’s collected even in a simple test application like Pet Clinic!


While metrics summarize the behavior of an application as a collection of time series, traces deal with the individual interactions or transactions of an application. There are three important terms to understand when talking about distributed tracing: spans, traces, and baggage. Simply put, a “span” tells us about an individual operation, such as a function call, whereas a “trace” is the tree of spans tied together to describe an end-to-end interaction with an application. The term “baggage” (sometimes referred to as “tags”) is used to describe arbitrary data associated with a span, such as SQL statements or HTTP queries.
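These three terms can be pictured with a tiny, stdlib-only sketch (this is an illustration of the data model, not Sleuth's or OpenTracing's actual API): a trace is simply a tree of spans, and each span can carry baggage.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SpanSketch {
    // A span records one operation; a trace is the tree of spans rooted at the
    // first operation; "baggage" is arbitrary key/value data attached to a span.
    static class Span {
        final String operation;
        final List<Span> children = new ArrayList<>();
        final Map<String, String> baggage = new HashMap<>();
        Span(String operation) { this.operation = operation; }
        Span child(String op) { Span s = new Span(op); children.add(s); return s; }
    }

    // Count every span in the trace rooted at s
    static int size(Span s) {
        int n = 1;
        for (Span c : s.children) n += size(c);
        return n;
    }

    public static void main(String[] args) {
        Span root = new Span("GET /owners/1/edit");  // the whole trace
        Span query = root.child("SELECT owner");     // one traced operation
        query.baggage.put("db.statement", "select * from owners where id=1");
        root.child("render view");
        System.out.println(size(root)); // 3 spans in this trace
    }
}
```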

To check whether traces are coming through, navigate to Application->Application Status. If you set up the application and service names as described above, you should see something similar to this:

The values for Requests, Error, and Duration might not yet be populated if you just started the application. From here, you can click on the title of the application (“demo”) and you will see the services making up this application. In our case, there should only be one: the “spring-petclinic” service. Click “Dashboard” on this service and you’ll see an overview of the application metrics.

Again, your diagrams might not be populated yet, so you might have to drive some traffic to the application and wait a couple of minutes for metrics to appear. The upper portion of this screen shows what’s known as RED metrics, with RED standing for “Requests, Errors, and Duration.” The request duration/latency is displayed as a time series, as well as a histogram. The latter is the yellow bar chart, which gives you a breakdown of call durations. This is an extremely useful metric for understanding application performance and tracking down bottlenecks!

Before we start looking at traces, let’s scroll down a bit, where you’ll notice a set of metrics for the JVM and the machine it’s running on. Again, very useful when you’re chasing a performance issue.

Let’s drill deeper into the application! In the lower-right corner is a breakdown of the slowest request captured. Let’s click the slowest one to get more information about where the application is spending most of its time.

In our particular test application, there isn’t much latency at all, but it is rather interesting to look at nonetheless. Let’s look at the trace depicted above. We can see that it comes in through a controller method called “initUpdateForm”. That happens to be the code for the page that loads a pet owner, along with all their pets and appointments. As we can see, there’s a fair number of queries needed to load all that data. In our particular case, they’re all fast, which is nice. But let’s drill down into one of them to see the level of detail we can get to!

To get to this screen, we selected one of the query spans and expanded its tags. As you can see, we’re able to get to some very deep details. We can even see exactly what statement was executed, against which database, and how long it took. This is obviously invaluable when debugging a performance problem with an application.

Advanced topics

Tanzu Observability also offers a variety of additional tools.

Custom dashboards

A graphical, drag-and-drop-based dashboard builder allows you to save any diagram as a dashboard and keep adding more diagrams and interactions to it when browsing for metrics.

This tutorial provides a complete guide for how to build dashboards.


Alerts

In order to proactively handle situations adversely affecting the performance or functionality of an application, you need to be able to create rules-based alerts. These alerts can then be routed through various notification channels to emails, phone apps, or ticket-tracking systems. Tanzu Observability offers an advanced alerting framework with an easy-to-use user interface.

For more information, please see this documentation.

Custom metrics

Micrometer allows you to declare and use custom metrics in your application code. This isn’t intended to be a full tutorial on creating metrics in Micrometer, however, so for more information, have a look at this page.

In our application, we want a simple counter on how many lives a cat has used up. Cats have nine lives, remember? (No cats were harmed in the building of this example.)

Micrometer lets us do this with a single line of code! Whenever the cat has an accident, it uses up a life, so we implement this as a counter. To make sure we keep counts for each individual cat we tag the counter with the cat’s name. The code looks like this:

public String catAccident(@PathVariable("petId") int petId, ModelMap model) {
  Pet pet = this.pets.findById(petId);
  model.put("pet", pet);
  // Each accident uses up one life; tag the counter with the cat's name
  Metrics.counter("cat.lives.used", "name", pet.getName()).increment();
  return "petDetails"; // view name shown here is illustrative
}

To test this, we’re just going to hit the URL http://localhost:8080/owners/1/pets/1/accident

And sure enough, the poor cat is using up lives every time we hit that URL. So let’s not do that anymore…

Although this is a somewhat silly example, custom application metrics are extremely useful for keeping track of things like the revenues flowing through a system or the number and frequency of certain business events.


Conclusion

Application observability is often considered difficult to implement, since it requires code changes or invasive bytecode instrumentation. In this article, we have shown how Micrometer and Spring Boot can provide powerful application observability without any code changes using the auto configuration feature. To learn more about Tanzu Observability by Wavefront, check out our documentation.
