How to Instrument and Monitor Your Spring Boot 2 Application in Kubernetes Using Wavefront

December 21, 2019 Howard Yoo

In this post, I’ll walk you through instrumentation and monitoring of a simple standalone application developed with Spring Boot 2.0. The application is running as a container in Kubernetes. Spring Boot is an open-source Java-based framework used to create microservices. Kubernetes is a popular container orchestration system. See the references at the end of the page to learn more about Spring Boot and Kubernetes.

Assume that I want to observe my application’s performance at all levels of the stack. Here’s the scenario:

  • I’m monitoring my Kubernetes environment.
  • I’m collecting metrics coming from the application itself.
  • I add another dimension by instrumenting the application to send distributed traces.

Because all of those metrics are visible in Wavefront, I now have insights into a rich set of data coming from Kubernetes, the application, and the application’s traces.

The application I created for this blog was developed as a demo. It’s a Java application that has REST API endpoints for receiving requests. The responses are generated in JSON format. I named the application ‘loadgen’ because it generates simulated CPU and memory load to use a system’s computational resources. Find the source code of loadgen in my GitHub repo.

While developing this application, I decided to use:

  • Spring Boot 2.0 as the base framework, because it would help me build the application quickly
  • The Micrometer instrumentation library
  • The Wavefront reporter, which fits nicely with the Micrometer library

I used Maven, and I added the following dependency to my pom.xml file to enable the use of Micrometer:
<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-registry-wavefront</artifactId>
    <version>${micrometer.version}</version>
</dependency>

Step 1: Sending Application Metrics

The instrumentation of my loadgen application was quite simple. I added the following code to my application class (which uses WavefrontMeterRegistry as the registry):

// create a new registry
registry = new WavefrontMeterRegistry(config, Clock.SYSTEM);

// default JVM stats
new ClassLoaderMetrics().bindTo(registry);
new JvmMemoryMetrics().bindTo(registry);
new JvmGcMetrics().bindTo(registry);
new ProcessorMetrics().bindTo(registry);
new JvmThreadMetrics().bindTo(registry);
new FileDescriptorMetrics().bindTo(registry);
new UptimeMetrics().bindTo(registry);

Proxy Settings

The application.properties file specifies the following proxy config settings:

wf.prefix = kubernetes.loadgen
wf.proxy.enabled = true
wf.proxy.host = wavefront-proxy
wf.proxy.port = 2878
wf.duration = 10

The metrics that this Spring Boot application generates include JVM-related metrics such as class loading activity, garbage collection rates, CPU utilization, thread counts, file descriptor usage, and system uptime, all under the prefix ‘kubernetes.loadgen’, specified as wf.prefix above.

I chose to add kubernetes as the prefix to emphasize that this application runs in Kubernetes. When my Spring Boot application starts, it collects these metrics and sends them to the Wavefront proxy every 10 seconds, as defined by wf.duration above.
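
The snippet in Step 1 references a config object without showing how it is built. Below is a minimal sketch, assuming Micrometer's WavefrontConfig interface; the hard-coded values mirror the wf.* properties above, though the real loadgen code may wire them in differently (for example, via Spring's @Value injection):

import java.time.Duration;

import io.micrometer.core.instrument.Clock;
import io.micrometer.wavefront.WavefrontConfig;
import io.micrometer.wavefront.WavefrontMeterRegistry;

// minimal sketch: build the registry config from the wf.* settings
WavefrontConfig config = new WavefrontConfig() {
  @Override
  public String uri() {
    return "proxy://wavefront-proxy:2878"; // wf.proxy.host and wf.proxy.port
  }

  @Override
  public String globalPrefix() {
    return "kubernetes.loadgen"; // wf.prefix
  }

  @Override
  public Duration step() {
    return Duration.ofSeconds(10); // wf.duration: report every 10 seconds
  }

  @Override
  public String get(String key) {
    return null; // fall back to Micrometer's defaults for everything else
  }
};

// same registry creation as in Step 1
WavefrontMeterRegistry registry = new WavefrontMeterRegistry(config, Clock.SYSTEM);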

Step 2: Easy Distributed Tracing Instrumentation

I wanted to make sure that I could also send distributed traces from the application to Wavefront. The OpenTracing API is open source and describes tracing data in a vendor-neutral way.

Understanding how your code, components, calls, and services interact with each other is critical to understanding application performance. I added the following code to create a Spring Boot component responsible for maintaining a tracer instance, which can then be used to generate spans that track the application’s calls and performance.

That required adding the following dependency to my pom.xml file:

<dependency>
    <groupId>io.opentracing</groupId>
    <artifactId>opentracing-api</artifactId>
    <version>0.32.0</version>
</dependency>
<dependency>
    <groupId>io.opentracing</groupId>
    <artifactId>opentracing-util</artifactId>
    <version>0.32.0</version>
</dependency>
<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>19.0</version>
</dependency>
<dependency>
    <groupId>com.wavefront</groupId>
    <artifactId>wavefront-opentracing-sdk-java</artifactId>
    <version>1.7</version>
</dependency>

I also added the following properties to the application.properties file:

wf.proxy.histogram.port = 40000
wf.proxy.trace.port = 30000
...

wf.trace.enabled = true
wf.application = loadgen
wf.service = run


Creating Span Data

I created a new Java class called TraceUtil. My application can access the initialized tracer instance from that class, and can start creating span data for the traces:

package com.vmware.wavefront.loadgen;

import org.apache.log4j.Logger;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

import javax.annotation.PostConstruct;

import com.wavefront.opentracing.WavefrontTracer;
import com.wavefront.opentracing.reporting.CompositeReporter;
import com.wavefront.opentracing.reporting.ConsoleReporter;
import com.wavefront.opentracing.reporting.Reporter;
import com.wavefront.opentracing.reporting.WavefrontSpanReporter;
import com.wavefront.sdk.common.WavefrontSender;
import com.wavefront.sdk.common.application.ApplicationTags;
import com.wavefront.sdk.direct.ingestion.WavefrontDirectIngestionClient;
import com.wavefront.sdk.entities.tracing.sampling.ConstantSampler;
import com.wavefront.sdk.proxy.WavefrontProxyClient;

import java.net.InetAddress;
import java.net.UnknownHostException;

import io.opentracing.Tracer;
import io.opentracing.util.GlobalTracer;

@Component("traceutil")
public class TraceUtil {

  /* its own tracer */
  public Tracer tracer;

  public TraceUtil() {

  }

  @Value("${wf.proxy.enabled}")
  public boolean proxyEnabled;

  @Value("${wf.proxy.host}")
  public String proxyhost;

  @Value("${wf.proxy.port}")
  private String proxyport;

  @Value("${wf.trace.enabled}")
  public boolean traceEnabled;

  @Value("${wf.proxy.histogram.port}")
  private String histogramport;

  @Value("${wf.proxy.trace.port}")
  private String traceport;

  @Value("${wf.direct.enabled}")
  public boolean directEnabled;

  @Value("${wf.direct.server}")
  public String server;

  @Value("${wf.direct.token}")
  public String token;

  @Value("${wf.application}")
  public String application;

  @Value("${wf.service}")
  public String service;

  protected final static Logger logger = Logger.getLogger(TraceUtil.class);

  @PostConstruct
  public void init() {
    if (traceEnabled && application != null && service != null) {

      ApplicationTags appTags = new ApplicationTags.Builder(application, service).build();
      String hostname = "unknown";
      try {
        hostname = InetAddress.getLocalHost().getHostName();
      } catch (UnknownHostException e) {
        e.printStackTrace();
      }

      WavefrontSender wavefrontSender = null;
      if (proxyEnabled) {
        wavefrontSender = new WavefrontProxyClient.Builder(proxyhost).
            metricsPort(Integer.parseInt(proxyport)).
            distributionPort(Integer.parseInt(histogramport)).tracingPort(Integer.parseInt(traceport)).build();
      } else if (directEnabled) {
        wavefrontSender = new WavefrontDirectIngestionClient.Builder(server, token).build();
      }
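      // Note: if neither the proxy nor direct ingestion is enabled,
      // wavefrontSender stays null and the span reporter built below
      // cannot send spans (and may fail to build).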
      Reporter wfspanreporter = new WavefrontSpanReporter.Builder().withSource(hostname).build(wavefrontSender);
      Reporter consoleReporter = new ConsoleReporter(hostname);
      Reporter composite = new CompositeReporter(wfspanreporter, consoleReporter);
      logger.info("created new tracer with " + application + " : " + service);
      tracer = new WavefrontTracer.Builder(composite, appTags).withSampler(new ConstantSampler(true)).build();
    }
    else {
      logger.info("created new Global Tracer...");
      tracer = GlobalTracer.get();
    }
  }

  public Tracer getTracer() {
    return tracer;
  }
}

The tracer instantiates a WavefrontSpanReporter using the WavefrontSender (which holds the proxy host and the ports to report traces to), as well as the application name and service name as its default tags.

In loadgen’s controller class, which handles all the REST API requests, I added tracing code to each of the request paths. Whenever there is a request, a span recording how the execution ran is generated and tracked. Below is a snippet of my Controller class:

package com.vmware.wavefront.loadgen;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.stereotype.Component;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

import java.io.IOException;
import java.util.HashMap;
import java.util.ArrayList;
import java.util.Iterator;

import javax.annotation.PostConstruct;

import io.opentracing.Span;
import io.opentracing.Tracer;

@RestController
public class Controller extends HashMap<String, Object> {

  @Autowired
  private TraceUtil traceutil;

  public Controller() {
    super();
  }

  @PostConstruct
  public void init() {

  }

  @RequestMapping("/")
  public Response root() {
    return new Response("/", "dir", "['/cpu','/mem']");
  }

  @RequestMapping("/help")
  public Response help() {
    return new Response("help", "text", "Help messages are here..");
  }

  @RequestMapping("/cpu")
  public Response cpu() {
    return new Response("/cpu", "dir", "['/info','/run']");
  }

  @RequestMapping("/mem")
  public Response mem() {
    return new Response("/mem", "dir", "['/info','/run']");
  }

  ....

  private Tracer getTracer() {
    return traceutil.tracer;
  }

  ....

  @RequestMapping("/mem/run")
  public Response memrun(@RequestParam(value="threads", defaultValue="0") int threads,
                         @RequestParam(value="duration", defaultValue="0") int duration) {
    Span span = getTracer().buildSpan("/mem/run").start();
    String msg = "";
    if(threads > 0) {

      ArrayList<MemGen> gens = (ArrayList<MemGen>)get("/mem");
      if(gens == null) {
        gens = new ArrayList<>();
        put("/mem", gens);
      }

      if(!isRunning("/mem")) {
        gens.clear();
        for (int i = 0; i < threads; i++) {
          MemGen gen = new MemGen(duration, getTracer(), span);
          gens.add(gen);
          gen.start();
        }
        msg = String.format("{threads='%d', duration='%d', status='started'}", threads, duration);
      } else {
        duration = gens.get(0).getDuration();
        threads = gens.size();
        msg = String.format("{threads='%d', duration='%d', status='already running'}", threads, duration);
      }
    } else {
      msg = "need to specify two parameters, threads and duration.";
    }
    span.finish();
    return new Response("/mem/run", "text", msg);
  }
  ....
}
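
The Response class returned by these handlers isn't shown above. Here is a minimal sketch of what such a POJO might look like, with getters that Spring Boot's default Jackson serialization turns into the name/type/message JSON fields seen in the curl output later (the actual class is in the repo and may differ):

public class Response {

  private final String name;
  private final String type;
  private final String message;

  public Response(String name, String type, String message) {
    this.name = name;
    this.type = type;
    this.message = message;
  }

  public String getName() { return name; }
  public String getType() { return type; }
  public String getMessage() { return message; }
}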

You can view the full source code in my GitHub repository (linked in the references below).

Each of the @RequestMapping methods utilizes tracing by creating a new span at its starting point and closing it off by calling span.finish() at the end of its routine.
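
Notice that memrun() also hands the tracer and the active span to each MemGen worker. One plausible way for such a worker to attach its own child span, sketched here with hypothetical names rather than the repo's exact code (Span and Tracer are the io.opentracing types already imported in the controller):

// hypothetical sketch: how a worker like MemGen might link its work
// to the request span passed in from the controller
void runWithTracing(Tracer tracer, Span parentSpan, int duration) {
  Span child = tracer.buildSpan("memgen.work")
      .asChildOf(parentSpan)                 // parent = the /mem/run span
      .withTag("duration.seconds", duration) // tag the requested load duration
      .start();
  try {
    // ... allocate and hold memory for the requested duration ...
  } finally {
    child.finish(); // always close the span, even if the work throws
  }
}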

Now, the coding part to add instrumentation to loadgen is done, and I can build and package the application using Maven:

mvn clean package

Step 3: Creating a Docker Image

To make the application run inside a container, I created a Docker image using the following Dockerfile.

# base image
FROM openjdk:8-jdk-alpine

LABEL maintainer="hgy@gmail.com"
VOLUME /tmp
EXPOSE 8080

RUN apk update
RUN apk add curl

ARG JAR_FILE=target/loadgen-0.0.1-SNAPSHOT.jar
ADD ${JAR_FILE} loadgen-0.0.1.jar

# run the jar file
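# -Djava.security.egd points SecureRandom at the non-blocking urandom device,
# so the JVM does not stall on startup waiting for entropy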
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/loadgen-0.0.1.jar"]

When I run mvn clean package to build and package my loadgen application, it ends up as a packaged JAR file under the /target folder of the loadgen project. The Dockerfile adds that JAR to the JDK 8 base image, and the ENTRYPOINT instruction defines how the container runs the executable. Here’s how it works:

1. I create the Docker image by using the following docker build command:

docker build -t "howardyoo/loadgen:0.0.4" -t "howardyoo/loadgen:latest" .

2. After that, I log in to Docker Hub using:

docker login

3. Finally, I push the container image using the docker push command:

docker push "howardyoo/loadgen:0.0.4"

Now, the loadgen Spring Boot application is packaged inside a Docker image and uploaded to my Docker repository, ready to be used by Kubernetes.

Step 4: Setting Up the Kubernetes Cluster

There are many ways to run Kubernetes, but I decided to use the simplest, which is kind (Kubernetes in Docker). It uses my local Docker engine to run a single-node Kubernetes cluster, perfect for my testing purposes.

kind create cluster --name kind-wf

export KUBECONFIG="$(kind get kubeconfig-path --name="kind-wf")"

With my kind-wf Kubernetes cluster up and running, and with proper KUBECONFIG environment setup, I can now start using kubectl to set up the Wavefront integration for this cluster.

To configure the Wavefront integration for Kubernetes, just follow the instructions in the Wavefront documentation. I downloaded the deployment YAML files for both the Wavefront proxy and the Wavefront collector, which collect Kubernetes system metrics and send them to Wavefront.

I could even do the whole setup with a single Helm command, using the Helm chart for the Wavefront Kubernetes integration.

Step 5: Deploying the Loadgen Pod into the Kubernetes Cluster

Now we can deploy loadgen into the kind-wf cluster. I created the following simple deployment YAML file (loadgen-svc.yaml) to do it:

---
apiVersion: v1
kind: Namespace
metadata:
  name: loadgen
---
apiVersion: v1
kind: Service
metadata:
  name: wavefront-proxy
  namespace: loadgen
spec:
  type: ExternalName
  externalName: wavefront-proxy.default.svc.cluster.local
  ports:
  - name: wavefront
    port: 2878
    protocol: TCP
  - name: traces
    port: 30000
    protocol: TCP
  - name: histogram
    port: 40000
    protocol: TCP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: loadgen
  namespace: loadgen
  labels:
    app: loadgen
    user: howard
    version: 0.0.5
spec:
  selector:
    matchLabels:
      app: loadgen
  replicas: 2
  template:
    metadata:
      labels:
        app: loadgen
    spec:
      containers:
      - name: loadgen
        image: howardyoo/loadgen:0.0.4
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
        resources:
          limits:
            memory: 512Mi
          requests:
            memory: 256Mi
---
apiVersion: v1
kind: Service
metadata:
  namespace: loadgen
  name: loadgen-svc
  labels:
    app: loadgen-svc
    user: howard
spec:
  ports:
  - port: 8080
    targetPort: 8080
    protocol: TCP
  type: LoadBalancer
  selector:
    app: loadgen


This YAML file creates a Deployment that runs two instances of the loadgen container, load-balanced as a Service on port 8080 and accessible via that port. In the snippet above, loadgen references the Wavefront proxy service by its external name, defining it as ‘wavefront-proxy’ with the appropriate three ports. The Wavefront proxy listens:

  • for regular metrics on its default port 2878
  • for histograms on port 40000
  • for traces on port 30000

We have to define these ports so that the instrumented loadgen code can send all of its metrics, histograms, and spans to the Wavefront proxy, which will then forward the data to the Wavefront service running in the cloud.
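
With the manifest saved as loadgen-svc.yaml, deploying everything above is a single command:

kubectl apply -f loadgen-svc.yaml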

Step 6: Running Loadgen and Monitoring It Using Wavefront

We should now have the following types of metrics coming into Wavefront:

  • Kubernetes system metrics – how your cluster, nodes, pods, and containers are doing – delivered by the Wavefront collector
  • Application metrics (centered around JVM performance) for loadgen
  • Distributed traces for troubleshooting loadgen while it is running

Let’s run our loadgen by starting the kubectl proxy:

kubectl proxy

Then, run the following curl commands. The loadgen application is designed to be intuitive, offering you selections of things you can run. Note that the address of the proxy may vary (in my case, it was 127.0.0.1 on port 8001).

curl "http://127.0.0.1:8001/api/v1/namespaces/loadgen/services/loadgen-svc/proxy/"

{"name":"/","type":"dir","message":"['/cpu','/mem']"}

The response should be in JSON message format as shown above.

I can then drill down further by appending /cpu or /mem. So, if I add /cpu to the context root:

curl "http://127.0.0.1:8001/api/v1/namespaces/loadgen/services/loadgen-svc/proxy/cpu"

{"name":"/cpu","type":"dir","message":"['/info','/run']"}

I can either check the info or run the CPU load, and the endpoint gives instructions on how to run it:

curl "http://127.0.0.1:8001/api/v1/namespaces/loadgen/services/loadgen-svc/proxy/cpu/run"

{"name":"/cpu/run","type":"text","message":"need to specify two parameters, threads and duration."}

So, when running the CPU load, I need to provide two parameters: ‘threads’, the number of threads to spawn, and ‘duration’, the length of the load in seconds. Running the following request will generate some high CPU load:

curl "http://127.0.0.1:8001/api/v1/namespaces/loadgen/services/loadgen-svc/proxy/cpu/run?threads=5&duration=60"

{"name":"/cpu/run","type":"text","message":"{threads='5', duration='60', status='started'}"}

While it is running, I can also check the status of the run by issuing /cpu/info:

curl "http://127.0.0.1:8001/api/v1/namespaces/loadgen/services/loadgen-svc/proxy/cpu/info"

{"name":"/cpu/info","type":"text","message":"{duration='60', threads='5'}"}
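
Under the hood, each load thread simply keeps the CPU busy until its timer expires. Here is a hypothetical sketch of such a worker; the real CpuGen class in the loadgen repo may differ:

// hypothetical sketch of a CPU-burning worker thread
public class CpuGen extends Thread {

  private final int duration; // seconds to keep one core busy

  public CpuGen(int duration) {
    this.duration = duration;
  }

  @Override
  public void run() {
    long end = System.currentTimeMillis() + duration * 1000L;
    double sink = 0.0;
    while (System.currentTimeMillis() < end) {
      sink += Math.sqrt(Math.random()); // busy-work to keep the core hot
    }
  }
}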

If you are running this on your laptop, you will probably hear the CPU fan spin up, as loadgen pushes CPU usage to its peak. Let it run for a while, and check how our kind-wf cluster is doing using the Wavefront Kubernetes dashboard.

The dashboard to use is ‘Kubernetes Metrics by Namespace’; we can set the cluster to ‘kind-wf’ and the namespace to ‘loadgen’ in order to view loadgen-specific performance.

Since loadgen just induced a load, I can see that the pod named ‘loadgen-5c57b54cd7-zg67h’ was the one that took the load request.

Step 7: Visualizing Performance Metrics

Using the dashboard editor, we can quickly create a dashboard in the Wavefront UI to visualize the kubernetes.loadgen.* metrics.
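
For example, assuming Micrometer’s default jvm.memory.used metric name under our wf.prefix, a chart query like ts(kubernetes.loadgen.jvm.memory.used) plots the JVM memory usage reported by each loadgen instance.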

Step 8: Getting Distributed Traces with Wavefront

Earlier, we enabled the loadgen application to emit trace data. We can examine it by selecting Applications -> Inventory.

We click the loadgen link, and then click Details on the next page. There, we see the RED metrics generated by the Wavefront OpenTracing SDK – and the duration of service calls. An excellent new dimension of the application’s performance!

We click /cpu/run on the top-k bar (just under the duration chart) to drill into the details of the run’s traces.

There is just a single ‘run’ span per trace (as our microservice does not call other services within itself), and you can see the basic RED metrics (rate, errors, duration) of the service. So far there have been two invocations of run; the second run had 5 different threads, each running for approximately 60 seconds, just as we implemented it (generating CPU load).

Full-Stack Observability

By combining three distinct kinds of telemetry in Wavefront (application metrics, Kubernetes metrics, and distributed tracing data), developers now have complete observability into what’s happening from the top to the bottom of their applications.

  1. Developers can understand how their applications are performing through the number of requests, errors, and duration of their particular services. They can also understand how each component executes relative to other components.
  2. Developers can monitor all of the custom application-specific metrics they instrumented. That includes insights into how many instances are running, how many resources are utilized, and in which containers they are running.
  3. With an application running on top of an orchestration platform like Kubernetes, developers can deploy and run their applications with ease anywhere. They can correlate and quickly locate where their application is running and how many instances are running. It is also possible to observe everything that happens on the platform while tracking the performance of the applications. All of this gives developers a complete view of their stacks.

Wavefront is an Enterprise Observability as a Service platform, which means that Wavefront will always be there to receive your telemetry data, regardless of where you deploy and run your application.

Run your application on your laptop, or deploy it on Kubernetes in AWS. Move your application from AWS EKS to GCP GKE, or even to Azure AKS; it will not matter. Wavefront will never lose sight of your performance. Try it for yourself: sign up for our free trial.

References

https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/

https://spring.io/guides/gs/rest-service/

https://docs.wavefront.com/wavefront_sdks.html

https://docs.wavefront.com/micrometer.html

https://docs.wavefront.com/kubernetes.html

https://docs.wavefront.com/tracing_instrumenting_frameworks.html

https://github.com/howardyoo/loadgen



About the Author

Howard Yoo

Howard Yoo is a systems engineer at Wavefront by VMware. He always strives to improve things, and to make complicated problems simpler and easier to understand and solve. He loves his work.
