Case Study: Refactoring A Monolith Into A Cloud-Native App (Part 4)

April 8, 2016 Jared Gordon


This is the fourth installment of a series of articles covering the migration of a legacy monolithic application into a federation of related microservices. In Parts 1 and 2 we got a legacy app running on Pivotal Cloud Foundry and added a new service. In Part 3, we introduced resilience via the use of Eureka and Hystrix.

In Part 4, we are going to use the lessons learned in the previous posts to migrate the remainder of the data module. Then, we will step back to see where this has taken us.

Recapping the Methodology

Continuing the story—in Part 2, we outlined a methodology for identifying and migrating embedded components into distributed microservices:

  1. Locate the code and functionality to be refactored
  2. Identify a service interface as the “go-to” for the rest of the monolith
  3. Use the proxy pattern and modify existing code to exclusively funnel through this service interface
  4. Create a client implementation of the interface so the monolith can talk to our new microservice
  5. Point the monolith to the new service via the new client implementation
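The steps above can be sketched in plain Java (the names here are invented for illustration; the real SpringTrader interfaces and clients differ): the monolith depends only on the service interface, and the concrete implementation behind it can be swapped from in-process code to a remote client.

```java
// Step 2: the "go-to" interface for the rest of the monolith.
interface QuoteService {
    double getPrice(String symbol);
}

// Step 3: the original embedded implementation, now hidden behind the interface.
class LocalQuoteService implements QuoteService {
    public double getPrice(String symbol) {
        return 99.0; // placeholder for the old in-process lookup
    }
}

// Step 4: a client implementation of the same interface; in the real app
// this would make an HTTP call to the new microservice (stubbed here so
// the sketch stays runnable).
class RemoteQuoteService implements QuoteService {
    public double getPrice(String symbol) {
        return 101.5; // placeholder for the remote call's response
    }
}

// Step 5: the monolith is "pointed" at a service by injecting one
// implementation or the other; no other monolith code changes.
class TradingApp {
    private final QuoteService quotes;
    TradingApp(QuoteService quotes) { this.quotes = quotes; }
    double quote(String symbol) { return quotes.getPrice(symbol); }
}
```

Because `TradingApp` only ever sees the interface, switching from the embedded implementation to the microservice client is a wiring change, not a code change.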

We were able to use this method to extract a quote service. Can we repeat it for the remainder of the entities in the data module? In this post we will:

  • Use the same technique we used on QuoteService to “evaporate” the data module
  • Take a step back, run some metrics, and assess what we’ve accomplished

Our Approach

Large, interwoven domain models can be difficult to untangle, but bounded context techniques can help. Luckily, our model is pretty simple, and breaks nicely into three chunks:

  • Quotes: external market events that happen outside of the system
  • Accounts: users of the system, and their profiles
  • Orders: transactional events that are performed on behalf of the users

Quotes extraction was described earlier, so let’s look at our Accounts and Orders in a little more detail.


The new AccountService replaces Account and AccountProfile persistence with a single microservice. It is implemented as a simple Spring Boot, JPA-backed, REST-enabled app. The corresponding clients for this service are here and here on GitHub. Please note that this service is not (currently) secure: user information is transported in plain text for all to see in JSON payloads. Obviously, this is not yet “production ready.” We will deal with security in a future post—it’s too big a topic to delve into at this point.
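To make the service boundary concrete, here is a minimal plain-Java sketch of the idea (invented names; the real AccountService is the Spring Boot/JPA app linked above): Account and AccountProfile data are folded into a single aggregate behind one service contract.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified aggregate: profile fields live alongside account identity,
// so one service owns all of it.
class Account {
    final String userId;
    String fullName; // profile data folded into the same aggregate

    Account(String userId, String fullName) {
        this.userId = userId;
        this.fullName = fullName;
    }
}

// In-memory stand-in for the JPA-backed persistence behind the REST API.
class InMemoryAccountService {
    private final Map<String, Account> store = new HashMap<>();

    Account save(Account account) {
        store.put(account.userId, account);
        return account;
    }

    Account find(String userId) {
        return store.get(userId); // null if unknown
    }
}
```

The rest of the monolith only sees `save` and `find`; whether the backing store is a HashMap, a JPA repository, or a remote REST call is invisible to callers.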


OrderService encompasses Order and Holding, and follows the same Boot/JPA/REST approach used by the AccountService. Its clients are here and here on GitHub. This microservice carries over the oddly modeled relationship between Holding and Order—perhaps this service is itself a candidate to refactor (pull requests welcomed).

After letting the dust settle, SpringTrader now looks as follows (and you can compare it to the “before view” in Part 3):


Solution Details

As with previous posts, please refer to the git README for detailed information on how to build and run each of the services and SpringTrader. The diff can be consulted to see what individual code changes were needed to extract Accounts and Orders.

Some Analysis

Stepping back, let’s see the effect of these changes on the app.

Data Module (spring-nanotrader-data) has been transformed from a full-blown transactional super-module into a stateless glue code library (except for the FallbackQuoteService needed for Quote failover). We’ve also been able to eliminate a raft of dependencies and configurations from this module.
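Conceptually, the failover that remains in the data module works like this (a plain-Java sketch with invented names; the real FallbackQuoteService uses the Hystrix machinery introduced in Part 3): if the live quote source fails, answer from a local fallback instead of propagating the error.

```java
// A functional interface standing in for the quote lookup contract.
interface QuoteSource {
    double getPrice(String symbol);
}

// Wraps a primary (remote) source with a fallback (e.g. cached data).
class FailoverQuoteSource implements QuoteSource {
    private final QuoteSource primary;
    private final QuoteSource fallback;

    FailoverQuoteSource(QuoteSource primary, QuoteSource fallback) {
        this.primary = primary;
        this.fallback = fallback;
    }

    public double getPrice(String symbol) {
        try {
            return primary.getPrice(symbol);   // normal path: remote QuoteService
        } catch (RuntimeException e) {
            return fallback.getPrice(symbol);  // degraded path: local fallback data
        }
    }
}
```

Hystrix adds circuit breaking, timeouts, and metrics on top of this basic try/fallback shape, but the contract the rest of the app sees is the same.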

Chaos Module (spring-nanotrader-chaos) was used by the app to simulate market changes—we’re able to pull our chaos from the financial markets now via the QuoteService. So, we can eliminate this code. Shout-out to Brian Dussault for the fun easter eggs.

Tools Module (tools) is mostly used as a wrapper to drive the Chaos module, and it can be removed as well. We can now use the cf command line to chaotically bring down services as described in Part 3.

How do we quantify these changes? One way (though admittedly flawed) is by looking at Lines of Code (LoC). Info was gathered via the IntelliJ Statistics plugin, which is able to filter out blank lines, comments, and other “non-source” from our Java source files. We’re also ignoring test code:

LoC            Part 1   Part 2   Part 3   Part 4
Monolith         6783     6947     7401     6602
Microservices       0      571      937     1688
Total            6783     7518     8338     8290

Some Observations

  • Microservices have not been the ticket to an overall reduction in LoC. However, the size of the monolith is now shrinking.
  • The overall 22% increase in LoC can be balanced against increased capability: we have specifically improved scalability, resilience, and maintainability through code simplification and the use of modern libraries and approaches.
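The 22% figure follows directly from the totals in the LoC table:

```java
// Overall LoC growth from Part 1 to Part 4, using the table's totals.
public class LocGrowth {
    public static void main(String[] args) {
        int part1 = 6783;        // total LoC in Part 1
        int part4 = 6602 + 1688; // Part 4 monolith + microservices = 8290
        double pct = 100.0 * (part4 - part1) / part1;
        System.out.printf("%.0f%% increase%n", pct); // ~22%
    }
}
```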

Types of Changes

Thinking beyond Lines of Code, is there a way to categorize what these changes represent? In the following table we’ve grouped our changes by subjectively “eyeballing” the red and green lines on the git diff reports, and then bucketing classes based on an overall impression of what the changes accomplish.

The categories are:

  • Obsolesced: The “red” classes—this code was removed and not refactored elsewhere.
  • New Functionality: The “green” classes—this code provides functionality (or access to functionality) that did not exist before.
  • Remedial: These are a mixture of red and green within a class—these are changes needed to enforce service boundaries and generally clean things up.
  • Test related: These changes are related to the test code.

Category            Part 1 to Part 2   Part 2 to Part 3   Part 3 to Part 4
Obsolesced                       10%                10%                40%
New Functionality                20%                33%                 0%
Remedial                         45%                32%                35%
Test Related                     25%                25%                25%

There was a major code culling during Part 4. Some of this code could have been removed in Part 3, which would have made the “Obsolesced” numbers more even. It is interesting to note that the remedial changes seem to hold at about ⅓ of the total, and that test changes seem to consistently make up about ¼ of the total. We’ll keep tracking these numbers over future posts to see if this trend plays out.


Gradle tooling allows us to examine our dependencies. Java and Spring are notorious for their mega dependency stacks. We can look back and see what is happening with the runtime dependencies for the Data Module to see the effect of our changes via the use of the “gradle dependencies” task:

                           Part 1   Part 2   Part 3   Part 4
Data Module Dependencies       61       67      197      173

After an alarming increase in Part 3, the dependencies have dropped a bit in Part 4.

Emboldened by this, we re-tested the SpringTrader app, and it now runs under JDK 8. There are still issues with ASM that keep us at the 1.7 bytecode level—we will need to address these in the future.


  • By extracting two more microservices, we have reduced complexity within the data module and have eliminated two other modules.
  • We have also started to chip away at our dependencies puzzle by partly remediating the data module’s library stack.
  • We are able to run under JDK 8 provided we stick to 1.7 bytecode compatibility.

This leaves our current technical debt as:

  • Rationalize dependency management and revisit library versions (from Part 1).
  • Implement security across our microservices (new in Part 4).

Let’s see if we can tackle these in our next post.
