This is the third installment of a series of articles covering the migration of a legacy monolithic application into a federation of related microservices. Part 1 covered the context for the case study, explained our methodology, and implemented the changes needed to get the app running on Pivotal Cloud Foundry®. In Part 2 we introduced a live Quote data feed provided by a new microservice. This added value to our application, but also added risk due to increased network distribution.
Picking Up From Where We Left Off…
Initially, SpringTrader followed an n-tier logical architecture with a shared data layer, as below:
In the previous post, we replaced an “embedded” database-backed QuoteService with an external microservice that provided near-real-time market data. This validated our proposed methodology of identifying potential candidates for microservice-ification, and then integrating them using the Proxy Pattern.
Before we can continue spawning new microservices, we will need to address the issue of service availability—more moving parts (additional microservices) equals more things that can go wrong. What can we do to add resilience to an increasingly distributed app?
To summarize, our key goals for this round of changes are:
- Provide a way for SpringTrader to handle failures in remote services
- Provide a way to locate and manage (potentially many) loosely coupled services
- Do this while minimizing changes to the existing codebase
- Make sure we have a working system when this round of changes has been completed
In the previous post we utilized Spring Cloud Connectors to hook SpringTrader to a microservice. There were some drawbacks to this—we needed to create three new classes, several property files, and some XML configuration changes to get it working. And, this was just to connect a single microservice to our app. We need a simpler approach, and we are going to replace the Spring Cloud Connector mechanism with Service Discovery, using Eureka. Eureka will enable us to locate our microservices by name at a known catalog endpoint. This way, SpringTrader can look them up dynamically at runtime.
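To see why a name-based catalog simplifies things, here is a toy, in-memory model of what a Eureka-style registry provides: services register themselves under a name, and clients look them up at runtime instead of hard-coding URLs. This is a sketch of the concept only, not the Eureka API, and the service name and URL below are illustrative.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Toy stand-in for a Eureka-style registry: services register by name,
// clients resolve that name to a concrete location at runtime.
class ToyServiceRegistry {
    private final Map<String, List<String>> instances = new ConcurrentHashMap<>();

    // A service calls this on startup to announce itself.
    void register(String serviceName, String url) {
        instances.computeIfAbsent(serviceName, k -> new ArrayList<>()).add(url);
    }

    // A client resolves a service name to the first known instance, if any.
    Optional<String> lookup(String serviceName) {
        List<String> urls = instances.getOrDefault(serviceName, List.of());
        return urls.isEmpty() ? Optional.empty() : Optional.of(urls.get(0));
    }
}
```

With this shape, adding a new microservice means one `register` call on its side and one name on the client side, rather than new connector classes and configuration per service.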
Being able to find and attach to services easily is great, but how do we handle service outages?
Hystrix can help us here. From the website:
“Hystrix is a latency and fault tolerance library designed to isolate points of access to remote systems, services and 3rd party libraries, stop cascading failure and enable resilience in complex distributed systems where failure is inevitable.”
Hystrix does this by implementing the circuit breaker pattern—if a service becomes unavailable, a “fallback” method (to an alternate service with similar functionality) can be invoked instead. Later, normal operation is automatically restored once the failing service is back on its feet.
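To make the pattern concrete, here is a minimal, dependency-free sketch of the circuit-breaker idea in plain Java. This is not the Hystrix API, and it deliberately omits Hystrix's timed "half-open" retry that restores normal operation automatically; the class and method names are our own.

```java
import java.util.function.Supplier;

// Minimal circuit-breaker sketch: after a threshold of consecutive
// failures the circuit "opens" and calls go straight to the fallback;
// a successful primary call closes the circuit again.
class SimpleCircuitBreaker<T> {
    private final int failureThreshold;
    private int consecutiveFailures = 0;

    SimpleCircuitBreaker(int failureThreshold) {
        this.failureThreshold = failureThreshold;
    }

    T call(Supplier<T> primary, Supplier<T> fallback) {
        if (consecutiveFailures >= failureThreshold) {
            return fallback.get();      // circuit open: skip the primary entirely
        }
        try {
            T result = primary.get();
            consecutiveFailures = 0;    // success closes the circuit
            return result;
        } catch (RuntimeException e) {
            consecutiveFailures++;      // failure counts toward opening the circuit
            return fallback.get();
        }
    }
}
```

The key property is that once the circuit is open, a failing remote service is no longer even called, so its latency and errors cannot cascade into the rest of the application.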
Putting it Together
Continuing with the Proxy Pattern, what if we were able to create several different Quote Services that can provide us with Quotes from alternate sources? If one (primary) service is unavailable, we could turn to another (secondary) service.
If we then create clients that implement the QuoteService interface, we can swap services in and out, and the rest of the monolith would not know the difference. This would allow SpringTrader to continue functioning in the face of service failures, and help mitigate the risk due to increased distribution.
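The idea can be sketched in a few lines: several implementations behind one interface, tried in priority order. The names below are illustrative (SpringTrader's real QuoteService interface is larger, and the return type is a Quote domain object rather than a String).

```java
import java.util.List;

// Stand-in for SpringTrader's QuoteService interface.
interface QuoteService {
    String getQuote(String symbol);
}

// Proxy that tries delegates in order: primary, secondary, last resort.
class FailoverQuoteService implements QuoteService {
    private final List<QuoteService> delegates;

    FailoverQuoteService(List<QuoteService> delegates) {
        this.delegates = delegates;
    }

    @Override
    public String getQuote(String symbol) {
        for (QuoteService delegate : delegates) {
            try {
                return delegate.getQuote(symbol);
            } catch (RuntimeException e) {
                // this delegate is down; try the next one
            }
        }
        throw new IllegalStateException("no quote service available");
    }
}
```

Because the rest of the monolith only sees the QuoteService interface, the failover logic is invisible to it; swapping the live service for the database-backed one requires no changes outside the proxy.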
The New Services
The microservice we created in Part 2 will be our primary service—it provides live Quote data as per our requirements. Comparing the new Part 3 version with the previous version via the diff, there is very little substantive change—the most notable addition is the CloudConfig class. The @EnableEurekaClient annotation, together with a few entries in the yaml file, enables the service to register itself with Eureka on startup.
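For orientation, the yaml entries in question typically look something like the fragment below. The service name and registry URL here are assumptions for illustration, not the exact values from the Part 3 branch.

```yaml
# Hypothetical application.yml fragment for Eureka registration.
spring:
  application:
    name: quote-service          # the name clients will look up in Eureka
eureka:
  client:
    serviceUrl:
      defaultZone: http://eureka.example.com/eureka/   # the Eureka catalog endpoint
```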
Our secondary service will be a database-backed Quote service resurrected from the pre-refactor SpringTrader, but rewritten as a Spring Boot microservice. This service provides HATEOAS-flavored JSON, just to keep things interesting. For an insightful discussion of the benefits of HATEOAS, please refer to this post.
We also created a “service of last resort” that will live within the monolith. This is guaranteed to be up and available, but it will provide data of no real value. It is just there to make sure the app has something to talk to if both the primary and secondary services are unavailable.
Finally, here is a stand-alone Eureka service we can use to register our microservices, so SpringTrader can locate the primary and secondary services at runtime.
Making Use Of Our Multiple Services
Turning to our monolith, there’s a bit of work needed to connect things up to the new microservices:
- We need to create clients so the monolith can interact with the microservices. To take part in the Proxy Pattern, each of these clients will implement the QuoteService interface.
- We also need to mediate between the disparate JSON formats returned by the microservices, turning them into the Quote and MarketSummary domain objects expected by the rest of the SpringTrader application.
- We need a way for the monolith to find the services via Eureka.
- We need to weave in Hystrix fallbacks, as per the circuit-breaker pattern.
- Then, for testing and demo purposes, we need a way to bring services up and down, to make sure failover is working.
The Solution Details
To see the code from this round of changes, please refer to the diff between the part2 and part3 branches. As in the previous posts, the readme for the Part 3 branch goes into greater technical detail than we can cover here in the blog.
Now, let’s discuss the steps listed above in a little more detail.
1. The Clients
The client for the primary (live) QuoteService is here. This is fundamentally the same as the version from Part 2, but renamed to help differentiate it from the other implementations. The secondary (database) version of the service can be seen here.
2. Dealing With The JSON(s)
Our external services return different flavors of JSON—there is no reason to constrain them based upon existing SpringTrader idiosyncrasies. The mechanism to mediate between these JSON formats and the existing Domain classes is to implement GsonDecoders. These turn JSON responses into our domain objects. By encapsulating this functionality within the decoders, we are able to isolate any future API changes from the rest of SpringTrader. An example of this can be found here.
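The decoder idea can be sketched as follows. In the Part 3 code, Gson does the real parsing; the regex below is just a dependency-free stand-in, and the field names and the shape of the Quote class are illustrative, not SpringTrader's actual domain model.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of a decoder: each client owns a small translation step from its
// service's JSON to the shared domain object, so the rest of the monolith
// never sees the wire format.
class QuoteDecoderSketch {
    // Minimal stand-in for SpringTrader's Quote domain object.
    static final class Quote {
        final String symbol;
        final double price;
        Quote(String symbol, double price) {
            this.symbol = symbol;
            this.price = price;
        }
    }

    private static final Pattern SYMBOL = Pattern.compile("\"symbol\"\\s*:\\s*\"([^\"]+)\"");
    private static final Pattern PRICE  = Pattern.compile("\"price\"\\s*:\\s*([0-9.]+)");

    // Turn a JSON response into a Quote, regardless of how the fields
    // are nested or wrapped by the particular service.
    static Quote decode(String json) {
        Matcher s = SYMBOL.matcher(json);
        Matcher p = PRICE.matcher(json);
        if (!s.find() || !p.find()) {
            throw new IllegalArgumentException("unrecognized quote JSON");
        }
        return new Quote(s.group(1), Double.parseDouble(p.group(1)));
    }
}
```

If a service later changes its response format, only its decoder changes; the Quote and MarketSummary consumers elsewhere in SpringTrader are untouched.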
3. Service Discovery
With the new services registered in Eureka, the monolith can look up the primary and secondary Quote services by name at runtime, rather than hard-coding their locations.
4. Hystrix Fallbacks
Also inside the client code (example) we can see that some methods carry @HystrixCommand annotations and define a fallback method. This, plus some code in the CloudConfiguration class and the addition of the Hystrix libraries to our build, is all we need to implement the circuit breaker pattern in our monolith.
After these changes, SpringTrader now looks as follows:
5. Watch It Go
- Push the eureka service
- Push the real-time and database microservices—they will register themselves with Eureka
- Push the SpringTrader app using the deployApp script—it looks up the services, strings everything together, and starts displaying Quotes
Then, during market trading hours, you will see Quotes and market index information update every minute or so in the UI automatically.
Try the following steps to test it out:
- Stop the real-time primary service to simulate a failure. SpringTrader will fail over to the database-backed secondary service, and since that data is static, the Quote information in the UI will stop updating.
- Stop the database-backed secondary service to simulate an additional failure. The UI will display mock data as SpringTrader fails over to the “service of last resort.”
- Bring either of the microservices back online and normal operation is automatically restored; the UI will respond accordingly.
Where We Stand At The End Of Part 3
We have accomplished a lot, adding both flexibility and resilience to SpringTrader. Importantly, we can now continue to confidently microservice-ify the monolith to our heart’s content.
- We designed and implemented a series of Quote microservices with varying capabilities and characteristics.
- We strung these together using Hystrix, and made them discoverable with Eureka.
- We showed how to manipulate them using simple CF commands and watched the monolith react to service failures and recoveries.
In terms of our technical debt, we’ve been able to mitigate the risk due to increased distribution that was identified in Part 2 of the series. But we are really pushing the limit of our original stack. Scrutinizing the logs during application startup uncovers various warnings and complaints from Spring.
Our current technical debt is now:
- Upgrade from JDK 7 to JDK 8 (from Part 1)
- Rationalize dependency management and revisit library versions (from Part 1, exacerbated in Part 3)
Finally, it is worth noting that life would have been easier had we been able to use some of the more advanced capabilities of Spring Cloud. Unfortunately, we cannot yet take full advantage of these within SpringTrader because we are still constrained to JDK 7, Spring 3, and an aging library stack. In the next post, we will finish microservice-ifying our data layer, and then take a step back to see where we want to take this next.
- Read Part 1 and Part 2 in this blog post series
- Check out Part 1, Part 2, or Part 3 of the Cloud-Native Journey, which addresses a higher level of considerations for choosing and planning greenfield, legacy, or IT transformation projects
- Find more Cloud Foundry blog articles