An Introduction to Look-Aside Caching

February 7, 2017 Jagdish Mirani

Performance is critical to the success of any microservice. Overall performance is the result of applying ‘performance friendly’ techniques at various points in the design, development, and delivery of microservices. In many cases, however, you can make vast performance improvements through basic techniques like implementing and optimizing caching at various points between the consumers of data (users and applications) and the servers that store it. Caches can return data much faster than the disk-based databases that originate the data, because caches serve data from memory, providing lower-latency access. Caches are also usually located much closer to the consumers of data from a network topology perspective.

A cache can be inserted anywhere in the infrastructure where data delivery is congested. In this post, we’ll focus on look-aside caching, which serves as a highly performant alternative to accessing data from a microservice’s backing store. We will also clarify the meaning of various terms associated with caching patterns - such as look-aside, read-through, write-through, and write-behind caches - and when to choose each pattern.

Look-Aside Cache vs. Inline Cache

The two main caching patterns are the look-aside caching pattern and the inline caching pattern. The descriptions and differences between these patterns are summarized below.

Look-Aside Cache

How it reads:

  • Application requests data from the cache

  • Cache delivers the data, if available

  • If the data is not available, the application gets it from the backing store and writes it to the cache for future requests

How it writes:

  • Application writes new data, or updates to existing data, in both the cache and the backing store -or- all writes go to the backing store and the cached copy is invalidated

Inline Cache

How it reads:

  • Application requests data from the cache

  • Cache delivers the data, if available

  • Key difference: if the data is not available, the cache retrieves it from the backing store (read-through), caches it, then returns the value to the requesting application

How it writes:

  • Application writes new data, or updates to existing data, in the cache

  • The cache writes the data to the backing store either synchronously (write-through) or asynchronously (write-behind)

Look-Aside Caching 101

In the look-aside caching pattern, if the data is not cached, the application gets the data from the backing store and puts it into the cache for subsequent reads.
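As a minimal sketch of this pattern, the look-aside read and write paths might look like the following, using an in-memory ConcurrentHashMap as a stand-in for a real cache server and a hypothetical loadFromBackingStore method in place of a database query:

```java
import java.util.concurrent.ConcurrentHashMap;

public class LookAsideExample {
    // Stand-in for a real cache server (e.g., Redis or GemFire)
    static final ConcurrentHashMap<String, String> cache = new ConcurrentHashMap<>();

    // Hypothetical backing-store read; a real application would query a database here
    static String loadFromBackingStore(String key) {
        return "value-for-" + key;
    }

    // Look-aside read: check the cache first; on a miss, the APPLICATION
    // reads the backing store and populates the cache for future requests.
    static String get(String key) {
        String value = cache.get(key);
        if (value == null) {                    // cache miss
            value = loadFromBackingStore(key);  // application reads the backing store
            cache.put(key, value);              // application populates the cache
        }
        return value;                           // subsequent calls hit the cache
    }

    // Look-aside write with invalidation: write to the backing store,
    // then drop the stale cached copy so the next read repopulates it.
    static void update(String key, String newValue) {
        // (a real application would persist newValue to the database here)
        cache.remove(key); // invalidate the cached copy
    }
}
```

Note that the cache-population and invalidation logic lives entirely in the application; the cache itself never talks to the backing store.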

The upside of this pattern is that it doesn’t require the developer to deploy any code to the cache servers. Instead, the look-aside pattern puts the developer and the application code in charge of managing the cache. That control, however, comes with the burden of managing the cache. Coding frameworks, like the Spring Framework, can mitigate this burden via a caching abstraction, which provides a uniform mechanism for developers to work with a cache, regardless of which specific caching technology is being used.

The abstraction provides a set of Java annotations, like the @Cacheable annotation on a method: on a cache miss, the method executes and its result is cached; on a hit, the cached value is returned without invoking the method. Developers can learn and use Spring’s cache abstraction rather than the specifics of each caching technology. Time-based expiration of data, built into most caching products, can further reduce the cache management burden.
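As an illustration, a service using the abstraction might look like this (a fragment, not a runnable application; ProductService, Product, and repository are hypothetical names, while @Cacheable and @CacheEvict are real Spring annotations):

```java
@Service
public class ProductService {

    // On a cache miss, Spring invokes the method and caches the returned value
    // in the "products" cache; on a hit, the method body is skipped entirely.
    @Cacheable("products")
    public Product findById(String id) {
        return repository.findById(id); // backing-store read, only on a miss
    }

    // Invalidate the cached entry when the underlying data changes.
    @CacheEvict(value = "products", key = "#product.id")
    public void update(Product product) {
        repository.save(product);
    }
}
```

The same annotations work whether the cache behind them is a local map, Redis, or GemFire; only the configuration changes.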

Look-aside caching is primarily used for data that does not change often. If the data in the backing store changes frequently, the volume of invalidation notifications can erode the benefits of caching.

More Control in the Application Layer

In contrast to inline caching, look-aside caching is declarative - the developer tells the application what to cache, not how to do it. With inline caching, by contrast, the developer must deploy code to the cache servers, including imperative logic for handling cache misses. The developer can also optionally deploy code that pushes cache writes to the backing store either synchronously or asynchronously.

So, a key difference between inline and look-aside caching patterns is what the application code does versus what the cache does. In the look-aside caching pattern, there is more control in the application layer. In the inline caching pattern, code is deployed into the cache servers, and then the cache takes control of reading from and writing to the backing store.
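By contrast with the look-aside sketch earlier, an inline cache owns the interaction with the backing store: callers only ever talk to the cache. A minimal read-through/write-through sketch, again using in-memory maps as stand-ins for the cache server and the database:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class InlineCacheExample {
    // An inline cache wraps the backing store; the application never touches it directly.
    static class InlineCache {
        private final Map<String, String> entries = new HashMap<>();
        private final Map<String, String> backingStore; // stand-in for a database
        private final Function<String, String> loader;  // read-through hook

        InlineCache(Map<String, String> backingStore) {
            this.backingStore = backingStore;
            this.loader = backingStore::get;
        }

        // Read-through: on a miss, the CACHE (not the application)
        // fetches the value from the backing store and caches it.
        String get(String key) {
            return entries.computeIfAbsent(key, loader);
        }

        // Write-through: the cache synchronously propagates the write
        // to the backing store before returning. (A write-behind cache
        // would queue this propagation and perform it asynchronously.)
        void put(String key, String value) {
            entries.put(key, value);
            backingStore.put(key, value);
        }
    }
}
```

Notice that the miss-handling and write-propagation logic sits inside the cache class itself; in a real inline-caching product, this is the code the developer deploys to the cache servers.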

Rapid Self-Service Platform

Caching and invalidation are considered to be some of the deeper topics in computer science. The patterns we discussed in this article only begin to scratch the surface of caching techniques. Understanding the terminology around caching patterns provides a good grounding for approaching deeper, more advanced topics.

As cloud-native platforms and microservices continue to rise in popularity, developers are turning to tools like Pivotal Cloud Foundry to provision caching infrastructure on-demand as a backing service to their application deployments. Providing developers with a platform to rapidly self-service their infrastructure needs is just one of the ways Pivotal is helping customers transform how they build software.

About the Author

Jagdish Mirani

Jagdish Mirani is an enterprise software executive with extensive experience in Product Management and Product Marketing. Currently he is in charge of Product Marketing for Pivotal's data services (Cloud Cache, MySQL, Redis, PostgreSQL). Prior to Pivotal, Jagdish was at Oracle for 10 years in their Data Warehousing and Business Intelligence groups. More recently, Jag was at AgilOne, a startup in the predictive marketing cloud space. Prior to AgilOne, Jag held various Business Intelligence roles at Business Objects (now part of SAP), Actuate (now part of OpenText), and NetSuite (now part of Oracle). Jagdish holds a B.S. in Electrical Engineering and Computer Science from Santa Clara University and an MBA from the U.C. Berkeley Haas School of Business.
