#### Background
In today's tech world, performance and scalability are two main factors when we design a system. Caching was born for performance improvement, but when it meets scaling requirements, things get interesting. In this post and the upcoming ones, I will describe a couple of caching strategies and compare them in a distributed environment.
#### Strategies
In this section, I will describe five caching strategies: cache aside, read through, refresh ahead, write through, and write behind.
#### Cache Aside
This is where the application is responsible for reading from and writing to the database, and the cache doesn't interact with the database at all. The cache is "kept aside" as a faster and more scalable in-memory data store. The application checks the cache before reading anything from the database, and it updates the cache after making any updates to the database. This way, the application ensures that the cache is kept synchronized with the database.
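Here is a minimal single-process sketch of cache aside, using plain dicts as hypothetical stand-ins for the cache (a Redis client in practice) and the database:

```python
cache = {}                      # in-memory cache stand-in
database = {"user:1": "Alice"}  # slow backing-store stand-in

def get(key):
    # Check the cache first; fall back to the database on a miss.
    if key in cache:
        return cache[key]
    value = database.get(key)
    if value is not None:
        cache[key] = value      # populate the cache for future reads
    return value

def update(key, value):
    # Write to the database first, then keep the cache in sync.
    database[key] = value
    cache[key] = value          # or cache.pop(key, None) to invalidate instead
```

Note that both the read path and the write path live entirely in the application; the cache itself is a passive store.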
#### Read Through
This is where the cache store is responsible for reading from the database. When an application asks the cache for a certain key and misses, the cache store fetches the data from the database, populates the cache for future use, and then returns the data to the application. Read performance can be improved further with refresh ahead.
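A minimal sketch of the idea, assuming a hypothetical `loader` callback that the cache store invokes on a miss (the dict again stands in for the database):

```python
database = {"user:1": "Alice"}  # backing-store stand-in

class ReadThroughCache:
    def __init__(self, loader):
        self._data = {}
        self._loader = loader   # function the cache calls on a miss

    def get(self, key):
        if key not in self._data:
            # Cache miss: the cache, not the application, fetches the
            # value from the data source and keeps it for future reads.
            self._data[key] = self._loader(key)
        return self._data[key]

cache = ReadThroughCache(loader=lambda key: database.get(key))
print(cache.get("user:1"))  # first call hits the database, later calls do not
```

The difference from cache aside is who owns the miss-handling logic: here the application only ever talks to the cache.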
#### Refresh Ahead
This allows developers to configure a cache to automatically and asynchronously reload any recently accessed cache entry from the cache loader before it expires. The result is that once a frequently accessed entry has entered the cache, the application will not feel the impact of a read against a potentially slow cache store when the entry is reloaded due to expiration. The asynchronous refresh is only triggered when an object that is sufficiently close to its expiration time is accessed.
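Below is a rough single-process sketch, assuming a hypothetical TTL and a refresh threshold at 75% of it (caching products typically expose both as configuration). Concurrency control is omitted for brevity:

```python
import threading
import time

database = {"user:1": "Alice"}  # slow data-source stand-in
TTL = 60.0                      # entry lifetime in seconds (assumed)
REFRESH_AHEAD_FACTOR = 0.75     # refresh once 75% of the TTL has passed

cache = {}  # key -> (value, loaded_at)

def _reload(key):
    cache[key] = (database.get(key), time.monotonic())

def get(key):
    entry = cache.get(key)
    if entry is None:
        _reload(key)            # cold miss: synchronous load
        return cache[key][0]
    value, loaded_at = entry
    if time.monotonic() - loaded_at > TTL * REFRESH_AHEAD_FACTOR:
        # Entry is close to expiring: reload asynchronously so this read
        # (and the ones that follow) never wait on the data source.
        threading.Thread(target=_reload, args=(key,), daemon=True).start()
    return value                # return the still-valid cached value
```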
#### Write Through
When the application updates a piece of data in the cache, the operation will not complete or return until the database has also been updated. This does not improve write performance, since you are still paying the latency of the write to the database. Improving write performance is the purpose of write behind.
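In code, write through is little more than a synchronous database write on the cache's write path; a minimal sketch with dict stand-ins:

```python
database = {}   # backing-store stand-in
cache = {}

def put(key, value):
    # The call does not return until both stores are updated, so the
    # caller pays the full database write latency on every update.
    database[key] = value   # synchronous database write
    cache[key] = value
```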
#### Write Behind
In this strategy, modified cache entries are asynchronously written to the data source after a configurable delay, whether 10 seconds, 20 minutes, a day, or even a week or longer. For write-behind caching, the cache maintains a write-behind queue of the data that must be updated in the data source. When the application updates X in the cache, X is added to the write-behind queue (if it isn't there already; otherwise, it is replaced), and after the specified write-behind delay the cache updates the underlying database with the latest state of X. Note that the write-behind delay is relative to the first of a series of modifications; in other words, the data in the data source will never lag behind the cache by more than the write-behind delay.
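Here is a simplified sketch of the write-behind queue, with a hypothetical background flusher thread and dict stand-ins for the cache and database. Note how only the *first* modification time per key is recorded, which is exactly what bounds the lag described above:

```python
import threading
import time

database = {}                 # backing-store stand-in
WRITE_BEHIND_DELAY = 10.0     # seconds between a first update and its flush

class WriteBehindCache:
    def __init__(self):
        self._data = {}
        self._dirty = {}      # key -> time of *first* unflushed modification
        self._lock = threading.Lock()
        threading.Thread(target=self._flusher, daemon=True).start()

    def put(self, key, value):
        with self._lock:
            self._data[key] = value
            # Keep the first modification time if the key is already queued,
            # so the data source never lags by more than WRITE_BEHIND_DELAY.
            self._dirty.setdefault(key, time.monotonic())

    def _flusher(self):
        while True:
            time.sleep(1.0)
            now = time.monotonic()
            with self._lock:
                due = [k for k, t in self._dirty.items()
                       if now - t >= WRITE_BEHIND_DELAY]
                for key in due:
                    database[key] = self._data[key]  # write the latest state
                    del self._dirty[key]
```

Because repeated updates to the same key are coalesced into one queue entry, a hot key that is updated a thousand times within the delay window produces a single database write.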
#### Read Through / Write Through vs. Cache Aside
There are two common approaches to the cache-aside pattern in a clustered environment. One involves checking for a cache miss, then querying the database, populating the cache, and continuing application processing. This can result in multiple database hits if different application threads perform this processing at the same time. Alternatively, applications may perform double-checked locking (which works because the check is atomic with respect to the cache entry), as sketched below. This, however, adds substantial overhead on a cache miss or a database update: a clustered lock, an additional read, and a clustered unlock, up to 10 additional network hops, or 6-8 ms on a typical Gigabit Ethernet connection, plus additional processing overhead and an increase in the lock duration for a cache entry.
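For illustration, here is the double-checked pattern in a single process; in a clustered cache, the lock acquisition, the second read, and the unlock each become network operations, which is where the overhead above comes from:

```python
import threading

database = {"user:1": "Alice"}   # slow data-source stand-in
cache = {}
lock = threading.Lock()          # stands in for a clustered lock

def get(key):
    # First (unlocked) check: the common fast path.
    if key in cache:
        return cache[key]
    with lock:
        # Second check under the lock: another thread may have populated
        # the entry while we waited, sparing a redundant database hit.
        if key in cache:
            return cache[key]
        value = database.get(key)
        cache[key] = value
        return value
```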
By using inline caching, the entry is locked for only two network hops (while the data is copied to the backup server for fault tolerance), and the locks are maintained locally on the partition owner. Furthermore, application code runs entirely on the cache server, meaning that only a controlled subset of nodes directly accesses the database (resulting in more predictable load and better security). This also decouples cache clients from database logic.
#### Refresh Ahead vs. Read Through
Refresh-ahead offers reduced latency compared to read-through, but only if the cache can accurately predict which cache items are likely to be needed in the future. With full accuracy in these predictions, refresh-ahead will offer reduced latency and no added overhead. The higher the rate of misprediction, the greater the impact will be on throughput (as more unnecessary requests will be sent to the database) – potentially even having a negative impact on latency should the database start to fall behind on request processing.
#### Write Behind vs. Write Through
If the requirements for write-behind caching can be satisfied, it can deliver considerably higher throughput and lower latency than write-through caching. Additionally, write-behind caching lowers the load on the database (fewer writes, since updates to the same entry are coalesced in the write-behind queue) and on the cache server (reduced cache value deserialization).