Cache Eviction Policies | System Design

ZeeshanAli-0704 - Oct 31 - Dev Community

Table of Contents

  1. What Are Cache Eviction Policies?
  2. Cache Eviction Policies
     1. Least Recently Used (LRU)
     2. Least Frequently Used (LFU)
     3. First-In-First-Out (FIFO)
     4. Random Replacement
  3. Summary

What Are Cache Eviction Policies?

Cache eviction policies define the rules for deciding which data to remove when the cache reaches its storage limit. These policies help manage cache space efficiently, ensuring quick access to frequently or recently accessed data. Let’s look at some common cache eviction policies with examples to illustrate their functionality.
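Before diving in, it helps to see the shape that all of these policies share in code: a bounded store that, when a write would exceed capacity, asks the policy which resident key to drop. Here is a minimal Python skeleton of that idea (class and method names such as `choose_victim` are illustrative, not from any particular library):

```python
class BoundedCache:
    """Hypothetical skeleton shared by the policies below; names are illustrative."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.values = {}

    def choose_victim(self):
        # Each eviction policy supplies its own answer here.
        raise NotImplementedError

    def put(self, key, value):
        # On overflow, ask the policy which resident key to drop.
        if key not in self.values and len(self.values) >= self.capacity:
            del self.values[self.choose_victim()]
        self.values[key] = value
```

Each policy in the rest of this article is, in effect, a different implementation of that victim-selection rule.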


1. Least Recently Used (LRU)

Description:

The Least Recently Used (LRU) policy evicts the data that has not been accessed for the longest period. LRU assumes that data accessed recently will likely be accessed again soon, while data that hasn't been used in a while is less likely to be accessed.

Example:

Imagine a web server caching pages for a website, where the cache can hold three pages. The server is set to use the LRU policy. Suppose users access the pages in this order:

  1. Access page A
  2. Access page B
  3. Access page C
  4. Access page A (making A the most recently used)
  5. Access page D

After the first four accesses, the cache is full with pages A, B, and C, where A is the most recently used and B the least. When page D is accessed, page B is evicted to make space, and the cache now holds pages A, C, and D.

How It Works:

  • LRU is typically implemented with a doubly linked list combined with a hash map: each access moves the item to the most-recently-used end of the list, while the hash map provides constant-time lookup.
  • The least recently used item always sits at the opposite end of the list, so it can be identified and removed in constant time (see the sketch below).
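A minimal Python sketch of this structure, using `collections.OrderedDict` (which internally pairs a hash map with a doubly linked list) and replaying the page-access example above; the class and variable names are illustrative:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU sketch: OrderedDict keeps keys in recency order."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None
        # Move the key to the end to mark it as most recently used.
        self.items.move_to_end(key)
        return self.items[key]

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            # Evict the least recently used item (front of the ordering).
            self.items.popitem(last=False)

# Replaying the page example: A, B, C, then A again, then D evicts B.
cache = LRUCache(3)
for page in ["A", "B", "C"]:
    cache.put(page, f"content of {page}")
cache.get("A")                   # A becomes most recently used
cache.put("D", "content of D")   # cache is full; B is evicted
print(list(cache.items))         # ['C', 'A', 'D']
```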

2. Least Frequently Used (LFU)

Description:

The Least Frequently Used (LFU) policy evicts the data that has been accessed the fewest times. LFU assumes that data accessed less often is less valuable and should be evicted first when the cache is full.

Example:

Suppose a mobile application caches images for faster access. The cache can hold three images, and the access pattern is as follows:

  1. Access image X (access count for X = 1)
  2. Access image Y (access count for Y = 1)
  3. Access image Z (access count for Z = 1)
  4. Access image X again (access count for X = 2)
  5. Access image W (eviction needed)

When image W arrives, the cache is full with images X, Y, and Z. Images Y and Z are tied at the lowest access count (1), so either could be evicted to make space for image W. Most implementations break such a tie by evicting the least recently used of the tied items, which here would be image Y.

How It Works:

  • LFU can be implemented with a frequency counter that tracks how many times each item is accessed.
  • It is often combined with LRU as a secondary policy to resolve ties between items with the same frequency count, as shown in the sketch below.
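A minimal Python sketch of this combination, a frequency counter plus a logical clock for the LRU tie-breaker, replaying the image example above; names are illustrative:

```python
from collections import defaultdict
from itertools import count

class LFUCache:
    """Minimal LFU sketch: evict the lowest-frequency key, breaking ties by recency."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.values = {}
        self.freq = defaultdict(int)
        self.last_used = {}
        self.clock = count()   # logical timestamps for the LRU tie-breaker

    def _touch(self, key):
        self.freq[key] += 1
        self.last_used[key] = next(self.clock)

    def get(self, key):
        if key not in self.values:
            return None
        self._touch(key)
        return self.values[key]

    def put(self, key, value):
        if key not in self.values and len(self.values) >= self.capacity:
            # Lowest access count loses; ties go to the least recently used.
            victim = min(self.values, key=lambda k: (self.freq[k], self.last_used[k]))
            for table in (self.values, self.freq, self.last_used):
                del table[victim]
        self.values[key] = value
        self._touch(key)

# Replaying the image example: X, Y, Z, then X again, then W evicts Y.
cache = LFUCache(3)
for image in ["X", "Y", "Z"]:
    cache.put(image, f"{image}.png")
cache.get("X")               # X now has count 2
cache.put("W", "W.png")      # Y and Z tie at count 1; Y is older, so Y goes
print(sorted(cache.values))  # ['W', 'X', 'Z']
```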

3. First-In-First-Out (FIFO)

Description:

The First-In-First-Out (FIFO) policy removes data in the order it was added to the cache, irrespective of how frequently or recently it has been accessed. FIFO is simple but can lead to poor cache performance in cases where older data may still be relevant.

Example:

Consider an IoT device storing sensor readings in a cache that can hold only three data entries. When readings come in, they are added to the cache in the order they arrive:

  1. Reading A arrives (cache: A)
  2. Reading B arrives (cache: A, B)
  3. Reading C arrives (cache: A, B, C)
  4. Reading D arrives (cache is full; A is evicted based on FIFO)

When reading D arrives, the oldest entry (A) is removed to make room for it. The cache now holds B, C, and D, regardless of the importance or recency of those readings.

How It Works:

  • FIFO is typically implemented with a simple queue, where the oldest element sits at the front; see the sketch below.
  • This makes FIFO easy to implement, but it can evict data that is still in active use, hurting hit rates.
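A minimal Python sketch of the queue-based approach, replaying the sensor example above; names are illustrative:

```python
from collections import deque

class FIFOCache:
    """Minimal FIFO sketch: a queue tracks insertion order, a dict holds the data."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.order = deque()   # oldest key at the left
        self.values = {}

    def get(self, key):
        # Reads do not affect eviction order in FIFO.
        return self.values.get(key)

    def put(self, key, value):
        if key not in self.values:
            if len(self.values) >= self.capacity:
                oldest = self.order.popleft()   # evict the first-inserted key
                del self.values[oldest]
            self.order.append(key)
        self.values[key] = value

# Replaying the sensor example: A, B, C arrive, then D evicts A.
cache = FIFOCache(3)
for reading in ["A", "B", "C", "D"]:
    cache.put(reading, f"reading {reading}")
print(list(cache.order))   # ['B', 'C', 'D']
```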

4. Random Replacement

Description:

The Random Replacement policy removes a randomly selected item from the cache to make room for new data. This approach doesn’t rely on any access history or usage pattern, which makes it unpredictable but simple to implement. Random replacement can work well in cases where access patterns are hard to predict.

Example:

Assume an online gaming server caches assets for active players. The cache can hold three assets, and the server randomly selects an asset to evict whenever the cache fills up. For instance:

  1. Asset A is cached
  2. Asset B is cached
  3. Asset C is cached
  4. Asset D needs to be cached (random eviction)

When asset D needs to be cached, the server randomly picks and evicts one of the existing assets (A, B, or C), regardless of how recently or frequently they have been accessed. Let’s say the server evicts asset B randomly. The cache now holds assets A, C, and D.

How It Works:

  • This policy doesn’t require any tracking of access patterns, making it lightweight (see the sketch below).
  • It can suit applications where data access patterns are unpredictable, but it may underperform in systems that benefit from more strategic data retention.
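A minimal Python sketch of random replacement, replaying the asset example above; names are illustrative:

```python
import random

class RandomCache:
    """Minimal random-replacement sketch: evict an arbitrary resident key when full."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.values = {}

    def get(self, key):
        return self.values.get(key)

    def put(self, key, value):
        if key not in self.values and len(self.values) >= self.capacity:
            # No access history is kept; any resident key may be evicted.
            victim = random.choice(list(self.values))
            del self.values[victim]
        self.values[key] = value

# Caching assets A, B, C, then D forces a random eviction of one of them.
cache = RandomCache(3)
for asset in ["A", "B", "C", "D"]:
    cache.put(asset, f"{asset}.bin")
print(sorted(cache.values))   # 'D' plus two of 'A', 'B', 'C'
```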

Summary

| Policy | Key Principle | Example Use Case | Advantages | Disadvantages |
|--------|---------------|------------------|------------|---------------|
| LRU | Evict least recently accessed | Web page caching | Retains recently used data | Overhead in tracking access order |
| LFU | Evict least frequently accessed | Image or media caching | Prioritizes frequently used data | Can require more complex implementation |
| FIFO | Evict oldest data in cache | Sequential data storage | Simple and predictable | May evict relevant data |
| Random | Evict data at random | Gaming assets, unpredictable data access | Low overhead | Non-optimized data eviction |

Each policy has unique benefits and drawbacks, and the choice of eviction policy depends on the specific access patterns and requirements of the application.

More Details:

Get all articles related to system design
Hashtag: SystemDesignWithZeeshanAli


Git: https://github.com/ZeeshanAli-0704/SystemDesignWithZeeshanAli
