Ink&Horizon
Database

Redis 8 Caching Strategies: From Basics to Production Patterns

Cache-aside, write-through, pub/sub, Redis Stack JSON, search, and real-time leaderboards

2026-02-20 22 min read
Contents

Why Redis Is Everywhere
Caching Patterns: Cache-Aside vs Write-Through
Pub/Sub: Real-Time Notifications
Rate Limiting with Redis
Key Takeaways

Why Redis Is Everywhere

Redis (Remote Dictionary Server) is an in-memory data structure store. It is not just a cache — it is a full-featured database that happens to be incredibly fast because it keeps all data in RAM. Redis handles 100,000+ operations per second on a single thread.

Redis 8 introduces significant improvements: multi-threaded I/O for network operations, enhanced Redis Stack modules (JSON, Search, TimeSeries, Bloom filters), improved cluster stability, and better memory efficiency.

Every major application uses Redis for: caching (reduce database load by 90%), sessions (stateless auth across multiple servers), rate limiting (API throttling), pub/sub (real-time notifications), leaderboards (sorted sets), and queues (task processing).

Key Takeaways

In-memory: 100,000+ ops/sec on a single thread.
Not just a cache: pub/sub, leaderboards, queues, sessions, rate limiting.
Redis 8: multi-threaded I/O, enhanced Stack modules, better clustering.
Reduces database load by 80-95% for read-heavy workloads.

Caching Patterns: Cache-Aside vs Write-Through

Cache-Aside (Lazy Loading): The application checks the cache first. On a cache miss, it reads from the DB, then writes the result to the cache. This is the most common pattern: simple, and only data that is actually accessed gets cached. The trade-off: the first request after a key expires always hits the DB.

Write-Through: Application writes to cache AND database simultaneously. Cache is always up-to-date but every write goes through cache (even data that is never read). Use for critical data that must always be fresh.

Write-Behind (Async Write): Application writes to cache, and a background process syncs to DB periodically. Fastest writes but risk of data loss if Redis crashes before sync. Use for non-critical counters and analytics.

Snippet
import Redis from 'ioredis';
const redis = new Redis(process.env.REDIS_URL);

// Cache-Aside Pattern (most common)
async function getUserWithCache(userId: string) {
  const cacheKey = `user:${userId}`;
  
  // 1. Check cache
  const cached = await redis.get(cacheKey);
  if (cached) {
    console.log('Cache HIT');
    return JSON.parse(cached);
  }
  
  // 2. Cache MISS — read from database
  console.log('Cache MISS — fetching from DB');
  const user = await db.user.findUnique({ where: { id: userId } });
  
  if (!user) return null;
  
  // 3. Write to cache with TTL (1 hour)
  await redis.setex(cacheKey, 3600, JSON.stringify(user));
  
  return user;
}

// Cache Invalidation (on update)
async function updateUser(userId: string, data: UpdateUserInput) {
  // 1. Update database
  const user = await db.user.update({ where: { id: userId }, data });
  
  // 2. Invalidate cache (next read will refresh)
  await redis.del(`user:${userId}`);
  
  // Also invalidate any list caches. DEL does not expand wildcards,
  // so collect matching keys first (prefer SCAN over KEYS in production)
  const listKeys = await redis.keys('users:list:page:*');
  if (listKeys.length > 0) await redis.del(...listKeys);
  
  return user;
}

// Write-Through Pattern
async function createPost(data: CreatePostInput) {
  // 1. Write to database
  const post = await db.post.create({ data });
  
  // 2. Immediately write to cache (always fresh)
  await redis.setex(`post:${post.id}`, 3600, JSON.stringify(post));
  
  // 3. Invalidate list cache
  await redis.del('posts:latest');
  
  return post;
}
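The snippet above covers cache-aside and write-through but not write-behind. Here is a minimal sketch of that third pattern: writes land in a Redis hash acting as a dirty buffer, and a background loop flushes them to the database. The buffer key, client interface, and function names are illustrative, not from the article's code; any ioredis-compatible client satisfies the interface.

```typescript
// Write-Behind sketch (names and buffer key are illustrative)
type HashClient = {
  hset(key: string, field: string, value: string): Promise<unknown>;
  hgetall(key: string): Promise<Record<string, string>>;
  hdel(key: string, ...fields: string[]): Promise<unknown>;
};

const BUFFER_KEY = 'writebehind:users';

// Fast path: buffer the write in Redis and return immediately
async function bufferUserUpdate(redis: HashClient, userId: string, data: object) {
  await redis.hset(BUFFER_KEY, userId, JSON.stringify(data));
}

// Background flush: persist buffered entries, then delete only the fields
// that were read, so writes arriving mid-flush survive to the next cycle.
// If Redis crashes before a flush, buffered writes are lost: that is the
// write-behind trade-off described above.
async function flushBuffer(
  redis: HashClient,
  persist: (userId: string, data: object) => Promise<void>,
): Promise<number> {
  const entries = await redis.hgetall(BUFFER_KEY);
  const ids = Object.keys(entries);
  if (ids.length === 0) return 0;
  for (const id of ids) await persist(id, JSON.parse(entries[id]));
  await redis.hdel(BUFFER_KEY, ...ids);
  return ids.length;
}

// e.g. setInterval(() => flushBuffer(redis, saveUserToDb), 5000);
```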

Key Takeaways

Cache-Aside: check cache → miss → read DB → write cache. Most common.
Write-Through: write DB + cache simultaneously. Always fresh.
Always set TTL (expiry) — unbounded caches grow until OOM.
Invalidate on write: delete cache key when data changes.
JSON.parse/stringify is fine for small objects; use Redis JSON for large ones.
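The cache-aside steps in the takeaways generalize into one reusable wrapper. This is a sketch, not part of the article's code: `cached` is a hypothetical helper, and `Cache` is a minimal interface that any ioredis-compatible client satisfies.

```typescript
// Generic cache-aside wrapper: check cache, on miss run the loader,
// then populate the cache with a TTL
type Cache = {
  get(key: string): Promise<string | null>;
  setex(key: string, ttlSeconds: number, value: string): Promise<unknown>;
};

async function cached<T>(
  redis: Cache,
  key: string,
  ttlSeconds: number,
  loader: () => Promise<T | null>,
): Promise<T | null> {
  const hit = await redis.get(key);
  if (hit !== null) return JSON.parse(hit) as T; // cache HIT

  const value = await loader();                  // cache MISS: hit the DB
  if (value !== null) {
    await redis.setex(key, ttlSeconds, JSON.stringify(value));
  }
  return value;
}

// Usage (hypothetical):
// const user = await cached(redis, `user:${id}`, 3600,
//   () => db.user.findUnique({ where: { id } }));
```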

Pub/Sub: Real-Time Notifications

Redis Pub/Sub enables real-time communication between services. A publisher sends messages to a channel, and all subscribers on that channel receive the message instantly. This powers real-time notifications, chat, live dashboards, and cache invalidation across servers.

Pub/Sub is fire-and-forget: if no subscriber is listening, messages are lost. For guaranteed delivery (task queues, job processing), use Redis Streams instead.
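A minimal sketch of the Streams alternative, with the stream key, group, and consumer names invented for illustration. The runnable part is a parser for the entry shape ioredis returns; the producer/consumer calls are shown as comments since they need a live server.

```typescript
// ioredis returns each stream entry as [id, [field1, value1, field2, value2, ...]];
// this flattens the field array into an object for handlers
type StreamEntry = [id: string, fields: string[]];

function parseEntry([id, fields]: StreamEntry) {
  const data: Record<string, string> = {};
  for (let i = 0; i < fields.length; i += 2) data[fields[i]] = fields[i + 1];
  return { id, data };
}

// Producer: XADD appends; '*' lets Redis assign the entry ID.
//   await redis.xadd('jobs:stream', '*', 'type', 'send_email', 'to', 'a@b.c');
//
// Consumer group: entries stay pending until XACKed, so a crashed worker's
// jobs can be re-claimed. That is the guaranteed-delivery half of the
// Pub/Sub vs Streams trade-off.
//   await redis.xgroup('CREATE', 'jobs:stream', 'workers', '$', 'MKSTREAM');
//   const res = await redis.xreadgroup(
//     'GROUP', 'workers', 'worker-1', 'COUNT', 10, 'BLOCK', 5000,
//     'STREAMS', 'jobs:stream', '>');
//   // res: [['jobs:stream', entries]]; parse each entry, process, then:
//   await redis.xack('jobs:stream', 'workers', entryId);
```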

Snippet
// Publisher: Notify all servers of cache invalidation
async function publishCacheInvalidation(key: string) {
  await redis.publish('cache:invalidate', JSON.stringify({
    key,
    timestamp: Date.now(),
    source: process.env.SERVER_ID,
  }));
}

// Subscriber: Listen for invalidation events
const subscriber = new Redis(process.env.REDIS_URL);
subscriber.subscribe('cache:invalidate', (err) => {
  if (err) console.error('Subscribe failed:', err);
});

subscriber.on('message', async (channel, message) => {
  const { key, source } = JSON.parse(message);
  
  // Don't process our own messages
  if (source === process.env.SERVER_ID) return;
  
  // Invalidate local cache
  await localCache.delete(key);
  console.log(`Cache invalidated: ${key}`);
});

// Real-time notifications
await redis.publish('notifications:user:123', JSON.stringify({
  type: 'new_comment',
  postTitle: 'React Server Components Guide',
  commenter: 'Priya',
}));

Rate Limiting with Redis

Redis is the standard backend for rate limiting because of its atomic operations and built-in expiry. The sliding window log algorithm below (a sorted set of request timestamps) gives accurate limiting at the cost of one sorted-set entry per request in the window.

This pattern is essential for protecting APIs from abuse, preventing brute-force login attacks, and implementing fair usage policies.

Snippet
// Sliding Window Rate Limiter
async function checkRateLimit(
  key: string,
  limit: number,
  windowSeconds: number
): Promise<{ allowed: boolean; remaining: number; resetAt: number }> {
  const now = Date.now();
  const windowStart = now - (windowSeconds * 1000);
  
  // Use sorted set with timestamp as score
  const multi = redis.multi();
  multi.zremrangebyscore(key, 0, windowStart); // Remove expired entries
  multi.zadd(key, now, `${now}:${Math.random()}`); // Add current request
  multi.zcard(key); // Count requests in window
  multi.expire(key, windowSeconds); // Auto-cleanup
  
  const results = await multi.exec();
  const requestCount = results![2][1] as number;
  
  return {
    allowed: requestCount <= limit,
    remaining: Math.max(0, limit - requestCount),
    // Approximate reset: the window has fully cleared windowSeconds from now
    resetAt: Math.ceil((now + windowSeconds * 1000) / 1000),
  };
}

// Usage in middleware
app.addHook('onRequest', async (request, reply) => {
  const result = await checkRateLimit(
    `ratelimit:${request.ip}`,
    100,  // 100 requests
    60    // per 60 seconds
  );
  
  reply.header('X-RateLimit-Limit', '100');
  reply.header('X-RateLimit-Remaining', result.remaining.toString());
  reply.header('X-RateLimit-Reset', result.resetAt.toString());
  
  if (!result.allowed) {
    return reply.status(429).send({ error: 'Too many requests' });
  }
});

Key Takeaways

Sorted set + timestamps = accurate sliding window rate limiting.
Atomic operations (MULTI/EXEC) prevent race conditions.
Always return rate limit headers (X-RateLimit-*) for client visibility.
Use separate rate limits for auth routes (5/15min) vs API routes (100/min).
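That split can be sketched as a small lookup used alongside checkRateLimit. The limits match the values quoted above (auth 5 per 15 minutes, API 100 per minute); the `/auth/` path prefix is a hypothetical convention, not from the article.

```typescript
// Per-route rate limit lookup (path prefix and values illustrative)
type Limit = { limit: number; windowSeconds: number };

function limitFor(path: string): Limit {
  if (path.startsWith('/auth/')) return { limit: 5, windowSeconds: 15 * 60 };
  return { limit: 100, windowSeconds: 60 };
}

// e.g. const { limit, windowSeconds } = limitFor(request.url);
//      await checkRateLimit(`ratelimit:${request.ip}:${request.url}`, limit, windowSeconds);
```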

Key Takeaways

Redis is the most versatile tool in backend development: caching, sessions, rate limiting, pub/sub, leaderboards, and queues — all from one service. Master it and you solve 80% of backend scaling challenges.

The essential patterns: cache-aside for general caching, pub/sub for real-time features, sorted sets for rate limiting and leaderboards, and always set TTL to prevent memory exhaustion.
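The leaderboard use of sorted sets mentioned above has no snippet of its own, so here is a sketch. The runnable part ranks the flat `WITHSCORES` array ioredis returns; the key and player names are illustrative, and the server-side calls are shown as comments.

```typescript
// ZREVRANGE ... WITHSCORES returns a flat array
// [member1, score1, member2, score2, ...]; this ranks it highest-first
function toRanked(flat: string[]) {
  const ranked: { rank: number; player: string; score: number }[] = [];
  for (let i = 0; i < flat.length; i += 2) {
    ranked.push({ rank: i / 2 + 1, player: flat[i], score: Number(flat[i + 1]) });
  }
  return ranked;
}

// With an ioredis client (key and members illustrative):
//   await redis.zadd('leaderboard:global', 4200, 'priya');   // set score
//   await redis.zincrby('leaderboard:global', 50, 'priya');  // add points
//   const flat = await redis.zrevrange('leaderboard:global', 0, 9, 'WITHSCORES');
//   const top10 = toRanked(flat);
//   const rank = await redis.zrevrank('leaderboard:global', 'priya'); // 0-based
```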

For interviews: explain cache-aside vs write-through trade-offs, demonstrate rate limiting with sorted sets, discuss Redis persistence (RDB snapshots vs AOF log), and know the difference between Pub/Sub (fire-and-forget) and Streams (guaranteed delivery).
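The RDB vs AOF distinction maps onto a few redis.conf directives. A minimal sketch, with illustrative values:

```conf
# RDB: periodic point-in-time snapshots. Compact and fast to restart
# from, but writes since the last snapshot are lost on crash.
save 900 1           # snapshot if at least 1 key changed in 900 seconds
save 300 10          # ...or 10 keys in 300 seconds

# AOF: append-only log of every write. More durable, larger on disk.
appendonly yes
appendfsync everysec # fsync once per second: the usual durability/speed balance
```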

Cache-aside: most common pattern. Check cache → miss → read DB → write cache.
Always set TTL on every key. Unbounded caches cause OOM crashes.
Pub/Sub: real-time messaging. Fire-and-forget — use Streams for guaranteed delivery.
Rate limiting: sorted set with timestamps for sliding window accuracy.
Separate Redis connections for cache vs pub/sub subscribers.
Redis 8: multi-threaded I/O, JSON module, RedisSearch for secondary indexing.
Article Author
Ashutosh
Lead Developer
