Why Redis Is Everywhere
Redis (Remote Dictionary Server) is an in-memory data structure store. It is not just a cache: it is a full-featured database that is extremely fast because it keeps all data in RAM, and a single instance can serve 100,000+ operations per second even though commands execute on a single thread.
Redis 8 brings significant improvements: expanded multi-threaded I/O for network operations, the Redis Stack capabilities (JSON, Search, TimeSeries, and probabilistic structures such as Bloom filters) integrated more tightly, improved cluster stability, and better memory efficiency.
Applications of every size lean on Redis for: caching (often absorbing the vast majority of read traffic), sessions (stateless auth across multiple servers), rate limiting (API throttling), pub/sub (real-time notifications), leaderboards (sorted sets), and queues (task processing).
Caching Patterns: Cache-Aside vs Write-Through
Cache-Aside (Lazy Loading): The application checks the cache first; on a miss it reads from the DB, then writes the result to the cache. This is the most common pattern: it is simple and caches only data that is actually read. The trade-off is that the first request after a miss or expiry pays the full DB round-trip.
Write-Through: Application writes to cache AND database simultaneously. Cache is always up-to-date but every write goes through cache (even data that is never read). Use for critical data that must always be fresh.
Write-Behind (Async Write): Application writes to cache, and a background process syncs to DB periodically. Fastest writes but risk of data loss if Redis crashes before sync. Use for non-critical counters and analytics.
```typescript
import Redis from 'ioredis';

const redis = new Redis(process.env.REDIS_URL);

// Cache-Aside Pattern (most common)
async function getUserWithCache(userId: string) {
  const cacheKey = `user:${userId}`;

  // 1. Check cache
  const cached = await redis.get(cacheKey);
  if (cached) {
    console.log('Cache HIT');
    return JSON.parse(cached);
  }

  // 2. Cache MISS: read from database
  console.log('Cache MISS, fetching from DB');
  const user = await db.user.findUnique({ where: { id: userId } });
  if (!user) return null;

  // 3. Write to cache with TTL (1 hour)
  await redis.setex(cacheKey, 3600, JSON.stringify(user));
  return user;
}

// Cache Invalidation (on update)
async function updateUser(userId: string, data: UpdateUserInput) {
  // 1. Update database
  const user = await db.user.update({ where: { id: userId }, data });

  // 2. Invalidate cache (next read will refresh)
  await redis.del(`user:${userId}`);

  // Also invalidate list caches. Note: DEL does not accept glob patterns,
  // so iterate matching keys with SCAN instead of del('users:list:page:*').
  const stream = redis.scanStream({ match: 'users:list:page:*' });
  for await (const keys of stream) {
    if (keys.length) await redis.del(...keys);
  }

  return user;
}

// Write-Through Pattern
async function createPost(data: CreatePostInput) {
  // 1. Write to database
  const post = await db.post.create({ data });

  // 2. Immediately write to cache (always fresh)
  await redis.setex(`post:${post.id}`, 3600, JSON.stringify(post));

  // 3. Invalidate list cache
  await redis.del('posts:latest');

  return post;
}
```
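The write-behind pattern described above can be sketched in a few lines. This is a minimal, self-contained sketch: `fakeRedis` and `fakeDb` are in-memory Maps standing in for a Redis hash and the real database (both are assumptions so the example runs without a server); in production the buffer would be a Redis hash updated with `HINCRBY`.

```typescript
// Write-Behind sketch: writes land in the cache immediately;
// a background flush batches them into the database.
// fakeRedis / fakeDb are in-memory stand-ins so this runs standalone.
const fakeRedis = new Map<string, number>();
const fakeDb = new Map<string, number>();

// Hot path: O(1) in-memory increment, no DB round-trip
function incrementViewCount(postId: string): void {
  const key = `views:${postId}`;
  fakeRedis.set(key, (fakeRedis.get(key) ?? 0) + 1);
}

// Background job (e.g. setInterval or a cron): drain buffered deltas into the DB.
// Anything still buffered when Redis crashes is lost; that is the pattern's trade-off.
function flushViewCounts(): void {
  for (const [key, delta] of fakeRedis) {
    const postId = key.slice('views:'.length);
    fakeDb.set(postId, (fakeDb.get(postId) ?? 0) + delta);
    fakeRedis.delete(key);
  }
}
```

A production version would run the flush on a timer and make the read-and-clear step atomic (for example by renaming the hash before reading it) so increments arriving mid-flush are not dropped.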
Pub/Sub: Real-Time Notifications
Redis Pub/Sub enables real-time communication between services. A publisher sends messages to a channel, and all subscribers on that channel receive the message instantly. This powers real-time notifications, chat, live dashboards, and cache invalidation across servers.
Pub/Sub is fire-and-forget: if no subscriber is listening, messages are lost. For guaranteed delivery (task queues, job processing), use Redis Streams instead.
```typescript
// Publisher: Notify all servers of cache invalidation
async function publishCacheInvalidation(key: string) {
  await redis.publish('cache:invalidate', JSON.stringify({
    key,
    timestamp: Date.now(),
    source: process.env.SERVER_ID,
  }));
}

// Subscriber: a connection in subscriber mode cannot issue regular commands,
// so use a dedicated Redis connection for subscribing
const subscriber = new Redis(process.env.REDIS_URL);

subscriber.subscribe('cache:invalidate', (err) => {
  if (err) console.error('Subscribe failed:', err);
});

subscriber.on('message', async (channel, message) => {
  const { key, source } = JSON.parse(message);

  // Don't process our own messages
  if (source === process.env.SERVER_ID) return;

  // Invalidate local cache
  await localCache.delete(key);
  console.log(`Cache invalidated: ${key}`);
});

// Real-time notifications
await redis.publish('notifications:user:123', JSON.stringify({
  type: 'new_comment',
  postTitle: 'React Server Components Guide',
  commenter: 'Priya',
}));
```
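To make the fire-and-forget distinction concrete, here is a tiny in-memory model (an illustration only, not Redis itself): the pub/sub class delivers to subscribers attached at publish time, while the stream retains every entry for consumers that arrive later.

```typescript
// Minimal in-memory model of the two delivery semantics (not Redis itself):
// Pub/Sub delivers to currently-attached subscribers only; a Stream retains entries.
type Handler = (msg: string) => void;

class PubSubModel {
  private subs: Handler[] = [];
  subscribe(h: Handler): void { this.subs.push(h); }
  publish(msg: string): void {
    // No subscriber attached => the message is simply gone
    this.subs.forEach((h) => h(msg));
  }
}

class StreamModel {
  private entries: string[] = [];
  add(msg: string): string {
    this.entries.push(msg);       // retained regardless of consumers
    return String(this.entries.length - 1); // entry id, like XADD's return value
  }
  // A consumer can start reading after the fact, like XREAD from id 0
  readFrom(offset: number): string[] { return this.entries.slice(offset); }
}
```

In real Redis the stream side maps to XADD and XREAD, or XREADGROUP plus XACK when you need consumer groups with acknowledgement and retry.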
Rate Limiting with Redis
Redis is the standard backend for rate limiting because of its atomic operations and built-in key expiry. The sliding window log algorithm (a sorted set holding one timestamped entry per request) gives accurate limits; its memory cost grows with the request rate inside the window.
This pattern is essential for protecting APIs from abuse, preventing brute-force login attacks, and implementing fair usage policies.
```typescript
// Sliding Window Rate Limiter
async function checkRateLimit(
  key: string,
  limit: number,
  windowSeconds: number
): Promise<{ allowed: boolean; remaining: number; resetAt: number }> {
  const now = Date.now();
  const windowStart = now - windowSeconds * 1000;

  // Use a sorted set with the timestamp as score
  const multi = redis.multi();
  multi.zremrangebyscore(key, 0, windowStart);     // Remove expired entries
  multi.zadd(key, now, `${now}:${Math.random()}`); // Add current request (unique member)
  multi.zcard(key);                                // Count requests in window
  multi.expire(key, windowSeconds);                // Auto-cleanup idle keys
  const results = await multi.exec();

  const requestCount = results![2][1] as number;

  return {
    allowed: requestCount <= limit,
    remaining: Math.max(0, limit - requestCount),
    // Worst case: the window is fully clear one windowSeconds from now
    resetAt: Math.ceil(now / 1000) + windowSeconds,
  };
}

// Usage in middleware (Fastify shown here)
app.addHook('onRequest', async (request, reply) => {
  const result = await checkRateLimit(
    `ratelimit:${request.ip}`,
    100, // 100 requests
    60   // per 60 seconds
  );

  reply.header('X-RateLimit-Limit', '100');
  reply.header('X-RateLimit-Remaining', result.remaining.toString());
  reply.header('X-RateLimit-Reset', result.resetAt.toString());

  if (!result.allowed) {
    return reply.status(429).send({ error: 'Too many requests' });
  }
});
```
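For comparison, the simpler fixed-window approach is also worth knowing: in Redis it is just INCR plus EXPIRE on a per-window key. The sketch below models that with a Map (an assumption so it runs standalone) and takes `nowMs` as a parameter to stay deterministic; key naming is illustrative.

```typescript
// Fixed-window rate limiter sketch. In Redis: INCR + EXPIRE on `rl:{ip}:{windowId}`.
// A Map stands in for Redis here; EXPIRE would garbage-collect old window keys.
const counters = new Map<string, number>();

function fixedWindowAllow(
  ip: string,
  limit: number,
  windowSeconds: number,
  nowMs: number = Date.now()
): boolean {
  const windowId = Math.floor(nowMs / (windowSeconds * 1000));
  const key = `rl:${ip}:${windowId}`;
  const count = (counters.get(key) ?? 0) + 1; // the INCR step
  counters.set(key, count);
  return count <= limit;
}
```

The trade-off: up to twice the limit can slip through around a window boundary, which is exactly what the sliding-window version above avoids.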
Key Takeaways
Redis is the most versatile tool in backend development: caching, sessions, rate limiting, pub/sub, leaderboards, and queues, all from one service. Master it and you can handle a large share of backend scaling challenges.
The essential patterns: cache-aside for general caching, pub/sub for real-time features, sorted sets for rate limiting and leaderboards, and always set TTL to prevent memory exhaustion.
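The sorted-set leaderboard mentioned above can be modeled in a few lines. The sketch below mirrors sorted-set semantics in memory (an assumption so it runs without a server); in real Redis these operations are ZADD, ZREVRANGE ... WITHSCORES, and ZREVRANK.

```typescript
// Leaderboard sketch modeling Redis sorted-set semantics in memory (stand-in only).
// Real Redis: ZADD leaderboard <score> <member>, ZREVRANGE 0 n-1 WITHSCORES, ZREVRANK.
const board = new Map<string, number>();

function zadd(member: string, score: number): void {
  board.set(member, score); // ZADD overwrites an existing member's score
}

function topN(n: number): Array<[string, number]> {
  return [...board.entries()].sort((a, b) => b[1] - a[1]).slice(0, n);
}

function revRank(member: string): number | null {
  const idx = topN(board.size).findIndex(([m]) => m === member);
  return idx === -1 ? null : idx; // 0-based, highest score first, like ZREVRANK
}
```

The in-memory sort here is O(n log n) per query; the point of the real sorted set is that Redis keeps members ordered as they are inserted, so rank and range queries are O(log n).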
For interviews: explain cache-aside vs write-through trade-offs, demonstrate rate limiting with sorted sets, discuss Redis persistence (RDB snapshots vs AOF log), and know the difference between Pub/Sub (fire-and-forget) and Streams (guaranteed delivery).