Ink&Horizon
Backend

Microservices Architecture Patterns: The Complete 2026 Guide

Event-driven design, CQRS, Saga, API Gateway, service mesh, and when to use (or avoid) microservices

2026-03-10 · 25 min read
Contents

Microservices: The Real Definition
Communication Patterns: Sync vs Async
CQRS: Command Query Responsibility Segregation
Saga Pattern: Distributed Transactions
API Gateway Pattern
Service Discovery & Health Checks
When NOT to Use Microservices
Key Takeaways

Microservices: The Real Definition

A microservice is an independently deployable service that owns its own data store and communicates with other services through well-defined APIs or events. The key word is "independently" — if you cannot deploy one service without coordinating with others, you have a distributed monolith, not microservices.

Microservices solve organizational scaling problems, not technical ones. A team of 5 engineers building a startup should use a monolith. A company with 50+ engineers working on the same product needs microservices so teams can deploy independently without blocking each other.

The common mistake is splitting too early and too small. A good microservice boundary maps to a business domain (Users, Orders, Payments, Inventory), not a technical layer (AuthService, DatabaseService, CacheService).

Key Takeaways

Microservice = independently deployable + owns its data store.
If you can't deploy independently, you have a distributed monolith.
Solve for organizational scaling, not technical complexity.
Boundaries = business domains (DDD bounded contexts), not technical layers.
Start monolith → extract microservices when you hit team scaling problems.

Communication Patterns: Sync vs Async

Microservices communicate through two patterns: synchronous (request-response) and asynchronous (events/messages). The choice fundamentally affects your system's reliability, coupling, and complexity.

Synchronous (REST, gRPC): Service A calls Service B and waits for a response. Simple but creates tight coupling — if B is down, A fails. Use for queries that NEED immediate responses (user profile lookup, permission checks).

Asynchronous (events via Kafka, RabbitMQ, NATS): Service A publishes an event, Service B processes it later. Decoupled — if B is down, events queue up and are processed when B recovers. Use for commands where the caller does not need an immediate result (order placed, email sent, audit log).

Snippet
// Synchronous: gRPC (faster than REST for service-to-service)
// user.proto
syntax = "proto3";
service UserService {
  rpc GetUser (GetUserRequest) returns (User);
  rpc ListUsers (ListUsersRequest) returns (stream User);
}
message GetUserRequest { string user_id = 1; }
message User {
  string id = 1;
  string name = 2;
  string email = 3;
}

// Asynchronous: Event-driven with message broker
// order-service publishes event
await kafka.produce('order.placed', {
  orderId: 'ord_123',
  userId: 'usr_456',
  items: [{ productId: 'prod_789', quantity: 2 }],
  total: 99.99,
  timestamp: new Date().toISOString(),
});

// payment-service consumes event (independently)
kafka.subscribe('order.placed', async (event) => {
  await processPayment(event.userId, event.total);
  await kafka.produce('payment.completed', {
    orderId: event.orderId,
    paymentId: 'pay_321',
  });
});

// notification-service also consumes the same event
kafka.subscribe('order.placed', async (event) => {
  await sendOrderConfirmationEmail(event.userId, event.orderId);
});

Key Takeaways

Sync (REST/gRPC): tight coupling, simple, use for queries needing immediate response.
Async (events): loose coupling, resilient, use for fire-and-forget commands.
gRPC: binary protocol (Protocol Buffers), typically significantly faster than JSON/REST for service-to-service calls.
Events: multiple consumers can react to the same event independently.
Rule of thumb: queries = sync, commands = async.

CQRS: Command Query Responsibility Segregation

CQRS separates your read operations (queries) from write operations (commands) into different models — potentially different databases. Writes go to a normalized database optimized for consistency. Reads go to a denormalized database optimized for query performance.

This sounds over-engineered until you hit real scale: your e-commerce product page needs joins across 8 tables for a single render, but writes only touch 1-2 tables. CQRS lets you optimize each side independently — the read model is a pre-computed "view" that can be served in 1ms without joins.

CQRS is often paired with Event Sourcing (events update the write model, then project into the read model), but you can use CQRS without event sourcing — just sync the read model from the write model asynchronously.

Snippet
// CQRS Architecture
//
// [Client] → Command → [Write API] → [Write DB (PostgreSQL)]
//                                         ↓ (events)
//                                    [Event Bus]
//                                         ↓
//                              [Read Model Updater]
//                                         ↓
// [Client] ← Query ← [Read API] ← [Read DB (Redis/Elasticsearch)]

// Write side: normalized, consistent
app.post('/api/orders', async (req) => {
  const order = await writeDb.order.create({
    data: { userId: req.userId, items: req.items, total: req.total },
  });
  
  // Publish event for read model sync
  await eventBus.publish('order.created', {
    orderId: order.id,
    userId: req.userId,
    items: req.items,
    total: req.total,
    createdAt: order.createdAt,
  });
  
  return { orderId: order.id };
});

// Read side: denormalized, fast
eventBus.subscribe('order.created', async (event) => {
  // Pre-compute the "order dashboard" view (append one entry per order,
  // so earlier orders are not overwritten)
  await readDb.rpush(`user:${event.userId}:orders`, JSON.stringify({
    orderId: event.orderId,
    total: event.total,
    itemCount: event.items.length,
    status: 'pending',
  }));
});

// Query: instant read from the pre-computed view, no joins
app.get('/api/my-orders', async (req) => {
  const rows = await readDb.lrange(`user:${req.userId}:orders`, 0, -1);
  return rows.map((row) => JSON.parse(row)); // Sub-millisecond
});

Saga Pattern: Distributed Transactions

In a monolith, database transactions guarantee consistency (ACID). In microservices, each service has its own database — you cannot use a single transaction across services. The Saga pattern solves this.

A Saga is a sequence of local transactions. Each service executes its local transaction and publishes an event. If one step fails, compensating transactions undo the previous steps. There are two types: Choreography (event-driven, no coordinator) and Orchestration (central coordinator manages the flow).

Orchestration is preferred for complex flows (3+ services) because it centralizes the logic and makes the flow visible. Choreography works well for simple 2-3 service flows but becomes a debugging nightmare at scale.

Snippet
// Saga Orchestrator: Order Processing
class OrderSaga {
  private steps = [
    { execute: 'reserveInventory', compensate: 'releaseInventory' },
    { execute: 'processPayment',   compensate: 'refundPayment' },
    { execute: 'confirmShipping',  compensate: 'cancelShipping' },
  ];

  async execute(orderId: string) {
    const completedSteps: typeof this.steps = [];

    for (const step of this.steps) {
      try {
        await (this as any)[step.execute](orderId);
        completedSteps.push(step);
        console.log(`✅ ${step.execute} succeeded`);
      } catch (error) {
        console.log(`❌ ${step.execute} failed — rolling back`);
        
        // Compensate in reverse order
        for (const completed of completedSteps.reverse()) {
          try {
            await (this as any)[completed.compensate](orderId);
            console.log(`↩️ ${completed.compensate} completed`);
          } catch (compError) {
            // Compensation failure — needs manual intervention
            console.error(`🚨 ${completed.compensate} FAILED`, compError);
            await this.alertOps(orderId, completed.compensate, compError);
          }
        }
        throw new Error(`Order ${orderId} saga failed at ${step.execute}`);
      }
    }
  }

  // Each step calls the respective microservice
  async reserveInventory(orderId: string) { /* Call inventory service */ }
  async releaseInventory(orderId: string) { /* Compensating transaction */ }
  async processPayment(orderId: string)   { /* Call payment service */ }
  async refundPayment(orderId: string)    { /* Compensating transaction */ }
  async confirmShipping(orderId: string)  { /* Call shipping service */ }
  async cancelShipping(orderId: string)   { /* Compensating transaction */ }
  async alertOps(orderId: string, step: string, err: unknown) { /* Page on-call */ }
}
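For contrast, the choreography variant of the same flow has no coordinator; each service simply subscribes to the event emitted by the previous step. The sketch below uses a tiny in-memory bus as a stand-in for a real broker, so all names and topics are illustrative:

```typescript
// Choreography sketch: no central coordinator. Each service subscribes to
// the event emitted by the previous step. A tiny in-memory bus stands in
// for a real broker (Kafka/RabbitMQ) so the example is self-contained.
type Handler = (event: { orderId: string }) => void;
const handlers: Record<string, Handler[]> = {};
const log: string[] = [];

function subscribe(topic: string, fn: Handler) {
  (handlers[topic] ??= []).push(fn);
}
function publish(topic: string, event: { orderId: string }) {
  for (const fn of handlers[topic] ?? []) fn(event);
}

// inventory-service: reacts to order.placed, emits inventory.reserved
subscribe('order.placed', (e) => {
  log.push(`inventory reserved for ${e.orderId}`);
  publish('inventory.reserved', e);
});

// payment-service: reacts to inventory.reserved, emits payment.completed
subscribe('inventory.reserved', (e) => {
  log.push(`payment processed for ${e.orderId}`);
  publish('payment.completed', e);
});

// The flow "emerges" from the subscriptions; no single place owns the whole
// picture, which is exactly why debugging gets hard past 2-3 services.
publish('order.placed', { orderId: 'ord_123' });
```

Note how the end-to-end sequence is visible only by reading every service's subscriptions, whereas the orchestrator above lists the steps in one place.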

Key Takeaways

Saga = sequence of local transactions with compensating transactions for rollback.
Choreography: services react to events autonomously. Simple but hard to debug.
Orchestration: central coordinator manages the flow. Complex but visible.
Compensation failures need alerting and manual intervention.
Use Orchestration for flows involving 3+ services.

API Gateway Pattern

An API Gateway sits between clients and microservices, providing a single entry point. It handles: request routing (mapping /api/users to the User Service), authentication (verifying JWT before forwarding), rate limiting, response aggregation (combining data from multiple services into one response), and protocol translation (REST to gRPC).

Without an API Gateway, clients must know about every microservice and communicate with them directly. This creates tight coupling and makes it impossible to change service boundaries without updating all clients.

Popular API Gateways: Kong, AWS API Gateway, Traefik, and Envoy. For smaller setups, a Next.js API route or Nginx reverse proxy works fine.
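Snippet

The routing concern at the heart of any gateway can be sketched as a prefix table mapping public paths to internal services. The service hostnames below are hypothetical placeholders, not a real deployment:

```typescript
// Core routing concern of an API Gateway: map a public path prefix to an
// internal service base URL. Hostnames are hypothetical placeholders.
const routes: Record<string, string> = {
  '/api/users': 'http://user-service:8080',
  '/api/orders': 'http://order-service:8080',
  '/api/payments': 'http://payment-service:8080',
};

function resolveRoute(path: string): string | null {
  for (const [prefix, target] of Object.entries(routes)) {
    if (path.startsWith(prefix)) {
      return target + path.slice(prefix.length); // forward remainder of path
    }
  }
  return null; // unknown route: respond 404 without touching any service
}
```

A real gateway layers authentication, rate limiting, and response aggregation around this lookup before proxying the request on to the target service.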

Key Takeaways

Single entry point: clients call one URL, gateway routes to services.
Cross-cutting concerns: auth, rate limiting, logging, CORS in one place.
Response aggregation: combine data from multiple services into one response.
BFF (Backend for Frontend): separate gateways for web, mobile, and TV clients.
Without a gateway, clients must know about every service — tight coupling.

Service Discovery & Health Checks

In production, services run on multiple instances with dynamic IPs (containers, VMs). Service discovery solves the "how does Service A find Service B?" problem. There are two approaches: client-side discovery (service queries a registry like Consul) and server-side discovery (load balancer handles routing — Kubernetes default).
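Client-side discovery reduces to two steps: fetch the healthy instance list from the registry, then pick one. The instance shape below loosely mirrors what a registry such as Consul returns from its health API, and the load-balancing strategy shown is a deliberately simple round-robin:

```typescript
// Client-side discovery sketch: given the healthy instances a registry
// (e.g. Consul's GET /v1/health/service/<name>?passing=true) returned,
// pick one per request. Round-robin is the simplest client-side strategy.
interface Instance { address: string; port: number }

let rrCounter = 0;
function pickInstance(instances: Instance[]): Instance {
  if (instances.length === 0) throw new Error('no healthy instances');
  const chosen = instances[rrCounter % instances.length];
  rrCounter++; // advance so the next call hits the next instance
  return chosen;
}

function baseUrl(i: Instance): string {
  return `http://${i.address}:${i.port}`;
}
```

With server-side discovery (the Kubernetes default), this logic lives in the platform's load balancer instead, and clients just call a stable service name.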

Health checks are equally critical. Every microservice must expose a /health endpoint that the orchestrator (Kubernetes, ECS) uses to determine if the instance is alive. If a health check fails, the orchestrator kills the instance and starts a new one.

The health check should verify: database connectivity, cache connectivity, and any critical dependencies. A service that returns 200 OK but cannot reach its database is as bad as a crashed service.

Snippet
// Production health check endpoint
app.get('/health', async (req, reply) => {
  const checks = {
    status: 'ok',
    timestamp: new Date().toISOString(),
    uptime: process.uptime(),
    checks: {
      database: 'checking...',
      redis: 'checking...',
    },
  };

  try {
    // Check database connectivity
    await db.$queryRaw`SELECT 1`;
    checks.checks.database = 'healthy';
  } catch (err) {
    checks.checks.database = 'unhealthy';
    checks.status = 'degraded';
  }

  try {
    // Check Redis connectivity
    await redis.ping();
    checks.checks.redis = 'healthy';
  } catch (err) {
    checks.checks.redis = 'unhealthy';
    checks.status = 'degraded';
  }

  const statusCode = checks.status === 'ok' ? 200 : 503;
  return reply.status(statusCode).send(checks);
});

// Kubernetes probes:
// readinessProbe: /health (stops routing traffic while a dependency is down)
// livenessProbe:  keep it shallow (is the process alive?); restarting the
//                 pod will not fix an unreachable database

When NOT to Use Microservices

Microservices add complexity: distributed tracing, eventual consistency, network latency, deployment coordination, data ownership disputes, and debugging across services. This tax is justified only when the organizational benefits outweigh the engineering costs.

Do NOT use microservices when: your team is < 10 engineers, you are building an MVP/startup, you do not have CI/CD automation, or you cannot afford the operational cost of running a container orchestrator (Kubernetes).

The "modular monolith" is the ideal starting architecture: a single deployable with clearly separated modules (Users, Orders, Payments) that can be extracted into microservices later when team scaling demands it. This gives you the clean boundaries of microservices without the distributed systems tax.
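One way to keep those boundaries honest in code: each module exposes only a narrow interface that mirrors a future service API, so extraction later means swapping the in-process implementation for a network client. All names in this sketch are illustrative:

```typescript
// Modular monolith sketch: callers depend on the module's interface, never
// its internals, so the implementation can later become an HTTP/gRPC client
// without changing call sites. Names are illustrative, not a real API.
interface OrdersModule {
  placeOrder(userId: string, itemIds: string[]): { orderId: string };
}

// Today: an in-process implementation inside the same deployable
const orders: OrdersModule = {
  placeOrder(userId, itemIds) {
    return { orderId: `ord_${userId}_${itemIds.length}` };
  },
};

// Tomorrow: a client calling an extracted Orders service could satisfy the
// same interface, and checkout() would not change at all.
function checkout(userId: string, cart: string[]) {
  return orders.placeOrder(userId, cart);
}
```

The discipline is the point: if modules only talk through interfaces like this, the later extraction is mechanical rather than a rewrite.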

Key Takeaways

Microservices solve PEOPLE problems (team scaling), not code problems.
Start with a modular monolith — extract services only when needed.
If your team < 10, you almost certainly do not need microservices.
Distributed monolith = worst of both worlds (complexity without independence).
The tax: distributed tracing, eventual consistency, network failures, deployment coordination.

Key Takeaways

Microservices architecture is about trade-offs, not technology. The patterns that matter are: sync vs async communication (REST/gRPC vs events), CQRS for read/write scaling, Saga for distributed transactions, API Gateway for single-entry communication, and health checks for reliability.

For interviews: explain microservices as an organizational solution, discuss the Saga pattern with compensating transactions, know when CQRS adds value vs complexity, and always recommend starting with a modular monolith.

The most common interview mistake is advocating for microservices everywhere. Senior engineers know when NOT to use them.

Microservices = independently deployable + own their data store.
Sync (REST/gRPC): use for queries needing immediate response.
Async (events): use for commands, fire-and-forget, loose coupling.
CQRS: separate read/write models for different performance requirements.
Saga: distributed transactions with compensating rollbacks.
API Gateway: single entry, auth, rate limiting, response aggregation.
Start monolith → extract services when team scaling demands it.
Article Author: Ashutosh, Lead Developer
