04/20/26

Node.js Microservices: A Practical Guide

Patterns, communication, deployment, and avoiding distributed-monolith mistakes


Node.js is a natural fit for microservices. Fast cold starts, small memory footprint, strong async story, and an HTTP-first runtime make it easy to spin up services that do one thing well. That's also why so many teams end up with a fleet of Node services, some owned by different teams, some copy-pasted from a template, some behind an API gateway, some not, and a nagging sense that the system got away from them.

This guide covers what actually works when you build microservices in Node.js: where service boundaries should fall, how services should talk to each other, where the data lives, how to deploy, and the failure modes that turn a microservice system into a distributed monolith. We also introduce a framework at the end that handles a lot of the scaffolding for you.

When to Use Microservices (Honestly)

Before anything else: microservices are an organizational tool with performance side effects. They're not a default architecture.

Stay with a monolith if:

  • Your team is under ~5 people.
  • Domain boundaries aren't stable.
  • You haven't hit a scaling pain that clearly maps to splitting a service out.

Consider microservices when:

  • Multiple teams need to deploy independently.
  • Different parts of the system have materially different scaling profiles.
  • You need fault isolation between components.
  • Your domain has well-understood bounded contexts.

Most "we should split this into microservices" conversations are really "we should clean up our module boundaries." Splitting a well-factored monolith later is cheap. Merging a poorly-factored microservice mesh is not.

Defining Service Boundaries

The first and hardest problem. Bad boundaries create services that gossip constantly, own slivers of each other's data, and have to deploy together. That's a distributed monolith.

Good boundaries follow domain lines, not technical layers:

  • Bad: auth-service, database-service, logging-service, api-service. Technical layers pretending to be domains.
  • Good: users, orders, catalog, payments, shipping. Each owns its data, its business logic, and its read/write API.

A useful smoke test: if "add a field to this entity" requires changing three services, your boundaries are wrong.

Start with fewer, larger services. You can always split later. Going the other direction (merging services that shouldn't have been split) is far more painful.

Communication Patterns

Three patterns, different tradeoffs.

Synchronous HTTP/gRPC

Service A calls Service B over HTTP. Simple. Familiar.

// Service A calling Service B
const resp = await fetch("http://users-service/users/" + id);
const user = await resp.json();

Advantages: easy to reason about, easy to test.

Tradeoffs: tight coupling on availability (B goes down, A fails), latency compounds across hops, cascading failures are easy.

Use synchronous calls for reads that genuinely need a fresh answer. Avoid them for anything that can happen asynchronously.
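A timeout belongs on every synchronous hop; without one, a slow dependency pins the caller. A minimal sketch (the `callWithTimeout` helper and the users-service URL are illustrative, not a specific library's API):

```javascript
// Wrap a service call with a deadline so a slow dependency fails
// fast instead of stalling the caller indefinitely.
async function callWithTimeout(makeCall, ms) {
  let timer;
  const deadline = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error(`timed out after ${ms}ms`)), ms);
  });
  try {
    return await Promise.race([makeCall(), deadline]);
  } finally {
    clearTimeout(timer); // don't leak the timer on the happy path
  }
}

// Usage against a hypothetical users-service:
// const resp = await callWithTimeout(
//   () => fetch(`http://users-service/users/${id}`),
//   2000,
// );
```

Pair this with a bounded retry policy, and keep the deadline shorter than whatever timeout your own callers are using, or the timeouts compound upward.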

Asynchronous Messaging / Events

Service A publishes an event. Any number of services consume it.

// Service A
await queue.publish("order.created", { orderId, userId });

// Service B (inventory)
queue.subscribe("order.created", async (event) => {
  await reserveStock(event.orderId);
});

Advantages: loose coupling, services can be down without breaking the producer, easy to add new consumers.

Tradeoffs: eventual consistency (responses are "it happened, eventually"), harder to debug than synchronous calls, requires a message broker.

Use events for workflows that span services: "order placed" → "reserve inventory" → "charge payment" → "send email".

API Gateway Pattern

A single entry point that routes to backend services. Handles auth, rate limiting, response aggregation.

Client → Gateway → { users-service, orders-service, catalog-service }

Advantages: clients don't see the microservice fragmentation, central place for cross-cutting concerns.

Tradeoffs: the gateway becomes a critical path (single point of failure, deploy bottleneck), aggregations are network-heavy.

Use a gateway as soon as you have more than a few services exposed to clients. Don't let every service be publicly addressable.
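At its core a gateway is a routing table from path prefixes to backend services. A stripped-down sketch of that routing step (service hostnames and ports are invented for illustration; a production gateway adds auth and rate limiting before proxying):

```javascript
// Path-prefix routing table; service hosts and ports are illustrative.
const routes = {
  "/users": "http://users-service:3001",
  "/orders": "http://orders-service:3002",
  "/catalog": "http://catalog-service:3003",
};

// Resolve an incoming path to a backend URL, or null for unknown routes.
function targetFor(path) {
  const prefix = Object.keys(routes).find(
    (p) => path === p || path.startsWith(p + "/"),
  );
  return prefix ? routes[prefix] + path : null;
}
```

In practice this lookup sits inside a proxy handler (express-http-proxy, or a managed gateway's routing config), but the shape is the same: unknown paths get a 404 at the edge, and backend topology stays invisible to clients.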

Message Brokers

If you go async, you need a broker. The common choices:

  • Redis Pub/Sub: cheapest, pub/sub only, no durability. Good for cross-service notifications where losing a message is okay.
  • RabbitMQ: durable queues, routing flexibility, dead-letter queues. Solid default for work-queue patterns.
  • NATS: lightweight, with JetStream adding persistence. Good middle ground.
  • Kafka: durable, replayable, partitioned. Right when events are part of your source of truth. Operationally heavy.
  • AWS SQS + SNS: managed, no ops overhead, integrates with the rest of AWS. Default if you're on AWS.
  • GCP Pub/Sub: same story for GCP.

For most Node.js microservice systems, SQS + SNS (on AWS) or GCP Pub/Sub is the pragmatic pick: no broker to operate, pay-per-use, durable by default.

Service-to-Service Authentication

Don't let services call each other without authentication. A leaked internal service URL is an incident. Options:

  • mTLS: mutual TLS between services. Secure, operationally complex (cert rotation, CA management).
  • Internal JWT: services sign short-lived tokens using a shared secret or KMS key.
  • Service mesh: Linkerd, Istio, or AWS App Mesh handle mTLS and routing transparently. Adds operational weight.

For small Node.js fleets, internal JWTs with a KMS-managed key are often enough. For large fleets, a service mesh becomes worth it.

Databases

Core rule: each service owns its database. No other service reads or writes it directly. Cross-service queries happen through APIs or events.

If Service A and Service B share a database, you don't have microservices; you have a distributed deployment of a shared schema. You get the ops cost without the isolation benefit.

Typical setup per service:

  • Its own Postgres (or Mongo, or whatever).
  • Its own migrations, run on deploy.
  • Its own connection pool.
  • Its own backup policy.

This is where small teams give up: running five Postgres instances sounds heavy. Managed services (AWS RDS, GCP Cloud SQL, Neon, Supabase, Railway) make it survivable. Skip this discipline and you'll regret it.

Deployment

Each service deploys independently. In practice:

  • Container per service: Docker image, pushed to a registry, pulled into the runtime.
  • Runtime: Kubernetes, ECS, Fly, Railway, Render, or a custom orchestrator. Kubernetes is standard at scale; PaaS is fine below that.
  • CI/CD per service: separate pipelines, separate versioning, separate rollouts. Feature flags for cross-service changes.
  • Blue/green or canary deployment: don't bring down production to deploy one service.

A common mistake: monorepo with shared CI that rebuilds and redeploys everything on every change. This kills one of microservices' main benefits (independent deploy). Use build caching and path-based triggers.

Observability

Non-negotiable. Without it, debugging a multi-service request is archaeology.

Three things you need:

  1. Structured logging: JSON logs with a shared request ID across services. Pino or Winston, with a correlation ID middleware.
  2. Distributed tracing: OpenTelemetry, with trace context propagated across service calls. Jaeger, Grafana Tempo, Datadog, or AWS X-Ray for visualization.
  3. Metrics and alerts: request rate, error rate, latency (RED metrics) per service. Prometheus + Grafana is a common self-hosted stack; Datadog or New Relic for managed.

Setting all this up from scratch for a 5-service Node fleet is a week of work. Doing it right once and templating it for new services saves that week every time.

Local Development

The ugly truth of microservices: local dev is hard. Five Node services, a database per service, a broker, and maybe an API gateway add up to a lot to run on a laptop.

Options, in order of complexity:

  1. Docker Compose: everything in containers, one docker-compose up. Fine for small fleets, slow as they grow.
  2. Run what you're working on, mock the rest: each service has a contract, other services are stubs. Requires disciplined contract management.
  3. Shared dev environment: one remote dev cluster, each developer gets a namespace. Works but requires infrastructure investment.
  4. Platform that handles this for you: e.g., Encore (below) spins up all local services, databases, and brokers with one command.

The right answer depends on team size. A three-person team can live with Docker Compose. A thirty-person team can't.
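Option 2 hinges on coding against a contract, so the real client and a stub are interchangeable. A sketch (the client shape, endpoint, and environment variable are illustrative):

```javascript
// The orders service depends on a users "contract": any object with
// getUser(id). Endpoint and response shape are illustrative.
const realUsersClient = {
  async getUser(id) {
    const resp = await fetch(`http://users-service/users/${id}`);
    return resp.json();
  },
};

const stubUsersClient = {
  async getUser(id) {
    return { id, email: `user${id}@example.test` };
  },
};

// Swap implementations by environment, not by editing call sites.
const usersClient =
  process.env.USE_STUBS === "1" ? stubUsersClient : realUsersClient;
```

The discipline this requires is keeping stubs honest: contract tests that run the same assertions against both implementations catch the drift before production does.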

Failure Patterns

Failures that reliably happen in Node.js microservice systems:

  1. Cascading timeouts. A slow service upstream causes timeouts downstream. Add circuit breakers (opossum is a Node circuit breaker library) and backpressure.
  2. Distributed transactions. You can't run two-phase commit (2PC) across microservices cleanly. Use sagas: sequences of local transactions with compensating actions on failure.
  3. Message duplication. All brokers can redeliver. Make consumers idempotent (use idempotency keys, deduplicate on a unique constraint).
  4. Dead letters. Messages that can't be processed need somewhere to go, or they'll block the queue. Every queue needs a dead-letter policy.
  5. Schema drift. Service A's event schema changes, Service B breaks. Use contract tests or a schema registry.
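Idempotent consumption (item 3) can be sketched with an in-memory dedup set; a real service would put the idempotency key behind a unique constraint in its database, committed in the same transaction as the work, so failures retry cleanly:

```javascript
// In-memory dedup for at-least-once delivery. Production consumers
// should insert the key under a unique constraint in the service's
// database instead: this set is lost on restart and doesn't guard
// against two instances processing the same message concurrently.
const processed = new Set();

async function handleOrderCreated(event, reserveStock) {
  const key = `order.created:${event.orderId}`;
  if (processed.has(key)) return "duplicate-skipped";
  await reserveStock(event.orderId);
  processed.add(key); // mark only after the work succeeds
  return "processed";
}
```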

A Framework That Handles the Wiring: Encore

Most of this guide is "here's the problem, here's a pattern, here's what to wire up." A lot of the wiring is the same across every Node microservice system: Pub/Sub setup, trace context propagation, structured logs, service-to-service auth, per-service DB provisioning. Doing it from scratch for each new project is rebuild-the-template work.

Encore is a framework that handles that scaffolding. You declare services, APIs, databases, and Pub/Sub topics as typed objects in TypeScript, and Encore generates the infrastructure, running locally for dev, provisioned on AWS or GCP for prod.

Defining services

// users/encore.service.ts
import { Service } from "encore.dev/service";
export default new Service("users");

APIs that are callable across services

// users/users.ts
import { api, APIError } from "encore.dev/api";
import { SQLDatabase } from "encore.dev/storage/sqldb";

const db = new SQLDatabase("users", { migrations: "./migrations" });

export interface User { id: number; email: string; }

export const get = api(
  { method: "GET", path: "/users/:id", expose: true },
  async ({ id }: { id: number }): Promise<User> => {
    const row = await db.queryRow<User>`SELECT * FROM users WHERE id = ${id}`;
    if (!row) throw APIError.notFound("user not found");
    return row;
  },
);

// orders/orders.ts
import { api } from "encore.dev/api";
import { users } from "~encore/clients";

interface CreateOrder { userId: number; /* ... */ }

export const createOrder = api(
  { method: "POST", path: "/orders", expose: true },
  async (req: CreateOrder) => {
    const user = await users.get({ id: req.userId }); // compile-time checked
    // ...
  },
);

Cross-service calls are type-checked at compile time; no silent breakage when a field is renamed.

Events

// orders/events.ts
import { Topic } from "encore.dev/pubsub";

export interface OrderCreated { orderId: string; userId: string; }

export const orderCreated = new Topic<OrderCreated>("order-created", {
  deliveryGuarantee: "at-least-once",
});

// inventory/subscriptions.ts
import { Subscription } from "encore.dev/pubsub";
import { orderCreated } from "../orders/events";

new Subscription(orderCreated, "reserve-stock", {
  handler: async (event) => { /* ... */ },
});

Encore picks SNS+SQS on AWS or Pub/Sub on GCP. You don't configure the broker.

What you get without writing it

  • Distributed tracing end-to-end with every request traced across services, DB calls, and Pub/Sub messages.
  • Local dev with one command: encore run starts every service, database, and queue locally.
  • Infrastructure from code: the TypeScript that declares your services and topics provisions the actual cloud resources.
  • Service catalog and API docs: auto-generated from your code.

For a new Node.js microservice project, this replaces several weeks of template-building. Encore is open source (11k+ GitHub stars) and runs in production at companies including Groupon.

Deploy with Encore

Want to jump straight to a running app? Clone this starter and deploy it to your own cloud.


When Encore fits

  • You're starting a new Node.js microservice system and don't want to rebuild the scaffolding.
  • You want type safety across services, not just within each one.
  • You're on AWS or GCP and would rather not run your own broker and tracing stack.

When to stick with a DIY Node setup

  • You have existing Node microservices with patterns you don't want to migrate.
  • You require a broker or cloud Encore doesn't target natively.

Getting Started

If you're building microservices from scratch in Node.js, pick one of these paths:

# DIY Express/Fastify + Docker Compose
mkdir -p services/users services/orders
# ... build templates, wire up broker, logging, tracing

# NestJS microservices
npm i -g @nestjs/cli && nest new project

# Encore
brew install encoredev/tap/encore
encore app create my-app --example=ts/empty
cd my-app && encore run
