04/20/26

NestJS Microservices: A Practical Guide

Transports, patterns, deployment, and when a simpler approach makes sense

11 Min Read

NestJS ships a dedicated microservices package that lets you split your application into services that communicate over TCP, Redis, NATS, gRPC, Kafka, or RabbitMQ. It's a capable foundation if you want NestJS's opinionated architecture across a distributed system. It's also one of the heavier ways to build microservices in TypeScript, and the overhead compounds as the service count grows.

This guide walks through the NestJS microservices model: how the transports work, how services talk to each other, how you deploy them, and where the model starts to strain. At the end we look at a lighter approach that handles the same architectural goals with less ceremony.

When You Actually Want Microservices

Before getting into NestJS specifics, a reality check. Microservices are an organizational and scaling tool, not a default architecture.

Stick with a monolith if:

  • You're a team of fewer than 5 developers
  • Your domain boundaries aren't stable yet
  • You haven't hit scaling pain that's clearly bounded by a single service

Consider microservices when:

  • Different parts of the system have genuinely different scaling profiles
  • Separate teams need to deploy independently without coordination
  • You need fault isolation between components that process untrusted input
  • Your domain has clear, well-understood bounded contexts

Most "we need microservices" conversations are really "we need better code organization." Splitting a well-factored monolith later is cheaper than merging a poorly factored microservice mesh.

NestJS Microservices: How It Works

NestJS's microservices package (@nestjs/microservices) replaces the default HTTP transport with a message-oriented transport. Instead of controllers mapped to HTTP routes, you expose message handlers keyed by patterns. A service can listen for patterns from one or more transports simultaneously.

The core concepts:

  • Transporter: the wire protocol. TCP (built-in), Redis (pub/sub), NATS, MQTT, gRPC, Kafka, RabbitMQ. Each has its own delivery semantics and dependencies.
  • Message pattern: a key (string or object) that maps an incoming message to a handler. Similar to a route, but transport-agnostic.
  • ClientProxy: the outbound side. Lets a service send messages or emit events to another service through the same transport.
  • Hybrid application: a NestJS instance that listens for HTTP traffic and microservice messages at the same time. Common for services that need a public API plus internal RPC.
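The hybrid case, for instance, bolts a microservice listener onto a normal HTTP bootstrap. A minimal sketch, assuming a TCP transport; the module name and ports are placeholders:

```typescript
// main.ts for a hybrid application: serves HTTP and TCP messages in one process.
import { NestFactory } from "@nestjs/core";
import { MicroserviceOptions, Transport } from "@nestjs/microservices";
import { AppModule } from "./app.module";

async function bootstrap() {
  const app = await NestFactory.create(AppModule);

  // Attach an internal TCP microservice listener alongside the HTTP server.
  app.connectMicroservice<MicroserviceOptions>({
    transport: Transport.TCP,
    options: { host: "0.0.0.0", port: 3001 },
  });

  await app.startAllMicroservices();
  await app.listen(3000); // public HTTP API
}
bootstrap();
```

Both listeners share the same dependency injection container, so a provider can serve HTTP controllers and message handlers alike.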

A TCP Microservice

TCP is the simplest transport and ships with NestJS by default; no external broker is required. It's good for learning the model but less suitable for production at scale, since you're on the hook for load balancing and reconnection.

Create the microservice entry point:

// apps/users/src/main.ts
import { NestFactory } from "@nestjs/core";
import { MicroserviceOptions, Transport } from "@nestjs/microservices";
import { UsersModule } from "./users.module";

async function bootstrap() {
  const app = await NestFactory.createMicroservice<MicroserviceOptions>(
    UsersModule,
    {
      transport: Transport.TCP,
      options: {
        host: "0.0.0.0",
        port: 3001,
      },
    },
  );
  await app.listen();
}
bootstrap();

The controller exposes message handlers instead of HTTP routes:

// apps/users/src/users.controller.ts
import { Controller } from "@nestjs/common";
import { MessagePattern, Payload } from "@nestjs/microservices";
import { UsersService } from "./users.service";
import { CreateUserDto } from "./dto/create-user.dto"; // DTO location assumed

@Controller()
export class UsersController {
  constructor(private readonly usersService: UsersService) {}

  @MessagePattern({ cmd: "get_user" })
  async getUser(@Payload() id: number) {
    return this.usersService.findOne(id);
  }

  @MessagePattern({ cmd: "create_user" })
  async createUser(@Payload() dto: CreateUserDto) {
    return this.usersService.create(dto);
  }
}

And the service that consumes it needs a module import plus a client registration:

// apps/orders/src/orders.module.ts
import { Module } from "@nestjs/common";
import { ClientsModule, Transport } from "@nestjs/microservices";
import { OrdersController } from "./orders.controller";
import { OrdersService } from "./orders.service";

@Module({
  imports: [
    ClientsModule.register([
      {
        name: "USERS_SERVICE",
        transport: Transport.TCP,
        options: { host: "users-service", port: 3001 },
      },
    ]),
  ],
  controllers: [OrdersController],
  providers: [OrdersService],
})
export class OrdersModule {}

Then inject and call it:

// apps/orders/src/orders.service.ts
import { Inject, Injectable, NotFoundException } from "@nestjs/common";
import { ClientProxy } from "@nestjs/microservices";
import { firstValueFrom } from "rxjs";
import { OrderItem } from "./orders.types"; // shared type, location assumed

@Injectable()
export class OrdersService {
  constructor(
    @Inject("USERS_SERVICE") private readonly usersClient: ClientProxy,
  ) {}

  async createOrder(userId: number, items: OrderItem[]) {
    const user = await firstValueFrom(
      this.usersClient.send({ cmd: "get_user" }, userId),
    );

    if (!user) {
      throw new NotFoundException("User not found");
    }
    // ... create the order
  }
}

A few things to notice:

  1. The payload type isn't enforced across the wire. The consumer and producer both assert the shape, but there's no compile-time guarantee they agree. A field renamed on the producer silently breaks the consumer at runtime.
  2. Every new consumer requires a module registration. If OrdersService later needs inventory and notifications, that's two more ClientsModule.register entries, two more injected clients, two more firstValueFrom wrappers.
  3. The transport is configured inline. Moving from TCP to Redis means editing every producer and every consumer's bootstrap code and module imports.
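A common mitigation for the first point, which NestJS doesn't provide for you, is a shared contracts package with runtime guards, so a renamed field fails loudly on the consumer instead of silently corrupting state. A sketch under assumed names (`User`, `isUser`):

```typescript
// Shared contract between producer and consumer
// (e.g. a versioned internal npm package both services depend on).
export interface User {
  id: number;
  email: string;
  name: string;
}

// Runtime guard: validates the deserialized payload before use,
// turning a silent shape mismatch into an explicit, immediate failure.
export function isUser(value: unknown): value is User {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.id === "number" &&
    typeof v.email === "string" &&
    typeof v.name === "string"
  );
}
```

The consumer then checks `isUser(payload)` right after `firstValueFrom(...)` and throws a descriptive error when the check fails. Libraries like zod formalize the same idea.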

Event Patterns vs. Message Patterns

MessagePattern is request-response: the client sends a message and awaits a reply. EventPattern is fire-and-forget: the producer emits an event, any number of consumers can subscribe, and no response is expected.

// Producer side (emit dispatches immediately; there is no reply to await)
this.client.emit("order.created", { orderId, userId, total });

// Consumer side
@EventPattern("order.created")
async handleOrderCreated(@Payload() data: OrderCreatedEvent) {
  await this.inventoryService.reserve(data);
  await this.emailService.sendConfirmation(data);
}

Events are how you avoid tight coupling between services. The producer doesn't know which services consume its events, and you can add new consumers without touching the producer. Request-response is fine for synchronous lookups; events are how you build workflows that span services.

Scaling Beyond TCP: Redis, NATS, Kafka

TCP is fine for local development and small deployments. Once you need load balancing, message durability, or fan-out, you swap the transport for a broker.

Redis: cheapest to operate. Pub/sub for events, but no built-in queuing or replay. Good first choice if you already run Redis for caching.

NestFactory.createMicroservice(AppModule, {
  transport: Transport.REDIS,
  options: { host: "redis", port: 6379 },
});
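The consumer side swaps in the same way; the ClientsModule entry from the TCP example would become (a sketch, hostname is a placeholder):

```typescript
// orders.module.ts excerpt: same client token, Redis transport instead of TCP.
import { ClientsModule, Transport } from "@nestjs/microservices";

ClientsModule.register([
  {
    name: "USERS_SERVICE",
    transport: Transport.REDIS,
    options: { host: "redis", port: 6379 },
  },
]);
```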

NATS: lightweight, designed for this workload. JetStream adds persistence and replay. Good middle ground when Redis isn't enough but Kafka is overkill.
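The equivalent bootstrap for NATS, as a sketch (server URL is a placeholder; the `nats` package must be installed alongside @nestjs/microservices):

```typescript
// NATS transport: servers takes a list of connection URLs.
NestFactory.createMicroservice<MicroserviceOptions>(AppModule, {
  transport: Transport.NATS,
  options: { servers: ["nats://nats:4222"] },
});
```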

Kafka: durable, replayable, partitioned. The right choice when events are part of your source of truth (order processing, audit logs). Operationally heavy: you'll run a Kafka cluster, manage schemas, and deal with consumer group lag.
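A Kafka bootstrap sketch (broker address and group id are placeholders; NestJS uses kafkajs under the hood):

```typescript
// Kafka transport: groupId controls consumer-group load balancing,
// so multiple instances of the service share partitions.
NestFactory.createMicroservice<MicroserviceOptions>(AppModule, {
  transport: Transport.KAFKA,
  options: {
    client: { brokers: ["kafka:9092"] },
    consumer: { groupId: "orders-consumer" },
  },
});
```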

RabbitMQ: traditional work queue semantics. Routing flexibility, dead-letter queues, delayed delivery. Well-suited for task processing.
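And the RabbitMQ equivalent, sketched with placeholder URL and queue names (requires `amqplib` and `amqp-connection-manager`):

```typescript
// RMQ transport: a durable queue survives broker restarts.
NestFactory.createMicroservice<MicroserviceOptions>(AppModule, {
  transport: Transport.RMQ,
  options: {
    urls: ["amqp://rabbitmq:5672"],
    queue: "orders_queue",
    queueOptions: { durable: true },
  },
});
```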

Switching transports in NestJS is a config change in principle, but each broker behaves differently around delivery guarantees, ordering, and backpressure. Your application logic often has to change to accommodate those differences. "Transport-agnostic" is true at the API surface and not much deeper.

Deploying NestJS Microservices

This is where NestJS stops opining and hands you the keys. A typical production setup requires:

  • A container per service: each NestJS microservice is a separate build and deployment target.
  • Service discovery: hardcoded hostnames work in Docker Compose but not in Kubernetes. You'll end up with a service mesh, internal DNS, or a discovery library.
  • A message broker (unless you're on TCP): Redis, NATS, or Kafka running somewhere, with its own HA setup.
  • Databases per service: the whole point of microservices is isolation. That means a Postgres (or whatever) per service, connection strings in secrets, migrations run independently.
  • Observability: NestJS gives you basic logging. Distributed tracing across services means adding OpenTelemetry manually, wiring context propagation through every ClientProxy.send call, and running a collector.
  • CI/CD per service: independent pipelines, independent versioning, independent rollout.

None of this is unique to NestJS; any microservice stack faces the same list. But NestJS doesn't solve any of it for you. The framework ends at the application boundary; everything outside that is your responsibility.

Teams typically end up with a dedicated platform team maintaining Terraform or Pulumi modules for infrastructure, a Kubernetes cluster, a service mesh (Linkerd or Istio), and internal tooling to keep the developer experience tolerable. That's a lot of machinery to ship a second service.

Common Pain Points

Recurring themes from teams running NestJS microservices in production:

  1. Type safety ends at the service boundary. The TypeScript compiler checks your controllers and services, but ClientProxy.send returns Observable<any>. A schema change in one service can break consumers silently until runtime.
  2. Boilerplate scales linearly with services. Each new service needs its own module tree, DTO duplication across repos (or a shared package that now has to be versioned), client registrations in every consumer.
  3. Local development is painful. Running 8 microservices locally means 8 Node processes, a broker, and a lot of RAM. Docker Compose helps, but it's extra tooling you maintain outside the framework.
  4. Tracing is manual. Without setting up OpenTelemetry end-to-end, a slow request across three services just shows up as "slow" with no attribution.
  5. Infrastructure drifts from code. Your NestJS code says "I publish to order.created." Your Terraform says "there's a SQS queue called order-events." Keeping those in sync is a full-time job once you have 20 services.

These aren't fatal; large teams do run NestJS microservices successfully. But the overhead is real, and it mostly falls on platform engineering rather than feature teams.

A Simpler Alternative: Encore

Encore approaches the same problem differently: you declare services and infrastructure in TypeScript, and the framework generates the wiring, provisions the infrastructure, and runs the distributed system for you. No NestJS module registrations, no ClientProxy, no manual Terraform, no separate OpenTelemetry setup.

The difference in practice is that a NestJS microservice is a separate application you have to deploy and wire together, while an Encore service is a directory with typed functions that Encore composes into a running system.

Encore has over 11,000 GitHub stars and is used in production by teams including Groupon.

Deploy with Encore

Want to jump straight to a running app? Clone this starter and deploy it to your own cloud.


Defining Services

An Encore service is a folder with an encore.service.ts file:

// users/encore.service.ts
import { Service } from "encore.dev/service";
export default new Service("users");

That's the entire service definition. No module, no provider registration, no bootstrap file.

Typed APIs

Endpoints are functions with typed parameters and typed returns:

// users/users.ts
import { api } from "encore.dev/api";
import { SQLDatabase } from "encore.dev/storage/sqldb";

// Provisions a managed Postgres database automatically.
// Locally: Docker. In production: RDS on AWS or Cloud SQL on GCP.
const db = new SQLDatabase("users", { migrations: "./migrations" });

interface User {
  id: number;
  email: string;
  name: string;
}

export const get = api(
  { method: "GET", path: "/users/:id", expose: true },
  async ({ id }: { id: number }): Promise<User> => {
    const row = await db.queryRow<User>`SELECT * FROM users WHERE id = ${id}`;
    if (!row) throw new Error("user not found");
    return row;
  },
);

There's no controller class, no DTO, and no module registration. The endpoint's type signature is the contract, and Encore generates an OpenAPI spec, client SDKs, and tracing hooks from it.

Service-to-Service Calls

Cross-service calls look like regular function calls, with full end-to-end type safety:

// orders/orders.ts
import { api } from "encore.dev/api";
import { users } from "~encore/clients";

export const createOrder = api(
  { method: "POST", path: "/orders", expose: true },
  async (req: CreateOrderRequest): Promise<Order> => {
    const user = await users.get({ id: req.userId });
    if (!user) throw new Error("user not found");
    // ... create the order
  },
);

users.get is type-checked at compile time. Renaming a field on the users service breaks the orders service at build time, not at 2am in production.

Events Instead of ClientProxy.emit

Encore has native Pub/Sub primitives. Defining a topic:

// orders/events.ts
import { Topic } from "encore.dev/pubsub";

export interface OrderCreated {
  orderId: number;
  userId: number;
  total: number;
}

// Provisions SQS/SNS (AWS) or Pub/Sub (GCP) automatically.
export const orderCreated = new Topic<OrderCreated>("order-created", {
  deliveryGuarantee: "at-least-once",
});

Publishing:

await orderCreated.publish({ orderId, userId, total });

Subscribing, from any other service:

// inventory/subscriptions.ts
import { Subscription } from "encore.dev/pubsub";
import { orderCreated } from "../orders/events";

new Subscription(orderCreated, "reserve-inventory", {
  handler: async (event) => {
    await reserveStock(event.orderId);
  },
});

There's no broker configuration or NATS connection string to manage. Encore picks SNS+SQS on AWS or Pub/Sub on GCP and wires them up at deploy time.

What You Get Without Writing It

  • Distributed tracing out of the box. Every request has a trace showing every service call, database query, and Pub/Sub message, with timing and payloads.
  • Infrastructure from code. The same TypeScript that declares your topics, databases, and cron jobs provisions them on AWS or GCP. No Terraform, no Pulumi, no drift.
  • Local parity. encore run starts the whole system locally: databases in Docker, in-memory Pub/Sub, and live reload across services.
  • Type-safe clients everywhere. Frontend, mobile, CLI, all generated from your API definitions.
  • Auto-generated API docs and service catalog. No Swagger decorators, no manual docs site.

Side-by-Side Comparison

Aspect | NestJS Microservices | Encore
--- | --- | ---
Service definition | Module + controller + service + DTOs + bootstrap | One folder with encore.service.ts
Cross-service type safety | Manual; payloads typed as any at the wire level | Compile-time across services
Infrastructure | You write Terraform / Pulumi / K8s manifests | Generated from TypeScript declarations
Pub/Sub | @EventPattern + broker config per service | new Topic<T>() + new Subscription()
Distributed tracing | Manual OpenTelemetry setup | Built in
Local development | Docker Compose + broker + per-service configs | encore run
API documentation | Swagger decorators + generator setup | Auto-generated
Deployment | CI/CD per service, manual infra | git push to Encore Cloud (AWS/GCP)

When NestJS Microservices Still Makes Sense

There are scenarios where sticking with NestJS microservices is the right call:

  • You're already deep in the NestJS ecosystem with substantial investment in modules, guards, interceptors, and testing infrastructure
  • You need a transport Encore doesn't provide natively (e.g., gRPC interop with non-Encore services, Kafka with specific tuning requirements)
  • Your team has existing platform engineering capacity and prefers assembling tools over using a platform
  • You're constrained to specific cloud-native patterns mandated by a parent org

For most greenfield backends, the overhead of NestJS microservices outweighs the benefits. With Encore you can ship in an afternoon a distributed system that would take a week to scaffold in NestJS.

Getting Started

If you want to try the Encore approach, the fastest path is:

brew install encoredev/tap/encore
encore app create shop-app --example=ts/empty
cd shop-app
encore run

Open localhost:9400 for the local development dashboard. It shows every service, API, trace, and infrastructure resource in one place.


Ready to build your next backend?

Encore is the Open Source framework for building robust type-safe distributed systems with declarative infrastructure.