NestJS ships a dedicated microservices package that lets you split your application into services that communicate over TCP, Redis, NATS, gRPC, Kafka, or RabbitMQ. It's a capable foundation if you want NestJS's opinionated architecture across a distributed system. It's also one of the heavier ways to build microservices in TypeScript, and the overhead compounds as the service count grows.
This guide walks through the NestJS microservices model: how the transports work, how services talk to each other, how you deploy them, and where the model starts to strain. At the end we look at a lighter approach that handles the same architectural goals with less ceremony.
Before getting into NestJS specifics, a reality check. Microservices are an organizational and scaling tool, not a default architecture.
Stick with a monolith if:

- your team is small and one deployable keeps coordination cheap,
- the whole application scales together, and
- your module boundaries are still shifting.

Consider microservices when:

- teams are stepping on each other in one codebase and one deploy pipeline,
- parts of the system have very different scaling or availability requirements, or
- teams need independent deploy cadences.
Most "we need microservices" conversations are really "we need better code organization." Splitting a well-factored monolith later is cheaper than merging a poorly-factored microservice mesh.
NestJS's microservices package (@nestjs/microservices) replaces the default HTTP transport with a message-oriented transport. Instead of controllers mapped to HTTP routes, you expose message handlers keyed by patterns. A service can listen for patterns from one or more transports simultaneously.
The core concepts:

- Patterns: message handlers are keyed by pattern objects (e.g. `{ cmd: "get_user" }`) instead of HTTP routes.
- Transports: the wire protocol (TCP, Redis, NATS, Kafka, RabbitMQ) is chosen at bootstrap and per client registration.
- `ClientProxy`: the injectable client used to send request-response messages (`send`) or fire-and-forget events (`emit`).
TCP is the simplest transport and ships with NestJS by default, no external broker required. Good for learning the model, less suitable for production at scale (you're on the hook for load balancing and reconnection).
Create the microservice entry point:
```typescript
// apps/users/src/main.ts
import { NestFactory } from "@nestjs/core";
import { MicroserviceOptions, Transport } from "@nestjs/microservices";
import { UsersModule } from "./users.module";

async function bootstrap() {
  const app = await NestFactory.createMicroservice<MicroserviceOptions>(
    UsersModule,
    {
      transport: Transport.TCP,
      options: {
        host: "0.0.0.0",
        port: 3001,
      },
    },
  );
  await app.listen();
}

bootstrap();
```
The controller exposes message handlers instead of HTTP routes:
```typescript
// apps/users/src/users.controller.ts
import { Controller } from "@nestjs/common";
import { MessagePattern, Payload } from "@nestjs/microservices";
import { CreateUserDto } from "./dto/create-user.dto"; // wherever your DTOs live
import { UsersService } from "./users.service";

@Controller()
export class UsersController {
  constructor(private readonly usersService: UsersService) {}

  @MessagePattern({ cmd: "get_user" })
  async getUser(@Payload() id: number) {
    return this.usersService.findOne(id);
  }

  @MessagePattern({ cmd: "create_user" })
  async createUser(@Payload() dto: CreateUserDto) {
    return this.usersService.create(dto);
  }
}
```
And the service that consumes it needs a module import plus a client registration:
```typescript
// apps/orders/src/orders.module.ts
import { Module } from "@nestjs/common";
import { ClientsModule, Transport } from "@nestjs/microservices";
import { OrdersController } from "./orders.controller";
import { OrdersService } from "./orders.service";

@Module({
  imports: [
    ClientsModule.register([
      {
        name: "USERS_SERVICE",
        transport: Transport.TCP,
        options: { host: "users-service", port: 3001 },
      },
    ]),
  ],
  controllers: [OrdersController],
  providers: [OrdersService],
})
export class OrdersModule {}
```
Then inject and call it:
```typescript
// apps/orders/src/orders.service.ts
import { Inject, Injectable, NotFoundException } from "@nestjs/common";
import { ClientProxy } from "@nestjs/microservices";
import { firstValueFrom } from "rxjs";
import { OrderItem } from "./types"; // wherever your shared types live

@Injectable()
export class OrdersService {
  constructor(
    @Inject("USERS_SERVICE") private readonly usersClient: ClientProxy,
  ) {}

  async createOrder(userId: number, items: OrderItem[]) {
    const user = await firstValueFrom(
      this.usersClient.send({ cmd: "get_user" }, userId),
    );
    if (!user) {
      throw new NotFoundException("User not found");
    }
    // ... create the order
  }
}
```
A few things to notice:

- The client is registered under a string token (`"USERS_SERVICE"`), and the target host is hardcoded in module config.
- `ClientProxy.send` returns an RxJS `Observable`, so every call needs a `firstValueFrom` wrapper to get a promise back.
- If `OrdersService` later needs inventory and notifications, that's two more `ClientsModule.register` entries, two more injected clients, two more `firstValueFrom` wrappers.

`MessagePattern` is request-response: the client awaits a reply. `EventPattern` is fire-and-forget: the producer emits, any number of consumers can subscribe, and no response is expected.
```typescript
// Producer side
await this.client.emit("order.created", { orderId, userId, total });
```

```typescript
// Consumer side
@EventPattern("order.created")
async handleOrderCreated(@Payload() data: OrderCreatedEvent) {
  await this.inventoryService.reserve(data);
  await this.emailService.sendConfirmation(data);
}
```
Events are how you avoid tight coupling between services. The producer doesn't know which services consume its events, and you can add new consumers without touching the producer. Request-response is fine for synchronous lookups; events are how you build workflows that span services.
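Because the producer doesn't know who is listening, adding a consumer is just another handler in another service. A sketch, assuming a hypothetical analytics service with its own `AnalyticsService` provider (names are illustrative, not from the example above):

```typescript
// analytics/src/analytics.controller.ts (hypothetical new consumer)
import { Controller } from "@nestjs/common";
import { EventPattern, Payload } from "@nestjs/microservices";
import { AnalyticsService } from "./analytics.service";

@Controller()
export class AnalyticsController {
  constructor(private readonly analytics: AnalyticsService) {}

  // Subscribes to the same event stream; the orders service is never touched.
  @EventPattern("order.created")
  async trackOrder(@Payload() data: { orderId: number; total: number }) {
    await this.analytics.record("order_created", data);
  }
}
```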
TCP is fine for local development and small deployments. Once you need load balancing, message durability, or fan-out, you swap the transport for a broker.
Redis: cheapest to operate. Pub/sub for events, but no built-in queuing or replay. Good first choice if you already run Redis for caching.
```typescript
NestFactory.createMicroservice(AppModule, {
  transport: Transport.REDIS,
  options: { host: "redis", port: 6379 },
});
```
NATS: lightweight, designed for this workload. JetStream adds persistence and replay. Good middle ground when Redis isn't enough but Kafka is overkill.
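The NATS bootstrap follows the same shape; a minimal sketch, assuming a `nats` hostname from your local compose setup:

```typescript
NestFactory.createMicroservice(AppModule, {
  transport: Transport.NATS,
  // The NATS transport takes a list of server URLs.
  options: { servers: ["nats://nats:4222"] },
});
```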
Kafka: durable, replayable, partitioned. The right choice when events are part of your source of truth (order processing, audit logs). Operationally heavy: you'll run a Kafka cluster, manage schemas, and deal with consumer group lag.
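The Kafka transport splits configuration between the client (broker addresses) and the consumer (group id). A sketch, with hostname and group id as assumed placeholders:

```typescript
NestFactory.createMicroservice(AppModule, {
  transport: Transport.KAFKA,
  options: {
    client: { brokers: ["kafka:9092"] },
    // Consumer group id determines how partitions are shared between instances.
    consumer: { groupId: "orders-consumer" },
  },
});
```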
RabbitMQ: traditional work queue semantics. Routing flexibility, dead-letter queues, delayed delivery. Well-suited for task processing.
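With RabbitMQ you configure a queue rather than a topic; `durable: true` makes the queue survive broker restarts. Queue name and URL below are assumptions for illustration:

```typescript
NestFactory.createMicroservice(AppModule, {
  transport: Transport.RMQ,
  options: {
    urls: ["amqp://rabbitmq:5672"],
    queue: "orders_queue",
    queueOptions: { durable: true },
  },
});
```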
Switching transports in NestJS is a config change in principle, but each broker behaves differently around delivery guarantees, ordering, and backpressure. Your application logic often has to change to accommodate those differences. "Transport-agnostic" is true at the API surface and not much deeper.
This is where NestJS stops opining and hands you the keys. A typical production setup requires:

- service discovery and load balancing appropriate to the transport,
- health checks and graceful shutdown per service,
- CI/CD pipelines and infrastructure definitions for each service, and
- distributed tracing: picking OpenTelemetry, instrumenting every `ClientProxy.send` call, and running a collector.

None of this is unique to NestJS; any microservice stack faces the same list. But NestJS doesn't solve any of it for you. The framework ends at the application boundary; everything outside that is your responsibility.
Teams typically end up with a dedicated platform team maintaining Terraform or Pulumi modules for infrastructure, a Kubernetes cluster, a service mesh (Linkerd or Istio), and internal tooling to keep the developer experience tolerable. That's a lot of machinery to ship a second service.
Recurring themes from teams running NestJS microservices in production:

- No cross-service type safety: `ClientProxy.send` returns `Observable<any>`. A schema change in one service can break consumers silently until runtime.
- Code and infrastructure drift apart: your code says "emit `order.created`." Your Terraform says "there's an SQS queue called `order-events`." Keeping those in sync is a full-time job once you have 20 services.

These aren't fatal; large teams do run NestJS microservices successfully. But the overhead is real, and it mostly falls on platform engineering rather than feature teams.
Encore approaches the same problem differently: you declare services and infrastructure in TypeScript, and the framework generates the wiring, provisions the infrastructure, and runs the distributed system for you. No NestJS module registrations, no ClientProxy, no manual Terraform, no separate OpenTelemetry setup.
The difference in practice is that a NestJS microservice is a separate application you have to deploy and wire together, while an Encore service is a directory with typed functions that Encore composes into a running system.
Encore has over 11,000 GitHub stars and is used in production by teams including Groupon.
Want to jump straight to a running app? Clone this starter and deploy it to your own cloud.
An Encore service is a folder with an encore.service.ts file:
```typescript
// users/encore.service.ts
import { Service } from "encore.dev/service";

export default new Service("users");
```
That's the entire service definition. No module, no provider registration, no bootstrap file.
Endpoints are functions with typed parameters and typed returns:
```typescript
// users/users.ts
import { api, APIError } from "encore.dev/api";
import { SQLDatabase } from "encore.dev/storage/sqldb";

// Provisions a managed Postgres database automatically.
// Locally: Docker. In production: RDS on AWS or Cloud SQL on GCP.
const db = new SQLDatabase("users", { migrations: "./migrations" });

interface User {
  id: number;
  email: string;
  name: string;
}

export const get = api(
  { method: "GET", path: "/users/:id", expose: true },
  async ({ id }: { id: number }): Promise<User> => {
    // queryRow returns null when no row matches, so handle that case.
    const row = await db.queryRow<User>`SELECT * FROM users WHERE id = ${id}`;
    if (!row) throw APIError.notFound("user not found");
    return row;
  },
);
```
There's no controller class, no DTO, and no module registration. The endpoint's type signature is the contract, and Encore generates an OpenAPI spec, client SDKs, and tracing hooks from it.
Cross-service calls look like regular function calls, with full end-to-end type safety:
```typescript
// orders/orders.ts
import { api } from "encore.dev/api";
import { users } from "~encore/clients";

// Minimal request/response shapes for the example.
interface CreateOrderRequest {
  userId: number;
}
interface Order {
  id: number;
  userId: number;
}

export const createOrder = api(
  { method: "POST", path: "/orders", expose: true },
  async (req: CreateOrderRequest): Promise<Order> => {
    const user = await users.get({ id: req.userId });
    if (!user) throw new Error("user not found");
    // ... create the order
  },
);
```
users.get is type-checked at compile time. Renaming a field on the users service breaks the orders service at build time, not at 2am in production.
Where NestJS needs a broker plus `ClientProxy.emit`, Encore has native Pub/Sub primitives. Defining a topic:
```typescript
// orders/events.ts
import { Topic } from "encore.dev/pubsub";

export interface OrderCreated {
  orderId: number;
  userId: number;
  total: number;
}

// Provisions SQS/SNS (AWS) or Pub/Sub (GCP) automatically.
export const orderCreated = new Topic<OrderCreated>("order-created", {
  deliveryGuarantee: "at-least-once",
});
```
Publishing:
```typescript
await orderCreated.publish({ orderId, userId, total });
```
Subscribing, from any other service:
```typescript
// inventory/subscriptions.ts
import { Subscription } from "encore.dev/pubsub";
// Topics are imported directly from the module that defines them.
import { orderCreated } from "../orders/events";

new Subscription(orderCreated, "reserve-inventory", {
  handler: async (event) => {
    await reserveStock(event.orderId);
  },
});
```
There's no broker configuration or NATS connection string to manage. Encore picks SNS+SQS on AWS or Pub/Sub on GCP and wires them up at deploy time.
`encore run` starts the whole system locally: databases in Docker, in-memory Pub/Sub, live reload across services.

| Aspect | NestJS Microservices | Encore |
|---|---|---|
| Service definition | Module + controller + service + DTOs + bootstrap | One folder with encore.service.ts |
| Cross-service type safety | Manual, payloads typed as any at wire level | Compile-time across services |
| Infrastructure | You write Terraform / Pulumi / K8s manifests | Generated from TypeScript declarations |
| Pub/Sub | @EventPattern + broker config per service | new Topic<T>() + new Subscription() |
| Distributed tracing | Manual OpenTelemetry setup | Built in |
| Local development | Docker Compose + broker + per-service configs | encore run |
| API documentation | Swagger decorators + generator setup | Auto-generated |
| Deployment | CI/CD per service, manual infra | git push to Encore Cloud (AWS/GCP) |
There are scenarios where sticking with NestJS microservices is the right call:

- You already run a large NestJS codebase and your team's expertise is concentrated there.
- You need fine-grained control over a specific broker (say, hand-tuned Kafka consumer groups).
- You have a platform team whose Terraform, Kubernetes, and observability stack already exists and works.
For most greenfield backends, the overhead of NestJS microservices outweighs the benefits. You can ship a distributed system with Encore in an afternoon that would take NestJS a week to scaffold.
If you want to try the Encore approach, the fastest path is:
```shell
brew install encoredev/tap/encore
encore app create shop-app --example=ts/empty
cd shop-app
encore run
```
Open localhost:9400 for the local development dashboard. It shows every service, API, trace, and infrastructure resource in one place.