03/20/26

Message Queues vs Pub/Sub: When to Use Which

Understanding asynchronous messaging patterns for backend systems


Your backend has two services that need to communicate. The first service produces work or events. The second service needs to consume them. But you don't want the producer waiting for the consumer to finish before it can move on. You need asynchronous communication.

There are two fundamental patterns for this: message queues and publish/subscribe (pub/sub). They solve different problems, and choosing the wrong one creates headaches that surface months later when you're scaling, debugging lost messages, or trying to add a third consumer.

This guide explains both patterns, when to use each, and how to implement them in TypeScript backends. The code examples use Encore.ts, which has built-in Pub/Sub primitives that handle provisioning automatically - but the patterns themselves apply regardless of what you're building with.

Message Queues: Point-to-Point Delivery

A message queue is a buffer that sits between a producer and a consumer. The producer puts a message on the queue. Exactly one consumer picks it up, processes it, and acknowledges it. Once acknowledged, the message is removed from the queue.

The key characteristic is point-to-point delivery. Each message is processed by exactly one consumer. If you have five consumers reading from the same queue, each message goes to one of them. This is work distribution, not broadcasting.

How Message Queues Work

  1. Producer sends a message to the queue
  2. The queue stores the message durably (survives restarts)
  3. A consumer pulls the message (or the queue pushes it)
  4. The consumer processes the message and sends an acknowledgment
  5. The queue deletes the acknowledged message
  6. If the consumer crashes before acknowledging, the queue makes the message available again after a visibility timeout

This acknowledgment mechanism is what gives message queues their reliability. Messages aren't lost if a consumer fails. They're retried automatically.
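The six steps above can be sketched as a tiny in-memory queue. Everything here (ToyQueue and its methods) is a hypothetical illustration of the semantics, not a real broker API:

```typescript
// Minimal in-memory queue illustrating ack + visibility timeout semantics.
// Real brokers (SQS, RabbitMQ) implement the same ideas with durable
// storage and network delivery.

interface QueuedMessage<T> {
  id: number;
  body: T;
  invisibleUntil: number; // timestamp in ms; 0 = visible now
}

class ToyQueue<T> {
  private messages: QueuedMessage<T>[] = [];
  private nextId = 1;

  constructor(private visibilityTimeoutMs: number) {}

  // Steps 1-2: producer sends; the queue stores it.
  send(body: T): number {
    const id = this.nextId++;
    this.messages.push({ id, body, invisibleUntil: 0 });
    return id;
  }

  // Step 3: a consumer pulls the next visible message. It becomes
  // invisible to other consumers until the visibility timeout expires.
  receive(now = Date.now()): QueuedMessage<T> | undefined {
    const msg = this.messages.find((m) => m.invisibleUntil <= now);
    if (msg) msg.invisibleUntil = now + this.visibilityTimeoutMs;
    return msg;
  }

  // Steps 4-5: acknowledgment deletes the message for good.
  ack(id: number): void {
    this.messages = this.messages.filter((m) => m.id !== id);
  }

  size(): number {
    return this.messages.length;
  }
}

// Step 6 in action: a consumer that never acks causes redelivery.
const q = new ToyQueue<string>(5_000);
q.send("generate-invoice");

q.receive(1_000);                      // consumer A picks it up at t=1s
q.receive(2_000);                      // consumer B at t=2s sees nothing (invisible)
const redelivered = q.receive(7_000)!; // timeout expired at t=6s: redelivered
q.ack(redelivered.id);                 // acknowledged and deleted
```

The visibility timeout is the crash-safety mechanism: an in-flight message is hidden, not deleted, so a dead consumer's work automatically returns to the queue.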

Common Message Queue Implementations

Amazon SQS is the most widely used managed message queue. It's a pull-based system where consumers poll for messages. SQS guarantees at-least-once delivery (messages may be delivered more than once) and offers FIFO queues for strict ordering. Pricing is per-request, which makes it cheap for low-throughput workloads and predictable at scale.

RabbitMQ is an open-source message broker that supports multiple messaging patterns including point-to-point queues, topic exchanges, and fan-out. It's more flexible than SQS but requires you to run and maintain the broker yourself (or use a managed service like CloudAMQP). RabbitMQ supports push-based delivery, priority queues, and message TTL.

Redis Streams and Redis Lists can function as lightweight message queues. Redis Streams support consumer groups (multiple consumers sharing the work), acknowledgments, and message history. The tradeoff is durability: Redis can lose messages if the instance crashes between disk flushes, depending on your persistence configuration.

BullMQ is a popular TypeScript library built on Redis that adds job scheduling, retries, rate limiting, and priority queues. It's not a standalone message broker, but it's the most common choice for background job processing in Node.js applications.

Message Queue Example

Here's what a typical message queue pattern looks like conceptually:

Producer → [Queue] → Consumer A
                   → Consumer B (same queue, different messages)
                   → Consumer C (same queue, different messages)

Each message goes to exactly one of the consumers. If Consumer A is busy, the message goes to B or C. This is load balancing at the message level.

Pub/Sub: One-to-Many Broadcasting

Publish/subscribe (pub/sub) is a broadcasting pattern. A publisher sends a message to a topic. Every subscriber to that topic receives a copy of the message. Unlike message queues, pub/sub doesn't distribute work. It duplicates it.

The key characteristic is fan-out. One event, multiple independent consumers. Each subscriber has its own subscription with its own delivery tracking. If Subscriber A acknowledges a message, it doesn't affect Subscriber B's copy.

How Pub/Sub Works

  1. Publisher sends a message to a topic
  2. The topic delivers a copy of the message to each subscription
  3. Each subscription is independent: its own message backlog, its own acknowledgment tracking
  4. Each subscriber processes its copy and acknowledges it independently
  5. If a subscriber crashes, its copy is redelivered. Other subscribers aren't affected.

The decoupling here is the important part. The publisher doesn't know how many subscribers exist. Subscribers don't know about each other. You can add new subscribers without changing the publisher. You can remove subscribers without affecting anyone else.
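The fan-out mechanics can be sketched the same way. ToyTopic and its methods are illustrative names, not a real client library:

```typescript
// Minimal in-memory topic illustrating fan-out: every subscription gets
// its own copy of each message and its own independent backlog.

type Handler<T> = (event: T) => void;

class ToyTopic<T> {
  private subscriptions = new Map<string, T[]>();

  subscribe(name: string): void {
    this.subscriptions.set(name, []);
  }

  // The topic delivers a copy to each subscription's backlog.
  // The publisher doesn't know (or care) how many subscriptions exist.
  publish(event: T): void {
    for (const backlog of this.subscriptions.values()) backlog.push(event);
  }

  // Each subscription drains its own backlog independently; one
  // subscriber's progress never affects another's copy.
  drain(name: string, handler: Handler<T>): number {
    const backlog = this.subscriptions.get(name) ?? [];
    backlog.forEach(handler);
    const count = backlog.length;
    this.subscriptions.set(name, []);
    return count;
  }
}

const topic = new ToyTopic<{ orderId: string }>();
topic.subscribe("email");
topic.subscribe("analytics");
topic.publish({ orderId: "order-1" });

// Both subscribers see the same event:
topic.drain("email", (e) => console.log("email for", e.orderId));
topic.drain("analytics", (e) => console.log("analytics for", e.orderId));
```

Note that a subscription created after an event was published misses it, which mirrors real pub/sub systems: delivery starts from the moment the subscription exists.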

Common Pub/Sub Implementations

Google Cloud Pub/Sub is a fully managed pub/sub service. It handles ordering (with ordering keys), exactly-once delivery (within a window), dead-letter topics, and schema validation. Messages are durably stored and replayed on failure. It's the most feature-complete managed option.

Amazon SNS (Simple Notification Service) is AWS's pub/sub offering. It's often paired with SQS: SNS handles the fan-out, and each subscriber is an SQS queue. This SNS+SQS pattern is one of the most common messaging architectures on AWS.

Apache Kafka is a distributed event streaming platform that functions as both a message queue and a pub/sub system. Kafka stores messages in an ordered, immutable log. Consumers track their position in the log (offset) and can re-read historical messages. Kafka excels at high-throughput event streaming but has significant operational complexity.

Redis Pub/Sub is a lightweight pub/sub system built into Redis. It's fast and simple but has no persistence: if a subscriber is offline when a message is published, that message is lost. This makes it suitable for real-time notifications but not for reliable event processing.

Pub/Sub Example

Publisher → [Topic] → Subscription A → Consumer A (email service)
                    → Subscription B → Consumer B (analytics service)
                    → Subscription C → Consumer C (audit log service)

Every subscriber receives every message. Consumer A sends an email, Consumer B updates analytics, Consumer C writes to the audit log. Each processes the same event independently.

Comparison: Queues vs Pub/Sub

| Dimension         | Message Queue                           | Pub/Sub                           |
| ----------------- | --------------------------------------- | --------------------------------- |
| Delivery model    | Point-to-point (one consumer)           | Fan-out (all subscribers)         |
| Consumer count    | One consumer per message                | Multiple independent subscribers  |
| Primary use case  | Work distribution                       | Event broadcasting                |
| Consumer coupling | Consumers compete for messages          | Consumers are independent         |
| Adding consumers  | Shares existing load (more workers)     | Creates new subscription (new copy) |
| Message lifetime  | Deleted after acknowledgment            | Per-subscription tracking         |
| Ordering          | Often FIFO within queue                 | Per-subscription, often per-key   |
| Backpressure      | Queue depth grows if consumers are slow | Per-subscription backlog          |
| Typical examples  | SQS, RabbitMQ, BullMQ                   | Google Cloud Pub/Sub, SNS, Kafka  |

The distinction matters because it affects how your system behaves when you add new consumers. Adding a consumer to a message queue means splitting the existing work. Adding a subscriber to a pub/sub topic means duplicating the event stream to a new destination.

When to Use Message Queues

Background Job Processing

You have a web request that triggers expensive work: generating a PDF, processing an image, sending an email, running an ML inference. The request handler puts a job on the queue and returns immediately. A pool of workers processes jobs at their own pace.

API Request → [job queue] → Worker pool → Done

The queue acts as a buffer. If you get a burst of 1000 requests, the queue absorbs them and workers process them at a steady rate. No dropped requests, no timeouts, no overloaded servers.

Task Distribution Across Workers

You're processing a batch of 10,000 records. Instead of one process handling all of them sequentially, you push each record onto a queue and let multiple workers process them in parallel. The queue handles load balancing automatically: each worker picks the next available message.

This scales horizontally. Need to process faster? Add more workers. Need to save costs during quiet hours? Reduce workers. The queue ensures no work is lost or duplicated.
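A minimal sketch of the worker-pool side, assuming the "queue" is just a shared in-memory array (a real broker replaces the array; the competing-consumer guarantee is the same). The function name is illustrative, not from any particular library:

```typescript
// A pool of workers pulls from a shared queue until it is empty.
// Each record is handled exactly once; adding workers speeds things up.

async function processWithWorkers<T>(
  queue: T[],
  workerCount: number,
  handle: (item: T) => Promise<void>,
): Promise<void> {
  // Each worker loops: take the next available item, process it, repeat.
  // JavaScript's single-threaded event loop makes shift() safe here; a
  // real broker provides the same one-consumer-per-message guarantee.
  const workers = Array.from({ length: workerCount }, async () => {
    while (queue.length > 0) {
      const item = queue.shift()!;
      await handle(item);
    }
  });
  await Promise.all(workers);
}

// Process 10,000 records with 8 parallel workers.
const records = Array.from({ length: 10_000 }, (_, i) => i);
let done = 0;
await processWithWorkers(records, 8, async () => {
  done++; // stand-in for real per-record work
});
```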

Rate-Limited External API Calls

You need to call an external API with a rate limit (say, 100 requests per minute). A queue with rate limiting ensures you don't exceed the limit. Messages pile up in the queue during bursts and drain at the allowed rate.
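The drain behavior is easy to model. Assuming a fixed per-message interval derived from the limit (drainSchedule is a hypothetical helper, not a library function), a burst piles up and leaves at a steady rate:

```typescript
// Given a rate limit, compute when each queued message may be dispatched.
// With 100 requests/minute, messages drain one every 600 ms regardless of
// how quickly they arrived.

function drainSchedule(
  arrivalsMs: number[], // when each message entered the queue
  limitPerMinute: number,
): number[] {
  const intervalMs = 60_000 / limitPerMinute;
  const dispatchMs: number[] = [];
  let earliestNext = 0;
  for (const arrived of arrivalsMs) {
    // Dispatch as soon as the message is in the queue AND the rate
    // limiter allows another request, whichever comes later.
    const at = Math.max(arrived, earliestNext);
    dispatchMs.push(at);
    earliestNext = at + intervalMs;
  }
  return dispatchMs;
}

// A burst of four messages at t=0 drains at the allowed rate:
drainSchedule([0, 0, 0, 0], 100); // → [0, 600, 1200, 1800]
```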

Sequential Processing

Some operations must happen in order: applying database migrations, processing financial transactions for an account, replaying events. A FIFO queue guarantees ordering. SQS FIFO queues and Kafka partitions both support this pattern.
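One way to sketch how per-key ordering works in Kafka partitions and SQS FIFO message groups (partitionFor is an illustrative stand-in for the broker's internal hashing, not any real API):

```typescript
// Messages with the same key always hash to the same partition, and each
// partition is consumed in order, so events for one key keep their
// relative order even though different keys are processed in parallel.

function partitionFor(key: string, partitionCount: number): number {
  let hash = 0;
  for (const ch of key) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  return hash % partitionCount;
}

// Every transaction for account "acct-42" lands in the same partition:
const p1 = partitionFor("acct-42", 4);
const p2 = partitionFor("acct-42", 4);
// p1 === p2, so "acct-42" events are always consumed in order
// relative to each other.
```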

When to Use Pub/Sub

Event Broadcasting Across Services

An order is placed. The order service publishes an order.placed event. Multiple services react independently:

  • The payment service charges the customer
  • The inventory service reserves stock
  • The email service sends a confirmation
  • The analytics service records the event

Each service has its own subscription. If the email service goes down, the payment and inventory services are unaffected. When the email service recovers, it processes its backlog.

This is the canonical use case for pub/sub. The order service doesn't know (or care) which services are listening. New services can subscribe without modifying the order service.

Microservice Decoupling

In a microservice architecture, services need to react to changes in other services without direct API calls. Pub/sub provides this decoupling:

  • User service publishes user.created events
  • Billing service subscribes to set up the customer's payment profile
  • Notification service subscribes to send a welcome email
  • Search service subscribes to index the new user

Each service has a clear boundary. Services communicate through events rather than synchronous HTTP calls. If one service is slow or down, the others keep running.

Audit Logging and Event Sourcing

Every significant action in your system (user login, payment processed, config changed) is published as an event. An audit service subscribes to all topics and writes every event to an append-only log. This gives you a complete history of everything that happened, which is useful for compliance, debugging, and analytics.

Event sourcing takes this further: the event log becomes the source of truth, and the current state is derived by replaying events.
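A minimal sketch of that derivation, with a hypothetical AccountEvent log reduced into current state:

```typescript
// Event sourcing in miniature: state is never stored directly; it is
// derived by replaying an append-only log from the beginning.

type AccountEvent =
  | { type: "opened"; owner: string }
  | { type: "deposited"; amount: number }
  | { type: "withdrawn"; amount: number };

interface AccountState {
  owner: string;
  balance: number;
}

// Apply one event to the current state.
function apply(state: AccountState, event: AccountEvent): AccountState {
  switch (event.type) {
    case "opened":
      return { ...state, owner: event.owner };
    case "deposited":
      return { ...state, balance: state.balance + event.amount };
    case "withdrawn":
      return { ...state, balance: state.balance - event.amount };
  }
}

// Replaying the full log always reproduces the same state.
function replay(events: AccountEvent[]): AccountState {
  return events.reduce(apply, { owner: "", balance: 0 });
}

const log: AccountEvent[] = [
  { type: "opened", owner: "alice" },
  { type: "deposited", amount: 100 },
  { type: "withdrawn", amount: 30 },
];
replay(log); // → { owner: "alice", balance: 70 }
```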

Real-Time Features

Pub/sub can power real-time features like notifications, live dashboards, and collaborative editing. When data changes, an event is published, and subscribers push updates to connected clients through WebSockets or server-sent events.

Combining Both Patterns

In production systems, you often use both patterns together. This isn't overengineering; it reflects the fact that some communication is "do this task" (queue) and some is "this happened" (pub/sub).

A common architecture:

Order Service → publishes "order.placed" to [Topic]
  → Subscription: Payment Service → processes payment
  → Subscription: Email Service → sends confirmation
  → Subscription: Analytics Service → records event
  → Subscription: Fulfillment Service
      → puts individual items on [Fulfillment Queue]
        → Worker 1: picks item, generates shipping label
        → Worker 2: picks item, generates shipping label
        → Worker 3: picks item, generates shipping label

The pub/sub topic handles fan-out (multiple services need to know about the order). The fulfillment queue handles work distribution (multiple workers process individual items). Each pattern is used where it fits.

Another common combination on AWS is SNS (pub/sub) feeding into multiple SQS queues (one per subscriber). This gives you fan-out at the topic level and reliable, independently-scaled consumption at the queue level.

Building Async Messaging with Encore.ts

Setting up message queues or pub/sub systems typically means provisioning infrastructure, configuring connections, managing credentials, and writing retry logic. For a production setup with dead-letter queues and monitoring, you're looking at significant configuration before you write any business logic.

Encore.ts has built-in Pub/Sub primitives that handle the infrastructure layer. You define topics and subscriptions in your application code, and Encore provisions the underlying infrastructure automatically. Locally, it uses an in-memory implementation for fast development. In the cloud, it provisions Google Cloud Pub/Sub or AWS SNS/SQS depending on your cloud provider.

Defining a Topic

A topic is where publishers send events. You define it with a TypeScript type for the event payload and a delivery guarantee:

import { Topic } from "encore.dev/pubsub";

interface OrderEvent {
  orderId: string;
  userId: string;
  totalAmount: number;
  items: Array<{ productId: string; quantity: number }>;
}

export const orders = new Topic<OrderEvent>("orders", {
  deliveryGuarantee: "at-least-once",
});

The deliveryGuarantee option specifies how the underlying infrastructure handles message delivery. "at-least-once" means every message is delivered to every subscriber at least once (and possibly more than once if there's a failure during acknowledgment). This is the standard guarantee for most pub/sub systems.

Publishing Events

Any service can import the topic and publish events to it:

import { orders } from "../orders/events";

// Inside an API endpoint or business logic
await orders.publish({
  orderId: "order-123",
  userId: "user-456",
  totalAmount: 9999,
  items: [
    { productId: "prod-a", quantity: 2 },
    { productId: "prod-b", quantity: 1 },
  ],
});

The publish call is type-safe. If your event payload doesn't match the OrderEvent interface, TypeScript catches it at compile time.

Creating Subscriptions

Each subscription defines an independent consumer for a topic. Multiple subscriptions on the same topic each receive every message (fan-out pattern):

import { Subscription } from "encore.dev/pubsub";
import { orders } from "../orders/events";

// Payment service subscribes to process payments
export const _ = new Subscription(orders, "process-payment", {
  handler: async (event) => {
    // event is typed as OrderEvent
    await chargeCustomer(event.userId, event.totalAmount);
    await recordPayment(event.orderId, event.totalAmount);
  },
});

import { Subscription } from "encore.dev/pubsub";
import { orders } from "../orders/events";

// Email service subscribes to send confirmations
export const _ = new Subscription(orders, "send-confirmation", {
  handler: async (event) => {
    await sendOrderConfirmationEmail(event.userId, event.orderId, event.items);
  },
});

import { Subscription } from "encore.dev/pubsub";
import { orders } from "../orders/events";

// Analytics service subscribes to record events
export const _ = new Subscription(orders, "record-analytics", {
  handler: async (event) => {
    await trackEvent("order_placed", {
      orderId: event.orderId,
      userId: event.userId,
      amount: event.totalAmount,
      itemCount: event.items.length,
    });
  },
});

Each subscription runs independently. If the email service fails, the payment and analytics subscriptions continue processing. The failed message is retried according to the subscription's retry policy.

Retry Configuration

Encore lets you configure retry behavior per subscription:

export const _ = new Subscription(orders, "process-payment", {
  handler: async (event) => {
    await chargeCustomer(event.userId, event.totalAmount);
  },
  retryPolicy: {
    maxRetries: 5,
    minBackoff: 1,  // seconds
    maxBackoff: 60, // seconds
  },
});

Failed messages are retried with exponential backoff. After all retries are exhausted, the message behavior depends on the underlying cloud provider's dead-letter configuration.
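As a rough sketch of what such a policy typically produces (not necessarily Encore's exact algorithm), the delay doubles from minBackoff on each attempt and is capped at maxBackoff:

```typescript
// Exponential backoff: delay doubles per attempt, clamped to a maximum.
// The parameters mirror the retryPolicy fields above, in seconds.

function backoffDelays(
  maxRetries: number,
  minBackoff: number,
  maxBackoff: number,
): number[] {
  return Array.from({ length: maxRetries }, (_, attempt) =>
    Math.min(minBackoff * 2 ** attempt, maxBackoff),
  );
}

backoffDelays(5, 1, 60); // → [1, 2, 4, 8, 16] seconds
backoffDelays(8, 1, 60); // → [1, 2, 4, 8, 16, 32, 60, 60] (capped at maxBackoff)
```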

What Encore Provisions

When you deploy an Encore application with pub/sub topics, the infrastructure is provisioned automatically based on your cloud provider:

| Environment       | Infrastructure                               |
| ----------------- | -------------------------------------------- |
| Local development | In-memory pub/sub (no external dependencies) |
| AWS               | SNS topics + SQS queues per subscription     |
| GCP               | Google Cloud Pub/Sub topics + subscriptions  |

You don't configure any of this. The same application code runs everywhere. Encore's infrastructure provisioning translates your topic and subscription declarations into the right cloud resources. This declarative approach also makes it easy for AI coding agents to add event-driven patterns to your codebase. An agent can see the existing Topic and Subscription declarations, then add new subscribers or publish events following the established pattern - without needing to configure message brokers or connection strings.

Using Pub/Sub as a Work Queue

While Encore's primitives are called "Pub/Sub," a single topic with a single subscription behaves like a message queue: messages go to one handler, are processed one at a time (or with configured concurrency), and are retried on failure.

import { Topic, Subscription } from "encore.dev/pubsub";

// A "queue" is just a topic with one subscription
interface ResizeTask {
  imageId: string;
  targetWidth: number;
  targetHeight: number;
}

export const resizeQueue = new Topic<ResizeTask>("image-resize", {
  deliveryGuarantee: "at-least-once",
});

export const _ = new Subscription(resizeQueue, "resize-worker", {
  handler: async (task) => {
    await downloadImage(task.imageId);
    await resize(task.imageId, task.targetWidth, task.targetHeight);
    await uploadResized(task.imageId);
  },
  maxConcurrency: 5,
});

This gives you the work queue pattern without a separate system. The same infrastructure, the same monitoring, the same retry behavior.

Choosing the Right Pattern

Here's a decision tree for most backend systems:

"I have work that needs to get done, and I don't care which worker does it." Use a message queue (or a single pub/sub subscription).

"Something happened, and multiple services need to know about it." Use pub/sub.

"I need both." Use pub/sub for the event broadcasting, and let each subscriber handle its work internally. If a subscriber needs to distribute work across multiple workers, it can write to its own internal queue.

"I'm not sure yet." Start with pub/sub. A pub/sub topic with one subscription is functionally a queue. You can add more subscribers later without changing the publisher. Going the other direction (starting with a queue and later needing fan-out) requires restructuring.

Idempotency: The Hidden Requirement

Both message queues and pub/sub systems deliver messages at least once. Network failures, consumer crashes, and timeout-based redelivery all mean your handler might see the same message twice. Your handlers need to be idempotent: processing the same message twice should produce the same result as processing it once.

Common strategies:

  • Deduplication key: Store processed message IDs in a database. Check before processing.
  • Database constraints: Use INSERT ... ON CONFLICT DO NOTHING (backed by a unique constraint) so duplicate inserts are no-ops.
  • Idempotent operations: Design operations so they naturally handle duplicates. Setting a value is idempotent; incrementing a counter is not.

For example, the deduplication-key strategy looks like this in a subscription handler:

export const _ = new Subscription(orders, "process-payment", {
  handler: async (event) => {
    // Idempotent: uses orderId as deduplication key
    const existing = await db.queryRow`
      SELECT id FROM payments WHERE order_id = ${event.orderId}
    `;
    if (existing) return; // Already processed

    await db.exec`
      INSERT INTO payments (order_id, user_id, amount, status)
      VALUES (${event.orderId}, ${event.userId}, ${event.totalAmount}, 'completed')
    `;
    await chargeCustomer(event.userId, event.totalAmount);
  },
});

Conclusion

Message queues distribute work across competing consumers. Pub/sub broadcasts events to independent subscribers. Most production systems use both, often in the same event flow.

For TypeScript backends, the infrastructure setup is usually the hard part. Provisioning SQS queues, configuring SNS topics, managing dead-letter queues, setting up IAM permissions, writing retry logic. This is the same work regardless of your business logic.

Encore's built-in Pub/Sub primitives handle the infrastructure layer so you can focus on the messaging patterns themselves. Define a topic, add subscriptions, and Encore provisions the right cloud resources. One subscription gives you a work queue. Multiple subscriptions give you fan-out. The same code runs locally (in-memory), on AWS (SNS+SQS), or on GCP (Cloud Pub/Sub) without configuration changes.

Get started:

curl -L https://encore.dev/install.sh | bash
encore app create my-app --example=ts/hello-world

Read the Pub/Sub documentation for the complete API reference, including ordering keys, subscription filters, and advanced retry configuration.

Ready to build your next backend?

Encore is the Open Source framework for building robust type-safe distributed systems with declarative infrastructure.