04/17/26

How to Build an AI Agent Backend with TypeScript in 2026

Tool-calling endpoints, conversation storage, and async processing

6 Min Read

AI agents need a backend that can handle tool calls, store conversation history, process tasks asynchronously, and scale without infrastructure overhead. This tutorial builds one using Encore, which handles the database, Pub/Sub, and deployment automatically so you can focus on the agent logic.

What You'll Build

  • A conversation API that stores messages in PostgreSQL
  • Tool-calling endpoints the agent can invoke (web search, data lookup, calculations)
  • Async task processing via Pub/Sub for long-running operations
  • A cron job for cleaning up expired conversations
  • Built-in tracing across all components

Project Setup

Install Encore and create a new app:

Get started

Install the CLI and create an app.

# macOS
$ brew install encoredev/tap/encore
# Windows
$ iwr https://encore.dev/install.ps1 | iex
# Linux
$ curl -L https://encore.dev/install.sh | bash

Or if you prefer to build it from scratch with an AI coding agent:

Prompt: Build me an AI agent backend with Encore. I need a conversation
service with PostgreSQL storage, tool-calling endpoints (web search,
calculator, data lookup), async task processing via Pub/Sub, and a
cron job to clean up expired conversations.

The rest of this tutorial walks through what that produces and how it works.

Conversation Service

The conversation service stores and retrieves chat history. Declare the database and API endpoints in the same file.

// conversation/conversation.ts
import { api } from "encore.dev/api";
import { SQLDatabase } from "encore.dev/storage/sqldb";

// Provisions RDS on AWS or Cloud SQL on GCP with sensible defaults (uses Docker Postgres locally).
// Migrations run automatically on startup.
const db = new SQLDatabase("conversations", { migrations: "./migrations" });

interface Message {
  id: number;
  conversationId: string;
  role: "user" | "assistant" | "tool";
  content: string;
  createdAt: string;
}

interface Conversation {
  id: string;
  messages: Message[];
}

// Each endpoint is automatically documented, traced, and validated based on its TypeScript types.
export const create = api(
  { method: "POST", path: "/conversations", expose: true },
  async (): Promise<{ id: string }> => {
    const id = crypto.randomUUID();
    await db.exec`INSERT INTO conversations (id) VALUES (${id})`;
    return { id };
  }
);

export const addMessage = api(
  { method: "POST", path: "/conversations/:id/messages", expose: true },
  async (req: { id: string; role: "user" | "assistant" | "tool"; content: string }): Promise<Message> => {
    const row = await db.queryRow<Message>`
      INSERT INTO messages (conversation_id, role, content)
      VALUES (${req.id}, ${req.role}, ${req.content})
      RETURNING id, conversation_id as "conversationId", role, content, created_at as "createdAt"
    `;
    // queryRow returns null if no row comes back; surface that as an error.
    if (!row) throw new Error("failed to insert message");
    return row;
  }
);

export const getHistory = api(
  { method: "GET", path: "/conversations/:id", expose: true },
  async ({ id }: { id: string }): Promise<Conversation> => {
    // db.query returns an async iterator; collect the rows into an array.
    const messages: Message[] = [];
    for await (const row of db.query<Message>`
      SELECT id, conversation_id as "conversationId", role, content, created_at as "createdAt"
      FROM messages WHERE conversation_id = ${id}
      ORDER BY created_at ASC
    `) {
      messages.push(row);
    }
    return { id, messages };
  }
);

Create the migration:

-- conversation/migrations/001_create_tables.up.sql
CREATE TABLE conversations (
    id TEXT PRIMARY KEY,
    created_at TIMESTAMPTZ DEFAULT NOW()
);

CREATE TABLE messages (
    id SERIAL PRIMARY KEY,
    conversation_id TEXT REFERENCES conversations(id) ON DELETE CASCADE,
    role TEXT NOT NULL,
    content TEXT NOT NULL,
    created_at TIMESTAMPTZ DEFAULT NOW()
);

CREATE INDEX idx_messages_conversation ON messages(conversation_id, created_at);
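Once history is stored, the agent loop needs it in the shape most LLM chat APIs expect: a bare array of `{ role, content }` objects. A minimal sketch of that mapping, assuming the `Message` shape above (the `toChatMessages` helper name is illustrative, not part of Encore):

```typescript
interface StoredMessage {
  id: number;
  conversationId: string;
  role: "user" | "assistant" | "tool";
  content: string;
  createdAt: string;
}

// Strip storage-only fields so the payload matches a typical
// chat-completion request body.
function toChatMessages(
  messages: StoredMessage[]
): { role: string; content: string }[] {
  return messages.map(({ role, content }) => ({ role, content }));
}
```

Feed the result of `getHistory` through this before each model call so the database columns never leak into the prompt payload.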

Tool-Calling Endpoints

Agents need endpoints they can invoke as tools. Each tool is a typed API endpoint that the agent calls based on the user's request.

// tools/tools.ts
import { api } from "encore.dev/api";

interface SearchResult {
  title: string;
  snippet: string;
  url: string;
}

// Endpoints appear in Encore's service catalog with auto-generated docs and are traced automatically.
export const webSearch = api(
  { method: "POST", path: "/tools/search", expose: true },
  async (req: { query: string }): Promise<{ results: SearchResult[] }> => {
    // Call your preferred search API (Brave, SerpAPI, etc.)
    const response = await fetch(
      `https://api.search.brave.com/res/v1/web/search?q=${encodeURIComponent(req.query)}`,
      { headers: { "X-Subscription-Token": process.env.BRAVE_API_KEY! } }
    );
    if (!response.ok) {
      throw new Error(`search failed: ${response.status}`);
    }
    const data = await response.json();
    return {
      results: (data.web?.results ?? []).slice(0, 5).map((r: any) => ({
        title: r.title,
        snippet: r.description,
        url: r.url,
      })),
    };
  }
);

export const calculate = api(
  { method: "POST", path: "/tools/calculate", expose: true },
  async (req: { expression: string }): Promise<{ result: number }> => {
    // Reject anything that isn't plain arithmetic before evaluating.
    // For production, prefer a real expression parser (e.g. mathjs)
    // over eval-style execution.
    if (!/^[\d\s+\-*/().%]+$/.test(req.expression)) {
      throw new Error("invalid expression");
    }
    const result = Function(`"use strict"; return (${req.expression})`)();
    return { result: Number(result) };
  }
);

export const lookupData = api(
  { method: "POST", path: "/tools/lookup", expose: true },
  async (req: { table: string; id: string }): Promise<{ data: Record<string, unknown> }> => {
    // Query your domain-specific data
    return { data: { id: req.id, source: req.table } };
  }
);

These endpoints appear in Encore's service catalog with auto-generated documentation, which makes it straightforward for the agent framework (LangChain, Vercel AI SDK, or custom) to discover available tools.
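To make that discovery concrete, one option is a small registry that maps each endpoint to the JSON-schema-style definition OpenAI-compatible function-calling APIs expect. This is a hypothetical sketch (the `toolDefinitions` shape and names are assumptions, not generated by Encore):

```typescript
// Illustrative registry: one entry per backend tool endpoint, in the
// JSON-schema parameter format used by function-calling LLM APIs.
const toolDefinitions = [
  {
    name: "web_search",
    description: "Search the web for current information.",
    endpoint: "/tools/search",
    parameters: {
      type: "object",
      properties: { query: { type: "string" } },
      required: ["query"],
    },
  },
  {
    name: "calculate",
    description: "Evaluate an arithmetic expression.",
    endpoint: "/tools/calculate",
    parameters: {
      type: "object",
      properties: { expression: { type: "string" } },
      required: ["expression"],
    },
  },
];

// Resolve a model's tool call to the endpoint path it should hit.
function resolveToolEndpoint(name: string): string | undefined {
  return toolDefinitions.find((t) => t.name === name)?.endpoint;
}
```

Pass `toolDefinitions` to the model as its available tools, then use `resolveToolEndpoint` to dispatch whatever tool call comes back.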

Async Task Processing

Some tool calls take too long for a synchronous response. Use Pub/Sub to process them asynchronously and notify the conversation when they're done.

// tasks/tasks.ts
import { api } from "encore.dev/api";
import { Topic, Subscription } from "encore.dev/pubsub";
import { conversation } from "~encore/clients";

interface TaskRequest {
  conversationId: string;
  toolName: string;
  params: Record<string, unknown>;
}

interface TaskResult {
  conversationId: string;
  result: string;
}

// Provisions SNS+SQS on AWS or Pub/Sub on GCP with sensible defaults (in-memory locally).
const taskQueue = new Topic<TaskRequest>("agent-tasks", {
  deliveryGuarantee: "at-least-once",
});

const resultTopic = new Topic<TaskResult>("task-results", {
  deliveryGuarantee: "at-least-once",
});

export const submitTask = api(
  { method: "POST", path: "/tasks", expose: true },
  async (req: TaskRequest): Promise<{ status: string }> => {
    await taskQueue.publish(req);
    return { status: "queued" };
  }
);

// Each subscription gets its own queue. Messages are processed independently and traced end-to-end.
const _ = new Subscription(taskQueue, "task-processor", {
  handler: async (task) => {
    // Run the long-running tool call
    const result = await runTool(task.toolName, task.params);

    // Type-safe service-to-service call. Encore generates the client and traces the call automatically.
    await conversation.addMessage({
      id: task.conversationId,
      role: "tool",
      content: JSON.stringify(result),
    });

    await resultTopic.publish({
      conversationId: task.conversationId,
      result: JSON.stringify(result),
    });
  },
});

async function runTool(name: string, params: Record<string, unknown>): Promise<unknown> {
  // Route to the appropriate tool implementation. In a real app, prefer the
  // generated service client from "~encore/clients" over a raw fetch so the
  // call is typed and traced.
  switch (name) {
    case "search":
      return await fetch(`http://localhost:4000/tools/search`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(params),
      }).then((r) => r.json());
    default:
      return { error: `Unknown tool: ${name}` };
  }
}
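When does the agent call a tool directly versus submitting it to `/tasks`? One simple policy, sketched below, is to declare which tools are long-running and queue only those. The `LONG_RUNNING_TOOLS` set and the `Route` type are assumptions for illustration, not part of Encore:

```typescript
// Illustrative routing policy: tools expected to exceed the request
// latency budget go through the Pub/Sub task queue; the rest are
// called synchronously.
const LONG_RUNNING_TOOLS = new Set(["search", "report_generation"]);

type Route =
  | { mode: "sync"; path: string }
  | { mode: "async"; path: "/tasks" };

function routeToolCall(toolName: string): Route {
  if (LONG_RUNNING_TOOLS.has(toolName)) {
    return { mode: "async", path: "/tasks" };
  }
  return { mode: "sync", path: `/tools/${toolName}` };
}
```

A duration-threshold policy like this keeps fast tools (the calculator, simple lookups) on the synchronous path while slow ones flow through `submitTask` and come back as `role: "tool"` messages.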

Cleanup Cron Job

Conversations that haven't been active for 24 hours get cleaned up automatically.

// conversation/cleanup.ts
import { api } from "encore.dev/api";
import { CronJob } from "encore.dev/cron";
import { SQLDatabase } from "encore.dev/storage/sqldb";

const db = SQLDatabase.named("conversations");

export const cleanup = api(
  { method: "POST", path: "/conversations/cleanup", expose: false },
  async (): Promise<{ deleted: number }> => {
    // Use RETURNING so we can report how many rows were actually deleted.
    let deleted = 0;
    for await (const _row of db.query`
      DELETE FROM conversations
      WHERE created_at < NOW() - INTERVAL '24 hours'
      AND id NOT IN (
        SELECT DISTINCT conversation_id FROM messages
        WHERE created_at > NOW() - INTERVAL '24 hours'
      )
      RETURNING id
    `) {
      deleted++;
    }
    return { deleted };
  }
);

// Provisions CloudWatch Events on AWS or Cloud Scheduler on GCP.
const _ = new CronJob("conversation-cleanup", {
  title: "Clean up expired conversations",
  schedule: "0 3 * * *",
  endpoint: cleanup,
});
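The SQL above keeps any conversation that has a message within the last 24 hours. The same retention rule expressed as a plain predicate, useful for unit-testing the policy without a database (the `isExpired` helper is illustrative):

```typescript
const RETENTION_MS = 24 * 60 * 60 * 1000;

// A conversation is expired if it was created more than 24h ago AND has
// no message newer than 24h, mirroring the DELETE query above.
function isExpired(
  createdAt: Date,
  lastMessageAt: Date | null,
  now: Date
): boolean {
  const cutoff = now.getTime() - RETENTION_MS;
  if (createdAt.getTime() >= cutoff) return false;
  if (lastMessageAt && lastMessageAt.getTime() >= cutoff) return false;
  return true;
}
```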

Running Locally

encore run

This starts your app with real PostgreSQL, Pub/Sub (in-memory), and distributed tracing. Open http://localhost:9400 to see the local development dashboard with your service architecture, API explorer, and traces.

Deploying to Production

git push encore

Encore Cloud provisions RDS, SNS+SQS, Fargate, and CloudWatch Events in your AWS or GCP account. The same code that runs locally runs in production with production-grade infrastructure.

What You've Built

  • Conversation API with PostgreSQL storage and full message history
  • Tool-calling endpoints that agents can discover and invoke
  • Async task processing via Pub/Sub for long-running operations
  • Automatic cleanup via cron for expired conversations
  • Distributed tracing across all services and tool calls
  • Production deployment to your own AWS or GCP account

All of the infrastructure (database, Pub/Sub, cron, compute) is declared in your application code and provisioned automatically. There are no Dockerfiles, Terraform configs, or CI/CD pipelines to maintain separately.

Next Steps


Ready to build your next backend?

Encore is the Open Source framework for building robust type-safe distributed systems with declarative infrastructure.