If you've used Cursor, Claude Code, or Copilot in a TypeScript backend project, you've probably noticed the pattern. You ask for an endpoint and the agent starts deciding how to structure things: which framework, how to validate input, how to manage database connections, where to put files. It works, but the agent is spending most of its effort on plumbing you didn't ask about.
It doesn't have to be that way. Below is the same prompt run against two different projects with the same model: add an authenticated endpoint that creates orders and stores them in a database.
The difference is the project. On the left the agent had no conventions to follow, so it picked Express, wrote auth middleware from scratch, added Zod validation, set up a pg connection pool, and wired up error handling. On the right the project already had opinions about all of that, so the agent just filled in the business logic.
The problem with TypeScript backends isn't that there's no structure available, it's that there are too many competing structures to choose from. Every layer of the stack has multiple valid options, and AI agents will use a different combination on every prompt:
- HTTP framework? (express, fastify, hono, koa, nestjs?)
- Validation? (zod, joi, class-validator, manual checks?) Often duplicating the TypeScript interfaces sitting right next to them.
- Database access? (prisma, drizzle, knex, raw pg?)
- Auth? (passport, custom middleware, Auth0 SDK?)

The ecosystem also churns fast enough that agents are often trained on yesterday's best practices: Express 4 patterns in an Express 5 project, outdated Prisma schema syntax, deprecated middleware packages. Each prompt produces a slightly different stack, and the inconsistency compounds across a codebase.
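The validation layer shows the duplication problem most clearly: without a framework that derives validation from types, an agent restates the interface by hand. A minimal sketch of that duplication (the type guard below is a hypothetical illustration, not taken from any of the libraries above):

```typescript
// A request shape the project already defines...
interface CreateOrderRequest {
  customerId: string;
  total: number;
}

// ...and the hand-rolled runtime check an agent writes anyway,
// restating every field the interface already declares.
function isCreateOrderRequest(v: unknown): v is CreateOrderRequest {
  if (typeof v !== "object" || v === null) return false;
  const o = v as Record<string, unknown>;
  return typeof o.customerId === "string" && typeof o.total === "number";
}
```

Every change to the interface now needs a matching change to the guard (or the zod/joi schema), and nothing enforces that they stay in sync.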
Three attempts at the same endpoint: Express with pg, Fastify with Prisma, Hono with Drizzle. All valid TypeScript, yet none of it consistent. This is what fragmentation looks like when an agent has to pick a stack from scratch every time.
Give the project typed APIs, declared infrastructure, and consistent service structure, and the agent stops inventing architecture and starts writing business logic.
Encore.ts is a TypeScript framework built around this idea. APIs are typed functions with a single declaration:
```typescript
import { api } from "encore.dev/api";

interface CreateOrderRequest {
  customerId: string;
  total: number;
}

interface CreateOrderResponse {
  orderId: string;
}

export const createOrder = api(
  { expose: true, auth: true, method: "POST", path: "/orders" },
  async (req: CreateOrderRequest): Promise<CreateOrderResponse> => {
    // Business logic here
    return { orderId: "" };
  },
);
```
Infrastructure like databases and Pub/Sub topics is declared directly in TypeScript:
```typescript
// Declare a database. Encore provisions it locally and in the cloud.
import { SQLDatabase } from "encore.dev/storage/sqldb";

const db = new SQLDatabase("orders", { migrations: "./migrations" });

// Declare a Pub/Sub topic. Encore handles creation and subscriptions.
import { Topic } from "encore.dev/pubsub";

interface OrderCreatedEvent {
  orderId: string;
}

const orderCreated = new Topic<OrderCreatedEvent>("order-created", {
  deliveryGuarantee: "at-least-once",
});
```
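Subscriptions follow the same declarative pattern. A sketch of a handler for the topic above, assuming an `OrderCreatedEvent` with an `orderId` field (the subscription name and handler body are illustrative):

```typescript
import { Subscription } from "encore.dev/pubsub";

// Process each order-created event. With at-least-once delivery the
// handler can run more than once per event, so it should be idempotent.
new Subscription(orderCreated, "send-confirmation", {
  handler: async (event: OrderCreatedEvent) => {
    // e.g. send a confirmation email for event.orderId
  },
});
```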
When the agent sees these declarations in the codebase it follows them, and the structural decisions are already made.
In practice, that looks like this:
The agent reads the existing service structure, sees how APIs and infrastructure are declared, and writes an endpoint that follows the same patterns, adding to what's already there rather than reinventing it.
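Concretely, the filled-in handler might look like this sketch, reusing the `db` and `orderCreated` declarations shown earlier (the table schema, SQL, and ID generation are illustrative assumptions, not from the original example):

```typescript
export const createOrder = api(
  { expose: true, auth: true, method: "POST", path: "/orders" },
  async (req: CreateOrderRequest): Promise<CreateOrderResponse> => {
    const orderId = crypto.randomUUID();
    // Persist the order using the database declared above.
    await db.exec`
      INSERT INTO orders (id, customer_id, total)
      VALUES (${orderId}, ${req.customerId}, ${req.total})
    `;
    // Notify downstream services via the Pub/Sub topic.
    await orderCreated.publish({ orderId });
    return { orderId };
  },
);
```

The agent's contribution is only the body: the routing, auth requirement, validation, and infrastructure were all decided by the declarations already in the project.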
Conventions handle code structure, but agents can do a lot more when they also understand the running application.
Encore ships with an MCP (Model Context Protocol) server that gives AI agents structured access to the live system. Running encore mcp start exposes service architecture, database schemas, distributed traces, infrastructure state, live API calls, and framework docs as structured data the agent can query.
Schema access means generated queries match your actual tables, trace access means the agent debugs with real request data, and API access means it can call endpoints and verify its own work.
For example, given the prompt: "Add an endpoint that publishes to the order-created topic, call it, and verify the subscription handler processes the message correctly by checking the traces."
The agent implements the endpoint, calls it, and uses MCP to fetch and verify the traces. All without leaving the editor.
Encore works with any AI agent or editor that supports rules files or MCP. Claude Code, Cursor, Windsurf, Copilot, Zed, you name it. The setup is three steps: install Encore, generate AI rules for your editor, and start the MCP server so the agent can see your application's architecture, schemas, and traces.
```bash
# Install Encore
brew install encoredev/tap/encore              # macOS
curl -L https://encore.dev/install.sh | bash   # Linux
iwr https://encore.dev/install.ps1 | iex       # Windows

# Create your app
encore app create my-app --lang=ts

# Generate AI config for Cursor, Claude Code, VS Code, or Zed
encore llm-rules init

# Start the MCP server for full application context
encore mcp start
```
Open the project in your editor and start prompting. The AI rules give the agent Encore's conventions, and the MCP server gives it live context about your running app.
TypeScript has more backend frameworks, ORMs, and validation libraries than any other language, and the ecosystem adds new options faster than old ones fade. Agents are good at filling in the rest of the owl (the business logic, the queries, the wiring) once the overall shape is clear, but they can't navigate that fragmentation on their own.
Give them the structure and they write code you'd actually keep.
Want to see it in action? We'd love to show you how Encore works with your AI tools of choice. Book a 1:1 intro, no pressure, just a conversation.


