
Encore ships with an MCP server that gives your AI agent access to your running application: database schemas, the service graph, distributed traces, pub/sub topology, cache configuration. The agent can discover your API surface before writing an endpoint, pull actual trace spans to verify a request executed correctly, or query the live pub/sub topology instead of searching through files.
MCP (Model Context Protocol) is an open standard that lets AI tools call into external systems through typed tool calls. Encore's MCP server connects to any MCP-compatible editor (Cursor, Claude Code, GitHub Copilot) and exposes the same structured data you see in the local dashboard. The static analyzer already builds a complete picture of your application at compile time. MCP makes that picture available to the agent.
MCP is a client-server protocol where your AI tool acts as the client and an MCP server runs locally alongside your app. The server exposes a set of tools, each being a function the model can call with typed parameters. The model decides which tools to call based on the task at hand, and gets back structured data it can reason about.
Encore's MCP server exposes 19 tools organized into nine areas: services and APIs, databases, traces, pub/sub, cache and storage, infrastructure, source code, documentation, and testing. The practical difference is that the agent stops guessing. It knows where to put a new endpoint because it can see the service boundaries. It writes correct queries because it has the actual schema. It catches bugs because it can read traces. The rest of this post walks through each area.
Most of the mistakes agents make when writing backend code come from not knowing the existing structure. They put handlers in the wrong service, use the wrong auth pattern, or miss naming conventions. When the agent calls get_services, it gets the full picture: every service, endpoint, HTTP method and path, request and response types, auth configuration, and service dependencies:
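A trimmed sketch of what a get_services response might contain — the exact shape may differ, and the service, endpoint, and type names here are hypothetical:

```json
{
  "services": [
    {
      "name": "orders",
      "endpoints": [
        {
          "name": "CreateOrder",
          "method": "POST",
          "path": "/orders",
          "request": "CreateOrderParams",
          "response": "Order",
          "auth": true
        }
      ],
      "dependencies": ["payments", "inventory"]
    }
  ]
}
```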
This is the same information you see in the Encore dashboard's service catalog. The agent doesn't need to scan import statements or guess which functions are endpoints. It also has access to get_auth_handlers for auth configuration and get_middleware for middleware details.
When it adds a new endpoint, it knows to put it in the right service, use the same auth pattern as the existing endpoints, and match the naming conventions already in place.
Wrong column names and bad type assumptions are the other common source of agent mistakes. The agent reads a query string somewhere in your code, infers that there's a total column, and writes SELECT total FROM orders. The column is actually called total_cents and it's an INTEGER, not a DECIMAL. With get_databases, the agent queries the actual running schema:
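A hypothetical slice of a get_databases response for the orders example above (field names and shape are illustrative, not the exact tool output):

```json
{
  "databases": [
    {
      "name": "orders",
      "tables": [
        {
          "name": "orders",
          "columns": [
            { "name": "id", "type": "BIGSERIAL", "primaryKey": true },
            { "name": "status", "type": "TEXT", "default": "'pending'" },
            { "name": "total_cents", "type": "INTEGER", "nullable": false }
          ]
        },
        {
          "name": "order_items",
          "columns": [
            { "name": "order_id", "type": "BIGINT", "references": "orders.id" },
            { "name": "quantity", "type": "INTEGER", "nullable": false }
          ]
        }
      ]
    }
  ]
}
```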
With the actual schema, the agent knows total_cents is an INTEGER, not a DECIMAL. It knows status defaults to 'pending'. It sees the foreign key from order_items.order_id to orders.id and writes a proper JOIN instead of two separate queries.
The agent also has query_database to run SQL against your local database. If you ask it to "write a migration that adds a shipped_at column to orders," it can check the current schema, write the migration, run it, then query the table to verify the column exists.
This works because Encore provisions real Postgres locally. The schema the agent sees is the same schema your code runs against, after your latest migration.
An agent can write an endpoint and call it, but a 200 response doesn't tell you whether it did the right thing internally. Maybe auth didn't run. Maybe it queried the wrong table. Maybe there's an N+1 pattern you won't notice until production. Encore's runtime traces every operation automatically: API calls, database queries, service-to-service calls, pub/sub publishes, cache operations, outbound HTTP. The agent can pull any of these traces through MCP:
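A simplified sketch of the kind of span data the agent gets back from a trace (span types and fields are illustrative; real output carries more detail, including request and response bodies):

```json
{
  "traceId": "…",
  "spans": [
    { "type": "api_call", "endpoint": "orders.CreateOrder", "durationMs": 142 },
    { "type": "auth", "handler": "auth.Handler", "durationMs": 3 },
    { "type": "db_query", "query": "INSERT INTO orders …", "durationMs": 12 },
    { "type": "rpc_call", "target": "payments.Charge", "durationMs": 98 },
    { "type": "pubsub_publish", "topic": "order-created", "durationMs": 4 }
  ]
}
```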
After the agent writes and tests an endpoint, telling it to "check the trace" gives it the full picture: whether auth ran, which tables were queried, how long the payment service took, whether the pub/sub event was published. If something looks off (wrong table, auth didn't run, N+1 query pattern) it shows up in the trace data.
The agent can also verify its own work this way. Write an endpoint, call it with call_endpoint, pull the trace with get_trace_spans, confirm everything executed as expected.
Pub/sub wiring is spread across the codebase in a way that's hard to follow even for humans. A topic is declared in one file, published to from a different service, and subscribed to in three more. Without structured access, the agent would need to grep through your entire project and piece together the relationships. With get_pubsub, it gets the full topology in one call:
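A hypothetical get_pubsub response for a single topic, trimmed for readability (the real output's field names may differ):

```json
{
  "topics": [
    {
      "name": "order-created",
      "messageType": {
        "orderId": "string",
        "customerId": "string",
        "totalCents": "number"
      },
      "deliveryGuarantee": "at-least-once",
      "publishers": ["orders"],
      "subscriptions": [
        { "name": "send-confirmation-email", "service": "notifications" },
        { "name": "update-inventory", "service": "inventory" }
      ]
    }
  ]
}
```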
The response includes message types, delivery guarantees, which services publish, and every subscription with its handler. If you ask it to "add a subscriber that sends a Slack notification when an order is created," it already knows the topic name, the message shape (orderId, customerId, totalCents), and the existing subscribers.
Here's the full set of tools the MCP server exposes:
| Category | Tools | What the agent gets |
|---|---|---|
| Services & APIs | get_services, get_auth_handlers, get_middleware | Endpoints, methods, paths, request/response types, auth config, dependencies |
| Databases | get_databases, query_database | Table schemas, column types, keys, relationships. Run SQL against local Postgres |
| Traces | get_trace_spans, get_traces | Full distributed traces with timing, nesting, request/response bodies |
| Pub/Sub | get_pubsub | Topics, publishers, subscribers, delivery guarantees, message types |
| Cache & Storage | get_cache_keyspaces, get_storage_buckets, get_objects | Cache cluster config, keyspace definitions, object storage buckets and contents |
| Infrastructure | get_cronjobs, get_metrics, get_secrets, get_metadata | Scheduled tasks, application metrics, secret names (not values), full app metadata |
| Source Code | get_src_files | Retrieve source file contents from the application |
| Docs | search_docs, get_docs | Search and read the Encore documentation directly |
| Testing | call_endpoint | Call any endpoint in the running application |
These are prompts you can give an agent in Cursor or Claude Code with the MCP server connected. Each one triggers a multi-step workflow where the agent uses MCP tools to discover context, write code, and verify the result.
"The /orders endpoint is slow. Figure out why."
The agent calls get_traces to find recent traces for that endpoint, pulls the spans with get_trace_spans, and reads the timing breakdown. It might find that a database query is missing an index, or that a downstream service call is taking longer than expected. It can then write the fix and re-test.
"Add an endpoint that returns order history for the authenticated user, then call it and verify the trace."
The agent calls get_services to find the orders service and its auth pattern, calls get_databases to get the schema, writes the endpoint matching existing conventions, calls it with call_endpoint, then pulls the trace with get_trace_spans to confirm auth ran and the right query executed.
"Add a subscriber to the order-created topic that sends a Slack notification."
The agent calls get_pubsub to see the topic, its message type, and existing subscribers. It writes a new subscriber that matches the message shape, puts it in the right service, and looks up the Pub/Sub docs with search_docs if it needs to check delivery configuration.
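A minimal Encore.ts sketch of the subscriber the agent might produce. The topic export path and the postToSlack helper are assumptions for illustration; the message fields come from the topology get_pubsub reported:

```typescript
import { Subscription } from "encore.dev/pubsub";
// Hypothetical import: wherever the orders service exports its topic.
import { orderCreated } from "../orders/events";

// Hypothetical helper: post a message to a Slack webhook.
async function postToSlack(text: string): Promise<void> {
  /* fetch(...) against your Slack incoming-webhook URL */
}

const _ = new Subscription(orderCreated, "slack-notification", {
  handler: async (event) => {
    // Field names match the topic's message type (orderId, customerId, totalCents).
    await postToSlack(
      `New order ${event.orderId} from ${event.customerId}: ${event.totalCents} cents`
    );
  },
});
```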
"Write a migration that adds a shipping_address column to orders, run it, and verify."
The agent calls get_databases to see the current schema, writes the migration SQL, runs it through Encore, then calls query_database to confirm the column exists with the right type and default.
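The migration and the verification query might look like this — the column type is an assumption, and the check runs through query_database against the local Postgres:

```sql
-- Migration (type is an assumption; the agent picks it after reading the schema)
ALTER TABLE orders ADD COLUMN shipping_address TEXT;

-- Verification via query_database
SELECT column_name, data_type, column_default
FROM information_schema.columns
WHERE table_name = 'orders' AND column_name = 'shipping_address';
```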
A few things that make a difference:
**Leave room for the agent to explore.** "Add a GetOrderHistory endpoint" works better than "add a GET endpoint at /orders/history that queries the orders table and returns the id, status, and created_at columns with auth." The first lets the agent discover the schema, auth patterns, and service placement through MCP. It writes code that fits your system instead of code that matches your description of it.

**Ask it to verify with traces.** After the agent writes and calls an endpoint, "check the trace and make sure it looks right" catches things that are hard to spot in code review: wrong tables queried, auth not running, N+1 queries, unexpected service calls.

**Use it for architecture questions.** "Which services depend on payments?" or "what subscribes to the order-created topic?" are questions the agent answers instantly through MCP. Useful during code reviews, onboarding, or when planning how a change ripples through the system.

**Combine with rules files and skills.** MCP gives the agent live context about your running system. Rules files give it conventions about how you want code written. For a more structured starting point, the Encore skills repo has ready-made skill files for common tasks like creating APIs, setting up databases, auth, and testing. encore llm-rules init generates both MCP configuration and rules files.
The MCP server ships with the Encore CLI — no extra packages to install. New projects created with encore app create get this configured automatically. For existing projects, run encore llm-rules init to generate both the MCP configuration and rules files for your editor.
You can also configure it manually. In Cursor, add a .cursor/mcp.json at the project root:
```json
{
  "mcpServers": {
    "encore-mcp": {
      "command": "encore",
      "args": ["mcp", "run", "--app=your-app-id"]
    }
  }
}
```
Restart Cursor, and you'll see the Encore tools listed in the MCP panel (Settings → MCP). The agent picks them up automatically when it decides it needs context about your application.
In Claude Code, register the server from your project directory:
```shell
claude mcp add --transport stdio encore-mcp -- encore mcp run --app=your-app-id
```
Claude Code discovers the tools on startup and will call them when a task benefits from live application context — no prompt engineering required.
Both setups connect to your local dev environment. The MCP server sees the same databases, traces, and services you get when you run encore run and open the dashboard at localhost:9400.
Encore is an open-source backend framework where infrastructure is declared in TypeScript or Go and provisioned from the code. The MCP server is built in, and the project is on GitHub.


