MCP (Model Context Protocol) is an open protocol that standardizes how AI models connect to external tools and data sources. It was created by Anthropic and is now supported by Claude Desktop, Cursor, VS Code, and a growing number of other tools.
Before MCP, connecting an AI model to your database or internal APIs meant writing custom integration code for each model provider and each IDE separately. MCP replaces that with a single protocol that works across all of them.
As a practical example: say you're working in Cursor and you ask "show me the slowest API endpoints from the last hour." Without MCP, the model can only suggest that you go check your monitoring dashboard. With an MCP server connected to your observability stack, Cursor can actually query your traces, pull the data, and show you the results inline. You write a prompt, the model calls a tool through MCP, and the answer comes back with real data from your system.
MCP defines a communication protocol between two parties: a client (the AI application) and a server (a program that exposes capabilities).
The protocol uses JSON-RPC 2.0 for message passing. A client connects to one or more servers, discovers what capabilities they offer, and uses them during conversations or code generation.
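Concretely, discovery is just another JSON-RPC exchange. Here's a sketch of a `tools/list` call, showing the client's request followed by the server's response (the tool shown is a stand-in; real servers return their own definitions):

```json
{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

{"jsonrpc": "2.0", "id": 1, "result": {
  "tools": [{
    "name": "query_database",
    "description": "Run a read-only SQL query",
    "inputSchema": {
      "type": "object",
      "properties": { "query": { "type": "string" } },
      "required": ["query"]
    }
  }]
}}
```

The client feeds these definitions to the AI model, which then decides when a tool is worth calling.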
An MCP server can expose three types of capabilities: tools, resources, and prompts.
Tools are functions the AI model can call. They take structured input and return structured output. A tool might execute a database query, create a GitHub issue, or fetch data from an API.
```json
{
  "name": "query_database",
  "description": "Run a read-only SQL query against the production database",
  "inputSchema": {
    "type": "object",
    "properties": {
      "query": { "type": "string", "description": "SQL query to execute" }
    },
    "required": ["query"]
  }
}
```
When the AI model decides it needs data from the database, it calls this tool with a SQL query, and the MCP server executes it and returns the results.
Resources are data the AI can read. They work like files: each resource has a URI and returns content. A resource might expose a configuration file, a log output, or an API schema.
```json
{
  "uri": "config://app/database",
  "name": "Database Configuration",
  "description": "Current database connection settings and schema",
  "mimeType": "application/json"
}
```
Resources are read-only and help the AI understand the current state of a system without making changes.
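The read itself is a `resources/read` request. A sketch using the resource above (the returned text is illustrative):

```json
{"jsonrpc": "2.0", "id": 2, "method": "resources/read", "params": {"uri": "config://app/database"}}

{"jsonrpc": "2.0", "id": 2, "result": {
  "contents": [{
    "uri": "config://app/database",
    "mimeType": "application/json",
    "text": "{\"host\": \"localhost\", \"port\": 5432, \"database\": \"myapp\"}"
  }]
}}
```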
Prompts are reusable templates that help the AI handle specific tasks. A server can expose pre-built prompts for common operations like "analyze this error log" or "review this pull request."
```json
{
  "name": "analyze_error",
  "description": "Analyze an error log and suggest fixes",
  "arguments": [
    {
      "name": "error_log",
      "description": "The error log content to analyze",
      "required": true
    }
  ]
}
```
Prompts are optional but useful for guiding the AI toward effective patterns when interacting with your specific tools.
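Clients fetch a prompt with `prompts/get`, passing its arguments; the server returns rendered messages ready to hand to the model. A sketch (the rendered text is server-defined, and the error log here is made up):

```json
{"jsonrpc": "2.0", "id": 3, "method": "prompts/get", "params": {
  "name": "analyze_error",
  "arguments": {"error_log": "TypeError: user is undefined"}
}}

{"jsonrpc": "2.0", "id": 3, "result": {
  "messages": [{
    "role": "user",
    "content": {"type": "text", "text": "Analyze the following error log and suggest fixes:\n\nTypeError: user is undefined"}
  }]
}}
```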
An MCP session follows a straightforward lifecycle:
1. Connection: The client connects to the server. MCP supports two transport mechanisms: stdio, where the client launches the server as a local subprocess and communicates over standard input/output, and HTTP, for servers running as separate or remote services.
2. Initialization: The client and server exchange capabilities. The client says which protocol version it supports; the server responds with its version and the capability types it offers (tools, resources, prompts). The client then requests the full lists as needed.
3. Operation: The AI model interacts with the server's capabilities as needed during a conversation. The client sends requests (call this tool, read this resource), and the server responds.
4. Shutdown: Either side can close the connection gracefully.
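The initialization step looks like this on the wire (a sketch; the protocol version string identifies the spec revision the client targets, and the final line is the client's confirmation notification):

```json
{"jsonrpc": "2.0", "id": 1, "method": "initialize", "params": {
  "protocolVersion": "2024-11-05",
  "capabilities": {},
  "clientInfo": {"name": "example-client", "version": "1.0.0"}
}}

{"jsonrpc": "2.0", "id": 1, "result": {
  "protocolVersion": "2024-11-05",
  "capabilities": {"tools": {}, "resources": {}},
  "serverInfo": {"name": "postgres-readonly", "version": "1.0.0"}
}}

{"jsonrpc": "2.0", "method": "notifications/initialized"}
```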
Here's what the message flow looks like for a tool call:
Client → Server: {"jsonrpc": "2.0", "method": "tools/call", "params": {"name": "query_database", "arguments": {"query": "SELECT count(*) FROM users"}}, "id": 1}
Server → Client: {"jsonrpc": "2.0", "result": {"content": [{"type": "text", "text": "Count: 42,531"}]}, "id": 1}
The AI model sees the tool definition, decides when to use it, formulates the input, and processes the output. The MCP server handles the actual execution.
Understanding the server/client distinction is important because it determines where code runs and who is responsible for what.
An MCP server is a program that wraps some capability and exposes it through the MCP protocol. Servers are typically small standalone programs that run locally and translate MCP requests into operations on the underlying system (a database, an API, a service).
There are MCP servers for most common development tools: databases (Postgres, MySQL, SQLite), version control (GitHub, GitLab), infrastructure (AWS, Kubernetes), monitoring (Datadog, Sentry), and many more. The community maintains a growing registry at mcp.so.
An MCP client is the AI application that connects to servers and uses their capabilities. The client handles connecting to servers, discovering their tools, resources, and prompts, presenting those capabilities to the AI model, and routing requests and responses between the model and each server.
The major AI coding tools all support MCP as clients: Claude Desktop, Cursor, VS Code with Copilot, Windsurf, Zed, and others. When you configure an MCP server in one of these tools, it shows up as available tools the AI can use during your session.
Before MCP, if you wanted an AI tool to interact with your internal API, you had to build a separate integration for each AI client. A Cursor extension, a VS Code extension, a ChatGPT plugin, each with its own format and lifecycle.
With MCP, you build one server, and it works everywhere. Any MCP-compatible client can connect to it. Build an MCP server for your internal deployment system, and it works in Claude Desktop, Cursor, and any other MCP client without changes.
MCP servers typically run on your machine. Your database credentials, API keys, and data stay local. The AI model sends structured requests to the MCP server, and the server handles the actual system access. This is fundamentally different from cloud-based plugins where your credentials are sent to a third-party server.
For enterprise environments, this matters. Compliance teams are more comfortable with a model that queries data through a local intermediary than one that needs direct access to production systems.
A single client can connect to multiple MCP servers simultaneously. You might have one server for your database, another for GitHub, and a third for your monitoring stack.
The AI model sees all the tools from all connected servers and can use them together in a single conversation. "Check the error rate in the monitoring dashboard, look at the relevant code in GitHub, and suggest a fix" becomes possible when the model has access to all three systems.
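A multi-server setup is just multiple entries in the client's configuration. A hypothetical Claude Desktop config (the server package names, arguments, and environment variables here are illustrative; check each server's README for the exact invocation):

```json
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres", "postgresql://localhost:5432/myapp"]
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "<your token>" }
    }
  }
}
```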
MCP tool definitions use JSON Schema for input validation. This means the AI model knows exactly what parameters a tool expects, what types they should be, and which ones are required. The server validates inputs before execution. This reduces hallucinated tool calls and catches errors before they reach your systems.
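As an illustration of what that validation involves, here is a minimal checker for flat object schemas like the one shown earlier (a sketch, not the SDK's validator; real implementations use a full JSON Schema library and handle nested objects, arrays, and formats):

```typescript
// A narrow slice of JSON Schema: a flat object with typed properties.
type FlatSchema = {
  type: "object";
  properties: Record<string, { type: string }>;
  required?: string[];
};

// Return a list of validation errors; an empty list means the args are valid.
// Only primitive types ("string", "number", "boolean") are handled here.
function validateArgs(schema: FlatSchema, args: Record<string, unknown>): string[] {
  const errors: string[] = [];
  for (const key of schema.required ?? []) {
    if (!(key in args)) errors.push(`missing required property: ${key}`);
  }
  for (const [key, value] of Object.entries(args)) {
    const prop = schema.properties[key];
    if (!prop) {
      errors.push(`unexpected property: ${key}`);
    } else if (typeof value !== prop.type) {
      errors.push(`property ${key} should be ${prop.type}, got ${typeof value}`);
    }
  }
  return errors;
}

const schema: FlatSchema = {
  type: "object",
  properties: { query: { type: "string" } },
  required: ["query"],
};

console.log(validateArgs(schema, { query: "SELECT 1" })); // → []
console.log(validateArgs(schema, { limit: 10 }));         // → missing "query", unexpected "limit"
```

A server runs this kind of check before executing anything, so a malformed tool call comes back as a structured error the model can correct, rather than reaching your systems.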
Here's a complete MCP server in TypeScript that provides read access to a PostgreSQL database:
```typescript
import { McpServer, ResourceTemplate } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";
import pg from "pg";

const pool = new pg.Pool({
  connectionString: process.env.DATABASE_URL,
});

const server = new McpServer({
  name: "postgres-readonly",
  version: "1.0.0",
});

// Expose a tool for running read-only queries
server.tool(
  "query",
  "Run a read-only SQL query against the database",
  { sql: z.string().describe("SQL SELECT query to execute") },
  async ({ sql }) => {
    // Block non-SELECT statements (a simple guard, not a full SQL parser)
    const normalized = sql.trim().toLowerCase();
    if (!normalized.startsWith("select")) {
      return {
        content: [{ type: "text", text: "Error: Only SELECT queries are allowed" }],
        isError: true,
      };
    }
    try {
      const result = await pool.query(sql);
      return {
        content: [{
          type: "text",
          text: JSON.stringify(result.rows, null, 2),
        }],
      };
    } catch (err) {
      const message = err instanceof Error ? err.message : String(err);
      return {
        content: [{ type: "text", text: `Query error: ${message}` }],
        isError: true,
      };
    }
  }
);

// Expose the database schema as a resource
server.resource(
  "schema",
  new ResourceTemplate("schema://tables/{table}", { list: undefined }),
  async (uri, { table }) => {
    const result = await pool.query(
      `SELECT column_name, data_type, is_nullable
       FROM information_schema.columns
       WHERE table_name = $1
       ORDER BY ordinal_position`,
      [table]
    );
    return {
      contents: [{
        uri: uri.href,
        mimeType: "application/json",
        text: JSON.stringify(result.rows, null, 2),
      }],
    };
  }
);

// Start the server over stdio
const transport = new StdioServerTransport();
await server.connect(transport);
```
To use this server with Claude Desktop, you'd add it to your configuration:
```json
{
  "mcpServers": {
    "my-database": {
      "command": "npx",
      "args": ["tsx", "/path/to/server.ts"],
      "env": {
        "DATABASE_URL": "postgresql://localhost:5432/myapp"
      }
    }
  }
}
```
Now when you chat with Claude, it can query your database and read table schemas. You can ask "What are the most common error types in the last hour?" and Claude will write a SQL query, execute it through the MCP server, and interpret the results.
A few things to keep in mind when building MCP servers: restrict destructive operations (the example above only permits SELECT statements), validate inputs before touching real systems, return failures as results with isError set so the model can see and recover from them, and keep tool descriptions precise, since they are all the model has when deciding whether and how to call a tool.
The usefulness of an MCP server depends on how much it knows about your system. A generic Postgres MCP server gives the AI raw SQL access. An MCP server that understands your service architecture, API schemas, database schemas, and request traces gives it significantly more to work with.
Encore.ts ships with a built-in MCP server that exposes all of this. Because Encore's framework uses typed constructs for services, APIs, databases, and Pub/Sub topics, the MCP server can expose your entire backend architecture to AI tools without any configuration.
Enable the Encore MCP server in your AI tool's config. For Claude Desktop:
```json
{
  "mcpServers": {
    "encore": {
      "command": "encore",
      "args": ["mcp"]
    }
  }
}
```
For Cursor, add the same config in your MCP settings. That's it. No API keys, no setup scripts. The MCP server reads your Encore project and exposes its architecture.
Once connected, your AI tool gets access to your service graph, your API endpoints and their schemas, your database schemas, and your distributed traces.
Say you have an Encore backend with an orders service, a payments service, and a notifications service. You're working in Cursor and you type:
Add a refund endpoint to the orders service that reverses the payment and notifies the customer
Because the MCP server exposes your architecture, the AI knows that orders already has a database with an orders table, that payments has a charge endpoint it can reference, and that notifications subscribes to events on specific topics. The generated code follows your existing patterns, uses your actual database schema, and publishes to the right topics.
Without MCP, the AI would guess at your project structure and produce code that might not match your actual services or database.
You can also ask questions about your running system: "Which endpoints had the highest error rate today?" or "Show me the trace for request X." The AI queries your distributed traces through MCP and gives you real data.
For the full setup guide, see Encore's MCP documentation. For a broader comparison of how frameworks support AI-assisted development, see the best frameworks for AI-assisted development.
MCP adoption has grown rapidly since Anthropic open-sourced the specification. As of early 2026, the ecosystem includes official and community-maintained servers for most common development tools, SDKs in multiple languages, and client support in every major AI coding tool.
The community maintains directories of available servers at mcp.so and on GitHub. Most servers are open source and can be adapted for specific needs.
For most common integrations (databases, GitHub, Slack), a pre-built MCP server exists and works well. Building a custom server makes sense when you need to expose internal systems, proprietary APIs, or domain-specific workflows that no existing server covers.
The fastest way to start using MCP is through an AI client that already supports it: Claude Desktop, Cursor, VS Code with Copilot, Windsurf, or Zed.
If you're already using Encore.ts, MCP is built in. Run encore mcp or add it to your AI tool's config as shown in the Encore.ts section above. Your service graph, APIs, databases, and traces are exposed automatically. See the Encore MCP docs for the full setup guide.
To build your own MCP server:
Install the SDK: npm install @modelcontextprotocol/sdk for TypeScript, or pip install mcp for Python. The official documentation is at modelcontextprotocol.io. Anthropic's TypeScript SDK is at github.com/modelcontextprotocol/typescript-sdk, and the Python SDK is at github.com/modelcontextprotocol/python-sdk.
The MCP Inspector is a developer tool for testing MCP servers without needing a full AI client. It connects to your server, lists capabilities, and lets you call tools and read resources interactively. It's useful for debugging during development.
MCP is still evolving. A few directions to watch: standardized authorization for remote servers (the spec now defines an OAuth-based flow), an official server registry, and client-side capabilities like sampling, which lets servers request completions from the client's model.
MCP solves a real problem: AI models need to interact with external systems, and before MCP, every integration was custom. The protocol standardizes that interaction, making it possible to build one integration that works with any MCP-compatible AI client.
For backend developers, MCP is particularly relevant because it determines how well AI tools can understand and work with your systems. Frameworks that provide machine-readable architecture metadata, like Encore.ts, enable richer AI interactions because the MCP server has more context to expose.
If you're building tools for AI agents, MCP is the interface to target. If you're using AI for development, enabling MCP servers for your stack gives the AI direct access to your systems instead of relying on copy-paste.
Explore Encore's MCP integration