If you've used Cursor, Claude Code, or Copilot in a Go project, you've probably noticed the output is verbose. You ask for an endpoint and get 150 lines of router setup, middleware wiring, connection pooling, and JSON marshaling before anything resembling business logic shows up. It works, but most of that code is the agent making decisions about plumbing you didn't ask about.
It doesn't have to be that way. Below is the same prompt run against two different projects. Same model, same ask: add an authenticated endpoint that creates orders, stores them in a database, and publishes to an event topic.
The difference is the project, not the model. On the left, the agent had no conventions to follow, so it picked a router, wrote auth middleware from scratch, set up a connection pool, and wired up JSON encoding. On the right, the project already had opinions about all of that, so the agent just wrote the 30 lines that matter.
Go has strong opinions about a lot of things, but how you structure a backend isn't one of them. That's usually fine for developers, but AI agents are the inverse: great at filling in the details once a structure exists, bad at deciding what that structure should be. Without conventions, every prompt forces the agent to make the kind of decisions it handles worst:
- Which router? (chi, gorilla/mux, httprouter, net/http with Go 1.22 patterns?)
- Which migration tool? (golang-migrate? goose? Raw SQL files?)
- What about observability? (otel? jaeger? Custom middleware?)

Each of those leads to more choices about file layout, error handling, and testing. The agent picks something every time. Just not the same thing.
Three attempts at the same endpoint. Three different routers, three different error handling strategies, three different architectures. All valid Go. None of it consistent.
Give the project typed APIs, declared infrastructure, and consistent service structure, and the agent stops filling in blanks. It writes business logic.
Encore.go is a Go framework built around this idea. APIs are typed request/response structs with one annotation:
```go
type CreateOrderRequest struct {
	CustomerID string      `json:"customer_id"`
	Items      []OrderItem `json:"items"`
}

type CreateOrderResponse struct {
	OrderID string `json:"order_id"`
	Total   int    `json:"total"`
}

//encore:api auth method=POST path=/orders
func CreateOrder(ctx context.Context, req *CreateOrderRequest) (*CreateOrderResponse, error) {
	// Business logic here
}
```
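The `auth` keyword on that endpoint implies one more piece of structure: the app needs exactly one auth handler, which Encore calls with the request's token before the endpoint runs. A minimal sketch (the `validateToken` call is an illustrative placeholder for whatever token check your app uses):

```go
import (
	"context"

	"encore.dev/beta/auth"
	"encore.dev/beta/errs"
)

// Encore invokes this for every request to an endpoint marked `auth`.
//
//encore:authhandler
func AuthHandler(ctx context.Context, token string) (auth.UID, error) {
	userID, ok := validateToken(token) // illustrative: plug in your token validation
	if !ok {
		return "", &errs.Error{Code: errs.Unauthenticated, Message: "invalid token"}
	}
	return auth.UID(userID), nil
}
```

Once this exists, an agent never writes auth middleware again; it just adds `auth` to the annotation.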
Infrastructure like databases, Pub/Sub topics, and caches is declared directly in Go code:
```go
// Declare a database. Encore provisions it locally and in the cloud.
var db = sqldb.NewDatabase("orders", sqldb.DatabaseConfig{
	Migrations: "./migrations",
})

// Declare a Pub/Sub topic. Encore handles creation and subscriptions.
var OrderEvents = pubsub.NewTopic[OrderEvent]("order-events", pubsub.TopicConfig{
	DeliveryGuarantee: pubsub.AtLeastOnce,
})

// Declare a cache cluster. Encore provisions Redis automatically.
var orderCache = cache.NewCluster("orders", cache.ClusterConfig{})
```
When the agent sees these declarations in the codebase, it follows them. No router to pick, no connection pool to configure.
In practice, that looks like this:
The agent reads the existing service structure, sees how APIs and infrastructure are declared, and writes an endpoint that follows the same patterns. It doesn't invent a new architecture or bring in a router you've never used. It just adds to what's already there.
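For example, prompted to add a subscriber for order-events, an agent that has seen the declarations above can mirror them rather than inventing a consumer loop (`processOrder` is an illustrative placeholder):

```go
// Subscribe to the topic declared in this service. Encore wires up the
// subscription in every environment, local and cloud alike.
var _ = pubsub.NewSubscription(OrderEvents, "process-order",
	pubsub.SubscriptionConfig[OrderEvent]{
		Handler: func(ctx context.Context, event OrderEvent) error {
			// Returning an error triggers redelivery under AtLeastOnce.
			return processOrder(ctx, event) // illustrative business logic
		},
	},
)
```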
Conventions handle code structure, but agents can do a lot more when they also understand the running application.
Encore ships with an MCP (Model Context Protocol) server that gives AI agents structured access to the live system. Running encore mcp start exposes service architecture, database schemas, distributed traces, infrastructure state, live API calls, and framework docs as structured data the agent can query.
Schema access means generated queries match your actual tables. Trace access means the agent debugs with real request data. API access means it can call endpoints and verify its own work.
> Add an endpoint that publishes to the order-events topic, call it, and verify the subscription handler processes the message correctly by checking the traces.
The agent implements the endpoint, calls it, and uses MCP to fetch and verify the traces. All without leaving the editor.
Encore works with any AI agent or editor that supports rules files or MCP. Claude Code, Cursor, Windsurf, Copilot, Zed, you name it. The setup is three steps: install Encore, generate AI rules for your editor, and start the MCP server so the agent can see your application's architecture, schemas, and traces.
```bash
# Install Encore
brew install encoredev/tap/encore            # macOS
curl -L https://encore.dev/install.sh | bash # Linux
iwr https://encore.dev/install.ps1 | iex     # Windows

# Create your app
encore app create my-app

# Generate AI config for Cursor, Claude Code, VS Code, or Zed
encore llm-rules init

# Start the MCP server for full application context
encore mcp start
```
Open the project in your editor and start prompting. The AI rules give the agent Encore's conventions, and the MCP server gives it live context about your running app.
Go developers avoid frameworks for good reasons, but agents need structure to work inside. They're good at filling in the rest of the owl (the business logic, the queries, the wiring) once the overall shape is clear. They're bad at deciding what that shape should be.
Give them the structure and they write code you'd actually keep.
Want to see it in action? We'd love to show you how Encore works with your AI tools of choice. Book a 1:1 intro: no pressure, just a conversation.