Adding caching to a TypeScript API typically involves running Redis locally with Docker, managing connection strings, handling serialization, and provisioning ElastiCache or Memorystore for production. This guide shows a faster approach where the cache is provisioned automatically and the API is type-safe from the start.
We'll use Encore, which provides a built-in caching primitive that provisions Redis automatically during local development and ElastiCache (AWS) or Memorystore (GCP) in production.
Start by declaring a cache cluster. This is all the configuration needed.
// cache/cache.ts
import { CacheCluster } from "encore.dev/storage/cache";

// Becomes ElastiCache (Redis) on AWS or Memorystore (Redis) on GCP.
export const cluster = new CacheCluster("app-cache", {
  evictionPolicy: "allkeys-lru",
});
Running encore run starts a local Redis instance automatically. Deploying provisions ElastiCache or Memorystore in your cloud account with sensible defaults.
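The `allkeys-lru` eviction policy means that when the cache hits its memory limit, Redis evicts the least-recently-used key, whatever it is. As a toy illustration of the LRU idea (a self-contained sketch, not Encore or Redis code), here is a tiny cache that uses a `Map`'s insertion order to track recency:

```typescript
// Illustrative sketch of LRU eviction. A JavaScript Map preserves
// insertion order, so re-inserting on access keeps the most recently
// used keys at the end and the eviction candidate at the front.
class TinyLRU<K, V> {
  private map = new Map<K, V>();
  constructor(private capacity: number) {}

  get(key: K): V | undefined {
    const value = this.map.get(key);
    if (value !== undefined) {
      // Re-insert to mark this key as most recently used.
      this.map.delete(key);
      this.map.set(key, value);
    }
    return value;
  }

  set(key: K, value: V): void {
    this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.capacity) {
      // Evict the least recently used entry (first in insertion order).
      const oldest = this.map.keys().next().value as K;
      this.map.delete(oldest);
    }
  }
}
```

With capacity 2, setting `a` and `b`, reading `a`, then setting `c` evicts `b`: it is the entry that was used least recently.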
Keyspaces define what you store and how keys are structured. Each keyspace is fully typed.
// cache/keyspaces.ts
import { StructKeyspace, StringKeyspace, IntKeyspace, expireIn } from "encore.dev/storage/cache";
import { cluster } from "./cache";

interface UserProfile {
  id: number;
  email: string;
  name: string;
  plan: string;
}

// Cache user profiles by ID
export const userProfiles = new StructKeyspace<{ userId: number }, UserProfile>(cluster, {
  keyPattern: "user/:userId",
  defaultExpiry: expireIn(15 * 60 * 1000), // 15 minutes
});

// Cache session tokens
export const sessions = new StringKeyspace<{ sessionId: string }>(cluster, {
  keyPattern: "session/:sessionId",
  defaultExpiry: expireIn(3600 * 1000), // 1 hour
});

// Rate limit counters
export const rateLimits = new IntKeyspace<{ ip: string; window: string }>(cluster, {
  keyPattern: "ratelimit/:ip/:window",
  defaultExpiry: expireIn(60 * 1000), // 1 minute
});
The key pattern uses path-like syntax with typed parameters. TypeScript ensures you pass the right key shape and get the right value type back.
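To make the mechanics concrete, here is a sketch (not Encore's implementation) of how a pattern like `user/:userId` plus a typed key object can be turned into a concrete Redis key string, with TypeScript checking that every named parameter is supplied:

```typescript
// Illustrative sketch: interpolate a path-like key pattern with a
// typed key object. Each ":name" segment is replaced by the matching
// field of the key; a missing field is a bug, so it throws.
function buildKey<K extends Record<string, string | number>>(
  pattern: string,
  key: K
): string {
  return pattern.replace(/:(\w+)/g, (_, name: string) => {
    const value = key[name];
    if (value === undefined) throw new Error(`missing key field: ${name}`);
    return String(value);
  });
}

// buildKey("user/:userId", { userId: 42 }) → "user/42"
// buildKey("ratelimit/:ip/:window", { ip: "1.2.3.4", window: "2024-01-01T12:00" })
//   → "ratelimit/1.2.3.4/2024-01-01T12:00"
```

In Encore this mapping happens behind the keyspace API, so you only ever work with typed key objects, never raw strings.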
The most common caching pattern is cache-aside: check the cache first, fall back to the database, and populate the cache on a miss.
// users/users.ts
import { api, APIError } from "encore.dev/api";
import { SQLDatabase } from "encore.dev/storage/sqldb";
import { userProfiles } from "../cache/keyspaces";

// Provisions RDS on AWS or Cloud SQL on GCP with sensible defaults (uses Docker Postgres locally).
const db = new SQLDatabase("users", { migrations: "./migrations" });

interface User {
  id: number;
  email: string;
  name: string;
  plan: string;
}

export const getUser = api(
  { method: "GET", path: "/users/:id", expose: true },
  async ({ id }: { id: number }): Promise<User> => {
    // Check cache first
    const cached = await userProfiles.get({ userId: id });
    if (cached) return cached;

    // Cache miss: query the database
    const user = await db.queryRow<User>`
      SELECT id, email, name, plan FROM users WHERE id = ${id}
    `;
    if (!user) throw APIError.notFound("user not found");

    // Populate the cache for next time
    await userProfiles.set({ userId: id }, user);
    return user;
  }
);
The get call returns the typed UserProfile or undefined on a cache miss. The set call stores the value with the default TTL (15 minutes in this case).
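The check-miss-populate sequence is the same for every keyspace, so it can be factored into a generic helper. This is a sketch under assumed names (`getOrLoad`, `load`); Encore's keyspaces simply give you the `get`/`set` pair it builds on:

```typescript
// Generic cache-aside helper: return the cached value if present,
// otherwise load from the source of truth and populate the cache.
async function getOrLoad<K, V>(
  cache: { get(k: K): Promise<V | undefined>; set(k: K, v: V): Promise<void> },
  key: K,
  load: (k: K) => Promise<V | undefined>
): Promise<V | undefined> {
  const cached = await cache.get(key);
  if (cached !== undefined) return cached; // cache hit
  const value = await load(key);           // cache miss: hit the source
  if (value !== undefined) await cache.set(key, value);
  return value;
}
```

The second lookup for the same key is then served from the cache without touching the loader at all.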
When data changes, invalidate the cache so stale data isn't served.
// (uses APIError from "encore.dev/api")
export const updateUser = api(
  { method: "PUT", path: "/users/:id", expose: true },
  async (req: { id: number; name: string; plan: string }): Promise<User> => {
    const user = await db.queryRow<User>`
      UPDATE users SET name = ${req.name}, plan = ${req.plan}
      WHERE id = ${req.id}
      RETURNING id, email, name, plan
    `;
    if (!user) throw APIError.notFound("user not found");

    // Invalidate the cached version
    await userProfiles.delete({ userId: req.id });
    return user;
  }
);
Alternatively, you can overwrite the cached entry instead of deleting it. This write-through variant keeps the cache warm, so the next read is still a hit and serves the new value:

// Update the cache in place instead of deleting
if (user) {
  await userProfiles.set({ userId: req.id }, user);
}
The IntKeyspace supports atomic increment and decrement operations, which makes it useful for rate limiting.
// middleware/ratelimit.ts
import { rateLimits } from "../cache/keyspaces";

export async function checkRateLimit(ip: string): Promise<boolean> {
  const window = new Date().toISOString().slice(0, 16); // per-minute window
  const count = await rateLimits.increment({ ip, window }, 1);
  return count <= 100; // allow up to 100 requests per minute
}
The counter auto-expires after 1 minute (the TTL defined in the keyspace), so you don't need to clean up old rate limit windows.
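The same fixed-window logic can be sketched without Redis at all, which makes the algorithm easy to see in isolation. This is an illustrative in-memory version (the names and the `Map`-based store are assumptions, not Encore code); the expiry timestamp plays the role of the keyspace's `defaultExpiry`:

```typescript
// Self-contained fixed-window rate limiter sketch. Each (ip, window)
// pair gets a counter that conceptually expires after 60 seconds,
// mirroring the keyspace TTL in the Encore version above.
const counters = new Map<string, { count: number; expiresAt: number }>();

function checkRateLimitLocal(ip: string, limit: number, now = Date.now()): boolean {
  const window = new Date(now).toISOString().slice(0, 16); // e.g. "2024-01-01T12:34"
  const key = `ratelimit/${ip}/${window}`;
  const entry = counters.get(key);
  if (!entry || entry.expiresAt <= now) {
    // New window: start a fresh counter.
    counters.set(key, { count: 1, expiresAt: now + 60_000 });
    return 1 <= limit;
  }
  entry.count += 1;
  return entry.count <= limit;
}
```

The first 100 calls in a given minute pass, the 101st is rejected, and a different IP is unaffected because it maps to a different key.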
encore run
Encore starts your app with a local Redis instance, PostgreSQL database, and distributed tracing. The cache operations appear in traces alongside database queries and API calls, so you can see the cache hit/miss ratio and latency in the local development dashboard.
git push encore
Encore Cloud provisions ElastiCache (AWS) or Memorystore (GCP) in your cloud account alongside your other infrastructure. The cache cluster gets production-appropriate defaults for networking, encryption, and backup that you can adjust per environment.
Deploy a starter with a database and caching to see how Encore handles infrastructure automatically.