
In a TypeScript backend, the compiler usually knows the shape of data moving through the system. API requests have types, database queries have types, service-to-service calls have types. Caching is the exception. You serialize an object to JSON, store it under a string key in Redis, and cast it back when you read it. The types are gone at that point, and the bugs that come from that are the kind that show up in production after a deploy rather than in your editor beforehand.
Encore.ts v1.55 adds built-in caching with typed keyspaces that keep the type system intact through to Redis. This post walks through what typically goes wrong with stringly-typed caching and how keyspaces change it.
Most Redis usage in TypeScript backends looks something like this:
```typescript
await redis.set(`user:${userId}`, JSON.stringify(profile));

// somewhere else, maybe another service
const raw = await redis.get(`user:${userId}`);
const profile = JSON.parse(raw!) as UserProfile;
```
The key is a string assembled by convention, and the value is JSON you cast back on read. TypeScript's type system stops at the Redis boundary.
This creates a few problems that tend to surface late. If someone renames a field in the `UserProfile` type, the cached values still have the old shape, and `JSON.parse` happily returns them. The code reads `profile.name` and gets `undefined` because the cached value still uses the old `username` field, but there's no type error because the `as UserProfile` cast told TypeScript to trust it. If another service writes to the same key pattern (`user:${id}`) with different data, the values silently overwrite each other. And the key naming convention itself is just a string template that nothing enforces.
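The field-rename failure mode takes only a few lines of plain TypeScript to reproduce (a `Map` stands in for Redis here; the `UserProfile` shape is illustrative):

```typescript
// Before a refactor, the field was called `username`; the type now says `name`.
interface UserProfile {
  name: string;
  email: string;
}

// Simulated cache: a value written before the rename still has the old shape.
const cache = new Map<string, string>();
cache.set("user:123", JSON.stringify({ username: "ada", email: "ada@example.com" }));

const raw = cache.get("user:123");
const profile = JSON.parse(raw!) as UserProfile;

// Compiles cleanly — the cast told TypeScript to trust the cached JSON —
// but the field doesn't exist at runtime.
console.log(profile.name); // undefined
```

No compiler error, no runtime exception, just an `undefined` flowing into whatever reads the profile next.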
These are the kinds of bugs that are hard to reproduce locally because the cache is usually empty in development, and they show up in production where cached values stick around across deploys.
The core abstraction is the keyspace, a typed container that defines the key structure, the value type, and how keys map to Redis key strings, all at compile time:
```typescript
import { CacheCluster, StructKeyspace } from "encore.dev/storage/cache";

const cluster = new CacheCluster("my-cache", {
  evictionPolicy: "allkeys-lru",
});

interface UserProfile {
  name: string;
  email: string;
  plan: "free" | "pro";
}

const profiles = new StructKeyspace<{ userId: string }, UserProfile>(cluster, {
  keyPattern: "profile/:userId",
});
```
Calling `profiles.get({ userId: "123" })` generates the Redis key `profile/123` and returns `UserProfile | undefined`, while `profiles.set(...)` requires the value to match the `UserProfile` shape. The compiler checks both directions.
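The mapping from a typed key object to a Redis key string is straightforward to picture. Here's a rough sketch of the idea — not Encore's actual implementation, just an illustration of how a `keyPattern` could interpolate its placeholders:

```typescript
// Interpolates ":name" placeholders in a key pattern with fields from a
// typed key object. Illustrative only; Encore does this internally.
function renderKey(pattern: string, key: Record<string, string | number>): string {
  return pattern.replace(/:([A-Za-z_]+)/g, (_match: string, name: string) => {
    const value = key[name];
    if (value === undefined) throw new Error(`missing key field: ${name}`);
    return String(value);
  });
}

console.log(renderKey("profile/:userId", { userId: "123" })); // "profile/123"
```

The difference from hand-built template strings is that in Encore the key object's shape is part of the keyspace's type, so a missing or misnamed field is a compile error rather than a runtime one.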
If a rate limiter in another service needs to cache counters per user, it gets its own keyspace with its own key pattern:
```typescript
import { IntKeyspace } from "encore.dev/storage/cache";

const rateLimits = new IntKeyspace<{ userId: string }>(cluster, {
  keyPattern: "ratelimit/:userId",
});
```
The key patterns are different (`profile/:userId` vs `ratelimit/:userId`), so the Redis keys can't collide. The value types are also different (`StructKeyspace` vs `IntKeyspace`), so reading from the wrong keyspace is a compile error rather than a silent bug.

Each keyspace type constrains its API surface to operations that match the underlying Redis data structure. An `IntKeyspace` exposes `increment` and `decrement`, a `StringListKeyspace` exposes `pushRight`, `popLeft`, and `getRange`. Calling `increment` on a struct keyspace won't compile.
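The mechanism behind that compile error is ordinary class design: each keyspace class simply doesn't declare methods that make no sense for its value type. A minimal in-memory sketch (stand-in classes, not Encore's API):

```typescript
// Integer keyspaces get arithmetic operations...
class IntKeyspaceSketch<K extends string> {
  private store = new Map<K, number>();
  increment(key: K, delta = 1): number {
    const next = (this.store.get(key) ?? 0) + delta;
    this.store.set(key, next);
    return next;
  }
}

// ...struct keyspaces get get/set, and nothing numeric.
class StructKeyspaceSketch<K extends string, V extends object> {
  private store = new Map<K, V>();
  set(key: K, value: V): void { this.store.set(key, value); }
  get(key: K): V | undefined { return this.store.get(key); }
  // No increment() here — structs.increment(...) is a type error.
}
```

Because the wrong operation is absent from the type, misuse fails in the editor instead of producing a Redis `WRONGTYPE` error at runtime.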
Encore ships eight keyspace types:
| Keyspace | Stores | Typical use |
|---|---|---|
| `StringKeyspace` | `string` | Session tokens, serialized data |
| `IntKeyspace` | `number` (integer) | Counters, rate limits |
| `FloatKeyspace` | `number` (float) | Scores, running averages |
| `StructKeyspace` | JSON object | User profiles, cached API responses |
| `StringListKeyspace` | `string[]` | Recent activity, message queues |
| `NumberListKeyspace` | `number[]` | Score history, time series |
| `StringSetKeyspace` | `Set<string>` | Tags, unique visitors |
| `NumberSetKeyspace` | `Set<number>` | Unique scores |
Rate limiting is a clean caching use case because it needs fast atomic increments with automatic expiry. Here's what it looks like:
```typescript
import { CacheCluster, IntKeyspace, expireIn } from "encore.dev/storage/cache";
import { api, APIError } from "encore.dev/api";
import { getAuthData } from "~encore/auth";

const cluster = new CacheCluster("rate-limit", {
  evictionPolicy: "allkeys-lru",
});

const requestsPerUser = new IntKeyspace<{ userId: string }>(cluster, {
  keyPattern: "requests/:userId",
  defaultExpiry: expireIn(10_000), // 10 seconds
});

export const myEndpoint = api(
  { expose: true, method: "GET", path: "/my-endpoint", auth: true },
  async (): Promise<{ message: string }> => {
    const auth = getAuthData()!;
    const count = await requestsPerUser.increment({ userId: auth.userID }, 1);
    if (count > 10) {
      throw APIError.resourceExhausted("rate limit exceeded");
    }
    return { message: "Hello!" };
  }
);
```
The `increment` call is atomic and returns a `number`. Each counter resets after 10 seconds of inactivity via `defaultExpiry`. With a raw Redis client you'd call `INCR` and separately manage `EXPIRE`, or combine them in a Lua script. The keyspace handles expiry as part of its configuration, so the application code doesn't need to think about it.
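The counter-with-expiry behavior the keyspace provides can be sketched in memory in a few lines — a hypothetical stand-in, not Encore's implementation, with time passed in explicitly so the expiry logic is visible:

```typescript
// A counter whose value resets after `ttlMs` of inactivity: each increment
// refreshes the expiry, mirroring a per-write defaultExpiry.
class ExpiringCounter {
  private counts = new Map<string, { value: number; expiresAt: number }>();
  constructor(private ttlMs: number) {}

  increment(key: string, delta = 1, now = Date.now()): number {
    const entry = this.counts.get(key);
    if (!entry || entry.expiresAt <= now) {
      // First request, or the window expired: start a fresh counter.
      this.counts.set(key, { value: delta, expiresAt: now + this.ttlMs });
      return delta;
    }
    entry.value += delta;
    entry.expiresAt = now + this.ttlMs;
    return entry.value;
  }
}

const limiter = new ExpiringCounter(10_000);
const t0 = Date.now();
limiter.increment("user-1", 1, t0);          // 1
limiter.increment("user-1", 1, t0 + 5_000);  // 2 — still within the window
limiter.increment("user-1", 1, t0 + 30_000); // 1 — expired, counter reset
```

In production the same logic runs atomically inside Redis, which is what makes it safe across concurrent requests and multiple service instances.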
Setting up Redis for production usually means ElastiCache on AWS or Memorystore on GCP, each with its own VPC configuration, subnet groups, parameter groups, and security groups. Then you need different settings per environment and connection strings wired through environment variables.
The `CacheCluster` declaration in your application code is the infrastructure definition. Locally, `encore run` provides an in-memory cache implementation. When you deploy through Encore Cloud, it provisions ElastiCache or Memorystore in your own cloud account, the same way it handles databases, Pub/Sub, object storage, and cron jobs.
Caching with the same keyspace model is also available in Encore Go, with the same infrastructure provisioning and local development experience.
The caching docs cover all eight keyspace types, expiry options, error handling, and testing. If you want to try it out, `encore version update` gets you to v1.55, and `encore run` will spin up the local cache automatically.
Encore is an open-source backend framework for TypeScript and Go. Infrastructure is declared in application code and provisioned automatically. GitHub.


