Redis for Solo Founders: Caching Patterns That Beat Postgres on a Single VPS
April 7, 2026
Solo founders ship on a single VPS because the business math says so: one bill, one SSH session, one place to panic when Hacker News notices you. Postgres is the default brain — durable, expressive, boring in the best way. But Postgres on a small instance is also a magnet for death-by-a-thousand-reads: dashboards, public APIs, admin pages, and analytics-ish queries that look innocent until they stack. Redis is not a replacement for your database; it is a selective amnesia layer that keeps Postgres focused on writes and correctness while the hot paths breathe.
This article is a pragmatic pattern guide for 2026: what to cache, how to invalidate without inventing a second career in distributed systems, and how to avoid the classic traps that turn a speed-up into data lies.
We will stay vendor-neutral on hosting: whether your VPS lives on Hetzner, DigitalOcean, or a basement NUC, the patterns are the same — only the failure modes change when the NIC gets chatty.
Know the enemy: read amplification
Most early SaaS performance pain is not algorithmic complexity; it is read amplification — one HTTP request fans out into dozens of SQL lookups, N+1 shapes, or repeated aggregations that hit the same rows. You can optimize queries forever, or you can admit some answers do not need millisecond freshness.

Redis wins when your workload has a tall read-to-write ratio and you can name the staleness budget in plain language: “admin stats can lag 30 seconds,” “feature flags tolerate 5 seconds,” “public project pages can be 60 seconds behind for non-owners.” If you cannot state the budget, you are not ready to cache; you are ready to log queries.
Pattern 1: cache-aside for entity reads
The workhorse: on read, try Redis; on miss, query Postgres, populate Redis with a TTL. Keep keys boring: user:{id}, project:{id}:summary. TTL is your safety net when you forget invalidation. Choose TTL from human expectations, not vibes — shorter for security-sensitive, longer for expensive aggregates.
Invalidation rule of thumb: on writes that change the entity, delete the key (or version the key; more below). Deleting is simpler than updating in place when objects have nested shapes you do not want to leave partially stale.
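A minimal sketch of cache-aside plus delete-on-write, assuming a redis-py-style client (`get`, `set(..., ex=)`, `delete`); the `get_user` and `db_fetch` names are hypothetical stand-ins for your own data layer:

```python
import json

def get_user(r, db_fetch, user_id, ttl=300):
    """Cache-aside: try Redis; on miss, hit Postgres and populate with a TTL."""
    key = f"user:{user_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)
    row = db_fetch(user_id)  # e.g. SELECT id, name, plan FROM users WHERE id = %s
    if row is not None:
        r.set(key, json.dumps(row), ex=ttl)  # the TTL is your invalidation safety net
    return row

def on_user_write(r, user_id):
    """On any write that changes the entity, delete the key; the next read repopulates."""
    r.delete(f"user:{user_id}")
```

Note the asymmetry: reads do all the caching work, writes only delete. That keeps the write path trivial to audit.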
Pattern 2: counters and rate limits
Redis shines for INCR with expirations: API throttles, signup bursts, invite quotas. Postgres can do this, but row-level contention on a hot counter row hurts. Keep limits coarse enough to be fair, tight enough to protect sleep.
Use sliding windows or token buckets when you care about smoothness; simple counters when you only need “stop the hammering.” Document what happens on Redis failure: fail open risks abuse; fail closed risks outages. Pick consciously.
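A sketch of the simple "stop the hammering" variant: a fixed-window counter, with the failure mode made explicit. Names are hypothetical; the client is assumed to expose `incr` and `expire`, and the catch-all in the wrapper stands in for your client library's connection error class:

```python
def allow_request(r, client_id, limit=60, window=60):
    """Fixed-window limiter: INCR a per-client counter; EXPIRE starts the window."""
    key = f"rl:{client_id}"
    count = r.incr(key)
    if count == 1:
        r.expire(key, window)  # first request of the window sets the deadline
    return count <= limit

def allow_request_fail_open(r, client_id, **kw):
    """Decide the failure mode consciously: here, Redis down lets traffic through."""
    try:
        return allow_request(r, client_id, **kw)
    except Exception:  # in production, catch your client's connection error type
        return True    # fail open: availability over abuse protection
```

Flipping that `return True` to `False` is the whole fail-closed decision; write down which one you picked.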

Pattern 3: session and ephemeral tokens
Short-lived sessions, magic-link state, OAuth state parameters — data that already has a natural TTL belongs in Redis by default. Postgres migrations for ephemeral junk slow you down emotionally. Just persist what must survive server restarts for your threat model; everything else can evaporate.
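A magic-link sketch under those assumptions (hypothetical function names; client assumed to expose `get`/`set(..., ex=)`/`delete`). The state carries its own TTL, so there is no cleanup cron and no migration:

```python
import secrets

def issue_magic_link(r, email, ttl=900):
    """Ephemeral login state with a natural TTL; evaporates on its own."""
    token = secrets.token_urlsafe(32)
    r.set(f"magic:{token}", email, ex=ttl)
    return token

def redeem_magic_link(r, token):
    """One-shot redemption: read then delete so a token cannot be replayed.
    On Redis 6.2+ a single GETDEL makes the read-and-delete atomic."""
    key = f"magic:{token}"
    email = r.get(key)
    if email is not None:
        r.delete(key)
    return email
```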
Pattern 4: materialized-ish views without the ceremony
Heavy aggregates — daily revenue rollups, leaderboards — can be computed on a schedule into Redis hashes or sorted sets. This is not a substitute for proper accounting tables if money is involved, but it is excellent for internal dashboards that only need approximate steering data.
Embed a version in the key and bump it when the schema changes: rollup:v2:2026-04-07. Nothing hurts like a silent format mismatch after a deploy.
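A sketch of a scheduled leaderboard snapshot into a versioned sorted set, assuming redis-py-style `zadd`/`zrevrange` and a client configured to return decoded strings; the job and key names are hypothetical:

```python
def refresh_leaderboard(r, rows, version="v2", ttl=3600):
    """Scheduled job: snapshot an aggregate query into a versioned sorted set."""
    key = f"leaderboard:{version}"  # bump the version when the shape changes
    r.delete(key)
    r.zadd(key, {str(user_id): score for user_id, score in rows})
    r.expire(key, ttl)  # a dead job means stale-then-gone, not stale forever

def top_n(r, n=10, version="v2"):
    """Dashboard read: approximate steering data, not accounting truth."""
    return r.zrevrange(f"leaderboard:{version}", 0, n - 1, withscores=True)
```

The TTL on the snapshot is deliberate: if the refresh job dies, the dashboard goes empty instead of quietly serving last month's numbers.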
Invalidation strategies that do not require a PhD
- Key deletion on write — simplest; tolerate occasional thundering herds on big launches.
- Versioned keys — bump a global or per-entity version on write; readers compose keys with the version from a tiny Redis or in-memory pointer.
- Short TTL + async refresh — good for read-heavy public pages where stale-while-revalidate behavior is acceptable.
Avoid elaborate pub/sub chains until you have metrics proving you need them. Solo shops ship; they do not operate miniature Kafka cosplays for twelve paying users.
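The versioned-keys strategy above can be sketched in a few lines, assuming a client that returns decoded strings (`decode_responses=True` in redis-py terms); names are hypothetical:

```python
def cache_key(r, entity, entity_id, suffix="summary"):
    """Compose the key from a tiny version pointer kept in Redis."""
    ver = r.get(f"ver:{entity}:{entity_id}") or "0"
    return f"{entity}:{entity_id}:{suffix}:v{ver}"

def read_cached(r, entity, entity_id):
    return r.get(cache_key(r, entity, entity_id))

def bump_version(r, entity, entity_id):
    """Writer: bump the pointer instead of deleting; old keys die by their TTL."""
    return r.incr(f"ver:{entity}:{entity_id}")
```

The appeal is that a write never races a repopulating reader over the same key: the bump makes old versions unreachable immediately, and TTLs garbage-collect them.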
Memory planning on a small VPS
Redis is fast until it is OOM-killed. Set maxmemory and an eviction policy you understand; for pure caching, allkeys-lru is the common choice. For mixed workloads (sessions + cache), run a separate instance if you can afford the RAM tax: logical DB indexes share one memory limit and one eviction policy, so only a second instance stops cache pressure from evicting login state.
Monitor memory fragmentation and key cardinality. Huge sets of tiny keys cost overhead; sometimes a single hash per namespace wins.
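A minimal sketch of the relevant redis.conf directives; the values are illustrative, so size them to your box:

```conf
# redis.conf -- cap the cache before the kernel caps it for you
maxmemory 256mb                 # leave headroom for Postgres and the app on the same box
maxmemory-policy allkeys-lru    # pure cache: evict least-recently-used keys first

# Mixed workloads: run a second instance for sessions with its own limit and
# maxmemory-policy noeviction, so cache pressure cannot evict login state.
```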
Persistence: do you need RDB/AOF?
Pure cache: often no persistence — rebuild on restart. If Redis holds non-reconstructible ephemeral tokens you cannot afford to drop on reboot, revisit whether those should be Postgres after all, or enable AOF with realistic fsync expectations. Every persistence choice is a latency and durability trade.
Security and tenancy
Bind Redis to private interfaces, require AUTH, rotate credentials, and never expose it publicly “temporarily.” Cache poisoning and lateral movement stories exist for a reason. Multi-tenant SaaS should namespace keys aggressively and never trust client-supplied cache keys without sanitization.
Lua scripts: tiny transactions when you need them
Sometimes you need compare-and-set semantics across a couple of keys — inventory-like counters, claim tokens, or “only one worker refreshes this key” guards. Redis Lua lets you bundle a few operations atomically on the server. Keep scripts short, deterministic, and preloaded via SCRIPT LOAD in production paths where possible. This is not your first caching tool; it is the scalpel when race conditions show up in logs.
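A sketch of a compare-and-delete "claim" guard, using redis-py's `register_script` (which handles SCRIPT LOAD and EVALSHA for you); the script and function names are hypothetical:

```python
# Atomically claim a one-shot token: delete it only if it still holds the
# expected value, so two workers cannot both win between a GET and a DEL.
CLAIM_SCRIPT = """
if redis.call('GET', KEYS[1]) == ARGV[1] then
    redis.call('DEL', KEYS[1])
    return 1
end
return 0
"""

def claim_token(r, key, expected):
    script = r.register_script(CLAIM_SCRIPT)  # loaded once, then invoked by hash
    return script(keys=[key], args=[expected]) == 1
```

The whole script runs as one atomic unit on the server, which is exactly the scalpel described above; keep it to a handful of calls so it never blocks the event loop noticeably.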
Modules and JSON: convenience with a cost
RedisJSON and search modules are powerful for document-y caches. They also increase memory footprint and operational surface area. On a single VPS, prefer simple string blobs with explicit serialization until you outgrow them. Modules are not forbidden — just justify them with measured wins, not tutorial enthusiasm.
Connection handling and client libraries
Opening a TCP connection per request will murder throughput faster than a missing index. Use connection pooling in your app server, watch idle timeouts behind NAT, and tune timeout values for commands that should fail fast. If you run serverless-ish workers, consider a small local proxy or shared pool pattern — cold starts plus Redis handshakes are a classic latency trap.
Postgres read replicas vs Redis: choose the simpler lie
When you outgrow one box, the fork in the road is often “add a read replica” versus “add Redis.” Replicas improve read scaling for genuinely relational workloads; Redis helps when the data shape is denormalized or the hot path is tiny. Some teams do both; solo founders should sequence: fix queries, add Redis for provably hot keys, then consider replicas when the evidence shows Postgres CPU is still pinned.
When Postgres is still the answer
Keep source of truth in SQL for financial transactions, authorization grants you cannot afford to misread, and anything requiring complex joins as the primary interface. Redis accelerates; it does not adjudicate.
Bottom line
On a single VPS, Redis is leverage: shave milliseconds, protect Postgres, and buy calm during spikes — if you choose patterns with explicit TTLs and honest invalidation. Start with cache-aside on your noisiest reads, add counters for protection, push ephemeral state out of SQL, and measure before you engineer moon math. Speed is delightful; lying to customers is not.
Carry one design rule into prod: every cached key has an owner and a freshness story. If you cannot point to the write path that invalidates or the TTL that expires it, you are not caching — you are hoping, and hope does not scale.
Instrumentation before optimization
Turn on query logging or APM for a day before you cache. Solo founders love coding solutions; resist until you can name the top three expensive endpoints with actual timings attached. Adding a missing index often costs nothing compared to operating Redis; fix that first when EXPLAIN shames you.
Deployment hygiene
Run Redis supervised (systemd), with restart limits, and alerts on memory usage in production. Snapshot backups are optional for pure cache; uptime is not. If you colocate Redis with app and Postgres on one box, schedule background jobs during low-traffic windows so compaction spikes do not coincide with user peaks.
Local development parity
Use Docker Compose or a dev Redis instance that matches production major version. Serialization surprises between local JSON and production msgpack have fueled incident stories. Keep serializers explicit and versioned.
Thundering herd mitigation (lightweight)
When a popular key expires, many workers may stampede Postgres. Mitigations: jittered TTLs, single-flight locks with short timeouts, or serving slightly stale values while one refresher rebuilds. You do not need all three — pick one that matches your stack.
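Two of those mitigations fit in a few lines each; this sketch assumes a redis-py-style `set(..., nx=True, ex=...)` and uses hypothetical names:

```python
import random

def jittered_ttl(base=300, spread=0.1):
    """Spread expirations +/- 10% so a cohort of keys does not expire in lockstep."""
    return int(base * (1 + random.uniform(-spread, spread)))

def acquire_refresh_lock(r, key, lock_ttl=10):
    """Single-flight: SET NX means exactly one worker rebuilds; the rest
    serve the stale value (or wait briefly) instead of stampeding Postgres."""
    return bool(r.set(f"lock:{key}", "1", nx=True, ex=lock_ttl))
```

The short TTL on the lock matters: if the winning worker dies mid-rebuild, the lock expires and another worker takes over instead of the key staying cold forever.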
Feature flags and config blobs
Redis is a convenient home for small JSON config with TTL and an admin “bump version” button. Pair with Postgres as authority if you need audit trails; Redis can still accelerate reads while writes land in SQL first.
Document rollback: if Redis and Postgres disagree during deploy, which wins? Write the answer in your runbook while you are calm, not during an outage. Ambiguity here becomes customer-visible bugs under load.