Cloudflare Workers: The Good Parts
Workers are underrated. You can run code at the edge, store data in R2 and KV, and route traffic globally — all without managing servers. Here's the mental model that unlocked it for me.
I spent a long time thinking Cloudflare Workers were just a fancy CDN feature — something you'd use to rewrite headers or A/B test landing pages. Then I actually sat down and read the docs, and my understanding of "serverless" got completely reframed.
Workers aren't just edge functions. They're a full compute platform. Here's what clicked for me.
The mental model
Forget Lambda. Forget containers. A Worker is a V8 isolate — a tiny JavaScript runtime that starts in under a millisecond, runs your code, and gets destroyed. It runs in 300+ data centers simultaneously. There's no cold start in the traditional sense.
The programming model is dead simple:
```typescript
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);
    if (url.pathname === "/api/hello") {
      return Response.json({ message: "Hello from the edge" });
    }
    return new Response("Not found", { status: 404 });
  },
};
```
That's it. It's just a fetch handler. You return a Response. Everything else is built on top of that.
The storage primitives
This is where Workers gets genuinely interesting. Cloudflare gives you three storage options, each with different tradeoffs:
- KV (Key-Value) — Globally replicated, eventually consistent. Great for config, feature flags, cached content. Reads are fast everywhere; writes propagate in seconds.
- R2 (Object storage) — S3-compatible but zero egress fees. Use it anywhere you'd use S3. Works great for user uploads, static assets, generated files.
- Durable Objects — The weird one. Each object is a single-threaded stateful actor with its own storage, running in a specific location. Use it for real-time collaboration, rate limiting, or anything that needs strong consistency.
For most apps, you'll only ever need KV and R2. Durable Objects are powerful but complex — save them for when you genuinely need coordinated state.
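To make the KV tradeoff concrete, here's a minimal feature-flag read. The `FlagStore` interface and the `flag:` key prefix are assumptions for the sketch — only KV's `get` method is modeled:

```typescript
// Minimal sketch of a feature-flag lookup backed by KV.
// FlagStore stands in for a hypothetical KV namespace binding.
interface FlagStore {
  get(key: string): Promise<string | null>;
}

// Reads are fast at every edge location, but a recent write made elsewhere
// may not be visible yet — fine for flags, wrong for inventory counts.
export async function isFlagEnabled(flags: FlagStore, name: string): Promise<boolean> {
  const raw = await flags.get(`flag:${name}`);
  return raw === "true";
}
```

Because the binding is just an interface here, the same function works against a real KV namespace or an in-memory stub in tests.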
What I use Workers for
Here's my actual production usage:
- API proxies — Add auth headers, rate limit, and cache responses before they hit my origin server
- Webhooks — Handle Telegram/GitHub webhook validation and fan-out at the edge
- Image transforms — Resize and optimize images on-the-fly using Cloudflare Images (Workers does the routing)
- Geo-routing — Serve different content or redirect users based on `request.cf.country`
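The geo-routing case is small enough to sketch. `request.cf` is populated by Cloudflare at the edge (so it's typed as optional below), and the regional hostname is a made-up example:

```typescript
// Sketch of country-based redirects. request.cf only exists when the
// request actually comes through Cloudflare, hence the optional typing.
type CfRequest = Request & { cf?: { country?: string } };

export function geoRedirect(request: CfRequest): Response | null {
  const country = request.cf?.country;
  if (country === "DE") {
    const url = new URL(request.url);
    url.hostname = "de.example.com"; // hypothetical regional host
    return Response.redirect(url.toString(), 302);
  }
  return null; // fall through to normal handling
}
```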
Tunnels: the missing piece
Cloudflare Tunnel is what ties it all together for self-hosters. You run a small daemon (cloudflared) on your server, and it creates an encrypted outbound tunnel to Cloudflare's edge. No inbound ports. No firewall rules. Just:
```sh
cloudflared tunnel create my-tunnel
cloudflared tunnel route dns my-tunnel subdomain.yourdomain.com
cloudflared tunnel run my-tunnel
```
Your local service is now publicly accessible, proxied through Cloudflare, with DDoS protection and automatic HTTPS. I run n8n, Gitea, and a few other services this way. It's replaced every nginx reverse proxy config I ever wrote.
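For more than one service, a single tunnel can fan out via ingress rules in the config file. A rough sketch — hostnames and ports are examples, and the credentials file is named after the tunnel's UUID, which `cloudflared tunnel create` prints:

```yaml
# ~/.cloudflared/config.yml — one tunnel fronting several local services.
tunnel: my-tunnel
credentials-file: /root/.cloudflared/<tunnel-uuid>.json
ingress:
  - hostname: n8n.yourdomain.com
    service: http://localhost:5678
  - hostname: git.yourdomain.com
    service: http://localhost:3000
  # a catch-all rule is required and must come last
  - service: http_status:404
```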
The gotchas
Workers aren't Node.js. The runtime differences will bite you.
- No `fs`, no `child_process`, no native modules — it's a browser-like runtime
- CPU time limit of 10ms on the free plan (30s on paid). Long-running tasks need to be offloaded
- KV is eventually consistent — don't use it for inventory or anything where reads must reflect the latest write immediately
- Wrangler dev mode is good but not perfect — some behaviors only appear in production
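One common pattern for the "offload it" constraint: respond immediately and finish I/O-bound work in the background with `ctx.waitUntil`. Note this buys wall-clock time for I/O after the response is sent, not extra CPU time. The `logRequest` helper below is a hypothetical stand-in:

```typescript
// Sketch: return the response right away, then let the Worker stay alive
// just long enough to finish background I/O via ctx.waitUntil.
type Ctx = { waitUntil(promise: Promise<unknown>): void };

export function handle(request: Request, ctx: Ctx): Response {
  const response = Response.json({ ok: true });
  ctx.waitUntil(logRequest(request)); // hypothetical fire-and-forget logger
  return response;
}

async function logRequest(request: Request): Promise<void> {
  // e.g. POST the URL to an analytics endpoint; a failure here
  // never delays or breaks the client's response
  void request.url;
}
```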
None of these are dealbreakers. They're just constraints that push you toward better architecture. Once you internalize them, the platform snaps into focus.