Edge Computing for Web Apps: A Practical Guide for 2026
Author: ZTABS Team
Every millisecond of latency costs you users. A request from Tokyo to a server in Virginia travels roughly 12,000 kilometers, adding 80-150ms of round-trip network latency before your server even starts processing. Edge computing eliminates that distance by running your code in data centers distributed worldwide, physically close to your users.
In 2026, edge computing has moved from experimental to mainstream. The platforms are mature, the tooling is solid, and the cost models make sense for a growing range of workloads. But edge is not a silver bullet — it comes with constraints that matter. This guide covers what edge computing actually is, how the major platforms compare, and when you should (and should not) move your logic to the edge.
What Edge Computing Actually Means
Edge computing runs your application logic on servers distributed across dozens or hundreds of global locations, rather than in a single data center. When a user makes a request, it is handled by the nearest edge node instead of traveling to a centralized origin.
Traditional architecture:
User (Tokyo) → CDN (static assets) → Origin Server (Virginia) → Database (Virginia)
Round trip: ~150ms network + server processing
Edge architecture:
User (Tokyo) → Edge Node (Tokyo) → Response
Round trip: ~10ms network + edge processing
The key distinction is between edge caching (which CDNs have done for decades) and edge compute (running arbitrary code at CDN locations). Edge compute lets you execute business logic — authentication checks, A/B test assignments, content personalization, API routing — without a round trip to your origin.
Edge Function Platforms Compared
Three platforms dominate edge compute for web applications in 2026. Each has distinct strengths.
Cloudflare Workers
Cloudflare Workers pioneered the edge compute category. They run on Cloudflare's network of 300+ data centers using the V8 isolate model — the same JavaScript engine that powers Chrome, but without a full Node.js runtime. This means sub-millisecond cold starts and very low memory overhead.
```typescript
// Cloudflare Worker: geolocation-based content routing
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const country = request.cf?.country ?? "US";
    const url = new URL(request.url);

    if (country === "DE" || country === "AT" || country === "CH") {
      url.hostname = "eu.api.yourapp.com";
    } else if (country === "JP" || country === "KR" || country === "SG") {
      url.hostname = "apac.api.yourapp.com";
    }

    return fetch(url.toString(), request);
  },
};
```
Cloudflare Workers strengths:
- Largest edge network (300+ locations)
- Sub-millisecond cold starts via V8 isolates
- Durable Objects for stateful edge applications (counters, rate limiters, WebSocket coordination)
- R2 (S3-compatible storage) and D1 (SQLite at the edge) for data access without origin round trips
- KV (key-value store) distributed globally with eventual consistency
Cloudflare Workers constraints:
- No full Node.js API — some npm packages that depend on Node built-ins will not work
- CPU time limits (10ms on free plan, 30s on paid)
- 128 MB memory limit per isolate
- D1 is SQLite-based, not a full PostgreSQL replacement
```typescript
// Cloudflare Worker with D1: query SQLite at the edge
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);
    const slug = url.pathname.replace("/api/posts/", "");

    const post = await env.DB.prepare(
      "SELECT title, content, published_at FROM posts WHERE slug = ? AND status = ?"
    )
      .bind(slug, "published")
      .first();

    if (!post) {
      return new Response("Not found", { status: 404 });
    }

    return new Response(JSON.stringify(post), {
      headers: {
        "Content-Type": "application/json",
        "Cache-Control": "public, max-age=3600",
      },
    });
  },
};
```
Vercel Edge Functions
Vercel Edge Functions are deeply integrated with the Next.js framework. They use the same V8 isolate model as Cloudflare Workers and deploy to Vercel's edge network. The killer feature is seamless integration with Next.js middleware and App Router.
```typescript
// Next.js Middleware: runs at the edge on every request
import { NextRequest, NextResponse } from "next/server";

export function middleware(request: NextRequest) {
  const country = request.geo?.country ?? "US";
  const pathname = request.nextUrl.pathname;

  if (pathname.startsWith("/app") && !request.cookies.has("session")) {
    return NextResponse.redirect(new URL("/login", request.url));
  }

  const response = NextResponse.next();
  response.headers.set("x-user-country", country);
  return response;
}

export const config = {
  matcher: ["/app/:path*", "/api/:path*"],
};
```
Vercel Edge Functions strengths:
- First-class Next.js integration (middleware, edge API routes, edge-rendered pages)
- Automatic deployment — push to Git and edge functions deploy globally
- Edge Config for low-latency feature flags and configuration (reads in ~1ms)
- Seamless fallback to serverless functions for workloads that need Node.js APIs
```typescript
// Vercel Edge Function: A/B testing with Edge Config
import { NextRequest, NextResponse } from "next/server";
import { get } from "@vercel/edge-config";

export async function middleware(request: NextRequest) {
  const abTestConfig = await get("homepage-experiment");
  if (!abTestConfig) return NextResponse.next();

  const bucket = request.cookies.get("ab-bucket")?.value
    ?? (Math.random() < 0.5 ? "control" : "variant");

  const response = NextResponse.rewrite(
    new URL(bucket === "variant" ? "/home-variant" : "/home", request.url)
  );

  if (!request.cookies.has("ab-bucket")) {
    response.cookies.set("ab-bucket", bucket, { maxAge: 60 * 60 * 24 * 30 });
  }

  return response;
}
```
Vercel Edge Functions constraints:
- Tied to the Vercel platform (cannot self-host edge functions)
- Limited Node.js API surface (same V8 isolate model)
- Edge functions have a 25-second execution limit
- Database connections require external adapters (no persistent connections from edge)
Deno Deploy
Deno Deploy runs TypeScript and JavaScript on Deno's global edge network. It uses V8 isolates like the others but includes Deno's built-in standard library, which provides more APIs than bare V8 (including fetch, Web Streams, and Web Crypto natively).
```typescript
// Deno Deploy: edge API with KV storage
const kv = await Deno.openKv();

Deno.serve(async (req: Request) => {
  const url = new URL(req.url);

  if (url.pathname === "/api/pageviews" && req.method === "POST") {
    const { page } = await req.json();
    const key = ["pageviews", page];

    // Atomic increment; sum() stores the counter as a Deno.KvU64
    await kv.atomic()
      .sum(key, 1n)
      .commit();

    const result = await kv.get<Deno.KvU64>(key);
    return new Response(
      JSON.stringify({ views: Number(result.value?.value ?? 0n) }),
      { headers: { "Content-Type": "application/json" } }
    );
  }

  return new Response("Not found", { status: 404 });
});
```
Deno Deploy strengths:
- Native TypeScript support without build step
- Deno KV provides globally distributed key-value storage with strong consistency
- Web-standard APIs (fetch, Streams, Crypto) work without polyfills
- Supabase Edge Functions run on Deno Deploy, giving you PostgreSQL access at the edge
- Open-source runtime — you can run the same code locally with `deno serve`
Deno Deploy constraints:
- Smaller ecosystem than Node.js (many npm packages work via compatibility layer, but not all)
- Fewer edge locations than Cloudflare (35+ regions vs 300+)
- Less mature than Cloudflare Workers for production workloads
Platform Comparison Table
| Dimension | Cloudflare Workers | Vercel Edge Functions | Deno Deploy |
|-----------|-------------------|----------------------|-------------|
| Edge locations | 300+ | 100+ | 35+ |
| Cold start | Under 1ms | Under 1ms | Under 5ms |
| Runtime | V8 isolates | V8 isolates | V8 isolates (Deno) |
| Max execution time | 30s (paid) | 25s | 50ms CPU per request |
| Persistent storage | KV, D1, R2, Durable Objects | Edge Config, Blob Store | Deno KV |
| Framework integration | Any (Hono, SvelteKit, etc.) | Next.js (deep), others via adapters | Fresh, Hono, any |
| Node.js compat | Partial | Partial | Via compatibility layer |
| Self-hostable | No | No | Runtime is open-source |
| Free tier | 100K requests/day | 1M executions/month | 1M requests/month |
When to Use Edge Computing
Edge compute delivers the most value for specific categories of workloads. Moving the wrong workload to the edge creates complexity without meaningful performance gains.
Ideal Edge Workloads
Authentication and authorization. Verifying JWTs, checking session cookies, and redirecting unauthenticated users can happen entirely at the edge with no origin round trip.
```typescript
// Edge auth: verify JWT without hitting origin
import { NextRequest, NextResponse } from "next/server";
import { jwtVerify } from "jose";

export async function middleware(request: NextRequest) {
  const token = request.cookies.get("auth-token")?.value;

  if (!token) {
    return NextResponse.redirect(new URL("/login", request.url));
  }

  try {
    const { payload } = await jwtVerify(
      token,
      new TextEncoder().encode(process.env.JWT_SECRET)
    );
    const response = NextResponse.next();
    response.headers.set("x-user-id", payload.sub as string);
    return response;
  } catch {
    return NextResponse.redirect(new URL("/login", request.url));
  }
}
```
Geolocation and personalization. Serving localized content, currency conversion, and region-specific pricing based on the user's location.
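As a sketch of the idea, a country-to-currency lookup like this can run in edge middleware before a page renders. The region and currency tables here are illustrative, not from any platform API:

```typescript
// Hypothetical region-based pricing lookup; tables are illustrative.
type Region = "US" | "EU" | "APAC";

const REGION_BY_COUNTRY: Record<string, Region> = {
  US: "US", CA: "US",
  DE: "EU", FR: "EU", AT: "EU",
  JP: "APAC", KR: "APAC", SG: "APAC",
};

const CURRENCY_BY_REGION: Record<Region, string> = {
  US: "USD",
  EU: "EUR",
  APAC: "USD",
};

// Resolve a display currency from a two-letter country code (falls back to USD).
export function currencyForCountry(country: string): string {
  const region = REGION_BY_COUNTRY[country] ?? "US";
  return CURRENCY_BY_REGION[region];
}
```

At the edge, the country code would come from a platform signal such as `request.cf.country` or `request.geo.country` rather than a function argument.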
A/B testing and feature flags. Assigning users to experiment buckets and rewriting requests to different variants at the edge, before the page even starts rendering.
API routing and rate limiting. Routing requests to the correct backend service and enforcing rate limits without consuming origin server resources.
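A minimal fixed-window rate limiter illustrates the shape of this logic. This sketch keeps state in an in-memory Map, which only limits per isolate; a production limiter would typically back it with KV, Durable Objects, or an external store (the function name and window logic are my own, not from any platform):

```typescript
// Fixed-window rate limiter sketch. State is per-isolate only.
const windows = new Map<string, { count: number; resetAt: number }>();

// Returns true if the request identified by `key` is within `limit`
// requests per `windowMs`. `now` is injectable for testing.
export function isAllowed(
  key: string,
  limit: number,
  windowMs: number,
  now = Date.now()
): boolean {
  const entry = windows.get(key);

  // No entry, or the window expired: start a fresh window.
  if (!entry || now >= entry.resetAt) {
    windows.set(key, { count: 1, resetAt: now + windowMs });
    return true;
  }

  entry.count += 1;
  return entry.count <= limit;
}
```

An edge handler would call `isAllowed(clientIp, 100, 60_000)` and return a 429 response when it yields false.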
Bot detection and security. Blocking malicious traffic, verifying CAPTCHAs, and enforcing WAF rules before requests reach your application.
Content transformation. Rewriting HTML, injecting headers, modifying response bodies, or converting image formats on the fly.
Workloads That Belong on the Origin
Heavy database queries. If your logic requires multiple SQL joins, transactions, or writes to a primary database, the database connection latency from edge to origin negates the edge location benefit. Edge-to-database round trips can be slower than a direct origin request.
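Back-of-the-envelope arithmetic (with assumed round-trip times) shows why chatty database access from the edge backfires:

```typescript
// Illustrative latency math: three sequential queries from a Tokyo edge node
// to a Virginia database pay the cross-ocean round trip three times.
const RTT_EDGE_TO_DB_MS = 150;    // assumed Tokyo <-> Virginia round trip
const RTT_USER_TO_ORIGIN_MS = 150; // assumed single trip to a Virginia origin
const QUERY_COUNT = 3;

// Edge path: network cost scales with the number of sequential queries.
const edgePathMs = QUERY_COUNT * RTT_EDGE_TO_DB_MS;

// Origin path: one cross-ocean trip; queries run next to the database.
const originPathMs = RTT_USER_TO_ORIGIN_MS;

console.log({ edgePathMs, originPathMs });
```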
Long-running computations. PDF generation, video transcoding, machine learning inference, and batch processing need more CPU time and memory than edge platforms provide.
Stateful operations. Multi-step workflows that require locks, queues, or persistent state are better served by traditional servers or containers (with the exception of Cloudflare Durable Objects for specific use cases).
Large payload processing. Parsing or transforming large request/response bodies hits memory limits on edge platforms quickly.
Architecture Patterns
Pattern 1: Edge Middleware + Origin API
The most common pattern. Edge middleware handles auth, routing, and request enrichment. The origin server handles business logic and database operations.
User → Edge (auth, routing, headers) → Origin (business logic, DB) → Response
This gives you the latency benefits of edge auth and routing without moving your entire application. It is the default pattern for Next.js on Vercel.
Pattern 2: Edge-First with KV Cache
For read-heavy applications, cache frequently accessed data in edge KV storage and serve it directly. Cache misses fall through to the origin.
```typescript
// Edge-first with KV cache fallback
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);
    const cacheKey = `page:${url.pathname}`;

    const cached = await env.CONTENT_KV.get(cacheKey, "text");
    if (cached) {
      return new Response(cached, {
        headers: {
          "Content-Type": "text/html",
          "X-Cache": "HIT",
          "Cache-Control": "public, max-age=60",
        },
      });
    }

    // Cache miss: fall through to the origin and populate KV for 5 minutes
    const originResponse = await fetch(`${env.ORIGIN_URL}${url.pathname}`);
    const html = await originResponse.text();
    await env.CONTENT_KV.put(cacheKey, html, { expirationTtl: 300 });

    return new Response(html, {
      headers: {
        "Content-Type": "text/html",
        "X-Cache": "MISS",
      },
    });
  },
};
```
Pattern 3: Full Edge Application
For applications with simple data needs, you can run the entire backend at the edge using D1 (Cloudflare) or Deno KV (Deno Deploy). This works well for blogs, documentation sites, URL shorteners, and API proxies.
```typescript
// Full edge app: URL shortener with D1
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);
    const slug = url.pathname.slice(1);

    if (!slug) {
      return new Response("URL Shortener", { status: 200 });
    }

    const result = await env.DB.prepare(
      "SELECT target_url FROM links WHERE slug = ?"
    ).bind(slug).first<{ target_url: string }>();

    if (!result) {
      return new Response("Not found", { status: 404 });
    }

    await env.DB.prepare(
      "UPDATE links SET click_count = click_count + 1 WHERE slug = ?"
    ).bind(slug).run();

    return Response.redirect(result.target_url, 302);
  },
};
```
Common Pitfalls
Cold Starts Are Not the Problem You Think
V8 isolate-based platforms (Cloudflare Workers, Vercel Edge, Deno Deploy) have cold starts under 5ms. This is fundamentally different from traditional serverless functions (AWS Lambda, Google Cloud Functions) where cold starts can exceed 1 second. Do not avoid edge compute because of cold start concerns — the isolate model solved this.
Database Connections from the Edge
Edge functions cannot maintain persistent database connection pools because each request may run in a different isolate. Use connection poolers (PgBouncer, Supabase connection pooling, Neon serverless driver) or HTTP-based database clients.
```typescript
// Neon serverless driver: designed for edge environments
import { neon } from "@neondatabase/serverless";

const sql = neon(process.env.DATABASE_URL!);

export async function GET(request: Request) {
  const posts = await sql`
    SELECT id, title, slug, published_at
    FROM posts
    WHERE status = 'published'
    ORDER BY published_at DESC
    LIMIT 20
  `;
  return Response.json(posts);
}
```
Testing Edge Locally
All three platforms provide local development tooling that simulates the edge runtime. Use them — do not deploy to production to test edge behavior.
- Cloudflare: `wrangler dev` runs a local Miniflare environment
- Vercel: `next dev` runs middleware and edge routes locally
- Deno: `deno serve` runs your edge functions with the same runtime
The Future of Edge Compute
Edge computing in 2026 is converging on a few clear trends. Edge databases (D1, Deno KV, Turso) are bringing data closer to compute. WebAssembly is expanding the edge beyond JavaScript to Rust, Go, and Python. And frameworks like Next.js, SvelteKit, and Remix are making edge deployment a configuration choice rather than an architectural rewrite.
The edge is not replacing cloud infrastructure. It is becoming the first layer of your application — handling the fast path (auth, routing, cached reads) while your origin handles the complex path (transactions, writes, heavy computation).
Getting Started
If you have not adopted edge computing yet, start small. Pick one workload — authentication middleware, geolocation routing, or a high-traffic API endpoint — and deploy it to the edge. Measure the TTFB improvement. Then expand based on data.
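One way to sample TTFB is directly from the Fetch API. This is a rough sketch, assuming a runtime with global `fetch` and `performance`; the function name is my own:

```typescript
// Rough TTFB sample: time from request start until the first response byte.
export async function measureTtfbMs(url: string): Promise<number> {
  const start = performance.now();
  const res = await fetch(url);

  // Wait for the first chunk of the body, i.e. the first byte.
  if (res.body) {
    await res.body.getReader().read();
  }

  return performance.now() - start;
}
```

Run a batch of samples against the same route before and after moving it to the edge, from a region far from your origin, and compare medians rather than single requests.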
If you are planning a new application architecture or looking to optimize an existing one with edge computing, talk to our team. We design and build edge-first architectures on Cloudflare Workers, Vercel Edge, and Deno Deploy — helping you put compute where it delivers the most impact without over-engineering your stack.
Move compute closer to your users. Measure the difference. Scale what works.