Denial of Service via Input
When the input is the weapon
The previous vulnerabilities let the attacker read data, modify data, or execute code. Denial of service (DoS) via input does not steal anything — it makes your server unavailable by exhausting its resources with a single request.
ReDoS: catastrophic backtracking
Regular expressions can be weaponized. Some patterns, when matched against specific input, cause the regex engine to explore an exponential number of paths. This is called catastrophic backtracking or ReDoS (Regular Expression Denial of Service).
Consider a regex that validates email-like input:
```ts
const emailRegex = /^([a-zA-Z0-9]+)+@[a-zA-Z0-9]+\.[a-zA-Z]+$/;
```

The `([a-zA-Z0-9]+)+` is the dangerous part. The nested quantifiers (`+` inside `+`) create exponential backtracking when the input does not match. The attacker sends:

```
aaaaaaaaaaaaaaaaaaaaaaaaaaaa!
```

(28 `a`s followed by `!`, which forces a mismatch)
The regex engine tries every possible way to split the `a`s between the inner and outer `+`. With 28 characters, that is 2^27 combinations — over 134 million. Your server hangs for minutes on a single request.
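You can observe the exponential growth yourself by timing the match at a few deliberately short input lengths — a small standalone sketch, assuming a Node.js script (stop well short of 28 characters, or the dangerous pattern will hang):

```typescript
// Times how long a regex takes to accept or reject a given input.
function timeMatch(re: RegExp, input: string): number {
  const start = performance.now();
  re.test(input);
  return performance.now() - start;
}

const dangerous = /^([a-zA-Z0-9]+)+@[a-zA-Z0-9]+\.[a-zA-Z]+$/;
const safe = /^[a-zA-Z0-9]+@[a-zA-Z0-9]+\.[a-zA-Z]+$/;

// Each extra character roughly doubles the work for the dangerous pattern,
// so the timings diverge quickly even at these small sizes.
for (const n of [12, 16, 20]) {
  const input = "a".repeat(n) + "!";
  console.log(
    `n=${n}: dangerous=${timeMatch(dangerous, input).toFixed(2)}ms, ` +
      `safe=${timeMatch(safe, input).toFixed(2)}ms`,
  );
}
```

The safe pattern rejects the input in constant time regardless of length; the dangerous one slows visibly with every added character.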
The fix: Avoid nested quantifiers. Use simple, non-backtracking patterns or use Zod for validation instead of regex:
```ts
// DANGEROUS — nested quantifiers
const bad = /^([a-zA-Z0-9]+)+$/;

// SAFE — single quantifier
const good = /^[a-zA-Z0-9]+$/;

// SAFEST — use Zod
const schema = z.object({ email: z.email() });
```

> [!TIP]
> A rule of thumb: if your regex has a quantifier (`+`, `*`, `{n,m}`) inside another quantifier, it is potentially vulnerable to ReDoS. Restructure it to remove the nesting.
JSON bombs
A deeply nested JSON object can exhaust memory or stack space when parsed:
```json
{
  "a": {
    "a": {
      "a": {
        "a": { "a": { "a": { "a": { "a": { "a": { "a": { "a": { "a": { "a": "deep" } } } } } } } } }
      }
    }
  }
}
```

Or a massive array:

```
[1,1,1,1,1,1,1,...] // millions of elements
```

Node.js’s `JSON.parse` handles deeply nested objects reasonably well, but a 100MB JSON body will consume 100MB+ of memory.
The fix: Limit request body size. Add a check before parsing:
```ts
route.post("/notes", {
  resolve: async (c) => {
    // Check Content-Length before reading the body
    const contentLength = parseInt(c.request.headers.get("content-length") ?? "0");
    if (contentLength > 100_000) {
      // 100KB limit
      return Response.json({ error: "Request too large" }, { status: 413 });
    }
    const body = await c.request.json();
    // ...
  },
});
```

A global body size limit can be applied in `onRequest`:
```ts
onRequest: ({ request }) => {
  const contentLength = parseInt(request.headers.get("content-length") ?? "0");
  if (contentLength > 1_000_000) {
    // 1MB global limit
    return new Response(
      JSON.stringify({ error: "Request too large" }),
      { status: 413, headers: { "content-type": "application/json" } },
    );
  }
  return { startTime: Date.now() };
},
```

> [!NOTE]
> `Content-Length` can be spoofed (the client can claim a smaller size than the actual body). For full protection, you would also limit the bytes read from the request stream. But checking `Content-Length` stops honest clients and naive attacks.
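That stream-level cap can be sketched with the standard `ReadableStream` reader API. The helper below is illustrative, not part of any framework; it returns `null` when the cap is exceeded so the caller can respond with 413:

```typescript
// Hypothetical helper: read a request body while enforcing a hard byte cap,
// so a spoofed Content-Length cannot bypass the limit.
async function readBodyWithLimit(
  request: Request,
  maxBytes: number,
): Promise<Uint8Array | null> {
  if (!request.body) return new Uint8Array(0);
  const reader = request.body.getReader();
  const chunks: Uint8Array[] = [];
  let total = 0;
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    total += value.byteLength;
    if (total > maxBytes) {
      // Stop pulling bytes from the socket; caller should respond with 413.
      await reader.cancel();
      return null;
    }
    chunks.push(value);
  }
  // Concatenate the chunks into a single buffer.
  const out = new Uint8Array(total);
  let offset = 0;
  for (const chunk of chunks) {
    out.set(chunk, offset);
    offset += chunk.byteLength;
  }
  return out;
}
```

A route would call this instead of `c.request.json()`, decoding and parsing the returned bytes only when they fit under the cap.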
Slowloris and request timeouts
A slowloris attack sends HTTP requests very slowly, keeping connections open and exhausting the server’s connection pool. The attacker opens hundreds of connections and sends one byte per second on each.
This is primarily an infrastructure concern (handled by reverse proxies like Nginx, which close slow connections). But you can also set request timeouts in your app:
```ts
// In srvx configuration
serve({
  fetch: app.fetch,
  port: 3000,
  // Most runtimes support timeout configuration
});
```

The exact configuration depends on your runtime and reverse proxy. The key principle: never let a single request consume unlimited time or connections.
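At the application level, one runtime-agnostic way to enforce that principle is to race the handler against a deadline. This is a sketch with illustrative names (`withTimeout` is not a srvx API):

```typescript
// Illustrative wrapper: race a fetch-style handler against a deadline so
// one slow request cannot hold a worker indefinitely.
async function withTimeout(
  handler: (req: Request) => Promise<Response>,
  request: Request,
  ms = 10_000,
): Promise<Response> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const deadline = new Promise<Response>((resolve) => {
    timer = setTimeout(
      () => resolve(Response.json({ error: "Request timed out" }, { status: 408 })),
      ms,
    );
  });
  try {
    return await Promise.race([handler(request), deadline]);
  } finally {
    // Avoid leaking the timer when the handler finishes first.
    clearTimeout(timer);
  }
}
```

Note that racing does not cancel the slow handler's underlying work; for that you would thread an `AbortSignal` through to the handler as well.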
Summary of input-based DoS defenses
| Attack | Defense |
|---|---|
| ReDoS | Avoid nested quantifiers in regex. Use Zod for validation. |
| JSON bombs / large payloads | Limit request body size (Content-Length check). |
| Deeply nested JSON | Limit nesting depth if parsing untrusted JSON. |
| Slowloris | Set request timeouts. Use a reverse proxy. |
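The depth-limit defense in the table can be sketched as a pre-parse scan over the raw text. The helper below is hypothetical (it assumes the input is otherwise valid JSON; brackets inside string literals are ignored):

```typescript
// Hypothetical pre-parse check: measure the maximum brace/bracket nesting
// in raw JSON text before handing it to JSON.parse.
function jsonDepth(text: string): number {
  let depth = 0;
  let max = 0;
  let inString = false;
  let escaped = false;
  for (const ch of text) {
    if (escaped) {
      escaped = false; // character after a backslash is literal
    } else if (inString) {
      if (ch === "\\") escaped = true;
      else if (ch === '"') inString = false;
    } else if (ch === '"') {
      inString = true;
    } else if (ch === "{" || ch === "[") {
      max = Math.max(max, ++depth);
    } else if (ch === "}" || ch === "]") {
      depth--;
    }
  }
  return max;
}
```

A route could reject bodies where `jsonDepth(raw)` exceeds some small bound (say, 20) with a 400 before ever calling `JSON.parse`.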
Exercises
Exercise 1: Create a route that uses the dangerous regex `^([a-zA-Z0-9]+)+$`. Send a request with `aaaaaaaaaaaaaaaaaaaaaaaaaaaa!` as input. Observe how long it takes. Then replace the regex with `^[a-zA-Z0-9]+$` and try again.
Exercise 2: Add the `Content-Length` check to a route. Send a request with a very large body (use `dd if=/dev/zero bs=1M count=10 | curl ...`). Verify it is rejected with 413.
Exercise 3: Review all regex patterns in your codebase. Do any have nested quantifiers?
What makes a regex vulnerable to ReDoS?