# Rate Limiting Jobs

## The problem
You have 1,000 webhook delivery jobs in the queue. The worker processes them as fast as possible — 50 per second. The receiving server cannot handle that load and starts returning 429 (Too Many Requests). Every job fails. They all retry. The retry storm makes it worse.
Job rate limiting controls how fast the worker processes jobs of a specific type, preventing it from overwhelming external services.
## A simple rate limiter
```typescript
// src/rate-limiter.ts
export class JobRateLimiter {
  private timestamps: Map<string, number[]> = new Map();

  canProcess(type: string, maxPerMinute: number): boolean {
    const now = Date.now();
    const windowMs = 60_000;
    let times = this.timestamps.get(type) ?? [];
    // Remove timestamps outside the window
    times = times.filter((t) => now - t < windowMs);
    this.timestamps.set(type, times);
    return times.length < maxPerMinute;
  }

  record(type: string): void {
    const times = this.timestamps.get(type) ?? [];
    times.push(Date.now());
    this.timestamps.set(type, times);
  }
}
```

This tracks how many jobs of each type were processed in the last 60 seconds. Once the count reaches the limit, canProcess returns false and the worker skips that job type for now.
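As a quick sanity check, here is a sketch of the sliding-window behavior. The class is inlined from above so the snippet runs standalone:

```typescript
// Inlined copy of JobRateLimiter from src/rate-limiter.ts so this
// snippet is self-contained.
class JobRateLimiter {
  private timestamps: Map<string, number[]> = new Map();

  canProcess(type: string, maxPerMinute: number): boolean {
    const now = Date.now();
    const windowMs = 60_000;
    let times = this.timestamps.get(type) ?? [];
    times = times.filter((t) => now - t < windowMs);
    this.timestamps.set(type, times);
    return times.length < maxPerMinute;
  }

  record(type: string): void {
    const times = this.timestamps.get(type) ?? [];
    times.push(Date.now());
    this.timestamps.set(type, times);
  }
}

const limiter = new JobRateLimiter();

// Record three webhook deliveries in the current window.
for (let i = 0; i < 3; i++) limiter.record("send_webhook");

console.log(limiter.canProcess("send_webhook", 3)); // false — at the limit
console.log(limiter.canProcess("send_webhook", 5)); // true — room for two more
```

Note that canProcess is a read-only check: the worker must call record separately after processing, so a skipped job never counts against the window.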
## Integrating with the worker
```typescript
const rateLimiter = new JobRateLimiter();

const RATE_LIMITS: Record<string, number> = {
  send_webhook: 30, // 30 webhooks per minute
  sync_inventory: 60, // 60 syncs per minute
  send_email: 120, // 120 emails per minute
};

async function run(): Promise<void> {
  while (running) {
    const job = dequeue(WORKER_ID);
    if (!job) {
      await new Promise((r) => setTimeout(r, POLL_INTERVAL_MS));
      continue;
    }

    // Check rate limit for this job type
    const limit = RATE_LIMITS[job.type];
    if (limit && !rateLimiter.canProcess(job.type, limit)) {
      // Put the job back — cannot process it yet
      db.prepare(
        `
        UPDATE jobs
        SET status = 'pending', locked_by = NULL, locked_at = NULL,
            scheduled_at = datetime('now', '+5 seconds')
        WHERE id = ?
      `,
      ).run(job.id);
      await new Promise((r) => setTimeout(r, 1000));
      continue;
    }

    try {
      await processJob(job);
      completeJob(job.id);
      if (limit) rateLimiter.record(job.type);
    } catch (err) {
      failJob(job.id, err instanceof Error ? err.message : String(err));
    }
  }
}
```

When the rate limit is reached, the worker puts the job back in the queue with a short delay (+5 seconds) and moves on to other job types. The job is not lost — it waits briefly and is picked up again when the rate allows.
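The fixed 5-second delay is a reasonable default, but a rate-limited job can bounce through the queue several times before a slot opens. One possible refinement, sketched below, is to ask the limiter how long until the oldest in-window timestamp ages out and use that as the requeue delay. The msUntilSlot method is hypothetical, not part of the JobRateLimiter above:

```typescript
// Sketch: a limiter that can report how long until a slot frees up.
// msUntilSlot is an assumed extension, not part of the original class.
class SlotAwareRateLimiter {
  private timestamps: Map<string, number[]> = new Map();

  record(type: string): void {
    const times = this.timestamps.get(type) ?? [];
    times.push(Date.now());
    this.timestamps.set(type, times);
  }

  // Returns 0 if a slot is free now, otherwise the milliseconds until
  // the oldest in-window timestamp leaves the 60-second window.
  msUntilSlot(type: string, maxPerMinute: number): number {
    const now = Date.now();
    const windowMs = 60_000;
    const times = (this.timestamps.get(type) ?? []).filter(
      (t) => now - t < windowMs,
    );
    this.timestamps.set(type, times);
    if (times.length < maxPerMinute) return 0;
    const oldest = Math.min(...times);
    return oldest + windowMs - now;
  }
}

// The worker could then schedule the requeued job precisely:
//   scheduled_at = datetime('now', '+' || Math.ceil(waitMs / 1000) || ' seconds')
```

This trades a little extra bookkeeping for fewer wasted dequeue/requeue cycles on hot job types.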
> [!NOTE]
> The Securing Your API course’s rate limiting lesson used a similar sliding window pattern for HTTP requests. The concept is the same: count events in a time window and reject when the count exceeds the limit. Here it controls job processing speed instead of request speed.
## Rate limits per external service
Group rate limits by the service they call, not by job type:
```typescript
const SERVICE_LIMITS: Record<string, { types: string[]; maxPerMinute: number }> = {
  email_provider: {
    types: ["send_email", "send_digest", "send_notification"],
    maxPerMinute: 120,
  },
  payment_gateway: {
    types: ["charge_card", "refund", "retry_payment"],
    maxPerMinute: 60,
  },
};
```

All email-related jobs share one rate limit because they all hit the same email API. This prevents three different email job types from each sending 120 emails per minute (360 total) when the provider only allows 120.
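To use service-level limits, the worker needs to resolve each job type to its service's shared counter. One way, sketched here with SERVICE_LIMITS repeated so the snippet stands alone, is to invert the map once at startup and then rate-limit on the service name instead of the job type:

```typescript
// Repeated from above so this snippet is self-contained.
const SERVICE_LIMITS: Record<string, { types: string[]; maxPerMinute: number }> = {
  email_provider: {
    types: ["send_email", "send_digest", "send_notification"],
    maxPerMinute: 120,
  },
  payment_gateway: {
    types: ["charge_card", "refund", "retry_payment"],
    maxPerMinute: 60,
  },
};

// Invert once at startup: job type -> { service, maxPerMinute }.
const TYPE_TO_SERVICE = new Map<string, { service: string; maxPerMinute: number }>();
for (const [service, { types, maxPerMinute }] of Object.entries(SERVICE_LIMITS)) {
  for (const type of types) {
    TYPE_TO_SERVICE.set(type, { service, maxPerMinute });
  }
}

// In the worker loop, the service name becomes the limiter key, so all
// of a service's job types share one window:
//   const svc = TYPE_TO_SERVICE.get(job.type);
//   if (svc && !rateLimiter.canProcess(svc.service, svc.maxPerMinute)) { /* requeue */ }
//   ...
//   if (svc) rateLimiter.record(svc.service);

const svc = TYPE_TO_SERVICE.get("send_digest");
console.log(svc); // { service: "email_provider", maxPerMinute: 120 }
```

Keying the limiter on the service name is what makes the 120-per-minute budget shared: send_email, send_digest, and send_notification all increment the same counter.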
## Exercises
Exercise 1: Implement the rate limiter. Set a limit of 5 per minute. Enqueue 20 jobs. Verify only 5 run in the first minute.
Exercise 2: Enqueue jobs of two different types with different rate limits. Verify each type respects its own limit independently.
Exercise 3: Remove the rate limiter and send 100 webhook jobs. Observe the failure rate when the simulated service returns 429.
Why does the worker put a rate-limited job back in the queue instead of waiting in a loop?