Long-running operations
Not everything finishes instantly
Most API requests complete in milliseconds. The server reads from the database, formats the response, sends it back. Done. But some operations take longer: generating a report from thousands of records, processing a large file upload, sending bulk emails, exporting an entire dataset to CSV.
If these operations block the HTTP request, the client sits there waiting. The connection might time out after 30 seconds. The user stares at a spinner. And if the connection drops, they have no idea whether the operation completed or not.
We need a different pattern for these cases.
The 202 Accepted pattern
Instead of blocking until the work is done, the server accepts the request immediately and processes it in the background:
POST /reports -> 202 { "id": "report-1", "status": "processing" }
GET /reports/report-1 -> 200 { "id": "report-1", "status": "processing" }
GET /reports/report-1 -> 200 { "id": "report-1", "status": "completed", "result": { ... } }

The flow is:
- The client sends a POST to start the operation.
- The server returns 202 Accepted immediately. This means “I got your request and I’ll work on it, but it’s not done yet.” The response includes an ID the client can use to check back later.
- The client polls the status endpoint periodically to see if the work is done.
- When it’s done, the status endpoint returns the result.
Implementation
Let’s build this for a report generation feature:
// In-memory job store (use a database or queue in production)
const jobs = new Map<
string,
{
status: "processing" | "completed" | "failed";
result?: any;
error?: string;
createdAt: number;
}
>();
// Start a long-running operation
route.post("/reports", {
resolve: async (c) => {
const body = await c.request.json();
const id = crypto.randomUUID();
// Store the job as processing
jobs.set(id, { status: "processing", createdAt: Date.now() });
// Process in the background (fire-and-forget)
processReport(id, body).catch((err) => {
jobs.set(id, { status: "failed", error: err.message, createdAt: Date.now() });
});
return Response.json(
{ id, status: "processing" },
{
status: 202,
headers: { location: `/reports/${id}` },
},
);
},
});

Let’s walk through what happens when a client sends POST /reports.
The server generates a unique ID for this job and stores it in the jobs map with a status of "processing". Then it kicks off processReport in the background. Notice the .catch(): if the background work fails, we update the job status to "failed" with the error message.
The key detail: the server returns 202 immediately, before the report is actually generated. The Location header points to /reports/{id}, telling the client where to check the status.
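The client doesn’t have to hard-code the status URL. A small helper can prefer the Location header and fall back to building the path from the body’s id; the helper name and fallback below are my own sketch, not part of the chapter’s code:

```typescript
// Sketch: prefer the Location header from a 202 response; fall back to the
// body's id if the server didn't set one. Name and fallback are assumptions.
function statusUrlFrom(res: Response, body: { id: string }): string {
  return res.headers.get("location") ?? `/reports/${body.id}`;
}

// Demonstrated with a locally constructed Response:
const accepted = new Response(JSON.stringify({ id: "report-1", status: "processing" }), {
  status: 202,
  headers: { location: "/reports/report-1" },
});
```

Reading Location instead of string-building keeps the client working even if the server later moves the status endpoint.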
Here’s the background processing function:
async function processReport(id: string, params: any): Promise<void> {
// Simulate long-running work
await new Promise((resolve) => setTimeout(resolve, 5000));
const result = {
title: "Book Sales Report",
generatedAt: new Date().toISOString(),
totalBooks: { count: books.length },
};
jobs.set(id, { status: "completed", result, createdAt: Date.now() });
}

In a real app, this might query millions of rows, aggregate data, generate a PDF, or call external services. The setTimeout simulates that work.
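When the work runs long, the job record can also report partial progress. This is an optional refinement not shown in the chapter’s code; the progress field and helper below are my own sketch:

```typescript
// Sketch: a job record extended with a 0-1 progress fraction, plus a helper
// the worker can call between steps. Field and function names are assumptions.
type ProgressJob = {
  status: "processing" | "completed" | "failed";
  progress: number;
  createdAt: number;
};

const progressJobs = new Map<string, ProgressJob>();

function reportProgress(id: string, fraction: number): void {
  const job = progressJobs.get(id);
  if (job && job.status === "processing") {
    // Clamp to [0, 1] so a buggy worker can't report 150% done
    job.progress = Math.min(1, Math.max(0, fraction));
  }
}
```

The status endpoint could then include progress in its "processing" response, letting the client show a real progress bar instead of a spinner.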
Now the status endpoint:
route.get("/reports/:id", {
request: { params: z.object({ id: z.string() }) },
resolve: (c) => {
if (!c.input.ok) return Response.json({ error: c.input.issues }, { status: 400 });
const { id } = c.input.params;
const job = jobs.get(id);
if (!job) return notFound("Report");
if (job.status === "completed") {
return Response.json({
id,
status: "completed",
result: job.result,
});
}
if (job.status === "failed") {
return Response.json({
id,
status: "failed",
error: job.error,
});
}
// Still processing, tell the client to check back
return Response.json({ id, status: "processing" }, { headers: { "retry-after": "5" } });
},
});

The status endpoint checks the job’s state and returns the appropriate response. If the job is still processing, it includes a Retry-After: 5 header, telling the client to wait 5 seconds before polling again.
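A variant worth knowing: instead of returning the result inline, some APIs answer the poll with 303 See Other once the job completes, redirecting the client to a dedicated result resource. A minimal sketch, where the /reports/:id/result path is my invention:

```typescript
// Sketch: respond to a status poll for a completed job with a 303 redirect
// to a separate result resource. The /reports/:id/result path is assumed.
function completedRedirect(id: string): Response {
  return new Response(null, {
    status: 303,
    headers: { location: `/reports/${id}/result` },
  });
}
```

This keeps the status resource small and lets the result be fetched (and cached) with a plain GET.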
The client’s polling loop
Here’s what the client code looks like:
// Client-side
const { id } = await fetch("/reports", { method: "POST", body: JSON.stringify(params) }).then((r) =>
r.json(),
);
let result;
while (true) {
const res = await fetch(`/reports/${id}`);
const data = await res.json();
if (data.status === "completed") {
result = data.result;
break;
}
if (data.status === "failed") {
throw new Error(data.error);
}
// Wait before polling again
const retryAfter = parseInt(res.headers.get("retry-after") ?? "5", 10);
await new Promise((r) => setTimeout(r, retryAfter * 1000));
}

The client starts the operation, then enters a loop. It checks the status, and if the work isn’t done yet, it waits the amount of time the server suggested before checking again. This prevents the client from hammering the server with polls every millisecond.
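When the server omits Retry-After, or a job may run for minutes, a fixed interval either polls too often or too rarely. A common refinement is capped exponential backoff; the helper below is a sketch (the names are mine):

```typescript
// Sketch: double the delay on each attempt, capped so polls never get too
// sparse. baseSeconds would come from Retry-After (or a default).
function backoffDelays(baseSeconds: number, attempts: number, capSeconds = 60): number[] {
  const delays: number[] = [];
  for (let i = 0; i < attempts; i++) {
    delays.push(Math.min(baseSeconds * 2 ** i, capSeconds));
  }
  return delays;
}

// backoffDelays(5, 5) -> [5, 10, 20, 40, 60]
```

Bounding the number of attempts (and giving up with an error after the last one) also protects the client from looping forever if the server never finishes the job.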
When to use 202
Use 202 when: the operation takes more than a few seconds, the work can fail independently of the request, or you want to decouple the request from the processing.
Use synchronous responses (200/201) when: the operation completes quickly, the client needs the result right away, or the operation is simple enough that blocking is fine.
A good rule of thumb: if the operation might take more than 2-3 seconds, consider making it asynchronous with 202.
Cleaning up old jobs
Job records don’t need to stick around forever. Set a TTL and clean up completed or failed jobs periodically:
// Delete jobs older than 24 hours
setInterval(
() => {
const cutoff = Date.now() - 24 * 60 * 60 * 1000;
for (const [id, job] of jobs) {
if (job.createdAt < cutoff) jobs.delete(id);
}
},
60 * 60 * 1000,
);

In a production system, you’d store jobs in a database or use a message queue like Redis or RabbitMQ instead of an in-memory map. The pattern stays the same.
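One way to prepare for that swap is to hide the storage behind a small interface, so the route handlers don’t care whether jobs live in memory, a database, or Redis. The interface below is my own sketch, not the chapter’s code:

```typescript
// Sketch: an async job-store contract the handlers can depend on. The
// in-memory version mirrors the Map used above; a Redis- or SQL-backed
// implementation would satisfy the same interface.
type Job = {
  status: "processing" | "completed" | "failed";
  result?: unknown;
  error?: string;
  createdAt: number;
};

interface JobStore {
  set(id: string, job: Job): Promise<void>;
  get(id: string): Promise<Job | undefined>;
  deleteOlderThan(cutoff: number): Promise<number>;
}

class InMemoryJobStore implements JobStore {
  private jobs = new Map<string, Job>();
  async set(id: string, job: Job): Promise<void> {
    this.jobs.set(id, job);
  }
  async get(id: string): Promise<Job | undefined> {
    return this.jobs.get(id);
  }
  // Returns how many jobs were removed, which is handy for logging
  async deleteOlderThan(cutoff: number): Promise<number> {
    let removed = 0;
    for (const [id, job] of this.jobs) {
      if (job.createdAt < cutoff) {
        this.jobs.delete(id);
        removed++;
      }
    }
    return removed;
  }
}
```

Because every method is async, swapping in a store that talks to a real database changes no handler code.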
What’s next
We’ve covered almost every REST pattern. There’s one more concept worth understanding: HATEOAS, the idea that API responses should include links to related resources and available actions. It’s not something every API needs, but knowing when it helps (and when it’s overkill) is valuable.
Exercises
Exercise 1: Implement POST /reports and GET /reports/:id. Start a report and poll until it completes.
Exercise 2: Add a Retry-After header to the processing response. Implement the client polling loop that respects it.
Exercise 3: Add error handling: make processReport fail for certain inputs. Verify the status endpoint returns the error.
Exercise 4: Why does the 202 response include a Location header?