REST API Design with @hectoday/http


Bulk operations

One at a time is too slow

Imagine a bookstore admin needs to add 100 books to the catalog. With our current API, that means 100 separate POST /books requests. Each request has HTTP overhead, validation, and a response. At 50 milliseconds per request, that’s 5 seconds of waiting.

A bulk endpoint lets the client send all 100 books in a single request. One HTTP round-trip, one response. Much faster.

But bulk operations introduce a new problem. What happens if 98 books are valid and 2 have duplicate ISBNs? Do you reject the entire batch? Or create the 98 and report errors for the 2?

Bulk create

Let’s build a bulk create endpoint:

const BulkCreateBody = z.object({
  books: z.array(CreateBookBody).min(1).max(100),
});

route.post("/books/bulk", {
  request: { body: BulkCreateBody },
  resolve: (c) => {
    if (!c.input.ok) return fromZodIssues(c.input.issues);

    const results: { index: number; id?: string; error?: string }[] = [];

    for (let i = 0; i < c.input.body.books.length; i++) {
      const input = c.input.body.books[i];

      if (input.isbn && books.some((b) => b.isbn === input.isbn)) {
        results.push({ index: i, error: "Duplicate ISBN" });
        continue;
      }

      const book: Book = {
        id: crypto.randomUUID(),
        title: input.title,
        isbn: input.isbn,
        genre: input.genre,
        publishedAt: input.publishedAt,
        authorId: input.authorId,
        createdAt: new Date().toISOString(),
      };
      books.push(book);
      results.push({ index: i, id: book.id });
    }

    const allSucceeded = results.every((r) => r.id);
    return Response.json({ results }, { status: allSucceeded ? 201 : 207 });
  },
});

Let’s walk through what happens. The client sends a JSON body with a books array containing up to 100 book objects. Each one is validated against the same CreateBookBody schema we use for individual creates.

The handler loops through each book and tries to insert it. If the insert succeeds, we record the new ID. If it fails (for example, because of a duplicate ISBN), we record the error. The index field in each result tells the client which book in their array the result corresponds to.

The response looks like this (IDs shortened for readability; the real handler returns UUIDs):

{
  "results": [
    { "index": 0, "id": "book-new-1" },
    { "index": 1, "error": "Duplicate ISBN" },
    { "index": 2, "id": "book-new-3" }
  ]
}

Books at index 0 and 2 were created successfully. Book at index 1 failed. The client can check each result to see what happened.
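On the client side, a response in this shape is straightforward to split into successes and failures. Here is a small sketch; the `BulkResult` type and `partitionResults` helper are illustrative, not part of @hectoday/http:

```typescript
// Shape of one entry in the bulk response's `results` array
type BulkResult = { index: number; id?: string; error?: string };

// Split a bulk response into created IDs and per-item failures
function partitionResults(results: BulkResult[]): {
  created: string[];
  failed: { index: number; error: string }[];
} {
  const created: string[] = [];
  const failed: { index: number; error: string }[] = [];
  for (const r of results) {
    if (r.id) created.push(r.id);
    else failed.push({ index: r.index, error: r.error ?? "Unknown error" });
  }
  return { created, failed };
}

const { created, failed } = partitionResults([
  { index: 0, id: "book-new-1" },
  { index: 1, error: "Duplicate ISBN" },
  { index: 2, id: "book-new-3" },
]);
// created holds the two new IDs; failed points back at index 1
```

The `index` field is what makes this workable: the client can map each failure back to the exact item in its original array and retry or report just those.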

207 Multi-Status

Notice the status code logic. If every book was created, we return 201 Created. If some succeeded and others failed, we return 207 Multi-Status.

207 is a special status code that means “the response contains results for multiple sub-operations, and they have different outcomes.” It’s the right code for partial success. The client needs to look at the response body to understand what happened with each individual item.

All-or-nothing vs partial success

There are two valid approaches for bulk operations:

Partial success: create what you can, skip what fails, return 207 with per-item results. This is what we implemented above.

All-or-nothing: if any item fails, none are created. Validate everything first, and only insert if all items pass. Return 400 with the errors if anything fails. In a real application backed by a database, you would typically use a database transaction that rolls back on any error. Our in-memory version uses a simpler validate-first pattern:

// All-or-nothing version: validate everything first, insert only if all pass
const errors: { index: number; error: string }[] = [];

for (let i = 0; i < inputBooks.length; i++) {
  if (inputBooks[i].isbn && books.some((b) => b.isbn === inputBooks[i].isbn)) {
    errors.push({ index: i, error: "Duplicate ISBN" });
  }
}

if (errors.length > 0) {
  // Return the per-item errors so the client knows exactly which books failed
  return Response.json(
    { code: "VALIDATION_ERROR", message: "Some books failed validation", errors },
    { status: 400 },
  );
}

// All valid, insert all
for (const input of inputBooks) {
  books.push({
    id: crypto.randomUUID(),
    ...input,
    createdAt: new Date().toISOString(),
  });
}

Which one should you use? It depends on the use case. For importing data, partial success makes sense. You want to get as much in as possible and deal with failures separately. For financial operations, all-or-nothing is safer. You don’t want half a batch of payments to go through and half to fail.
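One subtlety the validation loop above misses: two identical ISBNs *within the same batch* both pass, because neither is in the catalog yet. The check can be isolated as a pure function that also tracks ISBNs seen earlier in the batch. This is a sketch; `validateBatch`, `BookInput`, and `existingIsbns` are illustrative names, not part of the course's codebase:

```typescript
type BookInput = { title: string; isbn?: string };

// Validate-first pass: returns one error per conflicting item, or an
// empty array when the whole batch is safe to insert.
function validateBatch(
  existingIsbns: Set<string>,
  inputs: BookInput[],
): { index: number; error: string }[] {
  const errors: { index: number; error: string }[] = [];
  const seen = new Set<string>(); // catches duplicates within the batch too
  inputs.forEach((input, index) => {
    if (!input.isbn) return;
    if (existingIsbns.has(input.isbn) || seen.has(input.isbn)) {
      errors.push({ index, error: "Duplicate ISBN" });
    }
    seen.add(input.isbn);
  });
  return errors;
}

const errors = validateBatch(new Set(["isbn-1"]), [
  { title: "A", isbn: "isbn-1" }, // conflicts with the catalog
  { title: "B", isbn: "isbn-2" },
  { title: "C", isbn: "isbn-2" }, // conflicts within the batch
]);
// errors reports index 0 and index 2; insert only when errors.length === 0
```

Keeping validation pure also makes the all-or-nothing path easy to unit-test without touching the in-memory store.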

Bulk delete

route.post("/books/bulk-delete", {
  request: {
    body: z.object({
      ids: z.array(z.string()).min(1).max(100),
    }),
  },
  resolve: (c) => {
    if (!c.input.ok) return fromZodIssues(c.input.issues);

    const idsToDelete = new Set(c.input.body.ids);
    const before = books.length;

    for (let i = books.length - 1; i >= 0; i--) {
      if (idsToDelete.has(books[i].id)) {
        books.splice(i, 1);
      }
    }

    return Response.json({ deleted: before - books.length });
  },
});

You might notice that bulk delete uses POST, not DELETE. That’s because DELETE requests don’t conventionally have a request body. Some APIs do use DELETE /books with a body containing IDs, but POST is safer and more widely supported by HTTP clients and proxies.
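The core of the handler is a set-membership filter, which can be expressed as a pure function. A sketch, with illustrative names (`bulkDelete` is not part of @hectoday/http); using a `Set` keeps each lookup O(1) even for the maximum batch of 100 IDs:

```typescript
type Item = { id: string };

// Remove every item whose id appears in `ids`; returns the survivors
// plus a count of how many were actually removed.
function bulkDelete<T extends Item>(
  items: T[],
  ids: string[],
): { remaining: T[]; deleted: number } {
  const toDelete = new Set(ids);
  const remaining = items.filter((item) => !toDelete.has(item.id));
  return { remaining, deleted: items.length - remaining.length };
}

const { remaining, deleted } = bulkDelete(
  [{ id: "a" }, { id: "b" }, { id: "c" }],
  ["a", "c", "does-not-exist"],
);
// deleted is 2; the unknown ID is silently ignored
```

Note a design choice hiding here: `deleted` can be smaller than `ids.length` because unknown IDs are skipped silently. An API could instead report which IDs were not found, the same per-item-result pattern as bulk create.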

Always cap the batch size

Without a cap, a client could send a million items and overwhelm your server. Always enforce a maximum:

const MAX_BULK = 100;
if (items.length > MAX_BULK) {
  return apiError(400, "VALIDATION_ERROR", `Maximum ${MAX_BULK} items per request`);
}

For our bookstore, 100 is a reasonable limit. For a high-volume API, you might allow more. The important thing is that there is a limit.
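The cap pushes a small responsibility onto clients: anything above the limit has to be split into multiple requests. A sketch of a client-side batching helper (the `chunk` function is an assumption, not provided by the library):

```typescript
// Mirror of the server-side cap
const MAX_BULK = 100;

// Split a large list into batches that each respect the server's limit
function chunk<T>(items: T[], size: number = MAX_BULK): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

const batches = chunk(Array.from({ length: 250 }, (_, i) => i));
// 250 items become 3 batches: 100, 100, and 50
```

The client then sends one bulk request per batch, collecting the per-item results from each response.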

What’s next

Bulk operations handle many items at once, but they still finish quickly. What about operations that take seconds or even minutes, like generating a report, processing a file upload, or running a complex analysis? You can’t keep the HTTP connection open that long. That’s where long-running operations come in.

Exercises

Exercise 1: Implement POST /books/bulk with partial success. Create 5 books where one has a duplicate ISBN. Verify you get 207 with 4 successes and 1 failure.

Exercise 2: Implement POST /books/bulk-delete. Delete 3 books in one request.

Exercise 3: Implement an all-or-nothing version: validate all books first, and only insert them if every one passes.

When should a bulk operation use 207 Multi-Status instead of 201 Created?


© 2026 hectoday. All rights reserved.