

Pagination

Never return everything

Our bookstore has 6 books. GET /books returns all 6 in one response. That works fine right now. But what happens when the bookstore has 100,000 books?

The response would be enormous. The client would choke trying to parse a massive JSON array. On a mobile network, the user would stare at a loading spinner for ages.

Every list endpoint needs pagination. Even if your list is small today, it won’t be small forever. Adding pagination later means changing the response shape, which is a breaking change. Better to do it from the start.

There are two main approaches: offset-based and cursor-based. Let’s look at both and understand why one is almost always better.

Offset-based pagination

This is the approach most people think of first. You’ve probably seen it: ?page=2&limit=10 or ?offset=10&limit=10.

route.get("/books", {
  request: {
    query: z.object({
      limit: z.coerce.number().default(20),
      offset: z.coerce.number().default(0),
    }),
  },
  resolve: (c) => {
    if (!c.input.ok) return fromZodIssues(c.input.issues);
    const { offset } = c.input.query;
    const limit = Math.min(c.input.query.limit, 100);

    const sorted = books.slice().sort((a, b) => b.createdAt.localeCompare(a.createdAt));
    const page = sorted.slice(offset, offset + limit);

    return Response.json({
      data: page,
      pagination: {
        total: books.length,
        limit,
        offset,
        hasMore: offset + limit < books.length,
      },
    });
  },
});

We validate query parameters with Zod, just like we do for params and body. Query strings are always strings in HTTP, so z.coerce.number() converts them to numbers automatically. The .default() provides a fallback when the parameter is missing. We still cap the limit at 100 after validation.

The client asks for a slice of the data: “Give me 20 books, starting at position 40.” We sort the full list, then use .slice(offset, offset + limit) to grab just that window. The response includes the total count so the client can build a “page 3 of 10” UI.

This approach is simple and lets clients jump to any page. But it has a problem that’s easy to miss.

What happens if a new book is added while the client is paginating? Say the client fetches page 1 (books 1 through 10). Then someone adds a new book, which lands at the top of the list. Now when the client fetches page 2 (books 11 through 20), everything has shifted down by one. Book 10 from page 1 appears again as book 11 on page 2. The client sees a duplicate.

The reverse can happen too. If a book is deleted while paginating, the client might skip one entirely and never see it.
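This drift is easy to reproduce with a plain array, no server needed. A minimal sketch, using made-up book ids in newest-first order:

```javascript
// Offset drift: fetch page 1, insert a new item at the top, then fetch page 2.
let books = ["book-6", "book-5", "book-4", "book-3", "book-2", "book-1"]; // newest first

const page1 = books.slice(0, 3); // ["book-6", "book-5", "book-4"]

// A new book is created between the two requests and sorts to the top.
books = ["book-7", ...books];

const page2 = books.slice(3, 6); // ["book-4", "book-3", "book-2"]

// "book-4" appears on both pages: a duplicate caused by the shifted offsets.
console.log(page1.filter((id) => page2.includes(id))); // ["book-4"]
```

The offsets stayed the same, but the data underneath them shifted, so the same item landed in both windows.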

There’s also a scaling consideration. With a real dataset, sorting and slicing the entire array on every request gets expensive as the list grows. For our in-memory bookstore this is fine, but it’s worth knowing the tradeoff.

Cursor-based pagination

Instead of saying “skip the first 40 rows,” cursor-based pagination says “give me items that come after this specific item.” The “cursor” is usually a value from the last item the client received, like a timestamp or an ID.

route.get("/books", {
  request: {
    query: z.object({
      limit: z.coerce.number().default(20),
      cursor: z.string().optional(),
    }),
  },
  resolve: (c) => {
    if (!c.input.ok) return fromZodIssues(c.input.issues);
    const { cursor } = c.input.query;
    const limit = Math.min(c.input.query.limit, 100);

    let sorted = books.slice().sort((a, b) => b.createdAt.localeCompare(a.createdAt));

    if (cursor) {
      sorted = sorted.filter((b) => b.createdAt < cursor);
    }

    const page = sorted.slice(0, limit);
    const nextCursor = page.length === limit ? page[page.length - 1].createdAt : null;

    return Response.json({
      data: page,
      pagination: {
        limit,
        nextCursor,
        hasMore: nextCursor !== null,
      },
    });
  },
});

Let’s walk through this by actually doing it. Start with a small limit so we can see pagination in action with our six seed books:

# First page: get 3 books
curl "http://localhost:3000/books?limit=3"

The response includes the first 3 books (newest first) and a nextCursor:

{
  "data": [ ... ],
  "pagination": {
    "limit": 3,
    "nextCursor": "2024-01-04T00:00:00Z",
    "hasMore": true
  }
}

Copy the nextCursor value and pass it as the cursor parameter to get the next page:

# Second page: books created before that cursor
curl "http://localhost:3000/books?limit=3&cursor=2024-01-04T00:00:00Z"

This returns the next 3 books. If hasMore is false, you’ve reached the end.

# What happens with no limit? Default kicks in (20)
curl http://localhost:3000/books

On the first request, there’s no cursor. We sort all books by createdAt descending (newest first) and return up to the limit.

The response includes a nextCursor, which is the createdAt value of the last book in the page.

For the next page, the client sends that cursor back. We filter the sorted list to only books with a createdAt before that timestamp, then take the next batch. This gives the next page of books.

Why is this better? Because inserting or deleting items doesn’t mess things up. The cursor points to a specific position in the data, not an index. If new books are added above the cursor, the cursor still points to the same place. No duplicates, no skipped items.
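The same insert-between-requests scenario from the offset section behaves differently with a cursor. A sketch with illustrative timestamps:

```javascript
// Cursor stability: page 2 is requested relative to the last item of page 1,
// not an offset, so a new item at the top cannot shift the window.
let books = [
  { id: "book-6", createdAt: "2024-01-06T00:00:00Z" },
  { id: "book-5", createdAt: "2024-01-05T00:00:00Z" },
  { id: "book-4", createdAt: "2024-01-04T00:00:00Z" },
  { id: "book-3", createdAt: "2024-01-03T00:00:00Z" },
  { id: "book-2", createdAt: "2024-01-02T00:00:00Z" },
  { id: "book-1", createdAt: "2024-01-01T00:00:00Z" },
]; // newest first

const page1 = books.slice(0, 3);
const cursor = page1[page1.length - 1].createdAt; // "2024-01-04T00:00:00Z"

// A new book arrives at the top between the two requests.
books = [{ id: "book-7", createdAt: "2024-01-07T00:00:00Z" }, ...books];

// Page 2 only contains books strictly older than the cursor.
const page2 = books.filter((b) => b.createdAt < cursor).slice(0, 3);

console.log(page2.map((b) => b.id)); // ["book-3", "book-2", "book-1"]
```

No duplicates, no skips: the cursor anchors page 2 to "everything older than book-4," and that set is unaffected by inserts above it.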

The tradeoff: the client can’t jump to an arbitrary page. There’s no “go to page 7.” You can only go forward (and optionally backward with a prevCursor). And you don’t get a total count unless you calculate it separately.

Which one should you use?

Use cursor-based for most APIs. It’s more reliable and scales better. This is what Twitter, Facebook, Slack, and most modern APIs use.

Use offset-based for admin panels and dashboards. If you need a “page 3 of 10” UI and the data doesn’t change frequently, offset pagination works fine.

For our bookstore API, we’ll use cursor-based pagination.

The response format

One important rule: never return a bare array from a list endpoint.

// BAD: bare array
[{ "id": "book-1", "title": "..." }, { "id": "book-2", "title": "..." }]

// GOOD: wrapped in an object
{
  "data": [{ "id": "book-1", "title": "..." }, { "id": "book-2", "title": "..." }],
  "pagination": {
    "limit": 20,
    "nextCursor": "2024-01-15T10:30:00Z",
    "hasMore": true
  }
}

If you start with a bare array and later need to add pagination metadata, you have to change the response shape. That’s a breaking change. Wrapping the list in a data field from the start gives you room to add metadata without breaking anything.
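One way to make the envelope hard to forget is to build every list response through a small helper. This is a sketch, not part of the course library:

```javascript
// Hypothetical helper that always wraps list results in the
// { data, pagination } envelope used throughout this lesson.
function listResponse(items, { limit, nextCursor }) {
  return {
    data: items,
    pagination: { limit, nextCursor, hasMore: nextCursor !== null },
  };
}

const body = listResponse([{ id: "book-1" }], { limit: 20, nextCursor: null });
console.log(body.pagination.hasMore); // false
```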

Always cap the limit

Without a cap, a client can request ?limit=1000000 and get the entire dataset in one response, defeating the purpose of pagination entirely.

// In the query schema:
limit: z.coerce.number().default(20),

// In the handler:
const limit = Math.min(c.input.query.limit, 100);

Zod handles the default. We still cap it at 100 after validation. Clients who need more data paginate through it.
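Note that `Math.min` alone only enforces the upper bound: `?limit=-5` would slip through. A sketch that clamps both ends (the function name and option names are illustrative, not from the course code):

```javascript
// Clamp a requested page size to [1, max], falling back to a default
// when the value is missing or not a finite number.
function clampLimit(requested, { fallback = 20, max = 100 } = {}) {
  const n = Number.isFinite(requested) ? Math.trunc(requested) : fallback;
  return Math.min(Math.max(n, 1), max);
}

console.log(clampLimit(5));         // 5
console.log(clampLimit(1_000_000)); // 100
console.log(clampLimit(-3));        // 1
```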

Link headers

Some APIs put pagination links in HTTP headers instead of the response body:

Link: <https://api.example.com/books?cursor=abc&limit=20>; rel="next"

GitHub uses this pattern. It keeps the response body clean, but many developers find it harder to work with than a JSON field in the response. Either approach is fine. Our bookstore API uses the JSON approach because it’s more straightforward.
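If you do want the header style, the `rel="next"` link from RFC 8288 is straightforward to assemble. A sketch, with a made-up base URL and cursor value:

```javascript
// Build a Link header value pointing at the next page of results.
// URL and URLSearchParams handle the query-string encoding.
function nextLinkHeader(baseUrl, cursor, limit) {
  const url = new URL(baseUrl);
  url.searchParams.set("cursor", cursor);
  url.searchParams.set("limit", String(limit));
  return `<${url}>; rel="next"`;
}

console.log(nextLinkHeader("https://api.example.com/books", "abc", 20));
// <https://api.example.com/books?cursor=abc&limit=20>; rel="next"
```

A real cursor such as an ISO timestamp would be percent-encoded by `URLSearchParams`, which is exactly what you want in a header value.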

What’s next

Pagination controls how many results come back. But what if the client wants fiction books only, sorted by publication date? That’s filtering and sorting, which we’ll add next.

Exercises

Exercise 1: Implement cursor-based pagination on GET /books. Test by fetching page 1, extracting the cursor, and fetching page 2.

Exercise 2: Add a limit parameter with a default of 20 and a maximum of 100. Test with ?limit=5.

Exercise 3: Compare offset and cursor pagination: add 50 books, then paginate through them with both methods while simultaneously adding new books. With offset, you’ll see duplicates. With cursors, you won’t.

Why does offset-based pagination produce inconsistent results when data changes?

