
File Uploads and Storage with @hectoday/http


Presigned URLs

The server bottleneck

When files flow through your server (client → server → storage), the server is a bottleneck: every byte of every upload passes through its CPU and network interface. Ten concurrent 100 MB uploads push a full gigabyte of traffic through the server before any of it reaches storage.

Presigned URLs eliminate the bottleneck: the server generates a temporary upload URL, and the client uploads directly to cloud storage (S3, R2, GCS). The file never touches the server.

The flow

1. Client:  POST /files/upload-url  { filename: "photo.jpg", mimeType: "image/jpeg" }
2. Server:  Generates a presigned URL for S3 → returns it
3. Client:  PUT [presigned URL]  (uploads directly to S3)
4. Client:  POST /files/confirm  { key: "uploads/abc123.jpg" }
5. Server:  Records the file metadata in the database

The server never sees the file bytes. It only generates the URL and records the metadata.
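Before looking at the implementation, here is a sketch of the two JSON round-trips as TypeScript types; the field names mirror the routes shown below.

// Request/response shapes for the two server round-trips
interface UploadUrlRequest {
  filename: string;
  mimeType: string;
}

interface UploadUrlResponse {
  uploadUrl: string; // presigned PUT URL; expires quickly
  key: string;       // object key in the bucket
  expiresIn: number; // seconds until the URL expires
  method: "PUT";
  headers: Record<string, string>; // must match the signed content-type
}

interface ConfirmRequest {
  key: string;
  filename: string;
  mimeType?: string;
  size?: number;
}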

Generating presigned URLs (S3-compatible)

Using the AWS SDK (works with S3, Cloudflare R2, MinIO, and any S3-compatible storage):

// src/cloud-storage.ts
import { S3Client, PutObjectCommand, GetObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3 = new S3Client({
  region: process.env.S3_REGION ?? "auto",
  endpoint: process.env.S3_ENDPOINT, // For R2 or MinIO
  credentials: {
    accessKeyId: process.env.S3_ACCESS_KEY!,
    secretAccessKey: process.env.S3_SECRET_KEY!,
  },
});

const BUCKET = process.env.S3_BUCKET ?? "fileshare";

// Note: a plain presigned PUT cannot enforce a size limit. Check the
// object's actual size at confirm time (HeadObject), or use a presigned
// POST with a content-length-range condition if you need a hard cap.
export async function generateUploadUrl(
  key: string,
  mimeType: string,
): Promise<string> {
  const command = new PutObjectCommand({
    Bucket: BUCKET,
    Key: key,
    ContentType: mimeType,
  });

  return getSignedUrl(s3, command, { expiresIn: 600 }); // 10 minutes
}

export async function generateDownloadUrl(key: string): Promise<string> {
  const command = new GetObjectCommand({
    Bucket: BUCKET,
    Key: key,
  });

  return getSignedUrl(s3, command, { expiresIn: 3600 }); // 1 hour
}
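For reference, these are the environment variables the client above reads, with hypothetical values for a Cloudflare R2 setup (any S3-compatible endpoint works the same way):

# .env (values are placeholders)
S3_REGION=auto
S3_ENDPOINT=https://<account-id>.r2.cloudflarestorage.com
S3_ACCESS_KEY=...
S3_SECRET_KEY=...
S3_BUCKET=fileshare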

The upload URL route

import { extname } from "node:path";
import { generateUploadUrl } from "./cloud-storage";

// authenticate and ALLOWED_TYPES come from earlier chapters of this course
route.post("/files/upload-url", {
  resolve: async (c) => {
    const user = authenticate(c.request);
    if (user instanceof Response) return user;

    const body = await c.request.json();
    const { filename, mimeType } = body;

    if (!filename || !mimeType) {
      return Response.json({ error: "filename and mimeType required" }, { status: 400 });
    }

    // Validate the MIME type before handing out an upload URL
    if (!ALLOWED_TYPES.has(mimeType)) {
      return Response.json({ error: "File type not allowed" }, { status: 400 });
    }

    // Generate a unique key so uploads can never collide or overwrite
    const ext = extname(filename).toLowerCase();
    const key = `uploads/${crypto.randomUUID()}${ext}`;

    const uploadUrl = await generateUploadUrl(key, mimeType);

    return Response.json({
      uploadUrl,
      key,
      expiresIn: 600,
      method: "PUT",
      headers: {
        "content-type": mimeType,
      },
    });
  },
});
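ALLOWED_TYPES and MAX_FILE_SIZE carry over from the validation chapter. If you are following along with only this lesson, a minimal stand-in looks like this (the values are illustrative; MAX_FILE_SIZE is used again at confirm time below):

// Stand-ins for constants defined in the validation chapter
const ALLOWED_TYPES = new Set(["image/jpeg", "image/png", "image/webp", "application/pdf"]);
const MAX_FILE_SIZE = 10 * 1024 * 1024; // 10 MB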

The confirm route

After the client uploads to S3, it calls back to confirm:

route.post("/files/confirm", {
  resolve: async (c) => {
    const user = authenticate(c.request);
    if (user instanceof Response) return user;

    const body = await c.request.json();
    const { key, filename, mimeType, size } = body;

    if (!key || !filename) {
      return Response.json({ error: "key and filename required" }, { status: 400 });
    }

    // Optionally: verify the file exists in S3 (HeadObject)
    const id = crypto.randomUUID();
    db.prepare(
      "INSERT INTO files (id, user_id, original_name, stored_name, mime_type, size) VALUES (?, ?, ?, ?, ?, ?)",
    ).run(id, user.id, filename, key, mimeType, size ?? 0);

    return Response.json({ id, url: `/files/${id}` }, { status: 201 });
  },
});
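That optional verification matters: since a presigned PUT cannot enforce a size cap, the confirm step is the right place to check. A minimal sketch using HeadObjectCommand from the same SDK, added to src/cloud-storage.ts:

// src/cloud-storage.ts (addition)
import { HeadObjectCommand } from "@aws-sdk/client-s3";

// Returns the object's size in bytes, or null if it was never uploaded.
export async function verifyUpload(key: string): Promise<number | null> {
  try {
    const head = await s3.send(new HeadObjectCommand({ Bucket: BUCKET, Key: key }));
    return head.ContentLength ?? 0;
  } catch {
    return null; // object does not exist: the upload failed or never happened
  }
}

In the confirm route, reject with a 400 when verifyUpload returns null or a size above MAX_FILE_SIZE, and record the verified size instead of the client-reported one.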

The client

// Client-side presigned upload
async function uploadToCloud(file: File) {
  // Step 1: Get the presigned URL
  const urlRes = await fetch("/files/upload-url", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ filename: file.name, mimeType: file.type }),
  });
  const { uploadUrl, key } = await urlRes.json();

  // Step 2: Upload directly to S3 (and fail loudly rather than confirming a dead upload)
  const putRes = await fetch(uploadUrl, {
    method: "PUT",
    headers: { "content-type": file.type },
    body: file,
  });
  if (!putRes.ok) throw new Error(`Upload failed: ${putRes.status}`);

  // Step 3: Confirm with the server
  const confirmRes = await fetch("/files/confirm", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ key, filename: file.name, mimeType: file.type, size: file.size }),
  });

  return confirmRes.json();
}

The file goes directly from the browser to S3. The server handles two small JSON requests (URL generation and confirmation). No file bytes pass through the server.
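A quick usage sketch, wiring uploadToCloud to a file input (the element id is hypothetical):

// Assumes an <input type="file" id="file-input"> somewhere on the page
const input = document.querySelector<HTMLInputElement>("#file-input");
input?.addEventListener("change", async () => {
  const file = input.files?.[0];
  if (!file) return;
  const { id, url } = await uploadToCloud(file);
  console.log(`Uploaded ${file.name} as ${id}, available at ${url}`);
});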

Exercises

Exercise 1: If you have access to S3 or Cloudflare R2, implement the presigned URL flow. Upload a file and verify it appears in the bucket.

Exercise 2: Without cloud storage, implement a “fake” presigned URL that redirects to your local upload endpoint. This lets you test the client-side flow.

Exercise 3: Generate a presigned download URL. Open it in a browser (no auth needed). Verify the file downloads.

Why is the presigned URL approach more scalable than server-side uploads?
