Real-Time APIs with @hectoday/http

Scaling Real-Time

The single-server limit

Each WebSocket and SSE connection consumes memory (a few KB per connection) and a file descriptor. A single Node.js server can handle 10,000-50,000 concurrent connections depending on memory and message volume.

For most apps, this is more than enough. A task board with 1,000 active users is well within a single server’s capacity.
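In practice, the ceiling is often the process's open-file limit rather than memory. On Linux or macOS you can check it from the shell that will launch the server:

```shell
# Show the per-process open-file limit; each WebSocket/SSE socket
# consumes one file descriptor.
ulimit -n

# The soft limit can be raised up to the hard limit (ulimit -Hn),
# e.g.:  ulimit -n 65536
```

If the server logs EMFILE errors under load, this limit is the first thing to check.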

When you need multiple servers

If you have more concurrent connections than one server can handle, or you need high availability (one server going down should not disconnect everyone), you need multiple servers behind a load balancer.

The problem: events published on server 1 are not seen by clients connected to server 2. Alice (on server 1) creates a task. Bob (on server 2) does not receive the event because the in-memory event bus and room lists are per-server.

Redis pub/sub

Redis provides a pub/sub messaging system. All servers subscribe to the same Redis channels. When an event is published on any server, Redis broadcasts it to all subscribers.

// src/redis-pubsub.ts
import { createClient } from "redis";

const publisher = createClient();
// Pub/sub puts a Redis connection into subscriber mode, so publishing
// and subscribing need separate clients.
const subscriber = createClient();

await publisher.connect();
await subscriber.connect();

type MessageHandler = (channel: string, message: string) => void;

export async function publishToRedis(channel: string, data: any): Promise<void> {
  await publisher.publish(channel, JSON.stringify(data));
}

// pSubscribe supports glob patterns such as "board:*". A plain
// subscribe() would treat "board:*" as a literal channel name; a
// pattern with no wildcards still matches exactly, so pSubscribe
// works for both cases. Each subscription keeps its own handler, so
// subscribing to two patterns does not cross-deliver messages.
export async function subscribeToRedis(pattern: string, handler: MessageHandler): Promise<void> {
  await subscriber.pSubscribe(pattern, (message, channel) => {
    handler(channel, message);
  });
}

Connecting the event bus to Redis

Replace the in-memory event bus with Redis:

// src/event-bus.ts — updated for Redis
import { publishToRedis, subscribeToRedis } from "./redis-pubsub.js";

type EventHandler = (event: { boardId: string; type: string; data: any }) => void;
const localSubscribers: EventHandler[] = [];

export function subscribe(handler: EventHandler): () => void {
  localSubscribers.push(handler);
  return () => {
    const index = localSubscribers.indexOf(handler);
    if (index !== -1) localSubscribers.splice(index, 1);
  };
}

export async function publish(boardId: string, type: string, data: any): Promise<void> {
  const event = { boardId, type, data };

  // Publish to Redis (all servers receive it)
  await publishToRedis(`board:${boardId}`, event);
}

// Listen for events from Redis (including our own)
subscribeToRedis("board:*", (channel, message) => {
  const event = JSON.parse(message);
  for (const handler of localSubscribers) {
    try {
      handler(event);
    } catch (err) {
      console.error("Event handler error:", err);
    }
  }
});

Now when Alice (on server 1) creates a task, the event is published to Redis. Redis broadcasts it to all servers. Server 2 receives it and pushes it to Bob’s WebSocket connection.

Architecture diagram

┌─────────────┐     ┌─────────────┐
│  Server 1   │     │  Server 2   │
│  Alice ←ws  │     │  Bob ←ws    │
│  Carol ←sse │     │  Dave ←sse  │
└──────┬──────┘     └──────┬──────┘
       │                   │
       └───────┬───────────┘
               │
        ┌──────┴──────┐
        │    Redis    │
        │   Pub/Sub   │
        └─────────────┘

Alice creates a task on server 1. Server 1 publishes to Redis. Redis broadcasts to both servers. Server 1 pushes to Carol (SSE). Server 2 pushes to Bob (WebSocket) and Dave (SSE).
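The load balancer in front of the servers must forward WebSocket upgrade requests and must not buffer SSE responses. A minimal nginx sketch (the upstream addresses and ports are placeholders, not part of the course project):

```nginx
upstream realtime {
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
}

server {
    listen 80;

    location / {
        proxy_pass http://realtime;
        proxy_http_version 1.1;                      # required for WebSocket upgrade
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 3600s;                    # keep long-lived connections open
        proxy_buffering off;                         # let SSE events flush immediately
    }
}
```

Note that with Redis pub/sub in place, sticky sessions are not required for correctness: whichever server a client lands on will receive every event.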

What about presence?

Presence is harder to scale. Each server tracks its own connections. With Redis pub/sub, events are forwarded, but the presence list is still per-server.

Solutions:

Redis-backed presence. Store presence in a Redis set instead of in-memory. Every server reads and writes to the same set.

// redis: a connected node-redis client (createClient + connect)

// Join: add to Redis set
await redis.sAdd(`presence:${boardId}`, JSON.stringify({ userId, name }));

// Leave: remove from Redis set. sRem removes by exact string match,
// so the member must be serialized identically to the sAdd call.
await redis.sRem(`presence:${boardId}`, JSON.stringify({ userId, name }));

// Get: read from Redis set
const members = await redis.sMembers(`presence:${boardId}`);

TTL-based cleanup. Each server refreshes its users’ presence with a TTL. If a server dies, its users’ presence entries expire automatically.
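One way to sketch the TTL approach: give each user their own key with an expiry and refresh it on a heartbeat. The `PresenceStore` interface below is a stand-in for the node-redis methods this would use, and `presenceKey`, `refreshPresence`, and the 30-second TTL are illustrative names and numbers, not part of the course code:

```typescript
// Minimal stand-in for the node-redis v4 methods this sketch needs.
interface PresenceStore {
  set(key: string, value: string, opts: { EX: number }): Promise<unknown>;
  keys(pattern: string): Promise<string[]>;
  mGet(keys: string[]): Promise<(string | null)[]>;
}

const PRESENCE_TTL_SECONDS = 30; // entry expires if not refreshed in time

export function presenceKey(boardId: string, userId: string): string {
  return `presence:${boardId}:${userId}`;
}

// Called on join and again on every heartbeat; the TTL slides forward
// each time, so only live connections keep their entries alive.
export async function refreshPresence(
  store: PresenceStore,
  boardId: string,
  userId: string,
  name: string
): Promise<void> {
  await store.set(presenceKey(boardId, userId), JSON.stringify({ userId, name }), {
    EX: PRESENCE_TTL_SECONDS,
  });
}

// List everyone currently present on a board. KEYS is fine for a
// sketch; production code would use SCAN to avoid blocking Redis.
export async function listPresence(
  store: PresenceStore,
  boardId: string
): Promise<{ userId: string; name: string }[]> {
  const keys = await store.keys(`presence:${boardId}:*`);
  if (keys.length === 0) return [];
  const values = await store.mGet(keys);
  return values.filter((v): v is string => v !== null).map((v) => JSON.parse(v));
}
```

If a server crashes, its users simply stop heartbeating and their keys expire within the TTL; no explicit cleanup pass is needed.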

When not to scale

Most apps do not need multi-server real-time. A single server handles thousands of concurrent connections. Redis adds complexity, a dependency, and a failure point.

Scale when you need to, not before. The in-memory event bus and rooms work perfectly for single-server deployments, which covers the majority of applications.

Exercises

Exercise 1: If you have Redis available, implement the Redis pub/sub bridge. Run two instances of your server on different ports. Verify events from one reach clients on the other.

Exercise 2: Without Redis, think about what breaks with multiple servers: in-memory rooms, presence, event buffers (SSE). List every piece of state that is per-server.

Exercise 3: Research Redis pub/sub vs Redis Streams. What are the differences? (Streams add persistence and consumer groups; pub/sub is fire-and-forget.)

Why does multi-server real-time need a shared message bus like Redis?


© 2026 hectoday. All rights reserved.