Log Output and Transport
Where logs go
Your logger writes JSON lines. But where do those lines go? In development, the terminal. In production, it depends on your deployment:
stdout: the Docker default
The Building a Logger lesson wrote to process.stdout and process.stderr. This is the standard for Docker deployments:
// Logger already does this:
process.stdout.write(output + "\n"); // info, debug, warn
process.stderr.write(output + "\n"); // error, fatal
Docker captures everything written to stdout and stderr. The container log command shows it:
docker logs my-api --since 1h
[!NOTE] The Deploying with Docker course configured docker logs and log drivers.
Writing to stdout is the best practice: Docker handles routing, retention, and aggregation. Your application should not manage log files.
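With Docker Compose, retention can be capped at the logging-driver level instead of inside the application. A minimal sketch (the service name and limits are illustrative):

```yaml
services:
  my-api:
    image: my-api:latest
    logging:
      driver: json-file   # Docker's default driver
      options:
        max-size: "10m"   # rotate the container's log file at 10 MB
        max-file: "3"     # keep at most 3 rotated files
```

The application still just writes to stdout; Docker enforces the size and file limits.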
The twelve-factor app principle
The Twelve-Factor App methodology says: treat logs as event streams. Your application writes to stdout. The environment (Docker, Kubernetes, systemd) captures and routes the stream. The application does not know or care where logs end up.
This is why the Logger class writes to stdout/stderr — not to files. The deployment environment decides the destination.
File output (non-Docker deployments)
For servers running directly on a VM (no Docker), you might write to files:
import { createWriteStream } from "node:fs";
// Only open a file stream when LOG_FILE is set, and write to the
// path it names (e.g. LOG_FILE=/var/log/book-catalog/app.log):
const logStream = process.env.LOG_FILE
  ? createWriteStream(process.env.LOG_FILE, { flags: "a" })
  : null;
// In the Logger class:
private write(output: string, isError: boolean): void {
  if (logStream) {
    logStream.write(output + "\n");
  } else if (isError) {
    process.stderr.write(output + "\n");
  } else {
    process.stdout.write(output + "\n");
  }
}
The flags: "a" option opens the file in append mode: new entries are added to the end instead of overwriting existing content.
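Append mode is easy to verify in isolation. A small standalone sketch (the file path is illustrative), using appendFileSync, which has the same O_APPEND semantics as a stream opened with flags: "a":

```typescript
import { appendFileSync, readFileSync, rmSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

const path = join(tmpdir(), `append-demo-${process.pid}.log`);
rmSync(path, { force: true }); // start from a clean file

// Each call appends, just like the logger's long-lived stream:
appendFileSync(path, '{"level":"info","msg":"one"}\n');
appendFileSync(path, '{"level":"info","msg":"two"}\n');

const lines = readFileSync(path, "utf8").trim().split("\n");
console.log(lines.length); // 2 — both entries survive
```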
Log rotation
Files grow forever. Log rotation limits file size by creating new files and archiving old ones:
app.log      ← current (being written to)
app.log.1.gz ← yesterday's log (compressed)
app.log.2.gz ← two days ago (compressed)
app.log.3.gz ← three days ago (compressed)
On Linux, logrotate handles this:
/var/log/book-catalog/app.log {
daily
rotate 7
compress
missingok
notifempty
copytruncate
}
Rotate daily, keep 7 days, compress old files. copytruncate copies the file and truncates the original, so the application keeps writing without interruption.
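The shifting that logrotate performs can be approximated in a few lines. A hypothetical sketch (no compression, function name illustrative) — not a replacement for logrotate, just the mechanism made explicit:

```typescript
import { existsSync, renameSync, writeFileSync } from "node:fs";

// Shift app.log.1 → app.log.2, and so on, then app.log → app.log.1,
// keeping at most `keep` archives (the oldest is overwritten).
function rotate(base: string, keep: number): void {
  for (let i = keep - 1; i >= 1; i--) {
    if (existsSync(`${base}.${i}`)) {
      renameSync(`${base}.${i}`, `${base}.${i + 1}`);
    }
  }
  if (existsSync(base)) renameSync(base, `${base}.1`);
  writeFileSync(base, ""); // fresh current file
}
```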
Log aggregation
In production, logs from multiple servers and services need to be collected in one place. Common tools:
ELK Stack (Elasticsearch + Logstash + Kibana) — Open source. Logstash collects logs, Elasticsearch indexes them, Kibana visualizes them.
Grafana + Loki — Lightweight alternative. Loki stores logs, Grafana queries and visualizes.
Datadog, Splunk, New Relic — Managed services. Send logs, they handle storage, search, and alerting.
All of these expect JSON logs — which is why the entire course builds structured JSON output. Your logs are already in the right format for any aggregation tool.
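A taste of what those tools do with JSON lines: parse each line, then filter and group on fields. A toy sketch in TypeScript (the sample entries are illustrative):

```typescript
// Each line is one JSON log entry, exactly the shape the logger emits:
const raw = [
  '{"level":"info","msg":"request handled","service":"book-catalog"}',
  '{"level":"error","msg":"db timeout","service":"book-catalog"}',
  '{"level":"error","msg":"db timeout","service":"search"}',
].join("\n");

// A tiny "query": count entries per level, the kind of aggregation
// Kibana or Grafana runs over indexed fields.
const counts: Record<string, number> = {};
for (const line of raw.split("\n")) {
  const entry = JSON.parse(line);
  counts[entry.level] = (counts[entry.level] ?? 0) + 1;
}
console.log(counts); // { info: 1, error: 2 }
```

Because the fields are structured rather than embedded in free text, no regex parsing is needed at query time.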
Configuring output per environment
// src/logger.ts
export function createLogger(): Logger {
const env = process.env.NODE_ENV ?? "development";
const level = (process.env.LOG_LEVEL ?? (env === "production" ? "info" : "debug")) as Level;
return new Logger({
level,
context: {
service: "book-catalog",
environment: env,
version: process.env.APP_VERSION ?? "unknown",
},
});
}
Development: LOG_LEVEL=debug, all traces visible. Production: LOG_LEVEL=info, only operations and problems.
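The level-selection fallback is worth pinning down on its own. A small extraction of the same expression (the function name is hypothetical):

```typescript
type Level = "debug" | "info" | "warn" | "error" | "fatal";

// Mirrors createLogger: an explicit LOG_LEVEL always wins; otherwise
// production defaults to "info" and every other environment to "debug".
function resolveLevel(env?: string, logLevel?: string): Level {
  const e = env ?? "development";
  return (logLevel ?? (e === "production" ? "info" : "debug")) as Level;
}

console.log(resolveLevel("production"));          // "info"
console.log(resolveLevel());                      // "debug"
console.log(resolveLevel("production", "debug")); // "debug"
```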
Exercises
Exercise 1: Run the API with stdout logging. Pipe the output to a file: node dist/server.js > app.log 2>&1. Verify the file contains JSON lines.
Exercise 2: Configure the logger to write to a file when LOG_FILE is set, stdout otherwise.
Exercise 3: Add environment and version to the base context. Verify every log entry includes them.
Why should applications write logs to stdout instead of managing log files?