JSON Logs
Why JSON
The previous lesson showed unstructured logs: "Request: GET /books". This lesson replaces them with JSON: {"method":"GET","path":"/books"}.
JSON logs are machine-parseable. A log aggregation tool (Datadog, Grafana, ELK stack) can parse the JSON, index every field, and let you search by any combination: “all errors on the /orders endpoint in the last hour.” Plain text cannot do this — the tool would need to guess what each part of the string means.
JSON logs are also human-readable. Not as pretty as console.log("GET /books"), but you can read {"method":"GET","path":"/books"} at a glance. And when you need to debug at 3 AM, searchability matters more than prettiness.
The log entry structure
Every log entry should have a consistent set of fields:
{
"timestamp": "2024-01-15T14:32:05.123Z",
"level": "info",
"message": "request completed",
"method": "GET",
"path": "/v2/books",
"status": 200,
"duration": 12
}
timestamp — When the event occurred. ISO 8601 format. Essential for filtering by time.
level — Severity: debug, info, warn, error, fatal. Essential for filtering by importance. Covered in the next lesson.
message — A short, human-readable description. “request completed”, “order created”, “database connection failed”.
Additional fields — Context specific to this log entry: method, path, status, duration, userId, orderId, error details. These vary by entry.
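The structure above can be captured in a type. A minimal sketch — the interface name LogEntry is an illustration, not something defined in this lesson:

```typescript
// Hypothetical type for the log entry structure described above.
// The three fixed fields are required; everything else is free-form context.
interface LogEntry {
  timestamp: string; // ISO 8601, e.g. "2024-01-15T14:32:05.123Z"
  level: "debug" | "info" | "warn" | "error" | "fatal";
  message: string;
  [key: string]: unknown; // method, path, status, duration, userId, ...
}

const entry: LogEntry = {
  timestamp: new Date().toISOString(),
  level: "info",
  message: "request completed",
  method: "GET",
  path: "/v2/books",
  status: 200,
  duration: 12,
};
```

The index signature keeps the type open for per-entry context fields while still forcing every entry to carry a timestamp, level, and message.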
Outputting JSON
The simplest structured logger:
function log(level: string, message: string, context: Record<string, unknown> = {}): void {
const entry = {
timestamp: new Date().toISOString(),
level,
message,
...context,
};
console.log(JSON.stringify(entry));
}
log("info", "request completed", { method: "GET", path: "/v2/books", status: 200, duration: 12 });
// Output: {"timestamp":"2024-01-15T14:32:05.123Z","level":"info","message":"request completed","method":"GET","path":"/v2/books","status":200,"duration":12}
One line of JSON per log entry. Each line is a complete, self-contained record.
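One caveat worth knowing, since the later examples pass error.message rather than the error itself: JSON.stringify drops an Error's message and stack (they are non-enumerable properties), so serializing a raw Error object silently loses the detail you care about. A quick demonstration:

```typescript
// JSON.stringify omits an Error's message and stack (non-enumerable),
// so the entry loses its most important detail.
const err = new Error("connection refused");
console.log(JSON.stringify({ error: err })); // {"error":{}}

// Extract the fields you need explicitly instead:
console.log(JSON.stringify({ error: err.message }));
// {"error":"connection refused"}
```

This is why the examples in this lesson always log error.message rather than the Error object.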
One line per entry
Log entries must be one JSON object per line (sometimes called JSONL or newline-delimited JSON). This is critical for log processing tools — they read line by line.
{"timestamp":"...","level":"info","message":"request started","method":"GET","path":"/books"}
{"timestamp":"...","level":"info","message":"request completed","method":"GET","path":"/books","status":200,"duration":12}
{"timestamp":"...","level":"error","message":"order failed","error":"INSUFFICIENT_STOCK","orderId":"order-99"}
Never console.log a pretty-printed JSON object (with newlines and indentation) — it breaks the one-entry-per-line rule and confuses log parsers.
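To see why the one-line rule matters, here is a rough sketch of what a log tool does with this output: split on newlines, parse each line independently, then filter. The sample lines and the level filter are illustrative:

```typescript
// Simulated log output: one complete JSON object per line (JSONL).
const raw = [
  '{"timestamp":"2024-01-15T14:32:05.123Z","level":"info","message":"request started","method":"GET","path":"/books"}',
  '{"timestamp":"2024-01-15T14:32:05.135Z","level":"error","message":"order failed","error":"INSUFFICIENT_STOCK","orderId":"order-99"}',
].join("\n");

// Line-by-line processing — exactly what aggregation tools rely on.
const errors = raw
  .split("\n")
  .map((line) => JSON.parse(line))
  .filter((entry) => entry.level === "error");

console.log(errors.length); // 1
console.log(errors[0].orderId); // "order-99"
```

A pretty-printed entry would break this: its newlines would split one record across several lines, and JSON.parse would fail on every fragment.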
Replacing console.log
// BEFORE
console.log("Fetching books...");
console.log("Found", books.length, "books");
console.error("Error:", error.message);
// AFTER
log("debug", "fetching books");
log("info", "books fetched", { count: books.length });
log("error", "order creation failed", { error: error.message, orderId: "order-99" });
Every console.log becomes a log() call with a level, message, and context object. The context fields make the entry searchable.
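A common refinement, though not required by this lesson: route warn, error, and fatal entries to stderr via console.error, so error output can be captured or filtered separately from normal logs. A sketch extending the log function from earlier:

```typescript
// Variant of the log function that splits output by severity.
// warn/error/fatal go to stderr; everything else to stdout.
function log(level: string, message: string, context: Record<string, unknown> = {}): void {
  const entry = {
    timestamp: new Date().toISOString(),
    level,
    message,
    ...context,
  };
  const line = JSON.stringify(entry);
  if (level === "warn" || level === "error" || level === "fatal") {
    console.error(line); // stderr
  } else {
    console.log(line); // stdout
  }
}

log("info", "books fetched", { count: 3 }); // -> stdout
log("error", "order creation failed", { orderId: "order-99" }); // -> stderr
```

Each entry is still one line of JSON; only the destination stream changes.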
Exercises
Exercise 1: Write the log function. Output 5 different log entries. Pipe the output to a file and verify each line is valid JSON.
Exercise 2: Replace 3 console.log calls in the project setup with structured log() calls. Add context fields to each.
Exercise 3: Parse one of your JSON log lines with JSON.parse(). Access individual fields (entry.level, entry.path). This is what log tools do automatically.
Why output one JSON object per line instead of pretty-printed JSON?