# Logging and Monitoring
## Logs in Docker
Docker captures everything your app writes to stdout and stderr. This is the standard: write to stdout, let Docker handle collection and routing.
```ts
// Your app logs to stdout (console.log)
console.log(
  JSON.stringify({
    level: "info",
    message: "Server started",
    port: 3000,
    timestamp: new Date().toISOString(),
  }),
);
```

Docker captures this output and makes it available via `docker logs`.
### Viewing logs
```bash
# All logs
docker logs myapp

# Follow (real-time)
docker logs -f myapp

# Last 100 lines
docker logs --tail 100 myapp

# Since a specific time
docker logs --since 2024-01-15T10:00:00 myapp

# With timestamps
docker logs -t myapp
```

For Docker Compose:
```bash
docker compose logs
docker compose logs app        # One service
docker compose logs -f app     # Follow
docker compose logs --tail 50  # Last 50 lines per service
```

### Structured logging
The Securing Your API course introduced structured logging (JSON log lines). In a containerized environment, structured logging is essential because log aggregation tools (Grafana Loki, ELK, Datadog) parse JSON automatically.
```ts
// src/logger.ts
export function log(event: string, data: Record<string, any> = {}): void {
  const entry = {
    timestamp: new Date().toISOString(),
    event,
    ...data,
  };
  console.log(JSON.stringify(entry));
}
```

```ts
log("request", { method: "GET", path: "/health", status: 200, duration: 3 });
log("auth_failed", { email: "[email protected]", reason: "wrong_password" });
log("task_created", { taskId: "task-5", userId: "user-alice" });
```

Each line is a self-contained JSON object. Log aggregation tools parse the fields and let you search, filter, and visualize.
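To see why one JSON object per line matters, here is a minimal sketch of what an aggregator does with such output: each line parses independently, so events can be filtered by field without regexes. The sample lines below are illustrative, matching the logger above.

```typescript
// Each log line is an independent JSON document — parse, then filter by field.
const rawLogs: string[] = [
  '{"timestamp":"2024-01-15T10:00:00.000Z","event":"request","status":200}',
  '{"timestamp":"2024-01-15T10:00:01.000Z","event":"auth_failed","reason":"wrong_password"}',
  '{"timestamp":"2024-01-15T10:00:02.000Z","event":"request","status":500}',
];

// "Show me all failed requests" becomes a structured query, not a text search.
const errors = rawLogs
  .map((line) => JSON.parse(line))
  .filter((entry) => entry.event === "request" && entry.status >= 500);

console.log(errors.length); // → 1
```

This is exactly the query model tools like Loki or Datadog expose through their UIs.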
### Log drivers
Docker supports different log drivers that route logs to various destinations:
```bash
# Default: json-file (stored on disk)
docker run --log-driver json-file myapp

# Syslog (send to a syslog server)
docker run --log-driver syslog myapp

# None (discard logs — not recommended)
docker run --log-driver none myapp
```

In Docker Compose:
```yaml
services:
  app:
    build: .
    logging:
      driver: json-file
      options:
        max-size: "10m" # Rotate after 10 MB
        max-file: "3"   # Keep 3 rotated files
```

> [!WARNING]
> Without `max-size` and `max-file`, Docker logs grow indefinitely and can fill the disk. Always set rotation limits in production.
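Rotation limits can also be set daemon-wide, so every container on the host gets them by default. Docker reads these defaults from `/etc/docker/daemon.json` (values shown are illustrative; option values must be strings):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```

Restart the Docker daemon for this to take effect; per-container `logging:` options in Compose still override the daemon defaults.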
## Monitoring container resources
```bash
# Real-time resource usage
docker stats

# CONTAINER   CPU %   MEM USAGE / LIMIT   NET I/O
# myapp       0.5%    45MiB / 512MiB      1.2kB / 0B
```

`docker stats` shows CPU, memory, network, and disk I/O for all running containers. Watch it during load testing to spot resource issues.
## Application-level monitoring
Beyond container metrics, monitor application health:
```ts
// Expose metrics at /metrics
route.get("/metrics", {
  resolve: () => {
    return Response.json({
      uptime: process.uptime(),
      memory: process.memoryUsage(),
      activeConnections: getConnectionCount(), // from your SSE/WebSocket tracking
      timestamp: new Date().toISOString(),
    });
  },
});
```

In production, use Prometheus for metrics collection and Grafana for dashboards. But the `/metrics` endpoint is a good start for basic monitoring.
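Note that Prometheus scrapes a plain-text exposition format rather than JSON. In practice you would use a client library, but as a rough sketch of what that format looks like (metric names here are illustrative, not from any library):

```typescript
// Sketch: render basic process metrics in the Prometheus text exposition
// format (# HELP / # TYPE comments, then "metric_name value" lines).
function prometheusMetrics(): string {
  const mem = process.memoryUsage();
  const lines = [
    "# HELP app_uptime_seconds Seconds since the process started.",
    "# TYPE app_uptime_seconds counter",
    `app_uptime_seconds ${process.uptime()}`,
    "# HELP app_memory_rss_bytes Resident set size in bytes.",
    "# TYPE app_memory_rss_bytes gauge",
    `app_memory_rss_bytes ${mem.rss}`,
  ];
  return lines.join("\n") + "\n";
}

console.log(prometheusMetrics());
```

Serving this text from `/metrics` instead of JSON is all Prometheus needs to start scraping.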
## Exercises
Exercise 1: Start your app in Docker. View logs with `docker logs -f`. Make some requests and watch the log output.

Exercise 2: Add log rotation (`max-size: "10m"`, `max-file: "3"`) to your Docker Compose file. Verify logs rotate.

Exercise 3: Run `docker stats` while sending requests to your app. Watch CPU and memory usage.
Why should applications log to stdout instead of to files?