Deployment Checklist and Capstone
What we built
A complete deployment pipeline that takes a Hectoday HTTP app from local development to a running production server:
| Step | What it does | Tool |
|---|---|---|
| Dockerfile | Package the app into an image | Docker |
| Multi-stage build | Separate build and runtime, small image | Docker |
| .dockerignore | Exclude unnecessary files | Docker |
| Environment variables | Configuration without baking secrets | docker run -e |
| Health checks | Verify the app is working | HEALTHCHECK |
| Docker Compose | Multi-container app (app + nginx + redis) | docker-compose.yml |
| Nginx reverse proxy | SSL termination, request buffering | Nginx |
| Persistent volumes | Database and uploads survive restarts | Docker volumes |
| VPS deployment | Pull and run on a server | SSH + Docker |
| HTTPS | Let’s Encrypt + Certbot + Nginx SSL | Certbot |
| Zero-downtime deploys | Start new, health check, swap, stop old | Deploy script |
| CI build | Build and push image on every merge | GitHub Actions |
| Automated deploy | CI triggers deploy on the server | SSH from CI |
| Non-root user | Limit damage from container compromise | Dockerfile USER |
| Read-only filesystem | Prevent writes outside volumes | --read-only |
| Resource limits | Prevent CPU/memory exhaustion | --cpus, --memory |
| Structured logging | JSON logs, rotation, monitoring | console.log + Docker |
The complete pipeline
Developer pushes to main
│
├─ GitHub Actions: Run tests
│ └─ Tests pass?
│ ├─ No → Pipeline stops. No deploy.
│ └─ Yes ↓
│
├─ GitHub Actions: Build Docker image
│ ├─ Multi-stage build (compile TypeScript)
│ ├─ Tag with :latest and :commit-sha
│ └─ Push to Docker Hub
│
├─ GitHub Actions: Deploy via SSH
│ ├─ SSH into the VPS
│ ├─ Pull the new image
│ ├─ docker compose up -d --no-deps app
│ └─ Verify health check
│
└─ App is live at https://yourdomain.com
├─ Nginx handles HTTPS (Let's Encrypt)
├─ App runs as non-root user
├─ Database persists in a Docker volume
   └─ Logs captured by Docker

Checklist
Dockerfile
- Multi-stage build (build stage + production stage)
- Production stage uses Alpine (small image)
- Dependencies installed with npm ci --omit=dev
- .dockerignore excludes node_modules, .git, .env, databases
- CMD uses exec form (JSON array)
- HEALTHCHECK instruction included
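A Dockerfile that ticks all of these boxes might look like the following sketch. The port, the `dist/server.js` entry point, and the `/health` endpoint path are illustrative assumptions, not the book's exact file:

```dockerfile
# --- build stage: compile TypeScript with dev dependencies present ---
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# --- production stage: runtime dependencies only, small Alpine base ---
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=build /app/dist ./dist
USER node                             # non-root user that ships with the node image
HEALTHCHECK --interval=30s --timeout=3s \
  CMD wget -qO- http://localhost:3000/health || exit 1
CMD ["node", "dist/server.js"]        # exec form: signals reach the node process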
Security
- Runs as non-root user (USER instruction)
- Read-only filesystem (--read-only) with tmpfs for /tmp
- Resource limits set (CPU and memory)
- Secrets passed at runtime (-e or --env-file), never baked into image
- No --privileged flag
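For a standalone container (outside Compose), the security items above collapse into a single `docker run` invocation. A sketch, with the image name, port, and env-file path as placeholders:

```sh
docker run -d \
  --read-only --tmpfs /tmp \          # writes only to tmpfs and mounted volumes
  --cpus 1 --memory 512m \            # resource ceilings
  --env-file .env.production \        # secrets injected at runtime
  -p 127.0.0.1:3000:3000 \            # bound to localhost; nginx fronts it
  youruser/app:latest
```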
Networking
- App uses expose (internal only), not ports
- Nginx is the only service with published ports (80, 443)
- HTTPS enabled with Let’s Encrypt
- HTTP redirects to HTTPS
- Firewall allows only 22, 80, 443
- Forwarded headers (X-Real-IP, X-Forwarded-For) configured
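The networking items above correspond to a handful of nginx directives. A sketch, assuming a Compose service named `app` on port 3000; certificate paths follow Certbot's defaults and your domain will differ:

```nginx
server {
    listen 80;
    server_name yourdomain.com;
    return 301 https://$host$request_uri;        # HTTP redirects to HTTPS
}
server {
    listen 443 ssl;
    server_name yourdomain.com;
    ssl_certificate     /etc/letsencrypt/live/yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem;
    location / {
        proxy_pass http://app:3000;              # internal service, not published
        proxy_set_header Host            $host;
        proxy_set_header X-Real-IP       $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```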
Data
- Database in a named volume (persists across restarts)
- Uploads in a named volume
- Volume backup strategy in place
- No application data stored inside the container
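One common way to back up a named volume is to archive it through a throwaway container. A sketch, with the volume name and bucket as placeholders:

```sh
# Tar the volume's contents into the current directory, dated.
docker run --rm -v app_data:/data -v "$PWD":/backup alpine \
  tar czf /backup/app_data-$(date +%F).tar.gz -C /data .

# Then ship the tarball to object storage, e.g.:
# aws s3 cp app_data-$(date +%F).tar.gz s3://your-backup-bucket/
```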
CI/CD
- Tests run before build
- Image built and pushed on every merge to main
- Image tagged with :latest and :commit-sha
- Deploy triggered automatically after successful build
- Layer caching enabled for fast rebuilds
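Condensed into a workflow file, this checklist might look like the sketch below. The image name, registry login, secret names (`DOCKERHUB_TOKEN`), and SSH details are assumptions, not the book's exact pipeline:

```yaml
# .github/workflows/deploy.yml — sketch, not a drop-in file
name: deploy
on:
  push:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm test
  build-deploy:
    needs: test                        # tests gate the build and the deploy
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: echo "${{ secrets.DOCKERHUB_TOKEN }}" | docker login -u youruser --password-stdin
      - run: |
          docker build -t youruser/app:latest -t youruser/app:$GITHUB_SHA .
          docker push youruser/app:latest
          docker push youruser/app:$GITHUB_SHA
      - run: |
          # SSH key would come from a secret written to a file with mode 600
          ssh -o StrictHostKeyChecking=accept-new deploy@yourdomain.com \
            'cd /srv/app && docker compose pull app && ./deploy.sh'
```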
Operations
- Structured logging (JSON to stdout)
- Log rotation configured (max-size, max-file)
- Health check monitored
- Zero-downtime deploy script
- Rollback procedure documented (pull previous SHA tag)
- SSL certificate auto-renewal (cron job)
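The structured-logging item amounts to emitting one JSON object per line on stdout, where Docker's `json-file` driver picks it up. A minimal sketch; the field names are illustrative:

```typescript
// One JSON object per line to stdout — Docker captures and rotates it.
type Level = "info" | "warn" | "error";

function logLine(level: Level, msg: string, fields: Record<string, unknown> = {}): string {
  return JSON.stringify({
    level,
    msg,
    time: new Date().toISOString(),
    ...fields,
  });
}

function log(level: Level, msg: string, fields?: Record<string, unknown>): void {
  console.log(logLine(level, msg, fields));
}

log("info", "request handled", { path: "/health", status: 200 });
```

Rotation then lives in the Docker daemon or compose file (`max-size`, `max-file` on the json-file driver), not in the app itself.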
The files
project/
src/ # Application code
Dockerfile # Multi-stage build
.dockerignore # Exclude unnecessary files
docker-compose.yml # App + Nginx + Redis
nginx.conf # Reverse proxy config
.env.production # Production env vars (not in git)
deploy.sh # Zero-downtime deploy script
.github/
workflows/
      deploy.yml       # Test → Build → Deploy pipeline

Common mistakes
Not using .dockerignore. The build sends hundreds of MB of node_modules to Docker. Builds are slow and images are bloated.
Baking secrets into the image. ENV JWT_SECRET=... in the Dockerfile. Anyone with the image can read it.
No health checks. The container is “running” but the app is deadlocked. Docker does not know. Traffic keeps going to a dead app.
No log rotation. Logs grow until the disk is full. The database cannot write. The app crashes. Everything crashes.
Single-stage builds. The production image has TypeScript, dev dependencies, and build tools. 450 MB instead of 95 MB. Slower deploys, larger attack surface.
No resource limits. One container consumes all CPU and memory. Other containers starve. The host becomes unresponsive.
Using docker compose down/up for deploys. Downtime between stop and start. Use up -d --no-deps or the zero-downtime script.
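The simpler of those two options, the `--no-deps` variant, can be sketched as a short script. The health URL and rollback mechanics are placeholders; a full zero-downtime script would start the new container alongside the old one before swapping:

```sh
#!/bin/sh
# deploy.sh sketch: pull, recreate only the app service, verify health.
set -e
docker compose pull app
docker compose up -d --no-deps app
sleep 5
curl -fsS http://localhost/health >/dev/null \
  || { echo "health check failed — redeploy the previous :commit-sha tag"; exit 1; }
echo "deploy ok"
```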
Challenges
Challenge 1: Add a staging environment. Deploy to a separate staging server on pushes to a staging branch. Use a different domain and separate database.
Challenge 2: Add database backups. Write a cron job that backs up the SQLite database volume to object storage (S3/R2) daily. Test restoring from a backup.
Challenge 3: Add Prometheus + Grafana. Add Prometheus and Grafana containers to your compose file. Expose app metrics at /metrics. Build a dashboard showing request rate, response time, and memory usage.
Challenge 4: Migrate to PostgreSQL. Replace SQLite with a PostgreSQL container. Update the compose file, add a PostgreSQL volume, and update the app to use pg instead of better-sqlite3.
What is the most important step in the deployment pipeline?
What should you do if a deploy fails the health check?