Multi-Stage Builds
The problem with single-stage builds
A single-stage Dockerfile installs everything in one image: TypeScript compiler, dev dependencies, build tools, source code. The final image includes all of this, even though the running app only needs the compiled JavaScript and production dependencies.
A typical single-stage Node.js image is 300-500 MB. Most of that is build tools the running app never uses.
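For reference, a single-stage Dockerfile for the same app might look like this (a sketch; your actual Dockerfile.single, built later in this section, may differ):

# Single-stage build: everything below ends up in the final image
FROM node:22
WORKDIR /app
# Install ALL dependencies, including dev (TypeScript, tsx, ...)
COPY package.json package-lock.json ./
RUN npm ci
# The .ts sources, the compiler, and every devDependency
# stay in the image alongside the compiled output
COPY . .
RUN npm run build
EXPOSE 3000
CMD ["node", "dist/server.js"]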
Two stages
A multi-stage build uses multiple FROM instructions. Each FROM starts a new stage. Only the last stage becomes the final image; earlier stages exist just to produce build artifacts and are not shipped.
# ==================
# Stage 1: Build
# ==================
FROM node:22 AS build
WORKDIR /app
# Install ALL dependencies (including dev — we need TypeScript)
COPY package.json package-lock.json ./
RUN npm ci
# Copy source code and compile
COPY . .
RUN npm run build
# This runs tsc and produces dist/
# ==================
# Stage 2: Production
# ==================
FROM node:22-alpine AS production
WORKDIR /app
# Install production dependencies only
COPY package.json package-lock.json ./
RUN npm ci --omit=dev
# Copy compiled JavaScript from the build stage
COPY --from=build /app/dist ./dist
EXPOSE 3000
CMD ["node", "dist/server.js"] What each stage does
Build stage (FROM node:22 AS build): Uses the full Node.js image (has build tools). Installs all dependencies (including TypeScript). Compiles the code. This stage is ~500 MB but is discarded after the build.
Production stage (FROM node:22-alpine AS production): Uses the slim Alpine image. Installs only production dependencies. Copies the compiled JavaScript from the build stage with COPY --from=build. This is the final image — typically 80-120 MB.
The COPY --from=build /app/dist ./dist instruction copies files from the build stage into the production stage. The build stage is not included in the final image.
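You can confirm what ends up in the final image (assuming the myapp-multi tag built in the next section):

# Only runtime files are present in the app directory
docker run --rm myapp-multi ls /app
# dist  node_modules  package-lock.json  package.json

# devDependencies are gone: typescript was never installed in this stage
docker run --rm myapp-multi ls node_modules/typescript
# ls: node_modules/typescript: No such file or directory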
The size difference
# Build both
docker build -t myapp-single -f Dockerfile.single . # Single stage
docker build -t myapp-multi -f Dockerfile . # Multi stage
docker images
# REPOSITORY      TAG       SIZE
# myapp-single    latest    450 MB
# myapp-multi     latest    95 MB

The multi-stage image is 4-5x smaller. It contains only what the running app needs.
Why smaller images matter
Faster deploys. Pulling a 95 MB image is faster than pulling a 450 MB image. On a slow connection (common for VPS providers), this saves minutes.
Less attack surface. The production image does not contain build tools (gcc, make, python). An attacker who gains access to the container has fewer tools to work with.
Less disk usage. On a server running multiple apps, smaller images mean more apps fit on the same disk.
Adding the build script
Make sure your package.json has a build script that compiles TypeScript:
{
"scripts": {
"build": "tsc",
"dev": "tsx watch src/server.ts",
"start": "node dist/server.js"
}
}

And your tsconfig.json outputs to dist/:
{
"compilerOptions": {
"outDir": "dist"
}
}
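Before building the image, you can verify the compile step locally (a quick check, assuming the scripts above):

npm ci          # installs everything, including TypeScript
npm run build   # runs tsc and writes compiled JavaScript to dist/
ls dist         # should list server.js

Native dependencies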
Some packages (sharp, better-sqlite3, bcrypt) compile native code. Native binaries compiled in the build stage (Debian-based node:22, which uses glibc) might not work in the production stage (Alpine, which uses musl libc), because the C standard libraries differ.
Two solutions:
Rebuild in the production stage. Copy package.json into the production stage and run npm ci --omit=dev there (which is what our Dockerfile does). This compiles native modules for the correct OS.
Use Alpine-compatible images for both stages. Use node:22-alpine for the build stage too. This avoids the cross-OS issue but requires installing build tools (apk add python3 make g++); see the sketch below.
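A sketch of the second option (only the build stage changes; the production stage stays exactly as above):

# Build stage on Alpine so native modules compile against musl
FROM node:22-alpine AS build
# Toolchain that node-gyp needs to compile native addons
RUN apk add --no-cache python3 make g++
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build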
Exercises
Exercise 1: Create the multi-stage Dockerfile. Build it. Check the image size with docker images.
Exercise 2: Run the production image. Verify the app works at http://localhost:3000/health. (A run command sketch follows the exercises.)
Exercise 3: Try to run tsc inside the production container: docker run myapp-multi sh -c "tsc --version". It should fail — TypeScript is not in the production image.
Exercise 4: Why does the production stage use node:22-alpine instead of node:22?
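For Exercise 2, note that EXPOSE only documents the port; you still have to publish it when running the container (a sketch, assuming the myapp-multi tag):

docker run --rm -p 3000:3000 myapp-multi
curl http://localhost:3000/health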