Ortem Technologies
    Cloud & DevOps

    How to Containerize a Node.js Application with Docker: Step-by-Step Guide

    Mehul Parmar · March 10, 2026 · 11 min read
    Quick Answer

    To containerize a Node.js app: (1) create a Dockerfile using a slim Node base image (node:20-alpine); (2) copy package.json first and run npm ci before copying app code (enables Docker layer caching); (3) use a non-root user for security; (4) use a multi-stage build to keep the production image small; (5) add a .dockerignore file to exclude node_modules and .env; (6) use Docker Compose for local development with database and cache services. The resulting image should be under 200MB and start in under 3 seconds.


    Prerequisites

    • Docker Desktop installed (docker.com)
    • A Node.js application (Express, Fastify, NestJS, or similar)
    • Basic terminal familiarity

    Step 1: Create a .dockerignore File

    Before writing the Dockerfile, tell Docker what to exclude from the build context:

    node_modules
    .env
    .env.*
    .git
    .gitignore
    README.md
    dist
    coverage
    *.log
    

    This prevents your local node_modules from being copied into the image (we install fresh inside) and keeps sensitive .env files out.

    Step 2: Write a Production Dockerfile

    # ---- Build stage ----
    FROM node:20-alpine AS builder
    WORKDIR /app
    
    # Copy dependency files first (cache layer)
    COPY package.json package-lock.json ./
    RUN npm ci
    
    # Copy application code
    COPY . .
    
    # If you have a build step (TypeScript, etc.)
    # RUN npm run build
    
    # Drop devDependencies before the runtime copy
    RUN npm prune --omit=dev
    
    # ---- Runtime stage ----
    FROM node:20-alpine AS runtime
    WORKDIR /app
    ENV NODE_ENV=production
    
    # Create non-root user for security
    RUN addgroup -S appgroup && adduser -S appuser -G appgroup
    
    # Copy app code and pruned node_modules from builder, owned by the non-root user
    COPY --from=builder --chown=appuser:appgroup /app /app
    USER appuser
    
    # Expose port and define startup command
    EXPOSE 3000
    CMD ["node", "src/index.js"]
    

    Why multi-stage? The builder stage installs the full dependency tree (devDependencies included) so build steps can run, then prunes it down to production packages. The runtime stage ships only the application code and those production dependencies, keeping the image lean.

    Step 3: Build and Test the Image

    # Build the image
    docker build -t my-node-app:latest .
    
    # Run it locally
    docker run -p 3000:3000 --env-file .env my-node-app:latest
    
    # Test it
    curl http://localhost:3000/health
    

    Step 4: Add Docker Compose for Local Development

    # docker-compose.yml (the top-level "version" key is obsolete in Compose v2)
    services:
      api:
        build:
          context: .
          target: runtime
        ports:
          - "3000:3000"
        environment:
          NODE_ENV: development
          DATABASE_URL: postgresql://postgres:password@db:5432/myapp
          REDIS_URL: redis://cache:6379
        depends_on:
          db:
            condition: service_healthy
          cache:
            condition: service_started
        volumes:
          - .:/app          # Hot reload in development
          - /app/node_modules  # Don't overwrite container's node_modules
    
      db:
        image: postgres:16-alpine
        environment:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: password
          POSTGRES_DB: myapp
        volumes:
          - postgres_data:/var/lib/postgresql/data
        healthcheck:
          test: ["CMD-SHELL", "pg_isready -U postgres"]
          interval: 5s
          timeout: 5s
          retries: 5
    
      cache:
        image: redis:7-alpine
        volumes:
          - redis_data:/data
    
    volumes:
      postgres_data:
      redis_data:
    
    # Start all services
    docker compose up
    
    # Run in background
    docker compose up -d
    
    # View logs
    docker compose logs -f api
    
    # Stop
    docker compose down
    

    Step 5: Optimize Image Size

    Check your image size:

    docker images my-node-app
    

    Target: under 200MB for a typical Node.js API. Common bloat sources:

    • Using node:20 instead of node:20-alpine (adds ~600MB)
    • Including devDependencies in production (npm ci --omit=dev or npm prune --omit=dev fixes this)
    • Copying entire repo including test files (.dockerignore fixes this)

    Production Checklist

    • Use specific image tags (node:20.11-alpine), not latest
    • Non-root user in production
    • Health check endpoint (GET /health returning 200)
    • Graceful shutdown handler (SIGTERM → drain connections → exit)
    • No secrets in Dockerfile or docker-compose.yml (use environment injection)
    • Image pushed to private registry (AWS ECR, GCP Artifact Registry)

    Need help building a containerized deployment pipeline? Talk to our DevOps team or contact us to discuss your deployment architecture.

    Production Hardening: What Most Tutorials Miss

    The Dockerfile and Docker Compose setup above gets you running. These additional steps get you production-ready.

    Use a health check endpoint: Docker and Kubernetes use health checks to determine if your container is ready to receive traffic. Add a health check endpoint to your Express/Fastify app that returns 200 OK when the application is fully initialized and ready to serve requests. Configure the Docker HEALTHCHECK or Kubernetes readiness probe to call this endpoint. Without a health check, traffic can be routed to containers that are still initializing.

    Handle graceful shutdown: When Docker stops a container (SIGTERM signal), your Node.js application should finish processing in-flight requests before exiting. Listen for the SIGTERM signal and stop accepting new requests, allow in-flight requests to complete (with a timeout), and then exit cleanly. This prevents dropped requests during deployments.

    Set appropriate resource limits: In Docker Compose or Kubernetes, specify memory and CPU limits for your container. Node.js applications that experience memory leaks will eventually consume all available memory without limits, affecting other containers on the same host. A memory limit triggers the container to be restarted before it can consume all host resources.
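As an illustrative fragment to merge into the compose file above (the numbers are placeholders, not recommendations; recent Compose v2 releases apply `deploy.resources` limits outside Swarm as well):

```yaml
services:
  api:
    deploy:
      resources:
        limits:
          cpus: "1.0"    # at most one CPU core
          memory: 512M   # container is OOM-killed/restarted beyond this
        reservations:
          memory: 256M   # scheduler hint, not a hard cap
```

Pairing the container limit with Node's heap cap (e.g. `node --max-old-space-size=384 src/index.js`) keeps V8's heap ceiling safely below the container's memory limit.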

    Use environment-specific configuration: Configuration that differs between environments (database URLs, API keys, feature flags) should be provided as environment variables, never hardcoded in the Dockerfile or the application code. Use .env files for local development, Docker Compose environment sections for local multi-container setups, and Kubernetes Secrets or AWS Parameter Store for production configuration.
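A sketch of reading that configuration in Node, failing fast when a required value is missing (the variable names follow the compose file above; adapt them to your application):

```javascript
// config.js — environment-driven configuration that fails fast.
function requireEnv(name, fallback) {
  const value = process.env[name] ?? fallback;
  if (value === undefined) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

const config = {
  nodeEnv: requireEnv('NODE_ENV', 'development'),
  port: Number(requireEnv('PORT', '3000')),
  // Local-dev fallback shown here; in production, omit the fallback so a
  // missing DATABASE_URL crashes at startup rather than at the first query.
  databaseUrl: requireEnv('DATABASE_URL', 'postgresql://localhost:5432/myapp'),
};

module.exports = { config, requireEnv };
```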

    Implement structured logging: Production containers write logs to stdout/stderr rather than to files, and those logs should be structured JSON rather than unformatted text. JSON logs are queryable in centralized log systems (CloudWatch Logs, Datadog Logs, Elastic Stack), so you can filter and search by any field. Use a structured logging library such as pino, one of the fastest structured loggers in the Node.js ecosystem, rather than console.log for production applications.
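In production you would reach for pino itself, but as a dependency-free sketch of the output shape, a structured log line is just one JSON object per line on stdout:

```javascript
// structured-log.js — minimal JSON-lines logging sketch (pino emits a
// similar shape: level, time, msg, plus arbitrary extra fields).
function logLine(level, msg, fields = {}) {
  const entry = { level, time: new Date().toISOString(), msg, ...fields };
  const line = JSON.stringify(entry);
  process.stdout.write(line + '\n'); // containers log to stdout, not files
  return line; // returned so callers and tests can inspect it
}

// Every field becomes individually queryable in CloudWatch/Datadog/Elastic.
logLine('info', 'request completed', { requestId: 'abc-123', durationMs: 42 });
```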

    Container image security scanning: Before deploying any container image to production, scan it for known vulnerabilities. Trivy is a widely used open-source scanner: it checks the base OS packages, application dependencies from npm lock files, and other known vulnerability sources, reporting findings by severity level. Integrate Trivy into your CI pipeline to block deployment of images with critical CVEs.
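As a sketch of that CI gate, here is a GitHub Actions step using the community trivy-action (adapt the image name and severity threshold to your pipeline):

```yaml
# Fails the pipeline when the freshly built image carries CRITICAL CVEs.
- name: Scan image with Trivy
  uses: aquasecurity/trivy-action@master
  with:
    image-ref: my-node-app:latest
    severity: CRITICAL
    exit-code: "1"   # non-zero exit blocks the deployment
```

Locally, the equivalent check is `trivy image --severity CRITICAL --exit-code 1 my-node-app:latest`.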

    At Ortem Technologies, container-based deployment is standard on all of our client projects: Docker for consistent development environments, Kubernetes or ECS for production orchestration, and security scanning integrated into CI pipelines. Talk to our DevOps team about your containerization needs.

    Multi-Stage Builds: The Production Image Size Strategy

    The multi-stage Dockerfile approach described above uses two stages: a builder stage that installs all dependencies (including devDependencies used during build or compilation), and a runtime stage that contains only the production artifacts and production dependencies.

    For TypeScript Node.js applications, the multi-stage approach is particularly valuable: the builder stage runs TypeScript compilation, and the runtime stage contains only the compiled JavaScript and production dependencies. A TypeScript application with extensive devDependencies (TypeScript compiler, ts-node, type definitions) that would produce an 800MB single-stage image can be reduced to 200MB with a multi-stage build.

    The practical size target: a production Node.js application image should be under 200MB. Anything larger usually indicates a large base image (use node:20-alpine or node:20-slim rather than node:20), heavy production dependencies that could be replaced by lighter alternatives, or a missing npm prune --omit=dev after dependency installation.

    Google's Distroless images (for example gcr.io/distroless/nodejs20-debian12) take the minimization further: they contain only the Node.js runtime and its dependencies, with no shell, no package manager, and no other programs. Distroless images reduce image size significantly and eliminate the attack surface of having a shell in the container. The tradeoff: debugging a distroless container requires attaching a temporary container built from a standard image, since there is no shell to exec into.
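A sketch of the runtime stage swapped to Distroless (the builder stage is unchanged from the Dockerfile above; the tag shown is one of Google's published Node.js 20 variants):

```dockerfile
# ---- Runtime stage (Distroless variant) ----
FROM gcr.io/distroless/nodejs20-debian12 AS runtime
WORKDIR /app

# Copy app code and pruned production node_modules from the builder stage
COPY --from=builder /app /app

EXPOSE 3000
# Distroless nodejs images already use `node` as the entrypoint,
# so CMD is just the script to run.
CMD ["src/index.js"]
```

The `:nonroot` tag variants of these images run as an unprivileged user out of the box, replacing the manual adduser step.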

    Talk to our DevOps team about containerization best practices or get help containerizing your Node.js application.

    About Ortem Technologies

    Ortem Technologies is a premier custom software, mobile app, and AI development company. We serve enterprise and startup clients across the USA, UK, Australia, Canada, and the Middle East. Our cross-industry expertise spans fintech, healthcare, and logistics, enabling us to deliver scalable, secure, and innovative digital solutions worldwide.


    About the Author

    Mehul Parmar

    Digital Marketing Head, Ortem Technologies

    Mehul Parmar is the Digital Marketing Head at Ortem Technologies, leading the marketing team under the direction of Praveen Jha. A seasoned digital marketing expert with 15 years of experience and 500+ projects delivered, he specialises in SEO, SEM, SMO, Affiliate Marketing, Google Ads, and Analytics. Certified in Google Ads & Analytics, he is proficient in platforms including WordPress, Shopify, Magento, and ASP.NET. Mehul writes about growth marketing, search strategies, and performance campaigns for technology brands.

