
Docker for Full-Stack Developers — From Zero to Production

A practical guide to containerizing full-stack applications with Docker and Docker Compose, covering Node.js, Python, and database setups I use in production.


After spending weeks fighting environment inconsistencies across dev machines, I made Docker non-negotiable on every project. Here's everything I wish I'd known when I started.

Why Docker Changed How I Work

Before Docker, the classic "works on my machine" problem burned hours of debugging time. When I joined Terros and started working on multiple client projects simultaneously — React, Adonis.js, PostgreSQL, different Node versions — I needed a way to keep environments isolated and reproducible.

Docker solved that. Today every project I start has a docker-compose.yml from day one.

The Basic Full-Stack Setup

Here's the compose file pattern I use for a Node.js + PostgreSQL project:

version: "3.8"  # optional: Compose v2 ignores this key and warns that it's obsolete
 
services:
  api:
    build:
      context: ./api
      dockerfile: Dockerfile
    ports:
      - "3333:3333"
    environment:
      - DATABASE_URL=postgres://user:password@db:5432/myapp
      - NODE_ENV=development
    volumes:
      - ./api:/app
      - /app/node_modules
    depends_on:
      db:
        condition: service_healthy
    command: npm run dev
 
  db:
    image: postgres:15-alpine
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
      POSTGRES_DB: myapp
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user -d myapp"]
      interval: 5s
      timeout: 5s
      retries: 5
 
  frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    volumes:
      - ./frontend:/app
      - /app/node_modules
    environment:
      # browser-side variable: the browser runs outside Docker, so localhost (the mapped port) is correct here
      - NEXT_PUBLIC_API_URL=http://localhost:3333
 
volumes:
  postgres_data:
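
This setup assumes a dev Dockerfile in each service directory. Here's a minimal sketch for the api one; the port and dev script are assumptions carried over from the compose file above, so adjust for your stack:

FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
EXPOSE 3333
# compose overrides this with "npm run dev"; deps installed here survive
# the bind mount thanks to the anonymous /app/node_modules volume
CMD ["npm", "run", "dev"]

Bring the whole stack up with docker compose up --build and tail the API with docker compose logs -f api.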

Multi-Stage Builds for Production

Development Dockerfiles with hot-reload are fine locally, but production images need to be lean. The catch: the build step usually needs devDependencies (TypeScript, bundlers), while the final image should ship only production deps — exactly the separation a multi-stage build gives you.

# Stage 1 — deps (full install; the build step needs devDependencies)
FROM node:20-alpine AS deps
WORKDIR /app
COPY package*.json ./
RUN npm ci

# Stage 2 — build (node_modules won't be overwritten: it's in .dockerignore)
FROM node:20-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build

# Stage 3 — runner (production dependencies only)
FROM node:20-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=builder /app/dist ./dist
EXPOSE 3333
CMD ["node", "dist/server.js"]

Result: dev image ~800MB → production image ~120MB.
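
To check the result yourself, build the production image and list its size; the myapp-api name is a placeholder:

docker build -t myapp-api:prod ./api
docker image ls myapp-api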

Python / Django Setup

For Django projects (like Luxifia), I add a few tweaks:

FROM python:3.11-slim

WORKDIR /app

# skip .pyc files and flush logs straight to stdout
ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1

# build dependencies for psycopg2 (the PostgreSQL driver)
RUN apt-get update && apt-get install -y \
    libpq-dev gcc \
    && rm -rf /var/lib/apt/lists/*

# copy requirements alone first so this layer caches until they change
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 8000
CMD ["gunicorn", "config.wsgi:application", "--bind", "0.0.0.0:8000"]

Common Mistakes I Made Early On

1. Not using .dockerignore — your node_modules and .git will bloat your image massively.

node_modules
.git
.env
*.log
dist
.next
__pycache__

2. Running as root — always add a non-root user in production:

# Alpine/BusyBox syntax; on Debian-based images use groupadd and useradd instead
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser

3. Hardcoding secrets — never ENV SECRET_KEY=mysecret in a Dockerfile. Anything baked in with ENV is visible to anyone who runs docker history on the image. Use --env-file at runtime or Docker secrets (see the sketch after this list).

4. Not setting resource limits — in production compose, always cap memory:

deploy:
  resources:
    limits:
      memory: 512M
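
For mistake #3, here's what runtime injection looks like in practice; .env.production is a placeholder for a file that stays out of git and out of the image:

# one-off run: secrets injected at start, nothing baked into the layers
docker run --env-file .env.production myapp-api:prod

# or per-service in compose
services:
  api:
    env_file:
      - .env.production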

Networking Between Services

One thing that trips people up: inside Docker Compose, services talk to each other by service name, not localhost. Compose puts every service on a shared network where each service name is a DNS entry, and localhost inside a container points back at that container itself.

# Wrong — this only works outside Docker
DATABASE_URL=postgres://user:pass@localhost:5432/db

# Correct — use the service name
DATABASE_URL=postgres://user:pass@db:5432/db
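
A quick way to verify the wiring is to resolve the service name from inside a container. Assuming the api image is Alpine-based, BusyBox ships nslookup:

docker compose exec api nslookup db

Each compose project gets its own default network, and every service name becomes a DNS entry on it.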

What I Use in Production

Every production deployment I run uses:

  • Nginx as reverse proxy (separate container)
  • Certbot for SSL (volume-mounted certs)
  • Watchtower for automated image updates
  • GitHub Actions to build and push to the registry, then SSH + docker compose pull && docker compose up -d (sketched below)
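
The SSH step itself is a one-liner; the host and path here are placeholders:

# runs from the GitHub Actions job after the new image is pushed
ssh deploy@my-server 'cd /srv/myapp && docker compose pull && docker compose up -d && docker image prune -f'

The docker image prune -f at the end clears out the old, now-dangling image layers after each update.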

Docker made me dramatically faster at shipping to production. The initial setup overhead pays back within the first deployment.