Introduction
Working in application deployment for over 7 years, I’ve observed how Docker containers revolutionize the way we build and distribute software. Using containerization increases developer productivity and streamlines CI/CD workflows. This tutorial shows practical, production-ready techniques for setting up Docker, building images, managing containers, and troubleshooting real issues.
Docker, first released in 2013, remains the de facto standard container platform; the examples below use Docker 24.x-compatible CLI syntax and pragmatic patterns you can adopt immediately. Examples focus on Node.js (current LTS: Node 20) and include commented Dockerfiles, multi-stage builds, CI/CD automation, and security guidance.
By following this guide you’ll be able to containerize simple apps, run multi-container stacks with Docker Compose, persist application data with volumes, scan images for vulnerabilities, and troubleshoot common failures with copy-paste commands.
Prerequisites
Before you begin, ensure you have the following:
- A computer with a supported OS (Windows, macOS, or Linux).
- Basic knowledge of command-line interface (CLI) commands.
- Familiarity with application deployment concepts.
Introduction to Docker and Containerization
What is Docker?
Docker is a platform that enables developers to automate the deployment of applications inside lightweight containers. Containers package an application with its dependencies, producing consistent runtime behavior across environments. Containerization reduces "works on my machine" incidents and simplifies reproducible builds.
- Consistent environments: Containers ensure identical runtime environments across development, test, and production.
- Isolation: Each container runs in its own namespace and cgroup, reducing dependency conflicts.
- Efficient resource use: Containers share the host kernel and have lower overhead than full VMs.
- Rapid startup: Containers start in milliseconds to seconds, enabling fast scaling and iteration.
- Scalability: Containers can be replicated and orchestrated to handle increased load.
Architecture Overview
Visualizing Docker's layered image model and common network types helps you reason about build cache, image size, and inter-container communication. Keep two mental models in mind as you read: an image is a stack of read-only layers topped by a thin writable container layer, and containers talk to each other over bridge, host, or overlay networks (covered in Docker Networking Basics below).
Setting Up Your Docker Environment
Installing Docker
Download Docker Desktop for macOS or Windows, or install Docker Engine on Linux. Downloads and documentation are at https://www.docker.com/; the CLI reference lives at https://docs.docker.com/.
On Ubuntu you can install the engine via the package manager (docker.io is Ubuntu's build; Docker's own apt repository provides newer docker-ce releases):
sudo apt update
sudo apt install docker.io
Verify Docker is installed and the daemon is running:
docker --version
sudo systemctl start docker # on Linux systems using systemd
sudo systemctl enable docker
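As a final smoke test, run the tiny hello-world image; if the daemon, CLI, and registry access all work, it prints a confirmation message:
docker run --rm hello-world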
Quick link: if you encounter Desktop-specific issues, see the Docker Desktop Considerations section below.
Docker Desktop Considerations
Docker Desktop is convenient for macOS and Windows development but has operational and licensing considerations you should plan for before adopting it across a team.
Licensing and Enterprise Use
Docker Desktop moved to a licensing model that requires commercial users in some organizations to have a paid subscription. Evaluate your organization's policy and, when licensing costs are a concern for larger fleets, consider deploying Docker Engine on Linux hosts or using cloud-hosted container build services for CI/CD builds.
Resource Consumption and Performance
Docker Desktop runs a lightweight VM or WSL2 backend; this can consume significant CPU, memory, and disk space if left at defaults. Common mitigations:
- Adjust CPU and memory allocation in Docker Desktop Preferences to match your machine and workload.
- Configure the disk image location and clean up periodically: prune unused images, containers, and volumes with docker system prune (take care not to delete data you still need).
- On Windows, prefer the WSL2 backend for better I/O performance and lower overhead where available.
Security and Best Practices
Avoid running unnecessary host mounts as writable, and minimize privileged containers when using Docker Desktop. For CI or production builds, use dedicated Linux build agents or cloud builds to reduce the attack surface on developer machines.
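For example, mount source directories read-only unless a container genuinely needs to write to them:
# Read-only bind mount: the container can read the project but cannot modify it
docker run --rm -v "$PWD":/src:ro alpine ls /src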
Quick diagnostics
# Check Docker system information
docker info
# See current disk usage by Docker
docker system df
# Prune stopped containers, dangling images, unused networks, and build cache
docker system prune
# Add --volumes to also remove unused volumes; volume data is not recoverable, so review first
docker system prune --volumes
Building Your First Docker Container
Creating a Simple Dockerfile (with comments)
This example uses the official Node 20 image and creates a non-root user inside the image. Comments explain each step and rationale.
# Use Node 20 LTS (Debian-based image)
FROM node:20
# Create and set working directory
WORKDIR /app
# Copy package manifests first to leverage Docker layer cache
COPY package*.json ./
# Install only production dependencies for smaller image
RUN npm ci --omit=dev
# Copy application source code into the image
COPY . /app
# Create a dedicated non-root user and group for running the app (best practice)
RUN addgroup --system app && adduser --system --ingroup app app
# Switch to the non-root user
USER app
# Expose the application's listening port
EXPOSE 3000
# Default command to run the app
CMD ["node", "app.js"]
Why this approach?
- Separating package.json copy & install improves build caching.
- Running as a non-root user reduces attack surface and limits what an exploited process can do.
- Using Node 20 brings current LTS security fixes and features.
Example Node.js application (app.js) with comments
// app.js
// Minimal HTTP server used for tutorial purposes
const http = require('http');
const port = process.env.PORT || 3000;
// Simple request handler; in a real API you'd use Express or Fastify
const requestHandler = (request, response) => {
// Basic health endpoint
if (request.url === '/health') {
response.writeHead(200, { 'Content-Type': 'application/json' });
return response.end(JSON.stringify({ status: 'ok' }));
}
// Default response
response.writeHead(200, { 'Content-Type': 'text/plain' });
response.end('Hello World!');
};
const server = http.createServer(requestHandler);
server.listen(port, () => {
console.log(`Server running at http://localhost:${port}/`);
});
Build the image locally
docker build -t my-app:1.0 .
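Run the image and verify the health endpoint as a quick smoke test (assumes host port 3000 is free):
# Start the container in the background, mapping host port 3000 to the app
docker run -d --name my-app -p 3000:3000 my-app:1.0
# The /health endpoint from app.js should return {"status":"ok"}
curl http://localhost:3000/health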
Note: for simple developer images you can use USER node, the unprivileged user built into the official Node images. In production images the explicit addgroup/adduser pattern provides a consistent, auditable user setup (see Best Practices).
Project: Build an Evolving API
Follow a small, progressive project while reading this tutorial. The project helps tie examples together and demonstrates how to evolve a containerized app:
- Start: a single-file HTTP server (see Building Your First Docker Container).
- Add persistence: introduce MongoDB with Docker Compose to persist simple items (see Docker Compose and Docker Volumes).
- Harden: add non-root user, reduce filesystem permissions, and scan images with a vulnerability scanner (see Best Practices and CI/CD).
- Scale: add a health check and run multiple replicas behind a load balancer or move to orchestration (see Advanced Topics).
Example: when you add MongoDB, update the application to read the database host from the DB_HOST environment variable and run the stack with Compose. This progressive approach reinforces concepts and produces a deployable artifact you can test and scan in CI. A sketch of the DB_HOST change follows.
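A minimal sketch of that change, assuming the official mongodb driver is a dependency (the db.js filename and connectToDb helper are illustrative, not part of the app above):
// db.js -- illustrative helper for the evolving API project
const { MongoClient } = require('mongodb');
// Compose injects DB_HOST (the "db" service name); fall back to localhost for local runs
const dbHost = process.env.DB_HOST || 'localhost';
const client = new MongoClient(`mongodb://${dbHost}:27017`);
async function connectToDb() {
  await client.connect();        // establish the connection once at startup
  return client.db('tutorial');  // hypothetical database name
}
module.exports = { connectToDb };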
Managing Docker Containers and Images
Container Management Basics
Use these commands to observe and control containers. All commands are shown as copy-paste blocks for faster troubleshooting.
# List running containers
docker ps
# List all containers (including stopped)
docker ps -a
# Stop a running container by id or name
docker stop [container_id_or_name]
# Remove a stopped container
docker rm [container_id_or_name]
# View real-time resource usage (CPU/Memory/Network/IO)
docker stats
Image Management
Manage local images to free disk space and avoid using stale base images.
# List images on local machine
docker images
# Remove an unused image (force with -f if necessary)
docker rmi [image_id_or_name]
# Remove unused images (-a removes every image not referenced by a container, not just dangling ones)
docker image prune -a
Tip: Use docker image ls --format '{{.Repository}}:{{.Tag}} {{.Size}}' to see sizes and tags. Regularly pruning images and relying on multi-stage builds reduces image size and speeds deployments.
Docker Compose
Defining Multi-Container Applications
Docker Compose describes multi-container stacks in YAML. The following Compose v3 example runs a Node web service and MongoDB with a named volume for persistence. The example uses Node 20 for the app image.
version: '3.8'
services:
web:
image: node:20
ports:
- "3000:3000"
volumes:
- .:/app
working_dir: /app
environment:
- DB_HOST=db
command: node app.js
db:
image: mongo:6
volumes:
- dbdata:/data/db
volumes:
dbdata:
driver: local
Start the stack (Compose V2 integrated into the Docker CLI):
docker compose up -d
If you use the standalone docker-compose binary in older environments, run docker-compose up -d instead. Note that Compose V2 ignores the top-level version key; it is kept above for compatibility with older tooling. For installation steps see https://docs.docker.com/, and for the Compose project itself see https://github.com/docker/compose.
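If the web service should wait for MongoDB before starting, Compose supports service health checks with conditional startup ordering. A minimal sketch, merged into the services above (the mongosh ping test assumes the mongo:6 image, which ships mongosh):
  db:
    image: mongo:6
    healthcheck:
      # Ping the server; the service is "healthy" once this succeeds
      test: ["CMD", "mongosh", "--quiet", "--eval", "db.adminCommand('ping')"]
      interval: 10s
      timeout: 5s
      retries: 5
  web:
    depends_on:
      db:
        condition: service_healthy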
Docker Volumes
Understanding Persistent Data Storage
Volumes are the recommended way to persist container data. They are managed by Docker and survive container restarts and removal.
# Create a named volume
docker volume create my-volume
# Mount the volume into a container
docker run -d --name my-service -v my-volume:/data my-image
Use named volumes for production data and bind mounts for development where live-editing of files is needed. On Linux, watch for UID/GID mismatches when mapping host directories; adjust permissions or use named volumes to avoid permission headaches. If you need consistent ownership, run the container with a user matching the host UID/GID, as shown below.
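For example, in development you can run the container as your host user so files created in a bind mount stay owned by you (a development-only pattern):
# Match the container process to the current host UID/GID
docker run --rm --user "$(id -u):$(id -g)" -v "$PWD":/app -w /app node:20 npm test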
Docker Networking Basics
How Containers Communicate
Docker supports multiple network drivers; choose the driver that fits your topology:
- bridge (default): isolates containers on a single host; user-defined bridge networks additionally provide DNS-based discovery between containers.
- host: container shares the host network namespace — useful for low-latency network apps but reduces isolation.
- overlay: used for cross-host communication in Swarm or other orchestrators.
# Create a custom bridge network
docker network create my-network
# Run containers attached to that network
docker run -d --name db --network my-network mongo
docker run -d --name web --network my-network -e DB_HOST=db my-webapp
Use named networks for Compose-based stacks and service discovery. For production, prefer overlay networks in orchestrated clusters (Kubernetes networking is a separate, more advanced topic — see Advanced Topics).
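You can confirm that Docker's embedded DNS resolves service names on a user-defined network with a throwaway container (assumes the small alpine image, whose busybox ping is enough for this check):
# Resolve and ping the "db" container by name from inside the network
docker run --rm --network my-network alpine ping -c 1 db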
Container Registries and Docker Hub
Pushing Images to Docker Hub
To share images, push to a registry such as Docker Hub or a private registry. Use these commands:
# Log in to Docker Hub
docker login
# Tag an image for your namespace
docker tag my-app username/my-app:1.0
# Push the tagged image
docker push username/my-app:1.0
For secure registries, use token-based access and limit push permissions. Integrate image pushes into CI/CD pipelines so images are immutable artifacts produced by builds (example CI workflow in CI/CD and Automated Image Builds).
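A common way to make images truly immutable is to tag them with the commit that produced them rather than a floating tag; a sketch using the short git SHA:
# Tag and push an image uniquely addressable by commit
docker tag my-app username/my-app:$(git rev-parse --short HEAD)
docker push username/my-app:$(git rev-parse --short HEAD)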
CI/CD and Automated Image Builds
Automated rebuilds in CI ensure base image updates and dependency patches are applied. Below is a practical GitHub Actions example that builds an image, pushes it to Docker Hub, and scans it with Trivy. The workflow uses GitHub secrets for credentials (set DOCKERHUB_USERNAME and DOCKERHUB_TOKEN in your repo settings).
GitHub Actions example (build, scan, push)
name: CI Build and Push
on:
  push:
    branches: [ main ]
jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      - name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Build and push
        uses: docker/build-push-action@v4
        with:
          context: .
          push: true
          tags: ${{ secrets.DOCKERHUB_USERNAME }}/my-app:latest
      - name: Scan image with Trivy
        run: |
          # Install Trivy via its official install script (per the Trivy README), then scan
          curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sh -s -- -b /usr/local/bin
          trivy image --exit-code 1 --severity HIGH,CRITICAL ${{ secrets.DOCKERHUB_USERNAME }}/my-app:latest
Notes:
- Replace the Trivy install step with your preferred scanner. The snippet fails the build on HIGH or CRITICAL findings via --exit-code 1; adjust the severity threshold to suit your policy.
- On long-running projects, schedule a periodic rebuild (e.g., nightly) to pick up base image security fixes even without source changes, as shown below.
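A scheduled trigger for those periodic rebuilds looks like this in GitHub Actions (cron times are in UTC):
on:
  push:
    branches: [ main ]
  schedule:
    # Nightly at 03:00 UTC to pick up patched base images without source changes
    - cron: '0 3 * * *'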
GitHub Actions documentation is at https://docs.github.com/en/actions. Keep credentials in CI secrets and avoid embedding tokens in pipeline definitions.
Best Practices for Docker Deployment
Optimizing Your Deployment Strategy
Key practices to adopt with concrete suggestions:
- Use multi-stage builds to remove build-time dependencies and reduce final image size — see example below.
- Set a non-root user in images (USER directive) to minimize attack surface.
- Regularly update base images and trigger automated rebuilds in CI to get security fixes.
- Scan images with vulnerability scanners such as Trivy (https://github.com/aquasecurity/trivy) before deployment.
- Minimize privileges: use read-only filesystem mounts where possible and drop Linux capabilities (see the run command sketch after this list).
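A minimal sketch of such a locked-down run command, using standard Docker flags against the tutorial image:
# Read-only root filesystem, a writable tmpfs for /tmp, and no Linux capabilities
docker run -d --read-only --tmpfs /tmp --cap-drop ALL \
  --security-opt no-new-privileges -p 3000:3000 my-app:1.0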
Multi-stage build example (compile + runtime)
# Build stage: install dev dependencies and build assets
FROM node:20 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . /app
RUN npm run build # e.g., transpile/minify assets
# Runtime stage: smaller surface area, production deps only
FROM node:20
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY package*.json ./
RUN npm ci --omit=dev
RUN addgroup --system app && adduser --system --ingroup app app
USER app
EXPOSE 3000
CMD ["node", "dist/server.js"]
Non-root user consistency
Use the addgroup/adduser pattern in Dockerfiles for production images so the runtime user is explicit and independent of the base image's built-in users. This avoids inconsistencies like using USER node in one place and creating app users elsewhere — adopt one consistent pattern across your images.
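A quick way to confirm an image runs as the intended user (assumes the my-app:1.0 image built earlier, whose Debian base includes whoami):
# Should print "app", not "root"
docker run --rm my-app:1.0 whoami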
Other operational tips
- Prefer specific base versions during builds, but reference LTS or a known tag in docs (e.g., Node 20). In CI, consider automated rebuilds rather than permanently fixed tags for long-lifecycle projects.
- Use docker build --no-cache occasionally in CI to force fresh installs of base layers when you suspect stale caches.
- Document the image creation process and add Dockerfile linting to CI (tools like hadolint work well; see the example below).
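For example, hadolint can lint a Dockerfile with no local install by running its official image (usage follows the hadolint README):
# Lint the Dockerfile via the hadolint container
docker run --rm -i hadolint/hadolint < Dockerfile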
| Best Practice | Benefit |
|---|---|
| Use multi-stage builds | Reduces final image size and removes build-time dependencies |
| Organize Dockerfiles & use non-root users | Improves security and maintainability |
| Schedule CI rebuilds and scan images | Enhances security posture |
Advanced Topics: Orchestration & Runtime Security
This section briefly introduces orchestration and runtime hardening topics so you can plan the transition from single-host Compose stacks to production orchestration.
Orchestration choices
Common production choices include Kubernetes for large-scale deployments and Docker Swarm for simpler setups. When moving to orchestration, consider:
- Service discovery and DNS-based routing (built-in in orchestrators).
- Declarative manifests (Kubernetes YAML, Helm charts) for reproducible deployments.
- Health checks and readiness/liveness probes so the platform can manage lifecycle and rolling updates (a Dockerfile-level health check is sketched below).
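Docker itself supports container-level health checks declared in the Dockerfile; a sketch against the /health endpoint from earlier, using only Node's standard library so no curl is required in the image:
# Poll /health every 30s; mark the container unhealthy after 3 failed checks
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
  CMD node -e "require('http').get('http://localhost:3000/health', r => process.exit(r.statusCode === 200 ? 0 : 1)).on('error', () => process.exit(1))"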
Runtime security and policy
Harden runtime environments with tools and practices such as:
- Image signing and provenance to ensure only trusted images are deployed.
- Admission controllers and policy enforcement in Kubernetes (e.g., Pod Security admission, which replaced the deprecated pod security policies).
- Least-privilege runtime settings: seccomp, AppArmor, dropping capabilities, and using read-only root filesystems when possible.
These topics are complex; treat this section as a checklist when planning production readiness.
Troubleshooting Common Docker Issues
Common Problems and Solutions
Use the following commands to diagnose common issues. Each command block is ready to copy and paste.
# View container logs
docker logs [container_id_or_name]
# Follow logs for a Compose stack
docker compose logs -f
# Inspect a container (network settings, mounts, env vars)
docker inspect [container_id_or_name]
# Check running containers and ports
docker ps
# Check image build errors: build with higher verbosity
docker build --progress=plain -t my-app:debug .
# Check file/volume permissions on host (example)
ls -la /path/to/host/mounted/dir
Common problems and quick fixes:
- Container not starting: Inspect logs (see command above) and check exit codes with docker inspect --format='{{.State.ExitCode}}' [container]. Ensure your CMD/ENTRYPOINT runs successfully and that required env vars or dependent services are reachable.
- Image build fails: Re-run with --progress=plain to see the full output, and ensure build dependencies are present in the Dockerfile (e.g., apt packages like build-essential for native modules). Add build-only packages in the build stage of a multi-stage build.
- Port conflicts: Use docker ps to identify active port mappings; change host ports or stop conflicting containers.
- Permission denied: Verify UID/GID ownership on bind mounts and prefer named volumes for production workloads. If using bind mounts for development, map user IDs explicitly or adjust permissions on the host.
- Network issues: Confirm containers share the same user-defined network and inspect container IPs with docker inspect. For cross-host networking, use overlay networks under orchestration.
Continuing Your Docker Journey
Next Steps for Further Learning
After mastering local containers and Compose, explore orchestration with Kubernetes or Docker Swarm for production deployments. Learn about image signing, admission controllers, and runtime security for hardened deployments (see Advanced Topics).
For reference material, see the Docker documentation: https://docs.docker.com/.
Key Takeaways
- Docker containers provide consistent, portable runtime environments.
- Use multi-stage builds and non-root users to minimize image size and surface area.
- Persist data with volumes and manage networks for service communication.
- Automate scanning and updates in CI to reduce security risk; see the CI/CD section for an example.
- Use the provided copy-paste commands to speed troubleshooting and debugging; consult the Troubleshooting section when things go wrong.
Conclusion
Docker is a practical tool for modern application deployment. By combining clear Dockerfiles, multi-stage builds, Compose for multi-service stacks, and a small set of diagnostic commands, you can create repeatable, secure, and maintainable deployments. Continue practicing with the evolving API project and integrate image scanning and CI/CD to move toward production readiness.