Beginner's Docker Tutorial: Step-by-Step Guide

Introduction

As a DevOps Engineering Manager with 11 years of experience in Docker, I've seen containerization become essential for modern software delivery. Containerization simplifies dependency management, improves deployment consistency, and speeds up delivery pipelines. In this tutorial you'll learn how to set up Docker, build images, run containers, and apply practical techniques that make a Node.js web app reproducible across environments.

By following the hands-on steps below, you'll create a runnable container image, learn how to manage containers and images, and explore Compose, networking, and persistent storage. These fundamentals prepare you for orchestration tools like Kubernetes and for integrating containers into CI/CD pipelines.

Prerequisites

  • Basic terminal and shell familiarity (running commands, editing files).
  • Node.js installed locally for building and testing the sample app. This tutorial uses the Node 16 LTS line; newer LTS releases work the same way.
  • A code editor (VS Code or similar) and Git for cloning or tracking your project files.
  • Sufficient disk space and a 64-bit OS with hardware virtualization enabled for Docker Desktop or Docker Engine.

Installing Docker on Your System

System Requirements and Preparation

Docker supports Windows, macOS, and many Linux distributions. Typical requirements are a 64-bit OS and enabled hardware virtualization in the BIOS/UEFI. On Windows, enable WSL 2 and install Docker Desktop; on macOS, install Docker Desktop. For Linux, install the Docker Engine package for your distribution (Docker Engine is commonly packaged for Debian/Ubuntu and RHEL/CentOS families).

After installation, verify Docker is available by running:

docker --version

If the daemon is not running, follow platform-specific instructions below to start it and retry verification.
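
Note that docker --version only checks the client binary. To confirm the client can actually reach the daemon, run docker info; an error here usually means the daemon is not running:

# prints client and server details, or an error if the daemon is unreachable
docker info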

Starting the Docker daemon

Use the platform-specific commands below to start or confirm the Docker daemon.

  • On Linux with systemd, start and enable the service:
    # start docker now
    sudo systemctl start docker
    # enable on boot
    sudo systemctl enable docker
    # check status
    sudo systemctl status docker
    
  • On Linux with SysV init (older distros), use the service wrapper:
    sudo service docker start
    sudo service docker status
    
  • On Windows and macOS, ensure Docker Desktop is running (start it from the Start Menu on Windows or Applications on macOS). On Windows, verify the WSL 2 backend is enabled in Docker Desktop settings.
  • After starting the daemon on Linux, optionally add your user to the docker group to run Docker without sudo:
    sudo usermod -aG docker $USER
    # then log out and log back in for group changes to apply
    

If you still see permission or daemon connection issues, check the system logs (e.g., journalctl -u docker.service on systemd systems) for errors such as storage driver failures or missing kernel features.

Docker Desktop GUI (overview)

Many beginners start with Docker Desktop's graphical interface. The GUI provides quick visibility into containers, images, volumes, and networks, which is useful for learning and troubleshooting before moving primarily to the CLI.

  • Containers: view running/stopped containers, start/stop, view logs, and open a terminal into a container.
  • Images: inspect local images, remove unused ones, and review vulnerability findings (Docker Desktop surfaces Docker Scout results when available; Scout replaced the deprecated docker scan command).
  • Volumes & Networks: list named volumes, inspect them, and remove unused ones; view custom networks and connected containers.
  • Settings: configure resources allocated to the Docker VM (CPU, memory), enable/disable Kubernetes, WSL integration (Windows), and adjust experimental features.

Tip: use the GUI to quickly validate whether a container has started successfully and to examine logs when the CLI output is not sufficient. For CI and production workflows, prefer reproducible CLI commands and declarative YAML manifests (Compose) that can be version-controlled.

Understanding Docker Images and Containers

What Are Docker Images?

Docker images are read-only, layered templates that include application code, runtime, libraries, and metadata. Layers make images efficient: unchanged layers are cached and reused during builds, which speeds up iterative development.
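
To see layering in practice, inspect any local image with docker history, which prints each layer, the Dockerfile instruction that created it, and its size (this assumes you have already pulled an image such as node:16-alpine; the pull step appears later in this tutorial):

# show the layers of a local image, newest first
docker history node:16-alpine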

What Is a Container?

A container is a runnable instance of an image β€” an isolated process with its own filesystem (from the image), network interfaces, and resource constraints. Containers are ephemeral by default; use volumes to persist data.
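
You can see this lifecycle in miniature with the standard hello-world image:

# pulls the image if missing, starts an isolated process, prints a message,
# and (because of --rm) removes the container when it exits
docker run --rm hello-world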

Creating Your First Docker Container

Base image: pulling a Node.js runtime

Before building, you can pull a Node.js base image to cache the base layer locally. For this tutorial we target Node 16 LTS variants. Choose node:16-alpine for smaller images when compatible with your dependencies, or node:16 (Debian-based) if you need broader binary compatibility.

# pull an appropriate Node.js runtime
docker pull node:16-alpine

Build reproducibility note: for production builds prefer pinning to specific patch versions to avoid surprises from implicit updates. Example: node:16.20.2-alpine. Pinning a full tag (major.minor.patch + variant) ensures you get the same base image across rebuilds; combine this with a scheduled process in CI to update and re-test pinned images regularly.
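
For even stricter reproducibility you can pin by content digest instead of by tag. A short sketch (the digest shown is a placeholder; copy the real one from the command output):

# pull a precisely pinned tag
docker pull node:16.20.2-alpine

# list local images together with their content digests
docker images --digests node

# then reference the digest in your Dockerfile, e.g.:
# FROM node@sha256:<digest>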

Pulling a base image is optional; docker build will fetch it automatically if missing. Pulling it explicitly can speed up consecutive local builds and CI caching behavior.

Project files (runnable example)

This example uses the Node.js 16 runtime and Express 4.18.2. The files below are everything you need to build and run the image locally.

package.json

{
  "name": "my-node-app",
  "version": "1.0.0",
  "description": "Simple Node.js app for Docker tutorial",
  "main": "app.js",
  "scripts": {
    "start": "node app.js"
  },
  "dependencies": {
    "express": "^4.18.2"
  }
}

app.js

const express = require('express');
const app = express();
const PORT = process.env.PORT || 3000;

app.get('/', (req, res) => {
  res.json({ message: 'Hello from Dockerized Node.js app!' });
});

app.listen(PORT, () => {
  console.log(`Server listening on port ${PORT}`);
});

Dockerfile (builds on Node 16)

Best practice: use an official Node image, install dependencies, copy only what's needed, and run the app as a non-root user where possible. In this example we use node:16-alpine in the final image for a smaller footprint and a reduced attack surface compared with full Debian-based images.

# Use official Node 16 LTS variant (alpine for smaller image size)
FROM node:16-alpine

# Create app directory
WORKDIR /usr/src/app

# Install dependencies first (cacheable)
COPY package*.json ./
RUN npm ci --only=production

# Copy application code
COPY . .

# Create a non-root user and switch to it (security best practice)
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser

EXPOSE 3000
CMD [ "node", "app.js" ]

Using a .dockerignore file

Include a .dockerignore to keep your build context small and to avoid copying sensitive files into the image. A small build context speeds uploads to the daemon and reduces accidental inclusion of secrets.

Example .dockerignore

node_modules
npm-debug.log
.git
.env
.DS_Store
dist

Notes:

  • Always exclude node_modules so dependencies are installed inside the image using the declared package manifest.
  • Exclude .env and other local secrets; use environment variables or secret managers for runtime secrets instead of baking them into images.
  • Keeping the context minimal reduces image rebuild time and the chance of leaking files into published images.

Why choose alpine?

alpine variants are based on Alpine Linux, a minimal distribution, and typically produce smaller images, which reduces attack surface and speeds up distribution. Be aware that Alpine uses musl libc rather than glibc, so some native modules and prebuilt binaries may need additional build or runtime libraries; choose Alpine when it fits your dependency set, or use the Debian-based slim variants when broader compatibility is required.

Build and run

# build the image (tag with version)
docker build -t my-node-app:1.0 .

# run the container mapping host port 3000 to container port 3000
# common flags explained further below
docker run --rm -p 3000:3000 my-node-app:1.0

# test locally
curl http://localhost:3000/

Common docker run flags (explanations and examples)

  • -d (detach): run the container in the background; check its output later with docker logs. Example: docker run -d --name my-node-app -p 3000:3000 my-node-app:1.0.
  • --name: assign a human-friendly name to a container so you can reference it in other commands: docker stop my-node-app or docker logs my-node-app.
  • --rm: automatically remove the container when it exits (handy for short-lived dev runs).
  • -p hostPort:containerPort: publish container ports to the host. Use higher host ports (>1024) if you don't have root privileges.
  • -e KEY=VALUE: set environment variables for runtime configuration (do not use this to pass secrets in plain text in shared environments).
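
Putting several of these flags together, here is a variant of the earlier run command; it overrides the port via the PORT environment variable that the sample app.js reads, so the published port must match:

# run detached, named, on a custom port
docker run -d --name my-node-app -e PORT=8080 -p 8080:8080 my-node-app:1.0

# verify
curl http://localhost:8080/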

Troubleshooting

  • Permission denied when binding low ports: use port >1024 or run with proper capabilities.
  • Build cache not reflecting changes: use --no-cache when building to force a fresh build.
  • Daemon errors on Linux: ensure the Docker daemon (dockerd) is running and your user is in the docker group (see "Starting the Docker daemon" above).
  • If the container exits immediately, inspect exit code with docker ps -a and view logs with docker logs <container_id_or_name>.

Managing Docker Containers and Images

Useful Commands

# list running containers
docker ps

# list all containers
docker ps -a

# view logs for a container
docker logs <container_id_or_name>

# stop and remove a container
docker stop <container_id_or_name>
docker rm <container_id_or_name>

# list images
docker images

# free space by removing all images not used by a container
docker image prune -a

Common workflow examples

Examples showing the -d and --name flags in context:

# start a container in detached mode with a name
docker run -d --name my-node-app -p 3000:3000 my-node-app:1.0

# view logs for a named container
docker logs -f my-node-app

# stop and remove by name
docker stop my-node-app
docker rm my-node-app

Comprehensive cleanup: docker system prune

To reclaim more space (containers, networks, images, and optionally volumes) use docker system prune. Be cautious: this can remove resources you still need.

# interactive prompt; without flags, removes stopped containers, dangling images, and unused networks
docker system prune

# also remove all unused images (not just dangling ones) and all unused volumes; Docker prompts for confirmation unless you add -f (use with care)
docker system prune -a --volumes

Recommendation: run docker system df to inspect disk usage before pruning, and use docker volume ls and docker volume inspect <name> to back up volume data if required.

Debugging Tips

  • Use docker exec -it <container> /bin/sh (or /bin/bash if available) to inspect a running container's file system and processes.
  • Check docker inspect <container> to see configuration, mounted volumes, and networking details.
  • When containers fail to start, examine exit codes via docker ps -a and docker logs for stack traces; combine with journalctl -u docker.service on Linux for daemon-level errors.
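
docker inspect output is verbose; its --format flag (a Go template) extracts specific fields. Two one-liners that come up often while debugging (the container name is illustrative):

# print just the exit code of a stopped container
docker inspect -f '{{.State.ExitCode}}' my-node-app

# print the container's IP address on its connected networks
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' my-node-app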

Docker Compose Basics

Docker Compose lets you define and run multi-container apps with a YAML file. Modern Docker installs include the Compose CLI plugin; use docker compose. Compose simplifies running linked services (app + database, for example).

Example: app + Redis

version: '3.8'
services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      - REDIS_HOST=redis
    depends_on:
      - redis
  redis:
    image: redis:6
    restart: unless-stopped

Run with:

# using Compose plugin
docker compose up -d

# stop and remove
docker compose down

Compose supports named volumes, networks, and overrides for development vs. production. Note that depends_on only controls start order; it does not wait for a service to be ready to accept connections. Use separate override files (e.g., docker-compose.override.yml) to keep environment-specific configuration out of the main manifest, as in the sketch below.
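
A minimal docker-compose.override.yml sketch for development; the bind-mount path matches the WORKDIR from the Dockerfile above, and the extra anonymous volume is a common trick to stop the mount from hiding the node_modules installed in the image:

services:
  app:
    ports:
      - "8080:3000"
    volumes:
      # mount local source for live edits
      - ./:/usr/src/app
      # keep the image's installed node_modules visible
      - /usr/src/app/node_modules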

Networking in Docker

Docker provides multiple network drivers. The most common are:

  • bridge (default): isolates containers and allows port mapping to the host.
  • host: container shares the host network namespace (be cautious: this reduces isolation).
  • overlay: used for multi-host networking (Swarm or other orchestrators).

Create a user-defined bridge network to enable automatic DNS between containers and better network isolation:

docker network create mynet
docker run -d --network mynet --name db redis:6
docker run -d --network mynet --name app my-node-app:1.0
# app can reach 'db' by hostname

Best practices: avoid the host network unless necessary; use custom networks so services can discover each other by name and apply network-level policies.
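
To confirm both containers joined the network and can resolve each other by name, inspect the network and, assuming a ping binary is present in the image (busybox provides one in the Alpine-based images used here), test resolution from inside a container:

# list the containers attached to the network
docker network inspect mynet

# resolve and reach 'db' from inside 'app'
docker exec app ping -c 2 db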

Volumes and Persistent Storage

By default, container filesystems are ephemeral. Volumes persist data independent of the container lifecycle and are the recommended way to store databases, uploads, and logs.

Example: named volume for Postgres (compose snippet)

services:
  db:
    image: postgres:13
    environment:
      - POSTGRES_USER=example
      - POSTGRES_PASSWORD=example
    volumes:
      - db-data:/var/lib/postgresql/data

volumes:
  db-data:

Use docker volume ls and docker volume inspect <name> to manage volumes. Clean up unused volumes with docker volume prune. For production, consider using named volumes backed by specific drivers (local, NFS, or cloud provider storage drivers) and ensure backups for databases.
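
The same pattern works without Compose. A sketch using docker run with a named volume (names and credentials are illustrative):

# create a named volume (docker run would also create it on demand)
docker volume create db-data

# mount it at Postgres's data directory
docker run -d --name db \
  -e POSTGRES_USER=example -e POSTGRES_PASSWORD=example \
  -v db-data:/var/lib/postgresql/data \
  postgres:13

# the data survives container removal; the volume persists until pruned
docker rm -f db
docker volume ls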

Multi-stage Builds (Example)

Multi-stage builds let you use one image to build artifacts and another, smaller image to run them. This reduces the final image size and removes build-time dependencies (compilers, package managers) from production images.

Example: build a simple static frontend (or compiled Node assets) and produce a minimal runtime image:

# syntax=docker/dockerfile:1

# Builder stage: uses full Node 16 image with build tooling
FROM node:16 AS builder
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm ci
COPY . .
# build step (example for apps that transpile/bundle)
RUN npm run build

# Final stage: minimal runtime using Alpine
FROM node:16-alpine
WORKDIR /usr/src/app
# copy only built artifacts from the builder stage
COPY --from=builder /usr/src/app/dist ./dist
# copy production package.json to install runtime deps
COPY package*.json ./
RUN npm ci --only=production
USER node
CMD ["node", "dist/server.js"]

Explanation: the builder stage contains build tools and devDependencies; the final stage contains only runtime files and production dependencies. This pattern is widely used to keep production images small and reduce attack surface.

Best Practices and Resources for Learning More

Implementing Docker Best Practices

  • Use multi-stage builds to produce small production images and keep build dependencies out of the final image (see the example above).
  • Run processes as a non-root user inside containers where possible (e.g., USER node or a dedicated user).
  • Minimize layers by combining commands in RUN where it makes sense and leverage build cache by ordering COPY and RUN steps properly.
  • Scan images for vulnerabilities using tools such as Trivy (see the project on GitHub) or Docker Scout (the successor to the deprecated docker scan command in Docker Desktop/CLI); integrate scans into CI pipelines to block known vulnerabilities.
  • Use healthchecks (HEALTHCHECK in Dockerfile) so orchestration tools can detect unhealthy containers and restart them automatically; see the example after this list.
  • Pin base images to specific tags to make builds reproducible. For production, prefer precise tags including patch version and variant (for example, node:16.20.2-alpine) and schedule periodic rebuilds and tests to pick up important security updates.
  • Avoid committing secrets or API keys into images or build contexts; use environment variables, secret managers, or build-time secret mechanisms from your CI system.
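
As an example of the healthcheck recommendation above, the sketch below could be added to this tutorial's Dockerfile; it assumes the root endpoint from app.js and uses wget, which busybox provides in Alpine images:

# mark the container unhealthy if the HTTP endpoint stops responding
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
  CMD wget -qO- http://localhost:3000/ || exit 1

With this in place, docker ps shows the container as healthy or unhealthy, and orchestrators can restart containers that fail the check.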

Tools and Resources

Official sites and learning platforms worth bookmarking:

  • docs.docker.com: official Docker documentation and reference.
  • hub.docker.com: Docker Hub, the default public image registry.
  • github.com/aquasecurity/trivy: the Trivy scanner referenced in this guide.
  • kubernetes.io: documentation for the orchestration platform many teams adopt next.

Security Considerations

Security should be a first-class part of your container workflow. Below are practical, actionable items to incorporate into development and CI/CD.

  • Integrate image scanning into CI: run a scanner such as Trivy as a pipeline step to detect known CVEs before images are promoted. Example (local scan):
    # local scan example
    trivy image my-node-app:1.0
    
  • Avoid baking secrets into images or checking them into build context. Use runtime environment variables, secret managers (Vault, cloud provider secrets), or CI/CD build-time secret facilities.
  • Run containers with the least privileges needed; drop capabilities you don't need and avoid using the --privileged flag. Prefer non-root users inside images.
  • Pin base image tags and apply an image update/rotation policy: schedule periodic rebuilds and rescans as part of your patch cycle to pick up patched base images.
  • Use HEALTHCHECK and resource limits (--memory, --cpus) to make containers more resilient and predictable in production; a combined example follows this list.
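
Combining these recommendations, a hardened run of the sample app might look like the sketch below (the limits are illustrative values, not tuned recommendations):

# drop all Linux capabilities, cap memory and CPU, and rely on the
# non-root user baked into the image; port 3000 is unprivileged, so
# no extra capabilities are needed
docker run -d --name my-node-app \
  --cap-drop=ALL \
  --memory=256m --cpus=0.5 \
  -p 3000:3000 \
  my-node-app:1.0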

Key Takeaways

  • Docker packages applications with their dependencies so they run consistently across environments.
  • Docker Compose streamlines multi-container applications by describing services, networks, and volumes in one file.
  • Networking in Docker uses drivers (bridge, host, overlay); user-defined networks provide service discovery by name.
  • Volumes persist container data and are essential for stateful services like databases.

Frequently Asked Questions

How do I check which Docker images are on my system?

Run docker images to list images with repository, tag, image ID, creation date, and size. Clean up with docker image prune, or docker image prune -a to remove all images not used by a container.

What is the difference between Docker and a virtual machine?

Containers share the host kernel, making them lightweight and fast to start. Virtual machines include a full guest OS and therefore consume more resources and take longer to boot.

How can I share my Docker images?

Tag and push images to a registry (Docker Hub or a private registry): docker login, docker tag local-image:tag yourusername/imagename:tag, then docker push yourusername/imagename:tag. Others can pull the image with docker pull, as in the example below.
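
A typical sequence for the sample app, with yourusername as a placeholder for your registry account:

# authenticate with the registry
docker login
# tag the local image under your account namespace
docker tag my-node-app:1.0 yourusername/my-node-app:1.0
# push it
docker push yourusername/my-node-app:1.0
# anyone with access can then pull it
docker pull yourusername/my-node-app:1.0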

Conclusion

Containerization with Docker helps teams deliver reliable, reproducible applications. This guide walked through installing Docker, building a runnable Node.js image, managing containers, and using Compose, networks, and volumes. Apply the best practices above (non-root users, multi-stage builds, image scanning, and documented Dockerfiles) to maintain secure and efficient container workflows.

Next steps: Dockerize a small real-world app, add a database with a named volume, and integrate image builds and scans into your CI pipeline. Practice with Compose files to model multi-service systems before moving to orchestration platforms.

About the Author

Ahmed Khalil

Ahmed Khalil is a DevOps Engineering Manager with 11 years of experience specializing in Docker, Kubernetes, Terraform, Jenkins, GitLab CI, AWS, and monitoring. He streamlines software delivery pipelines and manages infrastructure at scale.


Published: Oct 24, 2025 | Updated: Jan 02, 2026