Linux Command Line Basics: Essential Commands for Beginners

Introduction

This guide focuses on practical Linux command line skills for managing servers, automating tasks, and troubleshooting production issues. Examples assume a typical GNU/Linux environment (Bash 5.x and GNU coreutils 8.x+), but the commands shown are broadly applicable across distributions.

Below you’ll find clear, actionable examples for the commands most beginners need, plus slightly deeper DevOps-oriented use cases: scripted backups, safe file operations, permission hardening, and quick troubleshooting commands.

About the Author

Ahmed Khalil

Ahmed Khalil is a DevOps Engineering Manager with 11 years of experience streamlining software delivery pipelines and managing infrastructure at scale. His expertise spans CI/CD automation, container orchestration, cloud infrastructure, and operating system administration. Ahmed has led teams in implementing DevOps practices that improve deployment frequency, reduce failure rates, and accelerate time-to-market for software products.

ls: List directory contents

ls is the first tool you’ll use to inspect directories. Useful flags you should know:

  • -l: long listing (permissions, owner, size, timestamp)
  • -a: show hidden files (dotfiles)
  • -h: human-readable sizes (with -l)
  • --color=auto: colorize output (common in GNU coreutils)

Examples:

# detailed listing with human-readable sizes
ls -lah --color=auto

# view a specific directory sorted by modification time
ls -lt /var/log | head -n 20

Practical DevOps tip: when inspecting a package deploy folder (e.g., /var/www/html), combine ls -lt to find the most recently modified files and detect unexpected changes after deployment.
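A minimal sketch of that tip; the site/ directory and file names below are illustrative stand-ins for a real deploy folder:

```shell
# build a sample deploy directory with files of different ages
mkdir -p site
touch site/old.html
sleep 1
touch site/new.html

# newest first: unexpected recent changes show up at the top
ls -lt site | head -n 5
```

Against a real deploy path you would run the same `ls -lt ... | head` against /var/www/html right after a release.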

sudo: Running commands as root

sudo temporarily runs a single command with elevated privileges (usually root). It's used throughout this guide for package management and privileged file operations. Use it carefully—misuse can be destructive.

Key points:

  • Prefer sudo for single commands rather than logging in as root.
  • Use sudo -i or sudo -s only when you need an elevated shell; exit promptly.
  • Edit the sudoers file with visudo to avoid syntax errors that could lock out admin access.
# run an apt update with elevated privileges
sudo apt update && sudo apt upgrade -y

# run a one-off command as another user (if allowed)
sudo -u deploy systemctl restart my-service

# run last command with sudo (use carefully)
sudo !!

Security best practices:

  • Grant the least privilege in /etc/sudoers (or /etc/sudoers.d/) — avoid NOPASSWD: ALL unless intentionally required and audited.
  • Use visudo to validate edits. Example safe entry to allow a single command for a user:
# (example) allow user 'deploy' to run nginx reload without a password
# placed in /etc/sudoers.d/deploy-nginx
deploy ALL=(root) NOPASSWD: /bin/systemctl reload nginx

Note: Place sudoers fragments in /etc/sudoers.d/ with proper permissions (mode 0440) to keep the main file unchanged and simplify auditing. When automating, prefer granting specific commands rather than full administrative rights.

mkdir: Create directories

mkdir creates directories. Common options:

  • -p — create parent directories as needed
# create nested directories in one command
mkdir -p /srv/backups/daily/2025-12-01

DevOps use-case: create structured backup paths and ensure scripts don't fail when parent folders are missing.
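Because mkdir -p is idempotent, backup scripts can run it unconditionally on every invocation. A small sketch (the backup path is illustrative; a real script would target /srv/backups):

```shell
# mkdir -p succeeds whether or not the path already exists
backup_dir="backups/daily/$(date +%F)"
mkdir -p "$backup_dir"

# running it again does not fail, so scripts need no existence checks
mkdir -p "$backup_dir"
ls -d "$backup_dir"
```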

Managing files: cp, mv, rm, rmdir, and tar

cp (copy)

Common flags:

  • -r — recursive (directories)
  • -a — archive; preserves attributes and is equivalent to -dR --preserve=all in GNU cp
  • -v — verbose
# copy a directory preserving metadata
cp -a /etc/nginx /backup/nginx-$(date +%F)

Tip: use cp -a when backing up configuration files to keep permissions intact.

mv (move / rename)

Common flags:

  • -i — interactive prompt before overwrite
# rename a file or move it into a directory
mv -i release.tar.gz /srv/releases/

Use -i in interactive sessions to avoid accidental overwrites; in non-interactive scripts, prefer mv -n (no-clobber), since -i would block waiting for input.

rm and rmdir (remove)

Be cautious: rm is destructive. Useful options:

  • -r — recursive
  • -f — force (no prompts)
  • --preserve-root — (GNU rm default behavior) protects /
# remove an empty directory
rmdir tmpdir

# remove a directory tree, prompting once before proceeding (-I)
rm -rI build/

Security tip: avoid running sudo rm -rf unless you’ve double-checked the path. In automation, prefer safe deletion patterns (move to a quarantine directory first, then delete after verification).
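The quarantine pattern mentioned above can be sketched like this (directory names are illustrative):

```shell
# sample tree standing in for something we intend to delete
mkdir -p build
touch build/artifact.o

# move it into a dated quarantine directory instead of deleting immediately
quarantine="quarantine/$(date +%F)"
mkdir -p "$quarantine"
mv build "$quarantine/"

# only after verifying nothing important was swept up:
# rm -r "$quarantine"
ls "$quarantine"
```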

tar (archive and compress)

tar is the standard tool for creating archives. GNU tar (commonly present on Linux hosts) supports many options useful in backups.

  • -c — create archive
  • -z — gzip compression
  • -j — bzip2 compression
  • -J — xz compression
  • -C — change to directory (useful to get consistent paths in archive)
  • -p — preserve permissions when extracting
# create a gzip-compressed archive of /etc/nginx keeping relative paths
tar -czf /srv/backups/nginx-$(date +%F).tar.gz -C / etc/nginx

# extract preserving permissions
tar -xzf nginx-2025-12-01.tar.gz -C / --preserve-permissions

DevOps tip: prefer -C / etc/nginx style to avoid embedding absolute filesystem paths in archives; when scripting backups, verify the archive contents with tar -tf and check checksums with sha256sum.
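The verify step can be sketched end-to-end; the demo/ tree below is an illustrative stand-in for /etc/nginx:

```shell
# sample config tree standing in for /etc/nginx
mkdir -p demo/etc/app
echo "listen 80;" > demo/etc/app/app.conf

# archive with relative paths (-C), then list contents before trusting it
tar -czf app-backup.tar.gz -C demo etc/app
tar -tzf app-backup.tar.gz

# record a checksum alongside the archive and verify it
sha256sum app-backup.tar.gz > app-backup.tar.gz.sha256
sha256sum -c app-backup.tar.gz.sha256
```

In a backup job, a failed sha256sum -c or an empty tar -tzf listing should fail the run loudly.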

Viewing and editing files: cat, less, editors

cat and less

# quick view
cat /etc/hostname

# interactive paging
less /var/log/syslog

# follow a log file and highlight matches
tail -F /var/log/nginx/error.log | grep --line-buffered "ERROR"

Editing: vim, nano

Common editors on servers are vim and nano. If you must modify config files in production, use vim -R for read-only preview, then update with proper backups and reload services (e.g., systemctl reload nginx).

grep: Searching and filtering

grep is essential for finding text in files and pipelines. Useful flags:

  • -R — recursive
  • -n — show line numbers
  • --color=auto — highlight matches
  • -E — extended regular expressions
# find occurrences of ERROR in logs recursively
grep -R --color=auto -n "ERROR" /var/log/

# search inside a git repo for TODO comments
grep -R --line-number -E "TODO|FIXME" src/

DevOps example: when a deployment fails, search recent logs across multiple services to find the root cause quickly. Combine grep with xargs or find -exec for powerful automated searches. For faster recursive search on large codebases, consider using ripgrep (rg) which is widely adopted as a faster alternative to grep.

find: Locate files and act on them

find is indispensable for locating files by name, age, type, permissions, or owner. It pairs well with grep, xargs, and -exec. Typical installations use GNU findutils (commonly available on Linux distributions).

Basic examples and safe patterns:

# find files by name under /var/log
find /var/log -type f -name "*.log"

# find files modified in the last 7 days
find /var/www -type f -mtime -7

# find and gzip log files (safe: test first without -exec)
find /var/log -type f -name "*.log" -mtime +30 -print
# if output looks correct, then compress
find /var/log -type f -name "*.log" -mtime +30 -exec gzip -- "{}" \;

# find files and grep efficiently using null delimiters
find . -type f -name "*.conf" -print0 | xargs -0 grep -n "listen"

# find files writable by others (security check)
find / -xdev -type f -perm -o=w -print 2>/dev/null

Security and safety tips:

  • Always run find ... -print or an equivalent test before running destructive actions like -exec rm or -delete. Verify the list first.
  • Prefer -exec ... \; for single-file operations or -print0 | xargs -0 for batching large sets safely.
  • Use -xdev to avoid crossing filesystem boundaries (e.g., avoid searching mounted network filesystems unintentionally).
  • Redirect stderr (e.g., 2>/dev/null) when searching system roots to keep output focused on results rather than permission errors.
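The test-first pattern above, as a self-contained sketch (sample files are created purely for illustration):

```shell
# sample log directory
mkdir -p logs
touch logs/a.log logs/b.log logs/keep.txt

# 1) preview exactly which files would be touched
find logs -type f -name "*.log" -print

# 2) only once the list looks right, act on it (null-delimited for safety)
find logs -type f -name "*.log" -print0 | xargs -0 gzip --
ls logs
```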

File permissions: chmod and chown (practical guide)

Permission basics

# human-readable view
ls -l /etc/nginx/nginx.conf
# -rw-r----- 1 root www-data 1234 Jan  1 12:00 nginx.conf

In this example, owner is root, group www-data. Only owner and group have read access.

chmod numeric examples

# give owner read/write, group read, others none
chmod 640 /etc/nginx/nginx.conf

# make a script executable by owner
chmod 700 /usr/local/bin/deploy.sh

Symbolic modes

Symbolic modes are more readable for many interactive tasks and avoid remembering numeric codes. They work with GNU coreutils (chmod) across modern Linux distributions.

# add read, write and execute for the owner; remove all for others
chmod u+rwx,o-rwx /usr/local/bin/deploy.sh

# give group read and write, keep owner permissions unchanged
chmod g+rw /etc/nginx/nginx.conf

# remove execute permission for all
chmod a-x /usr/local/bin/*.sh

# set the setgid bit on a directory so new files inherit the group
chmod g+s /srv/releases

# copy permissions from a reference file
chmod --reference=/etc/skel/.bashrc /home/deploy/.bashrc

Notes and best practices:

  • Use symbolic modes in interactive sessions for clarity; use numeric modes in automation when the values are fixed and documented.
  • Avoid making configuration or credential files world-readable. Use 600 or 640 depending on whether a service account (group) needs read access.
  • Use chmod --reference to replicate trusted permission sets when provisioning new hosts or files.
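To confirm a hardening step actually took effect, pair chmod with stat (GNU stat assumed; the filename here is illustrative):

```shell
# lock down a sample credentials file to owner read/write only
touch app.env
chmod 600 app.env

# verify the octal mode (GNU stat; prints 600)
stat -c '%a' app.env
```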

chown usage

# change owner to deploy user and group to deploy
chown deploy:deploy /srv/releases/current
# change ownership recursively
chown -R deploy:deploy /srv/releases/

Best practice for sensitive config files: set ownership to a dedicated user (not root when possible), restrict mode to 640 or 600, and keep secrets out of world-readable files. In CI/CD pipelines, avoid committing credentials; instead use secret managers (e.g., Vault, AWS Secrets Manager) and mount them at runtime.

Shell config files: .bashrc, .zshrc, aliases

Personalize and speed up your shell with configuration files. Common files and their roles:

  • ~/.bashrc — interactive non-login Bash shells (aliases, functions, prompts)
  • ~/.bash_profile or ~/.profile — login shells (sometimes source ~/.bashrc)
  • ~/.zshrc — Zsh equivalent for interactive shells (Zsh 5.x is widely used)
# ~/.bashrc additions
# set a safe umask
umask 027

# useful aliases
alias ll='ls -lah --color=auto'
alias gst='git status'

# small function for safe delete (move to trash)
trash() {
  local dest="$HOME/.trash"
  mkdir -p "$dest"
  mv "$@" "$dest/"
}

# export PATH additions
export PATH="$HOME/bin:$PATH"

Best practices:

  • Version-control dotfiles in a private repo and document required environment variables separately (never commit secrets).
  • Test changes in a new shell before deploying to many systems—mistakes in ~/.bashrc can break login shells.
  • Prefer small functions and explicit exports over long one-liners; keep prompts and heavy logic out of automated non-interactive shells.
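The trash() helper shown earlier can be exercised safely by sandboxing HOME first, so the real ~/.trash is untouched (paths below are illustrative):

```shell
# sandbox HOME for this demonstration
export HOME="$PWD/fakehome"
mkdir -p "$HOME"

trash() {
  local dest="$HOME/.trash"
  mkdir -p "$dest"
  mv "$@" "$dest/"
}

touch oops.txt
trash oops.txt
ls "$HOME/.trash"
```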

man pages and inline help

Use man to read manual pages; many commands also support --help for a quick summary.

# full manual
man ls

# quick help
ls --help

When you need authoritative documentation for shell builtins, consult the official upstream manuals, such as those at https://www.gnu.org/ (Bash and coreutils), alongside community resources like https://www.linux.org/.

System info & process management: top, ps, htop

top

top shows live resource usage. Use it to spot CPU or memory hogs quickly.

top
# press M to sort by memory, P to sort by CPU

ps + grep

Use ps with grep to locate processes in automation or scripts.

# find Java processes
ps aux | grep java | grep -v grep
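pgrep (from procps, usually preinstalled on Linux) avoids the grep-matching-itself problem entirely; a quick sketch:

```shell
# start a throwaway background process to search for
sleep 30 &
bgpid=$!

# -a prints PID plus full command line; no "grep -v grep" needed
out=$(pgrep -a sleep)
kill "$bgpid"
echo "$out"
```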

htop

htop provides a friendlier UI (if available). On minimal servers you may prefer ps and top to avoid installing extra packages.

Installing optional tools

If you need htop or other monitoring tools, install them via your distribution's package manager. Examples:

# Debian / Ubuntu
sudo apt update && sudo apt install -y htop

# RHEL / CentOS 7 (YUM)
sudo yum install -y epel-release
sudo yum install -y htop

# RHEL / CentOS 8+ or Fedora (DNF)
sudo dnf install -y htop

# Arch Linux (avoid partial upgrades: sync and upgrade, then install)
sudo pacman -Syu --noconfirm htop

# openSUSE
sudo zypper install -y htop

Security note: install packages from official repositories only, and prefer minimal runtime tooling on production hosts. If you must add tools, document and audit them as part of your host baseline.

Troubleshooting example

When a service consumes excessive memory, inspect running processes, note the PID, and collect a stack or logs. Example workflow:

# identify top memory consumer
ps aux --sort=-%mem | head -n 10

# sample logs for that service
journalctl -u my-service -n 200 --no-pager
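The first step of that workflow can be scripted to capture the offending PID for follow-up commands (GNU ps from procps assumed):

```shell
# capture PID and %MEM of the top memory consumer (row 2 skips the ps header)
set -- $(ps -eo pid,%mem --sort=-%mem | awk 'NR==2 {print $1, $2}')
pid=$1
mem=$2
echo "top memory consumer: pid=$pid mem=${mem}%"

# from here: journalctl -u <service>, or inspect /proc/$pid/status for details
```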

Backup workflow diagram

Visualizing a simple automated backup flow helps when designing scripts and verifying each step (archive → store → verify). The diagram below shows a common pattern: server archives configs, stores locally, then optionally uploads to remote object storage.

[Diagram] Backup workflow: Server (/etc, /var/www) → archive (.tar.gz) → local archive (/srv/backups/YYYY-MM-DD) → optional upload → remote storage (S3 / object store).
Figure: Simple backup flow — Server → Local Archive → Remote Storage

Practical Example: Automated Backup Script

The script below is a minimal, safe, idempotent pattern for daily backups: it archives a configuration directory locally and includes an optional upload step.

#!/bin/bash
# /usr/local/bin/daily-backup.sh
set -euo pipefail
BACKUP_DIR="/srv/backups/$(date +%F)"
mkdir -p "$BACKUP_DIR"
# archive /etc/nginx safely preserving attributes
tar -czf "$BACKUP_DIR/nginx-$(date +%F).tar.gz" -C / etc/nginx
# optional: upload to remote storage with AWS CLI v2 (ensure CLI is configured)
# aws s3 cp "$BACKUP_DIR/nginx-$(date +%F).tar.gz" s3://my-backups/nginx/

Operational and security recommendations for this pattern:

  • Run the script as a dedicated low-privilege user that has read access to the configuration directories (avoid running as root when unnecessary).
  • Use an IAM role (for EC2) or an instance profile, or configure an AWS CLI profile for uploads instead of embedding credentials in the script. Prefer AWS CLI v2 for S3 operations where applicable.
  • Enable server-side encryption for S3 buckets (SSE-S3 or SSE-KMS) and enforce bucket policies that restrict who can write or read backups. Use SSE-KMS when you need audit logging and fine-grained control, and ensure key policies and rotation are in place.
  • Keep backup retention and rotation policies explicit: delete or move older archives after verifying restore procedures. Maintain a documented retention schedule and test restores periodically.
  • Log backup runs and upload results; export metrics (success/failure, size) to your monitoring system so you can alert on failures. Store logs separately from the backup archive to avoid accidental deletion.

Scheduling example (cron):

# run daily at 02:00, appending output to a log (>> preserves earlier runs)
0 2 * * * /usr/local/bin/daily-backup.sh >>/var/log/daily-backup.log 2>&1

Alternative: use systemd timers for better observability and restart behavior on modern Linux distributions. Always test your backup and restore process end-to-end before relying on it in production. Monitor backup sizes and available disk space to avoid failures due to full disks.
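The systemd-timer alternative can be sketched as a pair of unit files; the unit names and the daily-backup.sh path are illustrative, and on a real host the files belong in /etc/systemd/system:

```shell
# write illustrative unit files locally (real target: /etc/systemd/system)
cat > daily-backup.service <<'EOF'
[Unit]
Description=Daily configuration backup

[Service]
Type=oneshot
ExecStart=/usr/local/bin/daily-backup.sh
EOF

cat > daily-backup.timer <<'EOF'
[Unit]
Description=Run daily-backup.service every day at 02:00

[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=true

[Install]
WantedBy=timers.target
EOF

# on a real host:
#   sudo cp daily-backup.service daily-backup.timer /etc/systemd/system/
#   sudo systemctl daemon-reload
#   sudo systemctl enable --now daily-backup.timer
```

Persistent=true makes systemd run a missed job at next boot, and journalctl -u daily-backup.service gives per-run logs, which is the observability benefit over cron.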

Tips for mastering the command line: shortcuts & resources

Practical habits and shortcuts accelerate daily work and reduce mistakes. Below are concrete tips, keyboard shortcuts, and tools to practice with.

Essential keyboard shortcuts (Bash & most shells)

  • Ctrl-R — reverse-i-search (interactive history lookup)
  • Ctrl-A — jump to start of line; Ctrl-E — jump to end
  • Ctrl-L — clear screen (same as clear)
  • Ctrl-U — delete to start of line; Ctrl-K — delete to end of line
  • Alt-B / Alt-F — move backward / forward one word
  • Ctrl-W — delete previous word; Ctrl-Y — yank (paste) last killed text

Command-line productivity tools to consider

  • tmux (terminal multiplexer, tmux 3.x+ is widely used) — split sessions, keep long-running jobs alive, and reconnect to sessions.
  • ripgrep (rg) — fast recursive search (useful for large codebases; faster than plain grep).
  • fzf — fuzzy finder for interactive file and history selection; integrates well with Ctrl-R workflows.
  • autojump or z — directory jump tools to navigate frequently used paths quickly.

Safe learning practices

  • Practice on a disposable VM or container before applying commands on production hosts.
  • Use --dry-run or test flags when available (for tools that support it), and always preview results with commands like find ... -print before running deletion operations.
  • Keep a small, version-controlled repo of useful snippets and small scripts (without secrets). Document assumptions and required OS/package versions alongside each script.

Resources and learning path

  • Hands-on practice: create a small VM (cloud or local), set up a sample web app, and practice deployment, rollback, and backup/restore.
  • Start with Bash 5.x features (mentioned in this guide) and learn shell scripting best practices: set -euo pipefail, input validation, idempotency, and careful logging.
  • Gradually adopt tools like tmux and fzf to create a reproducible and auditable environment on your systems.

history: Inspect and reuse shell history

The history builtin in Bash is invaluable for reviewing past commands, repeating them, and turning frequently used sequences into scripts or aliases. Below are practical examples, safety notes, and configuration tips for Bash (Bash 5.x assumed).

Basic usage

# show recent commands with numbers
history

# show last 50 entries
history 50

# search history for a term
history | grep -i deploy

Rerunning commands

# repeat last command
!!

# run the command N lines back (e.g., 2 commands ago)
!-2

# run the most recent command starting with 'git'
!git

# run the most recent command containing 'backup' (safer with ?...?)
!?backup?

# use fc to edit a history entry in $EDITOR before re-running it
fc 123
# or re-execute entry 123 directly, without editing
fc -s 123

Warning: ! expansions will execute without confirmation; inspect with history or use fc to edit before running when unsure.

Make history more useful

# add to ~/.bashrc to get timestamps and append behavior
export HISTTIMEFORMAT="%F %T "
export HISTSIZE=10000
export HISTFILESIZE=20000
shopt -s histappend

This configuration ensures timestamps on history entries, larger history buffers, and appending across sessions (instead of overwriting on logout).

Security and hygiene

  • Do not type secrets (passwords, tokens) on the command line. Commands you type are stored in ~/.bash_history unless you explicitly prevent it (for example, HISTCONTROL=ignorespace plus a leading space).
  • To remove a specific history entry from the current session, use history -d N (where N is the history number), then history -w to write changes to ~/.bash_history.
  • To clear history from the session and file: history -c && history -w — use carefully, as this removes forensic traces.
  • Avoid automated scripts that echo secrets into the shell; use environment variables from a secure store or file with restrictive permissions (e.g., mode 600).

Practical patterns

# find a long command and re-run it interactively
history | grep -i "scp" | tail -n 5
# then re-run it by number: fc 432 opens it in $EDITOR first; fc -s 432 re-executes as-is
fc -s 432

Use history to discover commands worth turning into a small script or alias, and then migrate those into version-controlled scripts for repeatability and auditing.

Further Reading

For deep dives and authoritative reference material, consult project homepages and official docs for the tools you use (e.g., Bash, GNU coreutils, systemd). Focus on the versions deployed in your environment and keep an internal knowledge base with validated commands and scripts.

Key Takeaways

  • Use safe, test-first patterns for destructive commands (preview before delete).
  • Prefer least privilege and well-scoped sudo rules for automation.
  • Archive with tar -C to avoid absolute paths, and verify archives with checksums.
  • Use shell history and shortcuts to speed up workflows, but never type secrets on the command line.
  • Automate backups with idempotent scripts and schedule with cron or systemd timers; test restores regularly.

Frequently Asked Questions

Q: Where should I store small scripts used across many hosts? A: Keep them in a version-controlled repository and deploy via your configuration management system (Ansible, Chef, Puppet) or place them in a managed /usr/local/bin path with appropriate permissions.

Conclusion

This guide gives a practical foundation for everyday Linux command-line tasks with an emphasis on safety and predictable automation. Start small, practice on non-production systems, and progressively codify repeated workflows into version-controlled scripts and monitored jobs. Automating and hardening these basics will pay off in stability and reduced incident response time.


Published: Dec 05, 2025 | Updated: Jan 09, 2026