
Shell Scripting Basics: Writing Scripts That Actually Work in Production

In Plain English 🔥
Imagine you have a morning routine: wake up, brew coffee, check your phone, pack your bag. You do the exact same steps every single day. Now imagine you could write those steps on a sticky note and hand it to a robot that does them all for you while you sleep. That sticky note is a shell script — a list of commands you'd normally type one by one in a terminal, saved in a file so the computer can run them automatically, repeatedly, and without you sitting there.

Every time a deployment pipeline fires at 2am, a log file gets rotated before it fills a disk, or a server backs itself up without anyone lifting a finger — there's almost certainly a shell script doing the heavy lifting. Shell scripts are the connective tissue of DevOps. They glue together tools that weren't designed to talk to each other, automate the boring-but-critical tasks that keep systems healthy, and turn a ten-step manual process into a single command anyone on the team can run safely.

The problem isn't learning to write a shell script — it's writing one that doesn't blow up three months later when a directory name has a space in it, or silently does the wrong thing because an error wasn't caught. Most tutorials show you the syntax and stop there. That leaves you with scripts that work on your laptop but fail in CI, or scripts that overwrite production data because nobody handled the edge cases.

By the end of this article you'll understand not just how to write shell scripts, but why certain patterns exist, when to use functions vs inline commands, how to make your scripts fail loudly instead of silently, and how to structure a real-world deployment script that a teammate could pick up and trust. The code examples here are taken from patterns used in actual production environments — not toy demos.

Variables, Quoting, and Why Your Script Breaks on File Names with Spaces

Variables in shell scripts look simple until they aren't. You assign with no spaces around = and read back with $. But the real skill is knowing when to quote them — and the answer is almost always.

When you write $filename unquoted, the shell performs word splitting on it. So if filename holds my report.txt, the shell sees two arguments: my and report.txt. Your script quietly operates on the wrong thing. Wrapping in double quotes — "$filename" — tells the shell to treat the whole value as one unit regardless of spaces.

Single quotes are different: they freeze everything literally. '$HOME' prints the string $HOME, not your home directory. Use double quotes when you want variables expanded, single quotes when you want a completely literal string.
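A minimal demonstration of both behaviours; the variable name `greeting` and the `count_args` helper are illustrative:

```shell
#!/usr/bin/env bash
greeting="hello world"

# Double quotes expand the variable but keep the value as one word
echo "$greeting"       # -> hello world

# Single quotes are completely literal: no expansion at all
echo '$greeting'       # -> $greeting

# A tiny helper that reports how many arguments it received,
# which makes word splitting visible
count_args() { echo "$#"; }
count_args $greeting     # unquoted: the shell splits the value into 2 arguments
count_args "$greeting"   # quoted: one argument, as intended
```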

There's also the distinction between local and exported variables. A variable set in your script only exists in that script's process. If you need a child process or a called program to see it, you must export it — or it simply won't be there.

variable_safety_demo.sh · BASH
#!/usr/bin/env bash
# ALWAYS start with a shebang — it tells the OS which interpreter to use.
# Using /usr/bin/env bash is more portable than /bin/bash directly.

# ---------------------------------------------------------------------------
# BAD PRACTICE: unquoted variable — dangerous when values contain spaces
# ---------------------------------------------------------------------------
unsafe_filename="quarterly report.txt"
# This would break: ls $unsafe_filename  (shell sees 'quarterly' and 'report.txt')

# ---------------------------------------------------------------------------
# GOOD PRACTICE: always double-quote variables when using them
# ---------------------------------------------------------------------------
safe_filename="quarterly report.txt"
echo "Processing file: \"$safe_filename\""
# Output: Processing file: "quarterly report.txt"

# ---------------------------------------------------------------------------
# READONLY variables — use these for values that must never change
# ---------------------------------------------------------------------------
readonly MAX_RETRIES=3
readonly LOG_DIR="/var/log/myapp"

echo "Will retry up to $MAX_RETRIES times."
echo "Logs go to: $LOG_DIR"

# ---------------------------------------------------------------------------
# EXPORTED variables — child processes (like scripts you call) can see these
# ---------------------------------------------------------------------------
DATABASE_HOST="db.internal.example.com"
export DATABASE_HOST
# Now if this script calls another script, that script can read $DATABASE_HOST

# ---------------------------------------------------------------------------
# DEFAULT VALUES with parameter expansion — saves you from empty-variable bugs
# ---------------------------------------------------------------------------
# If $DEPLOY_ENV is unset or empty, fall back to 'staging'
DEPLOY_ENV="${DEPLOY_ENV:-staging}"
echo "Deploying to environment: $DEPLOY_ENV"

# ---------------------------------------------------------------------------
# ARITHMETIC — use $(( )) for integer math, not expr
# ---------------------------------------------------------------------------
current_version=4
next_version=$(( current_version + 1 ))
echo "Bumping version from $current_version to $next_version"
▶ Output
Processing file: "quarterly report.txt"
Will retry up to 3 times.
Logs go to: /var/log/myapp
Deploying to environment: staging
Bumping version from 4 to 5
⚠️ Watch Out: The Silent Space Bug
Unquoted variables are the single most common cause of shell script bugs in production. Make it a rule: every time you write `$variable`, ask yourself 'could this ever contain a space or a glob character?' If yes, it must be `"$variable"`. Tools like ShellCheck (shellcheck.net) will catch these automatically — add it to your CI pipeline.

Conditionals and Exit Codes: Making Your Script Fail Loudly, Not Silently

Exit codes are the heartbeat of shell scripting. Every command returns a number when it finishes — 0 means success, anything else means something went wrong. The shell stores the last exit code in $?. Most beginners ignore this completely, which leads to scripts that soldier on after a critical failure and leave systems in a broken half-state.
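A short sketch of reading and acting on exit codes; the paths here are placeholders. Note that `$?` is overwritten by the very next command, so capture it immediately if you need it later:

```shell
#!/usr/bin/env bash
mkdir -p /tmp/exitcode-demo
echo "mkdir exit code: $?"     # -> 0 (success)

# A failing command: capturing the code inside || means a later
# 'set -e' script would not abort on the failure itself
ls /nonexistent-path-12345 2>/dev/null || rc=$?
echo "ls exit code: ${rc:-0}"  # non-zero (1 or 2 depending on the ls implementation)

# Usually you don't need $? at all: 'if' tests the command's status directly
if ! ls /nonexistent-path-12345 > /dev/null 2>&1; then
    echo "path missing, handle the failure here"
fi
```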

The two most important options you'll ever add to a script are set -e and set -u. set -e (errexit) makes the script stop immediately when any command fails. set -u (nounset) treats any unset variable as an error rather than silently substituting an empty string. Combine them with set -o pipefail and your pipelines stop if any command in the middle fails — not just the last one.

For conditionals, the double-bracket [[ ]] syntax is almost always preferable to the old single-bracket [ ] in bash. It handles edge cases more gracefully, supports regex matching with =~, and doesn't choke on empty variables the same way.
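A few of these features in action; `version`, `empty_var`, and `file` are illustrative values:

```shell
#!/usr/bin/env bash
version="v2.14.3"
empty_var=""
file="app-2024.log"

# Regex matching with =~ (leave the pattern itself unquoted)
if [[ "$version" =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
    echo "looks like a version tag: $version"
fi

# Capture groups land in the BASH_REMATCH array
[[ "$version" =~ ^v([0-9]+)\. ]] && echo "major version: ${BASH_REMATCH[1]}"

# [[ ]] tolerates an empty variable where [ $empty_var = "x" ] would
# collapse to [ = "x" ] and error out
if [[ $empty_var == "x" ]]; then echo "match"; else echo "no match"; fi

# Glob-style pattern matching, which [ ] cannot do at all
[[ "$file" == *.log ]] && echo "$file is a log file"
```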

The real power comes when you chain conditionals with your exit codes to build scripts that are self-diagnosing — they tell you exactly what went wrong and clean up after themselves.

safe_deploy_check.sh · BASH
#!/usr/bin/env bash
# These three lines should be at the top of EVERY non-trivial script.
# -e : exit immediately if any command fails
# -u : treat unset variables as errors (no silent empty-string substitution)
# -o pipefail : if any command in a pipe fails, the whole pipe fails
set -euo pipefail

# ---------------------------------------------------------------------------
# A pre-deployment health check script — realistic DevOps pattern
# ---------------------------------------------------------------------------

REQUIRED_DISK_SPACE_MB=500
DEPLOY_DIR="/opt/myapp"
HEALTH_CHECK_URL="http://localhost:8080/health"

# ---------------------------------------------------------------------------
# FUNCTION: check available disk space before deploying
# ---------------------------------------------------------------------------
check_disk_space() {
    local available_mb
    # df gives disk usage; awk extracts the available column for our directory
    available_mb=$(df -m "$DEPLOY_DIR" | awk 'NR==2 {print $4}')

    if [[ "$available_mb" -lt "$REQUIRED_DISK_SPACE_MB" ]]; then
        echo "ERROR: Only ${available_mb}MB free in $DEPLOY_DIR. Need at least ${REQUIRED_DISK_SPACE_MB}MB."
        return 1  # non-zero = failure; caller can act on this
    fi

    echo "Disk check passed: ${available_mb}MB available."
    return 0
}

# ---------------------------------------------------------------------------
# FUNCTION: verify the app is responding before we call the deploy complete
# ---------------------------------------------------------------------------
wait_for_healthy() {
    local max_attempts=10
    local attempt=1
    local sleep_seconds=3

    echo "Waiting for app to become healthy at $HEALTH_CHECK_URL ..."

    while [[ "$attempt" -le "$max_attempts" ]]; do
        # curl -sf: -s = silent, -f = fail on HTTP error codes (4xx/5xx)
        # We redirect output to /dev/null — we only care about the exit code
        if curl -sf "$HEALTH_CHECK_URL" > /dev/null 2>&1; then
            echo "App is healthy after $attempt attempt(s)."
            return 0
        fi

        echo "  Attempt $attempt/$max_attempts failed. Retrying in ${sleep_seconds}s..."
        sleep "$sleep_seconds"
        (( attempt++ ))
    done

    echo "ERROR: App did not become healthy after $max_attempts attempts."
    return 1
}

# ---------------------------------------------------------------------------
# MAIN — using && chains: only proceed if each step succeeds
# ---------------------------------------------------------------------------
echo "=== Pre-deployment checks ==="

# if check_disk_space fails, set -e will stop the script here
check_disk_space

echo "=== Deploying application ==="
# Simulate deployment step (replace with your real deploy command)
echo "  rsync / docker pull / helm upgrade would run here..."

echo "=== Post-deployment health check ==="
wait_for_healthy

echo "=== Deployment complete ==="
▶ Output
=== Pre-deployment checks ===
Disk check passed: 12483MB available.
=== Deploying application ===
  rsync / docker pull / helm upgrade would run here...
=== Post-deployment health check ===
Waiting for app to become healthy at http://localhost:8080/health ...
  Attempt 1/10 failed. Retrying in 3s...
  Attempt 2/10 failed. Retrying in 3s...
App is healthy after 3 attempt(s).
=== Deployment complete ===
⚠️
Pro Tip: set -euo pipefail is Your Safety Net
Without `pipefail`, a pipeline like `cat missing_file.txt | wc -l` returns exit code 0 because `wc` succeeded — even though `cat` failed. This means your script thinks everything is fine when it isn't. Add `set -euo pipefail` right after the shebang of every script you write. It catches the silent failures that cause 3am incidents.

Functions, Loops, and Building Scripts You Can Actually Maintain

The moment a script grows past about 30 lines, you need functions. Not because the script stops working without them, but because without them nobody — including you six months later — can figure out what it does. A function is a named chunk of logic you can call by name, test in isolation, and reuse across scripts.

The critical habit with functions is declaring variables inside them as local. Without local, every variable in a function pollutes the global script scope. This causes the maddening bug where a function in one part of your script silently overwrites a variable used somewhere else entirely.

Loops in shell come in three main shapes: for (iterate over a known list), while (keep going until a condition changes), and until (keep going until a condition becomes true). The most practical distinction for real DevOps work is iterating over files vs iterating over command output — and these need different patterns to handle edge cases safely.
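The three shapes side by side, using small illustrative counters:

```shell
#!/usr/bin/env bash
# for: iterate over a known, finite list
for env in staging production; do
    echo "would deploy to: $env"
done

# while: keep looping while the condition stays true
retries=3
while [[ "$retries" -gt 0 ]]; do
    echo "retries left: $retries"
    retries=$(( retries - 1 ))
done

# until: keep looping until the condition becomes true; reads naturally
# for "wait until the service is up" style polling
attempt=0
until [[ "$attempt" -ge 2 ]]; do
    attempt=$(( attempt + 1 ))
    echo "poll attempt $attempt"
done
```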

The for loop over a glob (for file in /path/*.log) is safer than parsing ls, because it handles spaces in filenames correctly and is built into the shell. Parsing ls output is one of the classic shell script antipatterns.
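A sketch of the safe glob pattern; the `/tmp/glob-demo` directory is created just for the demo. One gotcha worth knowing: if nothing matches, a glob expands to the literal pattern string, and `shopt -s nullglob` makes it expand to nothing instead:

```shell
#!/usr/bin/env bash
set -euo pipefail

log_dir="/tmp/glob-demo"
mkdir -p "$log_dir"
touch "$log_dir/app one.log" "$log_dir/app two.log"   # spaces on purpose

# Without nullglob, an unmatched glob like "$log_dir"/*.none would be
# passed through literally as the string '/tmp/glob-demo/*.none'
shopt -s nullglob

for log_file in "$log_dir"/*.log; do
    # "$log_file" is one complete path, spaces and all; no ls parsing needed
    echo "found: $(basename "$log_file")"
done
```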

log_rotation_manager.sh · BASH
#!/usr/bin/env bash
set -euo pipefail

# ---------------------------------------------------------------------------
# A realistic log rotation and archiving script
# Shows: functions with local vars, for loops, while loops, arrays
# ---------------------------------------------------------------------------

readonly LOG_SOURCE_DIR="/var/log/myapp"
readonly ARCHIVE_DIR="/mnt/archives/myapp"
readonly MAX_LOG_AGE_DAYS=30
readonly COMPRESSION_LEVEL=9  # 1=fast/large to 9=slow/small

# ---------------------------------------------------------------------------
# FUNCTION: compress a single log file and move it to archive
# 'local' keeps these variables from leaking into the global scope
# ---------------------------------------------------------------------------
archive_log_file() {
    local source_file="$1"   # first argument passed to the function
    local dest_dir="$2"      # second argument
    local filename
    local archived_path

    # basename strips the directory path, leaving just the filename
    filename=$(basename "$source_file")
    archived_path="${dest_dir}/${filename}.gz"

    echo "  Archiving: $filename"

    # gzip -${COMPRESSION_LEVEL} compresses; we keep the original with -c and redirect
    gzip -c -"${COMPRESSION_LEVEL}" "$source_file" > "$archived_path"

    # Only remove the original AFTER we verified the compressed file exists
    if [[ -f "$archived_path" ]]; then
        rm "$source_file"
        echo "  Done: $filename -> $(basename "$archived_path")"
    else
        echo "ERROR: Compression failed for $filename. Original NOT deleted."
        return 1
    fi
}

# ---------------------------------------------------------------------------
# FUNCTION: find and archive logs older than MAX_LOG_AGE_DAYS
# ---------------------------------------------------------------------------
rotate_old_logs() {
    local log_dir="$1"
    local archive_dir="$2"
    local file_count=0

    echo "Scanning $log_dir for logs older than $MAX_LOG_AGE_DAYS days..."

    # We use an array to safely collect results from find
    # This handles filenames with spaces correctly
    local old_log_files=()
    while IFS= read -r -d '' log_file; do
        old_log_files+=("$log_file")
    done < <(find "$log_dir" -maxdepth 1 -name "*.log" -mtime +"$MAX_LOG_AGE_DAYS" -print0)
    # -print0 and read -d '' use null bytes as delimiters — safe for ANY filename

    if [[ "${#old_log_files[@]}" -eq 0 ]]; then
        echo "No logs older than $MAX_LOG_AGE_DAYS days found. Nothing to archive."
        return 0
    fi

    echo "Found ${#old_log_files[@]} file(s) to archive."
    mkdir -p "$archive_dir"  # -p: create parent dirs too, no error if already exists

    # Iterate over the array — safe even with spaces in filenames
    for log_file in "${old_log_files[@]}"; do
        archive_log_file "$log_file" "$archive_dir"
        file_count=$(( file_count + 1 ))  # (( file_count++ )) would trip set -e when the count is 0
    done

    echo "Archived $file_count file(s) successfully."
}

# ---------------------------------------------------------------------------
# FUNCTION: report total size of archived files — shows while loop usage
# ---------------------------------------------------------------------------
report_archive_size() {
    local archive_dir="$1"
    local total_size_kb=0
    local file_size

    # while loop reading line-by-line from du output
    while IFS= read -r archive_file; do
        # du -k gives size in kilobytes
        file_size=$(du -k "$archive_file" | awk '{print $1}')
        total_size_kb=$(( total_size_kb + file_size ))  # plain assignment never trips set -e
    done < <(find "$archive_dir" -name "*.gz" -type f)

    echo "Total archive size: ${total_size_kb}KB"
}

# ---------------------------------------------------------------------------
# MAIN EXECUTION
# ---------------------------------------------------------------------------
echo "=== Log Rotation Started: $(date '+%Y-%m-%d %H:%M:%S') ==="
rotate_old_logs "$LOG_SOURCE_DIR" "$ARCHIVE_DIR"
report_archive_size "$ARCHIVE_DIR"
echo "=== Log Rotation Complete ==="
▶ Output
=== Log Rotation Started: 2024-03-15 02:00:01 ===
Scanning /var/log/myapp for logs older than 30 days...
Found 3 file(s) to archive.
  Archiving: app-2024-02-10.log
  Done: app-2024-02-10.log -> app-2024-02-10.log.gz
  Archiving: app-2024-02-11.log
  Done: app-2024-02-11.log -> app-2024-02-11.log.gz
  Archiving: app-2024-02-12.log
  Done: app-2024-02-12.log -> app-2024-02-12.log.gz
Archived 3 file(s) successfully.
Total archive size: 2847KB
=== Log Rotation Complete ===
🔥
Interview Gold: Why Not Parse ls?
Interviewers love asking 'why shouldn't you parse `ls` output?' The answer: `ls` formats output for humans, not machines. Filenames with spaces, newlines, or special characters get mangled. Use globs (`for file in /path/*.log`) or `find` with `-print0` + `read -d ''` instead. This demonstrates you understand shell safety at a production level.

Script Structure, Argument Handling, and the Trap Command for Cleanup

A script that works correctly once isn't enough. It needs to work correctly when someone passes unexpected arguments, when it gets interrupted halfway through, and when the system is under load. These three things separate a 'it works on my machine' script from a production-grade one.

Argument handling with getopts is the built-in way to add named flags to your script (like -e staging -v). It's more robust than just reading $1, $2 directly because it handles combined flags, gives you clear error messages for unknown options, and follows Unix conventions your teammates expect.

The trap command is what makes your scripts trustworthy. It lets you register a function to run when the script exits — whether it finishes normally, hits an error, or gets killed with Ctrl+C. This is how you clean up temp files, release locks, send a failure notification, or restore a backup if something goes wrong mid-deployment. Without trap, interrupted scripts leave messes behind.

Combining good argument handling with a cleanup trap is what turns a one-off script into a tool you'd actually put in a shared repository and hand to a colleague.

database_backup.sh · BASH
#!/usr/bin/env bash
set -euo pipefail

# ---------------------------------------------------------------------------
# A production-pattern database backup script
# Demonstrates: getopts, trap for cleanup, structured argument validation
# Usage: ./database_backup.sh -e production -d myapp_db -o /backups
# ---------------------------------------------------------------------------

# ---------------------------------------------------------------------------
# CLEANUP FUNCTION — registered with trap, runs no matter how script exits
# ---------------------------------------------------------------------------
cleanup() {
    local exit_code=$?  # capture exit code BEFORE we do anything else

    if [[ -f "${temp_backup_file:-}" ]]; then
        echo "Cleaning up temporary file: $temp_backup_file"
        rm -f "$temp_backup_file"
    fi

    if [[ $exit_code -ne 0 ]]; then
        echo "BACKUP FAILED (exit code: $exit_code). Check logs above." >&2
        # In real life: send a Slack/PagerDuty alert here
    fi
}

# Register cleanup to run on: normal exit (EXIT), Ctrl+C (INT), kill signal (TERM)
trap cleanup EXIT INT TERM

# ---------------------------------------------------------------------------
# ARGUMENT PARSING with getopts
# ---------------------------------------------------------------------------
usage() {
    echo "Usage: $0 -e <environment> -d <database_name> -o <output_dir>"
    echo "  -e  Environment (staging|production)"
    echo "  -d  Database name to back up"
    echo "  -o  Output directory for backup files"
    exit 1
}

# Variables that flags will populate — give them empty defaults for set -u safety
environment=""
database_name=""
output_dir=""

# getopts loop: colon after a letter means it expects an argument (e.g. -e staging)
while getopts ":e:d:o:" option; do
    case "$option" in
        e) environment="$OPTARG" ;;   # OPTARG holds the value after the flag
        d) database_name="$OPTARG" ;;
        o) output_dir="$OPTARG" ;;
        :) echo "ERROR: -$OPTARG requires an argument." >&2; usage ;;
        \?) echo "ERROR: Unknown flag -$OPTARG." >&2; usage ;;
    esac
done

# Validate that all required arguments were provided
if [[ -z "$environment" || -z "$database_name" || -z "$output_dir" ]]; then
    echo "ERROR: All flags are required." >&2
    usage
fi

# Validate environment is one of the allowed values
if [[ "$environment" != "staging" && "$environment" != "production" ]]; then
    echo "ERROR: Environment must be 'staging' or 'production'." >&2
    exit 1
fi

# ---------------------------------------------------------------------------
# MAIN BACKUP LOGIC
# ---------------------------------------------------------------------------
timestamp=$(date '+%Y%m%d_%H%M%S')
backup_filename="${database_name}_${environment}_${timestamp}.sql"
temp_backup_file="${output_dir}/.tmp_${backup_filename}"  # hidden temp file
final_backup_file="${output_dir}/${backup_filename}.gz"

mkdir -p "$output_dir"

echo "=== Database Backup ==="
echo "  Environment : $environment"
echo "  Database    : $database_name"
echo "  Output      : $final_backup_file"
echo "Starting backup at $(date '+%H:%M:%S')..."

# Write to a temp file first — if this fails or is interrupted, cleanup() removes it
# Replace this with your real pg_dump / mysqldump command:
# pg_dump -h "$DB_HOST" -U "$DB_USER" "$database_name" > "$temp_backup_file"
echo "-- Simulated SQL dump for $database_name" > "$temp_backup_file"
echo "-- Environment: $environment" >> "$temp_backup_file"

# Only compress and finalise AFTER the dump succeeded
gzip -c "$temp_backup_file" > "$final_backup_file"
rm "$temp_backup_file"  # Remove temp — it's safe now, we have the compressed final

backup_size=$(du -sh "$final_backup_file" | awk '{print $1}')
echo "Backup complete. Size: $backup_size"
echo "Saved to: $final_backup_file"
▶ Output
=== Database Backup ===
  Environment : production
  Database    : myapp_db
  Output      : /backups/myapp_db_production_20240315_020001.sql.gz
Starting backup at 02:00:01...
Backup complete. Size: 48M
Saved to: /backups/myapp_db_production_20240315_020001.sql.gz
⚠️
Pro Tip: Always Write to a Temp File First
Write your output to a hidden `.tmp_` file, then move or compress it to the final name once complete. If the script is interrupted mid-write, your cleanup trap removes the partial temp file and you never have a corrupt half-written backup masquerading as a real one. This pattern is used in every serious backup and deployment tool.
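A minimal, standalone sketch of that pattern; the paths and the generated content are placeholders. The key detail is that `mv` within a single filesystem is atomic:

```shell
#!/usr/bin/env bash
set -euo pipefail

out_dir="/tmp/atomic-demo"
mkdir -p "$out_dir"

final_file="$out_dir/report.txt"
temp_file="$out_dir/.tmp_report.txt"

# If we are interrupted mid-write, the trap removes the partial temp file
trap 'rm -f "$temp_file"' EXIT

printf 'line 1\nline 2\n' > "$temp_file"   # stand-in for the real work

# mv on the same filesystem is atomic: readers see either the old complete
# file or the new complete file, never a half-written one
mv "$temp_file" "$final_file"
echo "wrote $final_file"
```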
Aspect                   | [ ] Single Bracket                              | [[ ]] Double Bracket
-------------------------+-------------------------------------------------+------------------------------------------------
Shell compatibility      | POSIX sh compatible (portable)                  | Bash/Zsh only (not POSIX)
Empty variable handling  | Breaks: [ $var = 'x' ] errors if $var is empty  | Safe: [[ $var == 'x' ]] handles empty variables
Regex matching           | Not supported                                   | Supported via =~ operator
String comparison        | Needs quoting: [ "$a" = "$b" ]                  | Safe unquoted: [[ $a == $b ]]
Logical operators        | Use -a and -o (error-prone)                     | Use && and || (clear and safe)
Pattern matching         | Not supported                                   | Supported: [[ $file == *.log ]]
Word splitting risk      | High — unquoted vars split on spaces            | None — immune to word splitting
Recommendation           | Use only when writing /bin/sh scripts           | Use this in all bash scripts

🎯 Key Takeaways

  • Quote every variable as "$variable" — unquoted variables that contain spaces or glob characters are the root cause of a huge percentage of shell script bugs in production.
  • set -euo pipefail should be in every non-trivial bash script — without it, failures are silent and your script will happily continue running after something critical breaks.
  • Use trap cleanup EXIT INT TERM with a cleanup function to guarantee temp files are deleted and resources are released regardless of how your script exits.
  • Always write output to a temp file first, then move it to the final destination atomically — this prevents corrupt partial files from being mistaken for complete ones after an interruption.

⚠ Common Mistakes to Avoid

  • Mistake 1: Not quoting variables — Writing cp $source_file $dest_dir instead of cp "$source_file" "$dest_dir" — If either path contains a space, the shell word-splits it into multiple arguments and cp fails with 'too many arguments' or silently operates on the wrong files. Fix: quote every variable, every time. Run ShellCheck on your scripts to catch these automatically.
  • Mistake 2: Missing set -euo pipefail — Writing scripts without these safety flags means a failed command in the middle of your script is silently ignored and execution continues. A failed mkdir followed by a cp into a non-existent directory, for example, won't stop the script — it'll just produce confusing errors downstream. Fix: add set -euo pipefail as the third line of every script, right after the shebang and a blank line.
  • Mistake 3: Using global variables inside functions — Declaring count=0 inside a function without local means it overwrites any count variable in the global scope. This causes the classic 'my loop only runs once' bug where a loop counter gets reset by a function call inside it. Fix: prefix every variable declared inside a function with local. Make it a non-negotiable habit.

Interview Questions on This Topic

  • Q: What's the difference between `$*` and `$@` when passing arguments to a script, and which one should you use and why?
  • Q: How would you make a shell script that creates a temporary file guarantee that file gets deleted even if the script is killed with Ctrl+C?
  • Q: If you have a pipeline like `command_a | command_b | command_c` and command_b fails, what happens by default — and how do you change that behaviour?

Frequently Asked Questions

What's the difference between #!/bin/bash and #!/usr/bin/env bash?

Both run your script with bash, but /usr/bin/env bash searches your PATH for bash rather than hardcoding its location. This matters because on some systems (notably macOS) bash lives in a different place, or you might want to use a newer bash installed via Homebrew. Using /usr/bin/env bash makes your script more portable across different Linux distributions and macOS.

When should I use a shell script vs Python or another language?

Shell scripts are the right tool when you're primarily gluing together existing command-line tools — file operations, calling other programs, CI/CD steps, system administration tasks. Reach for Python (or Go, etc.) when you need data structures more complex than arrays, you're doing string parsing beyond simple pattern matching, you need HTTP requests, JSON parsing, or error handling that's too complex to express cleanly in shell. A good rule of thumb: if your script needs more than two levels of nested logic, consider a real programming language.

Why do people say 'never parse the output of ls'?

The ls command formats its output for human reading — it can reorder files, strip or alter characters, and wrap lines depending on terminal width. If a filename contains a newline, a space, or certain special characters, parsing ls output will silently give you wrong results or fail unpredictably. Use glob patterns (for file in /path/*.log) for simple iteration, and find with -print0 combined with read -d '' when you need more complex file selection — both handle all legal filenames correctly.

TheCodeForge Editorial Team · Verified Author

Written and reviewed by senior developers with real-world experience across enterprise, startup and open-source projects. Every article on TheCodeForge is written to be clear, accurate and genuinely useful — not just SEO filler.
