
Cron Jobs in Linux Explained — Scheduling, Syntax and Real-World Patterns

In Plain English 🔥
Imagine you set a reminder on your phone to water your plants every Monday at 8am. You don't have to remember it — your phone just does it automatically, every single week, even while you're asleep. Cron is exactly that, but for your Linux server. You tell it 'run this script at 2am every night', and it does it forever without you lifting a finger. That's it. A tireless, never-forgetting robot assistant built into every Linux system.

Every production system has a graveyard of tasks that someone used to do manually — rotating log files, sending weekly reports, backing up databases, clearing temp folders. Done by hand, these tasks get forgotten, delayed, or skipped on holidays. Done wrong, they take down services. Cron is the Linux answer to this problem: a built-in scheduler that has been running quietly on Unix systems since the 1970s, and still powers millions of automated workflows today.

The core problem cron solves is reliability. Humans forget. Cron doesn't. If you need something to happen at a predictable time — daily, hourly, every 15 minutes, or at 3:47am on the last day of the month — cron handles it without a process manager, without a paid SaaS tool, and without a single line of application code. It lives at the OS level, which means it works regardless of what language your app is written in or whether your app is even running.

By the end of this article you'll be able to write cron expressions confidently, manage crontab files without breaking things, debug jobs that silently fail, and apply the real-world patterns that DevOps engineers actually use in production. You'll also know the three mistakes that catch almost everyone the first time they use cron — and exactly how to avoid them.

How Cron Actually Works — The Daemon, the Crontab, and the Schedule

Cron is a daemon — a background process that starts when your system boots and never stops. Its name comes from 'chronos', the Greek word for time. Every minute, the cron daemon wakes up, checks all the crontab files on the system, and asks: 'Is there anything I should run right now?' If yes, it fires the job. Then it goes back to sleep until the next minute.

A crontab (cron table) is just a plain text file that lists scheduled jobs. Each line in a crontab is one job: five time fields followed by the command to run. You never edit this file directly — you use the crontab command, which validates the format before saving, protecting you from syntax errors that would silently break everything.

There are two types of crontabs you'll work with. The first is user crontabs — each Linux user has their own, and jobs run as that user. The second is the system crontab at /etc/crontab and files dropped into /etc/cron.d/, which include an extra field specifying which user to run the job as. For most application-level automation, user crontabs are the right choice. For system-level jobs like log rotation, the system crontab is used.

The key mental model: cron doesn't track whether a previous job finished. If your job takes longer than its schedule interval, you can end up with two copies running at the same time. That's one of the most dangerous production gotchas, and we'll cover how to handle it.

crontab_basics.sh · BASH
# ─────────────────────────────────────────────────
# MANAGING YOUR CRONTAB — the essential commands
# ─────────────────────────────────────────────────

# Open your crontab in the default editor (usually nano or vi)
# This is the ONLY safe way to edit your crontab
crontab -e

# List all your current cron jobs — great for auditing
crontab -l

# Remove ALL your cron jobs (dangerous — no confirmation prompt!)
crontab -r

# Edit the crontab for a specific user (must run as root)
crontab -u deploy_user -e

# ─────────────────────────────────────────────────
# ANATOMY OF A CRON EXPRESSION
# ─────────────────────────────────────────────────
# Each job line follows this exact structure:
#
# ┌──────────── minute        (0 - 59)
# │  ┌─────────── hour          (0 - 23)
# │  │  ┌──────────── day of month  (1 - 31)
# │  │  │  ┌─────────── month         (1 - 12)
# │  │  │  │  ┌──────────── day of week   (0 - 7, both 0 and 7 = Sunday)
# │  │  │  │  │
# *  *  *  *  *  command_to_run

# ─────────────────────────────────────────────────
# REAL EXAMPLES with plain-English explanations
# ─────────────────────────────────────────────────

# Run database backup every day at 2:30am
30 2 * * * /opt/scripts/backup_postgres.sh

# Clear the application temp directory every hour
0 * * * * rm -rf /var/app/tmp/*

# Send a weekly analytics report every Monday at 9am
0 9 * * 1 /opt/scripts/send_weekly_report.py

# Run a health check every 15 minutes
*/15 * * * * /opt/scripts/health_check.sh

# Archive logs on the 1st of every month at midnight
0 0 1 * * /opt/scripts/archive_logs.sh

# Run a job only on weekdays (Mon-Fri) at 6am
0 6 * * 1-5 /opt/scripts/sync_business_data.sh
Output:
# Output of: crontab -l
30 2 * * * /opt/scripts/backup_postgres.sh
0 * * * * rm -rf /var/app/tmp/*
0 9 * * 1 /opt/scripts/send_weekly_report.py
*/15 * * * * /opt/scripts/health_check.sh
0 0 1 * * /opt/scripts/archive_logs.sh
0 6 * * 1-5 /opt/scripts/sync_business_data.sh
⚠️ Pro Tip: Use crontab.guru. Before you save any cron expression, paste it into crontab.guru — a free visual editor that translates your expression into plain English in real time. It has saved countless engineers from scheduling jobs at the wrong time.

Writing Production-Ready Cron Jobs — Logging, Environments, and Locking

Here's where most tutorials stop — and where most real problems start. A cron job that runs date will work fine. A cron job that runs your Python script will almost certainly fail silently the first time, and here's why: cron runs with a minimal environment. It doesn't load your .bashrc, .bash_profile, or any of the environment variables you set in your shell session. That means PATH, PYTHONPATH, NODE_ENV, database credentials, API keys — none of it is there unless you explicitly provide it.
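One practical consequence: crontab files accept simple VAR=value assignments above the job lines, which covers the common cases without editing every script. A minimal sketch — the paths and the MAILTO address here are placeholders, not values from this article:

```shell
# Variable assignments at the top of a crontab apply to every job below it.
# SHELL defaults to /bin/sh; set it to bash if your scripts rely on bashisms.
SHELL=/bin/bash
PATH=/usr/local/bin:/usr/bin:/bin
# If MAILTO is set and a mail agent is configured, cron mails any job
# output (including errors) to this address instead of discarding it.
MAILTO=ops@example.com

# Jobs below now run with the PATH and SHELL defined above
30 2 * * * /opt/scripts/backup_postgres.sh
```

One caveat: in Vixie-style cron these assignments are plain strings — cron does not expand $HOME or other variables inside them the way a shell would.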

The second production concern is logging. By default, cron swallows all output. If your script crashes, you'll never know unless you've set up logging. The fix is simple: redirect both stdout and stderr to a log file on every single cron job.
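The same redirection can also live inside the script itself, so logging behaves identically whether cron or a human triggers the job. A minimal runnable sketch — the /tmp path is a placeholder for this demo:

```shell
#!/bin/sh
# Capture both stdout and stderr of a block of work in one log file —
# the same effect as appending ">> job.log 2>&1" to a crontab line.
LOG=/tmp/demo_job.log   # placeholder log path for this demo
: > "$LOG"              # truncate any previous log

{
    echo "normal output"            # goes to stdout
    echo "something failed" >&2     # goes to stderr — captured via 2>&1
} >> "$LOG" 2>&1

cat "$LOG"              # both lines appear in the log
```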

The third concern is job overlap. If your backup script takes 90 minutes but runs every hour, you'll eventually have two copies fighting over the same files. The solution is a lock file — a mechanism where the script checks if another copy of itself is already running and exits gracefully if so. The flock utility makes this one line.
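Before wiring flock into a full script, you can watch its non-blocking behavior in isolation using its command-wrapping form. A small sketch, with a placeholder lock path:

```shell
#!/bin/sh
# Demonstrate flock -n: while one process holds the lock, a second
# non-blocking attempt fails immediately instead of running a duplicate.
LOCK=/tmp/demo_overlap.lock   # placeholder lock path for this demo

# First process: hold the lock for 2 seconds in the background
flock "$LOCK" sleep 2 &
sleep 0.5                     # give it time to acquire the lock

# Second process: -n (non-blocking) exits non-zero if the lock is held
if flock -n "$LOCK" true; then
    echo "lock acquired"
else
    echo "lock busy"          # this branch runs while the sleep holds it
fi
wait
```

The same wrapping form works directly in a crontab line, e.g. `*/15 * * * * flock -n /tmp/job.lock /opt/scripts/job.sh`, with no changes to the script itself.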

These three patterns — explicit environment, output logging, and job locking — are what separate a toy cron job from a production one. The example below shows all three working together in a realistic database backup script.

backup_postgres.sh · BASH
#!/bin/bash
# ─────────────────────────────────────────────────────────────────
# backup_postgres.sh
# Production-ready cron job: PostgreSQL backup with logging + locking
# Scheduled via crontab to run daily at 2:30am:
#   30 2 * * * /bin/bash /opt/scripts/backup_postgres.sh
# ─────────────────────────────────────────────────────────────────

# ── 1. EXPLICIT ENVIRONMENT ──────────────────────────────────────
# Cron's PATH is minimal (/usr/bin:/bin). Set it explicitly so
# commands like pg_dump, aws, gzip are all found correctly.
export PATH="/usr/local/bin:/usr/bin:/bin"

# Load app-specific environment variables from a secure file
# This file contains DB_USER, DB_NAME, S3_BUCKET etc.
# Permissions on this file should be 600 (owner read/write only)
source /etc/app/backup.env

# ── 2. LOGGING SETUP ─────────────────────────────────────────────
LOG_DIR="/var/log/app_backups"
LOG_FILE="${LOG_DIR}/backup_$(date +%Y-%m-%d).log"
TIMESTAMP=$(date '+%Y-%m-%d %H:%M:%S')

# Create log directory if it doesn't exist
mkdir -p "$LOG_DIR"

# Redirect ALL output (stdout + stderr) to the log file
# The tee command also prints to the terminal when run manually
exec > >(tee -a "$LOG_FILE") 2>&1

echo "[$(date '+%Y-%m-%d %H:%M:%S')] ─── Backup job started ───"

# ── 3. JOB LOCKING — prevent overlapping runs ────────────────────
LOCK_FILE="/tmp/backup_postgres.lock"

# flock acquires an exclusive lock on the lock file.
# -n means "non-blocking": if the lock is already held, exit immediately.
# 9 is the file descriptor we're using for the lock.
exec 9>"$LOCK_FILE"
if ! flock -n 9; then
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] Another backup is already running. Exiting."
    exit 1
fi

# ── 4. THE ACTUAL WORK ───────────────────────────────────────────
BACKUP_FILENAME="${DB_NAME}_$(date +%Y%m%d_%H%M%S).sql.gz"
BACKUP_PATH="/tmp/${BACKUP_FILENAME}"

echo "[$(date '+%Y-%m-%d %H:%M:%S')] Dumping database: $DB_NAME"

# pg_dump connects to PostgreSQL and streams SQL to gzip for compression.
# PGPASSWORD is read from the sourced env file above.
# pipefail is essential here: without it, $? reflects gzip's exit status
# (the last command in the pipeline), and a pg_dump failure would go unnoticed.
set -o pipefail
pg_dump -U "$DB_USER" -h "$DB_HOST" "$DB_NAME" | gzip > "$BACKUP_PATH"

# Check if the dump succeeded — never assume it worked
if [ $? -ne 0 ]; then
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] ERROR: pg_dump failed! Aborting upload."
    exit 1
fi

echo "[$(date '+%Y-%m-%d %H:%M:%S')] Dump complete. Uploading to S3: s3://${S3_BUCKET}/"

# Upload to S3 — credentials come from the sourced env file
aws s3 cp "$BACKUP_PATH" "s3://${S3_BUCKET}/postgres-backups/${BACKUP_FILENAME}"

if [ $? -eq 0 ]; then
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] Upload successful. Cleaning up local file."
    rm -f "$BACKUP_PATH"
else
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] ERROR: S3 upload failed! Local backup retained at $BACKUP_PATH"
    exit 1
fi

echo "[$(date '+%Y-%m-%d %H:%M:%S')] ─── Backup job completed successfully ───"
Output:
[2024-03-15 02:30:01] ─── Backup job started ───
[2024-03-15 02:30:01] Dumping database: production_db
[2024-03-15 02:31:47] Dump complete. Uploading to S3: s3://my-company-backups/
upload: /tmp/production_db_20240315_023001.sql.gz to s3://my-company-backups/postgres-backups/production_db_20240315_023001.sql.gz
[2024-03-15 02:32:03] Upload successful. Cleaning up local file.
[2024-03-15 02:32:03] ─── Backup job completed successfully ───
⚠️ Watch Out: Silent Failures Are Cron's Default. Without output redirection, a failing cron job produces zero feedback. Add `>> /var/log/myjob.log 2>&1` to every cron line as your absolute minimum. Better yet, build logging directly into the script itself as shown above — that way you get it whether the job is triggered by cron or run manually.

Special Schedules, System Crontabs, and When to Use Alternatives

Once you're comfortable with the five-field syntax, there are shorthand strings that make common schedules far more readable. Instead of 0 0 * * * you can write @daily. These are called cron nicknames, and most modern cron daemons support them. They're self-documenting and much harder to misread.

Beyond user crontabs, Linux ships with a set of system-managed cron directories. Drop an executable script into /etc/cron.daily/ and it will run once a day — no crontab editing required. The actual run times are controlled by the run-parts entries in /etc/crontab. These are perfect for system maintenance tasks packaged by software installers.
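For reference, this is roughly how a Debian/Ubuntu /etc/crontab wires those directories up with run-parts — exact times and the anacron handling vary by distribution, so treat these lines as illustrative rather than canonical:

```shell
# Typical run-parts entries from a Debian/Ubuntu /etc/crontab:
17 *    * * *   root    cd / && run-parts --report /etc/cron.hourly
25 6    * * *   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
47 6    * * 7   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
52 6    1 * *   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
```

Note that scripts dropped into these directories must be executable, and on Debian-based systems must have run-parts-compatible names (no file extensions, or they are silently skipped).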

That said, cron isn't always the right tool. It has real limitations: it has no dependency management (it can't wait for Job A to finish before starting Job B), it doesn't retry on failure, it has no built-in alerting, and it doesn't scale across multiple machines. For anything more complex — multi-step pipelines, distributed systems, or jobs that need retry logic — tools like systemd timers, Apache Airflow, or cloud-native schedulers (AWS EventBridge, GCP Cloud Scheduler) are better fits. Knowing when NOT to use cron is as important as knowing how to use it.
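To make the comparison concrete, here is a sketch of what the 2:30am backup from earlier could look like as a systemd timer. The unit names are hypothetical; the OnCalendar and Persistent semantics are the point:

```ini
# /etc/systemd/system/pg-backup.service  (hypothetical unit name)
[Unit]
Description=Nightly PostgreSQL backup

[Service]
Type=oneshot
ExecStart=/opt/scripts/backup_postgres.sh

# /etc/systemd/system/pg-backup.timer
[Unit]
Description=Run pg-backup.service daily at 02:30

[Timer]
OnCalendar=*-*-* 02:30:00
Persistent=true        ; run at next boot if the scheduled time was missed

[Install]
WantedBy=timers.target
```

Enable it with systemctl enable --now pg-backup.timer; systemctl list-timers then shows the next scheduled run — something cron never tells you.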

cron_advanced_patterns.sh · BASH
# ─────────────────────────────────────────────────────────────────
# CRON NICKNAMES — readable shorthand for common schedules
# ─────────────────────────────────────────────────────────────────

@reboot   /opt/scripts/start_queue_worker.sh   # Run ONCE at system startup
@hourly   /opt/scripts/refresh_cache.sh         # Same as: 0 * * * *
@daily    /opt/scripts/cleanup_sessions.sh      # Same as: 0 0 * * *
@weekly   /opt/scripts/generate_sitemap.sh      # Same as: 0 0 * * 0
@monthly  /opt/scripts/invoice_generator.sh     # Same as: 0 0 1 * *
@yearly   /opt/scripts/archive_old_records.sh   # Same as: 0 0 1 1 *

# ─────────────────────────────────────────────────────────────────
# SYSTEM CRONTAB (/etc/crontab) — has an extra USER field
# ─────────────────────────────────────────────────────────────────
# Format: minute  hour  day  month  weekday  USER  command

# Run log rotation as root every day at 6:25am
25 6 * * *   root    /usr/sbin/logrotate /etc/logrotate.conf

# Run the app's data sync as the 'deploy' user at midnight
0  0 * * *   deploy  /opt/app/bin/sync_data.sh

# ─────────────────────────────────────────────────────────────────
# RUNNING A CRON JOB MANUALLY FOR TESTING
# ─────────────────────────────────────────────────────────────────
# The most reliable way to test a cron job is to simulate
# cron's environment: no shell customizations, minimal PATH.

# Run your script exactly as cron would — bare environment
env -i HOME=/root LOGNAME=root PATH=/usr/bin:/bin \
    /bin/bash /opt/scripts/backup_postgres.sh

# ─────────────────────────────────────────────────────────────────
# CHECKING CRON LOGS — when a job runs but you don't see output
# ─────────────────────────────────────────────────────────────────

# On systemd-based systems (unit is "cron" on Debian/Ubuntu, "crond" on RHEL/Fedora)
journalctl -u cron --since "1 hour ago"

# On older systems using syslog
grep CRON /var/log/syslog | tail -50

# On Red Hat / CentOS systems
grep CRON /var/log/cron | tail -50
Output:
# journalctl -u cron --since "1 hour ago" output:
Mar 15 02:30:01 prod-server CRON[14823]: (deploy) CMD (/opt/scripts/backup_postgres.sh)
Mar 15 03:00:01 prod-server CRON[15102]: (root) CMD (/usr/sbin/logrotate /etc/logrotate.conf)
Mar 15 03:15:01 prod-server CRON[15341]: (deploy) CMD (/opt/scripts/health_check.sh)
🔥 Interview Gold: @reboot Is Underused. The @reboot nickname runs a command once when the system starts. It's perfect for starting background workers, mounting drives, or seeding caches after a reboot — and interviewers love asking whether you know it exists. Unlike init scripts, it requires zero configuration beyond a single crontab line.
| Feature | Cron Jobs | Systemd Timers |
| --- | --- | --- |
| Setup complexity | Single crontab line | Two unit files required (.service + .timer) |
| Logging | Manual — redirect to file | Automatic via journald |
| Missed job handling | Silently skipped if system was off | Can catch up missed runs with Persistent=true |
| Dependency management | None — runs regardless | Can depend on other systemd units |
| Run on boot | Via @reboot keyword | Native with OnBootSec= |
| Environment variables | Minimal — must set explicitly | Set in unit file with Environment= directive |
| Retry on failure | Not supported | Configurable with Restart= directive |
| Best for | Simple, single-command schedules | Complex system-level tasks with dependencies |
| Visibility | crontab -l only | systemctl list-timers — shows next run time |
| Portability | Works on every Linux/Unix system | Only on systemd-based systems (most modern Linux) |

🎯 Key Takeaways

  • Cron runs with a stripped-down environment — never assume PATH, env variables, or shell customizations are available. Use absolute paths and source env files explicitly inside every script.
  • Always redirect both stdout and stderr to a log file (>> /path/to/job.log 2>&1) — without this, silent failures are guaranteed and you'll have no evidence a job even ran.
  • Use flock to create a lock file when your job's runtime could exceed its schedule interval — running two copies of a backup or migration job simultaneously can corrupt data silently.
  • Know when cron is the wrong tool: if you need retries, job dependencies, distributed execution, or failure alerting, reach for systemd timers, Airflow, or a cloud scheduler instead of bolting complexity onto cron.

⚠ Common Mistakes to Avoid

  • Mistake 1: Relying on your shell's PATH — Symptom: the job works perfectly when you run it manually but does nothing (or logs 'command not found') when cron triggers it. Fix: always use absolute paths in cron commands (/usr/bin/python3 not python3) and set PATH explicitly at the top of your script, or source your env file.
  • Mistake 2: Forgetting to redirect stderr — Symptom: your script throws an error, cron emails root (if mail is configured) or discards it entirely, and you have zero visibility into what went wrong. Fix: always append >> /var/log/myjob.log 2>&1 to your crontab line, where 2>&1 merges stderr into stdout so both go to the same log file.
  • Mistake 3: Using crontab -r when you meant crontab -e — Symptom: all your cron jobs disappear instantly with no confirmation prompt and no undo. Fix: before you do anything destructive, run crontab -l > ~/crontab_backup.txt to save a copy. Some teams commit their crontab to version control via a provisioning script specifically to prevent this.

Interview Questions on This Topic

  • Q: What happens to a cron job if the server is rebooted exactly when the job was supposed to run? How would you handle that scenario in production?
  • Q: A developer says their cron job works fine when they run it manually but fails every time cron triggers it. Walk me through how you'd diagnose and fix that.
  • Q: Two cron jobs are scheduled to run at the same time and they both write to the same file. What could go wrong, and what's the standard Linux mechanism to prevent it?

Frequently Asked Questions

How do I run a cron job every 5 minutes in Linux?

Use the step syntax with an asterisk: */5 * * * * /path/to/your/script.sh. The */5 in the minutes field means 'every 5 minutes' — it's shorthand for 0, 5, 10, 15...55. You can use this step syntax in any of the five time fields.

Why is my cron job not running even though the syntax looks correct?

The most common causes are: the script isn't executable (run chmod +x /path/to/script.sh), the command uses a PATH that cron doesn't have access to, or the script has a shebang line pointing to the wrong interpreter. Test by running env -i PATH=/usr/bin:/bin /bin/bash /path/to/script.sh to simulate cron's minimal environment and see the real error.

What's the difference between /etc/crontab and a user crontab?

The system crontab at /etc/crontab has an extra field between the time expression and the command — the username that the command should run as. This allows one file to run jobs as multiple different users. User crontabs (edited with crontab -e) always run as the user who owns them and don't have this extra field. For application-level tasks, prefer user crontabs for security isolation.

TheCodeForge Editorial Team

Written and reviewed by senior developers with real-world experience across enterprise, startup and open-source projects. Every article on TheCodeForge is written to be clear, accurate and genuinely useful — not just SEO filler.
