Cron Jobs in Linux Explained — Scheduling, Syntax and Real-World Patterns
Every production system has a graveyard of tasks that someone used to do manually — rotating log files, sending weekly reports, backing up databases, clearing temp folders. Done by hand, these tasks get forgotten, delayed, or skipped on holidays. Done wrong, they take down services. Cron is the Linux answer to this problem: a built-in scheduler that's been running quietly on Unix systems since the 1970s, and still powers millions of automated workflows today.
The core problem cron solves is reliability. Humans forget. Cron doesn't. If you need something to happen at a predictable time — daily, hourly, every 15 minutes, or at 3:47am on the first of every month — cron handles it without a process manager, without a paid SaaS tool, and without a single line of application code. It lives at the OS level, which means it works regardless of what language your app is written in or whether your app is even running.
By the end of this article you'll be able to write cron expressions confidently, manage crontab files without breaking things, debug jobs that silently fail, and apply the real-world patterns that DevOps engineers actually use in production. You'll also know the three mistakes that catch almost everyone the first time they use cron — and exactly how to avoid them.
How Cron Actually Works — The Daemon, the Crontab, and the Schedule
Cron is a daemon — a background process that starts when your system boots and never stops. Its name comes from 'chronos', the Greek word for time. Every minute, the cron daemon wakes up, checks all the crontab files on the system, and asks: 'Is there anything I should run right now?' If yes, it fires the job. Then it goes back to sleep until the next minute.
A crontab (cron table) is just a plain text file that lists scheduled jobs. Each line in a crontab is one job: five time fields followed by the command to run. You never edit this file directly — you use the crontab command, which validates the format before saving, protecting you from syntax errors that would silently break everything.
There are two types of crontabs you'll work with. The first is user crontabs — each Linux user has their own, and jobs run as that user. The second is the system crontab at /etc/crontab and files dropped into /etc/cron.d/, which include an extra field specifying which user to run the job as. For most application-level automation, user crontabs are the right choice. For system-level jobs like log rotation, the system crontab is used.
The key mental model: cron doesn't track whether a previous job finished. If your job takes longer than its schedule interval, you can end up with two copies running at the same time. That's one of the most dangerous production gotchas, and we'll cover how to handle it.
```shell
# ─────────────────────────────────────────────────
# MANAGING YOUR CRONTAB — the essential commands
# ─────────────────────────────────────────────────

# Open your crontab in the default editor (usually nano or vi)
# This is the ONLY safe way to edit your crontab
crontab -e

# List all your current cron jobs — great for auditing
crontab -l

# Remove ALL your cron jobs (dangerous — no confirmation prompt!)
crontab -r

# Edit the crontab for a specific user (must run as root)
crontab -u deploy_user -e

# ─────────────────────────────────────────────────
# ANATOMY OF A CRON EXPRESSION
# ─────────────────────────────────────────────────
# Each job line follows this exact structure:
#
# ┌───────────── minute (0 - 59)
# │ ┌─────────── hour (0 - 23)
# │ │ ┌───────── day of month (1 - 31)
# │ │ │ ┌─────── month (1 - 12)
# │ │ │ │ ┌───── day of week (0 - 7, both 0 and 7 = Sunday)
# │ │ │ │ │
# * * * * *  command_to_run

# ─────────────────────────────────────────────────
# REAL EXAMPLES with plain-English explanations
# ─────────────────────────────────────────────────

# Run database backup every day at 2:30am
30 2 * * * /opt/scripts/backup_postgres.sh

# Clear the application temp directory every hour
0 * * * * rm -rf /var/app/tmp/*

# Send a weekly analytics report every Monday at 9am
0 9 * * 1 /opt/scripts/send_weekly_report.py

# Run a health check every 15 minutes
*/15 * * * * /opt/scripts/health_check.sh

# Archive logs on the 1st of every month at midnight
0 0 1 * * /opt/scripts/archive_logs.sh

# Run a job only on weekdays (Mon-Fri) at 6am
0 6 * * 1-5 /opt/scripts/sync_business_data.sh
```
Writing Production-Ready Cron Jobs — Logging, Environments, and Locking
Here's where most tutorials stop — and where most real problems start. A cron job that runs date will work fine. A cron job that runs your Python script will almost certainly fail silently the first time, and here's why: cron runs with a minimal environment. It doesn't load your .bashrc, .bash_profile, or any of the environment variables you set in your shell session. That means PATH, PYTHONPATH, NODE_ENV, database credentials, API keys — none of it is there unless you explicitly provide it.
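To see just how bare that environment is, you can reproduce it from any shell without waiting for cron to fire; the paths and variable values below are illustrative:

```shell
#!/bin/sh
# Reproduce cron's stripped-down environment on demand. (A classic
# debugging trick is the temporary crontab line
#   * * * * * env > /tmp/cron_env.txt
# then inspecting the file after a minute.)

# env -i starts from an EMPTY environment; we add back only the few
# variables cron itself typically sets.
env -i HOME=/tmp LOGNAME=nobody PATH=/usr/bin:/bin /bin/sh -c '
    echo "PATH is: $PATH"
    # Anything installed outside /usr/bin and /bin will not be found:
    command -v aws >/dev/null || echo "aws: not found in cron PATH"
'
```

If a command your script relies on lives in /usr/local/bin or a virtualenv, this is exactly how cron fails to find it.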
The second production concern is logging. By default, cron swallows all output. If your script crashes, you'll never know unless you've set up logging. The fix is simple: redirect both stdout and stderr to a log file on every single cron job.
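A minimal, runnable sketch of that redirect pattern (the log path is illustrative):

```shell
#!/bin/sh
# The crontab pattern (log path illustrative):
#   30 2 * * * /opt/scripts/backup_postgres.sh >> /var/log/backup.log 2>&1
#
# Order matters: '>> file' first, then '2>&1'. Written the other way
# round ('2>&1 >> file'), stderr still goes to the terminal.

LOG=/tmp/redirect_demo.log
rm -f "$LOG"

# A stand-in "job" that writes to both streams:
sh -c 'echo "normal output"; echo "something failed" >&2' >> "$LOG" 2>&1

cat "$LOG"   # both lines were captured
```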
The third concern is job overlap. If your backup script takes 90 minutes but runs every hour, you'll eventually have two copies fighting over the same files. The solution is a lock file — a mechanism where the script checks if another copy of itself is already running and exits gracefully if so. The flock utility makes this one line.
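A sketch of both approaches, assuming the util-linux `flock` that ships with virtually every Linux distribution; the lock and script paths are illustrative:

```shell
#!/bin/sh
# In the crontab itself, flock can wrap the command with no changes to
# the script (paths illustrative):
#   */15 * * * * /usr/bin/flock -n /tmp/health_check.lock /opt/scripts/health_check.sh

# Local demonstration of the non-blocking behavior: while the first
# flock holds the lock, a second attempt with -n fails immediately
# instead of queueing up behind it.
LOCK=/tmp/flock_demo.lock

flock -n "$LOCK" sleep 2 &   # first "job" holds the lock for 2 seconds
sleep 1                      # give it time to acquire the lock

if ! flock -n "$LOCK" true; then
    echo "lock is held; a second copy would exit here"
fi
wait
```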
These three patterns — explicit environment, output logging, and job locking — are what separate a toy cron job from a production one. The example below shows all three working together in a realistic database backup script.
```shell
#!/bin/bash
# ─────────────────────────────────────────────────────────────────
# backup_postgres.sh
# Production-ready cron job: PostgreSQL backup with logging + locking
# Scheduled via crontab to run daily at 2:30am:
#   30 2 * * * /bin/bash /opt/scripts/backup_postgres.sh
# ─────────────────────────────────────────────────────────────────

# Make a pipeline fail if ANY command in it fails — without this,
# "pg_dump | gzip" reports gzip's exit status, masking dump errors.
set -o pipefail

# ── 1. EXPLICIT ENVIRONMENT ──────────────────────────────────────
# Cron's PATH is minimal (/usr/bin:/bin). Set it explicitly so
# commands like pg_dump, aws, gzip are all found correctly.
export PATH="/usr/local/bin:/usr/bin:/bin"

# Load app-specific environment variables from a secure file
# This file contains DB_USER, DB_NAME, S3_BUCKET etc.
# Permissions on this file should be 600 (owner read/write only)
source /etc/app/backup.env

# ── 2. LOGGING SETUP ─────────────────────────────────────────────
LOG_DIR="/var/log/app_backups"
LOG_FILE="${LOG_DIR}/backup_$(date +%Y-%m-%d).log"

# Timestamp helper — called per log line so each entry gets the
# current time, not the time the script started
ts() { date '+%Y-%m-%d %H:%M:%S'; }

# Create log directory if it doesn't exist
mkdir -p "$LOG_DIR"

# Redirect ALL output (stdout + stderr) to the log file
# The tee command also prints to the terminal when run manually
exec > >(tee -a "$LOG_FILE") 2>&1

echo "[$(ts)] ─── Backup job started ───"

# ── 3. JOB LOCKING — prevent overlapping runs ────────────────────
LOCK_FILE="/tmp/backup_postgres.lock"

# flock acquires an exclusive lock on the lock file.
# -n means "non-blocking" — if the lock is already held, exit immediately.
# 9 is the file descriptor we're using for the lock.
exec 9>"$LOCK_FILE"
if ! flock -n 9; then
    echo "[$(ts)] Another backup is already running. Exiting."
    exit 1
fi

# ── 4. THE ACTUAL WORK ───────────────────────────────────────────
BACKUP_FILENAME="${DB_NAME}_$(date +%Y%m%d_%H%M%S).sql.gz"
BACKUP_PATH="/tmp/${BACKUP_FILENAME}"

echo "[$(ts)] Dumping database: $DB_NAME"

# pg_dump connects to PostgreSQL and streams SQL to gzip for compression
# PGPASSWORD is read from the sourced env file above
pg_dump -U "$DB_USER" -h "$DB_HOST" "$DB_NAME" | gzip > "$BACKUP_PATH"

# Check if the dump succeeded — never assume it worked
# (pipefail above ensures $? reflects a pg_dump failure too)
if [ $? -ne 0 ]; then
    echo "[$(ts)] ERROR: pg_dump failed! Aborting upload."
    exit 1
fi

echo "[$(ts)] Dump complete. Uploading to S3: s3://${S3_BUCKET}/"

# Upload to S3 — credentials come from the sourced env file
aws s3 cp "$BACKUP_PATH" "s3://${S3_BUCKET}/postgres-backups/${BACKUP_FILENAME}"

if [ $? -eq 0 ]; then
    echo "[$(ts)] Upload successful. Cleaning up local file."
    rm -f "$BACKUP_PATH"
else
    echo "[$(ts)] ERROR: S3 upload failed! Local backup retained at $BACKUP_PATH"
    exit 1
fi

echo "[$(ts)] ─── Backup job completed successfully ───"
```
```
[2024-03-15 02:30:01] Dumping database: production_db
[2024-03-15 02:31:47] Dump complete. Uploading to S3: s3://my-company-backups/
upload: /tmp/production_db_20240315_023001.sql.gz to s3://my-company-backups/postgres-backups/production_db_20240315_023001.sql.gz
[2024-03-15 02:32:03] Upload successful. Cleaning up local file.
[2024-03-15 02:32:03] ─── Backup job completed successfully ───
```
Special Schedules, System Crontabs, and When to Use Alternatives
Once you're comfortable with the five-field syntax, there are shorthand strings that make common schedules far more readable. Instead of `0 0 * * *` you can write `@daily`. These are called cron nicknames, and most modern cron implementations (Vixie cron and its descendants, which ship with nearly every Linux distribution) support them. They're self-documenting and much harder to misread.
Beyond user crontabs, Linux ships with a set of system-managed cron directories. Drop an executable script into /etc/cron.daily/ and it will run once a day — no crontab editing required. (On Debian-based systems, run-parts skips filenames containing dots, so name the script `cleanup`, not `cleanup.sh`.) The actual run times are controlled by the run-parts entries in /etc/crontab. These are perfect for system maintenance tasks packaged by software installers.
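For reference, these are the run-parts entries that drive those directories on a typical Debian/Ubuntu system; the exact times and the anacron guard vary by distribution:

```shell
# Typical /etc/crontab entries on Debian/Ubuntu (times vary by distro)
17 *  * * *  root  cd / && run-parts --report /etc/cron.hourly
25 6  * * *  root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
47 6  * * 7  root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
52 6  1 * *  root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
```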
That said, cron isn't always the right tool. It has real limitations: it has no dependency management (it can't wait for Job A to finish before starting Job B), it doesn't retry on failure, it has no built-in alerting, and it doesn't scale across multiple machines. For anything more complex — multi-step pipelines, distributed systems, or jobs that need retry logic — tools like systemd timers, Apache Airflow, or cloud-native schedulers (AWS EventBridge, GCP Cloud Scheduler) are better fits. Knowing when NOT to use cron is as important as knowing how to use it.
```shell
# ─────────────────────────────────────────────────────────────────
# CRON NICKNAMES — readable shorthand for common schedules
# ─────────────────────────────────────────────────────────────────

@reboot  /opt/scripts/start_queue_worker.sh   # Run ONCE at system startup
@hourly  /opt/scripts/refresh_cache.sh        # Same as: 0 * * * *
@daily   /opt/scripts/cleanup_sessions.sh     # Same as: 0 0 * * *
@weekly  /opt/scripts/generate_sitemap.sh     # Same as: 0 0 * * 0
@monthly /opt/scripts/invoice_generator.sh    # Same as: 0 0 1 * *
@yearly  /opt/scripts/archive_old_records.sh  # Same as: 0 0 1 1 *

# ─────────────────────────────────────────────────────────────────
# SYSTEM CRONTAB (/etc/crontab) — has an extra USER field
# ─────────────────────────────────────────────────────────────────
# Format: minute hour day month weekday USER command

# Run log rotation as root every day at 6:25am
25 6 * * * root /usr/sbin/logrotate /etc/logrotate.conf

# Run the app's data sync as the 'deploy' user at midnight
0 0 * * * deploy /opt/app/bin/sync_data.sh

# ─────────────────────────────────────────────────────────────────
# RUNNING A CRON JOB MANUALLY FOR TESTING
# ─────────────────────────────────────────────────────────────────
# The most reliable way to test a cron job is to simulate
# cron's environment: no shell customizations, minimal PATH.

# Run your script exactly as cron would — bare environment
env -i HOME=/root LOGNAME=root PATH=/usr/bin:/bin \
    /bin/bash /opt/scripts/backup_postgres.sh

# ─────────────────────────────────────────────────────────────────
# CHECKING CRON LOGS — when a job runs but you don't see output
# ─────────────────────────────────────────────────────────────────

# On systemd-based Debian/Ubuntu systems (the service is named 'cron')
journalctl -u cron --since "1 hour ago"

# On Red Hat / CentOS / Fedora (the service is named 'crond')
journalctl -u crond --since "1 hour ago"

# On older Debian-based systems using syslog
grep CRON /var/log/syslog | tail -50

# On Red Hat / CentOS systems with a dedicated cron log
grep CRON /var/log/cron | tail -50
```
```
Mar 15 02:30:01 prod-server CRON[14823]: (deploy) CMD (/opt/scripts/backup_postgres.sh)
Mar 15 03:00:01 prod-server CRON[15102]: (root) CMD (/usr/sbin/logrotate /etc/logrotate.conf)
Mar 15 03:15:01 prod-server CRON[15341]: (deploy) CMD (/opt/scripts/health_check.sh)
```
| Feature | Cron Jobs | Systemd Timers |
|---|---|---|
| Setup complexity | Single crontab line | Two unit files required (.service + .timer) |
| Logging | Manual — redirect to file | Automatic via journald |
| Missed job handling | Silently skipped if system was off | Can catch up missed runs with Persistent=true |
| Dependency management | None — runs regardless | Can depend on other systemd units |
| Run on boot | Via @reboot keyword | Native with OnBootSec= |
| Environment variables | Minimal — must set explicitly | Set in unit file with Environment= directive |
| Retry on failure | Not supported | Configurable with Restart= directive |
| Best for | Simple, single-command schedules | Complex system-level tasks with dependencies |
| Visibility | crontab -l only | systemctl list-timers — shows next run time |
| Portability | Works on every Linux/Unix system | Only on systemd-based systems (most modern Linux) |
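As a concrete point of comparison, here is roughly what the daily 2:30am backup from earlier would look like as a systemd timer. This is a sketch: the unit names and paths are hypothetical, and both files would live in /etc/systemd/system/.

```ini
# /etc/systemd/system/pg-backup.service — WHAT to run
[Unit]
Description=PostgreSQL backup

[Service]
Type=oneshot
ExecStart=/opt/scripts/backup_postgres.sh
```

```ini
# /etc/systemd/system/pg-backup.timer — WHEN to run it
[Unit]
Description=Run PostgreSQL backup daily at 2:30am

[Timer]
OnCalendar=*-*-* 02:30:00
# Run at next boot if the machine was off at 2:30
Persistent=true

[Install]
WantedBy=timers.target
```

You would enable it with `systemctl enable --now pg-backup.timer` and check upcoming runs with `systemctl list-timers`.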
🎯 Key Takeaways
- Cron runs with a stripped-down environment — never assume PATH, env variables, or shell customizations are available. Use absolute paths and source env files explicitly inside every script.
- Always redirect both stdout and stderr to a log file (`>> /path/to/job.log 2>&1`) — without this, silent failures are guaranteed and you'll have no evidence a job even ran.
- Use `flock` to create a lock file when your job's runtime could exceed its schedule interval — running two copies of a backup or migration job simultaneously can corrupt data silently.
- Know when cron is the wrong tool: if you need retries, job dependencies, distributed execution, or failure alerting, reach for systemd timers, Airflow, or a cloud scheduler instead of bolting complexity onto cron.
⚠ Common Mistakes to Avoid
- ✕ Mistake 1: Relying on your shell's PATH — Symptom: the job works perfectly when you run it manually but does nothing (or logs 'command not found') when cron triggers it. Fix: always use absolute paths in cron commands (`/usr/bin/python3`, not `python3`) and set `PATH` explicitly at the top of your script, or source your env file.
- ✕ Mistake 2: Forgetting to redirect stderr — Symptom: your script throws an error, cron emails root (if mail is configured) or discards it entirely, and you have zero visibility into what went wrong. Fix: always append `>> /var/log/myjob.log 2>&1` to your crontab line, where `2>&1` merges stderr into stdout so both go to the same log file.
- ✕ Mistake 3: Using `crontab -r` when you meant `crontab -e` — Symptom: all your cron jobs disappear instantly with no confirmation prompt and no undo. Fix: before you do anything destructive, run `crontab -l > ~/crontab_backup.txt` to save a copy. Some teams commit their crontab to version control via a provisioning script specifically to prevent this.
Interview Questions on This Topic
- Q: What happens to a cron job if the server is rebooted exactly when the job was supposed to run? How would you handle that scenario in production?
- Q: A developer says their cron job works fine when they run it manually but fails every time cron triggers it. Walk me through how you'd diagnose and fix that.
- Q: Two cron jobs are scheduled to run at the same time and they both write to the same file. What could go wrong, and what's the standard Linux mechanism to prevent it?
Frequently Asked Questions
How do I run a cron job every 5 minutes in Linux?
Use the step syntax with an asterisk: `*/5 * * * * /path/to/your/script.sh`. The `*/5` in the minutes field means 'every 5 minutes' — it's shorthand for 0, 5, 10, 15...55. You can use this step syntax in any of the five time fields.
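A few illustrative lines showing the same step syntax in other fields (the script path is hypothetical):

```shell
*/5 *   *   * *  /path/to/your/script.sh   # every 5 minutes
0   */2 *   * *  /path/to/your/script.sh   # every 2 hours, on the hour
0   0   */3 * *  /path/to/your/script.sh   # every 3rd day of the month (1st, 4th, 7th, ...)
```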
Why is my cron job not running even though the syntax looks correct?
The most common causes are: the script isn't executable (run chmod +x /path/to/script.sh), the command uses a PATH that cron doesn't have access to, or the script has a shebang line pointing to the wrong interpreter. Test by running env -i PATH=/usr/bin:/bin /bin/bash /path/to/script.sh to simulate cron's minimal environment and see the real error.
What's the difference between /etc/crontab and a user crontab?
The system crontab at /etc/crontab has an extra field between the time expression and the command — the username that the command should run as. This allows one file to run jobs as multiple different users. User crontabs (edited with crontab -e) always run as the user who owns them and don't have this extra field. For application-level tasks, prefer user crontabs for security isolation.
Written and reviewed by senior developers with real-world experience across enterprise, startup and open-source projects. Every article on TheCodeForge is written to be clear, accurate and genuinely useful — not just SEO filler.