
AutoSys Fault Tolerance and Recovery — Building Resilient Batch Workflows

Learn AutoSys fault tolerance patterns: n_retrys, box_terminator, HA setup, restart procedures, and recovery strategies for failed batch workflows in production environments.
🔥 Advanced — solid DevOps foundation required
In this tutorial, you'll learn
  • n_retrys handles transient failures automatically — set it on jobs prone to temporary external issues
  • box_terminator: 1 stops the entire box when a critical job fails — use it on validation and pre-requisite checks
  • term_run_time prevents hung jobs from blocking everything downstream indefinitely
[Figure: Fault Tolerance Layers (Job · Box · Infrastructure). Job level: n_retrys auto-retry, term_run_time kills hung jobs, alarm_if_fail alerts the team, n_retrys: 2 for transient failures. Box level: box_terminator kills the box, validation job as terminator, success() chain control, done() for cleanup jobs. Infrastructure level: Dual Event Server HA, shadow auto-promotes, Tie-Breaker arbitrates, agent redundancy planning.]
Quick Answer
  • n_retrys: auto-retry a failed job up to N times for transient failures
  • box_terminator: stop the entire box when a critical job fails
  • Dual Event Server HA: infrastructure-level failover for the AutoSys scheduler
  • alarm_if_fail: notify only after all retries exhausted — don't wake ops for blips
  • term_run_time: kill hung jobs so downstream isn't blocked indefinitely
  • Biggest mistake: setting n_retrys too high masks permanent failures and delays escalation by hours
🚨 START HERE

AutoSys Fault Tolerance – Commands & Fixes

The commands you need when jobs fail: check status, force retries, kill hung jobs, and verify HA.
🟡

Job failed — want to retry manually

Immediate Action: Check job status and retry count.
Commands
autorep -J job_name -w
sendevent -E FORCE_START -J job_name
Fix Now: To retry after fixing the issue, hold the job first (`sendevent -E JOB_ON_HOLD -J job_name`), apply your fix, then release it with `sendevent -E JOB_OFF_HOLD -J job_name` so it can run again.
🟡

Job hung — never completes

Immediate Action: Kill the job and check term_run_time.
Commands
sendevent -E KILLJOB -J job_name
autorep -J job_name -q | grep term_run_time
Fix Now: Add `term_run_time: 30` (minutes) to the job definition to prevent future hangs.
🟡

Box not stopping on critical failure

Immediate Action: Check if the failing job is a box_terminator.
Commands
autorep -J job_name -q | grep box_terminator
sendevent -E KILLJOB -J box_name
Fix Now: Edit the job JIL to set `box_terminator: 1` and `alarm_if_fail: 1`.
🟡

Event Server down, shadow not promoting

Immediate Action: Check shadow status and connectivity.
Commands
autoflags -a | grep -E 'primary|shadow'
chk_auto_up -A -S SHADOW_INSTANCE
Fix Now: If the shadow is not syncing, restart the Event Processor on the shadow host using your platform's start mechanism (for example, the eventor start script on UNIX), then re-check sync status.
Production Incident

The Night the Payroll Box Ran All Weekend

Validation job failed silently, downstream jobs ran anyway, payroll was computed on bad data, and no one knew until Monday morning.
Symptom: Payroll output was $2.4M off. Downstream jobs completed with SUCCESS but produced incorrect results. No alarms fired.
Assumption: The team assumed the validation job's failure would stop the box because it was a prerequisite. They didn't set box_terminator.
Root cause: The validation job failed (exit code 1), but without box_terminator: 1, the box continued running all other jobs. The remaining jobs used stale data and ran successfully on corrupt input.
Fix: Added box_terminator: 1 to the validation job. Also added a success() condition on the first processing job after validation so that even if box_terminator is accidentally removed, the condition blocks execution.
Key Lesson
  • Always mark validation and gate jobs as box_terminators — a failure there means all downstream work is garbage.
  • Alarm on validation failures at severity CRITICAL — not just on jobs that crash but on jobs that invalidate the data pipeline.
  • Never assume a box failure cascade works without explicit attributes — test the failure scenario in a non-prod environment.
Production Debug Guide

Symptom → Action quick reference for the most common production failures.

Job shows FAILURE, no retry attempted → Check the job definition: autorep -J job_name -q. Verify n_retrys is set; if 0, the job will not retry automatically. Hold the job (sendevent -E JOB_ON_HOLD -J job_name), update the JIL with n_retrys, then release it (sendevent -E JOB_OFF_HOLD -J job_name).
Job status is FAILURE but box status is RUNNING → Check whether the job has box_terminator: 0 (the default). If its failure should stop the box, set box_terminator: 1 and test. Also note the companion attribute job_terminator: 1, which kills an inner job when the box itself fails.
Job stuck in ACTIVATED status for hours → Run autorep -J job_name -w for details and autorep -J job_name -q to inspect its starting conditions; ACTIVATED means the job is waiting on conditions, so term_run_time (which applies only once a job is RUNNING) won't help here. To clear a stuck job, use sendevent -E CHANGE_STATUS -s INACTIVE -J job_name.
Shadow Event Server never takes over after primary crash → Check shadow status: autoflags -a | grep shadow. Verify network connectivity and that the shadow instance is defined in the instance configuration. Test failover quarterly.
Alarm did not fire on job failure → Check that alarm_if_fail: 1 is set on the job. Verify notification rules in WCC or your custom alarm scripts. Remember: if n_retrys > 0, the alarm fires only after all retries are exhausted.

Enterprise batch workflows run overnight when no one is watching. The jobs that matter most — payroll, settlement, reconciliation — are the ones where failures are most costly. Building fault tolerance into your AutoSys design means many failures recover automatically, and when they don't, the right people are notified with enough context to fix things quickly. It's not about eliminating failures — it's about controlling how they propagate and how fast you bounce back.

Automatic retry with n_retrys

The simplest fault tolerance mechanism. n_retrys tells AutoSys to automatically rerun a failed job N times before declaring it a final FAILURE. This handles transient failures like brief network blips or temporary database connection issues.

Here's the thing: each retry is a full new attempt — the job script runs again from scratch. AutoSys doesn't resume from where it left off. So if your job is not idempotent, retries can cause data duplication or corruption. For example, an INSERT without a uniqueness check will happily create duplicate rows on each retry. Make sure your scripts handle re-entry safely: use idempotency keys, checkpoints, or database MERGE (upsert) logic.
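As a concrete illustration, here is a minimal sketch of a re-entrant job script. It assumes a PostgreSQL backend, a hypothetical batch_control bookkeeping table, and an ON CONFLICT upsert (all stand-ins for whatever your environment provides):

idempotent_extract.sh · BASH
#!/bin/bash
# Hypothetical re-entrant extract: safe for AutoSys to retry from scratch.
RUN_DATE=$(date +%Y%m%d)

# 1. Skip the run entirely if a previous attempt already completed
done=$(psql -tA -c "SELECT 1 FROM batch_control WHERE run_date='${RUN_DATE}' AND status='DONE'")
if [ "$done" = "1" ]; then
    echo "Run ${RUN_DATE} already completed - nothing to do"
    exit 0
fi

# 2. Load via upsert so a retry cannot create duplicate rows
psql -c "INSERT INTO market_data SELECT * FROM staging_market_data
         ON CONFLICT (trade_id) DO UPDATE SET price = EXCLUDED.price;"

# 3. Record completion only after all work has succeeded
psql -c "INSERT INTO batch_control (run_date, status) VALUES ('${RUN_DATE}', 'DONE')
         ON CONFLICT (run_date) DO UPDATE SET status = 'DONE';"

With this structure, the four attempts that n_retrys: 3 allows all converge on the same final state.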

When setting n_retrys, choose a number that matches the expected transient window. If network blips last ~30 seconds, and your job runs in 2 minutes, n_retrys: 3 gives about 6 minutes of recovery time. That's enough for most intermittent issues without delaying the pipeline too much.

retry_config.jil · BASH
insert_job: extract_market_data
job_type: CMD
command: /scripts/extract_market.sh
machine: data-server-01
owner: batchuser
date_conditions: 1
days_of_week: all
start_times: "18:00"
n_retrys: 3            /* retry up to 3 times after initial failure = 4 total attempts */
alarm_if_fail: 1       /* alarm only after all retries exhausted */
term_run_time: 45      /* kill if running over 45 minutes */
std_err_file: /logs/autosys/extract_market_data.err
▶ Output
/* Execution sequence on failure:
18:00:01 — Attempt 1: FAILURE (exit code 1)
18:00:31 — Retry 1: FAILURE (exit code 1)
18:01:01 — Retry 2: SUCCESS (exit code 0)
18:01:01 — extract_market_data: SUCCESS — downstream jobs proceed */
⚠ Idempotency is not optional
If your script inserts data, writes to a file, or sends an API call, each retry repeats that action. Without idempotency, you'll get duplicate rows, corrupt files, or duplicate charges. Test your script's behaviour under retry before putting it in production.
📊 Production Insight
n_retrys masks flaky scripts. If a job fails intermittently and retry succeeds, you never investigate the root cause — until the underlying issue grows worse.
A job that always fails on retry 3 may indicate resource exhaustion (temp tablespace, file handles) that only triggers under load.
Rule: set a maximum of 3 retries and monitor retry counts via AutoSys reports. A job that retries every night needs investigation, not tolerance.
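If you want to automate that monitoring, a rough sketch using autorep's run-number flag (-r -1 is the previous run, -r -2 the one before, and so on) could look like the following; the job list and the one-failure-a-week threshold are assumptions:

retry_audit.sh · BASH
#!/bin/bash
# Crude retry audit: count recent FAILURE runs per job and flag chronic retriers.
for job in extract_market_data load_positions; do    # example job list
    fails=0
    for n in 1 2 3 4 5 6 7; do                       # look back over the last 7 runs
        autorep -J "$job" -r -$n 2>/dev/null | grep -q FAILURE && fails=$((fails + 1))
    done
    if [ "$fails" -gt 1 ]; then
        echo "INVESTIGATE: $job failed $fails of its last 7 runs"
    fi
done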
🎯 Key Takeaway
n_retrys is for transient failures, not buggy scripts.
Set 2–3 retries max and pair with idempotent job logic.
If a job uses all retries, treat it as an incident — not a new baseline.

box_terminator — stopping the box on critical failure

In a BOX with multiple independent jobs, a failure in one job normally leaves other jobs to continue. If one job's failure should stop everything — because its output is required or its failure invalidates all subsequent work — mark it as a box_terminator.

When a job with box_terminator: 1 fails, AutoSys immediately transitions the parent box to FAILURE. Jobs already running inside the box are killed (their status becomes TERMINATED), and pending inner jobs never start. This prevents wasted compute on bad data and shortens the time to detect and recover.

In practice, use box_terminator on
  • Data validation jobs (schema checks, referential integrity)
  • Prerequisite extraction jobs (if upstream source is unavailable)
  • Configuration or lookup table loads (everything depends on them)

Do not use box_terminator on jobs that have graceful degradation paths. If a downstream job can handle missing data (e.g., produce a partial report with a warning), let it run.

box_terminator.jil · BASH
insert_job: validate_input_data
job_type: CMD
box_name: eod_box
command: /scripts/validate.sh
machine: server01
owner: batch
box_terminator: 1          /* if this fails, the entire box fails immediately */
alarm_if_fail: 1

/* Without box_terminator: other jobs in the box would continue even after validate fails */
/* With box_terminator: box immediately moves to FAILURE, all pending inner jobs skip */
🔥 Put validation jobs as box_terminators
Data validation jobs are ideal box_terminator candidates. If input data is invalid, there's no point running any of the downstream processing jobs — they'd produce bad output. Mark the validation job as box_terminator: 1 to stop the entire box immediately on validation failure.
📊 Production Insight
A validation job that is not a box_terminator allows downstream jobs to run on garbage data. The result: corrupt output that passes all success checks.
Debugging that scenario is brutal — every downstream job shows SUCCESS, but the data is wrong. You lose hours tracing the problem back.
Rule: if the job's output is a hard prerequisite for everything that follows, it must be a box_terminator. No exceptions.
🎯 Key Takeaway
box_terminator stops the entire box on failure.
Use it on validation, extraction, and configuration jobs — anything whose failure makes downstream work worthless.
Without it, the rest of the box keeps running even when a critical job inside it fails.

alarm_if_fail and notification — when to wake someone up

alarm_if_fail:1 tells AutoSys to trigger an alarm when a job fails. But the timing matters: if you also have n_retrys > 0, the alarm only fires after all retries are exhausted. That's the right behaviour for transient failures — you don't want the on-call engineer paged for a 30-second network glitch.

However, some jobs should always alarm on the first failure, regardless of retries. For those, consider splitting the responsibilities: a wrapper script that handles retries internally, and the job itself defined with alarm_if_fail: 1 and n_retrys: 0. Alternatively, use a separate notification mechanism, such as a custom script that sends a page on any non-zero exit code.

In AutoSys, the alarm mechanism is typically configured in WCC or via an external event handler. The job attribute alarm_if_fail sets a flag that AutoSys propagates to the event server. Make sure your notification system (email, SMS, PagerDuty) is subscribed to these events. Many teams set up automated alerting rules that trigger on job status FAILURE, but if those rules don't respect the retry state, they may fire on every transient blip.

alarm_config.jil · BASH
insert_job: critical_report_generation
job_type: CMD
box_name: eod_box
command: /scripts/generate_report.sh
machine: server01
owner: batch
n_retrys: 0              /* no retries — always alarm on first failure */
alarm_if_fail: 1          /* alarm immediately */

/* Alternative: job that retries, but you want alarm on first failure too */
/* Use a wrapper script that pages on exit code 1 */
insert_job: critical_workflow
job_type: CMD
command: /scripts/run_and_page.sh   /* wrapper does retry logic internally */
n_retrys: 0
alarm_if_fail: 1
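The run_and_page.sh wrapper referenced above is not an AutoSys facility; here is one way it might look, with the inner command, paging hook, and 60-second backoff all as assumptions:

run_and_page.sh · BASH
#!/bin/bash
# Hypothetical wrapper: page on the FIRST failure, then retry internally.
MAX_RETRIES=2

/scripts/generate_report.sh && exit 0   # success on first attempt
rc=$?

# Page immediately - don't wait for retries to exhaust
/scripts/send_page.sh "critical_workflow failed (rc=$rc), retrying"   # assumed paging script

for i in $(seq 1 $MAX_RETRIES); do
    sleep 60
    /scripts/generate_report.sh && exit 0   # a retry succeeded - AutoSys sees SUCCESS
done
exit 1   # all internal retries failed - AutoSys sees FAILURE and alarms as well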
⚠ Don't overlook the retry → alarm delay
If a job has n_retrys:3 and alarm_if_fail:1, an engineer won't be notified until at least 3 retries have occurred. If each retry takes 5 minutes, that's 15 minutes of delay. For critical jobs, that may be too long. Consider reducing retries or using custom alerting.
📊 Production Insight
Many teams assume alarm_if_fail fires instantly on job failure. But combined with n_retrys, it fires only after all retries are exhausted — which can be minutes or hours later.
We've seen a payroll job with n_retrys:10 (yes, 10) and alarm_if_fail:1. It retried for 40 minutes before alarming. Production data was delayed by 40 minutes because of a simple missing file that could have been detected instantly.
Rule: for time-sensitive jobs, either reduce retries or build a separate health-check job that alarms if the main job hasn't completed within a window.
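A sketch of such a health-check job, with the deadline, job names, and check script as assumptions:

healthcheck.jil · BASH
/* Watchdog: alarms at 18:30 if the payroll job hasn't reached SUCCESS yet */
insert_job: payroll_load_healthcheck
job_type: CMD
command: /scripts/check_completed.sh payroll_load   /* exits non-zero unless autorep shows SUCCESS */
machine: server01
owner: batch
start_times: "18:30"
n_retrys: 0           /* no retries on the watchdog itself */
alarm_if_fail: 1      /* fires immediately, independent of the main job's retry state */

The assumed check_completed.sh can be as simple as grepping the autorep status column for SU (success) and exiting 1 otherwise.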
🎯 Key Takeaway
alarm_if_fail only fires after all retries are used.
For jobs that need immediate attention, set n_retrys:0 or use custom notification.
Test your alerting pipeline — verify alarms reach the right people.

HA architecture for fault tolerance

At the infrastructure level, AutoSys supports high availability through the dual Event Server architecture. For mission-critical batch environments, this is non-negotiable.

The setup involves two scheduler hosts: a primary and a shadow (standby) Event Processor, both connected to the same Event Server (the AutoSys database). The shadow monitors the primary via a heartbeat. If the primary becomes unreachable, the shadow promotes itself to active within a configurable timeout (typically a few minutes).

Important: the shadow is not an active-active cluster. Only one Event Processor runs jobs at a time. The shadow is a cold standby — it must be ready to take over but does not process jobs while the primary is healthy.

Failover is automatic, but not instantaneous. During the promotion period, no jobs are scheduled, no events are processed. If the failover happens during a critical window, that gap can cause SLAs to be missed. Consider scheduling maintenance windows around failover testing.

ha_check.sh · BASH
# Check which Event Server is currently primary
autoflags -a | grep -i 'primary\|shadow\|active'

# Verify shadow is in sync
autoflags -a | grep -i 'shadow\|standby'

# Check Event Processor status (should be RUNNING on primary)
chk_auto_up -A

# In a dual-server setup, this also shows shadow status
# chk_auto_up -A -S SHADOW_INSTANCE
▶ Output
AutoSys Instance: ACE
Event Server Role: PRIMARY (active)
Shadow Status: IN_SYNC
Event Processor: RUNNING
Mental Model
Think of HA like a co-pilot
The primary pilot flies the plane; the co-pilot stays alert, ready to take over if the pilot is incapacitated.
  • The primary Event Processor actively schedules and runs all jobs.
  • The shadow Event Processor watches the primary's heartbeat; it does not schedule jobs.
  • On failure, the shadow promotes itself, picks up state from the shared Event Server (database), and starts processing.
  • The Event Server must be highly available itself; if the database is down, neither instance can schedule anything.
  • Failover takes time (typically 1–5 minutes) — jobs scheduled in that window are delayed.
📊 Production Insight
HA failover is only as reliable as the shared components underneath it. We've seen cases where a hung NFS mount left both primary and shadow convinced the other was dead — a split-brain scenario.
AutoSys's guard against split-brain is the Tie-Breaker process; if it is absent or unreachable, both instances can decide they're primary, and you'll get duplicate job executions and database corruption.
Rule: run the Tie-Breaker on a third machine, use HA-aware shared storage (e.g., GPFS, NetApp SnapMirror), and test failover quarterly to ensure the shadow is in sync and promotion works cleanly.
🎯 Key Takeaway
Dual Event Server HA protects against AutoSys server failure.
The shadow is a cold standby — not active-active.
Test failover regularly, and make sure the Event Server and any shared storage are themselves HA. Without testing, the HA setup is just false confidence.

Recovery jobs and manual intervention patterns

Even with automatic retries and box_terminators, some failures require human intervention. Recovery jobs are specially designed jobs that repair the state after a failure and allow the pipeline to resume from a clean point.

Common recovery patterns
  • Rollback jobs: Reverse the effects of a partially completed batch (e.g., delete inserted rows, restore files from backup).
  • Re-run jobs: A job that reinitialises the pipeline after a failure — often a wrapper that truncates and re-imports data.
  • Compensation jobs: Run after a failure to fix data integrity issues before the next cycle.
  • Manual restart procedures: Documented steps to use sendevent to reset job statuses and re-trigger the box.

When designing recovery, think about idempotency: the recovery job should be safe to run multiple times if the first attempt also fails. Use checkpoints in your scripts: record completion steps in a control table so that rerunning the recovery job doesn't repeat already-completed actions.
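A minimal checkpoint pattern in shell, assuming a hypothetical recovery_steps control table in PostgreSQL; each step records its completion so a rerun skips work already done:

rollback_checkpointed.sh · BASH
#!/bin/bash
# Re-runnable recovery: every step is guarded by a checkpoint in a control table.
RUN=$(date +%Y%m%d)

step_done() { psql -tA -c "SELECT 1 FROM recovery_steps WHERE run='$RUN' AND step='$1'" | grep -q 1; }
mark_done() { psql -c "INSERT INTO recovery_steps (run, step) VALUES ('$RUN', '$1') ON CONFLICT DO NOTHING;"; }

if ! step_done delete_partial; then
    psql -c "DELETE FROM eod_results WHERE run_date = '$RUN';"   # reverse the partial load
    mark_done delete_partial
fi

if ! step_done restore_files; then
    cp /backup/eod/"$RUN"/* /data/eod/                           # restore inputs from backup
    mark_done restore_files
fi

Running the script a second time finds both checkpoints already recorded and does nothing, which is exactly the behaviour you want from recovery tooling.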

A good practice is to separate recovery jobs into their own box with no dependencies on the business-critical timeline. Keep them available for ops to trigger via a JIL override or sendevent.

recovery_job.jil · BASH
/* Recovery box: triggers after EOD box failure */
insert_job: daily_recovery
group: recovery
job_type: BOX
condition: failure(eod_box)    /* runs only if eod_box fails */
start_times: "06:00"           /* but manual trigger also works via sendevent */

/* Inside the recovery box */
insert_job: rollback_data
box_name: daily_recovery
command: /scripts/rollback.sh $$CHECKPOINT $$LAST_FAILED_STEP   /* $$ references AutoSys global variables */
machine: server01
owner: batch

insert_job: notify_recovery
box_name: daily_recovery
condition: success(rollback_data)
command: /scripts/send_notification.sh "Recovery complete for eod_box"
alarm_if_fail: 1
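To let ops trigger the recovery box on demand (the manual path the comment above mentions), a force-start is enough:

manual_trigger · BASH
sendevent -E FORCE_STARTJOB -J daily_recovery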
💡 Make recovery jobs testable in isolation
Don't design recovery jobs that only work when the box is in a specific failure state. Make them able to run standalone with parameters. Use global variables to pass the failure context so ops can trigger exactly what's needed.
📊 Production Insight
Recovery jobs that are not idempotent are dangerous. If a rollback job fails mid-way and runs again, it might try to drop a table that's no longer there.
The worst case we've seen: a compensation job inserted duplicate records because its script didn't check if the fix had already been applied. The ops team ran it three times, each adding more duplicates. By the time they noticed, reconciliation took three days.
Rule: every recovery script must be idempotent. Use a control table to record progress. And never run recovery jobs blindly — log what they do and let humans review before the next cycle.
🎯 Key Takeaway
Recovery jobs fix state after failures.
Make them idempotent: running twice should be safe.
Separate recovery into its own box and document manual trigger steps.
Without idempotent recovery, you'll make production problems worse, not better.
🗂 Fault tolerance mechanisms in AutoSys
Compare attributes and infrastructure options for building resilient workflows.
Fault tolerance mechanism | What it handles | Configured where
n_retrys | Transient job failures (network blips) | Job definition attribute
box_terminator | Critical failure that should stop the whole box | Job definition attribute
term_run_time | Hung jobs that never complete | Job definition attribute
alarm_if_fail + notification | Human awareness and response | Job definition attributes
Dual Event Server (HA) | AutoSys server/infrastructure failure | AutoSys installation config
Remote Agent redundancy | Agent machine failure | Machine definitions + job failover logic
Recovery jobs | Post-failure state repair | Dedicated JIL definitions + manual trigger

🎯 Key Takeaways

  • n_retrys handles transient failures automatically — set it on jobs prone to temporary external issues
  • box_terminator: 1 stops the entire box when a critical job fails — use it on validation and pre-requisite checks
  • term_run_time prevents hung jobs from blocking everything downstream indefinitely
  • alarm_if_fail only fires after all retries are exhausted — adjust retry count or use custom alerting for time-sensitive jobs
  • Infrastructure-level fault tolerance requires the dual Event Server HA setup — test failover regularly
  • Recovery jobs must be idempotent and testable in isolation — document manual restart procedures clearly

⚠ Common Mistakes to Avoid

    Setting n_retrys too high (e.g., 10)
    Symptom

    When the underlying issue is permanent, all retries just delay the failure alarm by hours. The job keeps retrying long after the pipeline should have been halted for investigation.

    Fix

    Set n_retrys to 2 or 3 maximum. Use a separate health-check job to detect persistent issues early.

    Not using box_terminator on validation jobs
    Symptom

    Downstream jobs run with bad input and produce corrupt results. The box shows SUCCESS because the failing job didn't stop it.

    Fix

    Always set box_terminator:1 on data validation, prerequisite extraction, and configuration load jobs.

    Treating n_retrys as a substitute for fixing flaky scripts
    Symptom

    The job retries successfully every night, but no one investigates the root cause. Over time, the problem worsens into a hard failure that brings down the whole pipeline.

    Fix

    Monitor retry rates using AutoSys reports or custom scripts. Any job that retries more than once a week should be investigated and fixed.

    Not testing HA failover
    Symptom

    When the primary Event Server fails, the shadow does not promote properly, or jobs stop being scheduled. Many teams discover their shadow Event Server isn't actually in sync only when they need it.

    Fix

    Test failover quarterly in a non-production environment that mirrors production. Verify shadow status weekly with autoflags -a.

    Ignoring idempotency when using retries
    Symptom

    On retry, the job inserts duplicate database rows, appends to log files without checking, or sends duplicate API calls. Data corruption propagates downstream.

    Fix

    Make all job scripts idempotent: use upsert logic, idempotency tokens, or checkpoints. Ensure that running the same job twice produces the same final state.

    Over-using alarm_if_fail on every job
    Symptom

    The on-call engineer receives dozens of pages for transient failures that auto-recover. Desensitisation leads to ignored alarms and missed critical failures.

    Fix

Only set alarm_if_fail: 1 on jobs where a failure requires human intervention. For retryable jobs, the alarm that fires after retries are exhausted is sufficient. Use different severity levels for different job classes.

Interview Questions on This Topic

  • Q (Senior): How does n_retrys work in AutoSys and what are its limitations?
    n_retrys specifies the number of automatic retries after a job failure. Each retry is a full re-execution of the job script. The alarm (if alarm_if_fail:1) fires only after all retries are exhausted. Limitations: it is not suitable for jobs that are not idempotent because retries can cause duplicate side effects. It also masks persistent issues — if a job retries successfully each time, the root cause is never investigated. The retry delay can be problematic for time-sensitive jobs.
  • Q (Mid-level): What is box_terminator and when would you use it?
    box_terminator:1 marks a job that, upon failure, immediately terminates its parent box. All pending inner jobs are skipped and any running jobs are killed. Use it on jobs whose failure makes all downstream processing invalid: validation, data extraction, configuration loads. Do not use it on jobs where graceful degradation is acceptable.
  • Q (Senior): What is the difference between fault tolerance at the job level and at the infrastructure level in AutoSys?
    Job-level fault tolerance includes n_retrys, box_terminator, and term_run_time — they handle job-specific failures like transient errors, critical prerequisites, and hung processes. Infrastructure-level fault tolerance is the dual Event Server HA setup that protects against AutoSys server failures, network partitions, or process crashes on the scheduler host. Both are needed: job-level handles daily operational failures, infrastructure-level ensures the scheduler itself stays up.
  • Q (Mid-level): If a validation job fails, how do you ensure none of the downstream jobs in the box run?
    Two approaches: 1) Set the validation job's box_terminator attribute to 1 — this immediately fails the entire box. 2) Use condition: success(upstream_job) on every downstream job — they only run if the validation job succeeds. The combination is safest: box_terminator stops the box quickly, and success() conditions prevent accidental execution if box_terminator is removed later.
  • Q (Senior): How do you verify that AutoSys HA is working correctly?
    Run autoflags -a to check the primary and shadow status. The shadow should show IN_SYNC. Then perform a controlled failover test in a non-production environment: stop the primary Event Processor (sendevent -E STOP_DEMON), observe the shadow promote, and verify jobs schedule on the new primary. Restart the original primary and ensure it rejoins as shadow. Check Event Processor status with chk_auto_up -A. Document the procedure and test quarterly.
  • Q (Senior): How would you design a recovery plan for a critical batch box that failed at 2 AM?
    First, assess the failure symptom using autorep and logs. If the failure is transient, use sendevent to retry the box. If a job is hung, kill it and restart. If the failure is due to bad data, run a rollback job first. Document the steps in a runbook. Automate recovery where possible with a recovery box triggered by condition: FAILURE(original_box). Ensure all recovery scripts are idempotent. After recovery, post-mortem: why did it fail, and what can be added to prevent recurrence (n_retrys, box_terminator, better monitoring).

Frequently Asked Questions

How does n_retrys work in AutoSys?

n_retrys specifies how many automatic retries AutoSys performs after a job fails. With n_retrys: 3, the job runs up to 4 times total: the original attempt plus 3 retries. The alarm only fires (if alarm_if_fail: 1) after all retries are exhausted.

What is box_terminator in AutoSys?

box_terminator: 1 marks a job as the kill switch for its parent box. If this job fails, AutoSys immediately terminates the box and all remaining pending inner jobs. It's ideal for validation or prerequisite jobs whose failure makes all downstream processing meaningless.

How do I prevent downstream jobs from running after a failure?

Use `condition: success(upstream_job)` on downstream jobs, and/or use box_terminator: 1 on the critical upstream job. With success() conditions, downstream jobs only start when the upstream succeeds. With box_terminator, the entire box stops on failure.
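A sketch of the belt-and-braces combination, with hypothetical job names:

downstream_guard.jil · BASH
insert_job: process_data
job_type: CMD
box_name: eod_box
command: /scripts/process.sh
machine: server01
owner: batch
condition: success(validate_input_data)   /* still blocks even if box_terminator is later removed */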

How do I test AutoSys HA failover?

In a test environment, stop the primary Event Server and verify the shadow promotes automatically within the expected time. Check with autoflags -a that the shadow is now the primary, and verify that jobs continue to be scheduled correctly. Document the failover procedure and test it quarterly in production-equivalent environments.

Should I set n_retrys on every job?

Not necessarily. n_retrys is best for jobs that interface with external systems prone to transient failures (network services, external APIs, databases under load). For jobs with deterministic inputs and outputs, a single failure usually warrants human investigation rather than automatic retry.

What is a recovery job and when should I use one?

A recovery job is a dedicated job (or box) that performs state repair after a failure — rolling back partial changes, truncating tables, restoring files. Use it when automatic retries are insufficient and manual repair would be error-prone. Always make recovery jobs idempotent so they can be safely re-run.

How do I handle a job that is stuck in STARTING status?

A job stuck in STARTING usually means the Remote Agent is unreachable or the Event Processor cannot start the process. Check the agent with autoping -m machine_name or a machine report (autorep -M machine_name). If the agent is down, restart it. If the job is orphaned, kill it with sendevent -E KILLJOB -J job_name and then reschedule.
