FCFS Scheduling — Convoy Effect Cripples Payment API
- Processes execute in arrival order — pure FIFO queue.
- No preemption: once running, a process keeps the CPU until it blocks or exits.
- Convoy effect: a single long job holds the CPU while short jobs wait idle, inflating average waiting time.
- Average waiting time grows with burst-time variance — not robust for interactive loads.
- Production trap: using FCFS for mixed workloads (CPU + I/O) causes I/O devices to starve, tanking throughput.
Quick FCFS Debug Commands

Symptom: high average waiting time, short tasks delayed

```shell
ps -eo pid,comm,etime,pri,nice --sort=start_time
for p in $(pgrep -f batch); do cat /proc/$p/sched | head -20; done
```

Symptom: thread pool queue depth growing while CPU idle

```shell
jstack <pid> | grep -A5 'pool-' | head -30
java -jar jmxterm.jar -l localhost:9010 -i commands.txt
```

Production Debug Guide: how to identify the convoy effect and confirm FCFS behaviour in production.
FCFS is the algorithm you implement when you have a queue and zero information about job durations. It is fair in the sense that arrival order is respected — no process waits longer than all processes that arrived before it. But 'fair' does not mean 'efficient', and the convoy effect makes FCFS genuinely harmful for interactive workloads.
Understanding FCFS matters not for implementing it (it is trivially a FIFO queue) but for understanding what it optimises and what it sacrifices. Every scheduling algorithm trades off fairness, average waiting time, response time, and throughput differently. FCFS optimises arrival-order fairness at the cost of average waiting time. This trade-off analysis is the interview skill being tested.
Algorithm and Metrics
- Completion Time (CT): when the process finishes
- Turnaround Time (TAT): CT - Arrival Time
- Waiting Time (WT): TAT - Burst Time
- Response Time: time from arrival to first execution (= WT for non-preemptive FCFS)
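As a sanity check, the formulas can be applied by hand to a single process (all values below are made up for illustration):

```python
# Hypothetical process: arrives at t=2, needs 5 ms of CPU,
# but the CPU is busy with earlier work until t=10.
arrival, burst = 2, 5
start = 10                  # first gets the CPU at t=10
ct = start + burst          # Completion Time  = 15
tat = ct - arrival          # Turnaround       = 15 - 2 = 13
wt = tat - burst            # Waiting          = 13 - 5 = 8
rt = start - arrival        # Response         = 8 (equals WT: non-preemptive)
print(ct, tat, wt, rt)      # 15 13 8 8
```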
FCFS is trivially a FIFO queue. You sort processes by arrival time, then execute each to completion. The metrics tell the real story: average waiting time can skyrocket when a long job arrives early.
```python
def fcfs(processes: list[dict]) -> list[dict]:
    """
    processes: list of {pid, arrival, burst}
    Returns processes with ct, tat, wt added.
    """
    # Sort by arrival time
    procs = sorted(processes, key=lambda p: p['arrival'])
    current_time = 0
    results = []
    for p in procs:
        start = max(current_time, p['arrival'])
        ct = start + p['burst']
        tat = ct - p['arrival']
        wt = tat - p['burst']
        current_time = ct
        results.append({**p, 'start': start, 'ct': ct, 'tat': tat, 'wt': wt})
    return results

processes = [
    {'pid': 'P1', 'arrival': 0, 'burst': 24},
    {'pid': 'P2', 'arrival': 1, 'burst': 3},
    {'pid': 'P3', 'arrival': 2, 'burst': 3},
]

results = fcfs(processes)
print(f"{'PID':<4} {'AT':>3} {'BT':>3} {'ST':>3} {'CT':>3} {'TAT':>4} {'WT':>4}")
for r in results:
    print(f"{r['pid']:<4} {r['arrival']:>3} {r['burst']:>3} {r['start']:>3} "
          f"{r['ct']:>3} {r['tat']:>4} {r['wt']:>4}")

avg_tat = sum(r['tat'] for r in results) / len(results)
avg_wt = sum(r['wt'] for r in results) / len(results)
print(f"\nAverage TAT: {avg_tat:.2f}")
print(f"Average WT: {avg_wt:.2f}")
```
```
PID   AT  BT  ST  CT  TAT   WT
P1     0  24   0  24   24    0
P2     1   3  24  27   26   23
P3     2   3  27  30   28   25

Average TAT: 26.00
Average WT: 16.00
```
The Convoy Effect
In the example above, P2 and P3 each take only 3ms but wait 23ms and 25ms because P1 (24ms) arrived first. This is the convoy effect — short processes form a convoy behind a long one.
If the scheduler could favour the short jobs instead — preemptive SJF (SRTF) on the same workload:
- P2: WT=0, P3: WT=2, P1: WT=6
- Average WT ≈ 2.67ms vs 16ms for FCFS
This is why FCFS has poor average waiting time for mixed workloads.
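To verify those numbers, here is a minimal tick-by-tick sketch of preemptive SJF (SRTF) on the same three processes — an illustration, not an efficient scheduler:

```python
def srtf(processes: list[tuple]) -> dict:
    """Preemptive SJF (shortest remaining time first), simulated in 1 ms ticks.
    processes: list of (pid, arrival, burst). Returns {pid: completion_time}."""
    remaining = {pid: burst for pid, _, burst in processes}
    arrival = {pid: at for pid, at, _ in processes}
    completion = {}
    t = 0
    while remaining:
        # Runnable = arrived and not finished; pick the shortest remaining job.
        ready = [pid for pid in remaining if arrival[pid] <= t]
        if not ready:
            t += 1              # CPU idle until the next arrival
            continue
        pid = min(ready, key=lambda p: remaining[p])
        remaining[pid] -= 1     # run the chosen job for one tick
        t += 1
        if remaining[pid] == 0:
            del remaining[pid]
            completion[pid] = t
    return completion

procs = [('P1', 0, 24), ('P2', 1, 3), ('P3', 2, 3)]
ct = srtf(procs)
wt = {pid: ct[pid] - at - bt for pid, at, bt in procs}
print(wt)                              # {'P1': 6, 'P2': 0, 'P3': 2}
print(sum(wt.values()) / len(wt))      # ≈ 2.67, vs 16.00 for FCFS
```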
Gantt Chart Representation
A Gantt chart visually shows which process runs at each time interval. For FCFS, the chart is simply the processes in arrival order, each spanning its burst duration. This representation makes the convoy effect obvious: long bars push short bars to the right.
```python
def print_gantt(results: list[dict]) -> None:
    print('Gantt Chart:')
    bar = '|'
    times = '0'
    for r in results:
        bar += f" {r['pid']:^5}|"
        times += f"{r['ct']:>7}"
    print(bar)
    print(times)

print_gantt(results)
```
```
Gantt Chart:
|  P1  |  P2  |  P3  |
0     24     27     30
```
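The chart above assumes the CPU is never idle. If arrival gaps leave holes in the schedule, a variant can render them explicitly — a sketch with a hypothetical workload, not part of the original example:

```python
def print_gantt_with_idle(results: list[dict]) -> tuple[str, str]:
    """Like print_gantt, but renders CPU idle gaps between jobs."""
    bar, times, t = '|', '0', 0
    for r in results:
        if r['start'] > t:                 # CPU was idle before this job
            bar += f" {'idle':^5}|"
            times += f"{r['start']:>7}"
        bar += f" {r['pid']:^5}|"
        times += f"{r['ct']:>7}"
        t = r['ct']
    print(bar)
    print(times)
    return bar, times

# Hypothetical workload with a gap: P2 arrives after P1 completes.
jobs = [
    {'pid': 'P1', 'start': 0, 'ct': 4},
    {'pid': 'P2', 'start': 9, 'ct': 12},   # CPU idle from t=4 to t=9
]
bar, times = print_gantt_with_idle(jobs)
```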
When to Use FCFS in Practice

Use FCFS when:
- All jobs have roughly equal burst times (e.g., batch file processing with similar file sizes).
- Order must be strictly preserved (e.g., transactions that must commit in arrival sequence).
- No prior information about job durations exists and you cannot risk priority inversion.

Avoid FCFS in:
- Interactive systems where response time matters.
- Mixed workloads with heterogeneous burst times.
- Real-time systems where deadlines must be met.
In general-purpose operating systems, pure FCFS survives mainly as the POSIX real-time policy SCHED_FIFO, which runs equal-priority threads to completion in arrival order. Linux's default policy, SCHED_OTHER, is handled by the Completely Fair Scheduler (CFS) — a fundamentally different, weighted fair queuing scheduler that preempts to maintain fairness.
- No decision: just a queue. Predictable in behaviour, unpredictable in performance.
- CFS enforces a minimum scheduling granularity on the order of a millisecond (roughly 0.75ms by default) — the opposite of FCFS's run-to-completion.
- FCFS survives as a special case for cooperative multitasking or dispatcher systems where preemption is impossible.
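You can inspect which policy a process is under from Python. This is a Linux-only sketch (the `os.sched_*` functions are not available on all platforms, hence the guard):

```python
import os

# Linux-only: query the scheduling policy of the current process.
# SCHED_OTHER means CFS; SCHED_FIFO is the POSIX real-time policy that
# behaves like FCFS among threads of equal priority.
if hasattr(os, 'sched_getscheduler'):
    policy = os.sched_getscheduler(0)      # 0 = the calling process
    names = {
        os.SCHED_OTHER: 'SCHED_OTHER (CFS on Linux)',
        os.SCHED_FIFO: 'SCHED_FIFO (FCFS-like, real-time)',
        os.SCHED_RR: 'SCHED_RR (Round Robin, real-time)',
    }
    print(names.get(policy, f'policy {policy}'))
else:
    print('scheduler introspection not available on this platform')
```

Switching a thread to SCHED_FIFO (via `os.sched_setscheduler` or `chrt`) requires root privileges, which is one reason FCFS-style scheduling stays confined to special-purpose workloads.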
FCFS vs. SJF vs. Round Robin
The key differences come down to how each algorithm handles burst-time variance:
For a given arrival sequence, FCFS minimises the maximum waiting time — that is its fairness guarantee. But its average waiting time degrades badly when burst times vary.
SJF exploits knowledge of burst times to minimise average waiting time, but can starve long processes. Round Robin trades off average waiting time for response time — good for interactive workloads.
In production, you rarely pick one in isolation. Hybrid schedulers like CFS combine elements: target latency (like RR) with fairness (like FCFS) and dynamic priority adjustments.
| Algorithm | Preemptive? | Avg. Wait Time | Starvation Risk | Best Use Case |
|---|---|---|---|---|
| FCFS | No | High with burst variance | None | Batch systems with homogeneous jobs |
| SJF (non-preemptive) | No | Optimal (theoretical) | High (long tasks) | When burst times known and short jobs dominate |
| Round Robin | Yes (time quantum) | Moderate, depends on quantum | Low | Interactive workloads, time-sharing |
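To make the table concrete, here is a minimal Round Robin simulation on the earlier workload. The 4ms quantum is an assumed value; the result lands between SJF and FCFS, as the table predicts:

```python
from collections import deque

def round_robin(processes: list[tuple], quantum: int) -> dict:
    """processes: list of (pid, arrival, burst), sorted by arrival.
    Returns {pid: completion_time}."""
    remaining = {pid: bt for pid, _, bt in processes}
    pending = deque(processes)          # jobs that have not arrived yet
    ready = deque()
    completion, t = {}, 0
    while remaining:
        while pending and pending[0][1] <= t:
            ready.append(pending.popleft()[0])
        if not ready:
            t = pending[0][1]           # CPU idle: jump to next arrival
            continue
        pid = ready.popleft()
        run = min(quantum, remaining[pid])
        t += run
        remaining[pid] -= run
        # Jobs arriving during this slice queue ahead of the preempted job.
        while pending and pending[0][1] <= t:
            ready.append(pending.popleft()[0])
        if remaining[pid] == 0:
            del remaining[pid]
            completion[pid] = t
        else:
            ready.append(pid)           # re-queue with time left
    return completion

procs = [('P1', 0, 24), ('P2', 1, 3), ('P3', 2, 3)]
ct = round_robin(procs, quantum=4)
wt = {pid: ct[pid] - at - bt for pid, at, bt in procs}
print(wt)                              # {'P1': 6, 'P2': 3, 'P3': 5}
print(sum(wt.values()) / len(wt))      # ≈ 4.67: between SJF and FCFS
```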
🎯 Key Takeaways
- FCFS: sort by arrival, execute in order — non-preemptive.
- Waiting Time = Turnaround Time - Burst Time. Turnaround Time = Completion - Arrival.
- Convoy effect: short processes wait behind long ones → high average waiting time.
- Simple to implement (FIFO queue) but rarely optimal.
- Best used when all processes have similar burst times or process order must be preserved.
Interview Questions on This Topic
- Q: Explain the convoy effect in FCFS scheduling and show it with an example. (Mid-level)
- Q: Is FCFS preemptive or non-preemptive? What are the implications? (Junior)
- Q: When would you choose FCFS over other scheduling algorithms in a production system? (Senior)
- Q: Calculate average waiting time for FCFS with processes P1(AT=0,BT=8), P2(AT=1,BT=4), P3(AT=2,BT=2). (Mid-level)
- Q: Does FCFS cause starvation? Explain. (Junior)
Frequently Asked Questions
Does FCFS cause starvation?
No — every process eventually gets the CPU since there is no priority-based preemption. However, average waiting time can be very high (effectively long waits for short processes), which is sometimes confused with starvation.
Can FCFS be preemptive?
No — by definition, FCFS is non-preemptive. If you add preemption, it becomes Round Robin (with a time quantum) or priority-based scheduling.
Is FCFS the same as FIFO scheduling?
Effectively, yes. FIFO refers to the queue discipline (First In, First Out); FCFS is the scheduling policy built on that discipline, and the two terms are often used interchangeably.
How does FCFS handle processes arriving at the exact same time?
If two processes have identical arrival times, tie-breaking is typically by process ID. The scheduler can choose any order, but FCFS assumes a stable ordering — the exact choice does not affect the algorithm's definition.
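In the Python implementation shown earlier, this tie-break falls out for free because `sorted()` is stable: processes with equal arrival times keep their original list order. A small demonstration (the process values are made up):

```python
# Python's sorted() is stable: equal keys preserve input order,
# giving FCFS a deterministic tie-break without extra logic.
processes = [
    {'pid': 'P2', 'arrival': 0, 'burst': 5},
    {'pid': 'P1', 'arrival': 0, 'burst': 3},   # same arrival as P2
    {'pid': 'P3', 'arrival': 1, 'burst': 2},
]
order = [p['pid'] for p in sorted(processes, key=lambda p: p['arrival'])]
print(order)        # ['P2', 'P1', 'P3'] — P2 stays ahead of P1

# To tie-break by PID instead, sort on a compound key:
by_pid = [p['pid'] for p in sorted(processes,
                                   key=lambda p: (p['arrival'], p['pid']))]
print(by_pid)       # ['P1', 'P2', 'P3']
```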
Why is FCFS still taught if it's rarely used in modern OS?
Because it is the simplest scheduling model, serving as a baseline for evaluating all other algorithms. Understanding FCFS builds intuition for trade-offs around waiting time, fairness, and the convoy effect.
Developer and founder of TheCodeForge. I built this site because I was tired of tutorials that explain what to type without explaining why it works. Every article here is written to make concepts actually click.