
Web Workers in JavaScript: True Parallelism, Internals & Production Pitfalls

In Plain English 🔥
Imagine you run a busy restaurant kitchen. Normally you have one chef who takes an order, cooks it, plates it, then finally calls the customer — nobody gets served while cooking is happening. A Web Worker is like hiring a second chef who works in a back kitchen. Your front chef keeps taking orders and chatting with customers while the back chef silently handles the slow, complicated cooking. When the back chef finishes, they ring a bell and pass the plate through a hatch. JavaScript's main thread is the front chef, and the Web Worker is the back chef — completely separate, communicating only through that hatch.
⚡ Quick Answer
A Web Worker runs a script on a separate background thread with its own event loop and memory heap. It cannot touch the DOM, and it communicates with the main thread only through postMessage, so CPU-heavy work runs in parallel without freezing the UI.

Every JavaScript runtime — browser or Node.js — runs your code on a single main thread by default. That single thread is responsible for parsing the DOM, responding to user clicks, running your animations at 60 fps, and executing every line of your application logic, all in sequence on one thread. When you throw a CPU-intensive operation into that mix — image processing, large dataset sorting, cryptographic hashing, physics simulations — the entire pipeline stalls. Users see frozen UI, janky scrolling, and unresponsive buttons. Chrome's DevTools will flag this as a 'long task', and Lighthouse will hammer your performance score. This isn't a hypothetical edge case; it's one of the most common silent killers of perceived performance in production web apps.

Web Workers were introduced precisely to solve this. They give you a way to spawn a true background thread — isolated from the main thread, with its own JavaScript engine instance, its own event loop, and its own memory heap. The two threads communicate exclusively by passing messages, which means you can't accidentally cause race conditions through shared mutable state (with one powerful exception we'll cover). This model trades the complexity of traditional thread synchronization for a clean, explicit message-passing interface.

By the end of this article you'll understand not just the API surface, but the internal mechanics: how the structured clone algorithm serialises data across thread boundaries, when to use Transferable objects to avoid copying megabytes of data, how SharedArrayBuffer and Atomics unlock true shared memory (and why they were briefly disabled across the entire web), and the production-level gotchas around module workers, error handling, and worker pooling that most tutorials skip entirely.

How the Browser Actually Creates and Runs a Web Worker

When you call new Worker('worker.js'), the browser does several non-trivial things under the hood. It spins up a new OS-level thread (not a process — threads share memory at the OS level, but V8 isolates each Worker in its own heap to prevent JS-level data races). It then bootstraps a fresh V8 isolate on that thread: a completely separate garbage collector, separate JIT compiler state, separate event loop, and a separate global scope called DedicatedWorkerGlobalScope — not window.

This is why you can't access the DOM inside a Worker. The DOM is owned by the main thread's isolate. Trying to reference document or window inside a Worker throws a ReferenceError immediately. The Worker's global scope exposes a different set of APIs: fetch, setTimeout, IndexedDB, the Cache API, WebSockets, and crucially postMessage / onmessage.

Communication happens through the structured clone algorithm — think of it as a deep JSON.stringify/parse, but smarter. It handles Date, RegExp, Map, Set, ArrayBuffer, Blob, ImageData, and circular references. What it can't clone: functions, DOM nodes, class instances with prototype chains (they become plain objects), and anything marked non-transferable. The clone is a full copy — meaning each side has its own version of the data after a postMessage call. This copying is where most Worker performance problems actually live.
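You can observe these clone semantics directly with the global structuredClone() function, which runs the same algorithm postMessage uses (the Point class here is just an illustration):

```javascript
// structuredClone() runs the same algorithm postMessage uses internally.
class Point {
  constructor(x, y) { this.x = x; this.y = y; }
  magnitude() { return Math.hypot(this.x, this.y); }
}

const original = new Point(3, 4);
const cloned = structuredClone(original);

console.log(cloned.x, cloned.y);       // 3 4 — own data properties survive
console.log(typeof cloned.magnitude);  // 'undefined' — prototype chain is stripped
console.log(cloned instanceof Point);  // false — it arrives as a plain Object

// Functions can't be cloned at all — this throws a DataCloneError
try {
  structuredClone(() => {});
} catch (err) {
  console.log(err.name); // 'DataCloneError'
}
```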

BasicWorkerSetup.js · JAVASCRIPT
// ─── main.js (runs on the main thread) ───────────────────────────────────────

// Spawn a dedicated worker from a separate file.
// The browser creates a new thread + V8 isolate immediately.
const primeWorker = new Worker('prime-calculator.js');

// Listen for results coming back FROM the worker.
// This callback runs on the main thread — safe to update the DOM here.
primeWorker.onmessage = function (event) {
  const { primes, computationTimeMs } = event.data;
  console.log(`Found ${primes.length} primes in ${computationTimeMs}ms`);
  console.log('First 5:', primes.slice(0, 5)); // [2, 3, 5, 7, 11]
  document.getElementById('result').textContent = `Primes found: ${primes.length}`;
};

// Handle any uncaught errors thrown inside the worker.
// Without this, errors inside workers fail SILENTLY in many browsers.
primeWorker.onerror = function (errorEvent) {
  console.error('Worker blew up:', errorEvent.message, 'at line', errorEvent.lineno);
  errorEvent.preventDefault(); // Prevent the error bubbling to window.onerror
};

// Send a task to the worker. The object is deep-cloned via structured clone.
// The main thread is NOT blocked — it continues executing immediately after this.
primeWorker.postMessage({ upperLimit: 1_000_000 });
console.log('Message sent — main thread is still free to handle clicks, paint frames, etc.');

// ─── prime-calculator.js (runs inside the Worker's isolated thread) ───────────

// 'self' is the DedicatedWorkerGlobalScope — equivalent to 'window' on main thread.
// There is no 'document', no 'window', no DOM access here.
self.onmessage = function (event) {
  const { upperLimit } = event.data;
  const startTime = performance.now();

  // Sieve of Eratosthenes — CPU-intensive, would freeze the UI on the main thread
  const sieve = new Uint8Array(upperLimit + 1).fill(1);
  sieve[0] = 0;
  sieve[1] = 0;

  for (let i = 2; i * i <= upperLimit; i++) {
    if (sieve[i] === 1) {
      for (let multiple = i * i; multiple <= upperLimit; multiple += i) {
        sieve[multiple] = 0; // Mark composite numbers
      }
    }
  }

  const primes = [];
  for (let n = 2; n <= upperLimit; n++) {
    if (sieve[n] === 1) primes.push(n);
  }

  const computationTimeMs = Math.round(performance.now() - startTime);

  // Send results back. The primes array is deep-cloned back to main thread.
  self.postMessage({ primes, computationTimeMs });
};
▶ Output
Message sent — main thread is still free to handle clicks, paint frames, etc.
(~180ms later, when Worker finishes)
Found 78498 primes in 176ms
First 5: [2, 3, 5, 7, 11]
⚠️
Watch Out: Silent Worker Errors
If you don't attach an `onerror` handler to your Worker, exceptions thrown inside it can vanish without your application ever knowing — no application-level handling, no crash, nothing reaches your error tracking. Always wire up `onerror` before calling `postMessage`. In production, log `errorEvent.message`, `errorEvent.filename`, and `errorEvent.lineno` to your error-tracking service.

Transferable Objects: Avoiding the Megabyte Copying Tax

The structured clone algorithm is clever, but it's still a copy. If you're sending a 50MB ArrayBuffer from a Worker back to the main thread — say you decoded a video frame or ran a WASM image filter — you're literally allocating 50MB of memory on the receiving side and memcpy-ing every byte. Do that at 30fps and you've given the GC a full-time job. This is where Transferable objects come in.

When you transfer an object instead of cloning it, the underlying memory buffer is handed off between threads at zero cost — no copy, just a pointer reassignment at the OS level. The source side's reference is immediately neutered (the ArrayBuffer becomes detached, with byteLength === 0). You own it or you don't — there's no sharing, no race condition.

The transferable types are: ArrayBuffer, MessagePort, ReadableStream, WritableStream, TransformStream, AudioData, ImageBitmap, VideoFrame, OffscreenCanvas, and RTCDataChannel. The trick is the second argument to postMessage — an array listing the objects to transfer rather than clone. If you forget to list them there, they still get cloned, not transferred, and you get zero performance benefit.

This pattern is the foundation of efficient Worker-based image and audio pipelines. Process a frame off-thread, transfer the result back, render it — all without allocating twice.

TransferableArrayBuffer.js · JAVASCRIPT
// ─── main.js ──────────────────────────────────────────────────────────────────

const imageWorker = new Worker('image-processor.js');

// Simulate a raw RGBA pixel buffer for a 1024x768 image
// 1024 * 768 * 4 bytes (R,G,B,A) = 3,145,728 bytes (~3MB)
const imageWidth = 1024;
const imageHeight = 768;
const rawPixelBuffer = new ArrayBuffer(imageWidth * imageHeight * 4);
const pixelView = new Uint8ClampedArray(rawPixelBuffer);

// Fill with fake image data (checkerboard pattern)
for (let i = 0; i < pixelView.length; i += 4) {
  const pixelIndex = i / 4;
  const isEvenRow = Math.floor(pixelIndex / imageWidth) % 2 === 0;
  const isEvenCol = (pixelIndex % imageWidth) % 2 === 0;
  const isBright = isEvenRow === isEvenCol;
  pixelView[i] = isBright ? 200 : 50;     // Red channel
  pixelView[i + 1] = isBright ? 200 : 50; // Green channel
  pixelView[i + 2] = isBright ? 200 : 50; // Blue channel
  pixelView[i + 3] = 255;                  // Alpha — fully opaque
}

console.log('Before transfer — buffer byteLength on main thread:', rawPixelBuffer.byteLength);
// Output: 3145728

// TRANSFER the buffer (2nd arg to postMessage) instead of cloning it.
// rawPixelBuffer is now DETACHED on the main thread after this call.
imageWorker.postMessage(
  { width: imageWidth, height: imageHeight, buffer: rawPixelBuffer },
  [rawPixelBuffer] // <-- The transfer list. This is what makes it a transfer, not a clone.
);

// The buffer is now neutered — the main thread no longer owns it.
console.log('After transfer — buffer byteLength on main thread:', rawPixelBuffer.byteLength);
// Output: 0  (buffer is detached — attempting to read it throws TypeError)

imageWorker.onmessage = function (event) {
  const { processedBuffer, width, height } = event.data;
  // Worker transferred the processed buffer back — zero copy, instant
  console.log('Received processed buffer, byteLength:', processedBuffer.byteLength);
  // Output: 3145728 — back to full size, now owned by main thread

  const processedPixels = new Uint8ClampedArray(processedBuffer);
  const imageData = new ImageData(processedPixels, width, height);
  const canvas = document.getElementById('output-canvas');
  canvas.getContext('2d').putImageData(imageData, 0, 0);
};

// ─── image-processor.js ───────────────────────────────────────────────────────

self.onmessage = function (event) {
  const { buffer, width, height } = event.data;
  // Worker now OWNS the buffer — main thread can't touch it
  const pixels = new Uint8ClampedArray(buffer);

  // Apply a simple grayscale filter in-place
  for (let i = 0; i < pixels.length; i += 4) {
    const luminance = 0.299 * pixels[i] + 0.587 * pixels[i + 1] + 0.114 * pixels[i + 2];
    pixels[i] = luminance;     // Red
    pixels[i + 1] = luminance; // Green
    pixels[i + 2] = luminance; // Blue
    // Alpha unchanged
  }

  // Transfer the processed buffer back — no copy
  self.postMessage({ processedBuffer: buffer, width, height }, [buffer]);
};
▶ Output
Before transfer — buffer byteLength on main thread: 3145728
After transfer — buffer byteLength on main thread: 0
Received processed buffer, byteLength: 3145728
⚠️
Pro Tip: Ping-Pong Buffering
In real-time graphics pipelines, use two ArrayBuffers alternately — while the Worker processes buffer A, the main thread renders buffer B, then they swap. This 'ping-pong' pattern eliminates idle time on both threads and is how professional WebGL post-processing pipelines are built.

SharedArrayBuffer and Atomics: When You Actually Need Shared Memory

Transferables solve the copying problem but not the coordination problem. Sometimes two threads genuinely need to read and write the same chunk of memory concurrently — think a ring buffer feeding audio samples from a Worker to the Web Audio API, or a WASM module sharing a heap with its JS wrapper. For this, JavaScript has SharedArrayBuffer.

Unlike a regular ArrayBuffer, a SharedArrayBuffer is backed by a memory region that's mapped into multiple V8 isolates simultaneously. Both threads see the same bytes. This is real shared memory — the same concept that makes C++ multithreading both powerful and terrifying.

The catch: without synchronisation, concurrent writes cause data races. JavaScript's answer is the Atomics object — a set of guaranteed-atomic read-modify-write operations: Atomics.add, Atomics.compareExchange, Atomics.load, Atomics.store, and the critical Atomics.wait / Atomics.notify pair for blocking/waking threads (note: Atomics.wait blocks the calling thread, so it's forbidden on the main thread to prevent UI freezes — use Atomics.waitAsync there instead).
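To see how these primitives compose, here is a minimal spinlock-style mutex built on Atomics.compareExchange (a sketch; the acquireLock/releaseLock helpers and the single-slot layout are illustrative, not a standard API):

```javascript
// One Int32 slot acts as the lock word: 0 = unlocked, 1 = locked.
const lockBuffer = new SharedArrayBuffer(Int32Array.BYTES_PER_ELEMENT);
const lockView = new Int32Array(lockBuffer);
const LOCK_INDEX = 0;

function acquireLock(view, index) {
  // compareExchange atomically swaps 0 -> 1 and returns the OLD value,
  // so a return of 0 means this thread won the lock.
  while (Atomics.compareExchange(view, index, 0, 1) !== 0) {
    // Lost the race — sleep until the holder calls notify.
    // (Blocking wait: worker threads only; use Atomics.waitAsync on the main thread.)
    Atomics.wait(view, index, 1);
  }
}

function releaseLock(view, index) {
  Atomics.store(view, index, 0);  // Unlock
  Atomics.notify(view, index, 1); // Wake one waiting thread
}

acquireLock(lockView, LOCK_INDEX);
// ...critical section: safe to touch shared data here...
releaseLock(lockView, LOCK_INDEX);
```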

There's an important security context here: SharedArrayBuffer was disabled across all browsers in January 2018 after the Spectre CPU vulnerability was disclosed. It was re-enabled only in contexts that are cross-origin isolated, meaning your server must send two specific HTTP headers: Cross-Origin-Opener-Policy: same-origin and Cross-Origin-Embedder-Policy: require-corp. Without these headers, typeof SharedArrayBuffer === 'undefined' at runtime — a confusing silent failure if you don't know to look for it.

SharedMemoryRingBuffer.js · JAVASCRIPT
// ─── main.js ──────────────────────────────────────────────────────────────────
// REQUIRES the page to be served with:
//   Cross-Origin-Opener-Policy: same-origin
//   Cross-Origin-Embedder-Policy: require-corp
// Without these headers, SharedArrayBuffer is undefined and this throws.

if (typeof SharedArrayBuffer === 'undefined') {
  throw new Error('SharedArrayBuffer unavailable — check COOP/COEP response headers');
}

const audioWorker = new Worker('audio-generator.js');

// A simple ring buffer layout using a SharedArrayBuffer:
// Index 0: write cursor (updated by Worker)
// Index 1: read cursor (updated by main thread)
// Index 2..N: audio sample data
const RING_BUFFER_CAPACITY = 256; // Number of audio samples in the ring
const METADATA_SLOTS = 2;         // write_cursor + read_cursor
const totalSlots = METADATA_SLOTS + RING_BUFFER_CAPACITY;

// Int32Array because Atomics works on integer typed arrays (Int32, Uint8, etc.)
const sharedBuffer = new SharedArrayBuffer(totalSlots * Int32Array.BYTES_PER_ELEMENT);
const sharedView = new Int32Array(sharedBuffer);

const WRITE_CURSOR_INDEX = 0;
const READ_CURSOR_INDEX = 1;
const DATA_START_INDEX = 2;

// Hand the same SharedArrayBuffer to the worker — NO COPY, same physical memory
audioWorker.postMessage({ sharedBuffer, capacity: RING_BUFFER_CAPACITY });

// Main thread reads samples from the ring buffer on a schedule
function consumeSamples() {
  const writeCursor = Atomics.load(sharedView, WRITE_CURSOR_INDEX); // Atomic read
  const readCursor = Atomics.load(sharedView, READ_CURSOR_INDEX);   // Atomic read

  const availableSamples = (writeCursor - readCursor + RING_BUFFER_CAPACITY) % RING_BUFFER_CAPACITY;

  if (availableSamples > 0) {
    const sampleIndex = DATA_START_INDEX + (readCursor % RING_BUFFER_CAPACITY);
    const sample = Atomics.load(sharedView, sampleIndex); // Thread-safe read

    // Atomically advance the read cursor
    Atomics.add(sharedView, READ_CURSOR_INDEX, 1);
    console.log(`Main consumed sample: ${sample}, available remaining: ${availableSamples - 1}`);

    // Wake the worker in case it was waiting for buffer space
    Atomics.notify(sharedView, WRITE_CURSOR_INDEX, 1);
  }
}

setInterval(consumeSamples, 10); // Consume up to 100 samples per second

// ─── audio-generator.js ───────────────────────────────────────────────────────

let sharedView;
let capacity;

self.onmessage = function (event) {
  const { sharedBuffer, capacity: cap } = event.data;
  // Worker receives the SAME SharedArrayBuffer — same physical RAM
  sharedView = new Int32Array(sharedBuffer);
  capacity = cap;
  generateSamples();
};

const WRITE_CURSOR_INDEX = 0;
const READ_CURSOR_INDEX = 1;
const DATA_START_INDEX = 2;

function generateSamples() {
  let sampleCounter = 0;

  while (sampleCounter < 50) { // Generate 50 test samples
    const writeCursor = Atomics.load(sharedView, WRITE_CURSOR_INDEX);
    const readCursor = Atomics.load(sharedView, READ_CURSOR_INDEX);
    const usedSlots = (writeCursor - readCursor + capacity) % capacity;

    if (usedSlots < capacity - 1) {
      // Buffer has space — write the sample atomically
      const targetIndex = DATA_START_INDEX + (writeCursor % capacity);
      const newSample = Math.floor(Math.sin(sampleCounter * 0.1) * 1000); // Fake audio
      Atomics.store(sharedView, targetIndex, newSample);
      Atomics.add(sharedView, WRITE_CURSOR_INDEX, 1); // Advance write cursor
      sampleCounter++;
    } else {
      // Buffer is full — wait for main thread to consume (blocks THIS worker thread only)
      // Atomics.wait is FORBIDDEN on the main thread — use waitAsync there
      Atomics.wait(sharedView, WRITE_CURSOR_INDEX, writeCursor, 100); // 100ms timeout
    }
  }

  console.log('Worker: Finished generating 50 samples');
}
▶ Output
Main consumed sample: 0, available remaining: 0
Main consumed sample: 841, available remaining: 1
Main consumed sample: 909, available remaining: 2
...
Worker: Finished generating 50 samples
⚠️
Watch Out: Atomics.wait Throws on the Main Thread
`Atomics.wait` throws `TypeError: Atomics.wait cannot be used on the main thread` if you call it anywhere except inside a Worker. This catches people out when they refactor code from a Worker back to the main thread. The non-blocking alternative is `Atomics.waitAsync()`, which returns a Promise and is safe to use anywhere — use it in main-thread code that needs to react to SAB state changes.

Module Workers, Worker Pools, and Production Architecture

Classic Workers load a script file via URL. But modern apps use ES modules everywhere, and you want import statements inside Workers too. Module Workers solve this: pass { type: 'module' } as the second constructor option. The Worker then gets a full ES module environment — static imports, dynamic imports, tree-shaking, the works. The caveat: Firefox only shipped module Worker support in version 114 (mid-2023), so check your browser-support targets, and note that Node.js uses its own worker_threads API with a different Worker constructor rather than this browser API.

For production apps, spawning a new Worker per task is expensive — thread creation has OS-level overhead (typically 1-5ms plus memory for the stack). For high-frequency tasks you want a Worker pool: a fixed set of Workers that you keep alive and assign tasks to via a queue. The pool returns a Promise for each task, resolves it when the Worker responds, then marks that Worker as idle. This is exactly how Comlink (Google's Worker abstraction library) and threads.js work under the hood.

Inline Workers — created from a Blob URL — let you define Worker code in the same file as your main thread code, useful for bundler-unfriendly environments or quick demos. The pattern uses URL.createObjectURL(new Blob([workerCode], { type: 'text/javascript' })). Remember to call URL.revokeObjectURL after the Worker is created, or you leak the Blob URL.
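A minimal sketch of that inline-Worker pattern (the doubling handler is a placeholder; the Worker constructor itself only exists in browsers, hence the guard):

```javascript
// Worker source lives in a string — no separate file needed.
const workerSource = `
  self.onmessage = (event) => {
    self.postMessage(event.data * 2); // Placeholder work
  };
`;

// Wrap the source in a Blob and mint a blob: URL for it.
const blobUrl = URL.createObjectURL(
  new Blob([workerSource], { type: 'text/javascript' })
);

if (typeof Worker !== 'undefined') { // Worker exists in browsers, not in plain Node
  const inlineWorker = new Worker(blobUrl);
  URL.revokeObjectURL(blobUrl); // Safe once the Worker has been constructed
  inlineWorker.onmessage = (event) => {
    console.log('Doubled:', event.data);
    inlineWorker.terminate();
  };
  inlineWorker.postMessage(21);
} else {
  URL.revokeObjectURL(blobUrl); // Still clean up outside a browser
}
```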

For error resilience in production: always set a timeout on Worker tasks and terminate unresponsive Workers with worker.terminate(). A Worker that's in an infinite loop or stuck on a network request will never respond and silently consume a thread forever if you don't guard against it.

WorkerPool.js · JAVASCRIPT
// ─── WorkerPool.js — A production-ready Worker pool implementation ────────────

class WorkerPool {
  /**
   * @param {string | URL} workerScript  - Path or URL to the worker script
   * @param {number} poolSize            - Number of persistent workers to create
   * @param {number} taskTimeoutMs       - Max ms to wait for a task before rejecting
   */
  constructor(workerScript, poolSize = navigator.hardwareConcurrency || 4, taskTimeoutMs = 10_000) {
    this.poolSize = poolSize;
    this.taskTimeoutMs = taskTimeoutMs;
    this.workers = [];       // All worker instances
    this.idleWorkers = [];   // Workers currently waiting for work
    this.taskQueue = [];     // Tasks waiting for a free worker

    // Pre-warm all workers immediately so first tasks don't pay spawn cost
    for (let i = 0; i < poolSize; i++) {
      const worker = new Worker(workerScript, { type: 'module' });
      worker.poolId = i;                // Tag for debugging
      worker._scriptURL = workerScript; // Stored so a timed-out worker can be respawned below
      this.workers.push(worker);
      this.idleWorkers.push(worker);
    }

    console.log(`WorkerPool: ${poolSize} workers ready (hardwareConcurrency: ${navigator.hardwareConcurrency})`);
  }

  /**
   * Submit a task to the pool. Returns a Promise that resolves with the Worker's response.
   * @param {any} taskPayload      - Data to send to the worker
   * @param {Transferable[]} transferList - Buffers to transfer (not copy)
   */
  runTask(taskPayload, transferList = []) {
    return new Promise((resolve, reject) => {
      const taskEntry = { taskPayload, transferList, resolve, reject };

      if (this.idleWorkers.length > 0) {
        // A worker is free — dispatch immediately
        this._dispatchTask(this.idleWorkers.pop(), taskEntry);
      } else {
        // All workers busy — queue the task
        this.taskQueue.push(taskEntry);
        console.log(`WorkerPool: Task queued. Queue depth: ${this.taskQueue.length}`);
      }
    });
  }

  _dispatchTask(worker, taskEntry) {
    const { taskPayload, transferList, resolve, reject } = taskEntry;

    // Guard against runaway workers with a timeout
    const timeoutHandle = setTimeout(() => {
      console.error(`WorkerPool: Worker ${worker.poolId} timed out — terminating and replacing`);
      worker.terminate();
      // Replace the dead worker with a fresh one
      const replacementWorker = new Worker(worker._scriptURL, { type: 'module' });
      replacementWorker.poolId = worker.poolId;
      replacementWorker._scriptURL = worker._scriptURL;
      this.workers[worker.poolId] = replacementWorker;
      this.idleWorkers.push(replacementWorker);
      reject(new Error(`Worker ${worker.poolId} timed out after ${this.taskTimeoutMs}ms`));
    }, this.taskTimeoutMs);

    // One-shot message handler — listen for exactly one response then clean up
    worker.onmessage = (event) => {
      clearTimeout(timeoutHandle); // Task completed in time — cancel the timeout
      resolve(event.data);
      this._returnWorkerToPool(worker);
    };

    worker.onerror = (errorEvent) => {
      clearTimeout(timeoutHandle);
      reject(new Error(`Worker error: ${errorEvent.message} at ${errorEvent.filename}:${errorEvent.lineno}`));
      this._returnWorkerToPool(worker);
      errorEvent.preventDefault();
    };

    worker.postMessage(taskPayload, transferList);
  }

  _returnWorkerToPool(worker) {
    if (this.taskQueue.length > 0) {
      // There's a queued task — assign it immediately rather than idling the worker
      const nextTask = this.taskQueue.shift();
      this._dispatchTask(worker, nextTask);
    } else {
      this.idleWorkers.push(worker);
    }
  }

  /** Shut down all workers cleanly — call this on app teardown */
  destroy() {
    this.workers.forEach(worker => worker.terminate());
    this.workers = [];
    this.idleWorkers = [];
    this.taskQueue = [];
    console.log('WorkerPool: All workers terminated');
  }
}

// ─── Usage example ────────────────────────────────────────────────────────────

const pool = new WorkerPool('./hash-worker.js', 4, 5_000);

// Fire off 10 tasks — pool queues the overflow and drains it automatically
const tasks = Array.from({ length: 10 }, (_, i) => ({
  inputData: `dataset-chunk-${i}`,
  chunkIndex: i
}));

const results = await Promise.all(tasks.map(task => pool.runTask(task)));
console.log('All tasks complete. Results count:', results.length);

pool.destroy();
▶ Output
WorkerPool: 4 workers ready (hardwareConcurrency: 8)
WorkerPool: Task queued. Queue depth: 1
WorkerPool: Task queued. Queue depth: 2
WorkerPool: Task queued. Queue depth: 3
WorkerPool: Task queued. Queue depth: 4
WorkerPool: Task queued. Queue depth: 5
WorkerPool: Task queued. Queue depth: 6
All tasks complete. Results count: 10
WorkerPool: All workers terminated
🔥
Interview Gold: navigator.hardwareConcurrency
`navigator.hardwareConcurrency` returns the number of logical CPU cores available to the browser — use it as the default pool size instead of hardcoding a number. On a 12-core MacBook Pro it returns 12; on a budget Android it might return 4. Sizing your pool to the hardware means you max out parallelism on powerful machines without thrashing slower ones.
| Feature / Aspect | Web Worker (Dedicated) | SharedArrayBuffer + Atomics |
| --- | --- | --- |
| Communication model | Message passing (postMessage) | Shared memory (direct byte access) |
| Data transfer cost | Structured clone (copy) or transfer (zero-copy) | Zero cost — same memory, no transfer needed |
| Race condition risk | None — only one side owns data at a time | High — must use Atomics for all concurrent access |
| DOM access | Forbidden | Forbidden |
| Required security headers | None | COOP: same-origin + COEP: require-corp |
| Use case fit | CPU tasks, data processing, image/video filters | Real-time audio, WASM heaps, ring buffers |
| Main thread blocking | Never | Atomics.waitAsync only (wait blocks worker threads) |
| Browser support | Universal (IE10+) | Chrome 92+, Firefox 79+, Safari 15.2+ (all with headers) |
| Debugging experience | Good — DevTools shows worker thread separately | Harder — data races are non-deterministic |
| Complexity to implement correctly | Low–Medium | High — requires deep concurrency knowledge |

🎯 Key Takeaways

  • Web Workers run in a separate V8 isolate with their own heap, event loop, and GC — they're true OS threads, not coroutines or fibers, which is why they don't block the main thread at all.
  • Always pass Transferable objects (ArrayBuffer, ImageBitmap, etc.) in the transfer list of postMessage for large data — cloning megabyte buffers at 60fps is a GC timebomb, not a performance strategy.
  • SharedArrayBuffer requires Cross-Origin-Opener-Policy: same-origin and Cross-Origin-Embedder-Policy: require-corp HTTP headers — their absence makes typeof SharedArrayBuffer === 'undefined' at runtime with zero explanation in the console.
  • Never spawn a Worker per task in a hot path — pre-warm a pool sized to navigator.hardwareConcurrency, implement task queuing with timeouts, and replace timed-out workers rather than letting them silently consume threads forever.

⚠ Common Mistakes to Avoid

  • Mistake 1: Trying to pass DOM nodes or class instances with methods via postMessage — The structured clone algorithm silently strips prototype chains, so a class instance arrives as a plain Object on the other side with all methods missing. There's no error thrown. Fix: either serialize to a plain data object (DTO pattern) before posting, or use a shared module that both sides import to reconstruct the class from the plain data.
  • Mistake 2: Forgetting to list transferables in the transfer list — Writing worker.postMessage({ buffer: myArrayBuffer }) when you meant to transfer instead of clone. The buffer is copied silently (no error), you get no zero-copy benefit, and the original buffer remains valid on the sending side (which may even look like it worked correctly). Fix: always pass the transfer list as the second argument: worker.postMessage({ buffer: myArrayBuffer }, [myArrayBuffer]). No lint rule reliably catches this, so enforce it in code review or wrap postMessage in a helper that requires an explicit transfer list.
  • Mistake 3: Spawning a new Worker for every task instead of pooling — Each Worker creation allocates a new OS thread and V8 isolate, costing 1-5ms and several MB of memory. In a loop processing 1000 items, this creates 1000 threads and likely crashes the tab. Fix: create a fixed-size Worker pool on app init, submit tasks to the pool's queue, and reuse workers across the lifetime of the page. Size the pool with navigator.hardwareConcurrency.
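The DTO fix from Mistake 1 can be sketched like this (the Vector class and the toDTO/fromDTO names are illustrative, not a standard API):

```javascript
class Vector {
  constructor(x, y) { this.x = x; this.y = y; }
  length() { return Math.hypot(this.x, this.y); }

  // Strip down to clone-safe plain data before posting
  toDTO() { return { x: this.x, y: this.y }; }

  // Rebuild a full instance (methods included) from the received plain data
  static fromDTO(dto) { return new Vector(dto.x, dto.y); }
}

// Sending side:   worker.postMessage(someVector.toDTO());
// Receiving side: const vector = Vector.fromDTO(event.data);

// Simulate the round trip locally — structuredClone uses the same algorithm:
const received = structuredClone(new Vector(3, 4).toDTO());
const rebuilt = Vector.fromDTO(received);
console.log(typeof received.length); // 'undefined' — plain object, no methods
console.log(rebuilt.length());       // 5 — methods restored
```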

Interview Questions on This Topic

  • Q: Why can't Web Workers access the DOM, and what would happen at the architecture level if they could?
  • Q: Explain the difference between transferring an ArrayBuffer and cloning it. When would cloning actually be preferable to transferring?
  • Q: SharedArrayBuffer was disabled across all browsers in early 2018 and later re-enabled. What vulnerability caused the ban, and what headers must a server send today to allow its use — and why do those specific headers mitigate the risk?

Frequently Asked Questions

Can a Web Worker access localStorage or sessionStorage?

No — both localStorage and sessionStorage are synchronous APIs tied to the window object, which doesn't exist in a Worker's scope. If you need persistent storage from a Worker, use the IndexedDB API (fully supported in Workers) or the Cache API. A SharedArrayBuffer can serve as a fast in-memory shared state layer between threads, but note that it is not persistent storage; its contents vanish when the page closes.

What's the difference between a Dedicated Worker and a Shared Worker?

A Dedicated Worker is owned by exactly one document and is destroyed when that page closes. A Shared Worker can be connected to by multiple browsing contexts (tabs, iframes) that share the same origin — they communicate via a MessagePort obtained through the connect event. Shared Workers are useful for cross-tab state synchronisation, but Safari support has been patchy (absent for years, only restored in Safari 16) and the lifecycle semantics are tricky, so most teams prefer BroadcastChannel or a Service Worker for cross-tab communication instead.
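For comparison, the BroadcastChannel alternative needs almost no ceremony (a sketch; the 'app-state' channel name is made up):

```javascript
// Every same-origin browsing context (tab, iframe, worker) that opens a
// BroadcastChannel with the same name joins the same message bus.
const channel = new BroadcastChannel('app-state');

// Receive updates posted by any OTHER context on the channel
channel.onmessage = (event) => {
  console.log('State update from another tab:', event.data);
};

// Broadcast to everyone else (the sender does not receive its own message)
channel.postMessage({ theme: 'dark' });

// Close when this context no longer needs the channel, otherwise it
// keeps the page (or a Node process) alive:
channel.close();
```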

Does using a Web Worker actually run code in parallel on multiple CPU cores?

Yes — this is the key difference between Web Workers and async/await. Async/await is cooperative concurrency on a single core: the JS engine interleaves tasks between awaits but only one thing runs at a time. A Web Worker maps to a real OS thread, which the OS scheduler can assign to a different CPU core. CPU-intensive work in a Worker literally runs simultaneously with your main thread's work, achieving true parallelism. This is why Workers actually speed up heavy computation, while async/await only improves responsiveness for I/O-bound work.

TheCodeForge Editorial Team · Verified Author

Written and reviewed by senior developers with real-world experience across enterprise, startup and open-source projects. Every article on TheCodeForge is written to be clear, accurate and genuinely useful — not just SEO filler.
