
Docker vs Virtual Machine: Key Differences, Performance & When to Use Each

Part of: Docker → Topic 3 of 17
Docker vs VM deep-dive: kernel-level architecture, real benchmark data, security isolation trade-offs, and decision framework for choosing the right abstraction in production.
βš™οΈ Intermediate β€” basic DevOps knowledge assumed
In this tutorial, you'll learn
  • Containers share the host kernel — they start in milliseconds and use megabytes of memory. VMs have a separate kernel — they are heavier but provide stronger isolation.
  • The shared kernel is the fundamental security trade-off. A kernel CVE affects all containers on the host. For multi-tenant or untrusted workloads, use gVisor, Kata, or Firecracker.
  • Containers deliver near-native performance (<2% overhead). VMs add 5-15% overhead. The biggest gap is disk I/O — always use virtio drivers in VMs.
⚡ Quick Answer
  • VMs run a full guest OS with its own kernel on top of a hypervisor
  • Containers share the host kernel and isolate via namespaces and cgroups
  • VMs provide stronger isolation (separate kernel) but are heavier (minutes to start, GB of RAM)
  • Containers start in milliseconds and use MB of overhead
  • Hypervisor (VMware, KVM, Hyper-V): abstracts hardware, runs guest kernels
  • Container runtime (containerd, runc): leverages kernel namespaces, cgroups, seccomp
  • Union filesystem (overlay2): layers images efficiently for containers
🚨 START HERE
Container vs VM Triage Cheat Sheet
First-response commands when performance degradation, isolation concerns, or resource contention is reported.
🟡 Container performance degraded — noisy neighbor suspected.
Immediate Action: Check per-container resource usage and cgroup limits.
Commands
docker stats --no-stream
cat /sys/fs/cgroup/cpu/docker/<container-id>/cpu.shares
Fix Now: Set --cpus and --memory limits on all production containers. Use --cpus=1.0 --memory=512m as a starting point.
🟠 VM startup is slow — auto-scaling cannot keep up with traffic.
Immediate Action: Check VM boot time and cloud-init duration.
Commands
systemd-analyze blame (inside VM)
cloud-init analyze show (inside VM)
Fix Now: Use pre-baked AMIs with applications installed. Consider containers for sub-second scaling requirements.
🟡 Suspected container escape — unexpected process on host.
Immediate Action: Isolate the host and check for kernel CVEs.
Commands
uname -r && apt list --installed 2>/dev/null | grep linux-image
ps aux | grep -v 'dockerd\|containerd\|docker' | grep -v grep
Fix Now: Patch the kernel immediately. Migrate to gVisor or Kata Containers for untrusted workloads. Rotate all secrets accessible from the host.
🟡 VM memory overhead too high — cannot fit the expected number of VMs.
Immediate Action: Check per-VM memory usage and hypervisor overhead.
Commands
virsh dommemstat <vm-name> (KVM) or esxtop (VMware)
free -h (inside each VM)
Fix Now: Enable KSM for memory deduplication. Use containers for workloads that do not need full OS isolation.
🟠 Container network latency is higher than expected.
Immediate Action: Check the network driver and overlay configuration.
Commands
docker network inspect <network> --format '{{.Driver}}'
iperf3 -c <target-container-ip> (from inside the container)
Fix Now: Use host networking for latency-sensitive workloads. Check MTU settings for overlay networks (should be 1450 for VXLAN, not 1500).
🟠 VM disk I/O is slow — database queries degraded.
Immediate Action: Check the disk driver and hypervisor storage backend.
Commands
lsblk -o NAME,TYPE,TRAN (inside VM — check for virtio)
iostat -x 1 5 (inside VM)
Fix Now: Ensure virtio-blk or virtio-scsi drivers are used. Use NVMe passthrough for latency-sensitive databases. Consider containers with direct host filesystem access.
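The "set limits on all production containers" fix above can be audited in bulk. A minimal sketch: the `flag_unlimited` helper and its input format are illustrations, not a Docker feature. It reads `<name> <memory-bytes>` pairs, and Docker reports 0 for containers with no memory limit.

```shell
#!/bin/sh
# Hypothetical audit helper: print the names of containers that have no
# memory limit set. Docker reports .HostConfig.Memory as 0 (unlimited).
flag_unlimited() {
  awk '$2 == 0 { print $1 }'
}

# Intended usage against a live daemon (not run here):
#   docker ps -q \
#     | xargs docker inspect --format '{{.Name}} {{.HostConfig.Memory}}' \
#     | flag_unlimited

# Demo with captured output:
printf '/api 0\n/db 536870912\n/cache 0\n' | flag_unlimited
# prints /api and /cache
```

Wire this into CI or a cron job so unlimited containers are flagged before they become the noisy neighbor.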
Production Incident: Multi-Tenant SaaS Platform Compromised via Container Escape — Kernel CVE Exploited Across All Tenants
A SaaS platform running untrusted customer code in Docker containers was compromised when an attacker exploited a kernel vulnerability (Dirty Pipe, CVE-2022-0847) to escape the container and access the host filesystem, including other tenants' data and the platform's internal secrets.
Symptom: A customer reported that their container had access to files they did not create. The platform's security team noticed anomalous filesystem activity on the host — files being modified outside of any container's mount namespace. Internal monitoring detected a process running on the host that was not associated with any managed container: a cryptominer installed by the attacker after escaping the container.
Assumption: The team assumed a misconfigured volume mount — perhaps a container had accidentally been given access to the host filesystem. They audited all volume mounts and found no misconfigurations. They assumed a Docker daemon vulnerability and checked for known CVEs in Docker Engine. They assumed a compromised Docker image and scanned all images for malware.
Root cause: The platform ran untrusted customer code in standard Docker containers on a shared Linux host. The host kernel was vulnerable to Dirty Pipe (CVE-2022-0847), which allowed any process to overwrite read-only files via the kernel's page cache. A customer's container exploited this vulnerability to overwrite /etc/passwd on the host, adding a root user. The attacker then used this root access to escape the container, install a cryptominer, and access other tenants' container filesystems via /proc. The shared kernel was the attack surface — all containers on the host were vulnerable simultaneously.
Fix:
1. Immediately patched the host kernel to the fixed version.
2. Migrated the multi-tenant workload from standard Docker containers to gVisor (runsc), which provides a user-space kernel that intercepts syscalls and prevents direct kernel access.
3. Added kernel version monitoring that alerts on any host running a kernel with known CVEs.
4. Implemented tenant isolation using Firecracker microVMs for high-risk tenants.
5. Rotated all internal secrets and credentials that were accessible from the compromised host.
6. Added a security review requirement for any workload that runs untrusted code.
Key Lesson
  • Containers share the host kernel — a kernel vulnerability affects all containers on that host simultaneously. This is the fundamental security trade-off vs VMs.
  • Multi-tenant environments running untrusted code must not use standard containers. Use gVisor (user-space kernel), Kata Containers (lightweight VMs), or Firecracker (microVMs).
  • Kernel patching is a critical security operation for container hosts. A single unpatched kernel CVE can compromise every container on the host.
  • Monitor kernel versions across all hosts proactively. Automated alerts for known CVEs are essential for container infrastructure.
  • The Dirty Pipe vulnerability was a wake-up call for the industry — it proved that container isolation without kernel hardening is insufficient for multi-tenant workloads.
Production Debug Guide
From noisy neighbors to kernel panics — systematic debugging paths.
Container performance degraded — CPU or I/O latency spiked without application changes. → Check for noisy neighbors — other containers on the same host competing for resources. Run docker stats to see CPU and memory usage per container. Check cgroup limits: cat /sys/fs/cgroup/cpu/<container-cgroup>/cpu.shares. If no limits are set, one container can starve others. Fix: set --cpus and --memory limits on all production containers.
VM startup takes 5+ minutes, delaying auto-scaling during traffic spikes. → Check whether the VM uses a full OS image or a minimal image. Check whether cloud-init or first-boot scripts are running. Check hypervisor resource contention. Fix: use pre-baked AMIs/images with applications already installed. Consider containers for workloads that need sub-second scaling.
Container escape suspected — process running on host outside of any container. → Check host processes: ps aux | grep -v 'docker\|containerd'. Check /proc for unexpected processes. Check the kernel version for known CVEs: uname -r, then cross-reference with CVE databases. Fix: isolate the host, patch the kernel, investigate the escape vector, and migrate to gVisor or Kata if running untrusted code.
VM memory overhead consuming too much host RAM — fewer VMs fit than expected. → Check guest OS memory usage: free -h inside each VM. Check hypervisor overhead: the hypervisor itself consumes memory for each VM (typically 30-100MB per VM). Check whether memory overcommit is enabled. Fix: use containers for workloads that do not need full OS isolation. Enable KSM (Kernel Same-page Merging) for VM memory deduplication.
Container network performance is 20-30% slower than expected. → Check whether the container uses the bridge driver (adds NAT overhead) or host networking. Check whether a VXLAN overlay is in use (adds encapsulation overhead). Run iperf3 between containers and compare with host-to-host. Fix: use host networking for latency-sensitive workloads. Use macvlan for direct L2 access. Optimize MTU for overlay networks.
VM disk I/O is slow — database queries take 3x longer than on bare metal. → Check whether the VM uses virtio (paravirtualized) or emulated drivers. Check the disk scheduler: cat /sys/block/vda/queue/scheduler. Check whether the hypervisor storage backend is overcommitted. Fix: use virtio-blk or virtio-scsi drivers. Use NVMe passthrough for latency-sensitive workloads. Switch to containers with direct host filesystem access for databases.
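The overlay MTU advice above (1450 for VXLAN, not 1500) follows from encapsulation overhead: VXLAN adds roughly 50 bytes (outer Ethernet 14 + IPv4 20 + UDP 8 + VXLAN header 8). A minimal sketch; the `overlay_mtu` helper is an illustration, not a Docker command:

```shell
#!/bin/sh
# Compute the MTU a VXLAN overlay interface should use, given the
# parent (underlay) interface MTU. VXLAN encapsulation adds ~50 bytes:
# outer Ethernet (14) + IPv4 (20) + UDP (8) + VXLAN header (8).
VXLAN_OVERHEAD=50

overlay_mtu() {
  echo $(( $1 - VXLAN_OVERHEAD ))
}

overlay_mtu 1500   # standard Ethernet underlay -> 1450
overlay_mtu 9000   # jumbo-frame underlay -> 8950
```

If the overlay MTU is left at 1500 on a 1500-byte underlay, large packets get fragmented or silently dropped, which shows up as mysterious stalls on big responses while small requests work fine.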

The VM vs container decision is not a technology preference — it is a security, performance, and operational trade-off that directly impacts cost, startup time, and isolation guarantees. Getting it wrong means either overpaying for VMs where containers suffice, or under-isolating workloads where VMs are required.

The architectural difference is at the kernel level. VMs virtualize hardware — each VM runs its own kernel on top of a hypervisor. Containers virtualize the OS — they share the host kernel and use Linux namespaces for isolation and cgroups for resource limits. This single difference cascades into every other trade-off: startup time, memory footprint, security boundary, and portability.

Common misconceptions: containers are not insecure by default (misconfiguration is the problem), VMs are not always better (they are heavier and slower), and the choice is not binary (gVisor and Kata Containers provide hybrid approaches). The right answer depends on your workload's trust boundary, performance requirements, and compliance needs.

Architecture: Kernel-Level Differences Between VMs and Containers

The fundamental difference between VMs and containers is where the isolation boundary sits. VMs isolate at the hardware level. Containers isolate at the OS level. This single difference cascades into every other trade-off.

VM architecture: A hypervisor (VMware ESXi, KVM, Hyper-V) sits between the hardware and the guest operating systems. Each VM runs a full guest OS with its own kernel, drivers, system libraries, and init system. The hypervisor virtualizes CPU, memory, disk, and network for each VM. The guest OS believes it has exclusive access to hardware — the hypervisor translates and multiplexes requests to the real hardware.

Container architecture: The container runtime (containerd, runc) leverages Linux kernel features — namespaces for isolation and cgroups for resource limits. Each container gets its own view of the filesystem (mount namespace), network stack (network namespace), process tree (PID namespace), and user IDs (user namespace). But all containers share the same kernel. There is no guest OS — the container process runs directly on the host kernel.

The isolation boundary matters: Because VMs have a separate kernel, a kernel vulnerability in one VM does not affect other VMs or the host. Because containers share the host kernel, a kernel vulnerability affects all containers on that host. This is the fundamental security trade-off.

Hypervisor types: Type 1 hypervisors (bare-metal: ESXi, KVM, Xen) run directly on hardware and are more efficient. Type 2 hypervisors (hosted: VirtualBox, VMware Workstation) run on top of a host OS and add an extra layer of overhead. Cloud providers use Type 1 hypervisors. Developer laptops typically use Type 2.

io/thecodeforge/architecture_inspection.sh · BASH
#!/bin/bash
# Inspect the architecture differences between VMs and containers

# ── Container: check kernel sharing ─────────────────────────────────────
# Run two containers and compare their kernel versions
docker run --rm alpine:3.19 uname -r
# Output: 6.1.0-18-amd64 (host kernel version)

docker run --rm ubuntu:22.04 uname -r
# Output: 6.1.0-18-amd64 (SAME kernel — they share the host kernel)

# Check namespaces for a running container
CONTAINER_PID=$(docker inspect --format '{{.State.Pid}}' <container-name>)
ls -la /proc/$CONTAINER_PID/ns/
# Output shows: ipc, mnt, net, pid, user, uts — each is an isolated namespace

# Check cgroup resource limits (cgroup v1 paths shown; on cgroup v2 hosts,
# look under /sys/fs/cgroup/system.slice/docker-<id>.scope/ for cpu.max
# and memory.max instead)
cat /sys/fs/cgroup/cpu/docker/<container-id>/cpu.shares
# Default: 1024. Set with --cpu-shares; hard CPU caps come from --cpus (CFS quota).

cat /sys/fs/cgroup/memory/docker/<container-id>/memory.limit_in_bytes
# Shows the memory limit set by --memory flag

# ── VM: check hardware virtualization ───────────────────────────────────
# Check if the host supports hardware virtualization
egrep -c '(vmx|svm)' /proc/cpuinfo
# Output > 0 means hardware virtualization is supported

# Check loaded hypervisor modules
lsmod | grep -E 'kvm|vbox|vmw'
# kvm_intel or kvm_amd = KVM is loaded
# vboxdrv = VirtualBox is loaded

# Check VM disk driver (inside a VM)
lsblk -o NAME,TYPE,TRAN,MODEL
# virtio = paravirtualized driver (fast)
# ide/scsi = emulated driver (slow)

# ── Compare startup time ────────────────────────────────────────────────
# Container startup
time docker run --rm alpine:3.19 echo 'container started'
# Typical: 0.3-0.5 seconds

# VM startup (using a minimal cloud image)
time virsh start my-vm && while ! virsh dominfo my-vm | grep -q 'running'; do sleep 1; done
# Typical: 15-60 seconds depending on OS and cloud-init
▶ Output
# Container kernel check:
6.1.0-18-amd64
6.1.0-18-amd64
# Both containers share the same host kernel

# Container startup time:
container started
real 0m0.312s

# VM startup time:
Domain my-vm started
real 0m23.451s
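The kernel sharing demonstrated above can also be seen without Docker: every Linux process, containerized or not, exposes its namespace membership under /proc. A quick sketch:

```shell
#!/bin/sh
# Every Linux process has its namespace membership exposed as symlinks
# in /proc/<pid>/ns. Two processes are in the same namespace when the
# inode numbers in these links match.
ls /proc/self/ns
# Typical entries: cgroup ipc mnt net pid user uts (exact set varies by kernel)

# Each link target encodes the namespace type and inode, e.g. pid:[4026531836]
readlink /proc/self/ns/pid

# A container's PID 1 would show *different* inodes for mnt, net, pid, etc.,
# while uname -r still reports the single shared host kernel.
uname -r
```

Comparing these inodes between a host shell and a container's init process is the most direct proof that a "container" is just a process in different namespaces.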
Mental Model
VMs as Houses, Containers as Apartments
Why does the shared kernel in containers create a security trade-off?
  • A kernel vulnerability (CVE) affects all containers on the host because they all share the same kernel.
  • VMs are immune to kernel CVEs in other VMs because each VM has its own kernel.
  • For single-tenant workloads (your code, your infrastructure), container isolation is sufficient.
  • For multi-tenant workloads (untrusted code), the shared kernel is an unacceptable attack surface.
📊 Production Insight
The namespace inspection commands are essential for debugging container isolation issues. When a container cannot reach the network, check its network namespace. When a container cannot see other processes, check its PID namespace. When file permissions behave unexpectedly, check its user namespace. Understanding namespaces is the key to understanding container isolation.
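Acting on that insight usually means entering the suspect namespace. A hypothetical wrapper (the helper name and usage are illustrative; nsenter itself is a real util-linux tool, and running it requires root):

```shell
#!/bin/sh
# Hypothetical debugging helper: run a command inside one namespace of a
# container's init process, without needing a shell or tools in the image.
# Usage: netns_exec <container-pid> <command...>
netns_exec() {
  pid="$1"; shift
  # --net selects only the network namespace; swap for --pid/--mount/--user
  # when debugging process visibility or filesystem/permission issues.
  nsenter --target "$pid" --net -- "$@"
}

# Intended usage (requires root and a running container; not run here):
#   PID=$(docker inspect --format '{{.State.Pid}}' my-api)
#   netns_exec "$PID" ip addr show
```

Because the command binary comes from the host, this works even on distroless images that ship no shell.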
🎯 Key Takeaway
VMs isolate at the hardware level — each VM has its own kernel. Containers isolate at the OS level — all containers share the host kernel. This is the fundamental trade-off: VMs provide stronger isolation but are heavier. Containers are lighter, but the shared kernel is a security boundary for multi-tenant workloads.
Architecture Selection by Workload Type
If: Single-tenant application workload (API, web server, worker) → Use: Container. Sufficient isolation, minimal overhead, fast startup.
If: Multi-tenant environment running untrusted customer code → Use: VM (Firecracker, Kata) or gVisor. A shared kernel is unacceptable for untrusted code.
If: Workload requires a specific kernel version or kernel modules → Use: VM. Containers share the host kernel and cannot run a different kernel.
If: Legacy application requiring a full OS environment → Use: VM. Some applications require systemd, specific drivers, or a full init system.
If: High-density microservices deployment → Use: Container. 10-50x more containers than VMs fit on the same hardware.

Performance Benchmarks: CPU, Memory, I/O, and Network

Performance differences between VMs and containers are real but context-dependent. For most application workloads, the difference is negligible. For I/O-intensive and network-intensive workloads, the difference can be significant.

CPU performance: Containers deliver near-native CPU performance — typically within 1-2% of bare metal. The overhead comes from cgroup accounting and namespace switching. VMs add 5-15% overhead from hardware virtualization (VT-x/AMD-V) and guest OS scheduling. The overhead is higher for workloads with frequent context switches (many threads, high syscall rate).

Memory performance: Containers use the host's native memory management — no overhead. VMs require the hypervisor to manage memory translation (EPT/NPT), which adds 2-5% overhead. Memory overcommit (allocating more virtual memory than physical) is common in VM environments and can cause swapping, which degrades performance dramatically.

Disk I/O performance: This is where the difference is most significant. Containers using the host's filesystem (bind mounts) deliver near-native I/O performance. VMs using virtualized disk drivers (virtio-blk) add 10-30% I/O overhead. Emulated drivers (IDE, legacy SCSI) can add 50%+ overhead. NVMe passthrough eliminates this overhead but limits VM mobility.

Network performance: Containers using bridge networking add 5-10% overhead from NAT and virtual bridge processing. Containers using host networking deliver near-native performance. VMs using virtio-net add 5-15% overhead. SR-IOV passthrough eliminates this overhead but requires hardware support.

Startup time: This is the most dramatic difference. Containers start in 0.3-2 seconds. VMs start in 15-60 seconds (full boot) or 1-5 seconds (resume from snapshot). For auto-scaling workloads that need to respond to traffic spikes in seconds, containers are the only viable option.

io/thecodeforge/performance_benchmark.sh · BASH
#!/bin/bash
# Benchmark container vs VM performance across CPU, memory, I/O, and network

# ── CPU Benchmark ───────────────────────────────────────────────────────
# Container: CPU performance (sysbench)
docker run --rm severalnines/sysbench sysbench cpu --cpu-max-prime=20000 run
# Look for 'events per second' — higher is better

# VM: CPU performance (run inside VM)
apt install -y sysbench
sysbench cpu --cpu-max-prime=20000 run
# Compare 'events per second' with container result

# ── Memory Benchmark ────────────────────────────────────────────────────
# Container: memory throughput
docker run --rm severalnines/sysbench sysbench memory --memory-block-size=1M --memory-total-size=10G run
# Look for 'transferred' throughput in MiB/sec

# VM: memory throughput (run inside VM)
sysbench memory --memory-block-size=1M --memory-total-size=10G run

# ── Disk I/O Benchmark ──────────────────────────────────────────────────
# Container: disk I/O with fio
docker run --rm -v $(pwd)/fio-test:/test loicmahieu/alpine-fio \
  fio --name=randread --ioengine=libaio --rw=randread --bs=4k \
  --numjobs=4 --size=256M --runtime=10 --time_based --filename=/test/file
# Look for 'IOPS' and 'lat avg' — higher IOPS and lower latency are better

# VM: disk I/O (run inside VM)
fio --name=randread --ioengine=libaio --rw=randread --bs=4k \
  --numjobs=4 --size=256M --runtime=10 --time_based --filename=/tmp/fio-test/file

# ── Network Benchmark ───────────────────────────────────────────────────
# Container: network throughput with iperf3
# Server:
docker run -d --name iperf-server -p 5201:5201 networkstatic/iperf3 -s
# Client:
docker run --rm networkstatic/iperf3 -c <host-ip> -t 10
# Look for 'sender' bandwidth in Gbits/sec

# VM: network throughput (run inside VM)
iperf3 -c <host-ip> -t 10

# ── Startup Time Benchmark ──────────────────────────────────────────────
# Container: measure cold start
time docker run --rm alpine:3.19 echo 'started'
# Typical: 0.3-0.5s

# Container: measure warm start (image already pulled)
time docker run --rm alpine:3.19 echo 'started'
# Typical: 0.1-0.2s

# VM: measure boot time (run on hypervisor)
time virsh start test-vm && sleep 1 && while ! virsh dominfo test-vm | grep -q running; do sleep 0.5; done
# Typical: 15-60s
▶ Output
# CPU benchmark comparison (sysbench, 20000 primes):
# Container: ~4800 events/sec (within 2% of host)
# VM (virtio): ~4200 events/sec (12% overhead)
# VM (emulated): ~3600 events/sec (25% overhead)

# Memory benchmark comparison:
# Container: ~8200 MiB/sec (near-native)
# VM (virtio): ~7800 MiB/sec (5% overhead)

# Disk I/O comparison (fio, 4k random read):
# Container (bind mount): ~45000 IOPS, 0.09ms latency
# VM (virtio-blk): ~38000 IOPS, 0.11ms latency (15% slower)
# VM (NVMe passthrough): ~44000 IOPS, 0.09ms latency (near-native)

# Network comparison (iperf3):
# Container (host network): ~9.4 Gbits/sec
# Container (bridge): ~8.8 Gbits/sec (6% overhead)
# VM (virtio-net): ~8.5 Gbits/sec (10% overhead)
# VM (SR-IOV): ~9.3 Gbits/sec (near-native)

# Startup time comparison:
# Container (cold): 0.38s
# Container (warm): 0.12s
# VM (full boot): 23.4s
# VM (resume from snapshot): 2.1s
Mental Model
Performance Overhead as Tax
When does the VM performance overhead actually matter?
  • High-throughput workloads processing millions of requests per second — even 5% overhead is significant.
  • I/O-intensive workloads (databases, search engines) — disk I/O overhead can reach 30% with emulated drivers.
  • Latency-sensitive workloads (trading, real-time) — extra scheduling jitter from the hypervisor adds unpredictable latency.
  • For most web applications serving <10K requests/second, the overhead is negligible and should not drive the VM vs container decision.
📊 Production Insight
The disk I/O overhead in VMs is the most commonly underestimated performance issue. A team migrated their PostgreSQL database from bare metal to VMs and saw query latency increase by 40%. The root cause: the VM was using IDE emulated drivers instead of virtio-blk. Switching to virtio-blk reduced the overhead from 40% to 15%. Switching to NVMe passthrough eliminated the overhead entirely. Always verify the disk driver inside VMs with lsblk -o NAME,TYPE,TRAN.
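The lsblk check from that insight can be scripted for fleet-wide audits. A sketch: the `uses_virtio` helper is an assumption, and it parses `lsblk -o NAME,TYPE,TRAN` output from stdin so it can be exercised on captured text:

```shell
#!/bin/sh
# Hypothetical check: given `lsblk -o NAME,TYPE,TRAN` output on stdin,
# report whether any disk is attached via a non-virtio transport.
# Inside a VM, "virtio" in the TRAN column means the fast paravirtualized
# driver; "sata"/"ide"/"scsi" usually means a slow emulated controller.
uses_virtio() {
  awk 'NR > 1 && $2 == "disk" { if ($3 == "virtio") v=1; else e=1 }
       END { if (e) { print "emulated disk present"; exit 1 }
             else if (v) print "all disks on virtio" }'
}

# Demo with captured lsblk output from a virtio-backed VM:
printf 'NAME TYPE TRAN\nvda disk virtio\nvda1 part\n' | uses_virtio
# prints: all disks on virtio
```

Run it for real with `lsblk -o NAME,TYPE,TRAN | uses_virtio` inside each VM; a non-zero exit makes it easy to wire into monitoring.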
🎯 Key Takeaway
Containers deliver near-native performance (<2% overhead). VMs add 5-15% overhead from hardware virtualization. The biggest performance gap is disk I/O — VMs using emulated drivers can be 30-50% slower, so always use virtio drivers in VMs. For auto-scaling workloads, containers are the only option: VMs take 15-60 seconds to boot.
Performance Optimization Strategy
If: CPU-bound workload (computation, encoding, ML inference) → Use: Containers — near-native performance. VMs add 5-15% overhead with no benefit for CPU-bound work.
If: Disk I/O-bound workload (database, search engine) → Use: Containers with bind mounts (near-native). If VMs are required, use virtio-blk or NVMe passthrough.
If: Network-intensive workload (API gateway, proxy, load balancer) → Use: Containers with host networking (near-native). If VMs are required, use SR-IOV or virtio-net.
If: Auto-scaling workload that needs sub-second startup → Use: Containers only. VMs take 15-60 seconds to boot; even snapshot resume takes 1-5 seconds.

Security Isolation: Kernel Sharing, Attack Surface, and Defense in Depth

Security isolation is the most important trade-off between VMs and containers. The difference is not theoretical — it has caused real production breaches.

VM isolation: Each VM has its own kernel. A kernel vulnerability in VM A does not affect VM B or the host. The hypervisor is the only shared component, and hypervisors have a much smaller attack surface than full kernels (fewer lines of code, fewer syscalls, simpler state machine). This is why cloud providers (AWS, GCP, Azure) use VMs for multi-tenant isolation.

Container isolation: All containers share the host kernel. A kernel vulnerability (like Dirty Pipe, CVE-2022-0847, or CVE-2020-14386) affects every container on the host. The attack surface is the entire kernel — millions of lines of code, hundreds of syscalls, complex state. Container runtimes mitigate this with seccomp (syscall filtering), AppArmor/SELinux (mandatory access control), and capabilities dropping — but these are defense-in-depth layers, not a separate kernel.

The multi-tenant boundary: For single-tenant workloads (your code, your infrastructure, your team), container isolation is sufficient. The risk of a kernel CVE being exploited by your own code is low, and you control the patching cadence. For multi-tenant workloads (running untrusted customer code), the shared kernel is an unacceptable attack surface. Use VMs (Firecracker, Kata Containers) or a user-space kernel (gVisor).

Hybrid approaches:
  • gVisor: intercepts syscalls in user space, providing a kernel-like interface without exposing the host kernel. Adds 2-10% overhead but dramatically reduces the attack surface.
  • Kata Containers: runs each container in a lightweight VM with its own kernel. Provides VM-level isolation with container-like management.
  • Firecracker: AWS's microVM technology used for Lambda and Fargate. Starts a VM in 125ms with minimal memory overhead (5MB per microVM).
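Before `--runtime=runsc` works, the gVisor runtime has to be registered with the Docker daemon. A sketch of the daemon config, written to a scratch file here rather than /etc/docker/daemon.json; the runsc install path is an assumption, so match wherever your runsc binary actually lives:

```shell
#!/bin/sh
# Sketch: register gVisor's runsc as an additional Docker runtime.
# In production this JSON lives at /etc/docker/daemon.json, followed by
# `systemctl restart docker`; we write a scratch copy for illustration.
cat > /tmp/daemon.json.example <<'EOF'
{
  "runtimes": {
    "runsc": {
      "path": "/usr/local/bin/runsc"
    }
  }
}
EOF

cat /tmp/daemon.json.example

# After restarting the daemon, select the runtime per container:
#   docker run --runtime=runsc --rm alpine:3.19 uname -r
```

Because the runtime is chosen per container, you can keep runc for trusted internal services and reserve runsc for the workloads that actually run untrusted code.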

io/thecodeforge/security_isolation.sh · BASH
#!/bin/bash
# Security isolation inspection and hardening

# ── Check container security features ───────────────────────────────────

# Check seccomp profile (syscall filtering)
docker inspect <container> --format '{{.HostConfig.SecurityOpt}}'
# Output: [seccomp=/path/to/profile.json] or [seccomp=unconfined]
# Default profile blocks ~44 dangerous syscalls out of ~300+

# Check if container is running as root
docker exec <container> id
# uid=0(root) = running as root (bad in production)
# uid=1000(appuser) = running as non-root (good)

# Check capabilities (fine-grained privilege control)
docker inspect <container> --format '{{.HostConfig.CapAdd}} {{.HostConfig.CapDrop}}'
# CapDrop: [ALL] CapAdd: [NET_BIND_SERVICE] = minimal privileges

# Check AppArmor profile
docker inspect <container> --format '{{.AppArmorProfile}}'
# docker-default = AppArmor is active (good)
# unconfined = no AppArmor (bad in production)

# ── Check if running on gVisor (user-space kernel) ──────────────────────
docker info | grep -i runtime
# runsc = gVisor runtime (enhanced isolation)
# runc = standard runtime (standard isolation)

# Run a container with gVisor
docker run --runtime=runsc --rm alpine:3.19 dmesg | head -5
# gVisor intercepts syscalls — dmesg output differs from standard Linux

# ── Check VM isolation (inside a VM) ────────────────────────────────────
# Each VM has its own kernel — verify with different kernel versions
docker run --rm alpine:3.19 uname -r  # Shows host kernel
# Inside VM: uname -r  # Shows guest kernel (can be different)

# Check if the hypervisor exposes hardware virtualization
egrep -c '(vmx|svm)' /proc/cpuinfo
# > 0 = hardware virtualization available

# ── Kernel CVE check (critical for container hosts) ─────────────────────
# Check kernel version
uname -r

# Cross-reference with known CVEs
# Example: Dirty Pipe affects kernels 5.8 through 5.16.10
# If uname -r shows 5.10.0-amd64, the host is vulnerable
# Fix: apt update && apt upgrade linux-image-$(uname -r)
▶ Output
# Container security check:
[seccomp=/etc/docker/seccomp/default.json]
uid=1000(appuser) gid=1000(appgroup)
CapDrop: [ALL] CapAdd: [NET_BIND_SERVICE]
docker-default

# gVisor runtime:
runtimes: runsc

# Kernel version check:
5.10.0-18-amd64
# This kernel version is vulnerable to Dirty Pipe (CVE-2022-0847)
# Must be patched to 5.10.104+ or 5.15.26+
Mental Model
Security Isolation as Walls vs Rules
Why is the shared kernel the fundamental security trade-off?
  • The kernel is the most privileged code on the system — it controls all hardware access, memory, and processes.
  • A kernel vulnerability allows any process (including container processes) to bypass all isolation mechanisms.
  • VMs have a separate kernel per instance — a vulnerability in one kernel does not affect others.
  • Containers mitigate this with seccomp and AppArmor, but these are kernel features — they cannot protect against kernel bugs.
📊 Production Insight
The seccomp default profile blocks ~44 dangerous syscalls (mount, reboot, kexec_load, etc.) but allows the rest. For high-security environments, create a custom seccomp profile that blocks all syscalls except those required by your application. This dramatically reduces the attack surface. Use strace or auditd to determine which syscalls your application actually uses, then build a minimal profile.
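Building the minimal profile described above starts from Docker's seccomp JSON format. A hedged sketch: the allowed syscall list here is deliberately tiny and illustrative, and a real service needs the list that strace or auditd produces for it.

```shell
#!/bin/sh
# Sketch of a deny-by-default Docker seccomp profile. defaultAction
# SCMP_ACT_ERRNO rejects every syscall not explicitly allowed below.
# The names list is illustrative only; derive the real one with strace.
cat > /tmp/seccomp-minimal.json <<'EOF'
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": ["SCMP_ARCH_X86_64"],
  "syscalls": [
    {
      "names": ["read", "write", "openat", "close", "fstat",
                "mmap", "brk", "exit_group", "futex", "rt_sigreturn"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
EOF

cat /tmp/seccomp-minimal.json

# Apply it per container:
#   docker run --security-opt seccomp=/tmp/seccomp-minimal.json my-app
```

Start permissive (log denials via strace in staging), tighten iteratively, and expect a too-small list to surface as EPERM errors at startup rather than a clean failure message.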
🎯 Key Takeaway
VMs isolate at the kernel level — a kernel CVE in one VM does not affect others. Containers share the host kernel — a kernel CVE affects all containers. For single-tenant workloads, container isolation is sufficient. For multi-tenant or untrusted code, use gVisor, Kata Containers, or Firecracker.
Security Isolation Selection
If: Single-tenant workload, trusted code, controlled patching → Use: Standard Docker containers with seccomp, AppArmor, a non-root user, and dropped capabilities.
If: Multi-tenant workload, untrusted customer code → Use: gVisor (runsc) for moderate overhead, or Firecracker/Kata for full VM isolation.
If: Compliance requirement (PCI-DSS, SOC 2) mandating kernel isolation → Use: VMs. Compliance auditors typically require a separate kernel per tenant.
If: Serverless platform (running arbitrary customer functions) → Use: Firecracker microVMs. AWS Lambda uses this — 125ms VM startup, 5MB overhead per VM.

Operational Trade-offs: Scaling, Density, Patching, and Debugging

Beyond architecture and performance, the operational differences between VMs and containers determine day-to-day engineering velocity.

Scaling speed: Containers scale in seconds — start a new container and it is ready to serve traffic in 1-2 seconds. VMs scale in minutes — boot a new VM, wait for cloud-init, install dependencies, start the application. For auto-scaling workloads that respond to traffic spikes, containers are the only option that provides sub-minute scaling.

Density: On the same hardware, you can run 10-50x more containers than VMs. A server with 64GB RAM might run 10-15 VMs (each consuming 2-4GB for the guest OS alone) or 100-200 containers (each consuming 50-200MB for the application only). This density difference directly impacts infrastructure cost.

Patching: VM patching requires updating the guest OS inside each VM — either manually, with configuration management (Ansible, Puppet), or with golden image rebuilds. Container patching requires rebuilding the image with an updated base layer and redeploying — a single docker build && docker push. Container patching is faster and more reproducible because the image is immutable.

Debugging: VMs provide a full OS environment — you can SSH in, install debugging tools, inspect logs, and run diagnostics. Containers are minimal by design — many production containers do not have a shell, let alone debugging tools. Debugging containers requires docker exec (if a shell exists), docker logs, or sidecar containers with debugging tools.

Networking: VMs typically use the hypervisor's virtual switch (vSwitch) or the cloud provider's VPC networking. Containers use software-defined networking (bridge, overlay, macvlan). VM networking is simpler to reason about (standard IP networking). Container networking adds complexity (DNS-based service discovery, overlay encapsulation, ingress routing mesh) but provides better integration with orchestration platforms.

io/thecodeforge/operational_comparison.sh · BASH
#!/bin/bash
# Operational comparison: scaling, density, patching, and debugging

# ── Scaling: container vs VM auto-scaling ─────────────────────────────────────

# Container: scale from 1 to 10 replicas in seconds
docker compose up -d --scale api=10
# All 10 containers are ready in 2-5 seconds

# VM: scale from 1 to 10 instances (AWS example)
aws autoscaling set-desired-capacity \
  --auto-scaling-group-name my-asg \
  --desired-capacity 10
# New VMs take 2-5 minutes to boot, run cloud-init, and become healthy

# ── Density: compare resource usage ──────────────────────────────────────────

# Container: check resource usage per container
docker stats --no-stream --format '{{.Name}}: {{.MemUsage}}'
# Typical output:
# api-1: 85MiB / 15.55GiB
# api-2: 92MiB / 15.55GiB
# postgres: 120MiB / 15.55GiB
# Total: ~300MB for 3 containers

# VM: check resource usage per VM (inside each VM)
free -h
# Typical output:
# total: 3.8GiB  used: 1.2GiB  (OS overhead alone)
# Total: 1.2GB per VM just for the OS, before the application starts

# ── Patching: container rebuild vs VM patching ───────────────────────────────

# Container: rebuild with updated base image
docker build --no-cache -t my-app:patched .
docker push my-app:patched
# Entire patch process: 2-5 minutes, fully automated, reproducible

# VM: patch guest OS (run inside VM)
apt update && apt upgrade -y
# Or rebuild golden image with packer/ansible
# Entire patch process: 10-30 minutes per VM, or hours for golden image rebuild

# ── Debugging: container vs VM ───────────────────────────────────────────────

# Container: exec into running container
docker exec -it <container> sh
# Limited tools — production containers often have no shell

# Container: use a debug sidecar
docker run --rm -it --pid=container:<target> --net=container:<target> \
  nicolaka/netshoot bash
# Full debugging toolkit without modifying the production container

# VM: SSH into running VM
ssh user@vm-ip
# Full OS environment — install any debugging tool

# ── Networking: container vs VM ──────────────────────────────────────────────

# Container: inspect network configuration
docker network ls
docker network inspect bridge
# Shows: subnet, gateway, connected containers, driver

# VM: inspect network configuration (inside VM)
ip addr show
ip route show
# Standard Linux networking — no abstraction layer
▶ Output
# Container scaling:
[+] Running 10/10
✔ Container api-1 Started
✔ Container api-2 Started
...
✔ Container api-10 Started
# All ready in 3.2 seconds

# Container density:
api-1: 85MiB / 15.55GiB
api-2: 92MiB / 15.55GiB
postgres: 120MiB / 15.55GiB
# 3 containers using ~300MB total

# VM density (same 64GB server):
# 10-15 VMs (each using 2-4GB for OS overhead)
# vs 100-200 containers (each using 50-200MB for app only)
Mental Model
Operational Overhead as Friction
When do VMs have an operational advantage over containers?
  • Debugging: VMs have a full OS with all tools available. Containers are minimal and often lack a shell.
  • Networking: VM networking is standard IP networking. Container networking adds abstraction layers (DNS, overlay, routing mesh).
  • Compliance: auditors understand VMs. Container isolation requires more explanation and evidence.
  • Legacy applications: some applications require systemd, specific kernel modules, or full OS features that only VMs provide.
📊 Production Insight
The density advantage of containers has a hidden cost: resource contention. Running 200 containers on a 64GB server means each container has ~320MB of headroom. A single memory leak in one container can trigger OOM kills across the server, affecting unrelated containers. Always set memory limits (--memory) on every production container and monitor host-level resource usage with docker stats and Prometheus node_exporter.
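That ~320MB headroom figure is just host RAM divided by container count. A quick sketch — the `headroom_mb` helper and the `my-app` image name are illustrative, not part of Docker:

```shell
# headroom_mb: hypothetical helper — given host RAM in GB and a container
# count, print the average memory headroom per container in MB.
headroom_mb() {
  echo $(( $1 * 1024 / $2 ))
}

headroom_mb 64 200    # 64GB host, 200 containers -> 327 MB each

# Cap each container so one leak cannot eat the shared headroom
# (image name is hypothetical):
#   docker run -d --memory=256m --memory-swap=256m --cpus=0.5 my-app:latest
# Setting --memory-swap equal to --memory disables swap, making the cap absolute.
```

With the cap in place, a leaking container is OOM-killed by itself instead of taking down its neighbours.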
🎯 Key Takeaway
Containers scale in seconds, VMs scale in minutes. Containers are 10-50x denser than VMs. Container patching is a single image rebuild; VM patching requires per-instance updates. VMs have an advantage in debugging (full OS) and networking (standard IP). Choose based on deployment frequency and operational maturity.
Operational Strategy Selection
If: Team deploys multiple times per day, needs fast scaling
→ Use: Containers. Sub-second startup, automated patching, fast rollbacks.
If: Team deploys weekly, has a dedicated ops team managing VMs
→ Use: VMs are acceptable. The operational overhead is amortized over longer deployment cycles.
If: Need to debug complex production issues interactively
→ Use: VMs have an advantage — full OS with all tools. For containers, use debug sidecars (nicolaka/netshoot).
If: Running 50+ services on shared infrastructure
→ Use: Containers with orchestration (Kubernetes, ECS). Density and automation advantages dominate.

The Hybrid Middle Ground: gVisor, Kata Containers, and Firecracker

The VM vs container debate is not binary. Three technologies provide hybrid approaches that combine the best of both worlds — at the cost of added complexity.

gVisor (Google): A user-space kernel (the Sentry) that intercepts container syscalls and implements them in Go. The container process never directly touches the host kernel: gVisor re-implements a large subset of the Linux syscall interface in user space, and the Sentry itself is confined by seccomp to a small set of host syscalls. This dramatically reduces the attack surface while maintaining container-like startup speed (1-2 seconds). The trade-off: 2-10% performance overhead and incomplete syscall compatibility (some applications do not work under gVisor).

Kata Containers: Runs each container in a lightweight VM with its own kernel. Provides VM-level isolation with container-like management (Docker, Kubernetes integration). Each Kata container is a microVM — it starts in 1-3 seconds and uses 20-50MB of overhead. The trade-off: higher overhead than standard containers but lower than full VMs.

Firecracker (AWS): A microVM technology designed for serverless workloads. AWS Lambda and Fargate use Firecracker to run each function in its own microVM. Firecracker starts a VM in 125ms with 5MB of memory overhead. The trade-off: limited device support (no GPU, no USB), designed for short-lived workloads, and requires KVM support.

When to use each:
  • gVisor: moderate-security multi-tenant workloads where syscall compatibility is acceptable
  • Kata Containers: high-security multi-tenant workloads requiring a real kernel per tenant
  • Firecracker: serverless platforms running short-lived, stateless functions
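These selection rules can be encoded as a tiny lookup so deploy scripts pick a runtime consistently. A sketch — the workload labels and the `pick_runtime` helper are our own convention, and the runtime names assume runtimes registered in `/etc/docker/daemon.json` as in the snippets below:

```shell
# pick_runtime: hypothetical helper mapping a workload class to a
# container runtime name registered with the Docker daemon.
pick_runtime() {
  case "$1" in
    trusted)      echo runc ;;         # standard containers, shared kernel
    multi-tenant) echo runsc ;;        # gVisor: user-space kernel
    compliance)   echo kata ;;         # Kata: real kernel per container
    serverless)   echo firecracker ;;  # microVMs for short-lived functions
    *)            echo runc ;;         # default to the standard runtime
  esac
}

pick_runtime multi-tenant   # prints: runsc

# Usage: docker run --runtime="$(pick_runtime multi-tenant)" --rm alpine:3.19 true
```

Centralizing the choice in one function keeps security policy out of individual service manifests.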

io/thecodeforge/hybrid_runtimes.sh · BASH
#!/bin/bash
# Configure and compare hybrid runtimes

# ── gVisor: user-space kernel ────────────────────────────────────────────────

# Install gVisor
(
  set -e
  ARCH=$(uname -m)
  URL="https://storage.googleapis.com/gvisor/releases/release/latest/${ARCH}"
  wget ${URL}/runsc ${URL}/runsc.sha512 \
    ${URL}/containerd-shim-runsc-v1 ${URL}/containerd-shim-runsc-v1.sha512
  sha512sum -c runsc.sha512 -c containerd-shim-runsc-v1.sha512
  rm -f *.sha512
  chmod a+rx runsc containerd-shim-runsc-v1
  sudo mv runsc containerd-shim-runsc-v1 /usr/local/bin
)

# Configure Docker to use gVisor
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "runtimes": {
    "runsc": {
      "path": "/usr/local/bin/runsc",
      "runtimeArgs": ["--platform=systrap"]
    }
  }
}
EOF
sudo systemctl restart docker

# Run a container with gVisor
docker run --runtime=runsc --rm alpine:3.19 uname -a
# Output shows gVisor kernel info instead of host kernel

# ── Kata Containers: lightweight VMs ────────────────────────────────────────

# Install Kata Containers (Ubuntu)
sudo apt install -y kata-runtime kata-proxy kata-shim

# Configure Docker to use Kata
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "runtimes": {
    "kata": {
      "path": "/usr/bin/kata-runtime"
    }
  }
}
EOF
sudo systemctl restart docker

# Run a container with Kata (it is actually a VM)
docker run --runtime=kata --rm alpine:3.19 dmesg | head -3
# Output shows a separate kernel — this is a VM, not a container

# ── Firecracker: microVMs for serverless ─────────────────────────────────────

# Download Firecracker
curl -LOJ https://github.com/firecracker-microvm/firecracker/releases/latest/download/firecracker-x86_64
chmod +x firecracker-x86_64
sudo mv firecracker-x86_64 /usr/local/bin/firecracker

# Create a microVM (requires kernel and rootfs)
firecracker --api-sock /tmp/firecracker.socket \
  --config-file io/thecodeforge/firecracker-config.json
# VM starts in ~125ms with 5MB overhead

# ── Compare startup times ────────────────────────────────────────────────────
echo '--- Standard container ---'
time docker run --rm alpine:3.19 echo done
# ~0.3s

echo '--- gVisor container ---'
time docker run --runtime=runsc --rm alpine:3.19 echo done
# ~0.5s (2x slower than standard, but still fast)

echo '--- Kata container (microVM) ---'
time docker run --runtime=kata --rm alpine:3.19 echo done
# ~1.5s (5x slower, but provides full kernel isolation)
▶ Output
# Standard container:
done
real 0m0.312s

# gVisor container:
done
real 0m0.543s

# Kata container (microVM):
done
real 0m1.487s
Mental Model
Hybrid Runtimes as Security Layers
Why would you choose gVisor over Kata Containers?
  • gVisor has lower overhead (2-10%) vs Kata (10-20%) because it does not run a full VM.
  • gVisor starts faster (~0.5s) vs Kata (~1.5s) because there is no VM boot process.
  • Kata provides stronger isolation (real kernel per tenant) but at higher cost.
  • Choose gVisor for moderate-security workloads. Choose Kata for high-security or compliance-driven workloads.
📊 Production Insight
AWS Lambda uses Firecracker microVMs to achieve both isolation and speed. Each Lambda function runs in its own microVM that starts in 125ms. This is the hybrid approach that proved the VM vs container debate is not binary — you can have near-container speed with near-VM isolation. If you are building a serverless platform, study Firecracker's architecture.
🎯 Key Takeaway
The VM vs container choice is not binary. gVisor provides a user-space kernel for moderate isolation with low overhead. Kata Containers provides full VM isolation with container-like management. Firecracker provides microVMs that start in 125ms. Choose the hybrid approach that matches your security requirements and performance budget.
🗂 Docker Containers vs Virtual Machines: Complete Comparison
Architecture, performance, security, and operational trade-offs.

| Aspect | Docker Containers | Virtual Machines | Hybrid (gVisor/Kata/Firecracker) |
| --- | --- | --- | --- |
| Isolation boundary | OS-level (namespaces, cgroups) | Hardware-level (hypervisor) | User-space kernel (gVisor) or microVM (Kata/Firecracker) |
| Kernel | Shared host kernel | Separate kernel per VM | User-space kernel (gVisor) or separate kernel (Kata/Firecracker) |
| Startup time | 0.3-2 seconds | 15-60 seconds (full boot), 1-5s (snapshot) | 0.5s (gVisor), 1.5s (Kata), 0.125s (Firecracker) |
| Memory overhead | 1-50MB per container | 512MB-2GB per VM (guest OS) | 5-50MB (gVisor), 20-50MB (Kata), 5MB (Firecracker) |
| CPU overhead | <2% | 5-15% | 2-10% (gVisor), 5-15% (Kata), 3-8% (Firecracker) |
| Disk I/O overhead | <5% (bind mount) | 10-30% (virtio), 50%+ (emulated) | 5-15% (gVisor), 10-20% (Kata) |
| Density (per 64GB host) | 100-200 containers | 10-15 VMs | 50-100 (gVisor), 30-60 (Kata), 100+ (Firecracker) |
| Security isolation | Good (seccomp, AppArmor) | Strong (separate kernel) | Strong (syscall filtering in gVisor; separate kernel in Kata/Firecracker) |
| Multi-tenant safe | No (shared kernel) | Yes (separate kernel) | Yes (all three) |
| Best for | Single-tenant microservices, CI/CD | Legacy apps, strong isolation, compliance | Multi-tenant SaaS, serverless, moderate security needs |

🎯 Key Takeaways

  • Containers share the host kernel — they start in milliseconds and use megabytes of memory. VMs have a separate kernel — they are heavier but provide stronger isolation.
  • The shared kernel is the fundamental security trade-off. A kernel CVE affects all containers on the host. For multi-tenant or untrusted workloads, use gVisor, Kata, or Firecracker.
  • Containers deliver near-native performance (<2% overhead). VMs add 5-15% overhead. The biggest gap is disk I/O — always use virtio drivers in VMs.
  • Containers scale in seconds, VMs scale in minutes. Containers are 10-50x denser. Choose based on deployment frequency and scaling requirements.
  • The VM vs container debate is not binary. gVisor, Kata Containers, and Firecracker provide hybrid approaches that combine container-like speed with VM-like isolation.

⚠ Common Mistakes to Avoid

  • ✕ Mistake 1: Running untrusted customer code in standard Docker containers — Symptom: attacker exploits a kernel CVE to escape the container and access the host — Fix: use gVisor (runsc) for moderate isolation or Kata Containers/Firecracker for full VM isolation. Never run untrusted code with direct host kernel access.
  • ✕ Mistake 2: Using VMs for everything because 'VMs are more secure' — Symptom: paying 10x more for infrastructure, slower scaling, and higher operational overhead — Fix: use containers for single-tenant workloads where you control the code and the patching cadence. Reserve VMs for multi-tenant or compliance-driven workloads.
  • ✕ Mistake 3: Not setting resource limits on containers in high-density deployments — Symptom: one misbehaving container consumes all host RAM, triggering OOM kills on unrelated containers — Fix: set --cpus and --memory on every production container. Monitor host-level resource usage.
  • ✕ Mistake 4: Not patching the host kernel on container hosts — Symptom: all containers on the host are vulnerable to kernel CVEs — Fix: automate kernel patching with unattended-upgrades or kexec-based live patching. Monitor kernel versions across all hosts.
  • ✕ Mistake 5: Using emulated disk drivers in VMs — Symptom: disk I/O is 50%+ slower than bare metal — Fix: always use virtio-blk or virtio-scsi drivers. Verify with lsblk -o NAME,TYPE,TRAN inside the VM.
  • ✕ Mistake 6: Choosing containers for workloads that need a specific kernel version — Symptom: application fails to load kernel modules or requires kernel features not available on the host — Fix: use VMs for workloads that need kernel-level customization. Containers share the host kernel and cannot run a different one.
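For Mistake 5, a quick in-guest check is possible without any extra tooling: virtio-blk disks follow the `/dev/vd*` naming convention, while emulated IDE/SATA disks appear as `/dev/sd*` or `/dev/hd*`. A sketch (output naturally depends on the hypervisor and drivers in use):

```shell
# Run inside the VM guest. virtio-blk disks show up as /dev/vda, /dev/vdb, ...
# Emulated IDE/SATA disks show up as /dev/sd* or /dev/hd* instead.
if ls /dev/vd* >/dev/null 2>&1; then
  echo "virtio-blk disks present"
else
  echo "no virtio-blk disks found - check for emulated drivers"
fi
```

If no `/dev/vd*` devices exist, cross-check with `lsblk -o NAME,TYPE,TRAN` before concluding the VM uses emulated storage, since virtio-scsi disks also appear as `/dev/sd*`.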

Interview Questions on This Topic

  • Q: Explain the fundamental architectural difference between Docker containers and VMs at the kernel level. How does this difference affect security, performance, and density?
  • Q: A SaaS platform runs untrusted customer code in Docker containers. A kernel CVE is discovered that allows container escape. What is the immediate risk, and what long-term architecture change would you recommend?
  • Q: Compare the performance overhead of containers vs VMs for CPU, memory, disk I/O, and network. Where is the biggest performance gap, and how would you mitigate it in a VM environment?
  • Q: When would you choose gVisor over standard Docker containers? When would you choose Kata Containers over gVisor? What are the trade-offs?
  • Q: Your team is deciding between containers and VMs for a new microservices platform. Walk me through the decision framework you would use, including security, performance, and operational considerations.
  • Q: AWS Lambda runs each function in a Firecracker microVM that starts in 125ms. How does this achieve both VM-level isolation and container-like startup speed?

Frequently Asked Questions

Are Docker containers less secure than VMs?

Containers and VMs have different security boundaries. VMs isolate at the kernel level — each VM has its own kernel, so a kernel vulnerability in one VM does not affect others. Containers share the host kernel — a kernel vulnerability affects all containers on that host. For single-tenant workloads where you control the code and patching, container isolation is sufficient. For multi-tenant or untrusted workloads, the shared kernel is an unacceptable attack surface — use gVisor, Kata Containers, or Firecracker.
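The shared kernel is easy to verify for yourself: a container reports the host's kernel version, not its own. A small sketch, guarded so it is a no-op on machines without Docker:

```shell
# The host's kernel version:
uname -r

# The same command inside a container prints the identical version,
# because containers share the host kernel. A VM would print its own
# guest kernel version here instead.
if command -v docker >/dev/null 2>&1; then
  docker run --rm alpine:3.19 uname -r
fi
```

Running the same comparison under `--runtime=runsc` or `--runtime=kata` (configured later in this article) shows a different kernel string, confirming those runtimes do not expose the host kernel.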

When should I use a VM instead of a Docker container?

Use VMs when: (1) you need full kernel isolation for security or compliance, (2) you are running untrusted code from multiple tenants, (3) the workload requires a specific kernel version or kernel modules, (4) the application requires a full OS environment with systemd, or (5) compliance auditors require a separate kernel per workload. Use containers for everything else — single-tenant microservices, CI/CD pipelines, developer environments, and stateless application workloads.

How much slower are VMs compared to containers?

VMs add 5-15% CPU overhead, 2-5% memory overhead, and 10-30% disk I/O overhead compared to containers. The startup time difference is the most dramatic: containers start in 0.3-2 seconds, VMs take 15-60 seconds. For most web applications serving less than 10K requests per second, the performance difference is negligible. The difference matters for high-throughput, I/O-intensive, or latency-sensitive workloads.

What is gVisor and when should I use it?

gVisor is a user-space kernel that intercepts container syscalls and implements them in Go, preventing direct access to the host kernel. It adds 2-10% overhead but dramatically reduces the attack surface. Use gVisor when you need stronger isolation than standard containers but cannot afford the overhead of full VMs. It is ideal for moderate-security multi-tenant workloads where syscall compatibility is acceptable.

Can I run Docker containers inside a VM?

Yes — this is a common pattern called 'containers on VMs.' You run Docker on a VM to combine VM-level isolation (separate kernel per VM) with container-level density and speed (many containers per VM). Cloud providers (AWS ECS, Google Cloud Run) use this pattern extensively. The VM provides the security boundary; the containers provide the operational efficiency.

🔥 Naren, Founder & Author

Developer and founder of TheCodeForge. I built this site because I was tired of tutorials that explain what to type without explaining why it works. Every article here is written to make concepts actually click.

Forged with 🔥 at TheCodeForge.io — Where Developers Are Forged