
Kubernetes Network Policies: Zero-Trust Segmentation in Production

📍 Part of: Kubernetes → Topic 11 of 12
Kubernetes Network Policies explained deeply — how they work internally, CNI enforcement, ingress/egress rules, real YAML examples, and production gotchas you won't find elsewhere.
🔥 Advanced — solid DevOps foundation required
In this tutorial, you'll learn
  • The Kubernetes API server stores Network Policies but never enforces them — enforcement lives entirely in your CNI plugin. If your CNI doesn't support policies (Flannel), they are silently ignored.
  • An empty podSelector in a NetworkPolicy matches ALL Pods in the namespace — not zero Pods. This is how you write a namespace-wide default-deny baseline.
  • Multiple from/to entries are ORed together. Fields within a single entry are ANDed. This YAML indentation distinction determines your actual security posture and produces no errors when wrong.
Quick Answer
  • Enforcement is done by the CNI plugin (Calico, Cilium), NOT the API server. Flannel ignores policies silently.
  • Default behavior: if no policy selects a Pod, ALL traffic is allowed in both directions.
  • Once any policy selects a Pod, that direction enters implicit default-deny. Only explicitly whitelisted traffic passes.
  • Policies are additive whitelists. There is no deny rule in the standard API. Multiple policies selecting the same Pod are unioned (OR).
  • iptables-based CNIs (Calico) scale O(n) with rule count. Performance degrades at 1000+ Pods.
  • eBPF-based CNIs (Cilium) scale O(1) with hash maps. Better performance but requires kernel 4.9+.
  • Forgetting DNS egress carve-out when applying default-deny egress. Every service discovery call silently times out after 30 seconds.
🚨 START HERE
Network Policy Triage Commands
Rapid commands to isolate Network Policy enforcement issues.
🟡 Connection timeout between Pods.
Immediate Action: Test connectivity directly from the source Pod to the destination Pod IP.
Commands
kubectl exec -n <src-ns> <src-pod> -- curl -s --max-time 3 http://<dst-pod-ip>:<port>/health
kubectl get networkpolicy -n <dst-ns> -o yaml | grep -A 20 podSelector
Fix Now: If curl times out, a NetworkPolicy is blocking the traffic. Check whether the source Pod's labels match the policy's from selector. If no policy exists, check whether the CNI supports NetworkPolicy.
🟡 DNS resolution failing (30-second timeouts).
Immediate Action: Check for egress policies blocking port 53.
Commands
kubectl exec -n <ns> <pod> -- nslookup kubernetes.default.svc.cluster.local
kubectl get networkpolicy -n <ns> -o json | jq '.items[] | select(.spec.policyTypes[]=="Egress") | {name: .metadata.name, egressRules: .spec.egress}'
Fix Now: If DNS fails and an egress policy exists without a port 53 rule, add a DNS carve-out targeting kube-system/kube-dns on UDP/TCP 53.
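The carve-out that fix refers to looks like this — a sketch assuming CoreDNS runs in kube-system with the standard k8s-app: kube-dns label (swap the placeholder namespace for the one you are triaging):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: <ns>                  # the namespace being triaged
spec:
  podSelector: {}                  # every Pod in the namespace gets DNS back
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```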
🟡 Policy applied but not enforced.
Immediate Action: Verify the CNI supports NetworkPolicy.
Commands
kubectl get pods -n kube-system | grep -E 'calico|cilium|weave|antrea'
kubectl get networkpolicy -n <ns> -o json | jq '.items[].spec.podSelector'
Fix Now: If no policy-enforcing CNI is present, install Calico or Cilium. If a CNI is present, check that the podSelector matches actual Pod labels.
🟡 Traffic from unexpected source reaching Pod.
Immediate Action: Check for a missing default-deny baseline.
Commands
kubectl get networkpolicy -n <ns> -o json | jq '.items[] | select(.spec.podSelector=={})'
kubectl get networkpolicy -n <ns> -o json | jq '.items[] | select(.spec.podSelector.matchLabels.app=="<target-app>") | .metadata.name'
Fix Now: If no default-deny policy exists, apply one. If a policy exists but the Pod's labels don't match any policy's podSelector, the Pod is ungoverned and all traffic is allowed.
🟡 AND vs OR logic confusion in policy.
Immediate Action: Inspect the from/to entries and their indentation.
Commands
kubectl get networkpolicy <name> -n <ns> -o yaml | grep -A 30 'ingress\|egress'
kubectl exec -n <wrong-ns> <wrong-pod> -- curl -s --max-time 3 http://<target>:<port>/health
Fix Now: Same dash (-) entry with multiple fields = AND. Separate dash entries = OR. Restructure the YAML to match the intended access matrix.
Production Incident
Default-Deny Egress Without DNS Carve-Out: Cluster-Wide Service Discovery Failure
A platform team applied default-deny egress to all production namespaces during a security hardening sprint. Within 15 minutes, every service in the cluster began returning 503 errors. No Pods crashed. No OOMKills. No CPU spikes. The cluster appeared healthy from every observability angle except actual traffic flow.
Symptom: All services returned HTTP 503 or connection timeout errors. Readiness probes passed (they used localhost or IP addresses). Application logs showed 'could not resolve host' and 'DNS resolution failed' errors with 30-second delays per request. Prometheus targets all showed DOWN. No Kubernetes events were generated.
Assumption: A bad deployment was pushed that broke the application code. Or CoreDNS was down.
Root cause: The team applied a default-deny NetworkPolicy with both Ingress and Egress in policyTypes. The policy had no egress rules defined, which meant all outbound traffic was blocked — including UDP and TCP port 53 to CoreDNS. Every Pod in every namespace could no longer resolve DNS names. Services that connected to other services by DNS name (e.g., postgres.payments.svc.cluster.local) failed immediately. Services that connected by IP address continued to work, which made the failure pattern appear random.
Fix:
  1. Immediately patched every default-deny policy to include a DNS egress carve-out to CoreDNS on UDP/TCP port 53.
  2. Created a namespace bootstrap template that always includes the DNS carve-out as a non-removable base rule.
  3. Added a CI check that validates every NetworkPolicy with Egress in policyTypes has at least one egress rule targeting kube-system/kube-dns on port 53.
  4. Documented the DNS trap in the team's runbook with a bold warning.
Key Lesson
  • Default-deny egress blocks DNS by default. Always add a carve-out for CoreDNS on UDP and TCP port 53.
  • DNS failure manifests as 30-second timeouts, not immediate errors. This makes it look like a latency problem, not a connectivity problem.
  • Readiness probes that use localhost or IP addresses pass even when DNS is broken. Use DNS-based probes to catch this.
  • Test default-deny egress in staging with a curl-based smoke test before applying to production.
  • CI validation of NetworkPolicy completeness prevents this class of incident entirely.
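The CI check from fix step 3 can be sketched with jq — illustrative only: the inline sample stands in for rendered manifest JSON, and a real pipeline would loop over every policy and exit non-zero on failure:

```shell
#!/usr/bin/env bash
# Sketch of the CI validation described above (assumes jq is installed).
# In a real pipeline the JSON would come from rendered manifests or
# `kubectl get networkpolicy -o json`; an inline sample stands in here.
set -eu

policy='{
  "metadata": {"name": "default-deny-all-traffic"},
  "spec": {"podSelector": {}, "policyTypes": ["Ingress", "Egress"]}
}'

# Does the policy govern egress at all?
governs_egress=$(echo "$policy" | jq '.spec.policyTypes | index("Egress") != null')

# Does any egress rule open port 53?
has_dns_rule=$(echo "$policy" | jq \
  '[.spec.egress // [] | .[] | .ports // [] | .[] | select(.port == 53)] | length > 0')

if [ "$governs_egress" = "true" ] && [ "$has_dns_rule" = "false" ]; then
  echo "FAIL: $(echo "$policy" | jq -r .metadata.name) governs Egress with no port-53 carve-out"
  # a real pipeline would exit 1 here
else
  echo "PASS"
fi
```

Run against the incident's policy — Egress in policyTypes, no egress rules at all — this prints the FAIL line, which is exactly the class of manifest the check exists to catch.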
Production Debug Guide
Symptom-first investigation path for Network Policy issues in production.
  • Connection timeout between two Pods that should be able to communicate. Check if a NetworkPolicy selects the destination Pod. If yes, verify the source Pod's labels match the policy's podSelector, and that namespace labels match the namespaceSelector. Test with a throwaway curl Pod from the source namespace.
  • All Pods in a namespace have 30-second DNS resolution delays. Check for a default-deny egress policy without a DNS carve-out. Verify CoreDNS is reachable from the Pod: kubectl exec into the Pod and run 'nslookup kubernetes.default.svc.cluster.local'. If it times out, the egress policy is blocking UDP/TCP 53.
  • NetworkPolicy applied but traffic still flows freely. Verify the CNI supports NetworkPolicy: run "kubectl get pods -n kube-system | grep -E 'calico|cilium|weave|antrea'". If only Flannel is present, policies are silently ignored. If a policy-enforcing CNI is present, check that the policy's podSelector actually matches running Pods.
  • Traffic blocked from a source that should be allowed. Check AND vs OR logic in the from/to selectors. Fields within the same list entry are ANDed; separate entries are ORed. A common mistake is putting podSelector and namespaceSelector in the same entry (AND) when they should be separate entries (OR).
  • Intermittent connectivity between Pods after applying NetworkPolicy. Check if Pods have the expected labels. A deployment rollout may have created new Pods with different labels. Run 'kubectl get pods -n <ns> --show-labels' and compare against the policy's podSelector. Label mismatches cause silent default-deny.

Most teams get Kubernetes running, deploy their apps, and move on — never realizing their payment service can freely dial their logging sidecar, which can freely dial their database, which can freely reach the internet. That's not paranoia; that's the default. Kubernetes was designed for rapid connectivity, not zero-trust isolation. The moment you run multiple tenants, compliance workloads, or anything that touches PII or financial data, that open-door model becomes a liability.

Network Policies solve this by letting you express intent in YAML: only Pods with this label may reach my database on port 5432, from this namespace only, and my database can reach nothing outbound except DNS. The CNI plugin — not the Kubernetes API server — enforces those rules in the kernel using iptables, eBPF, or nftables depending on your stack. That distinction matters enormously for debugging and performance.

This is not a syntax reference. It covers how policies are evaluated and merged, how to write airtight ingress and egress rules without accidentally blackholing DNS, how to verify enforcement at the network level rather than trusting your YAML applied cleanly, and the production mistakes that silently leave clusters wide open.

How Network Policy Enforcement Actually Works — The CNI Layer

Here's the thing most tutorials skip: the Kubernetes API server doesn't enforce Network Policies. It just stores them. The actual enforcement happens inside your CNI plugin — Calico, Cilium, Weave, Antrea — which watches the API server for NetworkPolicy objects and translates them into kernel-level firewall rules on each node.

With Calico on older kernels, that means iptables chains per endpoint. With Cilium, it's eBPF programs loaded into the kernel that intercept packets at the socket layer before they ever hit iptables — significantly lower latency and dramatically better observability. With Flannel, enforcement is zero because Flannel doesn't implement Network Policies at all. This is one of the most common production surprises: a team applies policies and believes they're enforced, but their CNI silently ignores them.

Policy evaluation works like a firewall whitelist. If no NetworkPolicy selects a Pod, all traffic is allowed. The moment any policy selects a Pod — via podSelector — that Pod enters an implicit 'default deny' for the traffic directions that policy governs. Multiple policies selecting the same Pod are unioned together: a packet is allowed if it matches any one of them. There's no precedence, no ordering, no 'deny' rule type in the core API. You get whitelisting only, which is both a simplicity win and a constraint you need to design around.

default-deny-all.yaml · YAML
# STEP 1: Apply a default-deny baseline to a namespace.
# This selects ALL pods in the namespace (empty podSelector matches everything)
# and specifies BOTH policyTypes — so both ingress and egress are now default-deny.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all-traffic
  namespace: payments
spec:
  podSelector: {}              # Empty selector = matches every Pod in this namespace
  policyTypes:
    - Ingress                  # Explicitly govern inbound traffic
    - Egress                   # Explicitly govern outbound traffic
  # No ingress or egress rules defined here — that's intentional.
  # The absence of rules under a governed policyType means: deny everything.
  # This is your zero-trust baseline. Now you add back only what you need.
▶ Output
networkpolicy.networking.k8s.io/default-deny-all-traffic created
Mental Model
Flannel Won't Enforce Anything
Applying a NetworkPolicy on a Flannel cluster is like putting a sign on an unlocked door. The sign is there. The door is still open.
  • Flannel: Provides networking only. No NetworkPolicy enforcement. Zero.
  • Calico: Full NetworkPolicy support via iptables or eBPF (with Calico CNI).
  • Cilium: Full NetworkPolicy support via eBPF. Extended CRDs for L7 policies.
  • Weave: NetworkPolicy support but less performant than Calico/Cilium.
  • Antrea: VMware's CNI with full NetworkPolicy support and traceflow debugging.
📊 Production Insight
The most dangerous state is a cluster that has a policy-enforcing CNI but where the CNI's policy controller is not running. Calico's calico-kube-controllers and Cilium's operator must be healthy for policies to be translated into kernel rules. If the controller crashes, existing rules remain (they are already in the kernel), but new policies are not applied and deleted policies are not removed. Monitor the CNI controller's health as critical infrastructure.
🎯 Key Takeaway
The API server stores Network Policies. The CNI enforces them. Flannel ignores them entirely. Always verify your CNI supports enforcement before trusting any policy. Monitor the CNI controller as critical infrastructure.

Writing Precise Ingress and Egress Rules — With the DNS Trap Explained

Once you've applied default-deny, you need to surgically re-open only the traffic paths your application legitimately needs. Ingress rules control what can reach your Pod. Egress rules control what your Pod can reach. Both use the same selector primitives: podSelector, namespaceSelector, and ipBlock, which you can combine with AND logic inside a single from/to entry, or use OR logic across multiple entries.

The subtlety that burns everyone: a from entry with both podSelector AND namespaceSelector means the source must match BOTH selectors simultaneously — it's an AND. Two separate from entries each with their own selector is an OR. The indentation in YAML is load-bearing here. Get it wrong and you either over-permit or under-permit with no error from the API server.
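Side by side, the two shapes look like this (the selectors are illustrative fragments, not a complete policy):

```yaml
# AND — ONE list entry, TWO fields: the source must be in namespace
# 'web' AND carry the label app=frontend.
ingress:
  - from:
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: web
        podSelector:
          matchLabels:
            app: frontend
---
# OR — TWO list entries: the source may be ANY Pod in namespace 'web',
# OR any Pod labeled app=frontend in the policy's own namespace.
ingress:
  - from:
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: web
      - podSelector:
          matchLabels:
            app: frontend
```

The only textual difference is one extra dash and two spaces of indentation, yet the second form admits far more sources than the first.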

The DNS trap is equally nasty. When you lock down egress, your Pods immediately lose DNS resolution because they can no longer reach CoreDNS on port 53 UDP/TCP. Every connection attempt fails not with a 'connection refused' but with a timeout waiting for DNS — which takes 30 seconds to surface. Always add an explicit egress rule for CoreDNS as part of your default-deny rollout, or you'll wonder why your app is broken when your network policy looks correct.

api-server-network-policy.yaml · YAML
# This policy governs the 'api-server' Pods in the 'payments' namespace.
# It allows:
#   INGRESS: Only from Pods labeled 'app: frontend' in the 'web' namespace
#   EGRESS:  Only to the PostgreSQL database pods on port 5432
#            AND to CoreDNS on port 53 (critical — without this, DNS breaks)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-server-traffic-rules
  namespace: payments
spec:
  podSelector:
    matchLabels:
      app: api-server
  policyTypes:
    - Ingress
    - Egress

  ingress:
    - from:
        # AND logic: source must be in namespace 'web' AND have label 'app: frontend'
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: web
          podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080

  egress:
    # Rule 1: Allow outbound to PostgreSQL pods only
    - to:
        - podSelector:
            matchLabels:
              app: postgres-primary
      ports:
        - protocol: TCP
          port: 5432

    # Rule 2: Allow DNS resolution — NEVER omit this in an egress-restricted policy
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
▶ Output
networkpolicy.networking.k8s.io/api-server-traffic-rules created
Mental Model
AND vs OR is Indentation-Deep
Draw your intended access matrix on paper before writing YAML. The indentation determines your security posture and produces no errors when wrong.
  • Same dash entry with podSelector AND namespaceSelector: source must match BOTH (AND).
  • Separate dash entries with podSelector OR namespaceSelector: source can match EITHER (OR).
  • No ingress/egress rules at all under a governed policyType: deny all for that direction.
  • But an individual rule whose from/to list is empty or missing (from: []) matches ALL sources for that rule's ports — it allows, not denies. Don't confuse an empty rule list with an empty from list.
  • ipBlock can be combined with podSelector/namespaceSelector in the same entry (AND).
📊 Production Insight
The ipBlock selector is the only way to restrict traffic to external IPs (non-Pod destinations). However, ipBlock rules interact poorly with NAT. If your Pods go through a NAT gateway to reach the internet, the source IP seen by the destination is the NAT gateway's IP, not the Pod's IP. Conversely, for inbound traffic from outside the cluster via LoadBalancer/NodePort, the source IP seen by the Pod is the node's IP (unless externalTrafficPolicy: Local). Always test ipBlock rules with the actual traffic path, not a theoretical one.
🎯 Key Takeaway
AND vs OR is determined by YAML indentation — same dash = AND, separate dashes = OR. The DNS trap is the most common production incident when applying default-deny egress. Always include a CoreDNS carve-out on UDP and TCP port 53.

Verifying Real Enforcement and Debugging Policy Failures in Production

Applying a NetworkPolicy and assuming it works is a mistake you only make once in production. The API server accepts any syntactically valid policy regardless of whether your CNI supports it. You need to verify enforcement at the traffic level, not the YAML level.

The gold-standard test is running a temporary Pod in the source namespace and attempting a connection directly — not through a Service mesh or load balancer that might bypass node-level rules. Use kubectl run with --rm -it to spin up a throwaway Pod, then use curl, nc, or wget to probe the target. A dropped connection times out; a policy-permitted connection either succeeds or returns an application-level error (which is actually what you want to see — it means the packet reached the target).

For Cilium clusters, cilium monitor and the Hubble UI are exceptionally powerful — they show you in real time which policies matched or dropped each flow, with source/destination Pod identity, namespace, and labels. For Calico clusters, calicoctl get networkpolicy and iptables -L -n --line-numbers on the node running your Pod reveal the actual enforced rules. Always test both directions — a policy that allows egress from Pod A to Pod B doesn't automatically allow ingress to Pod B from Pod A unless Pod B also has a matching ingress rule.

verify-network-policy-enforcement.sh · BASH
#!/usr/bin/env bash
# Verify Network Policy enforcement empirically.
# Tests both allowed and blocked paths.

set -euo pipefail

TARGET_NAMESPACE="payments"
TARGET_SERVICE="api-server"
TARGET_PORT="8080"
ALLOWED_SOURCE_NAMESPACE="web"
BLOCKED_SOURCE_NAMESPACE="monitoring"
CONNECT_TIMEOUT_SECONDS="3"

echo "=== Network Policy Enforcement Verification ==="
echo ""

# Test 1: Allowed path
echo "[TEST 1] Allowed ingress: frontend (web) -> api-server (payments)"
ALLOWED_RESULT=$(kubectl run policy-test-allowed \
  --namespace="$ALLOWED_SOURCE_NAMESPACE" \
  --image=curlimages/curl:8.5.0 \
  --restart=Never --rm --quiet -it \
  -- curl --silent --max-time "$CONNECT_TIMEOUT_SECONDS" \
       --output /dev/null --write-out "%{http_code}" \
       "http://${TARGET_SERVICE}.${TARGET_NAMESPACE}.svc.cluster.local:${TARGET_PORT}/health" \
  2>/dev/null || echo "FAILED")

if [ "$ALLOWED_RESULT" = "200" ]; then
  echo "  PASS: HTTP 200 received"
else
  echo "  FAIL: Expected HTTP 200, got '$ALLOWED_RESULT'"
fi

echo ""

# Test 2: Blocked path
echo "[TEST 2] Blocked ingress: prometheus (monitoring) -> api-server (payments)"
BLOCKED_RESULT=$(kubectl run policy-test-blocked \
  --namespace="$BLOCKED_SOURCE_NAMESPACE" \
  --image=curlimages/curl:8.5.0 \
  --restart=Never --rm --quiet -it \
  -- curl --silent --max-time "$CONNECT_TIMEOUT_SECONDS" \
       --output /dev/null --write-out "%{http_code}" \
       "http://${TARGET_SERVICE}.${TARGET_NAMESPACE}.svc.cluster.local:${TARGET_PORT}/health" \
  2>/dev/null || echo "TIMEOUT")

if [ "$BLOCKED_RESULT" = "TIMEOUT" ] || [ "$BLOCKED_RESULT" = "000" ]; then
  echo "  PASS: Connection timed out — blocked source is correctly dropped"
else
  echo "  FAIL: Expected timeout, got '$BLOCKED_RESULT' — policy NOT enforced!"
fi

echo ""
echo "=== Verification Complete ==="
▶ Output
=== Network Policy Enforcement Verification ===

[TEST 1] Allowed ingress: frontend (web) -> api-server (payments)
PASS: HTTP 200 received

[TEST 2] Blocked ingress: prometheus (monitoring) -> api-server (payments)
PASS: Connection timed out — blocked source is correctly dropped

=== Verification Complete ===
Mental Model
Timeout vs Refused
Timeout = CNI drop (policy working). Refused = application rejection (policy NOT working). This distinction is critical for debugging.
  • Timeout (after 3-30s): Packet was dropped by the CNI. NetworkPolicy is enforcing correctly.
  • Connection refused (immediate): Packet reached the target process. NetworkPolicy is NOT blocking this path.
  • HTTP 200: Packet reached the application and got a valid response. Policy allows this traffic.
  • HTTP 5xx: Packet reached the application but the app returned an error. Policy allows, app has issues.
  • DNS timeout (30s): UDP 53 to CoreDNS is blocked. Check egress rules for DNS carve-out.
📊 Production Insight
Testing Network Policies in CI/CD is essential. Create a pipeline stage that deploys the NetworkPolicy, runs the verification script, and fails the pipeline if enforcement is incorrect. Use a dedicated test namespace with known source and destination Pods. This catches policy regressions before they reach production. For Cilium, integrate Hubble flow logs into your CI to verify policy verdicts programmatically.
🎯 Key Takeaway
Never trust that a NetworkPolicy is enforced without empirical testing. Timeout = CNI drop (correct). Refused = application rejection (policy not working). Test both allowed and blocked paths. Automate verification in CI/CD.

Production Patterns: Namespace Isolation, Monitoring Carve-outs and Label Hygiene

In a real multi-tenant cluster, you can't write policies Pod-by-Pod. You need namespace-scoped baselines combined with additive per-workload rules. The pattern that works at scale is: one default-deny policy per namespace applied by your CD pipeline at namespace creation, then application-specific policies delivered alongside each Helm chart or Kustomize overlay.

Monitoring is the most common carve-out needed. Prometheus needs to scrape metrics from every namespace, but you don't want to globally allow all ingress. The clean solution is a Pod label like monitoring.io/expose-metrics: 'true' and a policy in each target namespace that allows ingress from the monitoring namespace on whatever your metrics port is. This keeps control local to the target namespace.

Label hygiene is non-negotiable. Network Policies inherit whatever labels your Pods have — if a developer changes a label during a refactor, the policy selector silently stops matching and the Pod falls back to default-deny behavior with no warning event. Use immutable labels like app: payment-api for security selectors and mutable labels like version: v2 only for routing. Audit your selectors in CI with kubectl get pods -l app=api-server -n payments and fail the pipeline if the expected count is zero.
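The CI audit just described can be sketched as a self-contained check (jq assumed; the inline pod list stands in for `kubectl get pods -n payments -o json` output in a real pipeline):

```shell
#!/usr/bin/env bash
# Sketch of the CI selector audit described above (assumes jq).
set -eu

# Inline sample standing in for a live `kubectl get pods -o json` dump.
pods_json='{"items":[
  {"metadata":{"name":"api-server-7d4f","labels":{"app":"api-server","version":"v2"}}},
  {"metadata":{"name":"worker-1","labels":{"app":"worker"}}}
]}'

expected="api-server"
count=$(echo "$pods_json" | jq --arg app "$expected" \
  '[.items[] | select(.metadata.labels.app == $app)] | length')

if [ "$count" -eq 0 ]; then
  echo "FAIL: selector app=$expected matches zero Pods — the policy is silently not applying"
  # a real pipeline would exit 1 here
else
  echo "PASS: $count Pod(s) match app=$expected"
fi
```

Note the audit keys only on the immutable app label; the mutable version label is deliberately ignored, so a v2→v3 rollout cannot break the check.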

prometheus-scrape-carveout.yaml · YAML
# Prometheus scrape carve-out for the 'payments' namespace.
# Complements the default-deny-all policy already in place.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-prometheus-scrape
  namespace: payments
  labels:
    policy-type: monitoring-carveout
    managed-by: platform-team
spec:
  podSelector:
    matchLabels:
      monitoring.io/expose-metrics: "true"
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: monitoring
          podSelector:
            matchLabels:
              app: prometheus
      ports:
        - protocol: TCP
          port: 8080
▶ Output
networkpolicy.networking.k8s.io/allow-prometheus-scrape created
Mental Model
Audit Policy Coverage with a One-Liner
Label mismatches are the silent security gap. A Pod without the expected label falls through every policy selector and reverts to allow-all or default-deny — neither of which is your intended posture.
  • Security labels (app, tier, team) should be immutable. Enforce with admission webhooks.
  • Routing labels (version, canary, blue-green) should NOT be used in NetworkPolicy selectors.
  • CI check: fail the pipeline if kubectl get pods -l app=<name> returns zero Pods.
  • Namespace labels (kubernetes.io/metadata.name) are auto-applied in Kubernetes 1.21+. Use them for namespaceSelector.
  • Adopt a naming convention: all NetworkPolicy names should include the namespace and workload they govern.
📊 Production Insight
Namespace isolation at scale requires a namespace provisioning pipeline that automatically applies default-deny, DNS carve-out, and monitoring carve-out as non-removable base policies. Use Kyverno or OPA Gatekeeper to enforce that every namespace has a default-deny policy and that no Pod exists without the required security labels. This shifts security left and prevents configuration drift.
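As a sketch of what that provisioning automation can look like, a Kyverno generate rule can stamp the default-deny baseline into every new namespace — illustrative only; verify the rule syntax against the Kyverno version you run:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-deny          # illustrative name
spec:
  rules:
    - name: default-deny-on-namespace-create
      match:
        any:
          - resources:
              kinds:
                - Namespace
      generate:
        apiVersion: networking.k8s.io/v1
        kind: NetworkPolicy
        name: default-deny-all-traffic
        namespace: "{{request.object.metadata.name}}"
        synchronize: true         # re-create the policy if someone deletes it
        data:
          spec:
            podSelector: {}
            policyTypes:
              - Ingress
              - Egress
```

The same pattern extends to the DNS and monitoring carve-outs, giving every namespace the full base set at creation time.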
🎯 Key Takeaway
Namespace-scoped default-deny plus per-workload additive rules is the production pattern. Monitoring carve-outs keep Prometheus working without broad ingress. Label hygiene is non-negotiable — enforce immutable security labels with admission webhooks.

Network Policy Performance: iptables vs eBPF at Scale

The CNI enforcement mechanism directly impacts network latency and control plane load. Understanding the performance characteristics of your CNI is critical for capacity planning and troubleshooting latency issues that appear only at scale.

network-policy-performance-check.sh · BASH
#!/usr/bin/env bash
# Check NetworkPolicy rule count and CNI performance characteristics.
# Run on a node with calicoctl or cilium CLI installed.

set -euo pipefail

# For Calico: count iptables chains and rules on this node
echo "=== Calico iptables Rule Count ==="
if command -v calicoctl &> /dev/null; then
  # Note: `iptables -L` without --line-numbers prints rules that do NOT
  # start with a digit, so number the lines before counting them.
  echo "Total iptables chains on this node: $(iptables -L -n | grep -c '^Chain')"
  echo "Total iptables rules on this node: $(iptables -L -n --line-numbers | grep -c '^[0-9]')"
else
  echo "calicoctl not found — skipping Calico check"
fi

echo ""

# For Cilium: check eBPF policy programs
echo "=== Cilium eBPF Policy Status ==="
if command -v cilium &> /dev/null; then
  cilium status
  echo ""
  echo "Policy verdicts (5-second sample):"
  # cilium monitor streams forever — bound it so the script completes.
  timeout 5 cilium monitor --type policy-verdict || true
else
  echo "cilium CLI not found — skipping Cilium check"
fi

echo ""

# General: check for NetworkPolicy count per namespace
echo "=== NetworkPolicy Distribution ==="
for ns in $(kubectl get namespaces -o jsonpath='{.items[*].metadata.name}'); do
  count=$(kubectl get networkpolicy -n "$ns" --no-headers 2>/dev/null | wc -l)
  if [ "$count" -gt 0 ]; then
    echo "  $ns: $count policies"
  fi
done
▶ Output
=== Calico iptables Rule Count ===
Total iptables chains on this node: 847
Total iptables rules on this node: 12403

=== NetworkPolicy Distribution ===
payments: 12 policies
web: 8 policies
monitoring: 3 policies
Mental Model
iptables O(n) vs eBPF O(1)
  • iptables (Calico default): Sequential rule matching. Degrades at 1000+ Pods per node.
  • eBPF (Cilium, Calico with eBPF dataplane): Hash map lookups — cost stays flat as rule count grows.
  • iptables rule churn: Every policy change triggers iptables-restore on all nodes. Brief packet drops possible during restore.
  • eBPF program updates: Atomic program replacement. No packet drops during policy updates.
  • Kernel requirement: eBPF requires kernel 4.9+ minimum. Full features require 5.10+.
📊 Production Insight
At scale (500+ Pods per node), iptables-based enforcement causes measurable latency increases and CPU overhead from rule traversal. The iptables-restore operation during policy updates can cause brief packet drops (1-5ms). For latency-sensitive workloads, use Cilium's eBPF dataplane. Monitor cilium_datapath_conntrack_gc_entries and iptables_restore_duration_seconds to detect enforcement bottlenecks.
🎯 Key Takeaway
iptables enforcement scales O(n) with rule count. eBPF enforcement scales O(1) with hash maps. At 500+ Pods per node, the difference is measurable. For latency-sensitive workloads, use Cilium. Monitor enforcement overhead as part of capacity planning.
🗂 CNI Plugin Network Policy Enforcement Comparison
Understanding how different CNI plugins enforce Network Policies and their production trade-offs.
Aspect | Calico (iptables mode) | Calico (eBPF mode) | Cilium (eBPF mode) | Flannel
Policy enforcement layer | iptables chains per endpoint | eBPF programs at TC layer | eBPF programs at socket/TC layer | None — ignores all policies
Observability | iptables rule counters, calicoctl | calicoctl, BPF map inspection | Hubble UI, per-flow policy verdict logging | N/A
Performance at scale | O(n) rule matching — degrades at 1000+ Pods | O(1) hash maps — flat lookup cost | O(1) hash maps — flat lookup cost | N/A
Layer 7 policies | Not supported in core API | Not supported in core API | Supported natively (HTTP method, path, gRPC) | N/A
DNS-based egress | Requires GlobalNetworkPolicy (proprietary) | Requires GlobalNetworkPolicy (proprietary) | Built-in DNS-aware egress | N/A
Kernel requirement | Any Linux kernel | Kernel 5.10+ | Kernel 4.9+ (5.10+ for full features) | Any Linux kernel
NetworkPolicy API support | Full compliance | Full compliance | Full compliance + extended CRDs | No support
Packet drop behavior | iptables DROP — silent timeout | eBPF DROP — silent timeout | eBPF DROP — silent timeout, Hubble shows it | N/A — packets always pass
Policy update mechanism | iptables-restore — brief packet drops possible | Atomic BPF program replacement | Atomic BPF program replacement | N/A
Production maturity | Battle-tested since 2016 | Maturing — GA in Calico v3.13+ | Rapidly maturing — preferred for new clusters 2022+ | Legacy — not recommended for production with security requirements

🎯 Key Takeaways

  • The Kubernetes API server stores Network Policies but never enforces them — enforcement lives entirely in your CNI plugin. If your CNI doesn't support policies (Flannel), they are silently ignored.
  • An empty podSelector in a NetworkPolicy matches ALL Pods in the namespace — not zero Pods. This is how you write a namespace-wide default-deny baseline.
  • Multiple from/to entries are ORed together. Fields within a single entry are ANDed. This YAML indentation distinction determines your actual security posture and produces no errors when wrong.
  • Always include a DNS egress carve-out (UDP+TCP port 53 to kube-dns) before rolling out default-deny egress, or every service discovery call will silently time out after 30 seconds.
  • iptables enforcement scales O(n) with rule count. eBPF enforcement scales O(1). At 500+ Pods per node, the difference is measurable. Choose your CNI accordingly.
  • Label hygiene is non-negotiable. Use immutable labels for security selectors. Enforce with admission webhooks. Audit in CI.
  • Test Network Policies empirically with curl-based smoke tests. Timeout = the CNI dropped the packet (policy enforced). Connection refused = the packet reached the target and was rejected by the host (policy not enforced).
  • Namespace-scoped default-deny plus per-workload additive rules is the production pattern. Automate namespace provisioning with base policies.
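
The baseline pattern from these takeaways fits in one small manifest. A sketch (the policy applies to whichever namespace it is created in):

```yaml
# Namespace-wide default-deny baseline: the empty podSelector matches
# ALL Pods in the namespace, and listing both policyTypes puts both
# directions into implicit default-deny.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}    # empty selector = every Pod in this namespace
  policyTypes:
  - Ingress          # list BOTH directions, or egress stays wide open
  - Egress
```

From here, each workload gets additive allow rules layered on top — remember to include the DNS and API-server carve-outs before applying this to a live namespace.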

⚠ Common Mistakes to Avoid

  • Specifying only 'Ingress' in policyTypes when you meant to lock down both directions. The Pod is half-locked: it can't be reached but can freely dial any outbound destination, including the internet. Fix: always list both 'Ingress' and 'Egress' in policyTypes for default-deny baselines.
  • Forgetting the DNS egress carve-out when enabling default-deny egress. App Pods appear healthy per readiness checks but fail on real traffic with 30-second delays. Fix: always include an egress rule to kube-system Pods labeled k8s-app=kube-dns on UDP port 53 AND TCP port 53.
  • Using AND logic when OR logic is needed (or vice versa) in from/to selectors. No error is produced — the policy applies cleanly but does the wrong thing. Fix: fields within a single '-' entry are ANDed; separate '-' entries are ORed. Draw the access matrix on paper first.
  • Using Flannel as the CNI and assuming Network Policies are enforced. Flannel silently ignores all NetworkPolicy objects. Fix: verify your CNI supports enforcement before writing any policy. Migrate to Calico or Cilium.
  • Using mutable labels (version, canary) in NetworkPolicy podSelectors. A deployment rollout changes the label and the policy silently stops matching. Fix: use immutable labels (app, tier) for security selectors and enforce them with admission webhooks.
  • Not testing both ingress and egress directions. A policy allowing egress from A to B does not automatically allow ingress to B from A; B needs its own ingress rule. Fix: test both directions independently.
  • Applying default-deny egress without a carve-out for the Kubernetes API server. Pods that need to interact with the API server (operators, controllers, kubectl inside Pods) will fail. Fix: add an egress rule for the API server CIDR on TCP port 443.
  • Not verifying enforcement empirically. The API server accepts syntactically valid policies regardless of CNI support. Fix: use curl-based smoke tests from throwaway Pods to verify both allowed and blocked paths.
  • Writing policies at the Pod level instead of the namespace level, which creates maintenance burden and label sprawl. Fix: use namespace-scoped default-deny plus per-workload additive rules.
  • Not monitoring CNI controller health. If calico-kube-controllers or cilium-operator crashes, new policies are not applied. Fix: monitor the CNI controller as critical infrastructure with liveness and readiness probes.
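
The AND/OR pitfall above comes down to a single '-' character. A sketch with illustrative labels (these are fragments of a policy spec, not complete manifests):

```yaml
# ONE entry, two fields ANDed: traffic must come from a Pod labeled
# app=frontend that is ALSO in a namespace labeled team=payments.
ingress:
- from:
  - namespaceSelector:
      matchLabels:
        team: payments
    podSelector:
      matchLabels:
        app: frontend
---
# TWO entries, ORed: traffic from ANY Pod in a team=payments namespace,
# OR from any app=frontend Pod in this policy's own namespace.
ingress:
- from:
  - namespaceSelector:
      matchLabels:
        team: payments
  - podSelector:
      matchLabels:
        app: frontend
```

The only difference is the '-' before podSelector in the second fragment, yet the second policy admits strictly more traffic than the first.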

Interview Questions on This Topic

  • Q: A NetworkPolicy is applied to a namespace with default-deny-all. A developer reports their Pod can reach the database but can't resolve any hostnames. What's wrong and how do you fix it without relaxing security?
  • Q: Explain the difference between a podSelector and a namespaceSelector in a 'from' clause. What does it mean when both appear in the same list entry versus as separate entries? Give an example of when you'd need each.
  • Q: Your team applies a NetworkPolicy that looks correct but packets are still flowing between Pods that should be blocked. Walk me through how you'd diagnose whether this is a CNI issue, a policy syntax issue, or a label mismatch.
  • Q: Explain how Network Policy enforcement differs between Calico (iptables) and Cilium (eBPF). When would you choose one over the other?
  • Q: What happens if two Network Policies select the same Pod? Can one policy override another? How do you achieve deny semantics in the standard Kubernetes API?
  • Q: A Pod is receiving traffic from an unexpected namespace. The NetworkPolicy looks correct. What are the three most likely causes?
  • Q: How would you design a Network Policy strategy for a multi-tenant cluster with 50 namespaces? What automation would you put in place?
  • Q: Explain the difference between timeout and connection refused in the context of Network Policy enforcement. Why does this distinction matter for debugging?
  • Q: How do ipBlock rules interact with NAT gateways and externalTrafficPolicy? When might ipBlock rules not work as expected?
  • Q: Describe how you would test Network Policies in CI/CD. What would your verification script check?

Frequently Asked Questions

Does a Kubernetes Network Policy affect traffic between Pods in the same namespace?

Yes, absolutely. Network Policies apply to all Pod-to-Pod traffic regardless of whether the source and destination are in the same namespace or different ones. A default-deny policy with an empty podSelector will block same-namespace traffic too, and you will need explicit ingress rules to re-permit it.
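
One way to re-permit same-namespace traffic after a default deny is a sketch like this (applies to whichever namespace it is created in):

```yaml
# Allow all Pod-to-Pod traffic WITHIN the namespace while leaving
# cross-namespace and external ingress denied by the baseline policy.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
spec:
  podSelector: {}        # applies to every Pod in the namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}    # any Pod in the SAME namespace (no namespaceSelector)
```

Because policies are additive, this unions with the default-deny baseline rather than replacing it.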

Can Kubernetes Network Policies block traffic from outside the cluster?

For traffic entering via a Service of type LoadBalancer or NodePort, the source IP seen by the Pod is typically the node's IP due to SNAT — not the original client IP. This means ipBlock rules targeting external IPs may not work as expected unless you set externalTrafficPolicy: Local on the Service. Network Policies work best for Pod-to-Pod east-west traffic. Perimeter security for north-south traffic belongs in a separate ingress controller or cloud firewall.
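
If you need the Pod to see the real client IP so an ipBlock rule can match it, the Service must preserve the source address. A sketch (names and ports are illustrative):

```yaml
# externalTrafficPolicy: Local skips the inter-node SNAT hop, so the Pod
# sees the original client IP. Trade-off: traffic is only delivered to
# nodes that actually run a ready backend Pod.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
```

Even with this in place, treat ipBlock-based perimeter rules as a secondary control behind a proper ingress controller or cloud firewall.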

What happens if two Network Policies select the same Pod with conflicting rules?

There are no conflicts because Network Policies are purely additive whitelists — there is no deny rule in the standard API. If two policies both select the same Pod, their rules are unioned: a packet is allowed if it satisfies any matching rule from any policy. You can never use a second policy to override and deny something a first policy allows. For deny semantics, you need CNI-specific extensions like Calico's GlobalNetworkPolicy or Cilium's CiliumNetworkPolicy.
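
The union semantics can be modeled in a few lines. This is a simplified mental model, not the CNI's actual evaluation logic:

```python
# Simplified model of NetworkPolicy union semantics: a packet passes if
# ANY rule from ANY policy selecting the Pod allows it. There is no deny
# rule, so adding policies can only widen what is allowed.

def packet_allowed(packet_source: str, policies: list[set[str]]) -> bool:
    """Each 'policy' is modeled as the set of sources it whitelists."""
    if not policies:
        return False  # no policy selects the Pod in this model's scope
    allowed_sources = set().union(*policies)  # policies are ORed (unioned)
    return packet_source in allowed_sources

# Two policies select the same Pod; their allow-lists are unioned.
policy_a = {"frontend"}
policy_b = {"batch-jobs"}

print(packet_allowed("frontend", [policy_a, policy_b]))   # True
print(packet_allowed("attacker", [policy_a, policy_b]))   # False
```

Note the one asymmetry with the real API: in Kubernetes, a Pod selected by *no* policy allows everything, whereas once any policy selects it, the union model above applies.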

How do I verify that my Network Policies are actually being enforced?

Use empirical testing: run a throwaway Pod in the source namespace with kubectl run --rm -it and attempt a connection to the target. A timeout means the CNI is dropping packets (policy enforced). A connection refused means the packet reached the target (policy not enforced). For Cilium, use cilium monitor --type policy-verdict for real-time policy decision logging. For Calico, use iptables -L -n on the node to inspect enforced rules.
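
The timeout-versus-refused distinction is plain TCP socket behavior, and you can see it without a cluster. A local sketch (the probed port is arbitrary):

```python
import socket

def probe(host: str, port: int, timeout_s: float = 2.0) -> str:
    """Classify a TCP connect the way a NetworkPolicy smoke test would."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout_s)
    try:
        s.connect((host, port))
        return "open"            # something accepted the connection
    except ConnectionRefusedError:
        return "refused"         # packet ARRIVED; the host answered with RST
    except socket.timeout:
        return "timeout"         # packet silently dropped (CNI-style DROP)
    finally:
        s.close()

# A closed port on localhost typically answers "refused" — in a policy
# test that means the packet reached the target (policy NOT enforced).
print(probe("127.0.0.1", 1))
```

Run the same probe from a throwaway Pod against the protected Service: "timeout" is the verdict you want from an enforced deny.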

What is the DNS trap and how do I avoid it?

When you apply default-deny egress, all outbound traffic is blocked — including DNS resolution on UDP/TCP port 53 to CoreDNS. Every DNS lookup times out after 30 seconds, making services appear broken. The fix: always include an egress rule in your default-deny policy that allows traffic to kube-system Pods labeled k8s-app:kube-dns on both UDP and TCP port 53. Template this into your namespace provisioning pipeline.
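
The carve-out itself is small. Shown here in isolation (the k8s-app=kube-dns label and the kubernetes.io/metadata.name namespace label match standard CoreDNS deployments on Kubernetes 1.21+):

```yaml
# DNS egress carve-out: allow lookups to CoreDNS in kube-system on both
# UDP and TCP port 53. Merge this into the egress section of your
# default-deny policy or ship it as a separate additive policy.
egress:
- to:
  - namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: kube-system
    podSelector:
      matchLabels:
        k8s-app: kube-dns
  ports:
  - protocol: UDP
    port: 53
  - protocol: TCP
    port: 53
```

Note that namespaceSelector and podSelector sit in the same '-' entry, so they are ANDed: kube-dns Pods in kube-system only.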

Should I use Calico or Cilium for Network Policy enforcement?

Calico with iptables is battle-tested and works on any kernel but scales O(n) with rule count. Cilium with eBPF scales O(1) and supports L7 policies but requires kernel 4.9+. For new clusters in 2024 and later, Cilium is the preferred choice for its performance, observability (Hubble), and extended policy capabilities. For existing clusters, Calico's iptables mode runs on any kernel, and on kernels 5.10+ its eBPF dataplane is a good middle ground that avoids a full CNI migration.

Naren, Founder & Author

Developer and founder of TheCodeForge. I built this site because I was tired of tutorials that explain what to type without explaining why it works. Every article here is written to make concepts actually click.
