
Building a Neural Network in PyTorch

📍 Part of: PyTorch → Topic 3 of 7
A comprehensive guide to Building a Neural Network in PyTorch — learn to define models by subclassing nn.Module.
⚙️ Intermediate — basic ML / AI knowledge assumed
In this tutorial, you'll learn
  • Building a neural network in PyTorch means subclassing nn.Module — understanding what that abstraction provides (automatic parameter tracking, GPU portability, optimizer integration, serialization) is more important than memorizing the syntax.
  • super().__init__() is mandatory and must be the first line of every __init__ — without it, no layers are registered, model.parameters() returns empty, and model.to(device) does nothing.
  • Define layers in __init__, data flow in forward — this separation is the entire contract of nn.Module and violating it produces bugs that are silent, expensive to debug, and easy to prevent.
Quick Answer
  • nn.Module is the base class for all PyTorch models — define layers in __init__, data flow in forward
  • super().__init__() is mandatory — without it, layers and parameters are not registered and model.parameters() returns empty
  • model.to(device) moves all parameters to GPU in one atomic call — never manually move individual weights
  • Defining layers inside forward() creates new untrained weights every pass — the optimizer updates weights that are immediately discarded
  • state_dict saves only learnable parameters — smaller, portable, and version-independent compared to saving the full model
  • model.eval() disables Dropout and freezes BatchNorm running statistics — always call it before inference or validation
🚨 START HERE
nn.Module Debug Cheat Sheet
Quick commands to diagnose model architecture and training issues without guessing
🟡Model has zero trainable parameters
Immediate action: Check if super().__init__() was called as the first line in __init__ — without it nothing is registered
Commands
python -c "import torch; from your_model import YourModel; m = YourModel(); print(sum(p.numel() for p in m.parameters()))"
python -c "from your_model import YourModel; m = YourModel(); print(list(m.named_parameters())[:5])"
Fix now: Add super().__init__() as the absolute first line of __init__ — parameters, modules, and buffers cannot be registered without it. If you see an empty list from named_parameters(), this is almost always the cause.
🟡Weights do not change after a training step
Immediate action: Check if layers are defined inside forward() — if so, the optimizer is updating weights that are discarded every call
Commands
# Run these inside your training session — they assume an instantiated model `m` and a one-step training helper:
before = {n: p.detach().clone() for n, p in m.named_parameters()}; train_one_step(m); print({n: not torch.equal(before[n], p) for n, p in m.named_parameters()})
import inspect; print(inspect.getsource(m.forward))  # look for nn.Linear or nn.Conv2d inside forward
Fix now: Move every nn.Linear, nn.Conv2d, nn.BatchNorm, and similar layer instantiation from forward() to __init__(). forward() should contain only tensor operations and calls to self.layer_name — never constructor calls.
🟡Dimension mismatch crash on first forward pass
Immediate action: Print tensor shape after every operation in forward() using a dummy tensor — do not wait for real training data
Commands
# Run these inside a session where `model` is instantiated:
import torch; x = torch.randn(1, 784); print('Input:', x.shape); out = model.fc1(x); print('After fc1:', out.shape)
print(model)  # prints all registered layers with their expected in_features and out_features
Fix now: Add unsqueeze(0) when passing a single sample — layers such as BatchNorm1d and Conv2d require a batch dimension, so pass (batch_size, features), not (features,). For CNN inputs, shape should be (batch_size, channels, height, width). Use AdaptiveAvgPool2d before linear layers to decouple spatial dimensions from layer definitions.
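A quick sketch of the batch-dimension fix above, with sizes chosen to match the tutorial's 784-input example; BatchNorm1d is the layer that actually enforces the batch dimension (nn.Linear alone accepts unbatched input):

```python
import torch
import torch.nn as nn

fc = nn.Linear(784, 128)
bn = nn.BatchNorm1d(128)             # requires a batch dimension: (N, C) or (N, C, L)

sample = torch.randn(784)            # single sample, shape (784,) -- no batch dim
x = fc(sample.unsqueeze(0))          # unsqueeze(0) -> (1, 784); output is (1, 128)
print("after fc:", x.shape)

bn.eval()                            # train-mode BatchNorm rejects batch_size=1
y = bn(x)
print("after bn:", y.shape)
```

Note the bn.eval() call: in training mode, BatchNorm1d raises an error for a batch of one because per-batch statistics are undefined.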
Production incident: Model trains for 100 epochs but never learns — layers defined inside forward()
A classification model trained for 100 epochs with steadily decreasing training loss, but validation accuracy remained stuck at random chance — 10% for a 10-class problem. Root cause: layer definitions inside forward() created fresh, uninitialized weights on every call.
Symptom: Training loss decreases across epochs, giving the appearance of a healthy training run. Validation accuracy stays at 10% — exactly random chance for 10 classes. Inspecting model.parameters() shows tensors exist, but their values change by only a tiny amount after 100 epochs of training. The model memorizes nothing and generalizes nothing.
Assumption: The learning rate is too low, or the dataset has too much label noise. Several learning rate adjustments were tried. None helped. The dataset was audited and found to be clean. The real cause was never in the training configuration.
Root cause: The developer defined nn.Linear layers inside the forward() method rather than in __init__. Every time forward() was called — once per batch — Python created entirely new nn.Linear instances with freshly randomized weights. The optimizer held references to the weights from the previous forward pass and updated those. On the next forward pass, those updated weights were garbage collected and replaced by new random ones. The model was running inference on different random weights every single batch. The loss decreased slightly in some epochs due to random variation in the new weights, which looked like learning. It was not.
Fix: Moved all nn.Linear layer definitions from forward() to __init__(). The layers are now instantiated once at model creation time and reused across every forward pass. The optimizer holds references to the same weight tensors that the model uses for prediction — updates persist, gradients accumulate correctly, and the model now converges to above 94% validation accuracy within 20 epochs on the same dataset.
Key Lesson
  • Always define layers in __init__, never in forward() — forward() is called once per batch and should only describe data flow, not create structure
  • Layers defined in forward() create new untrained weights every call — the optimizer updates weights that are immediately discarded on the next pass
  • The symptom is training loss decreasing while validation accuracy stays at random chance — this combination almost always points to either this bug or a data pipeline issue
  • Verify with model.named_parameters() — print parameter values before and after a training step; if they do not change meaningfully, the optimizer is not reaching the weights the model uses
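The verification step in the lesson above can be sketched as follows; nn.Linear stands in for any model, and the one-step training helper is inlined as a single SGD update on made-up data:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(4, 2)                      # stand-in for any nn.Module
opt = torch.optim.SGD(model.parameters(), lr=0.1)

# Snapshot every parameter before the step
before = {n: p.detach().clone() for n, p in model.named_parameters()}

x, target = torch.randn(8, 4), torch.randn(8, 2)
loss = nn.functional.mse_loss(model(x), target)
opt.zero_grad()
loss.backward()
opt.step()

# Every entry should be True after a real update reaches the weights
changed = {n: not torch.equal(before[n], p.detach())
           for n, p in model.named_parameters()}
print(changed)
```

If any entry is False after a step with a nonzero loss, the optimizer is not holding the tensors the model actually uses.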
Production Debug Guide: common symptoms when nn.Module models fail to learn or fail to deploy
Symptom: Training loss decreases but validation accuracy stays at random chance. Fix: Check immediately whether any layers are defined inside forward() rather than __init__(). Move every nn.Linear, nn.Conv2d, nn.BatchNorm2d, and similar definition to __init__(). forward() should contain only the data flow logic — no layer construction. After fixing, verify by printing a parameter value before and after one optimizer step and confirming it changed.
Symptom: AttributeError or RuntimeError: module must have its own parameters. Fix: Ensure super().__init__() is the first line in your __init__ method. Without it, the internal _parameters, _modules, and _buffers dictionaries are never created. Any assignment of an nn.Module or nn.Parameter to self will raise an error or silently fail to register.
Symptom: Model produces different predictions for the same input across calls. Fix: Call model.eval() before inference. Without it, Dropout randomly zeroes activations and BatchNorm uses batch statistics instead of running statistics — both introduce randomness that should be disabled during prediction. Also verify you are not passing data through a training augmentation pipeline during inference.
Symptom: RuntimeError: size mismatch, or mat1 and mat2 cannot be multiplied, on the first forward pass. Fix: Print tensor.shape after each operation in forward() to identify exactly where the mismatch occurs. For single samples, add unsqueeze(0) to add the batch dimension — batched layers expect input shape (batch_size, features), not (features,). Use a dummy tensor at development time to verify shapes before training.
Symptom: load_state_dict raises unexpected key or missing key errors. Fix: Compare model.state_dict().keys() with the keys in the saved checkpoint. Any architecture change — adding a layer, renaming a layer, changing depth — breaks state_dict compatibility. Use strict=False in load_state_dict() only as a diagnostic step to see which keys are mismatched, then fix the architecture to match.
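A minimal sketch of the strict=False diagnostic: V1 and V2 are hypothetical versions of the same model where one layer was renamed, which is exactly the kind of architecture drift that breaks checkpoints:

```python
import torch
import torch.nn as nn

class V1(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(8, 4)

class V2(nn.Module):
    def __init__(self):
        super().__init__()
        self.head = nn.Linear(8, 4)   # same layer, renamed -> incompatible keys

ckpt = V1().state_dict()

# strict=False as a diagnostic only: it returns which keys failed to line up
result = V2().load_state_dict(ckpt, strict=False)
print("missing:", result.missing_keys)        # keys the model expects but the file lacks
print("unexpected:", result.unexpected_keys)  # keys the file has but the model lacks
```

Once the mismatch is understood, fix the architecture (or remap the keys) and go back to the default strict=True so future drift fails loudly.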

Building a neural network in PyTorch revolves around one central idea: subclassing nn.Module. You define layers in __init__ and the data flow in forward. PyTorch automatically tracks all parameters, moves them to GPU with a single .to(device) call, and integrates cleanly with torch.optim for gradient-based training.

The nn.Module design solves parameter management at scale. Without it, you would manually track thousands of weight matrices, move each to GPU individually, and implement gradient updates by hand. The module system handles all of this through a unified interface: model.parameters() returns every learnable tensor, model.state_dict() serializes the full learnable state, and model.to(device) moves everything atomically — no risk of a weight matrix left behind on CPU while the rest of the model runs on GPU.

The production failure pattern I see most consistently: developers define layers inside forward() instead of __init__. This creates new uninitialized weights on every forward pass. The optimizer updates weights from the previous pass that no longer exist — they were replaced by fresh random tensors when forward() ran again. Training loss can decrease slightly due to random variation, which masks the bug entirely. Validation accuracy stays at random chance. No error is raised. The model trains for 100 epochs and learns nothing.
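A minimal reproduction of this failure mode (BrokenNet is a throwaway class for illustration): because the layer is built inside forward(), the module has no registered parameters and returns different outputs for the same input on every call:

```python
import torch
import torch.nn as nn

class BrokenNet(nn.Module):
    def __init__(self):
        super().__init__()       # correct, but no layers are registered here

    def forward(self, x):
        fc = nn.Linear(4, 2)     # BUG: fresh random weights on every call
        return fc(x)

m = BrokenNet()
print(list(m.parameters()))      # [] -- the optimizer has nothing to reach

x = torch.randn(1, 4)
print(torch.equal(m(x), m(x)))   # different random weights each pass
```

Both symptoms from the incident above are visible immediately: an empty parameter list, and non-deterministic outputs without Dropout ever being involved.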

What Is Building a Neural Network in PyTorch and Why Does It Exist?

Building a neural network in PyTorch is the process of defining a model by subclassing nn.Module — PyTorch's foundational abstraction for everything that involves learnable parameters. It was designed to solve a specific problem: managing the lifecycle of thousands to billions of weight tensors without building that infrastructure yourself every time you train a model.

The architectural separation at the core of nn.Module is deliberate and meaningful. __init__ defines the static structure — which layers exist, their input and output sizes, how they are named. forward defines the dynamic behavior — how a tensor flows through those layers during each call. This separation is what makes the rest of the system work: PyTorch can inspect the model structure without running data through it, serialize only the parameters independently of the forward logic, and move the entire model to GPU atomically with model.to(device).

The key mechanism underneath all of this is Python's __setattr__ override in nn.Module. When you write self.fc1 = nn.Linear(784, 128) in __init__, PyTorch intercepts that assignment, detects that nn.Linear is itself an nn.Module, and registers it in an internal _modules dictionary. When you write self.weight = nn.Parameter(torch.randn(10, 5)), PyTorch detects nn.Parameter and registers it in _parameters. These dictionaries are what model.parameters(), model.state_dict(), and model.to(device) iterate over. None of this works if you skip super().__init__(): the dictionaries are never created, so in current PyTorch versions the first layer assignment raises AttributeError ("cannot assign module before Module.__init__() call"), while anything that dodges that check — a plain tensor, a Python list of layers — becomes an ordinary attribute that PyTorch cannot see.
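A small sketch of the registration behavior described above (Demo is a throwaway class): module and parameter assignments are intercepted, while a plain tensor assignment is not:

```python
import torch
import torch.nn as nn

class Demo(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(3, 2)              # nn.Module -> registered in _modules
        self.w = nn.Parameter(torch.randn(2))  # nn.Parameter -> registered in _parameters
        self.scale = torch.randn(2)            # plain tensor -> ordinary attribute, invisible

d = Demo()
print(sorted(n for n, _ in d.named_parameters()))   # 'scale' never appears
print('fc' in d._modules, 'w' in d._parameters)
```

If a tensor like scale should travel with the model through .to(device) and state_dict without being trained, register it explicitly with self.register_buffer('scale', ...).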

The practical consequence at production scale: a model with 100M parameters that is partially on GPU and partially on CPU produces wrong outputs without raising errors. Parameter groups that the optimizer cannot reach do not update. model.parameters() returning fewer tensors than expected is always a registration bug — not a configuration issue.

For 2026 deployments, the nn.Module contract also integrates with torch.compile() — PyTorch's graph compilation path introduced in 2.0 and stabilized through 2.2 and beyond. A properly structured nn.Module compiles cleanly with torch.compile(model), producing kernel fusion and operator overlap that can reduce training time by 30-50% on modern A100 and H100 hardware without changing a line of model code. Models with operations that break the graph — .numpy() calls inside forward, Python data structures used conditionally — either fail to compile or fall back to eager mode silently.
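A sketch of the torch.compile() usage pattern; wrapping the model is cheap because compilation is lazy, with the actual graph capture happening on the first forward call (speedups depend heavily on model and hardware, so none are claimed here):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

# Wrapping is lazy: the graph is captured and optimized on the first call
compiled = torch.compile(model)
print(type(compiled).__name__)

# fullgraph=True turns silent eager fallbacks into hard errors, which surfaces
# graph breaks (.numpy() calls, data-dependent Python control flow) early
strict = torch.compile(model, fullgraph=True)
```

Using fullgraph=True in CI is one way to catch the silent-fallback failure mode the paragraph above describes, rather than discovering it as a missing speedup in production.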

io/thecodeforge/ml/forge_network.py · PYTHON
import torch
import torch.nn as nn
import torch.nn.functional as F

# io.thecodeforge: Production-grade MLP implementation with verification
# Demonstrates the correct nn.Module structure and parameter registration pattern
class ForgeClassifier(nn.Module):
    def __init__(self, input_size: int, hidden_size: int, num_classes: int):
        # MANDATORY: initializes _parameters, _modules, _buffers, _hooks dictionaries
        # Without this, no layer you assign to self will be registered with PyTorch
        super().__init__()

        # Structure defined here once — layers are created and registered at init time
        # PyTorch intercepts these assignments via __setattr__ and adds them to _modules
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.bn1 = nn.BatchNorm1d(hidden_size)   # running_mean/var stored as buffers
        self.dropout = nn.Dropout(p=0.3)          # disabled in eval mode automatically
        self.fc2 = nn.Linear(hidden_size, num_classes)

        # Verify the model structure at init time — catch shape bugs during development
        self._verify_forward(input_size, num_classes)

    def _verify_forward(self, input_size: int, num_classes: int):
        """Run a dummy forward pass at init to catch dimension mismatches immediately."""
        with torch.no_grad():
            dummy = torch.randn(2, input_size)  # batch_size=2 for BatchNorm1d compatibility
            out = self.forward(dummy)
            assert out.shape == (2, num_classes), (
                f"Output shape mismatch: expected (2, {num_classes}), got {out.shape}"
            )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Data flow only — no layer construction here
        # fc1 -> BatchNorm -> ReLU -> Dropout -> fc2
        x = self.fc1(x)         # (batch, input_size) -> (batch, hidden_size)
        x = self.bn1(x)         # normalize across batch dimension
        x = F.relu(x)           # element-wise activation
        x = self.dropout(x)     # zeroes 30% of activations during training, no-op in eval
        x = self.fc2(x)         # (batch, hidden_size) -> (batch, num_classes)
        return x                 # raw logits — apply softmax outside, or use CrossEntropyLoss


# Instantiate — _verify_forward runs immediately, catches shape bugs at construction time
model = ForgeClassifier(input_size=784, hidden_size=256, num_classes=10)

print(model)
print(f"Trainable parameters: {sum(p.numel() for p in model.parameters() if p.requires_grad):,}")
print(f"Buffer elements (BatchNorm running stats): {sum(b.numel() for b in model.buffers()):,}")  # parameters() excludes buffers

# Verify parameter registration is correct
registered_names = [n for n, _ in model.named_parameters()]
print(f"Registered parameter groups: {registered_names}")

# Move entire model to GPU atomically — all registered parameters and buffers move together
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = model.to(device)
print(f"Model device: {next(model.parameters()).device}")
▶ Output
ForgeClassifier(
(fc1): Linear(in_features=784, out_features=256, bias=True)
(bn1): BatchNorm1d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(dropout): Dropout(p=0.3, inplace=False)
(fc2): Linear(in_features=256, out_features=10, bias=True)
)
Trainable parameters: 204,042
Buffer elements (BatchNorm running stats): 513
Registered parameter groups: ['fc1.weight', 'fc1.bias', 'bn1.weight', 'bn1.bias', 'fc2.weight', 'fc2.bias']
Model device: cuda:0
Mental Model
The nn.Module Mental Model
nn.Module separates what the model is (structure defined in __init__) from what the model does (behavior defined in forward) — this separation is what enables automatic parameter management, GPU portability, and clean serialization.
  • __init__ defines the static structure — which layers exist, their sizes, and how they are named as attributes
  • forward defines the dynamic behavior — how a tensor flows through those pre-built layers on each call
  • super().__init__() installs PyTorch's __setattr__ override — without it, layer assignments to self are invisible to the framework
  • model.parameters() iterates all registered learnable tensors — you never maintain a manual list of weights
  • model.to(device) moves every registered parameter and buffer atomically — no risk of partial GPU placement causing silent type errors
📊 Production Insight
super().__init__() initializes _parameters, _modules, and _buffers dictionaries and installs the __setattr__ override that makes layer registration automatic.
Without it, every self.layer = nn.Linear(...) is just a Python attribute — invisible to model.parameters(), model.to(device), and model.state_dict().
Rule: super().__init__() is always the first line of every nn.Module subclass — no exceptions.
🎯 Key Takeaway
nn.Module separates structure (__init__) from behavior (forward) — this enables automatic parameter management, GPU portability, and clean serialization without any manual bookkeeping.
super().__init__() is mandatory and must be first — without it, the module cannot register layers, parameters, or buffers.
Add a dummy tensor forward pass at __init__ time to catch dimension mismatches during development rather than mid-training.
Model Architecture Decision
If: Simple linear stack of layers with no branching, skip connections, or conditional logic
Use: nn.Sequential — it eliminates the boilerplate of writing forward() for linear pipelines, and the output of each module automatically becomes the input of the next
If: Residual/skip connections, multiple inputs or outputs, conditional branching in forward, or operations between layers that are not nn.Module
Use: Subclass nn.Module and implement a custom forward() — Sequential is architecturally incapable of expressing non-linear data flow
If: Dynamic number of layers determined at construction time (variable-depth networks, hyperparameter search)
Use: nn.ModuleList in __init__ — never a Python list, which is invisible to PyTorch's parameter tracking, GPU movement, and serialization
If: Dynamic number of named sub-networks that need to be accessed by name at runtime
Use: nn.ModuleDict in __init__ — provides dictionary-style access while fully registering all contained modules with PyTorch
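To make the nn.ModuleList row concrete, a small sketch (ListNet is hypothetical): layers held in a plain Python list exist and would even run inside forward(), but they never appear in parameters(), so the optimizer and .to(device) never touch them:

```python
import torch
import torch.nn as nn

class ListNet(nn.Module):
    def __init__(self, depth: int):
        super().__init__()
        # Registered: each Linear lands in _modules via the ModuleList
        self.good = nn.ModuleList(nn.Linear(8, 8) for _ in range(depth))
        # NOT registered: a plain Python list is invisible to PyTorch
        self.bad = [nn.Linear(8, 8) for _ in range(depth)]

    def forward(self, x):
        for layer in self.good:
            x = torch.relu(layer(x))
        return x

net = ListNet(depth=3)
registered = sum(p.numel() for p in net.parameters())
print(registered)   # 3 * (8*8 + 8) = 216 -- only the ModuleList layers count
```

The same trap applies to dictionaries of sub-networks, which is why nn.ModuleDict exists for the named case.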

Enterprise Persistence: Saving and Loading Forge Models

In a production environment, training a model is only part of the story. You need to persist it, version it, load it reliably six months later, and reproduce its inference behavior exactly. Getting this wrong has a specific failure mode that is not immediately obvious: you load a model, it runs inference without any errors, and it produces predictions — predictions that are quietly wrong because Dropout is still active or because you loaded weights into the wrong architecture without noticing.

The core persistence decision in PyTorch is between saving the full model object and saving only the state_dict. torch.save(model, path) uses Python's pickle to serialize the entire model — code, architecture, and weights together. torch.save(model.state_dict(), path) serializes only the learnable parameter tensors as an OrderedDict of name-to-tensor mappings. The state_dict approach is the production standard for three concrete reasons: the file is smaller because no Python code is embedded, it is portable because you can load weights into a model defined anywhere as long as the parameter names match, and it is safer because pickle can execute arbitrary code when deserializing, which is a real attack surface in shared model repositories.

The full checkpoint pattern extends this for training resumption. Saving only model.state_dict() is sufficient for inference deployment, but if you need to resume training from a checkpoint, you also need the optimizer state — Adam's moment estimates are not recomputed from scratch, and resuming without them produces different training dynamics than if training had never stopped. A complete checkpoint includes model state, optimizer state, epoch number, and the best validation metric so you know whether to update your best-model checkpoint.

One detail that bites teams in production: torch.load() defaulted to weights_only=False through PyTorch 2.5, which means it will execute arbitrary pickle code. PyTorch 2.4 began warning about this, and in PyTorch 2.6 the default changed to weights_only=True, which is safer. If you are loading state_dicts — which you should be — explicitly pass weights_only=True regardless of version to future-proof your code and prevent security warnings in CI.

io/thecodeforge/ml/forge_persistence.py · PYTHON
# io.thecodeforge: Production model persistence patterns
# Covers inference deployment, training resumption, and safe loading
import torch
import os
from pathlib import Path

MODEL_DIR = Path("io/thecodeforge/models")
MODEL_DIR.mkdir(parents=True, exist_ok=True)


# ─── Pattern 1: Inference deployment — save state_dict only ─────────────────
deployment_path = MODEL_DIR / "classifier_v1.pth"
torch.save(model.state_dict(), deployment_path)
print(f"Saved inference weights: {deployment_path} ({os.path.getsize(deployment_path) / 1e6:.1f} MB)")

# Load for inference — weights_only=True prevents arbitrary pickle execution
inference_model = ForgeClassifier(input_size=784, hidden_size=256, num_classes=10)
inference_model.load_state_dict(
    torch.load(deployment_path, map_location='cpu', weights_only=True)
)
inference_model.eval()  # MANDATORY: disables Dropout, freezes BatchNorm running stats
inference_model = inference_model.to(device)

# Verify loaded weights match the original
for (n1, p1), (n2, p2) in zip(model.named_parameters(), inference_model.named_parameters()):
    assert n1 == n2, f"Parameter name mismatch: {n1} vs {n2}"
    assert torch.equal(p1.cpu(), p2.cpu()), f"Value mismatch for {n1}"
print("Inference model: all parameters loaded and verified.")


# ─── Pattern 2: Training checkpoint — save full state for resumption ─────────
def save_checkpoint(model, optimizer, epoch: int, val_loss: float, path: Path):
    """Save everything needed to resume training exactly where it left off."""
    torch.save({
        'epoch':                epoch,
        'model_state_dict':     model.state_dict(),
        'optimizer_state_dict': optimizer.state_dict(),
        'val_loss':             val_loss,
    }, path)
    print(f"Checkpoint saved: epoch {epoch}, val_loss {val_loss:.4f}")


def load_checkpoint(model, optimizer, path: Path, device: torch.device):
    """Resume training from a checkpoint — restores model weights and optimizer state."""
    checkpoint = torch.load(path, map_location=device, weights_only=True)  # dict of tensors/ints/floats loads fine under weights_only
    model.load_state_dict(checkpoint['model_state_dict'])
    optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
    start_epoch = checkpoint['epoch'] + 1
    best_val_loss = checkpoint['val_loss']
    print(f"Resumed from epoch {checkpoint['epoch']}, val_loss {best_val_loss:.4f}")
    return model, optimizer, start_epoch, best_val_loss


# Example checkpoint save during training loop
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
checkpoint_path = MODEL_DIR / "checkpoint_epoch_10.pth"
save_checkpoint(model, optimizer, epoch=10, val_loss=0.1823, path=checkpoint_path)

# Resume training from checkpoint
fresh_model = ForgeClassifier(input_size=784, hidden_size=256, num_classes=10).to(device)
fresh_optimizer = torch.optim.Adam(fresh_model.parameters(), lr=1e-3)
fresh_model, fresh_optimizer, start_epoch, best_val = load_checkpoint(
    fresh_model, fresh_optimizer, checkpoint_path, device
)
print(f"Training will resume from epoch {start_epoch}")
▶ Output
Saved inference weights: io/thecodeforge/models/classifier_v1.pth (0.8 MB)
Inference model: all parameters loaded and verified.
Checkpoint saved: epoch 10, val_loss 0.1823
Resumed from epoch 10, val_loss 0.1823
Training will resume from epoch 11
🔥Pro Tip:
Always call model.eval() immediately after loading weights for inference — before moving to device, before the first forward pass. Without it, Dropout randomly zeroes activations and BatchNorm uses batch statistics instead of its learned running statistics. The model will produce different predictions for the same input on every call, and the difference will not be small enough to ignore in production. Treat model.eval() after load_state_dict as a mandatory step in your inference initialization sequence, not an optional call.
📊 Production Insight
state_dict saves only learnable parameters and buffers — smaller, portable, and safer than pickling the full model object.
weights_only=True in torch.load() prevents arbitrary pickle execution — use it whenever loading a state_dict from any source you do not fully control.
Rule: save state_dict for deployment, save a full checkpoint dict (model + optimizer + epoch + metric) for training resumption — they serve different purposes and should not be conflated.
🎯 Key Takeaway
state_dict saves only learnable parameters — smaller and safer than saving the full model object via pickle.
model.eval() after loading is mandatory — missing it causes non-deterministic predictions due to active Dropout and batch-mode BatchNorm.
Save full checkpoints for training resumption — optimizer state is not optional if you want resumed training to behave identically to uninterrupted training.
Model Persistence Decision
If: Saving a final model for inference deployment only
Use: Save model.state_dict() with torch.save() — smallest file, no code dependency, load with weights_only=True
If: Saving mid-training to resume later
Use: Save a checkpoint dict containing model.state_dict(), optimizer.state_dict(), the current epoch, and the best validation metric — resuming without optimizer state produces different training dynamics
If: Sharing a model with a team using a different codebase or deployment environment
Use: Save the state_dict and document the exact parameter names and shapes — the loading side must instantiate a model with a matching architecture before calling load_state_dict()
If: Deploying to a C++ runtime or mobile device without Python
Use: Export with torch.jit.script() or torch.jit.trace() and save with torch.jit.save() — produces a self-contained ScriptModule that runs in LibTorch without Python
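A minimal sketch of the TorchScript export path from the last row, using a stand-in Sequential model rather than the tutorial's classifier; the file name is arbitrary:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
model.eval()   # freeze eval-time behavior before export

scripted = torch.jit.script(model)            # or torch.jit.trace(model, example_input)
torch.jit.save(scripted, "classifier_scripted.pt")

# The saved ScriptModule is self-contained: loading needs no Python class definition,
# and the same file can be loaded from LibTorch in C++
restored = torch.jit.load("classifier_scripted.pt")
x = torch.randn(1, 784)
print(torch.allclose(model(x), restored(x)))
```

Unlike a state_dict, this artifact carries its architecture with it, which is exactly what a Python-free runtime needs.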

Containerizing the Forge Model Service

Getting a PyTorch model to run correctly on a developer workstation is step one. Getting it to run correctly in production — on a different machine, a different OS, a different GPU driver, possibly six months from now — is the actual engineering problem. Containerization with Docker is the standard answer, but the details matter more than most tutorials acknowledge.

The version pinning problem is where most teams make their first mistake. Pulling pytorch/pytorch:latest in production means your deployment environment changes every time a new PyTorch release ships. Changes between minor versions can affect numerical precision, change default behaviors for certain operations, and silently alter model outputs. Pin the full triple: PyTorch version, CUDA version, and cuDNN version. These three together determine the exact kernel implementations your model runs on. A mismatch between cuDNN versions on the same PyTorch base can produce numerically different outputs from the same weights.

The image size problem compounds quickly in multi-service deployments. A CUDA-enabled PyTorch runtime image is typically 5-7GB. A CPU-only image is under 1GB. If your inference service runs on CPU-optimized instances — which is common for cost efficiency in steady-state serving — you are pulling 5-7GB per node during deployments when 1GB would be sufficient. This is not a philosophical problem — it translates directly to longer deployment times, higher container registry egress costs, and slower autoscaling response.

The model weight inclusion problem is the third one. Baking a 500MB model file into a Docker image with COPY means every CI build, every image push, and every container pull moves that 500MB. For a team with 10 engineers committing multiple times a day, this accumulates. The correct pattern is to exclude model weights from the image and mount them from a volume, or download them at container startup from an object store like S3 or GCS. This keeps the image lean, makes weight updates independent of image rebuilds, and allows you to run canary deployments with different weight versions without rebuilding images.
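One possible shape for the download-at-startup pattern, sketched in Python; the bucket and key names are made up, and boto3 is imported only on the download path, so a weight file mounted as a volume skips the dependency entirely:

```python
from pathlib import Path

def ensure_weights(model_path: str, bucket: str, key: str) -> Path:
    """Fetch model weights at container startup unless already mounted.

    bucket/key are hypothetical; a Kubernetes volume mounted at model_path
    short-circuits the S3 download completely."""
    path = Path(model_path)
    if path.exists():
        print(f"Using mounted weights at {path}")
        return path
    path.parent.mkdir(parents=True, exist_ok=True)
    import boto3                      # lazy import: only needed when downloading
    boto3.client("s3").download_file(bucket, key, str(path))
    return path

# With the weight file already present (mounted volume), no download happens:
Path("models").mkdir(exist_ok=True)
Path("models/classifier_v1.pth").touch()
p = ensure_weights("models/classifier_v1.pth", "my-model-bucket", "classifier_v1.pth")
print(p.exists())
```

Reading MODEL_PATH from the environment, as the Dockerfile below does, keeps this function deployment-agnostic.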

Dockerfile · DOCKERFILE
# io.thecodeforge: Production PyTorch inference container
# Pin the full version triple — never use 'latest' in production
# PyTorch 2.2.0 + CUDA 12.1 + cuDNN 8 is a tested, stable combination for 2026 deployments
FROM pytorch/pytorch:2.2.0-cuda12.1-cudnn8-runtime

WORKDIR /app

# Install system-level dependencies before pip — this layer caches independently
# libgl1-mesa-glx is required by OpenCV; libgomp1 is required by some PyTorch operations
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
        libgl1-mesa-glx \
        libgomp1 \
        curl \
    && rm -rf /var/lib/apt/lists/*

# Separate requirements from source — requirements layer caches until requirements.txt changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy inference source code only
# Model weights are NOT copied here — they are mounted or downloaded at startup
COPY ./src /app/src

# Model path is configurable via environment variable
# In Kubernetes: mount a PVC at /app/models or use an init container to download from S3
ENV MODEL_PATH=/app/models/classifier_v1.pth
ENV MODEL_INPUT_SIZE=784
ENV MODEL_HIDDEN_SIZE=256
ENV MODEL_NUM_CLASSES=10

# Run as non-root user — required by most enterprise security policies
RUN useradd -m -u 1001 forge
USER forge

# Health check verifies the inference service starts and the model loads correctly
HEALTHCHECK --interval=30s --timeout=15s --start-period=30s --retries=3 \
    CMD curl -f http://localhost:8080/health || exit 1

ENTRYPOINT ["python", "src/inference_service.py"]
▶ Output
Successfully built image thecodeforge/forge-classifier:2.2.0-cuda12.1 (2.4GB)
# CPU-only variant would be ~800MB using pytorch/pytorch:2.2.0 base
⚠ Performance Warning:
If your inference service runs on CPU-only instances — which is common for cost-optimized serving workloads — use the CPU-specific PyTorch base image (pytorch/pytorch:2.2.0 without the CUDA suffix). The CUDA-enabled image carries 4-6GB of GPU libraries that are never loaded on a CPU instance. That weight increases pull times, container registry storage costs, and autoscaling latency. The runtime behavior on CPU is identical — the CUDA libraries simply go unused when no GPU is present, but they still get pulled and stored.
📊 Production Insight
Pin the full version triple — PyTorch version, CUDA version, and cuDNN version — in the Dockerfile. cuDNN version differences produce numerically different outputs from the same weights without raising any errors.
Exclude model weight files from the Docker image — mount them as volumes or download from object storage at startup to keep images lean and deployments fast.
Rule: CPU deployment gets the CPU image, GPU deployment gets the CUDA runtime image — never run a CUDA image on CPU-only infrastructure.
🎯 Key Takeaway
Pin the full PyTorch-CUDA-cuDNN version triple in the Dockerfile — 'latest' in production means your deployment environment changes without your control.
Use CPU-specific images for CPU-only inference — saves 4-6GB of unused CUDA libraries per container instance.
Exclude model weights from the image — mount or download at startup so image rebuilds and model updates are independent operations.
Docker Image Selection for PyTorch
If: Deploying on GPU instances (NVIDIA) — training or GPU inference
Use: pytorch/pytorch:X.Y.Z-cudaVERSION-cudnnN-runtime — includes the CUDA runtime and cuDNN, excludes the compiler toolchain, smallest GPU-capable image
If: Deploying on CPU-only instances — cost-optimized steady-state serving
Use: pytorch/pytorch:X.Y.Z — CPU-only image, no CUDA libraries, typically under 1GB versus 5-7GB for the CUDA variant
If: Model weight file is larger than 200MB
Use: Do not COPY the weight file into the image — mount it as a Kubernetes PVC, or use an init container to download it from S3/GCS at startup. Keeps image size stable as models are retrained.
If: Need to compile custom CUDA kernels or C++ extensions at build time
Use: pytorch/pytorch:X.Y.Z-cudaVERSION-cudnnN-devel — includes the NVCC compiler and development headers. Switch back to the runtime image for the final production stage.
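To make the "weights outside the image" pattern concrete, here is a minimal sketch of the startup path implied by the Dockerfile above: the service reads the MODEL_* environment variables and loads the state_dict from MODEL_PATH at boot. SimpleClassifier and load_model_from_env are illustrative names, not part of any real service — substitute your actual architecture.

```python
# Sketch: load model weights from a mounted/downloaded path at container startup.
# SimpleClassifier is a hypothetical stand-in for your real architecture.
import os
import torch
import torch.nn as nn


class SimpleClassifier(nn.Module):
    def __init__(self, input_size: int, hidden_size: int, num_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_size, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, num_classes),
        )

    def forward(self, x):
        return self.net(x)


def load_model_from_env() -> nn.Module:
    # Architecture hyperparameters come from the ENV vars set in the Dockerfile
    model = SimpleClassifier(
        input_size=int(os.environ.get("MODEL_INPUT_SIZE", 784)),
        hidden_size=int(os.environ.get("MODEL_HIDDEN_SIZE", 256)),
        num_classes=int(os.environ.get("MODEL_NUM_CLASSES", 10)),
    )
    path = os.environ.get("MODEL_PATH", "/app/models/classifier_v1.pth")
    if os.path.exists(path):
        # map_location='cpu' keeps loading safe on CPU-only instances
        model.load_state_dict(torch.load(path, map_location="cpu"))
    model.eval()  # mandatory before serving — disables Dropout, freezes BatchNorm
    return model
```

Because the weights are read at startup rather than baked in, retraining the model and rebuilding the image become independent operations.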

Common Mistakes and How to Avoid Them

Most nn.Module bugs fall into a small set of categories. They are not obscure — they appear consistently across codebases from beginners and experienced engineers alike, usually under deadline pressure when someone is focused on getting the model working and skips a step that seemed optional.

Forgetting super().__init__() is the most foundational mistake. Current PyTorch versions catch the most common form early — assigning an nn.Module or nn.Parameter to self before super().__init__() raises AttributeError: cannot assign module before Module.__init__() call at the assignment line. But when the mistake slips through (older PyTorch versions, or attributes that are plain tensors rather than modules), the failure mode is particularly frustrating: nothing explodes at definition time. The failure comes later, when model.parameters() returns an empty iterator, model.to(device) does nothing, or torch.save(model.state_dict()) produces a file with zero keys. By that point, the developer is often deep into debugging the training loop rather than looking at model initialization.

Using Python lists to store layers is the mistake that catches experienced developers. If you have used other frameworks or written Python professionally, using a list of layers feels completely natural — it is idiomatic Python. But a Python list of nn.Module instances is invisible to PyTorch. The parameters in those layers are not in model.parameters(), they are not moved by model.to(device), and the optimizer cannot update them. The model runs, the loss changes slightly due to the layers in the list processing data, and nothing indicates the optimizer is completely ignoring them. Use nn.ModuleList for any list of modules, and nn.ModuleDict for any dictionary of named modules.

The .numpy() inside forward() mistake is common in teams transitioning from NumPy-heavy workflows. It produces a RuntimeError if the tensor requires gradients, or silently breaks the gradient chain if you call .detach() first. Both outcomes are wrong inside forward(). All computation in forward() must stay in PyTorch tensor operations. If you need NumPy for debugging, do it outside the computation graph after calling .detach().cpu().

One 2026-specific addition worth calling out: with torch.compile() becoming the standard path for production training, any Python-level control flow in forward() that depends on tensor values — not tensor shapes, but actual data values — will prevent the compiler from tracing the graph cleanly. This was always a theoretical concern; now it is a practical one because compile() is in the default training stack for many teams. Keep forward() deterministic in its control flow — conditional branches should depend on constructor arguments, not on runtime tensor contents.
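The control-flow rule above can be sketched in a pair of toy modules — the class names here are illustrative, not from any library. The first branches on a tensor's runtime value, which forces torch.compile() to break the graph; the second makes the same decision once, at construction time, so the compiler sees a single static path.

```python
# Sketch of the torch.compile() control-flow rule:
# branch on constructor arguments (static), never on tensor values (data-dependent).
import torch
import torch.nn as nn


class GraphBreakModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(16, 16)

    def forward(self, x):
        # Data-dependent branch: the value of x decides the path at runtime,
        # so torch.compile() cannot trace a single static graph through here
        if x.sum() > 0:
            return self.fc(x)
        return -self.fc(x)


class CompileFriendlyModel(nn.Module):
    def __init__(self, negate: bool):
        super().__init__()
        self.fc = nn.Linear(16, 16)
        self.negate = negate  # decided once at construction — static for the compiler

    def forward(self, x):
        out = self.fc(x)
        # Branch depends on a constructor argument, not on tensor contents
        return -out if self.negate else out
```

Both models produce identical eager-mode results for a given weight set; the difference only matters once the module passes through the compiler.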

io/thecodeforge/ml/common_mistakes.py · PYTHON
# io.thecodeforge: Common nn.Module mistake patterns and their correct counterparts
import torch
import torch.nn as nn


# ─── MISTAKE 1: Missing super().__init__() ───────────────────────────────────
class BrokenInit(nn.Module):
    def __init__(self):
        # super().__init__() omitted — _parameters and _modules never created.
        # Modern PyTorch fails fast on the next line with:
        # AttributeError: cannot assign module before Module.__init__() call
        self.fc = nn.Linear(10, 2)

    def forward(self, x):
        return self.fc(x)  # never reached — construction already failed


class CorrectInit(nn.Module):
    def __init__(self):
        super().__init__()                   # FIRST LINE — always
        self.fc = nn.Linear(10, 2)           # now registered in _modules

    def forward(self, x):
        return self.fc(x)


# ─── MISTAKE 2: Python list instead of nn.ModuleList ────────────────────────
class BrokenDynamicModel(nn.Module):
    def __init__(self, depth: int):
        super().__init__()
        # Python list: invisible to model.parameters(), model.to(device), state_dict()
        self.layers = [nn.Linear(64, 64) for _ in range(depth)]

    def forward(self, x):
        for layer in self.layers:
            x = torch.relu(layer(x))
        return x

# Verify the problem:
bad_model = BrokenDynamicModel(depth=3)
print(f"BrokenDynamicModel trainable params: {sum(p.numel() for p in bad_model.parameters())}")
# Output: BrokenDynamicModel trainable params: 0  <-- optimizer has nothing to update


class CorrectDynamicModel(nn.Module):
    def __init__(self, depth: int):
        super().__init__()
        # nn.ModuleList registers all contained modules — parameters are visible and trackable
        self.layers = nn.ModuleList([nn.Linear(64, 64) for _ in range(depth)])

    def forward(self, x):
        for layer in self.layers:
            x = torch.relu(layer(x))
        return x

good_model = CorrectDynamicModel(depth=3)
print(f"CorrectDynamicModel trainable params: {sum(p.numel() for p in good_model.parameters()):,}")
# Output: CorrectDynamicModel trainable params: 12,480


# ─── MISTAKE 3: Breaking the gradient chain with .numpy() in forward() ──────
class BrokenForward(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(10, 2)

    def forward(self, x):
        x = self.fc(x)
        # WRONG: breaks the computational graph — gradients cannot flow past this point
        # x = x.detach().cpu().numpy()   # RuntimeError or silent gradient break
        # WRONG: also breaks it
        # x = x.cpu().numpy()            # RuntimeError: can't call numpy() on tensor requiring grad
        return x  # keep everything as PyTorch tensors inside forward()


# ─── MISTAKE 4: Layers defined inside forward() ─────────────────────────────
class BrokenLayerPlacement(nn.Module):
    def __init__(self):
        super().__init__()
        # No layers defined here — they appear in forward() instead

    def forward(self, x):
        # WRONG: creates a new nn.Linear with fresh random weights on every call —
        # these parameters were never registered in __init__, so the optimizer
        # never sees them; every batch is processed by a brand-new untrained layer
        fc = nn.Linear(784, 10)
        return fc(x)


# ─── Correct: all layers in __init__, only data flow in forward() ────────────
class CorrectLayerPlacement(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(784, 10)  # created once, reused every forward call

    def forward(self, x):
        return self.fc(x)  # same weights every call — optimizer updates persist


# ─── Device placement verification ──────────────────────────────────────────
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = CorrectLayerPlacement().to(device)
print(f"All parameters on device: {next(model.parameters()).device}")

# Always call model(x), never model.forward(x)
# model.forward(x) bypasses __call__, which fires hooks, manages training mode, and tracks autograd
test_input = torch.randn(1, 784).to(device)
output = model(test_input)  # correct
print(f"Output shape: {output.shape}")
▶ Output
BrokenDynamicModel trainable params: 0
CorrectDynamicModel trainable params: 12,480
All parameters on device: cuda:0
Output shape: torch.Size([1, 10])
⚠ Watch Out:
The most expensive mistake I see teams make with nn.Module is defining layers inside forward(). It does not raise an error. The model runs. The loss changes. Everything looks like it is training. The bug only becomes visible when validation accuracy stays at random chance despite 50 epochs of training — at which point the GPU hours are already spent. The rule is simple and absolute: __init__ builds the structure, forward describes the data flow. Nothing that creates a layer or allocates a parameter belongs in forward().
📊 Production Insight
Python lists of nn.Module instances are completely invisible to PyTorch — model.parameters() returns zero from those layers, model.to(device) ignores them, and the optimizer cannot update them. Use nn.ModuleList.
Calling model.forward(x) directly bypasses the __call__ mechanism — forward hooks, autograd tracking setup, and training/eval mode management are all missed. Always use model(x).
Rule: verify trainable parameter count with sum(p.numel() for p in model.parameters() if p.requires_grad) immediately after model construction — any unexpected number indicates a registration bug.
🎯 Key Takeaway
Layers defined in forward() create new random weights every call — the optimizer cannot learn from them. Always define layers in __init__.
Python lists of modules are invisible to PyTorch — use nn.ModuleList for dynamic layer collections and nn.ModuleDict for named module collections.
Always call model(x), not model.forward(x) — the __call__ mechanism manages hooks, autograd tracking, and training/eval mode that forward() alone does not.
Debugging nn.Module Registration Issues
If: model.parameters() returns empty or fewer parameters than expected
Fix: Check two things in order: first, whether super().__init__() is present as the first line. Second, whether any layers are stored in a Python list or dict instead of nn.ModuleList or nn.ModuleDict.
If: RuntimeError: grad can be implicitly created only for scalar outputs
Fix: The loss tensor is not a scalar. Call .mean() or .sum() on per-sample losses before .backward() — backward() requires a scalar starting point.
If: Gradients are None for some parameters after backward()
Fix: Those parameters are not used in the forward pass — unused parameters produce no gradient. Check whether the parameter is actually used in forward(), or whether it sits in a Python list that is never called.
If: Model works correctly in training but crashes or produces wrong output in inference
Fix: model.eval() was not called before inference. Dropout is randomly zeroing activations and BatchNorm is using batch statistics — both behaviors are wrong for inference. Call model.eval() after loading weights and before any inference call.
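The last entry — the train/eval mismatch — is easy to see directly. The sketch below isolates a single Dropout layer: in train mode it zeroes roughly half the activations and scales the survivors by 1/(1-p); in eval mode it is the identity.

```python
# Why model.eval() matters: Dropout is stochastic in train mode,
# and the identity in eval mode.
import torch
import torch.nn as nn

torch.manual_seed(0)
drop = nn.Dropout(p=0.5)
x = torch.ones(1, 1000)

drop.train()                 # default mode for a fresh module
train_out = drop(x)          # ~half the entries zeroed, survivors scaled to 2.0

drop.eval()
eval_out = drop(x)           # identity — no zeroing, no scaling

print((train_out == 0).float().mean())  # roughly 0.5
print(torch.equal(eval_out, x))         # True
```

The same toggle propagates recursively through every Dropout and BatchNorm layer in a model when you call model.train() or model.eval() on the top-level module.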
🗂 Manual Matrix Math vs PyTorch nn.Module
Understanding what the nn.Module abstraction actually buys you beyond syntactic convenience
Aspect | Manual Matrix Math | PyTorch nn.Module
Parameter Tracking | Manual — you maintain a dict or list of weight tensors and must not forget any | Automatic — model.parameters() and model.named_parameters() iterate every registered tensor
GPU Portability | Manual — every tensor must be moved individually with .to(device), easy to miss one | Atomic — model.to(device) moves every registered parameter and buffer in a single call
Gradient Computation | Manual — you must call .backward() on the right tensor and implement update logic | Automatic — autograd tracks the computation graph; torch.optim handles parameter updates
Model Serialization | Custom logic — you must know which tensors to save, in which order, and how to restore them | Built-in — model.state_dict() and load_state_dict() handle serialization with named keys
Training / Eval Mode | Manual — you must track mode state and toggle Dropout and BatchNorm behavior yourself | Built-in — model.train() and model.eval() propagate recursively to all child modules
Compiler Compatibility | None — manual tensor code has no structural guarantees for torch.compile() optimization | Full — a properly structured nn.Module compiles cleanly with torch.compile(), often yielding double-digit percentage training speedups

🎯 Key Takeaways

  • Building a neural network in PyTorch means subclassing nn.Module — understanding what that abstraction provides (automatic parameter tracking, GPU portability, optimizer integration, serialization) is more important than memorizing the syntax.
  • super().__init__() is mandatory and must be the first line of every __init__ — without it, no layers are registered, model.parameters() returns empty, and model.to(device) does nothing.
  • Define layers in __init__, data flow in forward — this separation is the entire contract of nn.Module and violating it produces bugs that are silent, expensive to debug, and easy to prevent.
  • Use nn.ModuleList for lists of modules, nn.ModuleDict for named collections — Python lists and dicts are invisible to PyTorch's parameter tracking, serialization, and device management.
  • Call model(x), not model.forward(x) — the __call__ mechanism manages hooks, autograd tracking, and training/eval mode state that forward() alone does not.
  • model.eval() after loading weights is mandatory for inference — Dropout and BatchNorm behave fundamentally differently in training and eval mode, and the difference directly affects prediction quality.
  • Verify trainable parameter count immediately after model construction — any unexpected number indicates a registration bug that will waste training compute if left undetected.
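The last takeaway — verifying parameter count immediately after construction — fits in a three-line helper. The helper name is illustrative, not a PyTorch API; the expected count is just arithmetic over the layer shapes.

```python
# Sanity check to run immediately after constructing any model.
import torch.nn as nn


def count_trainable(model: nn.Module) -> int:
    """Total number of elements across all trainable registered parameters."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)


model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
# Expected: 784*256 + 256 + 256*10 + 10 = 203,530
assert count_trainable(model) == 203_530
```

If the number is zero, suspect a missing super().__init__() or a Python list of layers; if it is merely lower than expected, some sub-module is not registered.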

⚠ Common Mistakes to Avoid

    Forgetting to call super().__init__() in nn.Module subclass
    Symptom

    In current PyTorch versions, the first layer assignment raises AttributeError: cannot assign module before Module.__init__() call. In older versions, or when only plain tensors are assigned, the failure is silent: model.parameters() returns an empty iterator, layers assigned to self are not registered, model.to(device) moves nothing, model.state_dict() returns an empty OrderedDict, and training runs without errors while no weights are updated. The silent variant stays invisible until you check parameter count.

    Fix

    Add super().__init__() as the absolute first line of every __init__ method in every nn.Module subclass. This initializes the _parameters, _modules, _buffers, and hook registries — the internal dictionaries that PyTorch's __setattr__ override writes into when you assign a layer to self. The override itself is inherited from nn.Module regardless, but without those dictionaries it has nowhere to register anything: self.fc = nn.Linear(...) either raises immediately or ends up as a plain Python attribute.

    Defining layers inside forward() instead of __init__()
    Symptom

    Training loss decreases slightly across epochs, giving the appearance of learning. Validation accuracy stays at random chance — 10% for 10 classes, 50% for binary classification. model.named_parameters() shows tensors, but their values change by only a negligible amount across training epochs. No errors are raised anywhere in the training loop.

    Fix

    Move every nn.Linear, nn.Conv2d, nn.BatchNorm, nn.Embedding, and any other learnable layer definition from forward() to __init__(). forward() should contain only tensor operations — calls to self.layer_name(x), activation functions, reshapes, and concatenations. Nothing that allocates parameters belongs there.

    Using a Python list instead of nn.ModuleList for dynamic layers
    Symptom

    model.parameters() returns fewer parameters than expected — specifically, zero from the layers stored in the list. Layers in the list are not moved to GPU when model.to(device) is called, causing a device mismatch error on the first forward pass. model.state_dict() does not include those layers, so saving and loading the model silently drops them.

    Fix

    Replace every Python list of nn.Module instances with nn.ModuleList. Replace every Python dict of nn.Module instances with nn.ModuleDict. Both register their contents with PyTorch, making parameters visible to model.parameters(), moveable by model.to(device), and serializable by model.state_dict(). Verify the fix by checking parameter count immediately after model construction.

    Calling model.forward(x) directly instead of model(x)
    Symptom

    Forward hooks registered with model.register_forward_hook() do not fire. Backward hooks registered with model.register_backward_hook() do not fire. Debugging and profiling tools that rely on hooks produce no output. In some configurations, autograd tracking setup is incomplete, causing subtle gradient computation issues.

    Fix

    Always call the model as a callable: output = model(input_tensor). Never call model.forward(input_tensor) directly. The nn.Module __call__ method triggers pre-forward hooks, manages autograd setup, invokes forward(), triggers post-forward hooks, and handles training/eval mode bookkeeping. Calling forward() directly bypasses all of this.
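The hook difference is directly observable. In this sketch, a forward hook increments a counter: it fires when the model is invoked through __call__, and stays silent when forward() is called directly.

```python
# Forward hooks fire through model(x) but not through model.forward(x).
import torch
import torch.nn as nn

model = nn.Linear(4, 2)
calls = []
model.register_forward_hook(lambda mod, inp, out: calls.append(1))

x = torch.randn(1, 4)
model(x)            # goes through __call__ — the hook fires
model.forward(x)    # bypasses __call__ — the hook does NOT fire
print(len(calls))   # 1
```

The outputs of the two calls are numerically identical, which is exactly why this mistake survives code review — only the hook counter reveals the difference.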

    Converting tensors to NumPy inside forward()
    Symptom

    RuntimeError: Can't call numpy() on a Tensor that requires grad if the tensor is in the computation graph. Or a silent gradient chain break if .detach() is called first — the model runs, computes a loss, calls backward(), but the gradients are zero or None for all layers that produced the detached tensor.

    Fix

    Keep every computation inside forward() as a PyTorch tensor operation. PyTorch has equivalents for nearly every NumPy function — use them. If you genuinely need NumPy values for debugging or post-processing, do the conversion outside the model: output = model(x).detach().cpu().numpy(). The .detach() call must happen outside forward(), after the backward pass is complete.
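A short sketch of the correct placement: the model's output stays a tensor through the loss and backward pass, and the NumPy conversion happens only afterwards, outside the graph.

```python
# Correct placement of the NumPy conversion: outside the model, after backward().
import torch
import torch.nn as nn

model = nn.Linear(10, 2)
x = torch.randn(4, 10)

out = model(x)                 # stays a tensor — computation graph intact
loss = out.pow(2).mean()       # scalar loss
loss.backward()                # gradients flow back to model.weight

assert model.weight.grad is not None   # the chain was never broken
arr = out.detach().cpu().numpy()       # NumPy only after the graph has done its job
print(arr.shape)               # (4, 2)
```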

Interview Questions on This Topic

  • QExplain why super().__init__() is non-negotiable in PyTorch. What happens internally to the _parameters and _modules dictionaries?Mid-levelReveal
    nn.Module.__init__() initializes several internal dictionaries that are the foundation of the entire parameter management system: _parameters stores nn.Parameter objects (learnable weights and biases), _modules stores child nn.Module instances (sub-layers and sub-networks), _buffers stores non-parameter tensors like BatchNorm's running_mean and running_var, and the hook registries store registered forward and backward callbacks. When you write self.fc1 = nn.Linear(10, 5) in __init__, Python calls nn.Module's overridden __setattr__ method. This override inspects the assigned value — if it is an nn.Parameter, it goes into _parameters; if it is an nn.Module, it goes into _modules; otherwise it is a plain Python attribute. Without super().__init__(), these dictionaries are never created. The __setattr__ override is still inherited from the class, but it has nowhere to register anything: in current PyTorch it raises AttributeError ("cannot assign module before Module.__init__() call") at the first layer assignment, and in older versions the layer silently became a plain Python attribute — model.parameters() returned an empty iterator, model.to(device) moved nothing, and model.state_dict() produced an empty dict.
  • QContrast nn.Module subclassing with nn.Sequential. In what specific architectural scenario is nn.Sequential technically impossible to use?SeniorReveal
    nn.Sequential is a container that enforces a strictly linear data flow — the output of module N is passed as the sole input to module N+1. It cannot express: (1) residual or skip connections where the block input is added to or concatenated with the block output — y = F(x) + x requires access to x after it has passed through F, which Sequential cannot provide; (2) multiple inputs where two tensors enter the same module; (3) multiple outputs where one module produces two tensors for different downstream modules; (4) conditional branching in forward where the execution path changes based on input properties; (5) non-module operations between layers like tensor reshaping, dimension permutation, or selective masking. The canonical example is any ResNet block: output = conv(x) + x requires holding the original x while also computing conv(x), which Sequential's linear chaining cannot express. You must subclass nn.Module and write custom forward() for any architecture with these characteristics.
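The residual case described in the answer can be sketched in a few lines — a minimal block, not the full ResNet version with convolutions and normalization:

```python
# The canonical case nn.Sequential cannot express: a residual block
# that needs the original x after computing F(x).
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.fc1 = nn.Linear(dim, dim)
        self.fc2 = nn.Linear(dim, dim)

    def forward(self, x):
        out = torch.relu(self.fc1(x))
        out = self.fc2(out)
        return out + x   # needs the held-on-to x — Sequential's linear chain cannot do this


block = ResidualBlock(32)
x = torch.randn(8, 32)
print(block(x).shape)    # torch.Size([8, 32])
```

The skip connection is one line of forward() in a subclass, and structurally impossible inside nn.Sequential.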
  • QDescribe the vanishing gradient problem. How does the choice of activation function in forward() mitigate it, and what is the relationship between this problem and residual connections?SeniorReveal
    Vanishing gradients occur because backpropagation multiplies gradients through every layer via the chain rule. If each layer's local gradient is consistently less than 1.0 — which happens with saturating activations — the product shrinks exponentially with depth. After 20 layers with a per-layer gradient of 0.5, the gradient at the first layer is roughly 0.5^20, which is effectively zero. Early layers stop learning. Sigmoid saturates at both ends with gradient bounded by 0.25. Tanh saturates with gradient bounded by 1.0, but still shrinks. ReLU addresses this directly: its gradient is exactly 1.0 for positive inputs, so gradients pass through unchanged for active neurons. The failure mode of ReLU is dead neurons — units where the pre-activation is always negative get a gradient of zero and never recover. LeakyReLU uses a small gradient (0.01) for negative inputs to prevent permanent death. GELU, the smooth approximation used in transformers and modern MLPs, combines the non-saturation of ReLU with smooth gradients that train more stably. Residual connections solve the problem from a structural angle rather than an activation angle: y = F(x) + x means the gradient of y with respect to x is dF/dx + 1. The constant 1 ensures a gradient of at least 1.0 flows through the skip path regardless of how small dF/dx becomes — deep networks with residual connections do not vanish.
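The multiplicative-shrinkage argument can be checked numerically. This sketch (layer width, depth, and seed are arbitrary choices) compares the gradient magnitude reaching the first layer of a 20-layer MLP under sigmoid versus ReLU activations:

```python
# Numeric sketch of the vanishing-gradient argument:
# first-layer gradient magnitude in a deep MLP, sigmoid vs ReLU.
import torch
import torch.nn as nn


def first_layer_grad_norm(activation: nn.Module, depth: int = 20) -> float:
    torch.manual_seed(0)
    layers = []
    for _ in range(depth):
        layers += [nn.Linear(32, 32), activation]
    net = nn.Sequential(*layers)
    x = torch.randn(16, 32)
    net(x).sum().backward()                    # scalar output, backprop through all layers
    return net[0].weight.grad.norm().item()    # gradient that reached the FIRST layer


sig = first_layer_grad_norm(nn.Sigmoid())
relu = first_layer_grad_norm(nn.ReLU())
print(f"sigmoid: {sig:.2e}  relu: {relu:.2e}")  # sigmoid is orders of magnitude smaller
```

With sigmoid's per-layer gradient bounded by 0.25, the product collapses across 20 layers; ReLU's unit gradient on active paths keeps the first layer trainable.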
  • QWhat is the difference between model.parameters() and model.state_dict()? When would you use one over the other?Mid-levelReveal
    model.parameters() returns an iterator over every nn.Parameter object registered in the module and all its children — learnable weights and biases only. It is the right input for optimizers: optimizer = torch.optim.Adam(model.parameters(), lr=1e-3). The optimizer needs the actual tensor objects so it can update them in place and maintain per-parameter state like Adam's first and second moment estimates. model.state_dict() returns an OrderedDict mapping string parameter names to their tensor values. It includes both learnable parameters and registered buffers — BatchNorm's running_mean, running_var, and num_batches_tracked are buffers, not parameters, but they are part of the model's inference state and must be saved and restored for correct behavior. This is the right interface for serialization: you save the dict, load it back, and the model produces identical outputs. Use parameters() for optimization — the optimizer needs live tensor references. Use state_dict() for persistence and deployment — it captures the complete inference state including non-learnable buffers that parameters() omits.
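The parameters-versus-buffers distinction is easiest to see on a model that actually has buffers. This sketch diffs the key sets of the two interfaces for a Linear + BatchNorm model:

```python
# Concrete view of the parameters() / state_dict() distinction:
# BatchNorm's running statistics are buffers — saved, but not optimized.
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 8), nn.BatchNorm1d(8))

param_names = {name for name, _ in model.named_parameters()}
state_keys = set(model.state_dict().keys())

# Keys present in state_dict() but absent from named_parameters() are buffers
print(sorted(state_keys - param_names))
# ['1.num_batches_tracked', '1.running_mean', '1.running_var']
```

An optimizer built from model.parameters() never touches those three tensors, yet dropping them from a checkpoint would change the model's inference output — which is why serialization goes through state_dict().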
  • QHow does PyTorch TorchScript interact with a standard nn.Module, and what constraints does it impose on the forward() method for production deployment?SeniorReveal
    TorchScript converts a dynamic nn.Module into a static, serializable graph that runs without a Python interpreter — enabling deployment to C++ runtimes, mobile devices, and latency-critical services. Two compilation paths exist: torch.jit.trace records the exact operations executed on a sample input. It is fast and produces an efficient graph, but it cannot capture Python control flow — any if/else or loop that branches differently for different inputs will be frozen to the branch taken during tracing. torch.jit.script analyzes Python source code and compiles it, handling control flow correctly, but requires type annotations on all inputs and outputs of forward() and restricts forward() to a subset of Python — no arbitrary Python objects, no dynamic attribute creation, no calls to non-TorchScript-compatible functions. The result in either case is a ScriptModule saved with torch.jit.save() and loaded anywhere LibTorch runs. For the nn.Module to compile cleanly: forward() must have static type annotations, all control flow must depend on types or constructor arguments rather than runtime tensor values, and all operations must be expressible in TorchScript's type system. With torch.compile() — the 2.0+ compilation path using Dynamo and Inductor — the constraints are looser: it handles more Python dynamism via graph breaks, falling back to eager for unsupported sections, while still compiling the static parts for kernel fusion and operator overlap gains.
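A minimal illustration of the scripting path described above — a loop whose trip count comes from a constructor argument, which torch.jit.script compiles correctly (where trace would simply unroll the loop it happened to observe):

```python
# torch.jit.script handles control flow driven by constructor arguments.
import torch
import torch.nn as nn


class LoopModel(nn.Module):
    def __init__(self, steps: int):
        super().__init__()
        self.fc = nn.Linear(8, 8)
        self.steps = steps   # plain int attribute — static from TorchScript's view

    def forward(self, x):
        for _ in range(self.steps):    # loop over a constructor argument — scriptable
            x = torch.relu(self.fc(x))
        return x


eager = LoopModel(steps=3)
scripted = torch.jit.script(eager)     # compiles forward(), control flow included

x = torch.randn(2, 8)
assert torch.allclose(eager(x), scripted(x))
```

The resulting ScriptModule can be saved with torch.jit.save() and loaded in a LibTorch runtime with no Python present.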

Frequently Asked Questions

What is building a neural network in PyTorch in simple terms?

It is the process of defining a model's structure and behavior using PyTorch's nn.Module class. You write __init__ to declare which layers exist and how large they are. You write forward to describe how data moves through those layers to produce a prediction. PyTorch handles everything else: tracking the weights, computing gradients, moving parameters to GPU, and saving the trained model. You focus on the architecture. The framework handles the infrastructure.

Can I use multiple GPUs for my model?

Yes. PyTorch provides two approaches. nn.DataParallel wraps your model and splits each batch across multiple GPUs on a single machine — simpler to set up, but GPU 0 must scatter inputs, gather outputs, and compute the loss for every replica, a bottleneck that prevents it from scaling well beyond 4 GPUs. DistributedDataParallel (DDP) runs a separate process per GPU, each with its own model replica, and synchronizes gradients via all-reduce after each backward pass — more setup required but scales near-linearly and is the production standard for multi-GPU training. For 2026 deployments, DDP with torch.compile() and mixed precision is the recommended training stack for serious model training on multi-GPU infrastructure.

What is the difference between a layer and a module in PyTorch?

Every layer in PyTorch — nn.Linear, nn.Conv2d, nn.BatchNorm1d, nn.Dropout — is itself a subclass of nn.Module. A module is the more general concept: it can be a single layer with a few parameters, or it can be a complex sub-network containing dozens of layers and other modules nested arbitrarily deep. When you build a model by subclassing nn.Module and assigning layers to self in __init__, your model is a module that contains other modules. The terms are used interchangeably in practice, but module is technically the correct term for any nn.Module subclass, while layer usually refers to a specific operation like a linear transformation or convolution.

Why do we use the forward method instead of just defining a __call__ method?

You define forward() because nn.Module's __call__ method calls forward() internally, but also wraps it with additional behavior that PyTorch needs: registering the forward pass with autograd for gradient tracking, firing any registered forward hooks (used by profilers, debuggers, and feature extraction tools), and managing training versus eval mode for layers like Dropout and BatchNorm. If you overrode __call__ directly, you would lose all of that. By defining forward() and calling the model as model(x), you get all the PyTorch infrastructure for free. This is why calling model.forward(x) directly — bypassing __call__ — is wrong even though it produces numerically identical output.

When should I use nn.ModuleList versus a Python list?

Use nn.ModuleList any time you have a collection of nn.Module instances that you want PyTorch to know about — which is essentially always. A Python list of layers is a plain Python object from PyTorch's perspective: the parameters inside those layers are not tracked by model.parameters(), not moved by model.to(device), not included in model.state_dict(), and not accessible to the optimizer. The model will run — Python will find the layers through the list — but the optimizer cannot update them and the weights are not saved when you checkpoint. Use nn.ModuleList for ordered collections of modules and nn.ModuleDict for named collections. If you only need to store hyperparameters or non-module configuration, a plain Python list or dict is fine.
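The nn.ModuleDict case deserves its own sketch, since the code earlier in this article only shows nn.ModuleList. The class and head names here are illustrative:

```python
# nn.ModuleDict: named sub-modules that stay registered, unlike a plain dict.
import torch
import torch.nn as nn


class MultiHeadModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Linear(32, 16)
        self.heads = nn.ModuleDict({
            "classify": nn.Linear(16, 10),   # registered as heads.classify
            "regress": nn.Linear(16, 1),     # registered as heads.regress
        })

    def forward(self, x, task: str):
        return self.heads[task](torch.relu(self.backbone(x)))


model = MultiHeadModel()
x = torch.randn(4, 32)
print(model(x, "classify").shape)                     # torch.Size([4, 10])
print("heads.classify.weight" in model.state_dict())  # True
```

With a plain Python dict, both heads would vanish from model.parameters(), model.to(device), and every checkpoint.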

Naren — Founder & Author

Developer and founder of TheCodeForge. I built this site because I was tired of tutorials that explain what to type without explaining why it works. Every article here is written to make concepts actually click.
