Building a Neural Network in PyTorch
- nn.Module is the base class for all PyTorch models — define layers in __init__, data flow in forward
- super().__init__() is mandatory — without it, layers and parameters are not registered and model.parameters() returns empty
- model.to(device) moves all parameters to GPU in one atomic call — never manually move individual weights
- Defining layers inside forward() creates new untrained weights every pass — the optimizer updates weights that are immediately discarded
- state_dict saves only learnable parameters — smaller, portable, and version-independent compared to saving the full model
- model.eval() disables Dropout and freezes BatchNorm running statistics — always call it before inference or validation
Production Debug Guide

Common symptoms when nn.Module models fail to learn or fail to deploy.

Symptom: model has zero trainable parameters

Diagnose:

```bash
python -c "import torch; from your_model import YourModel; m = YourModel(); print(sum(p.numel() for p in m.parameters()))"
python -c "from your_model import YourModel; m = YourModel(); print(list(m.named_parameters())[:5])"
```

Fix: ensure super().__init__() is the first line in your __init__ method. Without it, the internal _parameters, _modules, and _buffers dictionaries are never created. Any assignment of an nn.Module or nn.Parameter to self will raise an error or silently fail to register.

Symptom: weights do not change after a training step

Diagnose (with a live model `m` and a one-step training function):

```python
import torch

before = {n: p.clone() for n, p in m.named_parameters()}
train_one_step(m)
changed = {n: not torch.equal(before[n], p) for n, p in m.named_parameters()}
print(changed)

import inspect
print(inspect.getsource(m.forward))  # look for nn.Linear or nn.Conv2d inside forward
```

Fix: the layers are defined in forward() rather than __init__(). Move every nn.Linear, nn.Conv2d, nn.BatchNorm2d, and similar definition to __init__(). forward() should contain only the data flow logic — no layer construction. After fixing, verify by printing a parameter value before and after one optimizer step and confirming it changed.

Symptom: dimension mismatch crash on first forward pass

Diagnose:

```python
import torch

x = torch.randn(1, 784)
print('Input:', x.shape)
out = model.fc1(x)
print('After fc1:', out.shape)

print(model)  # prints all registered layers with their expected in_features and out_features
```

Fix: print shapes after each layer in forward() to identify exactly where the mismatch occurs. For single samples, add unsqueeze(0) to add the batch dimension — PyTorch layers expect input shape (batch_size, features), not (features,). Use a dummy tensor at development time to verify shapes before training.

Symptom: predictions differ between runs on the same input

Fix: call model.eval() before inference. Without it, Dropout randomly zeroes activations and BatchNorm uses batch statistics instead of running statistics — both introduce randomness that should be disabled during prediction. Also verify you are not passing data through a training augmentation pipeline during inference.

Symptom: load_state_dict() fails with missing or unexpected keys

Fix: compare model.state_dict().keys() with the keys in the saved checkpoint. Any architecture change — adding a layer, renaming a layer, changing depth — breaks state_dict compatibility. Use strict=False in load_state_dict() only as a diagnostic step to see which keys are mismatched, then fix the architecture to match.

Production Incident

Symptom: model.parameters() shows tensors exist, but their values change by only a tiny amount after 100 epochs of training. The model memorizes nothing and generalizes nothing.

Root cause: the layers were defined inside the forward() method rather than in __init__. Every time forward() was called — once per batch — Python created entirely new nn.Linear instances with freshly randomized weights. The optimizer held references to the weights from the previous forward pass and updated those. On the next forward pass, those updated weights were garbage collected and replaced by new random ones. The model was running inference on different random weights every single batch. The loss decreased slightly in some epochs due to random variation in the new weights, which looked like learning. It was not.

Fix: move the layer definitions from forward() to __init__(). The layers are now instantiated once at model creation time and reused across every forward pass. The optimizer holds references to the same weight tensors that the model uses for prediction — updates persist, gradients accumulate correctly, and the model now converges to above 94% validation accuracy within 20 epochs on the same dataset.

Lessons:

- Never define layers in forward() — it is called once per batch and should only describe data flow, not create structure
- Layers defined in forward() create new untrained weights every call — the optimizer updates weights that are immediately discarded on the next pass
- The symptom is training loss decreasing while validation accuracy stays at random chance — this combination almost always points to either this bug or a data pipeline issue
- Verify with model.named_parameters() — print parameter values before and after a training step; if they do not change meaningfully, the optimizer is not reaching the weights the model uses

Building a neural network in PyTorch revolves around one central idea: subclassing nn.Module. You define layers in __init__ and the data flow in forward. PyTorch automatically tracks all parameters, moves them to GPU with a single .to(device) call, and integrates cleanly with torch.optim for gradient-based training.
The nn.Module design solves parameter management at scale. Without it, you would manually track thousands of weight matrices, move each to GPU individually, and implement gradient updates by hand. The module system handles all of this through a unified interface: model.parameters() returns every learnable tensor, model.state_dict() serializes the full learnable state, and model.to(device) moves everything atomically — no risk of a weight matrix left behind on CPU while the rest of the model runs on GPU.
The production failure pattern I see most consistently: developers define layers inside forward() instead of __init__. This creates new uninitialized weights on every forward pass. The optimizer updates weights from the previous pass that no longer exist — they were replaced by fresh random tensors when forward() ran again. Training loss can decrease slightly due to random variation, which masks the bug entirely. Validation accuracy stays at random chance. No error is raised. The model trains for 100 epochs and learns nothing.
What Is Building a Neural Network in PyTorch and Why Does It Exist?
Building a neural network in PyTorch is the process of defining a model by subclassing nn.Module — PyTorch's foundational abstraction for everything that involves learnable parameters. It was designed to solve a specific problem: managing the lifecycle of thousands to billions of weight tensors without building that infrastructure yourself every time you train a model.
The architectural separation at the core of nn.Module is deliberate and meaningful. __init__ defines the static structure — which layers exist, their input and output sizes, how they are named. forward defines the dynamic behavior — how a tensor flows through those layers during each call. This separation is what makes the rest of the system work: PyTorch can inspect the model structure without running data through it, serialize only the parameters independently of the forward logic, and move the entire model to GPU atomically with model.to(device).
The key mechanism underneath all of this is Python's __setattr__ override in nn.Module. When you write self.fc1 = nn.Linear(784, 128) in __init__, PyTorch intercepts that assignment, detects that nn.Linear is itself an nn.Module, and registers it in an internal _modules dictionary. When you write self.weight = nn.Parameter(torch.randn(10, 5)), PyTorch detects nn.Parameter and registers it in _parameters. These dictionaries are what model.parameters(), model.state_dict(), and model.to(device) iterate over. None of this works if you skip super().__init__() — the dictionaries are never created, the __setattr__ override is never installed, and every layer you assign to self is just a plain Python attribute that PyTorch cannot see.
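A minimal sketch of that interception, using an illustrative `Probe` module (the `_modules` and `_parameters` dictionaries are internal attributes, inspected here only to make the mechanism visible):

```python
import torch
import torch.nn as nn

class Probe(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(784, 128)                  # intercepted -> _modules['fc1']
        self.weight = nn.Parameter(torch.randn(10, 5))  # intercepted -> _parameters['weight']
        self.scale = 2.0                                # plain float -> ordinary attribute, untracked

m = Probe()
print(list(m._modules.keys()))     # ['fc1']
print(list(m._parameters.keys()))  # ['weight']

# parameters()/named_parameters() walk both dictionaries recursively:
print(sorted(n for n, _ in m.named_parameters()))
# ['fc1.bias', 'fc1.weight', 'weight']
```

Note that `self.scale` never appears in `named_parameters()` — plain Python attributes are simply invisible to the registration machinery.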
The practical consequence at production scale: a model with 100M parameters that is partially on GPU and partially on CPU produces wrong outputs without raising errors. Parameter groups that the optimizer cannot reach do not update. model.parameters() returning fewer tensors than expected is always a registration bug — not a configuration issue.
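A small defensive check — a sketch, not an official API — makes the single-device invariant explicit at startup:

```python
import torch
import torch.nn as nn

def assert_single_device(model: nn.Module) -> torch.device:
    """Raise if parameters or buffers are spread across multiple devices."""
    devices = {t.device for t in model.parameters()} | {t.device for t in model.buffers()}
    assert len(devices) == 1, f"Model is split across devices: {devices}"
    return devices.pop()

model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
print(assert_single_device(model))  # cpu
```

Running this once after every `.to(device)` call turns the silent partial-placement failure into a loud one.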
For 2026 deployments, the nn.Module contract also integrates with torch.compile() — PyTorch's graph compilation path introduced in 2.0 and stabilized through 2.2 and beyond. A properly structured nn.Module compiles cleanly with torch.compile(model), producing kernel fusion and operator overlap that can reduce training time by 30-50% on modern A100 and H100 hardware without changing a line of model code. Models with operations that break the graph — .numpy() calls inside forward, Python data structures used conditionally — either fail to compile or fall back to eager mode silently.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# io.thecodeforge: Production-grade MLP implementation with verification
# Demonstrates the correct nn.Module structure and parameter registration pattern

class ForgeClassifier(nn.Module):
    def __init__(self, input_size: int, hidden_size: int, num_classes: int):
        # MANDATORY: initializes _parameters, _modules, _buffers, _hooks dictionaries
        # Without this, no layer you assign to self will be registered with PyTorch
        super().__init__()

        # Structure defined here once — layers are created and registered at init time
        # PyTorch intercepts these assignments via __setattr__ and adds them to _modules
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.bn1 = nn.BatchNorm1d(hidden_size)  # running_mean/var stored as buffers
        self.dropout = nn.Dropout(p=0.3)        # disabled in eval mode automatically
        self.fc2 = nn.Linear(hidden_size, num_classes)

        # Verify the model structure at init time — catch shape bugs during development
        self._verify_forward(input_size, num_classes)

    def _verify_forward(self, input_size: int, num_classes: int):
        """Run a dummy forward pass at init to catch dimension mismatches immediately."""
        with torch.no_grad():
            dummy = torch.randn(2, input_size)  # batch_size=2 for BatchNorm1d compatibility
            out = self.forward(dummy)
            assert out.shape == (2, num_classes), (
                f"Output shape mismatch: expected (2, {num_classes}), got {out.shape}"
            )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Data flow only — no layer construction here
        # fc1 -> BatchNorm -> ReLU -> Dropout -> fc2
        x = self.fc1(x)      # (batch, input_size) -> (batch, hidden_size)
        x = self.bn1(x)      # normalize across batch dimension
        x = F.relu(x)        # element-wise activation
        x = self.dropout(x)  # zeroes 30% of activations during training, no-op in eval
        x = self.fc2(x)      # (batch, hidden_size) -> (batch, num_classes)
        return x             # raw logits — apply softmax outside, or use CrossEntropyLoss


# Instantiate — _verify_forward runs immediately, catches shape bugs at construction time
model = ForgeClassifier(input_size=784, hidden_size=256, num_classes=10)
print(model)
print(f"Trainable parameters: {sum(p.numel() for p in model.parameters() if p.requires_grad):,}")
print(f"Total parameters (incl. buffers): {sum(p.numel() for p in model.parameters()):,}")

# Verify parameter registration is correct
registered_names = [n for n, _ in model.named_parameters()]
print(f"Registered parameter groups: {registered_names}")

# Move entire model to GPU atomically — all registered parameters and buffers move together
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = model.to(device)
print(f"Model device: {next(model.parameters()).device}")
```
```
ForgeClassifier(
  (fc1): Linear(in_features=784, out_features=256, bias=True)
  (bn1): BatchNorm1d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (dropout): Dropout(p=0.3, inplace=False)
  (fc2): Linear(in_features=256, out_features=10, bias=True)
)
Trainable parameters: 204,042
Total parameters (incl. buffers): 204,042
Registered parameter groups: ['fc1.weight', 'fc1.bias', 'bn1.weight', 'bn1.bias', 'fc2.weight', 'fc2.bias']
Model device: cuda:0
```
- __init__ defines the static structure — which layers exist, their sizes, and how they are named as attributes
- forward defines the dynamic behavior — how a tensor flows through those pre-built layers on each call
- super().__init__() installs PyTorch's __setattr__ override — without it, layer assignments to self are invisible to the framework
- model.parameters() iterates all registered learnable tensors — you never maintain a manual list of weights
- model.to(device) moves every registered parameter and buffer atomically — no risk of partial GPU placement causing silent type errors
- super().__init__() initializes the _parameters, _modules, and _buffers dictionaries and installs the __setattr__ override that makes layer registration automatic — these dictionaries back model.parameters(), model.to(device), and model.state_dict().
- super().__init__() is always the first line of every nn.Module subclass — no exceptions. Without it, the module cannot register layers, parameters, or buffers.
- nn.Sequential covers linear pipelines without a custom forward() — the output of each module automatically becomes the input of the next.
- Branching, skip connections, and multiple inputs require a custom forward() — Sequential is architecturally incapable of expressing non-linear data flow.

Enterprise Persistence: Saving and Loading Forge Models
In a production environment, training a model is only part of the story. You need to persist it, version it, load it reliably six months later, and reproduce its inference behavior exactly. Getting this wrong has a specific failure mode that is not immediately obvious: you load a model, it runs inference without any errors, and it produces predictions — predictions that are quietly wrong because Dropout is still active or because you loaded weights into the wrong architecture without noticing.
The core persistence decision in PyTorch is between saving the full model object and saving only the state_dict. torch.save(model, path) uses Python's pickle to serialize the entire model — code, architecture, and weights together. torch.save(model.state_dict(), path) serializes only the learnable parameter tensors as an OrderedDict of name-to-tensor mappings. The state_dict approach is the production standard for three concrete reasons: the file is smaller because no Python code is embedded, it is portable because you can load weights into a model defined anywhere as long as the parameter names match, and it is safer because pickle can execute arbitrary code when deserializing, which is a real attack surface in shared model repositories.
The full checkpoint pattern extends this for training resumption. Saving only model.state_dict() is sufficient for inference deployment, but if you need to resume training from a checkpoint, you also need the optimizer state — Adam's moment estimates are not recomputed from scratch, and resuming without them produces different training dynamics than if training had never stopped. A complete checkpoint includes model state, optimizer state, epoch number, and the best validation metric so you know whether to update your best-model checkpoint.
One detail that bites teams in production: torch.load() historically defaulted to weights_only=False, which means it will execute arbitrary pickle code. PyTorch 2.4 began emitting a FutureWarning about this, and PyTorch 2.6 changed the default to weights_only=True, which is safer. If you are loading state_dicts — which you should be — explicitly pass weights_only=True regardless of version to future-proof your code and silence the warnings in CI.
```python
# io.thecodeforge: Production model persistence patterns
# Covers inference deployment, training resumption, and safe loading
import torch
import os
from pathlib import Path

MODEL_DIR = Path("io/thecodeforge/models")
MODEL_DIR.mkdir(parents=True, exist_ok=True)

# ─── Pattern 1: Inference deployment — save state_dict only ─────────────────
deployment_path = MODEL_DIR / "classifier_v1.pth"
torch.save(model.state_dict(), deployment_path)
print(f"Saved inference weights: {deployment_path} ({os.path.getsize(deployment_path) / 1e6:.1f} MB)")

# Load for inference — weights_only=True prevents arbitrary pickle execution
inference_model = ForgeClassifier(input_size=784, hidden_size=256, num_classes=10)
inference_model.load_state_dict(
    torch.load(deployment_path, map_location='cpu', weights_only=True)
)
inference_model.eval()  # MANDATORY: disables Dropout, freezes BatchNorm running stats
inference_model = inference_model.to(device)

# Verify loaded weights match the original
for (n1, p1), (n2, p2) in zip(model.named_parameters(), inference_model.named_parameters()):
    assert n1 == n2, f"Parameter name mismatch: {n1} vs {n2}"
    assert torch.equal(p1.cpu(), p2.cpu()), f"Value mismatch for {n1}"
print("Inference model: all parameters loaded and verified.")

# ─── Pattern 2: Training checkpoint — save full state for resumption ─────────
def save_checkpoint(model, optimizer, epoch: int, val_loss: float, path: Path):
    """Save everything needed to resume training exactly where it left off."""
    torch.save({
        'epoch': epoch,
        'model_state_dict': model.state_dict(),
        'optimizer_state_dict': optimizer.state_dict(),
        'val_loss': val_loss,
    }, path)
    print(f"Checkpoint saved: epoch {epoch}, val_loss {val_loss:.4f}")

def load_checkpoint(model, optimizer, path: Path, device: torch.device):
    """Resume training from a checkpoint — restores model weights and optimizer state."""
    # weights_only=False only for checkpoints from a source you fully control;
    # prefer weights_only=True when the dict holds only tensors and scalars
    checkpoint = torch.load(path, map_location=device, weights_only=False)
    model.load_state_dict(checkpoint['model_state_dict'])
    optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
    start_epoch = checkpoint['epoch'] + 1
    best_val_loss = checkpoint['val_loss']
    print(f"Resumed from epoch {checkpoint['epoch']}, val_loss {best_val_loss:.4f}")
    return model, optimizer, start_epoch, best_val_loss

# Example checkpoint save during training loop
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
checkpoint_path = MODEL_DIR / "checkpoint_epoch_10.pth"
save_checkpoint(model, optimizer, epoch=10, val_loss=0.1823, path=checkpoint_path)

# Resume training from checkpoint
fresh_model = ForgeClassifier(input_size=784, hidden_size=256, num_classes=10).to(device)
fresh_optimizer = torch.optim.Adam(fresh_model.parameters(), lr=1e-3)
fresh_model, fresh_optimizer, start_epoch, best_val = load_checkpoint(
    fresh_model, fresh_optimizer, checkpoint_path, device
)
print(f"Training will resume from epoch {start_epoch}")
```
```
Inference model: all parameters loaded and verified.
Checkpoint saved: epoch 10, val_loss 0.1823
Resumed from epoch 10, val_loss 0.1823
Training will resume from epoch 11
```
Call model.eval() immediately after loading weights for inference — before moving to device, before the first forward pass. Without it, Dropout randomly zeroes activations and BatchNorm uses batch statistics instead of its learned running statistics. The model will produce different predictions for the same input on every call, and the difference will not be small enough to ignore in production. Treat model.eval() after load_state_dict() as a mandatory step in your inference initialization sequence, not an optional call.

- weights_only=True in torch.load() prevents arbitrary pickle execution — use it whenever loading a state_dict from any source you do not fully control
- For inference deployment, save model.state_dict() with torch.save() — smallest file, no code dependency, load with weights_only=True
- For training resumption, checkpoint model.state_dict(), optimizer.state_dict(), the current epoch, and the best validation metric — resuming without optimizer state produces different training dynamics
- The model architecture must match the checkpoint exactly before calling load_state_dict()
- For Python-free deployment, export with torch.jit.script() or torch.jit.trace() and save with torch.jit.save() — produces a self-contained ScriptModule that runs in LibTorch without Python

Containerizing the Forge Model Service
Getting a PyTorch model to run correctly on a developer workstation is step one. Getting it to run correctly in production — on a different machine, a different OS, a different GPU driver, possibly six months from now — is the actual engineering problem. Containerization with Docker is the standard answer, but the details matter more than most tutorials acknowledge.
The version pinning problem is where most teams make their first mistake. Pulling pytorch/pytorch:latest in production means your deployment environment changes every time a new PyTorch release ships. Changes between minor versions can affect numerical precision, change default behaviors for certain operations, and silently alter model outputs. Pin the full triple: PyTorch version, CUDA version, and cuDNN version. These three together determine the exact kernel implementations your model runs on. A mismatch between cuDNN versions on the same PyTorch base can produce numerically different outputs from the same weights.
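At the application level you can enforce the pin at service startup. A hedged sketch — the `check_runtime_pins` helper and the pinned version strings are illustrative, not a PyTorch API:

```python
import torch

# Illustrative pins — keep these in sync with the Dockerfile base image tag
PINNED_TORCH = "2.2.0"
PINNED_CUDA = "12.1"

def check_runtime_pins(expected_torch: str = PINNED_TORCH,
                       expected_cuda: str = PINNED_CUDA) -> None:
    """Fail fast at startup if the runtime drifts from the pinned version triple."""
    actual = torch.__version__.split("+")[0]  # strip a local build tag like '2.2.0+cu121'
    if actual != expected_torch:
        raise RuntimeError(f"PyTorch {actual} does not match pinned {expected_torch}")
    # The CUDA pin only matters when the service is actually running on a GPU host
    if torch.cuda.is_available() and torch.version.cuda != expected_cuda:
        raise RuntimeError(f"CUDA {torch.version.cuda} does not match pinned {expected_cuda}")
```

Calling this before loading any weights turns a silent version drift into an immediate, diagnosable crash at deploy time.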
The image size problem compounds quickly in multi-service deployments. A CUDA-enabled PyTorch runtime image is typically 5-7GB. A CPU-only image is under 1GB. If your inference service runs on CPU-optimized instances — which is common for cost efficiency in steady-state serving — you are pulling 5-7GB per node during deployments when 1GB would be sufficient. This is not a philosophical problem — it translates directly to longer deployment times, higher container registry egress costs, and slower autoscaling response.
The model weight inclusion problem is the third one. Baking a 500MB model file into a Docker image with COPY means every CI build, every image push, and every container pull moves that 500MB. For a team with 10 engineers committing multiple times a day, this accumulates. The correct pattern is to exclude model weights from the image and mount them from a volume, or download them at container startup from an object store like S3 or GCS. This keeps the image lean, makes weight updates independent of image rebuilds, and allows you to run canary deployments with different weight versions without rebuilding images.
```dockerfile
# io.thecodeforge: Production PyTorch inference container
# Pin the full version triple — never use 'latest' in production
# PyTorch 2.2.0 + CUDA 12.1 + cuDNN 8 is a tested, stable combination for 2026 deployments
FROM pytorch/pytorch:2.2.0-cuda12.1-cudnn8-runtime

WORKDIR /app

# Install system-level dependencies before pip — this layer caches independently
# libgl1-mesa-glx is required by OpenCV; libgomp1 is required by some PyTorch operations
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
        libgl1-mesa-glx \
        libgomp1 \
        curl \
    && rm -rf /var/lib/apt/lists/*

# Separate requirements from source — requirements layer caches until requirements.txt changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy inference source code only
# Model weights are NOT copied here — they are mounted or downloaded at startup
COPY ./src /app/src

# Model path is configurable via environment variable
# In Kubernetes: mount a PVC at /app/models or use an init container to download from S3
ENV MODEL_PATH=/app/models/classifier_v1.pth
ENV MODEL_INPUT_SIZE=784
ENV MODEL_HIDDEN_SIZE=256
ENV MODEL_NUM_CLASSES=10

# Run as non-root user — required by most enterprise security policies
RUN useradd -m -u 1001 forge
USER forge

# Health check verifies the inference service starts and the model loads correctly
HEALTHCHECK --interval=30s --timeout=15s --start-period=30s --retries=3 \
    CMD curl -f http://localhost:8080/health || exit 1

ENTRYPOINT ["python", "src/inference_service.py"]
```

A CPU-only variant of this image would be roughly 800MB using a CPU-only pytorch/pytorch:2.2.0 base.
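The ENV values baked into the image above would be consumed at service startup roughly like this — a sketch of a configuration loader for a hypothetical inference_service.py (the `ServiceConfig` name is illustrative):

```python
import os
from dataclasses import dataclass
from pathlib import Path

@dataclass(frozen=True)
class ServiceConfig:
    model_path: Path
    input_size: int
    hidden_size: int
    num_classes: int

def load_config() -> ServiceConfig:
    """Read the model configuration the Dockerfile injects via ENV, with matching defaults."""
    return ServiceConfig(
        model_path=Path(os.environ.get("MODEL_PATH", "/app/models/classifier_v1.pth")),
        input_size=int(os.environ.get("MODEL_INPUT_SIZE", "784")),
        hidden_size=int(os.environ.get("MODEL_HIDDEN_SIZE", "256")),
        num_classes=int(os.environ.get("MODEL_NUM_CLASSES", "10")),
    )

cfg = load_config()
print(cfg)
# At startup the service would then build the architecture from cfg, call
# load_state_dict(torch.load(cfg.model_path, weights_only=True)), and
# model.eval() before accepting the first request.
```

Because the weights live at a path the container does not own, a canary deployment can point MODEL_PATH at a different weight version without rebuilding the image.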
Common Mistakes and How to Avoid Them
Most nn.Module bugs fall into a small set of categories. They are not obscure — they appear consistently across codebases from beginners and experienced engineers alike, usually under deadline pressure when someone is focused on getting the model working and skips a step that seemed optional.
Forgetting super().__init__() is the most foundational mistake. In modern PyTorch, the first nn.Module you assign to self raises AttributeError: cannot assign module before Module.__init__() call, which at least fails fast. The subtler cases fail silently: plain tensors assigned to self, or layers stored in ordinary Python containers, register nothing and raise nothing. The failure comes later, when model.parameters() returns an empty iterator, model.to(device) does nothing, or torch.save(model.state_dict()) produces a file with zero keys. By that point, the developer is often deep into debugging the training loop rather than looking at model initialization.
Using Python lists to store layers is the mistake that catches experienced developers. If you have used other frameworks or written Python professionally, using a list of layers feels completely natural — it is idiomatic Python. But a Python list of nn.Module instances is invisible to PyTorch. The parameters in those layers are not in model.parameters(), they are not moved by model.to(device), and the optimizer cannot update them. The model runs, the loss changes slightly due to the layers in the list processing data, and nothing indicates the optimizer is completely ignoring them. Use nn.ModuleList for any list of modules, and nn.ModuleDict for any dictionary of named modules.
The .numpy() inside forward() mistake is common in teams transitioning from NumPy-heavy workflows. It always produces a RuntimeError if the tensor requires gradients, or a silent gradient chain break if you call .detach() first. Both are wrong inside forward(). All computation in forward() must stay in PyTorch tensor operations. If you need NumPy for debugging, do it outside the computation graph after calling .detach().cpu().
One 2026-specific addition worth calling out: with torch.compile() becoming the standard path for production training, any Python-level control flow in forward() that depends on tensor values — not tensor shapes, but actual data values — will prevent the compiler from tracing the graph cleanly. This was always a theoretical concern; now it is a practical one because compile() is in the default training stack for many teams. Keep forward() deterministic in its control flow — conditional branches should depend on constructor arguments, not on runtime tensor contents.
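A sketch of the distinction — both module names are hypothetical, and torch.compile() itself is not invoked here:

```python
import torch
import torch.nn as nn

class GraphBreaking(nn.Module):
    """Branch depends on tensor VALUES — the compiler cannot trace this statically."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(16, 16)

    def forward(self, x):
        if x.sum() > 0:  # data-dependent: forces a graph break or eager fallback
            return self.fc(x)
        return -self.fc(x)

class CompileFriendly(nn.Module):
    """Branch depends on a constructor ARGUMENT — resolved once, traces cleanly."""
    def __init__(self, negate: bool):
        super().__init__()
        self.fc = nn.Linear(16, 16)
        self.negate = negate  # fixed at construction, never read from tensor data

    def forward(self, x):
        out = self.fc(x)
        return -out if self.negate else out

x = torch.randn(4, 16)
print(CompileFriendly(negate=False)(x).shape)  # torch.Size([4, 16])
```

Both modules compute the same kind of thing; only the second one keeps its control flow out of the traced data path.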
```python
# io.thecodeforge: Common nn.Module mistake patterns and their correct counterparts
import torch
import torch.nn as nn

# ─── MISTAKE 1: Missing super().__init__() ───────────────────────────────────
class BrokenInit(nn.Module):
    def __init__(self):
        # super().__init__() omitted — _parameters and _modules never created
        self.fc = nn.Linear(10, 2)
        # AttributeError at construction: cannot assign module before Module.__init__() call

    def forward(self, x):
        return self.fc(x)

class CorrectInit(nn.Module):
    def __init__(self):
        super().__init__()  # FIRST LINE — always
        self.fc = nn.Linear(10, 2)  # now registered in _modules

    def forward(self, x):
        return self.fc(x)

# ─── MISTAKE 2: Python list instead of nn.ModuleList ────────────────────────
class BrokenDynamicModel(nn.Module):
    def __init__(self, depth: int):
        super().__init__()
        # Python list: invisible to model.parameters(), model.to(device), state_dict()
        self.layers = [nn.Linear(64, 64) for _ in range(depth)]

    def forward(self, x):
        for layer in self.layers:
            x = torch.relu(layer(x))
        return x

# Verify the problem:
bad_model = BrokenDynamicModel(depth=3)
print(f"BrokenDynamicModel trainable params: {sum(p.numel() for p in bad_model.parameters())}")
# Output: BrokenDynamicModel trainable params: 0  <-- optimizer has nothing to update

class CorrectDynamicModel(nn.Module):
    def __init__(self, depth: int):
        super().__init__()
        # nn.ModuleList registers all contained modules — parameters are visible and trackable
        self.layers = nn.ModuleList([nn.Linear(64, 64) for _ in range(depth)])

    def forward(self, x):
        for layer in self.layers:
            x = torch.relu(layer(x))
        return x

good_model = CorrectDynamicModel(depth=3)
print(f"CorrectDynamicModel trainable params: {sum(p.numel() for p in good_model.parameters()):,}")
# Output: CorrectDynamicModel trainable params: 12,480

# ─── MISTAKE 3: Breaking the gradient chain with .numpy() in forward() ──────
class BrokenForward(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(10, 2)

    def forward(self, x):
        x = self.fc(x)
        # WRONG: breaks the computational graph — gradients cannot flow past this point
        # x = x.detach().cpu().numpy()  # RuntimeError or silent gradient break
        # WRONG: also breaks it
        # x = x.cpu().numpy()  # RuntimeError: can't call numpy() on tensor requiring grad
        return x  # keep everything as PyTorch tensors inside forward()

# ─── MISTAKE 4: Layers defined inside forward() ─────────────────────────────
class BrokenLayerPlacement(nn.Module):
    def __init__(self):
        super().__init__()
        # No layers defined here — they appear in forward() instead

    def forward(self, x):
        # WRONG: creates a new nn.Linear with random weights on every call
        # The optimizer updates weights from the previous call that no longer exist
        fc = nn.Linear(784, 10)  # new random weights every batch — model never learns
        return fc(x)

# ─── Correct: all layers in __init__, only data flow in forward() ────────────
class CorrectLayerPlacement(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(784, 10)  # created once, reused every forward call

    def forward(self, x):
        return self.fc(x)  # same weights every call — optimizer updates persist

# ─── Device placement verification ──────────────────────────────────────────
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = CorrectLayerPlacement().to(device)
print(f"All parameters on device: {next(model.parameters()).device}")

# Always call model(x), never model.forward(x)
# model.forward(x) bypasses __call__, which fires hooks, manages training mode, and tracks autograd
test_input = torch.randn(1, 784).to(device)
output = model(test_input)  # correct
print(f"Output shape: {output.shape}")
```
```
BrokenDynamicModel trainable params: 0
CorrectDynamicModel trainable params: 12,480
All parameters on device: cuda:0
Output shape: torch.Size([1, 10])
```
The most dangerous version of the mistake is defining layers inside forward(). It does not raise an error. The model runs. The loss changes. Everything looks like it is training. The bug only becomes visible when validation accuracy stays at random chance despite 50 epochs of training — at which point the GPU hours are already spent. The rule is simple and absolute: __init__ builds the structure, forward describes the data flow. Nothing that creates a layer or allocates a parameter belongs in forward().

- Layers stored in a Python list are unreachable: model.parameters() returns nothing from them, model.to(device) ignores them, and the optimizer cannot update them. Use nn.ModuleList.
- Print sum(p.numel() for p in model.parameters() if p.requires_grad) immediately after model construction — any unexpected number indicates a registration bug.
- Layers created in forward() get new random weights every call — the optimizer cannot learn from them. Always define layers in __init__.
- Call model(x), not model.forward(x) — __call__ manages hooks, autograd tracking, and mode state that forward() alone does not.
- If a model silently fails to learn, check first whether super().__init__() is present as the first line, and second whether any layers are stored in a Python list or dict instead of nn.ModuleList or nn.ModuleDict.
- loss.backward() requires a scalar starting point — reduce the loss to a single value before calling backward().
- If one specific layer never updates, check whether it is created inside forward(), or whether it sits in a Python list that is not registered.
- Always call model.eval() after loading weights and before any inference call.

| Aspect | Manual Matrix Math | PyTorch nn.Module |
|---|---|---|
| Parameter Tracking | Manual — you maintain a dict or list of weight tensors and must not forget any | Automatic — model.parameters() and model.named_parameters() iterate every registered tensor |
| GPU Portability | Manual — every tensor must be moved individually with .to(device), easy to miss one | Atomic — model.to(device) moves every registered parameter and buffer in a single call |
| Gradient Computation | Manual — you must call .backward() on the right tensor and implement update logic | Automatic — Autograd tracks the computation graph; torch.optim handles parameter updates |
| Model Serialization | Custom logic — you must know which tensors to save, in which order, and how to restore them | Built-in — model.state_dict() and load_state_dict() handle serialization with named keys |
| Training / Eval Mode | Manual — you must track mode state and toggle Dropout and BatchNorm behavior yourself | Built-in — model.train() and model.eval() propagate recursively to all child modules |
| Compiler Compatibility | None — manual tensor code has no structural guarantees for torch.compile() optimization | Full — properly structured nn.Module compiles cleanly with torch.compile() for 30-50% training speedup |
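The serialization row above can be sketched as follows. The Sequential architecture and the filename `checkpoint.pt` are arbitrary examples; the key point is that state_dict() saves only named weight tensors, so restoring requires rebuilding the same architecture first.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 4), nn.ReLU(), nn.Linear(4, 2))
torch.save(model.state_dict(), "checkpoint.pt")  # only named tensors, no code

# Restoring requires reconstructing the same architecture first
restored = nn.Sequential(nn.Linear(8, 4), nn.ReLU(), nn.Linear(4, 2))
restored.load_state_dict(torch.load("checkpoint.pt"))
restored.eval()  # disable Dropout / freeze BatchNorm stats before inference

x = torch.randn(3, 8)
with torch.no_grad():
    same = torch.allclose(model(x), restored(x))  # identical predictions
print("restored model matches:", same)
print("saved keys:", sorted(torch.load("checkpoint.pt").keys()))
```

Each key in the state_dict is the attribute path to the tensor (for nn.Sequential, the layer index plus `.weight` or `.bias`), which is what makes checkpoints portable across code versions as long as the attribute names match.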
🎯 Key Takeaways
- Building a neural network in PyTorch means subclassing nn.Module — understanding what that abstraction provides (automatic parameter tracking, GPU portability, optimizer integration, serialization) is more important than memorizing the syntax.
- super().__init__() is mandatory and must be the first line of every __init__ — without it, no layers are registered, model.parameters() returns empty, and model.to(device) does nothing.
- Define layers in __init__, data flow in forward — this separation is the entire contract of nn.Module and violating it produces bugs that are silent, expensive to debug, and easy to prevent.
- Use nn.ModuleList for lists of modules, nn.ModuleDict for named collections — Python lists and dicts are invisible to PyTorch's parameter tracking, serialization, and device management.
- Call model(x), not model.forward(x) — the __call__ mechanism manages hooks, autograd tracking, and training/eval mode state that forward() alone does not.
- model.eval() after loading weights is mandatory for inference — Dropout and BatchNorm behave fundamentally differently in training and eval mode, and the difference directly affects prediction quality.
- Verify trainable parameter count immediately after model construction — any unexpected number indicates a registration bug that will waste training compute if left undetected.
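The parameter-count check from the last takeaway can be sketched as a post-construction sanity routine. TinyNet and its layer sizes are illustrative assumptions; the arithmetic in the assert is what matters.

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(784, 16)
        self.fc2 = nn.Linear(16, 10)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

model = TinyNet()
n_trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable params: {n_trainable}")
# Expected: (784*16 + 16) + (16*10 + 10) = 12730; any other number is a registration bug
assert n_trainable == 784 * 16 + 16 + 16 * 10 + 10

# named_parameters confirms every layer registered under the expected attribute
for name, p in model.named_parameters():
    print(name, tuple(p.shape))
```

Running this immediately after construction costs nothing and catches a missing super().__init__() or an unregistered layer before any training compute is spent.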
Interview Questions on This Topic
- Q: Explain why super().__init__() is non-negotiable in PyTorch. What happens internally to the _parameters and _modules dictionaries? (Mid-level)
- Q: Contrast nn.Module subclassing with nn.Sequential. In what specific architectural scenario is nn.Sequential technically impossible to use? (Senior)
- Q: Describe the vanishing gradient problem. How does the choice of activation function in forward() mitigate it, and what is the relationship between this problem and residual connections? (Senior)
- Q: What is the difference between model.parameters() and model.state_dict()? When would you use one over the other? (Mid-level)
- Q: How does TorchScript interact with a standard nn.Module, and what constraints does it impose on the forward() method for production deployment? (Senior)
Frequently Asked Questions
What is building a neural network in PyTorch in simple terms?
It is the process of defining a model's structure and behavior using PyTorch's nn.Module class. You write __init__ to declare which layers exist and how large they are. You write forward to describe how data moves through those layers to produce a prediction. PyTorch handles everything else: tracking the weights, computing gradients, moving parameters to GPU, and saving the trained model. You focus on the architecture. The framework handles the infrastructure.
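A minimal sketch of that division of labor, using an invented toy regression task (the class name Net and all sizes are arbitrary):

```python
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):                  # structure: which layers exist
        super().__init__()
        self.fc1 = nn.Linear(3, 8)
        self.fc2 = nn.Linear(8, 1)

    def forward(self, x):                # behavior: how data flows
        return self.fc2(torch.relu(self.fc1(x)))

model = Net()
opt = torch.optim.SGD(model.parameters(), lr=0.1)  # framework hands over the weights
x, y = torch.randn(16, 3), torch.randn(16, 1)

loss = nn.functional.mse_loss(model(x), y)
loss.backward()                          # autograd computes every gradient
opt.step()                               # optimizer updates every registered weight
print(f"loss after one step: {nn.functional.mse_loss(model(x), y).item():.4f}")
```

Notice that nothing here moves tensors, tracks weights, or computes derivatives by hand; those jobs belong to the framework once the module is defined.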
Can I use multiple GPUs for my model?
Yes. PyTorch provides two approaches. nn.DataParallel wraps your model and splits each batch across multiple GPUs on a single machine — simpler to set up but has a known bottleneck at the parameter server on GPU 0 and does not scale well beyond 4 GPUs. DistributedDataParallel (DDP) runs a separate process per GPU, each with its own model replica, and synchronizes gradients via all-reduce after each backward pass — more setup required but scales linearly and is the production standard for multi-GPU training. For 2026 deployments, DDP with torch.compile() and mixed precision is the recommended training stack for serious model training on multi-GPU infrastructure.
What is the difference between a layer and a module in PyTorch?
Every layer in PyTorch — nn.Linear, nn.Conv2d, nn.BatchNorm1d, nn.Dropout — is itself a subclass of nn.Module. A module is the more general concept: it can be a single layer with a few parameters, or it can be a complex sub-network containing dozens of layers and other modules nested arbitrarily deep. When you build a model by subclassing nn.Module and assigning layers to self in __init__, your model is a module that contains other modules. The terms are used interchangeably in practice, but module is technically the correct term for any nn.Module subclass, while layer usually refers to a specific operation like a linear transformation or convolution.
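The nesting described above can be sketched like this (Block and Net are hypothetical names): a module containing layers is itself assigned inside another module, and named_modules() walks the whole tree.

```python
import torch.nn as nn

class Block(nn.Module):              # a module that contains other modules
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 4)
        self.norm = nn.BatchNorm1d(4)

    def forward(self, x):
        return self.norm(self.fc(x))

class Net(nn.Module):                # modules nest arbitrarily deep
    def __init__(self):
        super().__init__()
        self.block = Block()
        self.head = nn.Linear(4, 2)

    def forward(self, x):
        return self.head(self.block(x))

net = Net()
for name, mod in net.named_modules():
    print(name or "(root)", type(mod).__name__)
print(isinstance(nn.Linear(4, 2), nn.Module))  # every layer is a module
```

The dotted names (block.fc, block.norm, head) are the same paths that appear as keys in state_dict(), which is why consistent attribute naming matters for checkpointing.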
Why do we use the forward method instead of just defining a __call__ method?
You define forward() because nn.Module's __call__ method calls forward() internally, but also wraps it with additional behavior that PyTorch needs: registering the forward pass with autograd for gradient tracking, firing any registered forward hooks (used by profilers, debuggers, and feature extraction tools), and managing training versus eval mode for layers like Dropout and BatchNorm. If you overrode __call__ directly, you would lose all of that. By defining forward() and calling the model as model(x), you get all the PyTorch infrastructure for free. This is why calling model.forward(x) directly — bypassing __call__ — is wrong even though it produces numerically identical output.
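A minimal sketch of that difference, using register_forward_hook: the hook fires when the model is called as model(x), but not when forward() is invoked directly.

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)
calls = []
model.register_forward_hook(lambda mod, inp, out: calls.append("hook"))

x = torch.randn(1, 4)
model(x)          # goes through __call__, so the hook fires
model.forward(x)  # bypasses __call__, so the hook does NOT fire
print(calls)      # one entry, not two
```

The outputs of the two calls are numerically identical, which is exactly why the bug is easy to miss: everything hook-based (profilers, feature extractors, some debugging tools) silently sees only half the traffic.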
When should I use nn.ModuleList versus a Python list?
Use nn.ModuleList any time you have a collection of nn.Module instances that you want PyTorch to know about — which is essentially always. A Python list of layers is a plain Python object from PyTorch's perspective: the parameters inside those layers are not tracked by model.parameters(), not moved by model.to(device), not included in model.state_dict(), and not accessible to the optimizer. The model will run — Python will find the layers through the list — but the optimizer cannot update them and the weights are not saved when you checkpoint. Use nn.ModuleList for ordered collections of modules and nn.ModuleDict for named collections. If you only need to store hyperparameters or non-module configuration, a plain Python list or dict is fine.
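The invisibility described above is easy to demonstrate. In this sketch (class names are illustrative), the two models are architecturally identical, but only the nn.ModuleList version exposes its weights to PyTorch:

```python
import torch.nn as nn

class WithPlainList(nn.Module):
    def __init__(self):
        super().__init__()
        # WRONG: a plain Python list is invisible to parameter tracking
        self.layers = [nn.Linear(4, 4) for _ in range(3)]

    def forward(self, x):
        for layer in self.layers:   # the model still runs...
            x = layer(x)
        return x

class WithModuleList(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.ModuleList(nn.Linear(4, 4) for _ in range(3))

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

print(sum(p.numel() for p in WithPlainList().parameters()))   # 0: nothing tracked
print(sum(p.numel() for p in WithModuleList().parameters()))  # 3 * (4*4 + 4) = 60
```

The plain-list model trains on random frozen weights forever, with no error, which is the classic signature of this bug.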
Developer and founder of TheCodeForge. I built this site because I was tired of tutorials that explain what to type without explaining why it works. Every article here is written to make concepts actually click.