0/1 Knapsack Problem: Dynamic Programming Deep Dive with Java
- 0/1 Knapsack: choose items to maximize value within a weight limit, each item taken at most once
- dp[i][w] = max value using first i items with capacity w
- Recurrence: max(skip: dp[i-1][w], take: val[i] + dp[i-1][w - wt[i]])
- 1D optimization reduces space to O(W) but you must iterate capacity RIGHT-TO-LEFT — this is not optional
- Forward iteration in 1D silently becomes unbounded knapsack — no crash, no exception, just wrong answers that look plausible
- O(n*W) is pseudo-polynomial: fast when W is small, completely infeasible when W exceeds roughly 10^7 with a 2D table
Production Debug Guide

Common symptoms when knapsack-based optimizers produce wrong results in production. Most of these have no stack trace — the algorithm runs cleanly and returns a wrong number.

Symptom: Wrong max value that is suspiciously high — possibly 2x or 3x the expected result

```shell
grep -n 'for.*w.*capacity\|for.*w.*W\|for.*w =\|w++\|w--' io/thecodeforge/dp/KnapsackSolver.java
grep -c 'w--\|w >=.*weights\|w >= wt' io/thecodeforge/dp/KnapsackSolver.java
```

Symptom: Negative value or overflow in the returned maximum value

```shell
grep -n 'int\[\]\[\] dp\|int\[\] dp\|int\[\] values\|int\[\] val' io/thecodeforge/dp/KnapsackSolver.java
grep -n 'long\[\]\[\] dp\|long\[\] dp\|long\[\] values' io/thecodeforge/dp/KnapsackSolver.java
```

Symptom: ArrayIndexOutOfBoundsException during DP table computation at runtime

```shell
grep -n 'new.*\[.*\]\[.*\]\|new long\[\|new int\[' io/thecodeforge/dp/KnapsackSolver.java
grep -n 'n + 1\|n+1\|capacity + 1\|W + 1' io/thecodeforge/dp/KnapsackSolver.java
```
Resource allocation is one of the most common problems in software engineering, and it shows up in more places than most engineers expect. Budget planning tools, cloud VM packing algorithms, cargo loading systems, ad auction bidding engines, compiler register allocation — all of these are flavors of the same underlying constraint: given a fixed budget of some resource, which items do you select to extract maximum value? The 0/1 Knapsack Problem is the canonical formulation of this class of challenge, and mastering it gives you a reusable mental template for an entire family of optimization problems.
The naive recursive approach explodes into O(2^n) time because every item forces a binary branch — take it or leave it — and those branches multiply exponentially. With 40 items that is over a trillion subsets. Dynamic Programming tames this by recognizing that subproblems overlap: the best way to fill a 7 kg bag using items 1 through 4 gets recomputed dozens of times in the recursive tree. DP caches each unique subproblem result so it is solved exactly once, collapsing exponential work into a pseudo-polynomial O(n × W) solution.
By the end of this article you will understand not just the DP table mechanics but why each cell means what it means, how to reconstruct which items were actually chosen rather than just the maximum value, how to reduce space from O(n × W) to O(W) without losing correctness, and the subtle iteration direction bug that causes silent wrong answers in production optimizers. You will also be ready for the four hardest interview questions that trip up engineers who have memorized the algorithm without understanding it.
What is 0/1 Knapsack — Plain English Before the Math
The 0/1 Knapsack problem: given n items each with a weight and a value, and a knapsack with a fixed weight capacity W, choose a subset of items that fits within W and maximizes total value. The 0/1 constraint means each item can be taken at most once — you either include it fully or exclude it entirely. There is no fractional option.
Real-world framing: you are packing for a camping trip with a bag that holds 10 kg. Each piece of gear has a weight and a usefulness score. You want to maximize usefulness without overloading the bag. The catch is that taking the single heaviest most-useful item might block you from taking three lighter items whose combined usefulness is higher. Greedy selection by value-to-weight ratio does not work here — that is fractional knapsack, a different problem.
The brute-force approach tries every possible subset — all 2^n combinations. For n=20 that is one million subsets, manageable. For n=40 that is over one trillion — completely infeasible. Dynamic Programming reduces this to O(n × W) by observing that the same subproblem — 'what is the best I can do with items 1 through k and capacity c?' — appears hundreds of times in the recursive tree. DP solves each unique subproblem exactly once and stores the result.
The key insight that makes DP work here is optimal substructure: the optimal solution to the full problem contains optimal solutions to its subproblems. If you decided optimally on item k, the remaining items 1 through k-1 filling the remaining capacity must also be chosen optimally. This property, combined with overlapping subproblems, is what separates problems where DP works from problems where it does not.
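The exponential blow-up is easy to see in code. Below is a minimal sketch of the naive take-or-skip recursion; the class and method names here are illustrative, not part of the solver presented later in this article:

```java
// Naive O(2^n) recursion: every item forces a take-or-skip branch,
// and the same (i, w) subproblem is recomputed many times.
public class NaiveKnapsack {
    static long bestValue(int[] wt, long[] val, int i, int w) {
        if (i == wt.length || w == 0) return 0;       // no items left, or no capacity
        long skip = bestValue(wt, val, i + 1, w);     // branch 1: leave item i
        long take = 0;
        if (wt[i] <= w) {                             // branch 2: take item i, if it fits
            take = val[i] + bestValue(wt, val, i + 1, w - wt[i]);
        }
        return Math.max(skip, take);
    }

    public static void main(String[] args) {
        // Same items as the worked example later in the article
        System.out.println(bestValue(new int[]{2, 2, 3}, new long[]{6L, 10L, 12L}, 0, 5)); // 22
    }
}
```

Correct, but every call spawns up to two more calls, which is exactly the 2^n tree that DP collapses.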
Think of the DP table this way:
- Row i represents the universe of available items — items 1 through i exist; items i+1 through n do not.
- Column w represents the remaining weight budget — how much capacity is available for this decision.
- Each cell inherits the best result from the row above (skip item i) or combines item i's value with the best result from a smaller capacity (take item i).
- The answer accumulates at the bottom-right corner: dp[n][W] — all items considered, full capacity available.
- The table is a complete record of all optimal sub-solutions. Every cell is correct given its row and column constraints. You never need to recompute anything.
How 0/1 Knapsack Works — The Recurrence Built from First Principles
Rather than memorizing the recurrence, derive it from the decision you face at each cell.
Define dp[i][w] as the maximum value achievable using only items 1 through i with capacity w available — that is, the selected items' total weight may be at most w, not exactly w.
Base cases are immediate: dp[0][w] = 0 for all w — with zero items, no value is possible regardless of capacity. dp[i][0] = 0 for all i — with zero capacity, nothing can fit regardless of how many items exist.
For any other cell dp[i][w], you face exactly one decision about item i:
Option A — Skip item i: you decide not to include item i. The best you can do is whatever was optimal for items 1 through i-1 with the same capacity w. That value is already computed: dp[i-1][w].
Option B — Take item i: this is only possible if item i's weight wt[i] does not exceed current capacity w. If you take item i, you gain val[i] and you have w - wt[i] capacity remaining for items 1 through i-1. The best outcome for the remaining capacity is dp[i-1][w - wt[i]], already computed.
The recurrence combines both options:
- If wt[i] > w: dp[i][w] = dp[i-1][w] (cannot take item i)
- If wt[i] <= w: dp[i][w] = max(dp[i-1][w], val[i] + dp[i-1][w - wt[i]])
The answer after considering all n items with full capacity W is dp[n][W].
Time complexity: O(n × W) — n items, W+1 capacity values, one O(1) computation per cell. Space complexity: O(n × W) for the full 2D table, or O(W) with the 1D rolling array optimization.
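The recurrence translates directly into a top-down memoized implementation, which caches each unique (i, w) result so it is computed once. This is a sketch with assumed names, distinct from the bottom-up solver shown later:

```java
import java.util.Arrays;

// Top-down memoized form of the 0/1 knapsack recurrence.
// memo[i][w] caches the best value for items i..n-1 with capacity w;
// -1 marks "not yet computed" (safe because all values are non-negative).
public class MemoKnapsack {
    static long solve(int[] wt, long[] val, int capacity) {
        long[][] memo = new long[wt.length + 1][capacity + 1];
        for (long[] row : memo) Arrays.fill(row, -1L);
        return go(wt, val, 0, capacity, memo);
    }

    static long go(int[] wt, long[] val, int i, int w, long[][] memo) {
        if (i == wt.length || w == 0) return 0;       // base cases
        if (memo[i][w] != -1L) return memo[i][w];     // each (i, w) solved exactly once
        long best = go(wt, val, i + 1, w, memo);      // option A: skip item i
        if (wt[i] <= w) {                             // option B: take item i if it fits
            best = Math.max(best, val[i] + go(wt, val, i + 1, w - wt[i], memo));
        }
        return memo[i][w] = best;
    }

    public static void main(String[] args) {
        System.out.println(solve(new int[]{2, 2, 3}, new long[]{6L, 10L, 12L}, 5)); // 22
    }
}
```

Top-down and bottom-up fill the same O(n × W) state space; bottom-up just does it iteratively without recursion overhead.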
Worked Example — Tracing the Full DP Table by Hand
Items: item 1 (weight=2, value=6), item 2 (weight=2, value=10), item 3 (weight=3, value=12). Capacity W=5.
This example is small enough to trace completely but large enough to show all interesting decisions — specifically the case at w=4 in row 3 where skipping yields a better result than taking, and w=5 in row 3 where taking produces the global optimum.
Initial table — row 0 (base case, no items):

```
      w=0  w=1  w=2  w=3  w=4  w=5
i=0: [  0,   0,   0,   0,   0,   0 ]
```

Row i=1 (item 1: weight=2, value=6):

```
w=0: wt[1]=2 > 0. Skip. dp[1][0] = dp[0][0] = 0.
w=1: wt[1]=2 > 1. Skip. dp[1][1] = dp[0][1] = 0.
w=2: wt[1]=2 <= 2. Take? 6 + dp[0][0] = 6. Skip? dp[0][2] = 0. Max = 6. dp[1][2] = 6.
w=3: wt[1]=2 <= 3. Take? 6 + dp[0][1] = 6. Skip? dp[0][3] = 0. Max = 6. dp[1][3] = 6.
w=4: wt[1]=2 <= 4. Take? 6 + dp[0][2] = 6. Skip? dp[0][4] = 0. Max = 6. dp[1][4] = 6.
w=5: wt[1]=2 <= 5. Take? 6 + dp[0][3] = 6. Skip? dp[0][5] = 0. Max = 6. dp[1][5] = 6.

i=1: [ 0, 0, 6, 6, 6, 6 ]
```

Row i=2 (item 2: weight=2, value=10):

```
w=0,1: wt[2]=2 > w. Skip. dp[2][0]=0, dp[2][1]=0.
w=2: Take? 10 + dp[1][0] = 10. Skip? dp[1][2] = 6. Max = 10. dp[2][2] = 10.
w=3: Take? 10 + dp[1][1] = 10. Skip? dp[1][3] = 6. Max = 10. dp[2][3] = 10.
w=4: Take? 10 + dp[1][2] = 10 + 6 = 16. Skip? dp[1][4] = 6. Max = 16. dp[2][4] = 16.
     (Taking both item 1 and item 2 uses exactly weight 4.)
w=5: Take? 10 + dp[1][3] = 10 + 6 = 16. Skip? dp[1][5] = 6. Max = 16. dp[2][5] = 16.

i=2: [ 0, 0, 10, 10, 16, 16 ]
```

Row i=3 (item 3: weight=3, value=12):

```
w=0,1,2: wt[3]=3 > w. Skip. dp[3][0]=0, dp[3][1]=0, dp[3][2]=10.
w=3: Take? 12 + dp[2][0] = 12. Skip? dp[2][3] = 10. Max = 12. dp[3][3] = 12.
w=4: Take? 12 + dp[2][1] = 12 + 0 = 12. Skip? dp[2][4] = 16. Max = 16. dp[3][4] = 16.
     (Skipping is better here — items 1 and 2 together are worth more than item 3 alone.)
w=5: Take? 12 + dp[2][2] = 12 + 10 = 22. Skip? dp[2][5] = 16. Max = 22. dp[3][5] = 22.
     (Taking item 3 and item 2 together: weight 5, value 22 — the global optimum.)

i=3: [ 0, 0, 10, 12, 16, 22 ]
```
Answer: dp[3][5] = 22. Items selected: item 2 (value=10, weight=2) and item 3 (value=12, weight=3). Total weight=5=W. Total value=22.
Notice that item 1 (value=6, weight=2) was not selected — item 2 is strictly better with the same weight and higher value, so item 1 never participates in the optimal solution for any capacity where item 2 fits.
Complete Implementation — 2D Table and 1D Optimization with Item Reconstruction
The implementation below provides three things: the 2D table version that enables item reconstruction, the 1D space-optimized version for memory-constrained environments, and the backtracking function that reconstructs which items were actually selected from the 2D table. All three are production-quality with input validation and long arithmetic throughout.
```java
package io.thecodeforge.dp;

import java.util.ArrayList;
import java.util.List;

/**
 * Production-grade 0/1 Knapsack solver.
 *
 * Two implementations:
 *   solve2D — full (n+1) x (W+1) table. Enables item reconstruction.
 *             Use when you need to know WHICH items were selected.
 *
 *   solve1D — single rolling array of size (W+1). O(W) space.
 *             Use only when you need the max value and memory is tight.
 *             Cannot reconstruct which items were chosen.
 *
 * Both use long arithmetic throughout to prevent silent integer overflow
 * when individual item values or their sum exceeds Integer.MAX_VALUE.
 */
public class KnapsackSolver {

    // ----------------------------------------------------------------
    // 2D DP — full table, O(n * W) space, enables reconstruction
    // ----------------------------------------------------------------

    /**
     * Solves 0/1 knapsack using a full 2D DP table.
     *
     * @param weights  item weights, 0-indexed, length n
     * @param values   item values, 0-indexed, length n (must match weights)
     * @param capacity maximum total weight the knapsack can hold (>= 0)
     * @return maximum achievable value without exceeding capacity
     * @throws IllegalArgumentException if inputs are null, mismatched, or capacity < 0
     */
    public static long solve2D(int[] weights, long[] values, int capacity) {
        validateInputs(weights, values, capacity);
        int n = weights.length;

        // (n+1) rows: row 0 = base case (no items)
        // (W+1) cols: col 0 = base case (zero capacity)
        // Java zero-initializes long arrays — base cases are already set
        long[][] dp = new long[n + 1][capacity + 1];

        for (int i = 1; i <= n; i++) {
            int wt = weights[i - 1];   // map 1-indexed row to 0-indexed array
            long val = values[i - 1];
            for (int w = 0; w <= capacity; w++) {
                // Option A: skip item i
                dp[i][w] = dp[i - 1][w];
                // Option B: take item i (only if it fits)
                if (wt <= w) {
                    long takeValue = val + dp[i - 1][w - wt];
                    if (takeValue > dp[i][w]) {
                        dp[i][w] = takeValue;
                    }
                }
            }
        }
        return dp[n][capacity];
    }

    /**
     * Reconstructs which items were selected in the optimal 0/1 knapsack solution.
     *
     * Backtracks through the 2D DP table starting at dp[n][W].
     * At each row i, if dp[i][w] != dp[i-1][w], item i was taken.
     *
     * @param weights  item weights (same array passed to solve2D)
     * @param values   item values (same array passed to solve2D)
     * @param capacity knapsack capacity (same value passed to solve2D)
     * @return 0-indexed positions of items selected in the optimal solution
     */
    public static List<Integer> reconstruct(int[] weights, long[] values, int capacity) {
        validateInputs(weights, values, capacity);
        int n = weights.length;
        long[][] dp = new long[n + 1][capacity + 1];

        // Rebuild the table — identical to solve2D
        for (int i = 1; i <= n; i++) {
            int wt = weights[i - 1];
            long val = values[i - 1];
            for (int w = 0; w <= capacity; w++) {
                dp[i][w] = dp[i - 1][w];
                if (wt <= w) {
                    long takeValue = val + dp[i - 1][w - wt];
                    if (takeValue > dp[i][w]) {
                        dp[i][w] = takeValue;
                    }
                }
            }
        }

        // Backtrack from dp[n][capacity] to find selected items
        List<Integer> selected = new ArrayList<>();
        int w = capacity;
        for (int i = n; i >= 1; i--) {
            // If this row's value differs from the row above,
            // item i was taken — it changed the optimal value
            if (dp[i][w] != dp[i - 1][w]) {
                selected.add(0, i - 1);   // convert to 0-indexed, maintain order
                w -= weights[i - 1];      // reduce remaining capacity
            }
            // If equal, item i was skipped — move up without changing w
        }

        // Invariant: selected items must not exceed capacity
        int totalWeight = selected.stream().mapToInt(idx -> weights[idx]).sum();
        assert totalWeight <= capacity
                : "Reconstruction invariant violated: totalWeight=" + totalWeight
                  + " exceeds capacity=" + capacity;
        return selected;
    }

    // ----------------------------------------------------------------
    // 1D space-optimized — O(W) space, RIGHT-TO-LEFT iteration
    // ----------------------------------------------------------------

    /**
     * Solves 0/1 knapsack using a single rolling DP array.
     *
     * CRITICAL: the inner loop iterates capacity RIGHT-TO-LEFT (high to low).
     * Left-to-right iteration silently converts this into unbounded knapsack,
     * allowing items to be selected multiple times. There is no runtime error.
     *
     * Use this variant only when:
     *   - You need only the maximum value (not which items were chosen), AND
     *   - Memory is a constraint (W > ~10^6 makes 2D infeasible).
     *
     * @param weights  item weights, 0-indexed
     * @param values   item values, 0-indexed
     * @param capacity maximum total weight
     * @return maximum achievable value
     */
    public static long solve1D(int[] weights, long[] values, int capacity) {
        validateInputs(weights, values, capacity);
        long[] dp = new long[capacity + 1];
        // dp[w] is initialized to 0 — represents 'no items considered yet'

        for (int i = 0; i < weights.length; i++) {
            int wt = weights[i];
            long val = values[i];
            // RIGHT-TO-LEFT: ensures dp[w - wt] still reflects the state
            // BEFORE item i was considered (previous row in the 2D table).
            //
            // If we iterated left-to-right:
            //   dp[wt] gets updated to include item i.
            //   Then dp[2*wt] reads the already-updated dp[wt] and adds item i again.
            //   Item i gets selected twice — unbounded knapsack, not 0/1.
            for (int w = capacity; w >= wt; w--) {
                long takeValue = val + dp[w - wt];
                if (takeValue > dp[w]) {
                    dp[w] = takeValue;
                }
            }
        }
        return dp[capacity];
    }

    // ----------------------------------------------------------------
    // Input validation — shared by all variants
    // ----------------------------------------------------------------

    private static void validateInputs(int[] weights, long[] values, int capacity) {
        if (weights == null || values == null) {
            throw new IllegalArgumentException("weights and values arrays must not be null");
        }
        if (weights.length != values.length) {
            throw new IllegalArgumentException(
                    "weights and values must have the same length: "
                    + "weights.length=" + weights.length
                    + ", values.length=" + values.length);
        }
        if (capacity < 0) {
            throw new IllegalArgumentException(
                    "capacity must be non-negative, got: " + capacity);
        }
        for (int i = 0; i < weights.length; i++) {
            if (weights[i] < 0) {
                throw new IllegalArgumentException(
                        "Item weight must be non-negative. weights[" + i + "]=" + weights[i]);
            }
            if (values[i] < 0) {
                throw new IllegalArgumentException(
                        "Item value must be non-negative. values[" + i + "]=" + values[i]);
            }
        }
    }

    // ----------------------------------------------------------------
    // Demo
    // ----------------------------------------------------------------

    public static void main(String[] args) {
        int[] weights = {2, 2, 3};
        long[] values = {6L, 10L, 12L};
        int capacity = 5;

        System.out.println("=== 0/1 Knapsack Demo ===");
        System.out.println("Items: weight=" + java.util.Arrays.toString(weights));
        System.out.println("       value= " + java.util.Arrays.toString(values));
        System.out.println("Capacity: " + capacity);
        System.out.println();

        long max2D = solve2D(weights, values, capacity);
        System.out.println("2D DP max value: " + max2D);

        List<Integer> chosen = reconstruct(weights, values, capacity);
        System.out.print("Selected items (0-indexed): ");
        for (int idx : chosen) {
            System.out.printf("item[%d](w=%d, v=%d) ", idx, weights[idx], values[idx]);
        }
        System.out.println();

        long max1D = solve1D(weights, values, capacity);
        System.out.println("1D DP max value: " + max1D);
        System.out.println();

        // Demonstrate the iteration direction bug
        System.out.println("=== Iteration Direction Bug Demo ===");
        int[] singleItemWeights = {3};
        long[] singleItemValues = {100L};
        int bugCapacity = 9;
        System.out.println("Single item: weight=3, value=100. Capacity=9.");
        System.out.println("Correct (0/1): should select item at most once. Max = 100.");
        System.out.println("Correct 1D result: "
                + solve1D(singleItemWeights, singleItemValues, bugCapacity));
        // Forward iteration would return 300 (item used 3 times) — do not ship that
    }
}
```
```
=== 0/1 Knapsack Demo ===
Items: weight=[2, 2, 3]
       value= [6, 10, 12]
Capacity: 5

2D DP max value: 22
Selected items (0-indexed): item[1](w=2, v=10) item[2](w=3, v=12)
1D DP max value: 22

=== Iteration Direction Bug Demo ===
Single item: weight=3, value=100. Capacity=9.
Correct (0/1): should select item at most once. Max = 100.
Correct 1D result: 100
```
Why the 1D Iteration Direction Is the Most Important Line in the Algorithm
The single most dangerous line in any 0/1 knapsack implementation is the inner loop of the 1D version. Not the recurrence, not the base case, not the array sizing — the loop direction. It has no compiler warning, no runtime exception, and no obvious symptom. It just silently computes the wrong answer.
To understand why, trace through what happens with one item (weight=3, value=100) and capacity=9.
Correct right-to-left pass:

```
Start: dp = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
w=9: dp[9] = max(dp[9], 100 + dp[9-3]) = max(0, 100 + dp[6]) = max(0, 100 + 0) = 100
w=8: dp[8] = max(0, 100 + dp[5]) = 100
w=7: dp[7] = max(0, 100 + dp[4]) = 100
w=6: dp[6] = max(0, 100 + dp[3]) = 100
w=5: dp[5] = max(0, 100 + dp[2]) = 100
w=4: dp[4] = max(0, 100 + dp[1]) = 100
w=3: dp[3] = max(0, 100 + dp[0]) = 100
```

Final dp[9] = 100. Item used once. Correct.

Broken left-to-right pass:

```
Start: dp = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
w=3: dp[3] = max(0, 100 + dp[0]) = 100
w=4: dp[4] = max(0, 100 + dp[1]) = 100
w=5: dp[5] = max(0, 100 + dp[2]) = 100
w=6: dp[6] = max(0, 100 + dp[3]) = 100 + 100 = 200  ← dp[3] already includes item
w=7: dp[7] = max(0, 100 + dp[4]) = 200
w=8: dp[8] = max(0, 100 + dp[5]) = 200
w=9: dp[9] = max(0, 100 + dp[6]) = 100 + 200 = 300  ← item used three times
```

Final dp[9] = 300. Item used three times. Wrong.
The left-to-right version returns 300 when the correct answer is 100. The item was used three times because each earlier update was already visible when a higher capacity position was evaluated. This is exactly unbounded knapsack — and it is exactly what happened in the production incident that over-provisioned VMs by 40%.
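The two passes traced above can be run side by side. This standalone sketch (not part of KnapsackSolver; the names are illustrative) implements both directions for a single item so the divergence is directly observable:

```java
// Minimal demo of the iteration-direction bug: the identical update rule applied
// right-to-left (correct 0/1 behavior) vs left-to-right (accidental unbounded knapsack).
public class DirectionBugDemo {
    static long rightToLeft(int wt, long val, int capacity) {
        long[] dp = new long[capacity + 1];
        for (int w = capacity; w >= wt; w--) {   // correct: reads pre-item state
            dp[w] = Math.max(dp[w], val + dp[w - wt]);
        }
        return dp[capacity];
    }

    static long leftToRight(int wt, long val, int capacity) {
        long[] dp = new long[capacity + 1];
        for (int w = wt; w <= capacity; w++) {   // buggy: reads already-updated cells
            dp[w] = Math.max(dp[w], val + dp[w - wt]);
        }
        return dp[capacity];
    }

    public static void main(String[] args) {
        System.out.println(rightToLeft(3, 100L, 9));  // 100 — item counted once
        System.out.println(leftToRight(3, 100L, 9));  // 300 — item counted three times
    }
}
```

Both methods compile, run cleanly, and return plausible-looking numbers; only a test with a known answer distinguishes them.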
Pseudo-Polynomial Time — Why O(n*W) Is Not As Fast As It Looks
The time complexity O(n × W) looks polynomial — two variables multiplied together. But it is actually pseudo-polynomial, and this distinction matters enormously when inputs scale.
Polynomial time means the runtime grows polynomially in the size of the input, where input size is the number of bits required to represent the input. For an algorithm that takes n items and a capacity W, the input size in bits is roughly n × log(max_weight) + n × log(max_value) + log(W). The key observation is that W itself is a value — a number — and the number of bits needed to represent W is log(W), not W.
When W = 10^9, the input needs about 30 bits to represent the capacity. But the DP table has 10^9 columns. That is exponential in the number of bits representing W, not polynomial. A truly polynomial algorithm would run in time polynomial in log(W) — roughly proportional to 30, not one billion.
The practical consequence: O(n × W) is fast when W is small. For n=100 items and W=10,000, the table has one million cells — runs in milliseconds. For n=100 items and W=10^9, the table has 100 billion cells — runs out of memory and time entirely. The algorithm that seemed fast completely collapses as W grows, not because of an implementation problem but because of a fundamental property of the algorithm.
This is why 0/1 knapsack is NP-complete. No polynomial-time algorithm is known (and finding one would prove P=NP). The DP approach is the best known exact algorithm, and it is viable only when W is bounded to a manageable value. When W is large, the practical alternatives are: approximation algorithms that trade exactness for speed (FPTAS can get within 1+ε of optimal in polynomial time), branch-and-bound with aggressive pruning, or problem reformulation that reduces the effective W.
- W=1,000: table has 1,000 columns. Perfectly fine for any n.
- W=100,000: table has 100,000 columns. Fine for most n values.
- W=10,000,000: 2D table with n=100 rows uses roughly 8GB of memory for long values. Reaches memory limits.
- W=10^9: the 2D table is completely infeasible on any hardware; even the 1D array needs roughly 8GB, and the n × 10^9 cell updates make the runtime impractical for a single solver.
- W=10^18: the number of columns exceeds the addressable memory of any current hardware by orders of magnitude.
- The size of W in bits is log2(W) — roughly 30 bits for W=10^9. An algorithm polynomial in the input size would run in time proportional to 30^k for some constant k, not 10^9.
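The memory figures above follow from simple arithmetic: (n+1) × (W+1) cells at 8 bytes per long. A quick sanity-check sketch (illustrative class and method names):

```java
// Back-of-envelope memory estimate for the 2D DP table of longs:
// (n+1) rows * (W+1) columns * 8 bytes per cell.
public class TableMemory {
    static long tableBytes(long n, long capacity) {
        return (n + 1) * (capacity + 1) * 8L;   // 8 bytes per long cell
    }

    public static void main(String[] args) {
        System.out.println(tableBytes(100, 10_000));      // 8080808     (~8 MB: trivially fine)
        System.out.println(tableBytes(100, 10_000_000));  // 8080000808  (~8 GB: at the memory limit)
    }
}
```

Running this for a few capacities is a cheap way to decide between the 2D and 1D variants before committing to either.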
| Aspect | 2D DP Table — (n+1) × (W+1) | 1D Space-Optimized DP — (W+1) |
|---|---|---|
| Time Complexity | O(n × W) — n outer iterations, W+1 inner iterations | O(n × W) — identical runtime, different memory pattern |
| Space Complexity | O(n × W) — stores all n+1 rows simultaneously | O(W) — stores only the current row, overwriting previous |
| Item Reconstruction | Yes — backtrack from dp[n][W] upward through the table to find which items were taken | No — the previous row is overwritten during each pass and cannot be recovered |
| Iteration Direction | Inner loop can go left-to-right or right-to-left — row isolation prevents contamination | Inner loop MUST go right-to-left — left-to-right silently solves unbounded knapsack |
| Silent Bug Risk | Low — each row is computed from the previous row, which is never modified during the current row's computation | High — the left-to-right bug produces wrong answers with no error signal and plausible-looking output |
| When to Use | Default choice. Required when the caller needs to know which items were selected, not just the maximum value. | When only the maximum value is needed AND W is large enough that n × W memory is a real constraint. |
| Overflow Risk | Use long throughout — int silently overflows when total value exceeds 2.1 billion | Same — use long for all value arithmetic regardless of implementation variant |
| Large W Feasibility | Infeasible when W exceeds roughly 10^7 with n=100 rows due to memory — approximately 8GB for long values | Feasible up to roughly W=10^9 with 8GB RAM for the single array, but runtime is still proportional to n × W |
| Debuggability | High — print the full table to diagnose incorrect cells during development | Low — the rolling array state at any given moment does not correspond to a human-interpretable snapshot |
🎯 Key Takeaways
- 0/1 Knapsack selects each item fully or not at all. DP solves it in O(n × W) time by caching the optimal solution to every unique (items 1..i, capacity w) subproblem exactly once.
- dp[i][w] is the maximum value achievable using items 1 through i with weight capacity w. The recurrence is max(skip item i, take item i) — take is only valid when the item's weight does not exceed w.
- The 1D space optimization iterates capacity right-to-left to prevent each item from being used more than once. Left-to-right iteration silently solves unbounded knapsack with no error signal.
- Item reconstruction requires the full 2D table. The 1D version overwrites the previous row and cannot be backtracked. If you need to know which items were selected, you must use 2D.
- Use long for all DP arrays and value inputs. Java silently overflows int arithmetic at 2.1 billion with no exception.
- O(n × W) is pseudo-polynomial — exponential in the bits of W, not polynomial in input size. 0/1 Knapsack is NP-complete. The DP algorithm is the best known exact algorithm for bounded W, not a polynomial-time solution.
⚠ Common Mistakes to Avoid
- Iterating the 1D inner loop left-to-right: it silently solves unbounded knapsack instead of 0/1.
- Using int instead of long for DP values: Java overflows silently past 2.1 billion with no exception.
- Sizing the table n × W instead of (n+1) × (W+1): triggers ArrayIndexOutOfBoundsException during table computation.
- Reaching for the 1D version when the caller needs to know which items were selected: the previous row is overwritten and cannot be backtracked.
Interview Questions on This Topic
- Q: What is the recurrence relation for 0/1 knapsack and what does each term represent? (Junior)
- Q: Why must the 1D knapsack DP iterate capacity in reverse, and what happens if you iterate forward? (Mid-level)
- Q: What is the difference between 0/1 knapsack and fractional knapsack, and why does one require DP while the other does not? (Mid-level)
- Q: Is O(n*W) polynomial time? What are the implications for scalability? (Senior)
Frequently Asked Questions
Why must the 1D knapsack iterate capacity in reverse?
In the 1D version, dp[w] represents the best value for capacity w using items seen so far in the current pass. If you iterate forward (small w to large w), dp[w - wt[i]] gets updated to include item i earlier in the same pass. When you later compute dp[w] and read dp[w - wt[i]], you are reading a value that already includes item i — effectively adding item i a second time. The item ends up being used as many times as it fits in the capacity, silently solving unbounded knapsack instead of 0/1 knapsack.
Reverse iteration (large w to small w) ensures that when you compute dp[w] for any w, dp[w - wt[i]] has not yet been updated in the current pass. It still holds the value from before item i was considered. Item i is therefore counted at most once per pass.
What is the difference between 0/1 knapsack and fractional knapsack?
In 0/1 knapsack, each item is either taken completely or not at all. In fractional knapsack, you can take any fraction of an item. Fractional knapsack is solvable greedily in O(n log n) — sort by value/weight ratio, take items in ratio order, take a fraction of the last item if needed to fill exactly. Greedy is optimal for fractional because partial items let you always use remaining capacity perfectly.
0/1 knapsack requires dynamic programming because greedy fails — taking the best-ratio item can block a more valuable combination of lower-ratio items. The two problems look similar but have fundamentally different solution strategies.
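The fractional greedy can be sketched in a few lines. This is the fractional variant for contrast, not part of the article's 0/1 solver, and the names are assumptions:

```java
import java.util.Arrays;
import java.util.Comparator;

// Greedy fractional knapsack: sort by value/weight ratio descending,
// take items whole while they fit, then a fraction of the next item.
public class FractionalKnapsack {
    static double maxValue(int[] wt, long[] val, int capacity) {
        Integer[] order = new Integer[wt.length];
        for (int i = 0; i < wt.length; i++) order[i] = i;
        // Highest value-per-weight first
        Arrays.sort(order, Comparator.comparingDouble(i -> -(double) val[i] / wt[i]));

        double total = 0;
        int remaining = capacity;
        for (int i : order) {
            if (remaining == 0) break;
            if (wt[i] <= remaining) {            // take the whole item
                total += val[i];
                remaining -= wt[i];
            } else {                             // take only the fraction that still fits
                total += (double) val[i] * remaining / wt[i];
                remaining = 0;
            }
        }
        return total;
    }

    public static void main(String[] args) {
        // Same items as the worked example: fractional always does at least as well as 0/1
        System.out.println(maxValue(new int[]{2, 2, 3}, new long[]{6L, 10L, 12L}, 5)); // 22.0
    }
}
```

The ability to split the last item is exactly what makes greedy optimal here; remove that ability and the greedy argument collapses, which is why 0/1 needs DP.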
What is the time complexity and is it polynomial?
The time complexity is O(n × W), which looks polynomial but is actually pseudo-polynomial. The key distinction: W is an input value, not the number of input elements. Representing W in binary requires log2(W) bits. An algorithm that runs in time proportional to W itself is running in time exponential in the number of bits representing W — that is pseudo-polynomial. A truly polynomial algorithm for 0/1 knapsack does not exist unless P=NP, since the problem is NP-complete. This means the DP approach becomes completely infeasible as W grows large — W=10^9 produces a billion-column table that no reasonable machine can hold.
When should I use the 2D version versus the 1D space-optimized version?
Use the 2D version by default. It is safer (no iteration-direction bug), debuggable (you can print the table), and supports item reconstruction. The only reason to switch to 1D is when the 2D table genuinely does not fit in memory — specifically when n × W exceeds available RAM as a long array (approximately 8 bytes per cell). For n=100 and W=10^6, the 2D table is 800MB — borderline. For W=10^7, it is 8GB — infeasible on most instances. If you are in that regime and need only the maximum value, use 1D. If you are in that regime and need which items were selected, reconsider whether DP is the right algorithm at all.
Developer and founder of TheCodeForge. I built this site because I was tired of tutorials that explain what to type without explaining why it works. Every article here is written to make concepts actually click.