Two Pointer Pattern: Solve Interview Problems Faster With Less Code
Every coding interview has a handful of patterns that show up again and again. Two pointers is one of the most common — and one of the most misunderstood. Candidates often brute-force their way through problems that a senior dev would knock out in O(n) time because they don't recognise the signal that two pointers is the right tool. Missing that signal doesn't just cost you efficiency points; it costs you the job.
The two pointer technique exists to eliminate the need for nested loops. Any time you find yourself writing a loop inside a loop to compare or relate two positions in the same collection, there's a good chance you're doing more work than you need to. Two pointers lets you move through the data from two positions simultaneously — either from opposite ends converging toward the middle, or from the same starting point racing at different speeds — and cut your time complexity from O(n²) down to O(n).
By the end of this article you'll be able to identify which of the three two-pointer variants to reach for (opposite ends, same direction, or fast-slow), implement the three most interview-common problems from scratch without hints, and explain your reasoning out loud — which is the part that actually gets you hired.
Why Two Pointers Beat Brute Force — The Core Intuition
Let's get the intuition rock solid before touching code. The brute-force approach to 'find two numbers that sum to a target' is to check every pair: index 0 with index 1, index 0 with index 2, and so on. That's n*(n-1)/2 comparisons — O(n²). It works, but it's slow and it signals to an interviewer that you haven't spotted the pattern.
The key insight is this: if your data is sorted, you already know something about every element's relationship to every other element without checking them all. If the number at the left pointer plus the number at the right pointer is too large, you don't need to check the right pointer against anything to its right — they'd all be even larger. So you move the right pointer one step inward. If the sum is too small, you move the left pointer one step inward. Each step eliminates an entire class of possibilities.
This is why two pointers almost always requires a sorted array for the opposite-ends variant. The sort itself costs O(n log n), but the pointer walk is O(n), so your overall complexity is O(n log n) — still far better than O(n²) for large inputs. When the input is already sorted, you get pure O(n).
```java
public class TwoSumSorted {

    /**
     * Given a SORTED array of integers and a target sum,
     * return the 1-based indices of the two numbers that add up to the target.
     * Assumes exactly one solution exists.
     *
     * Time:  O(n) — single pass with two pointers
     * Space: O(1) — no extra data structures
     */
    public static int[] findPairWithTargetSum(int[] sortedNumbers, int targetSum) {
        int leftIndex = 0;                         // Start at the smallest element
        int rightIndex = sortedNumbers.length - 1; // Start at the largest element

        while (leftIndex < rightIndex) {
            int currentSum = sortedNumbers[leftIndex] + sortedNumbers[rightIndex];

            if (currentSum == targetSum) {
                // Found it — return 1-based indices as the problem specifies
                return new int[]{leftIndex + 1, rightIndex + 1};
            }

            if (currentSum < targetSum) {
                // Sum is too small — we need a bigger left value, so move left pointer right
                leftIndex++;
            } else {
                // Sum is too large — we need a smaller right value, so move right pointer left
                rightIndex--;
            }
        }

        // Problem guarantees a solution exists, but good practice to handle this
        return new int[]{-1, -1};
    }

    public static void main(String[] args) {
        int[] prices = {2, 7, 11, 15};
        int budget = 9;
        int[] result = findPairWithTargetSum(prices, budget);
        System.out.println("Indices (1-based): [" + result[0] + ", " + result[1] + "]");
        System.out.println("Values: " + prices[result[0] - 1] + " + " + prices[result[1] - 1] + " = " + budget);

        // Second test case
        int[] scores = {1, 3, 4, 5, 7, 10, 11};
        int targetScore = 9;
        int[] scoreResult = findPairWithTargetSum(scores, targetScore);
        System.out.println("\nIndices (1-based): [" + scoreResult[0] + ", " + scoreResult[1] + "]");
        System.out.println("Values: " + scores[scoreResult[0] - 1] + " + " + scores[scoreResult[1] - 1] + " = " + targetScore);
    }
}
```
```
Indices (1-based): [1, 2]
Values: 2 + 7 = 9

Indices (1-based): [3, 4]
Values: 4 + 5 = 9
```
Removing Duplicates In-Place — The Same-Direction Variant
Not every two-pointer problem has the pointers moving toward each other. The same-direction variant uses a slow pointer and a fast pointer both starting at the left — the fast pointer scouts ahead while the slow pointer marks where the next valid element should land.
This pattern is perfect for in-place array modification problems: removing duplicates, removing a specific value, or filtering elements without allocating a new array. The slow pointer essentially tracks 'the last confirmed valid position', and the fast pointer asks 'is this element worth keeping?'
Think of it like editing a manuscript. The fast-reading editor races through the draft, and every time they find a sentence worth keeping, they hand it to the slow typist who places it into the clean copy. The typist never skips ahead; the editor never looks back.
This approach runs in O(n) time and O(1) space — a big deal in interviews where the constraint 'do it in-place without extra memory' is explicitly stated. If you reach for a HashSet here, you've violated that constraint even though you'd get the right answer.
```java
public class RemoveDuplicatesInPlace {

    /**
     * Remove duplicates from a sorted array IN-PLACE.
     * Returns the count of unique elements.
     * Elements beyond the returned count don't matter.
     *
     * Time:  O(n) — single pass
     * Space: O(1) — no extra arrays or sets
     *
     * Key constraint: the array is SORTED, so duplicates are always adjacent.
     * That's what makes the two-pointer approach valid here.
     */
    public static int removeDuplicates(int[] sortedArray) {
        if (sortedArray.length == 0) return 0;

        // slowWriter points to where the next unique element should be written
        int slowWriter = 0;

        // fastReader scouts ahead to find the next unique element
        for (int fastReader = 1; fastReader < sortedArray.length; fastReader++) {
            // If fastReader found a new unique value (different from what slowWriter last wrote)...
            if (sortedArray[fastReader] != sortedArray[slowWriter]) {
                slowWriter++;                                      // Advance slowWriter to the next available slot
                sortedArray[slowWriter] = sortedArray[fastReader]; // Write the unique value there
            }
            // If they're equal, fastReader keeps moving — we skip the duplicate
        }

        return slowWriter + 1; // +1 because slowWriter is an index, not a count
    }

    public static void main(String[] args) {
        int[] temperatures = {1, 1, 2, 3, 3, 3, 4, 5, 5};
        System.out.println("Before: " + java.util.Arrays.toString(temperatures));

        int uniqueCount = removeDuplicates(temperatures);
        System.out.println("Unique count: " + uniqueCount);

        System.out.print("Unique elements: ");
        for (int i = 0; i < uniqueCount; i++) {
            System.out.print(temperatures[i] + " ");
        }
        System.out.println();

        // Edge case: all elements are the same
        int[] allSame = {7, 7, 7, 7};
        int allSameCount = removeDuplicates(allSame);
        System.out.println("\nAll-same array unique count: " + allSameCount);
        System.out.println("First unique element: " + allSame[0]);

        // Edge case: no duplicates at all
        int[] noDupes = {1, 2, 3, 4};
        int noDupesCount = removeDuplicates(noDupes);
        System.out.println("\nNo-duplicate array unique count: " + noDupesCount);
    }
}
```
```
Before: [1, 1, 2, 3, 3, 3, 4, 5, 5]
Unique count: 5
Unique elements: 1 2 3 4 5

All-same array unique count: 1
First unique element: 7

No-duplicate array unique count: 4
```
Detecting a Cycle — The Fast-Slow (Floyd's) Variant
The third variant is the most elegant: the fast pointer moves two steps for every one step the slow pointer takes. If there's a cycle in a linked list, the fast pointer will eventually lap the slow pointer and they'll meet inside the cycle. If there's no cycle, the fast pointer will fall off the end of the list.
Why does this work? Think of two runners on a circular track — one running twice as fast as the other. They'll always meet eventually, no matter how long the track is. If the track is a straight road with a dead end, the faster runner hits the wall first and you know it's not circular.
This is Floyd's Cycle Detection algorithm. It's O(n) time and O(1) space — compare that to the HashSet approach, which is also O(n) time but O(n) space. In an interview, the follow-up 'can you do it without extra memory?' is almost guaranteed, and this is your answer.
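For contrast, the HashSet approach mentioned above might look like the following sketch (class and method names are my own, not from the article). It gets the right answer in O(n) time but pays O(n) extra space to remember every node it has visited:

```java
import java.util.HashSet;
import java.util.Set;

public class CycleDetectorWithSet {

    // Minimal node type for this sketch
    static class ListNode {
        int value;
        ListNode next;

        ListNode(int value) {
            this.value = value;
        }
    }

    /**
     * Cycle detection by remembering every visited node.
     * Time: O(n), but Space: O(n), which is exactly the cost
     * Floyd's fast-slow algorithm avoids.
     */
    public static boolean hasCycle(ListNode head) {
        Set<ListNode> visited = new HashSet<>();
        for (ListNode current = head; current != null; current = current.next) {
            // add() returns false if the node was already in the set
            if (!visited.add(current)) {
                return true; // Revisited a node: there is a cycle
            }
        }
        return false; // Walked off the end: no cycle
    }

    public static void main(String[] args) {
        ListNode a = new ListNode(1);
        ListNode b = new ListNode(2);
        a.next = b;
        b.next = a; // Two-node cycle
        System.out.println(hasCycle(a)); // true

        ListNode straight = new ListNode(1);
        straight.next = new ListNode(2);
        System.out.println(hasCycle(straight)); // false
    }
}
```

Note that the set stores node references, not values, so duplicate values in an acyclic list do not trigger a false positive.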
This pattern extends beyond linked lists. It appears in problems about finding the duplicate number in an array, detecting infinite loops in graph traversal, and even in cryptography.
```java
public class LinkedListCycleDetector {

    // Standard linked list node
    static class ListNode {
        int value;
        ListNode next;

        ListNode(int value) {
            this.value = value;
            this.next = null;
        }
    }

    /**
     * Detect whether a linked list contains a cycle.
     * Uses Floyd's fast-slow pointer algorithm.
     *
     * Time:  O(n) — in the worst case we traverse the list once
     * Space: O(1) — only two pointer variables, no extra storage
     */
    public static boolean hasCycle(ListNode head) {
        if (head == null || head.next == null) {
            return false; // A list with 0 or 1 nodes can't have a cycle
        }

        ListNode slowRunner = head; // Moves 1 step at a time
        ListNode fastRunner = head; // Moves 2 steps at a time

        while (fastRunner != null && fastRunner.next != null) {
            slowRunner = slowRunner.next;      // 1 step
            fastRunner = fastRunner.next.next; // 2 steps

            // If they ever point to the same node, a cycle exists
            if (slowRunner == fastRunner) {
                return true; // They've met inside the cycle
            }
        }

        // fastRunner reached the end — no cycle
        return false;
    }

    /**
     * Bonus: find WHERE the cycle starts (common interview follow-up).
     * After detection, reset slow to head and move both one step at a time.
     * They meet at the cycle entry point.
     */
    public static ListNode findCycleStart(ListNode head) {
        ListNode slowRunner = head;
        ListNode fastRunner = head;

        // Phase 1: detect the cycle
        while (fastRunner != null && fastRunner.next != null) {
            slowRunner = slowRunner.next;
            fastRunner = fastRunner.next.next;

            if (slowRunner == fastRunner) {
                // Phase 2: find the entry point
                // Reset slow to head; fast stays at meeting point
                ListNode entryFinder = head;
                while (entryFinder != slowRunner) {
                    entryFinder = entryFinder.next;
                    slowRunner = slowRunner.next;
                }
                return entryFinder; // This is the start of the cycle
            }
        }

        return null; // No cycle
    }

    public static void main(String[] args) {
        // Build: 3 -> 2 -> 0 -> -4
        //             ^          |
        //             |__________|  (cycle back to node with value 2)
        ListNode node1 = new ListNode(3);
        ListNode node2 = new ListNode(2);
        ListNode node3 = new ListNode(0);
        ListNode node4 = new ListNode(-4);
        node1.next = node2;
        node2.next = node3;
        node3.next = node4;
        node4.next = node2; // Creates the cycle — tail points back to node2

        System.out.println("Has cycle: " + hasCycle(node1)); // true

        ListNode cycleStart = findCycleStart(node1);
        System.out.println("Cycle starts at node with value: " + cycleStart.value); // 2

        // Now test a list with no cycle
        ListNode linearHead = new ListNode(1);
        linearHead.next = new ListNode(2);
        linearHead.next.next = new ListNode(3);

        System.out.println("\nLinear list has cycle: " + hasCycle(linearHead));            // false
        System.out.println("Cycle start (should be null): " + findCycleStart(linearHead)); // null
    }
}
```
```
Has cycle: true
Cycle starts at node with value: 2

Linear list has cycle: false
Cycle start (should be null): null
```
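As a concrete example of the fast-slow pattern working on an array rather than a linked list, here is a sketch of the classic "find the duplicate number" problem (the class and method names are my own). Given n+1 integers all in the range 1..n, treating each index as a node and each value as a "next pointer" turns the repeated value into the entry point of a cycle, which Floyd's two phases then locate:

```java
public class DuplicateFinder {

    /**
     * Find the single repeated value in an array of n+1 integers,
     * each in the range 1..n, without modifying the array.
     * Time: O(n), Space: O(1).
     *
     * Interpreting index -> nums[index] as a linked-list "next" pointer,
     * the duplicate value is exactly where the cycle begins.
     */
    public static int findDuplicate(int[] nums) {
        int slow = nums[0];
        int fast = nums[0];

        // Phase 1: advance until the runners meet inside the cycle
        do {
            slow = nums[slow];       // 1 hop
            fast = nums[nums[fast]]; // 2 hops
        } while (slow != fast);

        // Phase 2: reset one runner to the start; moving both one hop
        // at a time, they meet at the cycle entry, i.e. the duplicate
        slow = nums[0];
        while (slow != fast) {
            slow = nums[slow];
            fast = nums[fast];
        }
        return slow;
    }

    public static void main(String[] args) {
        System.out.println(findDuplicate(new int[]{1, 3, 4, 2, 2})); // 2
        System.out.println(findDuplicate(new int[]{3, 1, 3, 4, 2})); // 3
    }
}
```

The same two phases as the linked-list version apply unchanged; only the "follow the next pointer" step differs.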
| Aspect | Opposite-Ends Variant | Same-Direction (Slow/Fast) Variant |
|---|---|---|
| Pointer starting positions | One at index 0, one at index n-1 | Both start at index 0 (or head) |
| Pointer movement direction | Converging toward the middle | Both move right; fast moves faster |
| Requires sorted input? | Yes — sorting gives the logical ordering needed | No — works on unsorted arrays and linked lists |
| Primary use cases | Two-sum, three-sum, container with most water | Remove duplicates, cycle detection, finding middle |
| Time complexity | O(n) after sorting — O(n log n) overall if you sort | O(n) — single pass regardless |
| Typical interview trigger | 'Find a pair that satisfies condition X in sorted array' | 'Detect cycle', 'remove in-place', 'find duplicate' |
| Data structure | Arrays almost exclusively | Arrays and linked lists both |
🎯 Key Takeaways
- Two pointers reduce O(n²) nested-loop solutions to O(n) by eliminating impossible pairs at each step rather than checking them — the 'eliminating a class of possibilities' insight is the whole pattern.
- Pick your variant before you code: opposite-ends for 'find a pair/triplet in a sorted array', same-direction slow/fast for 'modify in-place or remove without extra memory', fast-slow at different speeds for 'detect a cycle or find the middle'.
- The fast pointer null check (`fastRunner != null && fastRunner.next != null`) is not boilerplate — it's load-bearing logic that prevents NullPointerExceptions on every cycle-detection problem.
- Naming your pointers descriptively (`slowRunner`, `fastReader`, `leftBound`, `rightBound`) signals seniority in an interview and eliminates the need to explain your variable semantics out loud.
⚠ Common Mistakes to Avoid
- ✕ Mistake 1: Applying two pointers to an unsorted array for the opposite-ends variant — the algorithm silently returns wrong answers because the converging logic depends entirely on sorted order. Always sort first (or confirm it's sorted), then apply. If sorting isn't allowed, use a HashMap instead.
- ✕ Mistake 2: Not guarding the fast pointer's second step — writing `fastRunner = fastRunner.next.next` without first checking `fastRunner.next != null` causes a NullPointerException on odd-length or cycle-free lists. Always write the condition as `fastRunner != null && fastRunner.next != null` in the while loop guard.
- ✕ Mistake 3: Moving both pointers simultaneously when only one should move — in the two-sum opposite-ends variant, candidates sometimes increment both pointers after a non-match, skipping valid pairs entirely. Only move the pointer whose side caused the imbalance: sum too small means move the left pointer right; sum too large means move the right pointer left.
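To make the HashMap alternative from Mistake 1 concrete, here is a minimal sketch for an unsorted array (class and method names are my own). Instead of converging pointers, each element checks whether its complement has already been seen:

```java
import java.util.HashMap;
import java.util.Map;

public class TwoSumUnsorted {

    /**
     * Two-sum on an UNSORTED array using a HashMap instead of two pointers.
     * Time: O(n), Space: O(n); we trade memory for the ordering we don't have.
     * Returns 0-based indices, or {-1, -1} if no pair exists.
     */
    public static int[] findPair(int[] numbers, int targetSum) {
        // Maps each value we've seen so far to its index
        Map<Integer, Integer> seen = new HashMap<>();
        for (int i = 0; i < numbers.length; i++) {
            int complement = targetSum - numbers[i];
            if (seen.containsKey(complement)) {
                return new int[]{seen.get(complement), i};
            }
            seen.put(numbers[i], i);
        }
        return new int[]{-1, -1};
    }

    public static void main(String[] args) {
        // Note the input is deliberately unsorted
        int[] result = findPair(new int[]{11, 2, 15, 7}, 9);
        System.out.println("[" + result[0] + ", " + result[1] + "]"); // [1, 3]
    }
}
```

This preserves original indices (no sort reshuffles them), which is why it is the standard answer when sorting is off the table.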
Interview Questions on This Topic
- Q: Given an array of integers sorted in ascending order, write a function that finds all unique triplets that sum to zero. Walk me through your approach and complexity before you code.
- Q: You have a linked list and you suspect it might contain a cycle. How would you detect it in O(1) space? What would change in your approach if I asked you to also return the node where the cycle begins?
- Q: I've given you a solution using a HashSet to remove duplicates from a sorted array. It passes all test cases. Why might an interviewer still reject it, and how would you rewrite it to address that concern?
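For the first question, one possible sketch (naming is mine, not a definitive implementation): sort, fix an anchor element, then run opposite-ends two pointers on the suffix, skipping duplicates on all three positions to keep triplets unique. That turns the naive O(n³) triple loop into O(n²):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ThreeSumToZero {

    /**
     * Find all unique triplets summing to zero.
     * Sort, fix one anchor element, then run opposite-ends two pointers
     * on the remainder. Time: O(n^2), Space: O(1) beyond the output.
     */
    public static List<int[]> threeSum(int[] nums) {
        Arrays.sort(nums);
        List<int[]> triplets = new ArrayList<>();
        for (int i = 0; i < nums.length - 2; i++) {
            // Skip duplicate anchor values to keep triplets unique
            if (i > 0 && nums[i] == nums[i - 1]) continue;
            int left = i + 1, right = nums.length - 1;
            while (left < right) {
                int sum = nums[i] + nums[left] + nums[right];
                if (sum == 0) {
                    triplets.add(new int[]{nums[i], nums[left], nums[right]});
                    // Skip duplicates on both sides before moving inward
                    while (left < right && nums[left] == nums[left + 1]) left++;
                    while (left < right && nums[right] == nums[right - 1]) right--;
                    left++;
                    right--;
                } else if (sum < 0) {
                    left++;   // Sum too small: need a larger value
                } else {
                    right--;  // Sum too large: need a smaller value
                }
            }
        }
        return triplets;
    }

    public static void main(String[] args) {
        for (int[] t : threeSum(new int[]{-1, 0, 1, 2, -1, -4})) {
            System.out.println(Arrays.toString(t));
        }
        // Expected triplets: [-1, -1, 2] and [-1, 0, 1]
    }
}
```

The interviewer-facing summary: sorting costs O(n log n), the anchor loop runs n times, and each inner two-pointer walk is O(n), so the pair walk dominates at O(n²).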
Frequently Asked Questions
When should I use two pointers instead of a HashMap in an interview?
Use two pointers when the input is sorted and you need O(1) space — the sorted order gives you the directional logic to converge efficiently. Use a HashMap when the input is unsorted and you can't or shouldn't sort it, and when O(n) extra space is acceptable. If an interviewer explicitly says 'constant space', two pointers is almost always the answer.
Can the two pointer technique work on strings?
Absolutely. Strings are just arrays of characters, so opposite-ends two pointers are the standard approach for palindrome checking — start one pointer at index 0 and one at the last index, compare characters, and move inward. Problems like 'reverse a string in-place' and 'valid palindrome with character filtering' both use this variant.
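A minimal palindrome check along those lines might look like this sketch (names are mine):

```java
public class PalindromeChecker {

    /**
     * Opposite-ends two pointers on a string: compare the outermost
     * characters and converge toward the middle.
     * Time: O(n), Space: O(1).
     */
    public static boolean isPalindrome(String text) {
        int left = 0;
        int right = text.length() - 1;
        while (left < right) {
            if (text.charAt(left) != text.charAt(right)) {
                return false; // Mismatch: not a palindrome
            }
            left++;
            right--;
        }
        return true; // Pointers met or crossed with no mismatch
    }

    public static void main(String[] args) {
        System.out.println(isPalindrome("racecar")); // true
        System.out.println(isPalindrome("coding"));  // false
    }
}
```

The filtered-palindrome variant adds inner loops that advance each pointer past non-alphanumeric characters before comparing, but the converging skeleton stays the same.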
What's the difference between two pointers and the sliding window technique?
Both use two boundary markers on an array, but their intent differs. Two pointers typically looks for a specific pair or relationship between two elements, and the pointers move based on a comparison result. Sliding window maintains a contiguous subarray of variable or fixed size and expands or contracts it to satisfy a condition — think 'longest substring without repeating characters'. If you're comparing elements at two specific positions, think two pointers. If you care about everything between two positions as a group, think sliding window.
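To make the contrast concrete, here is a hedged sketch of the 'longest substring without repeating characters' example mentioned above (names are my own). Notice how the two markers bound a window whose *entire contents* matter, rather than two elements being compared:

```java
import java.util.HashMap;
import java.util.Map;

public class LongestUniqueSubstring {

    /**
     * Sliding window: [windowStart..windowEnd] is a contiguous substring
     * that grows on the right and jumps its left edge forward whenever a
     * repeated character appears inside the window.
     * Time: O(n), Space: O(min(n, alphabet size)).
     */
    public static int lengthOfLongestSubstring(String s) {
        Map<Character, Integer> lastSeenAt = new HashMap<>();
        int longest = 0;
        int windowStart = 0;
        for (int windowEnd = 0; windowEnd < s.length(); windowEnd++) {
            char current = s.charAt(windowEnd);
            // If this character already sits inside the window, shrink from the left
            if (lastSeenAt.containsKey(current) && lastSeenAt.get(current) >= windowStart) {
                windowStart = lastSeenAt.get(current) + 1;
            }
            lastSeenAt.put(current, windowEnd);
            longest = Math.max(longest, windowEnd - windowStart + 1);
        }
        return longest;
    }

    public static void main(String[] args) {
        System.out.println(lengthOfLongestSubstring("abcabcbb")); // 3 ("abc")
        System.out.println(lengthOfLongestSubstring("bbbbb"));    // 1
    }
}
```

In two-pointer problems a comparison result decides which pointer moves; here it is the window's validity (no repeats) that drives the movement, which is the clearest way to tell the two patterns apart in an interview.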
Written and reviewed by senior developers with real-world experience across enterprise, startup and open-source projects. Every article on TheCodeForge is written to be clear, accurate and genuinely useful — not just SEO filler.