
Dijkstra's Shortest Path Algorithm — Deep Dive with Python Code, Edge Cases & Interview Prep

Dijkstra's shortest path algorithm explained deeply — min-heap internals, negative edge gotchas, O((V+E) log V) complexity, a full Python implementation, and real interview questions.
🔥 Advanced — solid DSA foundation required
In this tutorial, you'll learn
  • Dijkstra finds shortest paths from a single source in O((V+E) log V) using a min-heap.
  • It requires all edge weights to be non-negative — use Bellman-Ford for negative weights.
  • Always check for stale heap entries by comparing the popped distance to the current known distance.
Quick Answer

Imagine you're driving across a city and your GPS needs to find the fastest route from your house to the airport. There are dozens of roads, each with different travel times — some are highways, some are school zones. Dijkstra's algorithm is exactly what your GPS runs: it always tries the road with the lowest travel time first, crossing off places it's already visited, and updating estimated arrival times as it discovers faster routes. It never backtracks to a finished stop, and it always guarantees you the shortest possible trip.

Every time Google Maps reroutes you around traffic, every time a network router decides which packet path costs the least, and every time an airline computes the cheapest connecting flights, Dijkstra's algorithm — or something built directly on top of it — is doing the heavy lifting. It was published by Edsger W. Dijkstra in 1959, and it remains one of the most practically deployed algorithms in computer science history. That kind of longevity isn't accidental.

The core problem Dijkstra solves is this: given a weighted graph where edge weights are non-negative, find the minimum-cost path from a single source vertex to every other reachable vertex. A naive approach — trying every possible path — explodes combinatorially. Dijkstra's insight was that you can build the solution greedily: if you always expand the unvisited vertex with the currently known lowest cost, you're guaranteed never to find a cheaper path to it later (provided weights are non-negative). That greedy guarantee is the algorithm's heartbeat.

By the end of this article you'll understand not just how to implement Dijkstra with a priority queue in Python, but why the min-heap is essential for the efficient version, what happens internally when you relax an edge, why negative weights break the algorithm at a fundamental level (not just in practice), how to handle common production gotchas like disconnected graphs and duplicate heap entries, and exactly how to answer the curveball Dijkstra questions interviewers love to throw.

What is Dijkstra's Algorithm? — Plain English

Dijkstra's algorithm finds the shortest path from one source node to all other nodes in a weighted graph where all edge weights are non-negative. Think of a GPS: your phone knows the travel time on every road and wants the fastest route. It never backtracks to a road it already 'settled' because it always picks the closest unsettled destination next — and since weights are non-negative, no future road could make a settled destination cheaper.

The key insight is greedy correctness: if you always expand the unsettled node with the current minimum distance, then the first time a node is popped from the heap its distance is already the shortest possible. This breaks down with negative weights because a future negative edge could provide a cheaper route to an already-settled node.
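To see that breakdown concretely, here is a minimal sketch of the textbook "settled set" formulation, which never relaxes edges into a node it has already settled (the function name and graph are illustrative, and this is deliberately not the implementation used later in this article). On a graph with a negative edge it reports 2 for node 1, even though the true shortest path 0→2→1 costs 5 + (-4) = 1:

```python
import heapq

def dijkstra_settled(graph, source):
    # Classic formulation: a node popped once is settled forever,
    # and settled nodes are never relaxed again.
    dist = {node: float('inf') for node in graph}
    dist[source] = 0
    settled = set()
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in settled:
            continue
        settled.add(u)
        for v, w in graph[u]:
            # Refusing to revisit settled nodes is exactly what
            # negative weights violate.
            if v not in settled and d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

# 0->1 costs 2 directly, but 0->2->1 costs 5 + (-4) = 1.
graph = {0: [(1, 2), (2, 5)], 1: [], 2: [(1, -4)]}
print(dijkstra_settled(graph, 0))  # {0: 0, 1: 2, 2: 5} — node 1 is wrong
```

Node 1 is settled at distance 2 before node 2 is ever processed, so the cheaper route through the negative edge is discovered too late to matter.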

How Dijkstra's Algorithm Works — Step by Step

Dijkstra uses a min-heap (priority queue) to always process the nearest unsettled node next.

  1. Initialize dist[source] = 0 and dist[v] = infinity for every other vertex v.
  2. Push (0, source) into the min-heap.
  3. While the heap is non-empty:
     a. Pop the entry (d, u) with the smallest current distance d.
     b. If d > dist[u], skip it — this is a stale entry (we already found a shorter path to u).
     c. For each neighbour v of u with edge weight w: if dist[u] + w < dist[v], update dist[v] = dist[u] + w and push (dist[v], v) onto the heap.
  4. After the heap is empty, dist[v] holds the shortest distance from the source to every reachable vertex v.

Time complexity: O((V + E) log V) with a binary min-heap. Space: O(V) for distances and the heap.

Worked Example — Tracing the Algorithm

Graph (5 nodes 0-4, directed weighted edges): 0->1 w=2, 0->2 w=4, 1->2 w=1, 1->3 w=7, 2->4 w=3, 3->4 w=1

Start: dist=[0, INF, INF, INF, INF]. Heap: [(0,0)]

Iteration 1: pop (0,0). Process node 0. Edge 0->1 w=2: dist[1] = min(INF, 0+2) = 2. Push (2,1). Edge 0->2 w=4: dist[2] = min(INF, 0+4) = 4. Push (4,2). dist=[0,2,4,INF,INF]. Heap: [(2,1),(4,2)]

Iteration 2: pop (2,1). Process node 1. Edge 1->2 w=1: dist[2] = min(4, 2+1) = 3. Push (3,2). Edge 1->3 w=7: dist[3] = min(INF, 2+7) = 9. Push (9,3). dist=[0,2,3,9,INF]. Heap: [(3,2),(4,2),(9,3)]

Iteration 3: pop (3,2). Process node 2. Edge 2->4 w=3: dist[4] = min(INF, 3+3) = 6. Push (6,4). dist=[0,2,3,9,6]. Heap: [(4,2),(6,4),(9,3)]

Iteration 4: pop (4,2). d=4 > dist[2]=3, stale entry. Skip.

Iteration 5: pop (6,4). Process node 4. No outgoing edges.

Iteration 6: pop (9,3). Process node 3. Edge 3->4 w=1: dist[4] = min(6, 9+1) = 6. No improvement.

Final distances from node 0: [0, 2, 3, 9, 6]

Implementation

The heap stores (distance, node) pairs. heapq.heappop returns the minimum-distance entry. Stale entries (where the popped distance exceeds the current best) are skipped immediately. For each neighbour v of u, the algorithm checks whether going through u improves v's known distance. Unreachable nodes retain float('inf') in the dist dictionary. The algorithm terminates when the heap is empty — all reachable nodes are settled.

dijkstra.py · PYTHON
import heapq

def dijkstra(graph, source):
    """
    graph: dict mapping node -> list of (neighbour, weight)
    Returns dist dict mapping node -> shortest distance from source.
    """
    dist = {node: float('inf') for node in graph}
    dist[source] = 0
    heap = [(0, source)]  # (distance, node)

    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:  # stale entry — skip
            continue
        for v, w in graph[u]:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                heapq.heappush(heap, (dist[v], v))

    return dist

graph = {
    0: [(1,2),(2,4)],
    1: [(2,1),(3,7)],
    2: [(4,3)],
    3: [(4,1)],
    4: []
}
print(dijkstra(graph, 0))
▶ Output
{0: 0, 1: 2, 2: 3, 3: 9, 4: 6}
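The function above returns only distances. If you also need the route itself, a common extension is to record each node's predecessor whenever its distance improves, then walk the chain backwards from the target. The helper names below (dijkstra_with_paths, path_to) are illustrative, not part of the article's code:

```python
import heapq

def dijkstra_with_paths(graph, source):
    # Same algorithm as above, plus a predecessor map so the route
    # itself, not just its cost, can be recovered afterwards.
    dist = {node: float('inf') for node in graph}
    dist[source] = 0
    prev = {source: None}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:  # stale entry — skip
            continue
        for v, w in graph[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                prev[v] = u  # best-known way to reach v is via u
                heapq.heappush(heap, (dist[v], v))
    return dist, prev

def path_to(prev, target):
    # Walk predecessors back to the source, then reverse.
    # (Raises KeyError if target was never reached.)
    path = []
    while target is not None:
        path.append(target)
        target = prev[target]
    return path[::-1]

graph = {0: [(1, 2), (2, 4)], 1: [(2, 1), (3, 7)], 2: [(4, 3)], 3: [(4, 1)], 4: []}
dist, prev = dijkstra_with_paths(graph, 0)
print(path_to(prev, 4))  # [0, 1, 2, 4]
```

On the worked example above, the shortest route to node 4 comes out as 0 → 1 → 2 → 4 with total cost 6, matching the traced distances.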

🎯 Key Takeaways

  • Dijkstra finds shortest paths from a single source in O((V+E) log V) using a min-heap.
  • It requires all edge weights to be non-negative — use Bellman-Ford for negative weights.
  • Always check for stale heap entries by comparing the popped distance to the current known distance.
  • The first time a node is popped from the heap, its distance is final and optimal.
  • Use an adjacency list for sparse graphs; on dense graphs (E close to V^2) the O(V^2) array-based version is competitive, and Floyd-Warshall is the tool for the all-pairs problem.

⚠ Common Mistakes to Avoid

    Running Dijkstra on a graph with negative edge weights — the greedy guarantee no longer holds
    Forgetting the stale-entry check (d > dist[u]) and wasting time re-processing settled nodes
    Forgetting that unreachable nodes keep a distance of infinity in the result

Interview Questions on This Topic

  • Q: What is the time complexity of Dijkstra's algorithm with a min-heap?
  • Q: Why does Dijkstra not work with negative edge weights?
  • Q: What is a stale heap entry and how do you handle it?

Frequently Asked Questions

Why does Dijkstra fail with negative edge weights?

Dijkstra's greedy guarantee relies on the fact that once a node is settled (popped from the heap with the minimum distance), no future path can be cheaper. With negative weights, a node settled later could have a negative edge that makes a previously-settled node's distance shorter — but we never revisit settled nodes. Use Bellman-Ford for graphs with negative weights.
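As a rough sketch of that alternative: Bellman-Ford simply relaxes every edge V−1 times, so it never relies on a settled-node guarantee and negative edges cannot mislead it (the function name and edge-list format here are assumptions, not from this article):

```python
def bellman_ford(edges, num_nodes, source):
    # edges: list of (u, v, w) tuples. Relax every edge V-1 times;
    # an improving V-th pass would mean a negative cycle.
    INF = float('inf')
    dist = [INF] * num_nodes
    dist[source] = 0
    for _ in range(num_nodes - 1):
        for u, v, w in edges:
            if dist[u] != INF and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # Detection pass: any further improvement means a negative cycle.
    for u, v, w in edges:
        if dist[u] != INF and dist[u] + w < dist[v]:
            raise ValueError("negative cycle reachable from source")
    return dist

# Same negative-edge graph: 0->1 costs 2, but 0->2->1 costs 5 + (-4) = 1.
edges = [(0, 1, 2), (0, 2, 5), (2, 1, -4)]
print(bellman_ford(edges, 3, 0))  # [0, 1, 5]
```

The price of that robustness is O(V * E) time versus Dijkstra's O((V+E) log V), which is why Dijkstra remains the default when weights are known to be non-negative.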

What is a stale heap entry and why does it matter?

When we update dist[v] and push a new (dist[v], v) entry to the heap, the old entry for v remains. When that old entry is popped, its distance is larger than the current dist[v]. We check 'd > dist[u]' to detect and skip these stale entries. Without this check, the algorithm still produces correct results but wastes time re-processing settled nodes.
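A quick way to convince yourself of that last point is an instrumented variant of the same algorithm (the counting wrapper is a hypothetical helper, not the article's code) that counts how many popped entries do real work, with the stale check toggled on and off:

```python
import heapq

def dijkstra_counting(graph, source, skip_stale=True):
    # Same algorithm as the article's implementation, plus a counter
    # of how many popped heap entries are actually processed.
    dist = {node: float('inf') for node in graph}
    dist[source] = 0
    heap = [(0, source)]
    processed = 0
    while heap:
        d, u = heapq.heappop(heap)
        if skip_stale and d > dist[u]:
            continue  # stale entry — skip
        processed += 1
        for v, w in graph[u]:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                heapq.heappush(heap, (dist[v], v))
    return dist, processed

graph = {0: [(1, 2), (2, 4)], 1: [(2, 1), (3, 7)], 2: [(4, 3)], 3: [(4, 1)], 4: []}
print(dijkstra_counting(graph, 0, skip_stale=True))   # 5 entries processed
print(dijkstra_counting(graph, 0, skip_stale=False))  # 6 — the stale (4,2) too
```

Both runs return the same distances; the version without the check just burns an extra iteration re-relaxing node 2's edges to no effect. On large graphs with many decrease-key updates, that wasted work adds up.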

What is the difference between O(V^2) and O(E log V) Dijkstra?

The O(V^2) version uses a plain array to find the minimum-distance unsettled node, scanning all V nodes each time. It is better for dense graphs (E close to V^2). The O((V+E) log V) version uses a binary min-heap and is better for sparse graphs. In most interview and competitive programming contexts, the heap version is expected.
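For comparison, a minimal sketch of that O(V^2) array-based version over an adjacency matrix might look like the following (the function name and the None-for-no-edge matrix encoding are assumptions for illustration):

```python
def dijkstra_dense(adj_matrix, source):
    # O(V^2): a linear scan over all vertices replaces the heap's
    # pop-min. adj_matrix[u][v] is the edge weight, or None if absent.
    n = len(adj_matrix)
    INF = float('inf')
    dist = [INF] * n
    dist[source] = 0
    settled = [False] * n
    for _ in range(n):
        # Find the nearest unsettled vertex by scanning all of them.
        u = min((v for v in range(n) if not settled[v]),
                key=lambda v: dist[v], default=None)
        if u is None or dist[u] == INF:
            break  # remaining vertices are unreachable
        settled[u] = True
        for v in range(n):
            w = adj_matrix[u][v]
            if w is not None and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist

# Same worked-example graph as above, as a matrix (N = no edge).
N = None
matrix = [
    [N, 2, 4, N, N],
    [N, N, 1, 7, N],
    [N, N, N, N, 3],
    [N, N, N, N, 1],
    [N, N, N, N, N],
]
print(dijkstra_dense(matrix, 0))  # [0, 2, 3, 9, 6]
```

On the worked example it produces the same distances as the heap version; the trade-off only tips in its favour when the graph is dense enough that E log V exceeds V^2.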

Naren, Founder & Author

Developer and founder of TheCodeForge. I built this site because I was tired of tutorials that explain what to type without explaining why it works. Every article here is written to make concepts actually click.
