Depth-First Search

Exploring as far as possible before backtracking

The Maze Runner

Imagine you are exploring a maze. You come to a fork in the path. Which way do you go?

One strategy is to pick a direction and keep going. If you hit a dead end, backtrack to the last fork and try a different path. Keep doing this until you find the exit. This is depth-first search (DFS).

Where BFS spreads outward like ripples in a pond, DFS dives deep. It follows a single path as far as it can go before retreating. Think of a mole burrowing through the earth—it tunnels forward relentlessly, only turning around when it hits rock.

BFS spreads like ripples. DFS burrows like a mole.

The difference is not just philosophical. It emerges from a single, concrete choice: the data structure we use to track where to go next.

The Stack

In BFS, we used a queue—first in, first out (FIFO). The first cell we discovered was the first one we explored. This creates the expanding wavefront.

In DFS, we use a stack—last in, first out (LIFO). The most recent discovery gets explored first. This creates the deep, narrow exploration pattern.

Interactive: The Stack

Last In, First Out: The most recently added item is always removed first. In DFS, this means the most recently discovered cell gets explored next.

A stack works like a pile of plates. You can only add to the top (push) and remove from the top (pop). The last plate placed is the first one taken. This simple reversal of order completely transforms how the algorithm explores.
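In Python, a plain list already behaves like this pile of plates: `append` pushes onto the top, `pop` takes from the top. A minimal sketch:

```python
stack = []
stack.append("A")   # push: A is at the bottom
stack.append("B")
stack.append("C")   # C is now on top

print(stack.pop())  # → C: the last plate placed is the first taken
print(stack.pop())  # → B
```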

When DFS visits a cell, it pushes all unvisited neighbors onto the stack. But instead of exploring them in order, it immediately pops the most recently added neighbor and dives deeper. It keeps going until it hits a dead end—a cell with no unvisited neighbors. Then it pops the next item from the stack and tries again.

You can implement DFS with an explicit stack, or you can use recursion—the call stack itself becomes your data structure. The recursive version is often cleaner:

def dfs(node, goal, graph, visited=None):
    if visited is None:
        visited = set()
    if node == goal:
        return [node]                  # found the goal: start building the path
    visited.add(node)
    for neighbor in graph[node]:
        if neighbor not in visited:
            path = dfs(neighbor, goal, graph, visited)
            if path is not None:
                return [node] + path   # prepend ourselves while unwinding
    return None                        # dead end: backtrack

Each recursive call pushes onto the call stack. When a call returns (backtracks), we pop back to the previous decision point.
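The explicit-stack version makes the LIFO behavior visible. A minimal sketch in Python (the `dfs_iterative` name and the parent map used for path reconstruction are illustrative choices, not part of the visualization above):

```python
def dfs_iterative(start, goal, graph):
    """DFS with an explicit stack instead of the call stack."""
    stack = [start]          # LIFO: last pushed, first popped
    parent = {start: None}   # remembers how we reached each node
    while stack:
        node = stack.pop()   # the most recent discovery comes off first
        if node == goal:
            # reconstruct the path by walking parents back to the start
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for neighbor in graph[node]:
            if neighbor not in parent:
                parent[neighbor] = node
                stack.append(neighbor)
    return None              # stack exhausted: goal unreachable

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
path = dfs_iterative("A", "D", graph)
```

Note that the path found depends on push order: here "C" is pushed after "B", so DFS explores it first.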

DFS in Action

Watch how DFS explores. Notice the snake-like pattern—the algorithm commits to a direction and goes deep before backtracking.

Interactive: Depth-First Search

Click cells to toggle walls. Drag the S and E markers to move start and end. Notice the snake-like exploration pattern—DFS goes deep before backtracking.

Compare this to BFS. Where BFS expands uniformly in all directions, DFS might explore the entire bottom of the grid before ever looking at the top. The frontier (yellow cells) stays small—usually just a thin line along the current path.

DFS Does NOT Guarantee Shortest Paths

This is the critical distinction: DFS does not find shortest paths.

When DFS stumbles upon the goal, it returns the path it happened to take—which might be wildly suboptimal. It found a path, not the path.

Consider a simple example. If the goal is one step to the right of the start, but DFS first explores left, it might traverse the entire grid before finally turning right. The path it returns could be hundreds of steps long when the optimal path is just one.

BFS guarantees the shortest path in unweighted graphs because it explores by distance from the start. Cells at distance 1 are visited before cells at distance 2, which are visited before distance 3, and so on. The first time BFS reaches the goal, it has taken the shortest possible route.

DFS makes no such guarantee. It explores by recency, not by distance.

So when is DFS still useful?

  • Maze generation: Randomized DFS creates mazes with long, winding corridors—perfect for games.
  • Cycle detection: DFS can detect cycles in a graph by tracking which nodes are currently in the recursion stack.
  • Topological sorting: For scheduling tasks with dependencies, DFS naturally produces a valid ordering.
  • Memory efficiency: DFS uses memory proportional to the depth of the search, not the breadth. In very wide graphs, this can be a significant advantage.
  • Finding any path: Sometimes you just need a path, not the best one.
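As an example of the cycle-detection use case: DFS colors each node white (unvisited), gray (currently on the recursion stack), or black (finished). Reaching a gray node means we have looped back into our own path. A sketch under those assumptions (`has_cycle` is an illustrative name):

```python
def has_cycle(graph):
    """Detect a cycle in a directed graph by tracking the recursion stack."""
    WHITE, GRAY, BLACK = 0, 1, 2            # unvisited / on stack / finished
    color = {node: WHITE for node in graph}

    def visit(node):
        color[node] = GRAY                  # node is now on the recursion stack
        for neighbor in graph[node]:
            if color[neighbor] == GRAY:     # back edge into our own path: cycle
                return True
            if color[neighbor] == WHITE and visit(neighbor):
                return True
        color[node] = BLACK                 # fully explored, safe to revisit
        return False

    return any(visit(n) for n in graph if color[n] == WHITE)
```

Removing the cycle check and instead appending each node as it turns black yields a topological sort, which is why the two uses are so often mentioned together.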

BFS vs DFS: A Dramatic Comparison

The difference becomes visceral when you see both algorithms side by side on the same grid.

Interactive: BFS vs DFS Comparison

Click cells to toggle walls. Both algorithms start from the same position and search for the same goal. Watch how BFS expands uniformly while DFS dives deep along corridors.

Watch closely:

  • BFS expands in circles, visiting cells methodically by distance
  • DFS darts around, following corridors and backtracking

Count the cells explored. Count the path length found. The numbers tell the story.
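The same comparison can be run in code. In this sketch, BFS and DFS share every line except how the frontier is popped; the grid, the `search` helper, and the neighbor order are illustrative assumptions, not the exact setup of the visualization above:

```python
def neighbors(cell, rows, cols, walls):
    r, c = cell
    for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
        if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in walls:
            yield (nr, nc)

def search(start, goal, rows, cols, walls, use_stack):
    """One search loop: use_stack=True gives DFS, False gives BFS."""
    frontier = [start]
    parent = {start: None}
    while frontier:
        # LIFO pop → DFS; FIFO pop → BFS (a deque would be faster, but
        # pop(0) keeps the sketch short)
        cell = frontier.pop() if use_stack else frontier.pop(0)
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        for nxt in neighbors(cell, rows, cols, walls):
            if nxt not in parent:
                parent[nxt] = cell
                frontier.append(nxt)
    return None

# Same open 5x5 grid, same start and goal, for both algorithms
bfs_path = search((0, 0), (4, 4), 5, 5, set(), use_stack=False)
dfs_path = search((0, 0), (4, 4), 5, 5, set(), use_stack=True)
# BFS finds a shortest path; the DFS path is never shorter
print(len(bfs_path), len(dfs_path))
```

Even on an empty grid, DFS tends to snake along the edges and return a far longer path than the 9-cell optimum BFS finds.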

Key Takeaways

  • DFS explores as deep as possible before backtracking
  • The stack reverses exploration order—most recent first
  • DFS does NOT find shortest paths
  • DFS uses less memory than BFS (depth vs breadth)
  • Use DFS for maze generation, cycle detection, and topological sorting
  • Use BFS when you need the shortest path