## What is Dijkstra’s Algorithm?

Dijkstra’s algorithm solves the **single-source shortest path (SSSP)** problem: finding the shortest route between two vertices in a graph. Its author is Dr. Edsger W. Dijkstra, a pioneering contributor to computer science.

Dijkstra’s original algorithm is an uninformed **greedy algorithm**. Although it uses information in the form of edge weights, these weights are exact and inherent to the network, so no heuristic estimation function is used. In the most common example, Dijkstra’s algorithm finds the shortest path between any two cities in a graph.

### What is Its Purpose?

Common applications of Dijkstra’s algorithm are in domains of optimal pathfinding for various distribution networks, such as oil, gas, electricity, road, or computer networks. Computer network equipment employs Dijkstra’s algorithm as a decision-making algorithm for optimal packet routing between network nodes (see the *Open-Shortest Path First protocol*).

## Algorithm Overview: How Does Dijkstra Work?

Dijkstra’s algorithm initially sets the cost of all vertices except the starting one to infinity. It sets the cost of the starting vertex to 0 and updates the costs of all adjoining, unexplored vertices, according to the weights (distances) associated with the connecting edges. After being visited, each adjoining vertex is added to the priority queue. Finally, the starting vertex is marked as explored and does not participate in any further algorithm calculations.

In each following iteration, the vertex with the lowest cost is taken out of the priority queue and its exploration starts by visiting and conditionally updating all adjoining, non-explored vertices. The update operation implies two steps: assignment of the lower cost to the adjoining node and association with the ancestor vertex for later reconstruction of the shortest path.

The update condition is determined by comparing each adjoining vertex’s current cost with its new, potentially lower cost. Its new cost is calculated as the cost of the vertex being explored + the weight of the adjoining edge (the edge between the vertex being explored and the adjoining vertex).

If the current cost of the adjoining vertex is still lower than the potential new cost, the vertex will not be updated. Otherwise, it will assume the new cost (its cost will decrease) and the vertex in focus will become its ancestor vertex. Vertex cost reduction is also referred to as a *relaxation procedure*.
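The relaxation procedure can be sketched in a few lines (the names `relax`, `cost`, and `parent` here are illustrative, not taken from the implementation later in this article):

```python
import math

def relax(cost, parent, u, v, weight):
    # 'cost' holds the best-known cost per vertex, 'parent' the
    # ancestor used to reach it; unseen vertices count as infinity.
    new_cost = cost[u] + weight
    if new_cost < cost.get(v, math.inf):
        cost[v] = new_cost   # step 1: assign the lower cost
        parent[v] = u        # step 2: record the ancestor vertex
        return True          # the adjoining vertex was updated
    return False             # its current cost is already lower

cost, parent = {'a': 0}, {}
relax(cost, parent, 'a', 'b', 4)   # 'b' gets cost 4, ancestor 'a'
relax(cost, parent, 'a', 'b', 7)   # no update: 7 is not lower than 4
```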

After visiting and conditionally updating all the adjoining, non-explored vertices, the vertex being explored will be marked as explored and will not participate in any further algorithm calculations. The described process continues until there are no unexplored vertices left in the priority queue.

When the algorithm ends, every vertex is assigned the lowest possible cost, and the traversal yields the shortest possible path between the starting and target vertices. For comparison with the previously described best-first search algorithm: if its heuristic function gave the same relative costs for all vertices as Dijkstra’s algorithm, it would traverse the vertices in the same order and yield the same shortest path.
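The process described above can be condensed into a minimal, self-contained sketch built on Python’s standard-library `heapq` module (the function name and structure here are illustrative and independent of the `Graph`-based implementation shown later in this article); it runs on the same example graph used in the implementation section:

```python
import heapq

def dijkstra_sketch(edges, start, target):
    # Build an undirected adjacency map from (u, v, weight) triples.
    adj = {}
    for u, v, w in edges:
        adj.setdefault(u, []).append((v, w))
        adj.setdefault(v, []).append((u, w))

    dist, parent = {start: 0}, {start: None}
    pq = [(0, start)]                   # (cost, vertex) pairs
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:                 # stale queue entry, skip it
            continue
        if u == target:                 # walk the ancestor links back
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return d, path[::-1]
        for v, w in adj.get(u, []):
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w         # relaxation: assign the lower cost
                parent[v] = u           # ... and record the ancestor
                heapq.heappush(pq, (d + w, v))
    return None

edges = [(0, 1, 4), (0, 2, 2), (2, 4, 1), (4, 3, 3),
         (3, 5, 2), (0, 5, 5), (2, 6, 5)]
print(dijkstra_sketch(edges, 5, 6))     # (11, [5, 3, 4, 2, 6])
```

Instead of marking vertices as explored, this variant simply skips stale queue entries, which is a common idiom with `heapq` since it has no decrease-key operation.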

## What Are the Properties of Dijkstra?

Dijkstra’s algorithm doesn’t use a heuristic function and doesn’t estimate the costs of the graph’s vertices. Instead, it relies on the exact information represented by the edge’s weights. As the initial costs of non-starting vertices are set to infinity, the algorithm successively lowers their costs until they reach their minimum cost.

This behavior yields its optimality property: **minimum costs assigned to vertices enable the algorithm to always find the shortest path between the starting vertex and any other vertex in the graph.** As the shortest paths always start from the starting vertex, the algorithm is attributed as a “single-source” algorithm.

Besides being **optimal**, the algorithm is also **complete**, i.e., it will always take a finite time to find a solution.

However, Dijkstra’s algorithm **cannot** handle edges with **negative weights**.
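The following sketch (again independent of this article’s implementation; the names are illustrative) shows why a negative weight breaks the algorithm: once a vertex is finalized, it is never re-explored, so a cheaper route discovered later cannot propagate through it.

```python
import heapq

def dijkstra_costs(adj, source):
    # A conventional Dijkstra variant that finalizes each popped vertex.
    dist = {source: 0}
    finalized = set()
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if u in finalized:
            continue                    # never re-explore a finalized vertex
        finalized.add(u)
        for v, w in adj.get(u, []):
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

# Directed graph: the true shortest path to 'd' is a -> c -> b -> d = 2,
# but 'b' is finalized (via a -> b = 2) before the negative edge c -> b
# is relaxed, so 'd' keeps the stale cost 3.
adj = {'a': [('b', 2), ('c', 5)], 'b': [('d', 1)], 'c': [('b', -4)]}
print(dijkstra_costs(adj, 'a')['d'])    # 3, although the true cost is 2
```

Graphs with negative weights require a different algorithm, such as Bellman-Ford, which re-relaxes edges until no cost can decrease further.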

## Python Implementation of Dijkstra

The implementation of Dijkstra’s algorithm is achieved by the function `dijkstra()` and a modification of the underlying `Graph` class.

The `dijkstra()` function takes three parameters:

- The `graph` parameter takes an initialized `Graph` object (see the blog on the *breadth-first search algorithm*, the section on *graphs*).
- The `start_vertex` parameter takes the starting vertex, which we choose freely (remember, a graph is not a tree, so there is no absolute root).
- The `target` parameter is the entity we want to find in the graph, enclosed in a vertex.

For a better understanding of the algorithm and its implementation, each step is precisely described in the code below.

As we introduced several changes to the `Graph` class, the most practical approach is to show the entire class:

```python
class Graph:
    def __init__(self, directed=False):
        self._outgoing = {}
        # If the graph is undirected, 'self._outgoing'
        # is the universal storage.
        self._incoming = {} if directed else self._outgoing

    # If the graph is directed, the 'self._incoming'
    # dictionary differs from the 'self._outgoing'.
    def is_directed(self):
        return self._incoming is not self._outgoing

    # The function returns a generator of incoming
    # or outgoing (default) edges of a vertex.
    def adjacent_edges(self, vertex, outgoing=True):
        # References the corresponding outer dictionary
        # (dictionary of dictionaries).
        adj_edges = self._outgoing if outgoing else self._incoming

        # Access each of the edges for this endpoint vertex.
        for edge in adj_edges[vertex].values():
            yield edge

    def add_vertex(self, entity=None, h=None):
        # Constructs a new vertex from the entity.
        vertex = self.Vertex(entity, h)
        # The vertex becomes a key in the outer dictionary,
        # but the value is an internal dictionary (as we model
        # both dimensions for each edge: origin and destination).
        # e.g. {vertex_a: {vertex_b: edge_a_b}, vertex_b: {vertex_c: edge_b_c}}.
        self._outgoing[vertex] = {}
        if self.is_directed():
            self._incoming[vertex] = {}

    def add_edge(self, origin, destination, weight=None):
        # Constructs a new edge from the vertices.
        edge = self.Edge(origin, destination, weight)
        # Adds the edge to the dictionary (dictionaries are
        # the same if the graph is undirected). The outer key
        # represents the origin, i.e. the component 'a' of
        # the edge-defining pair (a, b). The inner key stands
        # for the component 'b' of the edge-defining pair (a, b).
        self._outgoing[origin][destination] = edge
        # Even if the graph is undirected, each edge has to
        # be added twice, i.e. once for each of its endpoints.
        self._incoming[destination][origin] = edge

    def vertices(self):
        return self._outgoing.keys()

    def edges(self):
        # All the edges are collected into a set.
        result = set()
        for inner_dict in self._outgoing.values():
            result.update(inner_dict.values())
        return result

    class Vertex:
        __slots__ = '_entity', '_h'

        def __init__(self, entity, h=None):
            self.entity = entity
            self.h = h

        # The real-world entity is represented by the Vertex object.
        @property
        def entity(self):
            return self._entity

        @entity.setter
        def entity(self, entity):
            self._entity = entity

        # The real-world entity has a cost of h.
        @property
        def h(self):
            return self._h

        @h.setter
        def h(self, h):
            self._h = h

        # We have to implement __hash__ to use the object as a dictionary key.
        def __hash__(self):
            return hash(id(self))

        def __lt__(self, other):
            if self.h is None:
                return False
            elif other.h is None:
                return True
            else:
                return self.h < other.h

    class Edge:
        __slots__ = '_origin', '_destination', '_weight'

        def __init__(self, origin, destination, weight=None):
            self._origin = origin
            self._destination = destination
            self._weight = weight

        def endpoints(self):
            return self._origin, self._destination

        # Returns the other component of the edge-defining pair (a, b)
        # for a given component a or b, respectively.
        def opposite(self, vertex):
            return self._destination if self._origin is vertex \
                else self._origin

        # Returns the weight of the edge.
        @property
        def weight(self):
            return self._weight

        # Sets the weight of the edge.
        @weight.setter
        def weight(self, weight):
            self._weight = weight

        def __hash__(self):
            return hash((self._origin, self._destination))
```

A significant difference to the previous version of the `Graph` class is the introduction of the *property* decorator and the `weight` attribute, as highlighted in the code.
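As a quick, stand-alone illustration of that pattern (a trimmed-down `Edge` just for this example, not the full class above), the *property* decorator lets the private `_weight` attribute be read and written through plain attribute syntax:

```python
class Edge:
    def __init__(self, weight=None):
        self._weight = weight

    # Reading 'e.weight' calls this getter.
    @property
    def weight(self):
        return self._weight

    # Assigning to 'e.weight' calls this setter.
    @weight.setter
    def weight(self, weight):
        self._weight = weight

e = Edge(4)
e.weight = 7        # goes through the setter, not a plain attribute write
print(e.weight)     # 7
```

A setter like this can later validate or transform values without changing any calling code, which is why it is preferred over exposing `_weight` directly.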

With these changes in place, the implementation of the core function, `dijkstra()`, is:

```python
from graph import Graph
from queue import PriorityQueue


def dijkstra(graph, start_vertex, target):
    # Create the priority queue for open vertices.
    vertices_pq = PriorityQueue()

    # Initialize the starting vertex to cost 0.
    start_vertex.h = 0

    # Adds the start vertex to the priority queue.
    print(f'Visiting/queueing vertex {start_vertex.entity}')
    vertices_pq.put(start_vertex)
    print('Prioritized vertices (v, h(v)):',
          *((vert.entity, vert.h) for vert in vertices_pq.queue),
          end=2 * '\n')

    # The starting vertex is visited first and has no leading edges.
    # If we did not put it into 'visited' in the first iteration,
    # it would end up in 'visited' during the second iteration, pointed to
    # by one of its children vertices as a previously unvisited vertex.
    # ('visited' and 'explored' are module-level structures, initialized
    # in the graph construction code below.)
    visited[start_vertex] = None

    # Loops until the priority queue gets empty.
    while not vertices_pq.empty():
        # Gets the vertex with the lowest cost.
        vertex = vertices_pq.get()
        # If the vertex being explored is the target vertex, ends the algorithm.
        print(f'Exploring vertex {vertex.entity}')
        if vertex.entity == target:
            return vertex

        # Examines each non-visited adjoining edge/vertex.
        for edge in graph.adjacent_edges(vertex):
            # Gets the second endpoint.
            v_2nd_endpoint = edge.opposite(vertex)

            # Skips the explored vertices.
            if v_2nd_endpoint in explored:
                continue

            # Checks if the endpoint has a cost and whether the new cost is cheaper.
            if v_2nd_endpoint.h is None or vertex.h + edge.weight < v_2nd_endpoint.h:
                # Adds the second endpoint to 'visited' and maps
                # the leading edge for the search path reconstruction.
                v_2nd_endpoint.h = vertex.h + edge.weight

                # Prevents reinsertion into the priority queue. The
                # endpoint distance value will only be updated.
                if v_2nd_endpoint not in visited:
                    print(f'Visiting/queueing vertex {v_2nd_endpoint.entity}')
                    vertices_pq.put(v_2nd_endpoint)

                # Forces the priority queue to reorder in case of an
                # inner vertex update resulting in the highest priority.
                vertices_pq.put(vertices_pq.get())

                # Replaces the previous vertex's ancestor with a cheaper one.
                visited[v_2nd_endpoint] = edge

        print('Prioritized vertices (v, h(v)):',
              *((vert.entity, vert.h) for vert in vertices_pq.queue),
              end=2 * '\n')

        # The vertex is fully explored and put aside.
        explored.append(vertex)

    return None
```

Before we can test the algorithm, we have to initialize a graph and build it by adding vertices and edges to it:

```python
# Initializes an empty graph (object).
g = Graph()

# Loads the graph with the first seven vertices.
for i in range(0, 7):
    g.add_vertex(i)

# Constructs the 'vertices' dictionary for a more
# convenient access during the graph construction.
vertices = {k.entity: k for k in g.vertices()}

# Constructs an arbitrary graph from
# the existing vertices and edges.
g.add_edge(vertices[0], vertices[1], 4)
g.add_edge(vertices[0], vertices[2], 2)
g.add_edge(vertices[2], vertices[4], 1)
g.add_edge(vertices[4], vertices[3], 3)
g.add_edge(vertices[3], vertices[5], 2)
g.add_edge(vertices[0], vertices[5], 5)
g.add_edge(vertices[2], vertices[6], 5)

# Initializes the search path, the list of explored vertices,
# and the dictionary of visited vertices.
path = []
explored = []
visited = {}
```

Now that we have prepared everything, we can test `dijkstra()` and see how it works. Here is the part of the code that runs the algorithm, constructs the search path (if there is one), and shows in a step-by-step manner how it proceeds through the graph:

```python
# Starts the search.
result = dijkstra(g, vertices[5], 6)

# If the entity is found...
if result is not None:
    # The search path ends with the found vertex (entity).
    # Each vertex is a container for its real-world entity.
    path_vertex = result

    # The entity is added to the 'path'.
    path.append(path_vertex.entity)

    # Constructs the rest of the search path (if it exists)...
    while True:
        # Gets a discovery edge leading to the vertex.
        path_edge = visited.get(path_vertex)

        # If the path vertex is the root, it has no discovery edge...
        if path_edge is None:
            break

        # Otherwise, gets the second (parent vertex) endpoint.
        path_vertex = path_edge.opposite(path_vertex)

        # The entity is added to the 'path'.
        path.append(path_vertex.entity)

    print('Search path found:', end=' ')
    # The path is reversed and starts with the root vertex.
    print(*reversed(path), sep=' -> ')
# Otherwise...
else:
    print('\nEntity is not found')
```

The test run gave us the output:

```
Visiting/queueing vertex 5
Prioritized vertices (v, h(v)): (5, 0)

Exploring vertex 5
Visiting/queueing vertex 3
Visiting/queueing vertex 0
Prioritized vertices (v, h(v)): (3, 2) (0, 5)

Exploring vertex 3
Visiting/queueing vertex 4
Prioritized vertices (v, h(v)): (0, 5) (4, 5)

Exploring vertex 0
Visiting/queueing vertex 1
Visiting/queueing vertex 2
Prioritized vertices (v, h(v)): (4, 5) (1, 9) (2, 7)

Exploring vertex 4
Prioritized vertices (v, h(v)): (2, 6) (1, 9)

Exploring vertex 2
Visiting/queueing vertex 6
Prioritized vertices (v, h(v)): (1, 9) (6, 11)

Exploring vertex 1
Prioritized vertices (v, h(v)): (6, 11)

Exploring vertex 6
Search path found: 5 -> 3 -> 4 -> 2 -> 6
```

Based on the output, we can see that the search started from vertex 5 and that `dijkstra()` has found the entity in vertex 6. The entire search path is also displayed, and we should note that it will always be the shortest one: `5 -> 3 -> 4 -> 2 -> 6`.
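The reconstruction logic can also be sketched on its own with a plain ancestor dictionary (the names `reconstruct` and `parent` here are illustrative; the article's code stores discovery edges in `visited` instead):

```python
def reconstruct(parent, target):
    # Walk the ancestor links backwards from the target; the
    # starting vertex is the only one whose ancestor is None.
    path = [target]
    while parent[path[-1]] is not None:
        path.append(parent[path[-1]])
    return path[::-1]           # reverse: start -> ... -> target

# Ancestor links matching the search path from the run above.
parent = {5: None, 3: 5, 4: 3, 2: 4, 6: 2}
print(reconstruct(parent, 6))   # [5, 3, 4, 2, 6]
```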

However, a modification of just one weight might lead to a different solution, as the next example demonstrates. With that in mind, let's tweak the weight on one of our edges:

```python
# Constructs an arbitrary graph from
# the existing vertices and edges.
g.add_edge(vertices[0], vertices[1], 4)
g.add_edge(vertices[0], vertices[2], 2)
g.add_edge(vertices[2], vertices[4], 1)
g.add_edge(vertices[4], vertices[3], 3)
g.add_edge(vertices[3], vertices[5], 2)
g.add_edge(vertices[0], vertices[5], 4)
g.add_edge(vertices[2], vertices[6], 5)
```

The re-run gave us the output:

```
Visiting/queueing vertex 5
Prioritized vertices (v, h(v)): (5, 0)

Exploring vertex 5
Visiting/queueing vertex 3
Visiting/queueing vertex 0
Prioritized vertices (v, h(v)): (3, 2) (0, 4)

Exploring vertex 3
Visiting/queueing vertex 4
Prioritized vertices (v, h(v)): (0, 4) (4, 5)

Exploring vertex 0
Visiting/queueing vertex 1
Visiting/queueing vertex 2
Prioritized vertices (v, h(v)): (4, 5) (1, 8) (2, 6)

Exploring vertex 4
Prioritized vertices (v, h(v)): (2, 6) (1, 8)

Exploring vertex 2
Visiting/queueing vertex 6
Prioritized vertices (v, h(v)): (1, 8) (6, 11)

Exploring vertex 1
Prioritized vertices (v, h(v)): (6, 11)

Exploring vertex 6
Search path found: 5 -> 0 -> 2 -> 6
```

After a re-run, we got a different solution without modifying the algorithm, but only by changing one of our edges’ weights. Our simple demonstration just pointed out the dependence of Dijkstra’s algorithm on the edge weights.

## Efficiency Analysis

The algorithm’s worst-case **time complexity** depends on the choice of data structure used as the priority queue of open vertices, which in turn depends on the number of vertices *v* and edges *e*.

A heap implementation is more appropriate when the number of edges *e* in the graph is small, i.e. when *e* < *v*²/log *v*. In this case, the time complexity is *O((e + v) log v)*.

Conversely, the sequence implementation is more appropriate when the number of edges *e* in the graph is large, i.e. when *e* > *v*²/log *v*. In this case, the time complexity is *O(v²)*.

On another note, a more advanced priority queue implementation, such as a *Fibonacci heap*, can yield a time complexity of *O(e + v log v)*.

The **space complexity** of Dijkstra’s algorithm is *O(v + e)*.

Dijkstra’s algorithm is *optimal*, as it will always yield an optimal search path. Furthermore, Dijkstra’s algorithm will always find a solution if there is one, so it is also *complete*.

## Conclusion

In this article, we learned about Dijkstra’s search algorithm.

- First, we explained what Dijkstra’s algorithm is.
- Second, we took a look at its common purposes and applications.
- Third, we went through an explanation of how the algorithm works.
- Fourth, we examined the algorithm’s main properties.
- Fifth, we went through the implementation of the algorithm, which is based on the `Graph` class (its implementation is given above). We also tested the algorithm by calling its main function, `dijkstra()`, and analyzed its steps of execution for two slightly different edge weight scenarios.
- Sixth, we analyzed the algorithm's efficiency.

In the end, we concluded that the algorithm is optimal, and that if a solution exists, Dijkstra’s algorithm will always find it in its optimal form. The algorithm always takes finite time to reach the solution and is driven solely by the edges’ weights and the graph structure.