
Before we dive into the algorithm and the Python implementation, let’s first skim over some related graph tutorials that may help your understanding!


## Related Graph Tutorials

This algorithm is part of our graph algorithm tutorials:

- Breadth-First Search (BFS) Algorithm in Python
- Python Depth-First Search (DFS) Algorithm
- Iterative Deepening Depth-First Search (DFS) Algorithm in Python
- Python Best-First Search Algorithm
- Python Dijkstra Algorithm
- Python A* Algorithm
- Jump Search Algorithm in Python
- Python Backtracking
- Python Beam Search Algorithm


## What is the Jump Point Search Algorithm?

The **jump point search algorithm (JPS)** is a variant of the A* search algorithm, optimized for pathfinding on uniform-cost grid maps, such as those found in computer games.

The jump point search algorithm was invented and published by **Daniel Harabor and Alban Grastien in 2011**.

It exploits *path symmetry*: two paths (or path segments) are symmetric when they share the same start and goal vertex, have the same length, and differ only in the order of their moves.
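To make path symmetry concrete, here is a small illustrative snippet (my own example, not part of the article’s implementation): every ordering of the same set of moves on a uniform-cost grid reaches the same cell at the same cost, and this redundancy is exactly what JPS prunes away.

```python
from itertools import permutations

# Two rights (R) and two ups (U) on a uniform-cost grid:
# every ordering is a distinct but symmetric path.
moves = {'R': (0, 1), 'U': (1, 0)}
orderings = set(permutations('RRUU'))

endpoints = set()
for order in orderings:
    r = c = cost = 0
    for m in order:
        dr, dc = moves[m]
        r, c, cost = r + dr, c + dc, cost + 1
    endpoints.add((r, c, cost))

print(len(orderings))  # 6 symmetric paths
print(endpoints)       # all of them end at (2, 2) with cost 4
```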

In the authors’ words, the algorithm is **fast, optimal, and requires no memory overhead**.

It identifies and selectively expands only vertices of interest, which are called *jump points*. This means that vertices between any two jump points are not expanded, or in our familiar terms, they remain unexplored.

This is a property that enables much faster algorithm operation by reducing the number of vertices that would otherwise have to be processed, e.g., by applying the original A* algorithm.

**Related Article**: The A* Algorithm in Python

## What is Its Purpose?

A common application of the jump point search algorithm is *pathfinding*, such as in computer games or possibly in navigation systems.

In general, whenever there is a search space that can be generalized to a *uniform-cost grid* (all the paths to the neighbouring vertices are of equal cost), the jump point search algorithm is highly applicable.

## How Does It Work?

The jump point search algorithm builds on the A* algorithm, which means that we still have a heuristic estimation function and the `visited` (open) and `explored` (closed) lists. We also get the same optimality properties of the result under the same conditions.
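For reference, the open/closed bookkeeping is the same as in A*; the sketch below (with illustrative names, not the article’s code) shows the minimal mechanics built on Python’s `PriorityQueue`:

```python
from queue import PriorityQueue

visited_pq = PriorityQueue()  # the "open" list: vertices queued for expansion
explored = []                 # the "closed" list: vertices already expanded

# Entries are (priority, vertex); the queue always yields the lowest priority.
visited_pq.put((7, 'b'))
visited_pq.put((5, 'a'))

priority, vertex = visited_pq.get()  # 'a' comes out first (priority 5)
explored.append(vertex)
```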

The algorithm operation differs in the data, specifically in the `visited` and `explored` lists, and in how a vertex gets expanded.

A* does not perform very well on big open fields, i.e. uniform-cost grids, because it spends most of its time processing updates of the visited vertices.

On the other hand, the jump point search algorithm **improves on the A* algorithm** by exploiting the regularity of the grid, i.e., it does not need to search every possible path, since all paths are known to have equal costs.

💡 In other words, most nodes in the grid are not interesting enough to be processed.

As a result, the jump point search algorithm spends much less time updating the `visited` and `explored` lists. It is sufficient to scan the cells only to check whether there is a jump point nearby.

Instead of checking every neighbor from every possible vertex, the goal of each step in the scan is to decide whether the next point is interesting enough to create a new entry in the `visited` list.

If it is not, the algorithm continues the scanning iteration.

If a position is interesting enough, new jump points are recorded in the `visited` list, and the current scan iteration ends.

There are two cases of scan directions: **horizontal/vertical** and **diagonal**. Both cases are terminated when the scan runs into an obstacle (a *non-passable vertex*) or reaches the end of the search space.
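Such a scan can be sketched as a simple loop that keeps stepping until something stops it; the grid layout and function name below are illustrative, not the article’s implementation:

```python
def scan(grid, row, col, d_col):
    """Steps horizontally from (row, col) in direction d_col (+1 or -1)
    until an obstacle (a True cell) or the edge of the grid terminates it.
    Returns the last passable column reached."""
    while 0 <= col + d_col < len(grid[row]) and not grid[row][col + d_col]:
        col += d_col
    return col

# A 1 x 6 grid with a single obstacle in column 4.
grid = [[False, False, False, False, True, False]]
print(scan(grid, 0, 0, +1))  # stops at column 3, just before the obstacle
```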

Similarly, the scans yield a jump point if its search path passes beside an obstacle (black), as shown in the second and fourth pictures:

White numerated vertices represent valid neighbors of `x`, while grey vertices represent invalid neighbors of `x`.

We are simulating undirected uniform-cost grid maps by having each pair of vertices connected by the two edges of opposite directions. Each vertex has up to 8 neighbors and is either traversable or not.

Each horizontal or vertical move, from a traversable vertex to one of its neighbors, has a cost of 1, while diagonal moves have a cost of √2.

To make it easier on the algorithm, we will use small whole numbers as a measure of distance: we multiply each cost by 10 and halve it. Therefore, instead of the cost √2 = 1.41421… we will use the cost 14/2 = 7, and instead of the cost 1, we will use the cost 10/2 = 5.
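A quick sanity check on these scaled costs: the ratio 7/5 = 1.4 approximates √2 ≈ 1.414 to within about one percent, so path-length comparisons remain essentially correct while all arithmetic stays in small integers:

```python
import math

cost_hv = 5   # scaled cost of a horizontal/vertical move (represents 1)
cost_di = 7   # scaled cost of a diagonal move (represents sqrt(2))

ratio = cost_di / cost_hv
print(ratio)                      # 1.4
print(abs(ratio - math.sqrt(2)))  # about 0.014, i.e. roughly 1% error
```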

Moves involving obstacle (non-traversable) vertices are not allowed. Each direction is defined by a number from 0 to 7:

- Each **double arrow** represents two edges in opposite directions.
- The **numbers** indicate the direction each edge is oriented in.
- The **edge numeration** enables us to use the `prune()` function for neighbour pruning and forced-neighbour detection, regardless of the movement direction.
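With the eight directions numbered 0–7, direction arithmetic reduces to modular addition; for example, the opposite of any direction is always four steps away, which is the same `(direction + 4) % 8` trick the `Graph` class uses when storing the reverse edge:

```python
def change_dir(direction, amount):
    # Rotates a direction by 'amount' steps within the eight directions.
    return (direction + amount) % 8

print(change_dir(0, 4))   # 4: the opposite of direction 0
print(change_dir(7, 4))   # 3: the opposite of direction 7
print(change_dir(0, -2))  # 6: two steps the other way around from 0
```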

## What Are Its Properties?

As the jump point search algorithm builds on the A* algorithm, it also uses the exact information represented by the successive edges’ weights connecting the jump points and a heuristic function for distance estimation between the goal vertex and the jump points in a graph.

⭐ As the initial costs for all the jump points are set to infinity, the algorithm successively decreases jump points’ costs until they reach their minimum.

**(1) Optimal**: This behavior leads to the property of being *optimal*: minimal costs assigned to jump points enable the jump point search algorithm to always find the shortest path between the starting vertex and any jump point in the graph.

As the shortest paths always start from the starting vertex, the algorithm also belongs to the class of “single-source” algorithms.

**(2) Complete**: Besides being optimal, the algorithm is also *complete*, i.e. it will always take a finite time to find a solution.

**(3) Optimal Efficiency**: The third important property is *optimal efficiency*, reflected in the fact that jump points positioned further from the goal vertex may not be explored at all, as the heuristic function delays their exploration among jump points with equally weighted paths.

The heuristic functions used in the jump point search algorithm also inherit two notable properties from the A* algorithm: *admissibility* and *consistency*.

- Admissibility implies that the heuristic function cost estimation is at most as high as the lowest possible cost from the current point in a path towards the goal vertex.
- The consistent or monotone heuristic function is constrained by a requirement that its cost estimation is always less than or equal to the estimated distance from any adjoining, successor vertex to the goal, plus the cost of reaching that vertex.
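As an illustrative numeric check (using the scaled costs 5 and 7 introduced above), the octile-style estimate that `jps()` computes can be verified to satisfy the consistency condition h(a) ≤ w(a, b) + h(b) for sample moves:

```python
def octile(a, b, cost_hv=5, cost_di=7):
    # Octile distance with the scaled costs: a diagonal move
    # covers one row step and one column step at once.
    dr, dc = abs(a[0] - b[0]), abs(a[1] - b[1])
    return abs(dr - dc) * cost_hv + min(dr, dc) * cost_di

goal = (4, 4)
h_start = octile((0, 0), goal)
print(h_start)  # 28: four diagonal moves of cost 7

# Consistency: h(a) <= w(a, b) + h(b) for each neighbour b of a.
print(h_start <= 7 + octile((1, 1), goal))  # True (diagonal move, cost 7)
print(h_start <= 5 + octile((0, 1), goal))  # True (straight move, cost 5)
```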

## How Is It Implemented?

The implementation of the jump point search algorithm is achieved by the function `jps()` and a modification of the underlying class `Graph`.

The `jps()` function takes three parameters:

- The `graph` parameter is an initialized `Graph` object (see the blog on the *A\* search algorithm*, the section on *graphs*).
- The `start_vertex` parameter takes the starting vertex, which we choose freely (remember, a graph is not a tree, there is no absolute root).
- The `goal_vertex` parameter is the entity we want to find in the graph, enclosed in a vertex.

For a better understanding of the algorithm and its implementation, each step is precisely described in the code below.

There have been some further upgrades to the `Graph` class, in terms of the **obstacle** and **direction** properties, so its entire listing follows:

```python
class Graph:

    def __init__(self, directed=False):
        self._outgoing = {}
        # If the graph is undirected, 'self._outgoing'
        # is the universal storage.
        self._incoming = {} if directed else self._outgoing

    # If the graph is directed, the 'self._incoming'
    # dictionary differs from the 'self._outgoing'.
    def is_directed(self):
        return self._incoming is not self._outgoing

    # The function returns a generator of incoming
    # or outgoing (default) edges of a vertex.
    def adjacent_edges(self, vertex, outgoing=True):
        # References the corresponding outer dictionary
        # (dictionary of dictionaries).
        adj_edges = self._outgoing if outgoing else self._incoming

        # Access each of the edges for this endpoint vertex.
        for edge in adj_edges[vertex].values():
            yield edge

    def add_vertex(self, entity=None, h=None, cost=None):
        # Constructs a new vertex from the entity.
        vertex = self.Vertex(entity, h, cost)
        # The vertex becomes a key in the outer dictionary,
        # but the value is an internal dictionary (as we model
        # both dimensions for each edge: origin and destination).
        # e.g. {vertex_1a:{vertex_b:edge_a_b}, vertex_b:{vertex_c:edge_b_c}}.
        self._outgoing[vertex] = {}
        if self.is_directed():
            self._incoming[vertex] = {}

    def add_edge(self, origin, destination, weight=None, direction=None):
        # Constructs a new edge from the vertices.
        edge = self.Edge(origin, destination, weight, direction)
        # Adds the edge to the dictionary (dictionaries are
        # the same if the graph is undirected). The outer key
        # represents the origin, i.e. the component 'a' of
        # the edge-defining pair (a, b). The inner key stands
        # for the component 'b' of the edge-defining pair (a, b).
        self._outgoing[origin][destination] = edge
        # Even if the graph is undirected, each edge has to
        # be added twice, i.e. once for each of its endpoints.
        edge = self.Edge(origin, destination, weight, (direction + 4) % 8)
        self._incoming[destination][origin] = edge

    def vertices(self):
        return self._outgoing.keys()

    def edges(self):
        # All the edges are collected into a set.
        result = set()
        for inner_dict in self._outgoing.values():
            result.update(inner_dict.values())
        return result

    class Vertex:
        __slots__ = '_entity', '_h', '_cost', '_obstacle', '_direction'

        def __init__(self, entity, h=None, cost=None):
            self.entity = entity
            self.h = h
            self.cost = cost
            self.obstacle = False
            self.direction = None

        # The real-world entity is represented by the Vertex object.
        @property
        def entity(self):
            return self._entity

        @entity.setter
        def entity(self, entity):
            self._entity = entity

        # The real-world entity has a heuristic value of 'h'.
        @property
        def h(self):
            return self._h

        @h.setter
        def h(self, h):
            self._h = h

        # The real-world entity has a cost of 'cost'.
        @property
        def cost(self):
            return self._cost

        @cost.setter
        def cost(self, cost):
            self._cost = cost

        @property
        def obstacle(self):
            return self._obstacle

        @obstacle.setter
        def obstacle(self, obstacle):
            self._obstacle = obstacle

        @property
        def direction(self):
            return self._direction

        @direction.setter
        def direction(self, direction):
            self._direction = direction

        # We have to implement __hash__ to use the object as a dictionary key.
        def __hash__(self):
            return hash(id(self))

        def __lt__(self, other):
            if self.cost is None:
                return False
            elif other.cost is None:
                return True
            else:
                return self.cost < other.cost

    class Edge:
        __slots__ = '_origin', '_destination', '_weight', '_direction'

        def __init__(self, origin, destination, weight=None, direction=None):
            self._origin = origin
            self._destination = destination
            self.weight = weight
            self.direction = direction

        def endpoints(self):
            return self._origin, self._destination

        # Returns the other component of the edge-defining pair (a, b)
        # for a given component a or b, respectively.
        def opposite(self, vertex):
            return self._destination if self._origin is vertex \
                else self._origin

        # Returns the weight of the edge.
        @property
        def weight(self):
            return self._weight

        # Sets the weight of the edge.
        @weight.setter
        def weight(self, weight):
            self._weight = weight

        # Returns the direction of the edge.
        @property
        def direction(self):
            return self._direction

        # Sets the direction of the edge.
        @direction.setter
        def direction(self, direction):
            self._direction = direction

        def __hash__(self):
            return hash((self._origin, self._destination))
```

The most significant differences from the previous version of the `Graph` class are the new **obstacle** and **direction** properties.

With these changes in place, the implementation of the core function `jps()` is:

```python
from graph import Graph
from queue import PriorityQueue
from collections import namedtuple


# Just a dummy object attached with a dummy function...
class Object:
    pass


def prune(graph, vertex, direction=None):
    neighbours = {}
    forced_neighbours_exist = False

    if direction is None:
        direction = vertex.direction

    # Enables us to return both the neighbouring vertices and the
    # indicator of forced neighbours existence.
    pruned = namedtuple('pruned', 'vertices forced')

    # ...right here :-) We use it to ensure availability of the property
    # when the None object gets returned from the 'neighbours' dictionary.
    o = Object()
    o.obstacle = lambda: True

    # Collects all the surrounding vertices in a dictionary and
    # marks them with the search direction.
    for edge in graph.adjacent_edges(vertex):
        neighbours[edge.direction] = edge.opposite(vertex)

    # Applies exclusively to the non-starting vertices.
    if direction is not None:
        # Determines if the movement direction is
        # horizontal/vertical (0) or diagonal (1).
        is_diagonal = direction % 2
        leftmost_dir = change_dir(direction, 2 + is_diagonal)
        rightmost_dir = change_dir(direction, -2 - is_diagonal)

        # Parent is never a neighbour candidate.
        neighbours.pop(change_dir(direction, 4), None)

        # Trivial case - check if some of natural neighbours are obstacles.
        for idx in range(-is_diagonal, is_diagonal + 1):
            if neighbours.get(change_dir(direction, idx), o).obstacle:
                neighbours.pop(change_dir(direction, idx), None)

        # Non-trivial case of potentially forced neighbours (left).
        if neighbours.get(change_dir(leftmost_dir, -1), o).obstacle \
                or not neighbours.get(leftmost_dir, o).obstacle:
            # Discards the forced neighbour candidate.
            neighbours.pop(change_dir(leftmost_dir, -1), None)
        else:
            forced_neighbours_exist = True
        neighbours.pop(leftmost_dir, None)

        # Non-trivial case of potentially forced neighbours (right).
        if neighbours.get(change_dir(rightmost_dir, 1), o).obstacle \
                or not neighbours.get(rightmost_dir, o).obstacle:
            # Discards the forced neighbour candidate.
            neighbours.pop(change_dir(rightmost_dir, 1), None)
        else:
            forced_neighbours_exist = True
        neighbours.pop(rightmost_dir, None)

        # Back vertices are never neighbour candidates.
        neighbours.pop(change_dir(direction, 4 + 1), None)
        neighbours.pop(change_dir(direction, 4 - 1), None)

    return pruned(neighbours, forced_neighbours_exist)


def step(graph, vertex, direction, cost_so_far):
    # Defines a fail-safe result.
    next_vertex = None
    cost = 0

    # Searches among the available edges and follows the right one.
    for edge in graph.adjacent_edges(vertex):
        if edge.direction == direction and not edge.opposite(vertex).obstacle:
            next_vertex = edge.opposite(vertex)
            cost = cost_so_far + edge.weight
            break

    # If the edge is not found, the equivalent of "nothing" is returned.
    return next_vertex, cost


# Changes the direction within the defined eight directions.
def change_dir(direction, amount):
    return (direction + amount) % 8


def jump(graph, vertex, direction, cost_so_far, goal_vertex):
    jump_point, cost = step(graph, vertex, direction, cost_so_far)

    if jump_point is None:
        return None, None

    # If a vertex is the goal vertex:
    if jump_point.entity == goal_vertex:
        return jump_point, cost

    # Checks if forced neighbours exist.
    if prune(graph, jump_point, direction).forced:
        return jump_point, cost

    # Activates if the direction is diagonal.
    if direction % 2:
        for direction_l_r in (change_dir(direction, 1),
                              change_dir(direction, -1)):
            # Tests if the next jump point exists (its cost is
            # irrelevant in the context of the check alone).
            next_jump_point, _ = jump(graph, jump_point, direction_l_r,
                                      0, goal_vertex)
            if next_jump_point is not None:
                return jump_point, cost

    # Proceeds in the same direction.
    jump_point, cost = jump(graph, jump_point, direction, cost, goal_vertex)

    return jump_point, cost


def jps(graph, start_vertex, goal_vertex):
    # Create the priority queue for open vertices.
    jump_points_pq = PriorityQueue()

    start_vertex = vertices[start_vertex]
    start_vertex.cost = 0
    # start_vertex is the only one in the first round of queueing,
    # so it doesn't need a distance to goal estimation.
    start_vertex.h = 0

    # Adds the start vertex to the priority queue.
    print(f'Visiting/queueing vertex {start_vertex.entity}')
    jump_points_pq.put(start_vertex)
    print('Prioritized vertices (v, cost, dir):',
          *((vert.entity, vert.cost, vert.direction)
            for vert in jump_points_pq.queue), end=2 * '\n')

    # The starting vertex is visited first and has no leading edges.
    visited[start_vertex.entity] = None

    # Loops until the priority list gets empty.
    while not jump_points_pq.empty():
        # Gets the previously calculated jump_point with the lowest cost.
        jpoint_prev = jump_points_pq.get()
        print(f'Exploring vertex {jpoint_prev.entity}')

        # If the vertex being explored is a goal vertex, the algorithm ends.
        if jpoint_prev.entity == goal_vertex:
            return jpoint_prev

        # Finds the vertex neighbours (natural and forced).
        neighbours = prune(graph, jpoint_prev)

        for direction in neighbours.vertices:
            jpoint, cost = jump(graph, jpoint_prev, direction,
                                jpoint_prev.cost, goal_vertex)

            if jpoint is None or vertices.get(jpoint, None) in explored:
                continue

            # Calculates the jump point's heuristic value.
            if jpoint.h is None:
                jpoint.h = tuple(map(lambda x, y: abs(x - y),
                                     jpoint.entity, goal_vertex))
                jpoint.h = abs(jpoint.h[0] - jpoint.h[1]) * cost_hv \
                    + min(jpoint.h[0], jpoint.h[1]) * cost_di

            # Prevents reinsertion to the priority queue. The endpoint
            # distance value will be updated.
            if jpoint.entity not in visited:
                print(f'Visiting/queueing vertex {jpoint.entity}')
                visited[jpoint.entity] = jpoint_prev.entity
                jump_points_pq.put(jpoint)

            if jpoint.cost is None \
                    or jpoint.cost - jpoint.h > cost - jpoint_prev.h:
                jpoint.cost = cost - jpoint_prev.h + jpoint.h
                jpoint.direction = direction

                # Forces the priority queue to recalculate in case of an
                # inner vertex update resulting with the highest priority.
                if not jump_points_pq.empty():
                    jump_points_pq.put(jump_points_pq.get())

        print('Prioritized vertices (v, cost, dir):',
              *((vert.entity, vert.cost, vert.direction)
                for vert in jump_points_pq.queue), end=2 * '\n')

        # The vertex is used for update and put aside.
        explored.append(jpoint_prev)


# Initializes an empty graph (object).
g = Graph()

# Initializes the grid dimensions.
m, n = 7, 9

# Defines horizontal/vertical and diagonal costs.
cost_hv = 5
cost_di = 7

# Loads the graph with [m x n] vertices.
for i in range(m):
    for j in range(n):
        g.add_vertex((i, j))

# Constructs the 'vertices' dictionary for a more
# convenient access during the graph construction.
vertices = {k.entity: k for k in g.vertices()}

# Generates the edges.
for i in range(m):
    for j in range(n):
        # Horizontal connections
        if j < n - 1:
            g.add_edge(vertices[i, j], vertices[i, j + 1],
                       weight=cost_hv, direction=0)
        # Vertical connections
        if i < m - 1:
            g.add_edge(vertices[i, j], vertices[i + 1, j],
                       weight=cost_hv, direction=6)
        # Left diagonal connections
        if i < m - 1 and j < n - 1:
            g.add_edge(vertices[i, j], vertices[i + 1, j + 1],
                       weight=cost_di, direction=7)
        # Right diagonal connections
        if i < m - 1 and j > 0:
            g.add_edge(vertices[i, j], vertices[i + 1, j - 1],
                       weight=cost_di, direction=5)

# Initializes the search path, the list of explored vertices,
# and the dictionary of visited vertices.
path = []
explored = []
visited = {}

# Initializes the obstacles.
vertices[(0, 2)].obstacle = True
vertices[(1, 2)].obstacle = True
vertices[(2, 2)].obstacle = True
vertices[(1, 4)].obstacle = True
vertices[(2, 4)].obstacle = True
vertices[(3, 4)].obstacle = True
vertices[(2, 6)].obstacle = True
vertices[(3, 6)].obstacle = True
vertices[(4, 6)].obstacle = True
```

Now that we have prepared everything, we can test `jps()` and see how it works. Here is the part of the code that runs the algorithm, constructs the search path (if there is one), and shows in a step-by-step manner how it proceeds through the graph:

```python
# Starts the search.
result = jps(g, (0, 0), (1, 8))

# If the entity is found...
if result is not None:
    # The search path ends with the found vertex (entity).
    # Each vertex is a container for its real-world entity.
    path_vertex = result.entity

    # Constructs the rest of the search path (if it exists)...
    while path_vertex is not None:
        # The entity is added to the 'path'.
        path.append(path_vertex)
        path_vertex = visited[path_vertex]

    print('Search path found:', end=' ')
    # The path is reversed and starts with the root vertex.
    print(*reversed(path), sep=' -> ')
# Otherwise...
else:
    print('\nEntity is not found')
```

The test run gave us the output:

```
Visiting/queueing vertex (0, 0)
Prioritized vertices (v, cost, dir): ((0, 0), 0, None)

Exploring vertex (0, 0)
Visiting/queueing vertex (1, 1)
Prioritized vertices (v, cost, dir): ((1, 1), 42, 7)

Exploring vertex (1, 1)
Visiting/queueing vertex (2, 1)
Prioritized vertices (v, cost, dir): ((2, 1), 49, 6)

Exploring vertex (2, 1)
Visiting/queueing vertex (3, 2)
Prioritized vertices (v, cost, dir): ((3, 2), 53, 7)

Exploring vertex (3, 2)
Visiting/queueing vertex (2, 3)
Visiting/queueing vertex (4, 3)
Prioritized vertices (v, cost, dir): ((2, 3), 53, 1) ((4, 3), 57, 7)

Exploring vertex (2, 3)
Visiting/queueing vertex (1, 3)
Prioritized vertices (v, cost, dir): ((1, 3), 56, 2) ((4, 3), 57, 7)

Exploring vertex (1, 3)
Visiting/queueing vertex (0, 4)
Prioritized vertices (v, cost, dir): ((4, 3), 57, 7) ((0, 4), 60, 1)

Exploring vertex (4, 3)
Visiting/queueing vertex (4, 4)
Visiting/queueing vertex (5, 4)
Prioritized vertices (v, cost, dir): ((4, 4), 57, 0) ((5, 4), 61, 7) ((0, 4), 60, 1)

Exploring vertex (4, 4)
Visiting/queueing vertex (3, 5)
Prioritized vertices (v, cost, dir): ((3, 5), 57, 1) ((5, 4), 61, 7) ((0, 4), 60, 1)

Exploring vertex (3, 5)
Visiting/queueing vertex (2, 5)
Prioritized vertices (v, cost, dir): ((2, 5), 60, 2) ((5, 4), 61, 7) ((0, 4), 60, 1)

Exploring vertex (2, 5)
Visiting/queueing vertex (1, 5)
Visiting/queueing vertex (1, 6)
Prioritized vertices (v, cost, dir): ((1, 6), 60, 1) ((0, 4), 60, 1) ((5, 4), 61, 7) ((1, 5), 63, 2)

Exploring vertex (1, 6)
Visiting/queueing vertex (1, 8)
Visiting/queueing vertex (2, 7)
Prioritized vertices (v, cost, dir): ((0, 4), 60, 1) ((1, 8), 60, 0) ((5, 4), 61, 7) ((2, 7), 64, 7) ((1, 5), 63, 2)

Exploring vertex (0, 4)
Prioritized vertices (v, cost, dir): ((1, 5), 60, 7) ((1, 8), 60, 0) ((5, 4), 61, 7) ((2, 7), 64, 7)

Exploring vertex (1, 5)
Prioritized vertices (v, cost, dir): ((1, 8), 60, 0) ((2, 7), 64, 7) ((5, 4), 61, 7)

Exploring vertex (1, 8)
Search path found: (0, 0) -> (1, 1) -> (2, 1) -> (3, 2) -> (4, 3) -> (4, 4) -> (3, 5) -> (2, 5) -> (1, 6) -> (1, 8)
```

## Efficiency Analysis

The jump point search algorithm builds on the A* search algorithm, but when applied to a uniform-cost search space, it can be up to an order of magnitude faster.

## Conclusion

In this article, we learned about the jump point search algorithm.

- First, we explained what the jump point search algorithm is.
- Second, we took a look at what are its common purposes and applications.
- Third, we went through an explanation of how the algorithm works.
- Fourth, we examined the algorithm’s main properties.
- Fifth, we went through the implementation of the algorithm, which is based on the upgraded
`Graph`
class (its entire implementation is given above). We also tested the algorithm by calling its main function,
`jps()`
, and analyzed its steps of execution.
- Sixth, we analyzed the algorithm efficiency.

In the end, we concluded that the algorithm is optimally efficient and much faster than the original A* search algorithm, and that if a solution exists, the jump point search algorithm will always find it in its optimal form.

The algorithm always takes finite time to reach the solution and is driven by the edges’ weights, the edges’ directions, the vertices’ heuristic function, and the uniformity of the graph structure.

## Academy Course – Mastering the Top 10 Graph Algorithms

If you want to improve your fundamental computer science skills, there’s nothing more effective than **studying algorithms**.

To help you master the **most important graph algorithms**, we’ve just launched the “Top 10 Algorithms” course at the Finxter Computer Science Academy. This great course from Finxter Star Creator Matija ⭐ teaches you the most important graph algorithms such as BFS, DFS, A*, and Dijkstra.

Understanding these algorithms will not only make you a better coder, but it’ll also lay a strong foundation on which you can build your whole career as a computer scientist.
