In Python, the `numpy.gradient()` function **approximates the gradient of an N-dimensional array**. It uses second-order accurate central differences at the interior points and either first- or second-order accurate one-sided differences at the boundaries. The returned gradient hence has the same shape as the input array.
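As a quick first taste, here is a minimal sketch on a tiny hand-picked 1-D array (the sample values are my own illustrative choice):

```python
import numpy as np

# A tiny 1-D sample of a scalar function
f = np.array([1.0, 2.0, 4.0])

# One-sided differences at the two boundaries,
# a central difference at the interior point
grad = np.gradient(f)

# boundary: (2-1)/1 = 1, interior: (4-1)/2 = 1.5, boundary: (4-2)/1 = 2
print(grad)
```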

Here is the argument table of `numpy.gradient()`:

| Argument | Accepts | Description |
| --- | --- | --- |
| `f` | `array_like` | An N-dimensional input array containing samples of a scalar function. |
| `varargs` | list of scalar or array, optional | Spacing between `f` values. Default: unitary spacing for all dimensions. |
| `edge_order` | `{1, 2}`, optional | The gradient is calculated using N-th order accurate differences at the boundaries. Default: 1. |
| `axis` | `None` or `int` or tuple of ints, optional | The gradient is calculated only along the given axis or axes. The default (`axis=None`) is to calculate the gradient for all axes of the input array. `axis` may be negative, in which case it counts from the last to the first axis. |

If that sounds great to you, please continue reading, and you will fully understand the `numpy.gradient()` function through Python NumPy code snippets and vivid visualization.

- First, I will introduce its underlying concepts, the `numpy.gradient()` syntax, and its arguments.
- Then, you will learn some basic examples of this function.
- Finally, I will address the two top questions about `numpy.gradient()`: `np.gradient edge_order` and `np.gradient axis`.

You can find all the code in this tutorial here.

Besides, I explained the difference between `numpy.diff()` and `numpy.gradient()` in another exciting guide to the `numpy.diff()` method here.

## Underlying Concepts: Gradient and Finite Difference

For this part, if you are familiar with gradient and finite difference, feel free to skip it and head over to its syntax and arguments!

**Definition (Gradient)**: In vector calculus, the gradient of a scalar-valued differentiable function *f* of several variables is the vector field whose value at a point *p* is the vector whose components are the partial derivatives of *f* at *p*. (Wikipedia)

For example, the blue arrows in the following graph depict the gradient of the function *f(x, y) = −(cos²x + cos²y)²* as a projected vector field on the bottom plane.

Intuitively, you can think of the gradient as an indicator of the direction of fastest increase at a point (and its negation as the direction of fastest decrease). Computationally, the gradient is a vector containing all the partial derivatives at a point.
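To make that concrete, here is a small sketch (the grid and the function f(x, y) = x·y are my own illustrative choices): `numpy.gradient()` returns one array of partial derivatives per axis, and for this particular f the finite differences recover ∂f/∂x = y and ∂f/∂y = x exactly, because f is linear in each variable separately.

```python
import numpy as np

# Sample f(x, y) = x * y on a small unit-spaced grid
x = np.arange(4.0)   # [0, 1, 2, 3]
y = np.arange(5.0)   # [0, 1, 2, 3, 4]
f = np.outer(x, y)   # f[i, j] = x[i] * y[j]

# One array of partial derivatives per axis
df_dx, df_dy = np.gradient(f)

# For this f the differences are exact:
# df_dx[i, j] == y[j] and df_dy[i, j] == x[i]
print(np.allclose(df_dx, np.broadcast_to(y, f.shape)))           # True
print(np.allclose(df_dy, np.broadcast_to(x[:, None], f.shape)))  # True
```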

Since the `numpy.gradient()` function uses finite differences to approximate the gradient under the hood, we also need to understand some basics of finite differences.

**Definition (Finite Difference)**: A finite difference is a mathematical expression of the form *f(x + b) − f(x + a)*. If a finite difference is divided by *b − a*, one gets a difference quotient. (Wikipedia)

Don't panic! Here are my hand-written explanations and deductions for the first- and second-order forward, backward, and central differences. These formulas are used by `numpy.gradient()` under the hood.
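To see why the central difference is preferred at the interior points, here is a small numerical sketch (the test function sin(x), the point, and the step size are my own illustrative choices) comparing the three difference quotients against the true derivative cos(x):

```python
import numpy as np

x0, h = 1.0, 0.1
f = np.sin
true_derivative = np.cos(x0)

forward  = (f(x0 + h) - f(x0)) / h          # first-order accurate, O(h) error
backward = (f(x0) - f(x0 - h)) / h          # first-order accurate, O(h) error
central  = (f(x0 + h) - f(x0 - h)) / (2*h)  # second-order accurate, O(h^2) error

# The central difference error is roughly h times smaller
print(abs(forward - true_derivative))
print(abs(backward - true_derivative))
print(abs(central - true_derivative))
```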

## Syntax and Arguments

Here is the **syntax** of `numpy.gradient()`:

```python
# Syntax
numpy.gradient(f[, *varargs[, axis=None[, edge_order=1]]])
```

The **argument table** of `numpy.gradient()` is shown at the beginning of this article.

Later, I will delve more into the arguments `edge_order` and `axis`.

As for the argument `varargs`, you can leave it aside for now and come back to it when you have non-unitary spacing dimensions.
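If you do want a quick preview of `varargs`, here is a minimal sketch (the sample values are my own): passing a scalar spacing simply divides each difference by that spacing instead of by 1.

```python
import numpy as np

one_dim = np.array([1, 2, 4, 8, 16], dtype=float)

# The samples are 2.0 apart instead of the default unitary spacing
gradient = np.gradient(one_dim, 2.0)

# Every unit-spacing difference is divided by 2.0,
# e.g. the first boundary value becomes (2. - 1.)/2. = 0.5
print(gradient)
```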

The **output** of the `numpy.gradient()` function is a list of `ndarray`s (or a single `ndarray` if there is only one dimension) corresponding to the derivatives of the input `f` with respect to each dimension. Each derivative has the same shape as the input `f`.
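A quick sketch of that output structure (the array values are my own illustrative choices):

```python
import numpy as np

# 1-D input -> a single ndarray
one_dim = np.array([1.0, 2.0, 4.0])
print(type(np.gradient(one_dim)))  # <class 'numpy.ndarray'>

# 2-D input -> one ndarray per dimension, each with the input's shape
two_dim = np.array([[1.0, 2.0, 4.0],
                    [2.0, 5.0, 8.0]])
grads = np.gradient(two_dim)
print(len(grads))                      # 2
print(grads[0].shape, grads[1].shape)  # (2, 3) (2, 3)
```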

## Basic Examples

Seen pictorially, here is an illustration of the gradient computation in a one-dimensional array.

Here is a one-dimensional array code example:

```python
import numpy as np

one_dim = np.array([1, 2, 4, 8, 16], dtype=float)
gradient = np.gradient(one_dim)
print(gradient)

'''
# * Underlying Gradient Calculation:
# Default edge_order = 1
gradient[0] = (one_dim[1] - one_dim[0])/1 = (2. - 1.)/1 = 1.

# Interior points
gradient[1] = (one_dim[2] - one_dim[0])/2 = (4. - 1.)/2 = 1.5
gradient[2] = (one_dim[3] - one_dim[1])/2 = (8. - 2.)/2 = 3.
gradient[3] = (one_dim[4] - one_dim[2])/2 = (16. - 4.)/2 = 6.

# Default edge_order = 1
gradient[4] = (one_dim[4] - one_dim[3])/1 = (16. - 8.)/1 = 8.
'''
```

Output: `[1.  1.5 3.  6.  8. ]`

## np.gradient() edge_order

In our basic example, we did not pass any optional arguments to the `numpy.gradient()` function.

In this section, I will show you how to deploy the argument `edge_order` and set different orders of difference for the boundary elements.

Just to refresh your memory, the argument table of `numpy.gradient()` can be found at the beginning of this article.

We can set the argument `edge_order` to 1 or 2. Its default value is 1.

First, our previous basic example uses its default value, 1.

```python
import numpy as np

# edge_order = 1
one_dim = np.array([1, 2, 4, 8, 16], dtype=float)
gradient = np.gradient(one_dim, edge_order=1)
print(gradient)

'''
# * Underlying Gradient Calculation:
# Default edge_order = 1
gradient[0] = (one_dim[1] - one_dim[0])/1 = (2. - 1.)/1 = 1.

# Interior points
gradient[1] = (one_dim[2] - one_dim[0])/2 = (4. - 1.)/2 = 1.5
gradient[2] = (one_dim[3] - one_dim[1])/2 = (8. - 2.)/2 = 3.
gradient[3] = (one_dim[4] - one_dim[2])/2 = (16. - 4.)/2 = 6.

# Default edge_order = 1
gradient[4] = (one_dim[4] - one_dim[3])/1 = (16. - 8.)/1 = 8.
'''
```

Output: `[1.  1.5 3.  6.  8. ]`

Second, we can set `edge_order` to 2 and calculate second-order differences for the boundary elements.

```python
import numpy as np

# edge_order = 2
one_dim = np.array([1, 2, 4, 8, 16], dtype=float)
gradient = np.gradient(one_dim, edge_order=2)
print(f'edge_order = 2 -> {gradient}')

'''
# * Underlying Gradient Calculation:
# edge_order = 2
gradient[0] = (4*one_dim[0+1] - one_dim[0+2*1] - 3*one_dim[0])/(2*1)
            = (4*2. - 4. - 3*1.)/2 = 0.5

# Interior points
gradient[1] = (one_dim[2] - one_dim[0])/2 = (4. - 1.)/2 = 1.5
gradient[2] = (one_dim[3] - one_dim[1])/2 = (8. - 2.)/2 = 3.
gradient[3] = (one_dim[4] - one_dim[2])/2 = (16. - 4.)/2 = 6.

# edge_order = 2
gradient[4] = (3*one_dim[4] + one_dim[4-2*1] - 4*one_dim[4-1])/(2*1)
            = (3*16. + 4. - 4*8.)/2 = 10.
'''
```

Output: `edge_order = 2 -> [ 0.5  1.5  3.   6.  10. ]`

For the rationale behind the second-order forward and backward difference formulas, please take a look at my previous hand-written deduction. I understand they do look quite strange, but there is a logic behind them.
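One way to convince yourself of the payoff of those second-order boundary formulas is a sketch like the following (the quadratic test function is my own choice): for f(x) = x², the exact derivative 2x is recovered everywhere with `edge_order=2`, while `edge_order=1` is off at the two boundaries.

```python
import numpy as np

x = np.arange(5.0)  # [0, 1, 2, 3, 4], unit spacing
f = x**2            # exact derivative: 2*x -> [0, 2, 4, 6, 8]

grad1 = np.gradient(f, edge_order=1)
grad2 = np.gradient(f, edge_order=2)

print(grad1)  # boundary values are off by 1
print(grad2)  # exact everywhere
```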

## np.gradient() axis

In this part, I will show you how to deploy the argument `axis` and calculate (actually, approximate) the gradient along the dimension(s) you want, using a 2-D array example case.

Just to refresh your memory, the argument table of `numpy.gradient()` can be found at the beginning of this article.

When we have an input with more than one dimension, we can set the `axis` argument to `None`, an `int`, or a tuple of ints to approximate the gradient along the corresponding axis or axes.

Let's take a two-dimensional array as an example case.

First, let's see what the default value, `None`, will do.

```python
import numpy as np

# axis = None (Default)
two_dim = np.array([[1, 2, 4, 8, 16],
                    [2, 5, 8, 10, 20]], dtype=float)

gradient = np.gradient(two_dim, axis=None)
# Same as:
# gradient = np.gradient(two_dim)

print(f'axis = None (Default): \n\n{gradient}')
print('\n', type(gradient))
```

Output:

As we can see, if `axis = None`, the `numpy.gradient()` function outputs the gradient for all axes of the input array.

In this case, we can also pass an integer to the `axis` argument.

```python
import numpy as np

# axis = int
two_dim = np.array([[1, 2, 4, 8, 16],
                    [2, 5, 8, 10, 20]], dtype=float)

row_gradient = np.gradient(two_dim, axis=0)
col_gradient = np.gradient(two_dim, axis=1)
# Same as:
# row_gradient = np.gradient(two_dim, axis=-2)
# col_gradient = np.gradient(two_dim, axis=-1)

print(f'axis = 0 or -2: \n\n{row_gradient}')
print('-'*85)
print(f'axis = 1 or -1: \n\n{col_gradient}')
```

Output:

Last, we can try passing a tuple of ints to the `axis` argument.

```python
import numpy as np

# axis = a tuple of ints
two_dim = np.array([[1, 2, 4, 8, 16],
                    [2, 5, 8, 10, 20]], dtype=float)

gradient = np.gradient(two_dim, axis=[0, 1])
print(f'axis = [0,1]: \n\n{gradient}')
```

Output:

## Summary

That's it for our `np.gradient()` article.

We learned about its underlying concepts, syntax, arguments, and basic examples.

We also worked through the top two questions about the `np.gradient()` function: `np.gradient edge_order` and `np.gradient axis`.

Hope you enjoyed all this, and happy coding!

Anqi Wu is an aspiring Data Scientist and enthusiastic Python Freelancer. She is an incoming student for a Master’s program in Analytics and builds her Python Freelancer profile on Upwork.

Anqi is passionate about machine learning, statistics, data mining, programming, and many other data science related fields. She has proven her expertise during her undergraduate years, including multiple winning and top placements in mathematical modeling contests. She loves supporting and enabling data-driven decision-making, developing data services, and teaching.

She is skilled at programming languages like Python, R, and SQL, actively delving into the world of Machine Learning and Deep Learning and traveling along her data science journey with joy. Data sensitivity and business acumen are her advantages as she marches toward a career as a data scientist.

Here is a link to the author's website: https://www.anqiwu.one/. She uploads data science blogs weekly to document her data science learning and practice, along with some of the best learning resources and inspirational thoughts.

I hope you enjoy this article! Cheers!