NumPy Broadcasting and Vectorization

Unidata Python Workshop

Questions

  1. How can we work with arrays of differing shapes without needing to manually loop or copy data?
  2. How can we reframe operations on data to avoid looping in Python?

Objectives

  1. Use broadcasting to implicitly loop over data
  2. Vectorize calculations to avoid explicit loops

1. Using broadcasting to implicitly loop over data

Broadcasting is a useful NumPy tool that allows us to perform operations between arrays with different shapes, provided that they are compatible with each other in certain ways. To start, we can create an array below and add 5 to it:

In [1]:
import numpy as np

a = np.array([10, 20, 30, 40])
a + 5
Out[1]:
array([15, 25, 35, 45])

This works even though 5 is not an array; it behaves as we would expect, adding 5 to each of the elements in a. This also works if the 5 is contained in an array:

In [2]:
b = np.array([5])
a + b
Out[2]:
array([15, 25, 35, 45])

This takes the single element in b and adds it to each of the elements in a. This won't work for just any b, though; for instance, the following:

b = np.array([5, 6, 7])
a + b

won't work. It does work if a and b are the same shape:

In [3]:
b = np.array([5, 5, 10, 10])
a + b
Out[3]:
array([15, 25, 40, 50])

What if what we really want is the sum of every pairing of elements in a and b (an outer sum)? Without broadcasting, we could accomplish this by looping:

In [4]:
b = np.array([1, 2, 3, 4, 5])
In [5]:
result = np.empty((5, 4), dtype=np.int32)
for row, valb in enumerate(b):
    for col, vala in enumerate(a):
        result[row, col] = vala + valb
result
Out[5]:
array([[11, 21, 31, 41],
       [12, 22, 32, 42],
       [13, 23, 33, 43],
       [14, 24, 34, 44],
       [15, 25, 35, 45]], dtype=int32)

We can also do this by manually repeating the arrays to the proper shape for the result, using np.tile. This avoids the need to manually loop:

In [6]:
aa = np.tile(a, (5, 1))
aa
Out[6]:
array([[10, 20, 30, 40],
       [10, 20, 30, 40],
       [10, 20, 30, 40],
       [10, 20, 30, 40],
       [10, 20, 30, 40]])
In [7]:
# Turn b into a column array, then tile it
bb = np.tile(b.reshape(5, 1), (1, 4))
bb
Out[7]:
array([[1, 1, 1, 1],
       [2, 2, 2, 2],
       [3, 3, 3, 3],
       [4, 4, 4, 4],
       [5, 5, 5, 5]])
In [8]:
aa + bb
Out[8]:
array([[11, 21, 31, 41],
       [12, 22, 32, 42],
       [13, 23, 33, 43],
       [14, 24, 34, 44],
       [15, 25, 35, 45]])

We can also do this using broadcasting, where NumPy implicitly repeats the array without using additional memory. With broadcasting, NumPy takes care of the repetition for you, provided the dimensions are "compatible". This works as follows:

  1. Check the number of dimensions of the arrays. If they are different, prepend size-one dimensions to the shape of the array with fewer dimensions.
  2. Check that each pair of dimensions is compatible: they are either equal in size, or one of them is 1.
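These rules can be checked without creating any arrays; a quick sketch using np.broadcast_shapes (available in NumPy 1.20 and later):

```python
import numpy as np

# broadcast_shapes applies the compatibility rules above directly to shapes
print(np.broadcast_shapes((5, 1), (4,)))  # (4,) is treated as (1, 4), giving (5, 4)

# Incompatible dimensions raise a ValueError
try:
    np.broadcast_shapes((5,), (4,))
except ValueError as err:
    print("incompatible:", err)
```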
In [9]:
a.shape
Out[9]:
(4,)
In [10]:
b.shape
Out[10]:
(5,)

Right now, they have the same number of dimensions, 1, but that dimension is incompatible. We can solve this by appending a dimension using np.newaxis when indexing:

In [11]:
bb = b[:, np.newaxis]
bb.shape
Out[11]:
(5, 1)
In [12]:
a + bb
Out[12]:
array([[11, 21, 31, 41],
       [12, 22, 32, 42],
       [13, 23, 33, 43],
       [14, 24, 34, 44],
       [15, 25, 35, 45]])

This can be written more directly in one line:

In [13]:
a + b[:, np.newaxis]
Out[13]:
array([[11, 21, 31, 41],
       [12, 22, 32, 42],
       [13, 23, 33, 43],
       [14, 24, 34, 44],
       [15, 25, 35, 45]])

This also works between arrays of different dimensionality, such as 2D and 1D:

In [14]:
x = np.array([1, 2])
y = np.array([3, 4, 5])
z = np.array([6, 7, 8, 9])
In [15]:
d_2d = x[:, np.newaxis]**2 + y**2
In [16]:
d_2d.shape
Out[16]:
(2, 3)
In [17]:
d_3d = d_2d[..., np.newaxis] + z**2
In [18]:
d_3d.shape
Out[18]:
(2, 3, 4)

Or in one line:

In [19]:
h = x[:, np.newaxis, np.newaxis]**2 + y[np.newaxis, :, np.newaxis]**2 + z**2

We can see this one-line result has the same shape and same values as the other multi-step calculation.

In [20]:
h.shape
Out[20]:
(2, 3, 4)
In [21]:
np.all(h == d_3d)
Out[21]:
True

Broadcasting is often useful when you want to do calculations with coordinate values, which are often given as 1D arrays corresponding to positions along a particular array dimension. For example, we can take range and azimuth values for radar data (1D separable polar coordinates) and convert them to x,y pairs relative to the radar location.
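As a sketch of that radar example (with made-up range and azimuth values, and azimuth measured clockwise from north), broadcasting a column of ranges against a row of azimuths produces the full 2D grid of x,y locations:

```python
import numpy as np

rng = np.linspace(2.5, 100., 40)            # range gates (km) -- example values
az = np.deg2rad(np.linspace(0., 355., 72))  # azimuth angles, converted to radians

# rng varies down the rows, az across the columns, via broadcasting
x = rng[:, np.newaxis] * np.sin(az)
y = rng[:, np.newaxis] * np.cos(az)
print(x.shape, y.shape)  # (40, 72) (40, 72)
```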

EXERCISE: Given the 3D temperature field and 1-D pressure coordinates below, calculate: $T * exp(P / 1000)$. You will need to use broadcasting to make the arrays compatible.
In [22]:
# Starting data
pressure = np.array([1000, 850, 500, 300])
temps = np.linspace(20, 30, 24).reshape(4, 3, 2)
print(temps.shape)
#
# YOUR CALCULATION HERE
#
(4, 3, 2)
SOLUTION
In [23]:
# %load solutions/broadcasting.py

# Cell content replaced by load magic replacement.
# Starting data
pressure = np.array([1000, 850, 500, 300])
temps = np.linspace(20, 30, 24).reshape(4, 3, 2)

temps * np.exp(pressure[:, np.newaxis, np.newaxis] / 1000)
Out[23]:
array([[[54.36563657, 55.54749823],
        [56.7293599 , 57.91122156],
        [59.09308323, 60.27494489]],

       [[52.89636361, 53.91360137],
        [54.93083913, 55.94807689],
        [56.96531466, 57.98255242]],

       [[41.57644944, 42.29328477],
        [43.01012011, 43.72695544],
        [44.44379078, 45.16062611]],

       [[37.56128856, 38.14818369],
        [38.73507883, 39.32197396],
        [39.90886909, 40.49576423]]])



2. Vectorize calculations to avoid explicit loops

When working with arrays of data, looping over the individual array elements is a fact of life. However, for improved runtime performance, it is important to avoid performing these loops in Python as much as possible and to let NumPy handle the looping for you. Avoiding these loops frequently, though not always, results in shorter and clearer code as well.

Look ahead/behind

One common pattern for vectorizing is converting loops that work over the current point as well as the previous and/or next point. This comes up when doing finite-difference calculations (e.g. approximating derivatives).

In [24]:
a = np.linspace(0, 20, 6)
a
Out[24]:
array([ 0.,  4.,  8., 12., 16., 20.])

We can calculate the forward difference for this array with a manual loop as:

In [25]:
d = np.zeros(a.size - 1)
for i in range(len(a) - 1):
    d[i] = a[i + 1] - a[i]
d
Out[25]:
array([4., 4., 4., 4., 4.])

It would be nice to express this calculation without an explicit loop, if possible. To see how to go about this, let's consider the values involved in calculating d[i]: a[i+1] and a[i]. The values over the loop iterations are:

  i   a[i+1]   a[i]
  0      4       0
  1      8       4
  2     12       8
  3     16      12
  4     20      16

We can express the series of values for a[i+1] then as:

In [26]:
a[1:]
Out[26]:
array([ 4.,  8., 12., 16., 20.])

and a[i] as:

In [27]:
a[:-1]
Out[27]:
array([ 0.,  4.,  8., 12., 16.])

This means that we can express the forward difference as:

In [28]:
a[1:] - a[:-1]
Out[28]:
array([4., 4., 4., 4., 4.])

It should be noted that using slices in this way returns only a view on the original array. This means not only that you can use the slices to modify the original data (even accidentally), but also that this is a quick operation that does not involve a copy and does not bloat memory usage.
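For this common pattern, NumPy also provides np.diff, which computes the same forward difference directly:

```python
import numpy as np

a = np.linspace(0, 20, 6)
# np.diff(a) is equivalent to a[1:] - a[:-1]
print(np.diff(a))  # [4. 4. 4. 4. 4.]
```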

EXERCISE: 2nd Derivative. A finite difference estimate of the 2nd derivative is given by: $$f''(x) = 2 f_i - f_{i+1} - f_{i-1}$$ (we're ignoring $\Delta x$ here)

  1. Write vectorized code to calculate this finite difference for a (using slices).
  2. What values should we expect to get for the 2nd derivative?
In [29]:
# YOUR CODE GOES HERE
SOLUTION
In [30]:
# %load solutions/vectorized_diff.py

# Cell content replaced by load magic replacement.
2 * a[1:-1] - a[:-2] - a[2:]
Out[30]:
array([0., 0., 0., 0.])

Blocking

Another application where vectorization comes into play to make operations more efficient is when operating on blocks of data. Let's start by creating some temperature data (rounding to make it easier to see/recognize the values).

In [31]:
temps = np.round(20 + np.random.randn(10) * 5, 1)
temps
Out[31]:
array([20.1, 22.9, 21.6, 17.8, 15.7, 21.2, 20.4, 26.7, 18.6, 15.6])

Let's start by writing a loop to take a 3-point running mean of the data. We'll do this by iterating over all the points in the array and averaging the 3 points centered on each one. We'll simplify the problem by not dealing with the cases at the edges of the array.

In [32]:
avg = np.zeros_like(temps)
# We're just ignoring the edge effects here
for i in range(1, len(temps) - 1):
    sub = temps[i - 1:i + 2]
    avg[i] = sub.mean()
In [33]:
avg
Out[33]:
array([ 0.        , 21.53333333, 20.76666667, 18.36666667, 18.23333333,
       19.1       , 22.76666667, 21.9       , 20.3       ,  0.        ])

As with the case of doing finite differences, we can express this using slices of the original array:

In [34]:
# i - 1            i          i + 1
(temps[:-2] + temps[1:-1] + temps[2:]) / 3
Out[34]:
array([21.53333333, 20.76666667, 18.36666667, 18.23333333, 19.1       ,
       22.76666667, 21.9       , 20.3       ])

Another option is to solve this not with slicing but with a powerful NumPy tool: as_strided. This tool can result in some odd behavior, so take care when using it; the tradeoff is that it can be used to perform some powerful operations. What we're doing here is altering how NumPy interprets the values in the memory that underpins the array. So for this array:

In [35]:
temps
Out[35]:
array([20.1, 22.9, 21.6, 17.8, 15.7, 21.2, 20.4, 26.7, 18.6, 15.6])

we can create a view of the array with a new, bigger shape, with rows made up of overlapping values. We do this by specifying a new shape of 8x3, one row for each of the length-3 blocks we can fit in the original 1D array of data. We then use the strides argument to control how NumPy walks between items in each dimension. The last item in the strides tuple behaves as normal: it says that the number of bytes to walk between items is just the size of an item. (Increasing this would skip items.) The first item says that when we advance to a new row, we only advance the size of a single item rather than a full row. This is what gives us overlapping rows.

In [36]:
block_size = 3
new_shape = (len(temps) - block_size + 1, block_size)
bytes_per_item = temps.dtype.itemsize
temps_strided = np.lib.stride_tricks.as_strided(temps,
                                                shape=new_shape,
                                                strides=(bytes_per_item, bytes_per_item))
temps_strided
Out[36]:
array([[20.1, 22.9, 21.6],
       [22.9, 21.6, 17.8],
       [21.6, 17.8, 15.7],
       [17.8, 15.7, 21.2],
       [15.7, 21.2, 20.4],
       [21.2, 20.4, 26.7],
       [20.4, 26.7, 18.6],
       [26.7, 18.6, 15.6]])

Now that we have this view of the array with the rows representing overlapping blocks, we can operate across the rows with mean and the axis=-1 argument to get our running average:

In [37]:
temps_strided.mean(axis=-1)
Out[37]:
array([21.53333333, 20.76666667, 18.36666667, 18.23333333, 19.1       ,
       22.76666667, 21.9       , 20.3       ])
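Newer NumPy (1.20 and later) wraps this same overlapping-block idea in np.lib.stride_tricks.sliding_window_view, which is read-only by default and harder to misuse than raw as_strided:

```python
import numpy as np

temps = np.array([20.1, 22.9, 21.6, 17.8, 15.7, 21.2, 20.4, 26.7, 18.6, 15.6])

# Builds the same (8, 3) view of overlapping length-3 windows
windows = np.lib.stride_tricks.sliding_window_view(temps, 3)
print(windows.shape)          # (8, 3)
print(windows.mean(axis=-1))  # matches the slice-based running mean
```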

It should be noted that there are no copies going on here, so if we change a value at a single indexed location, the change actually shows up in multiple locations:

In [38]:
temps_strided[0, 2] = 2000
temps_strided
Out[38]:
array([[  20.1,   22.9, 2000. ],
       [  22.9, 2000. ,   17.8],
       [2000. ,   17.8,   15.7],
       [  17.8,   15.7,   21.2],
       [  15.7,   21.2,   20.4],
       [  21.2,   20.4,   26.7],
       [  20.4,   26.7,   18.6],
       [  26.7,   18.6,   15.6]])

Finding the location of a value along an axis

Another operation that crops up when slicing and dicing data is identifying a set of indices, along a particular axis, within a larger multidimensional array. For instance, say we have a 3D array of temperatures and want to identify the location of the $-10^\circ C$ isotherm within each column:

In [39]:
pressure = np.linspace(1000, 100, 25)
temps = np.random.randn(25, 30, 40) * 3 + np.linspace(25, -100, 25).reshape(-1, 1, 1)

NumPy has the function argmin(), which returns the index of the minimum value along an axis. We can use this to find the index of the minimum absolute difference between the value and -10:

In [40]:
# Using axis=0 to tell it to operate along the pressure dimension
inds = np.argmin(np.abs(temps - -10), axis=0)
inds
Out[40]:
array([[7, 7, 7, ..., 7, 6, 6],
       [7, 7, 6, ..., 7, 7, 7],
       [7, 7, 7, ..., 7, 7, 7],
       ...,
       [7, 5, 7, ..., 7, 6, 7],
       [6, 8, 8, ..., 7, 6, 7],
       [7, 7, 6, ..., 7, 6, 7]])
In [41]:
inds.shape
Out[41]:
(30, 40)

Great! We have an array representing the index of the point closest to $-10^\circ C$ in each column of data. We could use this to index into our pressure coordinates to find the pressure level for each column:

In [42]:
pressure[inds]
Out[42]:
array([[737.5, 737.5, 737.5, ..., 737.5, 775. , 775. ],
       [737.5, 737.5, 775. , ..., 737.5, 737.5, 737.5],
       [737.5, 737.5, 737.5, ..., 737.5, 737.5, 737.5],
       ...,
       [737.5, 812.5, 737.5, ..., 737.5, 775. , 737.5],
       [775. , 700. , 700. , ..., 737.5, 775. , 737.5],
       [737.5, 737.5, 775. , ..., 737.5, 775. , 737.5]])

How about using that to find the actual temperature value that was closest?

In [43]:
temps[inds, :, :].shape
Out[43]:
(30, 40, 30, 40)

Unfortunately, this replaced the pressure dimension (size 25) with the shape of our index array (30 x 40), giving us a 30 x 40 x 30 x 40 array (imagine what would have happened with real data!). One solution here would be to loop:

In [44]:
output = np.empty(inds.shape, dtype=temps.dtype)
for (i, j), val in np.ndenumerate(inds):
    output[i, j] = temps[val, i, j]
output
Out[44]:
array([[-14.74393253, -11.27464956,  -9.28431445, ...,  -9.91771046,
         -9.91589928, -10.51022177],
       [-14.10386876,  -9.86277966,  -9.27660136, ..., -10.31874454,
        -10.68160248,  -9.18095511],
       [-10.10994669, -10.51009102,  -9.12803363, ..., -10.49617632,
        -12.6296744 , -10.08161021],
       ...,
       [-10.64039796,  -5.60686186, -12.65882205, ..., -13.51045102,
         -8.94736546,  -8.70406772],
       [ -8.04172443, -11.34267057, -11.3133564 , ...,  -9.41633544,
         -7.05314116, -10.1051816 ],
       [ -9.79875788,  -8.54531154, -10.040366  , ..., -12.12941304,
         -5.81029834,  -4.75388255]])

Of course, what we really want to do is avoid the explicit loop. Let's temporarily simplify the problem to a single dimension. If we have a 1D array, we can pass a 1D array of indices (a full range), and get back the same as the original data array:

In [45]:
pressure[np.arange(pressure.size)]
Out[45]:
array([1000. ,  962.5,  925. ,  887.5,  850. ,  812.5,  775. ,  737.5,
        700. ,  662.5,  625. ,  587.5,  550. ,  512.5,  475. ,  437.5,
        400. ,  362.5,  325. ,  287.5,  250. ,  212.5,  175. ,  137.5,
        100. ])
In [46]:
np.all(pressure[np.arange(pressure.size)] == pressure)
Out[46]:
True

We can use this to select all the indices on the other dimensions of our temperature array. We will also need to use the magic of broadcasting to combine the arrays of indices across dimensions. Now for the vectorized solution:

In [47]:
y_inds = np.arange(temps.shape[1])[:, np.newaxis]
x_inds = np.arange(temps.shape[2])
temps[inds, y_inds, x_inds]
Out[47]:
array([[-14.74393253, -11.27464956,  -9.28431445, ...,  -9.91771046,
         -9.91589928, -10.51022177],
       [-14.10386876,  -9.86277966,  -9.27660136, ..., -10.31874454,
        -10.68160248,  -9.18095511],
       [-10.10994669, -10.51009102,  -9.12803363, ..., -10.49617632,
        -12.6296744 , -10.08161021],
       ...,
       [-10.64039796,  -5.60686186, -12.65882205, ..., -13.51045102,
         -8.94736546,  -8.70406772],
       [ -8.04172443, -11.34267057, -11.3133564 , ...,  -9.41633544,
         -7.05314116, -10.1051816 ],
       [ -9.79875788,  -8.54531154, -10.040366  , ..., -12.12941304,
         -5.81029834,  -4.75388255]])

We can verify that this vectorized indexing gives the same result as the explicit loop:

In [48]:
np.all(output == temps[inds, y_inds, x_inds])
Out[48]:
True
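An alternative worth knowing for this pattern is np.take_along_axis, which builds the companion index arrays for you; a sketch, with random data standing in for the temperatures above:

```python
import numpy as np

temps = np.random.randn(25, 30, 40) * 3 + np.linspace(25, -100, 25).reshape(-1, 1, 1)
inds = np.argmin(np.abs(temps - -10), axis=0)  # shape (30, 40)

# take_along_axis needs the index array to have the same ndim as temps,
# so we add a size-one leading dimension and squeeze it back out afterward
vals = np.take_along_axis(temps, inds[np.newaxis, ...], axis=0).squeeze(axis=0)
print(vals.shape)  # (30, 40)
```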
