Original author(s): Travis Oliphant
Developer(s): Community project
Initial release: As Numeric, 1995; as NumPy, 2006
Stable release: 1.22.0 / 31 December 2021^{[1]}
Written in: Python, C
Operating system: Cross-platform
Type: Numerical analysis
License: BSD^{[2]}
Website: numpy
NumPy (pronounced /ˈnʌmpaɪ/ (NUM-py) or sometimes /ˈnʌmpi/^{[3]}^{[4]} (NUM-pee)) is a library for the Python programming language, adding support for large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions to operate on these arrays.^{[5]} The ancestor of NumPy, Numeric, was originally created by Jim Hugunin with contributions from several other developers. In 2005, Travis Oliphant created NumPy by incorporating features of the competing Numarray into Numeric, with extensive modifications. NumPy is open-source software and has many contributors. NumPy is a NumFOCUS fiscally sponsored project.^{[6]}
The Python programming language was not originally designed for numerical computing, but attracted the attention of the scientific and engineering community early on. In 1995, the special interest group (SIG) matrix-sig was founded with the aim of defining an array computing package; among its members was Python designer and maintainer Guido van Rossum, who extended Python's syntax (in particular the indexing syntax^{[7]}) to make array computing easier.^{[8]}
An implementation of a matrix package was completed by Jim Fulton, then generalized by Jim Hugunin and called Numeric^{[8]} (also variously known as the "Numerical Python extensions" or "NumPy").^{[9]}^{[10]} Hugunin, a graduate student at the Massachusetts Institute of Technology (MIT),^{[10]}^{: 10 } joined the Corporation for National Research Initiatives (CNRI) in 1997 to work on JPython,^{[8]} leaving Paul Dubois of Lawrence Livermore National Laboratory (LLNL) to take over as maintainer.^{[10]}^{: 10 } Other early contributors include David Ascher, Konrad Hinsen and Travis Oliphant.^{[10]}^{: 10 }
A new package called Numarray was written as a more flexible replacement for Numeric.^{[11]} Like Numeric, it is now deprecated.^{[12]}^{[13]} Numarray had faster operations for large arrays, but was slower than Numeric on small ones,^{[14]} so for a time both packages were used in parallel for different use cases. The last version of Numeric (v24.2) was released on 11 November 2005, while the last version of Numarray (v1.5.2) was released on 24 August 2006.^{[15]}
There was a desire to get Numeric into the Python standard library, but Guido van Rossum decided that the code was not maintainable in its state at the time.^{[16]}
In early 2005, NumPy developer Travis Oliphant wanted to unify the community around a single array package and ported Numarray's features to Numeric, releasing the result as NumPy 1.0 in 2006.^{[11]} The new project was originally part of SciPy, but was split out as a separate package, called NumPy, so that users would not have to install the large SciPy package just to get an array object. Support for Python 3 was added in 2011 with NumPy version 1.5.0.^{[17]}
In 2011, PyPy started development on an implementation of the NumPy API for PyPy.^{[18]} It is not yet fully compatible with NumPy.^{[19]}
NumPy targets the CPython reference implementation of Python, which is a nonoptimizing bytecode interpreter. Mathematical algorithms written for this version of Python often run much slower than compiled equivalents. NumPy addresses the slowness problem partly by providing multidimensional arrays and functions and operators that operate efficiently on arrays; using these requires rewriting some code, mostly inner loops, using NumPy.
Using NumPy in Python gives functionality comparable to MATLAB since they are both interpreted,^{[20]} and they both allow the user to write fast programs as long as most operations work on arrays or matrices instead of scalars. In comparison, MATLAB boasts a large number of additional toolboxes, notably Simulink, whereas NumPy is intrinsically integrated with Python, a more modern and complete programming language. Moreover, complementary Python packages are available; SciPy is a library that adds more MATLAB-like functionality and Matplotlib is a plotting package that provides MATLAB-like plotting functionality. Internally, both MATLAB and NumPy rely on BLAS and LAPACK for efficient linear algebra computations.
Python bindings of the widely used computer vision library OpenCV utilize NumPy arrays to store and operate on data. Since images with multiple channels are simply represented as three-dimensional arrays, indexing, slicing or masking with other arrays are very efficient ways to access specific pixels of an image. Using the NumPy array as the universal data structure in OpenCV for images, extracted feature points, filter kernels and more vastly simplifies the programming workflow and debugging.
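As a minimal sketch of this access pattern (plain NumPy only; the small image here is hypothetical, and OpenCV itself is not required), indexing, slicing, and boolean masking all operate directly on the height × width × channels array:

```python
import numpy as np

# A hypothetical 4x4 BGR image as OpenCV would store it: shape (height, width, channels)
img = np.zeros((4, 4, 3), dtype=np.uint8)

img[0, 0] = [255, 0, 0]           # set a single pixel (blue, in BGR order)
img[:, :, 2] = 128                # set the red channel of every pixel at once

top_left = img[:2, :2]            # slicing yields a 2x2x3 view, not a copy
bright = img[img[:, :, 2] > 100]  # boolean masking selects pixels by channel value
```

Because `top_left` is a view, writing to it changes the original image, which is what makes region-of-interest processing in OpenCV cheap.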
The core functionality of NumPy is its "ndarray" (n-dimensional array) data structure. These arrays are strided views on memory.^{[11]} In contrast to Python's built-in list data structure, these arrays are homogeneously typed: all elements of a single array must be of the same type.
Such arrays can also be views into memory buffers allocated by C/C++, Cython, and Fortran extensions to the CPython interpreter without the need to copy data around, giving a degree of compatibility with existing numerical libraries. This functionality is exploited by the SciPy package, which wraps a number of such libraries (notably BLAS and LAPACK). NumPy has built-in support for memory-mapped ndarrays.^{[11]}
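Memory-mapped ndarrays are exposed through `np.memmap`; a minimal sketch (the file path here is a temporary one chosen for illustration):

```python
import numpy as np
import os
import tempfile

# Create a file-backed array; the data lives on disk and is paged in on access
path = os.path.join(tempfile.mkdtemp(), "buf.dat")
mm = np.memmap(path, dtype=np.float64, mode="w+", shape=(1000,))
mm[:] = np.arange(1000)
mm.flush()

# Reopen read-only: the same buffer is viewed without loading it eagerly
ro = np.memmap(path, dtype=np.float64, mode="r", shape=(1000,))
print(float(ro[999]))  # 999.0
```

A memmap behaves like any other ndarray in arithmetic and slicing, which is what makes it useful for datasets larger than RAM.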
Inserting or appending entries to an array is not as trivially possible as it is with Python's lists. The np.pad(...) routine to extend arrays actually creates a new array of the desired shape and padding values, copies the given array into the new one and returns it. NumPy's np.concatenate([a1, a2]) operation does not actually link the two arrays but returns a new one, filled with the entries from both given arrays in sequence. Reshaping the dimensionality of an array with np.reshape(...) is only possible as long as the number of elements in the array does not change. These circumstances originate from the fact that NumPy's arrays must be views on contiguous memory buffers. A replacement package called Blaze attempts to overcome this limitation.^{[21]}
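The copy-not-link behaviour described above can be checked directly (a small sketch with illustrative values):

```python
import numpy as np

a = np.array([1, 2, 3])

padded = np.pad(a, (1, 1), constant_values=0)   # new array; the original is untouched
joined = np.concatenate([a, np.array([4, 5])])  # new array holding both inputs in sequence
square = np.reshape(np.arange(6), (2, 3))       # allowed: six elements either way

print(padded)                       # [0 1 2 3 0]
print(np.shares_memory(a, joined))  # False: concatenate copies rather than links
# np.reshape(a, (2, 3)) would raise ValueError: 3 elements cannot fill 6 slots
```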
Algorithms that are not expressible as a vectorized operation will typically run slowly because they must be implemented in "pure Python", while vectorization may increase memory complexity of some operations from constant to linear, because temporary arrays must be created that are as large as the inputs. Runtime compilation of numerical code has been implemented by several groups to avoid these problems; open-source solutions that interoperate with NumPy include scipy.weave, numexpr^{[22]} and Numba.^{[23]} Cython and Pythran are static-compiling alternatives to these.
Many modern large-scale scientific computing applications have requirements that exceed the capabilities of the NumPy arrays. For example, NumPy arrays are usually loaded into a computer's memory, which might have insufficient capacity for the analysis of large datasets. Further, NumPy operations are executed on a single CPU. However, many linear algebra operations can be accelerated by executing them on clusters of CPUs or on specialized hardware, such as GPUs and TPUs, which many deep learning applications rely on. As a result, several alternative array implementations have arisen in the scientific Python ecosystem in recent years, such as Dask for distributed arrays and TensorFlow or JAX for computations on GPUs. Because of NumPy's popularity, these often implement a subset of its API or mimic it, so that users can change their array implementation with minimal changes to their code.^{[5]} A more recently introduced library named CuPy,^{[24]} accelerated by Nvidia's CUDA framework, has also shown potential for faster computing, being a "drop-in replacement" for NumPy.^{[25]}
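One common way to exploit such API mimicry is to write code against a module variable rather than against NumPy directly. A hedged sketch (CuPy is optional here and the fallback keeps the example runnable on a CPU-only machine; `normalize` is a hypothetical helper, not part of either library):

```python
import numpy as np

# Because CuPy mirrors NumPy's API, the same source can be dispatched at runtime:
try:
    import cupy as xp          # arrays and math run on the GPU when CuPy is installed
except ImportError:
    xp = np                    # identical calls, CPU execution via NumPy

def normalize(v):
    # works unchanged for numpy.ndarray and cupy.ndarray
    return v / xp.linalg.norm(v)

u = normalize(xp.asarray([3.0, 4.0]))
print(u)   # [0.6 0.8]
```

Dask and JAX encourage a similar pattern, which is why the ecosystem converged on NumPy's function names and semantics as a de facto interface.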
>>> import numpy as np
>>> x = np.array([1, 2, 3])
>>> x
array([1, 2, 3])
>>> y = np.arange(10) # like Python's list(range(10)), but returns an array
>>> y
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
>>> a = np.array([1, 2, 3, 6])
>>> b = np.linspace(0, 2, 4) # create an array with four equally spaced points starting with 0 and ending with 2.
>>> c = a - b
>>> c
array([ 1. , 1.33333333, 1.66666667, 4. ])
>>> a**2
array([ 1, 4, 9, 36])
>>> a = np.linspace(-np.pi, np.pi, 100)
>>> b = np.sin(a)
>>> c = np.cos(a)
>>> from numpy.random import rand
>>> from numpy.linalg import solve, inv
>>> a = np.array([[1, 2, 3], [3, 4, 6.7], [5, 9.0, 5]])
>>> a.transpose()
array([[ 1. ,  3. ,  5. ],
       [ 2. ,  4. ,  9. ],
       [ 3. ,  6.7,  5. ]])
>>> inv(a)
array([[-2.27683616,  0.96045198,  0.07909605],
       [ 1.04519774, -0.56497175,  0.1299435 ],
       [ 0.39548023,  0.05649718, -0.11299435]])
>>> b = np.array([3, 2, 1])
>>> solve(a, b) # solve the equation ax = b
array([-4.83050847,  2.13559322,  1.18644068])
>>> c = rand(3, 3) * 20 # create a 3x3 random matrix of values within [0,1] scaled by 20
>>> c
array([[  3.98732789,   2.47702609,   4.71167924],
       [  9.24410671,   5.5240412 ,  10.6468792 ],
       [ 10.38136661,   8.44968437,  15.17639591]])
>>> np.dot(a, c) # matrix multiplication
array([[  53.61964114,   38.8741616 ,   71.53462537],
       [ 118.4935668 ,   86.14012835,  158.40440712],
       [ 155.04043289,  104.3499231 ,  195.26228855]])
>>> a @ c # Starting with Python 3.5 and NumPy 1.10
array([[  53.61964114,   38.8741616 ,   71.53462537],
       [ 118.4935668 ,   86.14012835,  158.40440712],
       [ 155.04043289,  104.3499231 ,  195.26228855]])
>>> M = np.zeros(shape=(2, 3, 5, 7, 11))
>>> T = np.transpose(M, (4, 2, 1, 3, 0))
>>> T.shape
(11, 5, 3, 7, 2)
>>> import numpy as np
>>> import cv2
>>> r = np.reshape(np.arange(256*256)%256,(256,256)) # 256x256 pixel array with a horizontal gradient from 0 to 255 for the red color channel
>>> g = np.zeros_like(r) # array of same size and type as r but filled with 0s for the green color channel
>>> b = r.T # transposed r will give a vertical gradient for the blue color channel
>>> cv2.imwrite('gradients.png', np.dstack([b,g,r])) # OpenCV images are interpreted as BGR; the depth-stacked array is written to an 8-bit RGB PNG file called 'gradients.png'
True
Iterative Python algorithm and vectorized NumPy version.
>>> # # # Pure iterative Python # # #
>>> points = [[9,2,8],[4,7,2],[3,4,4],[5,6,9],[5,0,7],[8,2,7],[0,3,2],[7,3,0],[6,1,1],[2,9,6]]
>>> qPoint = [4,5,3]
>>> minIdx = -1
>>> minDist = -1
>>> for idx, point in enumerate(points): # iterate over all points
...     dist = sum([(dp - dq)**2 for dp, dq in zip(point, qPoint)])**0.5 # compute the euclidean distance for each point to q
...     if dist < minDist or minDist < 0: # if necessary, update minimum distance and index of the corresponding point
...         minDist = dist
...         minIdx = idx
>>> print('Nearest point to q: {0}'.format(points[minIdx]))
Nearest point to q: [3, 4, 4]
>>> # # # Equivalent NumPy vectorization # # #
>>> import numpy as np
>>> points = np.array([[9,2,8],[4,7,2],[3,4,4],[5,6,9],[5,0,7],[8,2,7],[0,3,2],[7,3,0],[6,1,1],[2,9,6]])
>>> qPoint = np.array([4,5,3])
>>> minIdx = np.argmin(np.linalg.norm(points - qPoint, axis=1)) # compute all euclidean distances at once and return the index of the smallest one
>>> print('Nearest point to q: {0}'.format(points[minIdx]))
Nearest point to q: [3 4 4]