A ~5 minute guide to Numba — Numba documentation (2024)

Numba is a just-in-time compiler for Python that works best on code that uses NumPy arrays and functions, and loops. The most common way to use Numba is through its collection of decorators that can be applied to your functions to instruct Numba to compile them. When a call is made to a Numba-decorated function it is compiled to machine code “just-in-time” for execution and all or part of your code can subsequently run at native machine code speed!

Out of the box Numba works with the following:

  • OS: Windows (64 bit), OSX, Linux (64 bit). Unofficial support on *BSD.

  • Architecture: x86, x86_64, ppc64le, armv8l (aarch64), M1/Arm64.

  • GPUs: Nvidia CUDA.

  • CPython

  • NumPy 1.22 - 1.26

How do I get it?

Numba is available as a conda package for the Anaconda Python distribution:

$ conda install numba

Numba also has wheels available:

$ pip install numba

Numba can also be compiled from source, although we do not recommend it for first-time Numba users.

Numba is often used as a core package, so its dependencies are kept to an absolute minimum; however, extra packages can be installed as follows to provide additional functionality:

  • scipy - enables support for compiling numpy.linalg functions.

  • colorama - enables support for color highlighting in backtraces/error messages.

  • pyyaml - enables configuration of Numba via a YAML config file.

  • intel-cmplr-lib-rt - allows the use of the Intel SVML (high performance short vector math library, x86_64 only). Installation instructions are in the performance tips.

Will Numba work for my code?

This depends on what your code looks like. If your code is numerically oriented (does a lot of math), uses NumPy a lot and/or has a lot of loops, then Numba is often a good choice. In these examples we’ll apply the most fundamental of Numba’s JIT decorators, @jit, to try and speed up some functions to demonstrate what works well and what does not.

Numba works well on code that looks like this:

from numba import jit
import numpy as np

x = np.arange(100).reshape(10, 10)

@jit
def go_fast(a): # Function is compiled to machine code when called the first time
    trace = 0.0
    for i in range(a.shape[0]):   # Numba likes loops
        trace += np.tanh(a[i, i]) # Numba likes NumPy functions
    return a + trace              # Numba likes NumPy broadcasting

print(go_fast(x))

It won’t work very well, if at all, on code that looks like this:

from numba import jit
import pandas as pd

x = {'a': [1, 2, 3], 'b': [20, 30, 40]}

@jit(forceobj=True, looplift=True) # Need to use object mode, try and compile loops!
def use_pandas(a): # Function will not benefit from Numba jit
    df = pd.DataFrame.from_dict(a) # Numba doesn't know about pd.DataFrame
    df += 1                        # Numba doesn't understand what this is
    return df.cov()                # or this!

print(use_pandas(x))

Note that Pandas is not understood by Numba and as a result Numba would simply run this code via the interpreter but with the added cost of the Numba internal overheads!

What is object mode?

The Numba @jit decorator fundamentally operates in two compilation modes, nopython mode and object mode. In the go_fast example above, the @jit decorator defaults to operating in nopython mode. The behaviour of the nopython compilation mode is to essentially compile the decorated function so that it will run entirely without the involvement of the Python interpreter. This is the recommended and best-practice way to use the Numba jit decorator as it leads to the best performance.

Should the compilation in nopython mode fail, Numba can compile using object mode. This is achieved through using the forceobj=True keyword argument to the @jit decorator (as seen in the use_pandas example above). In this mode Numba will compile the function with the assumption that everything is a Python object and essentially run the code in the interpreter. Specifying looplift=True might gain some performance over pure object mode as Numba will try and compile loops into functions that run in machine code, and it will run the rest of the code in the interpreter. For best performance avoid using object mode in general!

How to measure the performance of Numba?

First, recall that Numba has to compile your function for the argument typesgiven before it executes the machine code version of your function. This takestime. However, once the compilation has taken place Numba caches the machinecode version of your function for the particular types of arguments presented.If it is called again with the same types, it can reuse the cached versioninstead of having to compile again.

A really common mistake when measuring performance is to not account for the above behaviour and to time code once with a simple timer that includes the time taken to compile your function in the execution time.

For example:

from numba import jit
import numpy as np
import time

x = np.arange(100).reshape(10, 10)

@jit(nopython=True)
def go_fast(a): # Function is compiled and runs in machine code
    trace = 0.0
    for i in range(a.shape[0]):
        trace += np.tanh(a[i, i])
    return a + trace

# DO NOT REPORT THIS... COMPILATION TIME IS INCLUDED IN THE EXECUTION TIME!
start = time.perf_counter()
go_fast(x)
end = time.perf_counter()
print("Elapsed (with compilation) = {}s".format((end - start)))

# NOW THE FUNCTION IS COMPILED, RE-TIME IT EXECUTING FROM CACHE
start = time.perf_counter()
go_fast(x)
end = time.perf_counter()
print("Elapsed (after compilation) = {}s".format((end - start)))

This, for example, prints:

Elapsed (with compilation) = 0.33030009269714355s
Elapsed (after compilation) = 6.67572021484375e-06s

A good way to measure the impact Numba JIT has on your code is to time execution using the timeit module functions; these measure multiple iterations of execution and, as a result, can be made to accommodate the compilation time in the first execution.

As a side note, if compilation time is an issue, Numba JIT supports on-disk caching of compiled functions and also has an Ahead-Of-Time compilation mode.

How fast is it?

Assuming Numba can operate in nopython mode, or at least compile some loops, it will target compilation to your specific CPU. Speed up varies depending on application but can be one to two orders of magnitude. Numba has a performance guide that covers common options for gaining extra performance.

How does Numba work?

Numba reads the Python bytecode for a decorated function and combines this with information about the types of the input arguments to the function. It analyzes and optimizes your code, and finally uses the LLVM compiler library to generate a machine code version of your function, tailored to your CPU capabilities. This compiled version is then used every time your function is called.

Other things of interest:

Numba has quite a few decorators; we’ve seen @jit, but there’s also:

  • @njit - this is an alias for @jit(nopython=True) as it is so commonly used!

  • @vectorize - produces NumPy ufuncs (with all the ufunc methods supported). Docs are here.

  • @guvectorize - produces NumPy generalized ufuncs. Docs are here.

  • @stencil - declare a function as a kernel for a stencil-like operation. Docs are here.

  • @jitclass - for jit-aware classes. Docs are here.

  • @cfunc - declare a function for use as a native callback (to be called from C/C++ etc). Docs are here.

  • @overload - register your own implementation of a function for use in nopython mode, e.g. @overload(scipy.special.j0). Docs are here.

Extra options available in some decorators:

  • parallel = True - enable the automatic parallelization of the function.

  • fastmath = True - enable fast-math behaviour for the function.

ctypes/cffi/cython interoperability:

  • cffi - The calling of CFFI functions is supported in nopython mode.

  • ctypes - The calling of ctypes wrapped functions is supported in nopython mode.

  • Cython exported functions are callable.

GPU targets:

Numba can target Nvidia CUDA GPUs. You can write a kernel in pure Python and have Numba handle the computation and data movement (or do this explicitly). See the Numba documentation on CUDA.
