# NumExpr Documentation Reference¶

Contents:

## How it works¶

The string passed to evaluate is compiled into an object representing the expression and the types of the arrays it operates on.

The expression is first compiled using Python’s compile function (this means that the expressions have to be valid Python expressions). From this, the variable names can be taken. The expression is then evaluated using instances of a special object that keep track of what is being done to them and build up the parse tree of the expression.

This parse tree is then compiled to a bytecode program, which describes how to perform the operation element-wise. The virtual machine uses “vector registers”: each register is many elements wide (by default 4096 elements). The key to NumExpr’s speed is handling chunks of elements at a time.

There are two extremes to evaluating an expression elementwise. You can do each operation as arrays, returning temporary arrays. This is what you do when you use NumPy: 2*a+3*b uses three temporary arrays as large as a or b. This strategy wastes memory (a problem if your arrays are large), and also is not a good use of cache memory: for large arrays, the results of 2*a and 3*b won’t be in cache when you do the add.

The other extreme is to loop over each element, as in:

for i in xrange(len(a)):
    c[i] = 2*a[i] + 3*b[i]


This doesn’t consume extra memory, and is good for the cache, but, if the expression is not compiled to machine code, you will have a big case statement (or a bunch of if’s) inside the loop, which adds a large overhead for each element and hurts the CPU’s branch prediction.

numexpr uses an in-between approach. Arrays are handled in chunks (of 4096 elements) at a time, using a register machine. As Python code, it looks something like this:

for i in xrange(0, len(a), 4096):
    r0 = a[i:i+4096]
    r1 = b[i:i+4096]
    multiply(r0, 2, r2)
    multiply(r1, 3, r3)
    add(r2, r3, r2)
    c[i:i+4096] = r2


(remember that the 3-arg form stores the result in the third argument, instead of allocating a new array). This achieves a good balance between cache and branch-prediction. And the virtual machine is written entirely in C, which makes it faster than the Python above. Furthermore the virtual machine is also multi-threaded, which allows for efficient parallelization of NumPy operations.
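The chunked strategy described above can be sketched in plain NumPy, using preallocated "registers" and the out= form of the ufuncs so that no per-chunk temporaries are allocated (the chunk size and function name are illustrative, not NumExpr's actual internals):

```python
import numpy as np

CHUNK = 4096  # illustrative register width

def chunked_eval(a, b, c):
    """Compute c[:] = 2*a + 3*b chunk by chunk, reusing two 'registers'."""
    r2 = np.empty(CHUNK, dtype=a.dtype)
    r3 = np.empty(CHUNK, dtype=a.dtype)
    for i in range(0, len(a), CHUNK):
        n = min(CHUNK, len(a) - i)          # the last chunk may be short
        np.multiply(a[i:i+n], 2, out=r2[:n])  # r2 = 2*a[chunk]
        np.multiply(b[i:i+n], 3, out=r3[:n])  # r3 = 3*b[chunk]
        np.add(r2[:n], r3[:n], out=c[i:i+n])  # c[chunk] = r2 + r3

a = np.random.rand(10000)
b = np.random.rand(10000)
c = np.empty_like(a)
chunked_eval(a, b, c)
```

Because each chunk fits in cache, the intermediate results of 2*a and 3*b are still cache-resident when the add happens, which is the whole point of the approach.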

See also: http://www.bitsofbits.com/2014/09/21/numpy-micro-optimization-and-numexpr/

## Expected performance¶

The speed-ups of NumExpr with respect to NumPy can range from 0.95x to 20x, with 2x, 3x or 4x being typical, depending on the complexity of the expression and the internal optimization of the operators used. The strided and unaligned cases have been optimized too, so if the expression contains such arrays, the speed-up can increase significantly. Of course, you will need to operate on large arrays (typically larger than the cache size of your CPU) to see these improvements in performance.

Here are some real timings. For the contiguous case:

In : import numpy as np
In : import numexpr as ne
In : a = np.random.rand(int(1e6))
In : b = np.random.rand(int(1e6))
In : timeit 2*a + 3*b
10 loops, best of 3: 18.9 ms per loop
In : timeit ne.evaluate("2*a + 3*b")
100 loops, best of 3: 5.83 ms per loop   # 3.2x: medium speed-up (simple expr)
In : timeit 2*a + b**10
10 loops, best of 3: 158 ms per loop
In : timeit ne.evaluate("2*a + b**10")
100 loops, best of 3: 7.59 ms per loop   # 20x: large speed-up due to optimised pow()


For unaligned arrays, the speed-ups can be even larger:

In : a = np.empty(int(1e6), dtype="b1,f8")['f1']
In : b = np.empty(int(1e6), dtype="b1,f8")['f1']
In : a.flags.aligned, b.flags.aligned
Out: (False, False)
In : a[:] = np.random.rand(len(a))
In : b[:] = np.random.rand(len(b))
In : timeit 2*a + 3*b
10 loops, best of 3: 29.5 ms per loop
In : timeit ne.evaluate("2*a + 3*b")
100 loops, best of 3: 7.46 ms per loop   # ~ 4x speed-up


## NumExpr 2.0 User Guide¶

The numexpr package supplies routines for the fast evaluation of array expressions elementwise by using a vector-based virtual machine.

Using it is simple:

>>> import numpy as np
>>> import numexpr as ne
>>> a = np.arange(10)
>>> b = np.arange(0, 20, 2)
>>> c = ne.evaluate("2*a+3*b")
>>> c
array([ 0,  8, 16, 24, 32, 40, 48, 56, 64, 72])


### Building¶

NumExpr requires Python 2.6 or greater, and NumPy 1.7 or greater. It is built in the standard Python way:

$ python setup.py build
$ python setup.py install


You must have a C compiler (e.g. MSVC on Windows or GCC on Linux) installed.

Then change to a directory that is not the repository directory (e.g. /tmp) and test numexpr with:

\$ python -c "import numexpr; numexpr.test()"


### Enabling Intel VML support¶

Starting with release 1.2, numexpr includes support for Intel’s VML library. This allows for better performance on Intel architectures, mainly when evaluating transcendental functions (trigonometric, exponential, …). It also enables numexpr to use several CPU cores.

If you have Intel’s MKL (the library that embeds VML), just copy the site.cfg.example that comes in the distribution to site.cfg and edit the latter giving proper directions on how to find your MKL libraries in your system. After doing this, you can proceed with the usual building instructions listed above. Pay attention to the messages during the building process in order to know whether MKL has been detected or not. Finally, you can check the speed-ups on your machine by running the bench/vml_timing.py script (you can play with different parameters to the set_vml_accuracy_mode() and set_vml_num_threads() functions in the script so as to see how it would affect performance).

Threads are spawned at import-time, with the number being set by the environment variable NUMEXPR_MAX_THREADS. The default maximum thread count is 64. There is no advantage to spawning more threads than the number of virtual cores available on the computing node. In practice, NumExpr scales at large thread counts (> 8) only on very large matrices (> 2**22 elements). Spawning large numbers of threads is not free, and can increase import times for NumExpr or packages that import it, such as Pandas or PyTables.

If desired, the number of threads in the pool can be adjusted via an environment variable, NUMEXPR_NUM_THREADS (preferred) or OMP_NUM_THREADS. Typically, setting NUMEXPR_MAX_THREADS alone is sufficient; the number of threads used can be adjusted dynamically via numexpr.set_num_threads(int). The number of threads can never exceed that set by NUMEXPR_MAX_THREADS.

If the user has not configured the environment prior to importing NumExpr, info logs will be generated, and the initial number of threads _that are used_ will be set to the number of cores detected in the system or 8, whichever is less.

Usage:

import os
os.environ['NUMEXPR_MAX_THREADS'] = '8'  # illustrative value; must be set before numexpr is first imported
import numexpr as ne


### Usage Notes¶

NumExpr’s principal routine is:

evaluate(ex, local_dict=None, global_dict=None, optimization='aggressive', truediv='auto')


where ex is a string forming an expression, like "2*a+3*b". The values for a and b will by default be taken from the calling function’s frame (through the use of sys._getframe()). Alternatively, they can be specified using the local_dict or global_dict arguments, or passed as keyword arguments.
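For example, the same expression can be evaluated with operands taken from the calling frame or supplied explicitly through local_dict (a minimal sketch, assuming numexpr is importable; the names x and y are illustrative):

```python
import numpy as np
import numexpr as ne

a = np.arange(5, dtype=np.float64)
b = np.arange(5, dtype=np.float64)

# Operands picked up automatically from the calling frame:
r1 = ne.evaluate("2*a + 3*b")

# The same computation, with operands passed explicitly via local_dict:
r2 = ne.evaluate("2*x + 3*y", local_dict={"x": a, "y": b})
```

Passing local_dict avoids the frame introspection entirely, which is useful when the expression is built in one place and evaluated in another.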

The optimization parameter can take the values 'moderate' or 'aggressive'. 'moderate' means that no optimization is made that can affect precision at all. 'aggressive' (the default) means that the expression can be rewritten in a way that precision could be affected, but normally very little. For example, in 'aggressive' mode the transformation x**3 -> x*x*x is made, but not in 'moderate' mode.

The truediv parameter specifies whether the division is a ‘floor division’ (False) or a ‘true division’ (True). The default is the value of __future__.division in the interpreter. See PEP 238 for details.

Expressions are cached, so reuse is fast. Arrays or scalars are allowed for the variables, which must be of type 8-bit boolean (bool), 32-bit signed integer (int), 64-bit signed integer (long), double-precision floating point number (float), 2x64-bit, double-precision complex number (complex) or raw string of bytes (str). If they are not in the previous set of types, they will be properly upcasted for internal use (the result will be affected as well). The arrays must all be the same size.

### Datatypes supported internally¶

NumExpr operates internally only with the following types:

• 8-bit boolean (bool)
• 32-bit signed integer (int or int32)
• 64-bit signed integer (long or int64)
• 32-bit single-precision floating point number (float or float32)
• 64-bit, double-precision floating point number (double or float64)
• 2x64-bit, double-precision complex number (complex or complex128)
• Raw string of bytes (str)

If the arrays in the expression do not match any of these types, they will be upcasted to one of the above types (following the usual type inference rules; see below). Keep this in mind when estimating the memory consumption during the computation of your expressions.

Also, the types in NumExpr conditions are somewhat stricter than those of Python. For instance, the only valid constants for booleans are True and False, and they are never automatically cast to integers.

### Casting rules¶

Casting rules in NumExpr follow closely those of NumPy. However, for implementation reasons, there are some known exceptions to this rule, namely:

• When an array with type int8, uint8, int16 or uint16 is used inside NumExpr, it is internally upcasted to an int (or int32 in NumPy notation).
• When an array with type uint32 is used inside NumExpr, it is internally upcasted to a long (or int64 in NumPy notation).
• A floating point function (e.g. sin) acting on int8 or int16 types returns a float64 type, instead of the float32 that is returned by NumPy functions. This is mainly due to the absence of native int8 or int16 types in NumExpr.
• In operations involving a scalar and an array, the normal casting rules are used in NumExpr, in contrast with NumPy, where array types take priority. For example, if a is an array of type float32 and b is a scalar of type float64 (or the Python float type, which is equivalent), then a*b returns a float64 in NumExpr, but a float32 in NumPy (i.e. array operands take priority in determining the result type). If you need the result to stay a float32, be sure you use a float32 scalar too.
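This scalar-casting difference can be checked directly (a small sketch, assuming numexpr is importable):

```python
import numpy as np
import numexpr as ne

a = np.ones(3, dtype=np.float32)
b = 0.5  # a Python float, i.e. a float64 scalar

numpy_dtype = (a * b).dtype                # NumPy: the array type wins -> float32
numexpr_dtype = ne.evaluate("a * b").dtype  # NumExpr: normal casting rules -> float64
```

To keep the result in float32 with NumExpr, pass the scalar as np.float32(0.5) instead.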

### Supported operators¶

NumExpr supports the set of operators listed below:

• Logical operators: &, |, ~
• Comparison operators: <, <=, ==, !=, >=, >
• Unary arithmetic operators: -
• Binary arithmetic operators: +, -, *, /, **, %, <<, >>
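For instance, the logical and comparison operators combine naturally within a single expression (a small sketch, assuming numexpr is importable):

```python
import numpy as np
import numexpr as ne

a = np.arange(10)

# Select elements strictly between 2 and 8, excluding 5,
# in one pass and with no NumPy temporaries:
mask = ne.evaluate("(a > 2) & (a < 8) & ~(a == 5)")
```

Note that &, | and ~ play the role of Python's and, or and not, which are not supported inside expressions.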

### Supported functions¶

The currently supported set is:

• where(bool, number1, number2): number – number1 if the bool condition is true, number2 otherwise.
• {sin,cos,tan}(float|complex): float|complex – trigonometric sine, cosine or tangent.
• {arcsin,arccos,arctan}(float|complex): float|complex – trigonometric inverse sine, cosine or tangent.
• arctan2(float1, float2): float – trigonometric inverse tangent of float1/float2.
• {sinh,cosh,tanh}(float|complex): float|complex – hyperbolic sine, cosine or tangent.
• {arcsinh,arccosh,arctanh}(float|complex): float|complex – hyperbolic inverse sine, cosine or tangent.
• {log,log10,log1p}(float|complex): float|complex – natural, base-10 and log(1+x) logarithms.
• {exp,expm1}(float|complex): float|complex – exponential and exponential minus one.
• sqrt(float|complex): float|complex – square root.
• abs(float|complex): float|complex – absolute value.
• conj(complex): complex – conjugate value.
• {real,imag}(complex): float – real or imaginary part of complex.
• complex(float, float): complex – complex from real and imaginary parts.
• contains(str, str): bool – returns True for every string in op1 that contains op2.
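As an example, where() and sqrt() can be combined to apply the square root only where a condition holds (a minimal sketch, assuming numexpr is importable):

```python
import numpy as np
import numexpr as ne

x = np.array([0.0, 1.0, 4.0, 9.0])

# Take the square root of elements greater than 2, pass the rest through:
r = ne.evaluate("where(x > 2.0, sqrt(x), x)")
```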

### Supported reduction operations¶

The currently supported set is:

• sum(number, axis=None): Sum of array elements over a given axis. Negative axis values are not supported.
• prod(number, axis=None): Product of array elements over a given axis. Negative axis values are not supported.

Note: because of internal limitations, reduction operations must appear last in the stack. If they do not, an error like the following will be issued:

>>> ne.evaluate('sum(1)*(-1)')
RuntimeError: invalid program: reduction operations must occur last
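A simple workaround is to keep the reduction as the outermost operation inside the expression, and apply any remaining operation outside of NumExpr (illustrative sketch):

```python
import numpy as np
import numexpr as ne

a = np.arange(10, dtype=np.float64)

# The reduction is outermost, so this program is valid:
s = ne.evaluate("sum(2*a)")

# The -1 factor that could not follow the reduction is applied afterwards:
result = -1 * s
```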


### General routines¶

• evaluate(expression, local_dict=None, global_dict=None, optimization='aggressive', truediv='auto'): Evaluate a simple array expression element-wise. See examples above.

• re_evaluate(local_dict=None): Re-evaluate the last array expression without any check. This is meant for accelerating loops that re-evaluate the same expression repeatedly without changing anything other than the operands. If unsure, use evaluate(), which is safer.

• test(): Run all the tests in the test suite.

• print_versions(): Print the versions of software that numexpr relies on.

• set_num_threads(nthreads): Sets a number of threads to be used in operations. Returns the previous setting for the number of threads. See note below to see how the number of threads is set via environment variables.

If you are using VML, you may want to use set_vml_num_threads(nthreads) to perform the parallel job with VML instead. However, you should get very similar performance with VML-optimized functions, and VML’s parallelizer cannot deal with common expressions like (x+1)*(x-2), while NumExpr’s can.

• detect_number_of_cores(): Detects the number of cores on a system.
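A typical pattern for re_evaluate() is a loop in which only the operand values change between iterations (a minimal sketch, assuming numexpr is importable):

```python
import numpy as np
import numexpr as ne

x = np.linspace(0.0, 1.0, 1000)

# First call: parse, compile and cache the expression.
y = ne.evaluate("2*x + 1")

for _ in range(3):
    x += 0.1             # only the operand values change, in place
    y = ne.re_evaluate()  # re-run the cached expression, skipping all checks
```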

### Intel’s VML specific support routines¶

When compiled with Intel’s VML (Vector Math Library), you will be able to use some additional functions for controlling its use. These are:

• set_vml_accuracy_mode(mode): Set the accuracy for VML operations.

The mode parameter can take the values:

• 'low': Equivalent to VML_LA - low accuracy VML functions are called
• 'high': Equivalent to VML_HA - high accuracy VML functions are called
• 'fast': Equivalent to VML_EP - enhanced performance VML functions are called

It returns the previous mode.

This call is equivalent to the vmlSetMode() in the VML library. See:

http://www.intel.com/software/products/mkl/docs/webhelp/vml/vml_DataTypesAccuracyModes.html

• set_vml_num_threads(nthreads): Suggests a maximum number of threads to be used in VML operations.

This function is equivalent to the call mkl_domain_set_num_threads(nthreads, MKL_VML) in the MKL library.

• get_vml_version(): Get the VML/MKL library version.

### Authors¶

Numexpr was initially written by David Cooke, and extended to more types by Tim Hochberg.

Francesc Alted contributed support for booleans and single-precision floating point types, efficient strided and unaligned array operations and multi-threading code.

Ivan Vilata contributed support for strings.

Gregor Thalhammer implemented the support for Intel VML (Vector Math Library).

Mark Wiebe added support for the new iterator in NumPy, which allows for better performance in more scenarios (like broadcasting, Fortran-ordered or non-native byte orderings).

Gaëtan de Menten contributed important bug fixes and speed enhancements.

Antonio Valentino contributed the port to Python 3.