## Focal Black & White effect

A well-known Google Picasa effect is the Focal Black & White effect. This effect preserves the color within a focal region and converts pixels outside this region to grayscale.

The algorithm is surprisingly simple: it consists of calculating a weighting factor (which depends on the distance to the focal spot and on the focal radius), converting the RGB value at each pixel to grayscale, and calculating a weighted average of this grayscale value and the original RGB value. Let us see how this can be achieved in Quasar:

```
function [] = __kernel__ focus_bw(x, y, focus_spot, falloff, radius, pos)
    % Calculation of the weight
    p = (pos - focus_spot) / max(size(x,0..1))
    weight = exp(-1.0 / (0.05 + radius * (2 * dotprod(p,p)) ^ falloff))
    % Conversion to grayscale & averaging
    rgbval = x[pos[0],pos[1],0..2]
    grayval = dotprod(rgbval,[0.3,0.59,0.11])*[1,1,1]
    y[pos[0],pos[1],0..2] = lerp(grayval, rgbval, weight)
end
```

## Code explanation

First, the kernel function `focus_bw` is defined. `__kernel__` is a special function qualifier that identifies a kernel function (similar to OpenCL’s `__kernel` or `kernel` qualifier). Kernel functions are natively compiled for any target architecture that you have in mind: multi-core CPU x86/64 ELF, ARM with NEON instructions, up to NVidia PTX code. Next, a parameter list follows; in this case, the types of the parameters are not specified.

Note the special parameter `pos`. In Quasar, `pos` is a parameter name that is reserved for kernel functions and that holds the current position in the image.

The kernel function contains two major blocks:

- In the weight calculation step, first the position relative to the focus spot coordinates is computed. This relative position is normalized by dividing by the maximum size in the first two dimensions (note that Quasar uses 0-based indexing), i.e., by the maximum of the width and the height of the image. Next, the weight is computed as a function of the distance to the focal spot. A special built-in function `dotprod`, which can also be found in high-level shader languages (HLSL/GLSL) and which calculates the dot product of two vectors, is used for this purpose.
- For extracting the RGB value at the position `pos`, we use a matrix slice indexer: `0..2` constructs a vector of length 3 (namely `[0,1,2]`), which is then used for indexing. In fact, `x[pos[0],pos[1],0..2] = [x[pos[0],pos[1],0], x[pos[0],pos[1],1], x[pos[0],pos[1],2]]`. Which form do you prefer, the left-hand side or the right-hand side? You can choose. Note that it is not possible to write `x[pos[0..1],0..2]`, because this expression would construct a matrix of size `2 x 3`.
- The gray value is calculated as the dot product of `[0.3,0.59,0.11]` with the original RGB value. Finally, the gray value is mixed with the original RGB value using the `lerp` (“linear interpolation”) function. In fact, `lerp` is nothing more than the function `lerp = (x,y,a) -> (1-a) * x + a * y`. The resulting RGB value is written to the output image `y`. That’s it!
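For readers who want to experiment with the per-pixel arithmetic outside Quasar, here is a minimal Python transcription of the kernel body for a single pixel (the function name `focus_bw_pixel` and the tuple-based arguments are illustrative, not part of Quasar):

```python
import math

def lerp(x, y, a):
    # Elementwise linear interpolation over 3-vectors: (1-a)*x + a*y
    return [(1 - a) * xi + a * yi for xi, yi in zip(x, y)]

def focus_bw_pixel(rgb, pos, focus_spot, falloff, radius, img_size):
    # Position relative to the focal spot, normalized by the largest dimension
    p = [(pos[i] - focus_spot[i]) / max(img_size) for i in range(2)]
    d2 = p[0] * p[0] + p[1] * p[1]  # dotprod(p, p)
    weight = math.exp(-1.0 / (0.05 + radius * (2 * d2) ** falloff))
    # Grayscale conversion with the usual luma weights
    gray = 0.3 * rgb[0] + 0.59 * rgb[1] + 0.11 * rgb[2]
    return lerp([gray] * 3, rgb, weight)

# One red pixel, processed with focus_spot=[256,128], falloff=0.5, radius=10:
result = focus_bw_pixel([1.0, 0.0, 0.0], (400, 300), (256, 128), 0.5, 10, (512, 512))
```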

Finally, we still need to call the kernel function. For this, we use the `parallel_do` construct:

```
img_in = imread("quasar.jpg")
img_out = zeros(size(img_in))
parallel_do(size(img_out,0..1),img_in,img_out,[256,128],0.5,10,focus_bw)
imshow(img_out)
```

First, an input image `img_in` is loaded using the function `imread` (“image read”). Then, an output image is allocated with the same size as the input image.

Next, the `parallel_do` function is called with a number of parameters. The first parameter specifies the dimensions of the grid of “work items” that can run in parallel. Here, each pixel of the image can be processed in parallel, hence the dimensions are the size (i.e., the height and width) of the output image. The following parameters are argument values that are passed to the kernel function and that are declared in the kernel function definition. Finally, the kernel function to be called is passed.

Note that, in contrast to dynamically typed scripting languages, the Quasar language is (mostly) statically typed, and the Quasar compiler performs type inference to derive the data types of all the parameters, based on the surrounding context. Here, Quasar will find out that `img_in` is of the `cube` data type (a 3D array) and it will derive all the other missing data types from that. Consequently, efficient parallel code can be generated in a manner that is independent of the underlying platform.

Here is the complete code once again:

```
function [] = __kernel__ focus_bw(x, y, focus_spot, falloff, radius, pos)
    p = (pos - focus_spot) / max(size(x,0..1))
    weight = exp(-1.0 / (0.05 + radius * (2 * dotprod(p,p)) ^ falloff))
    rgbval = x[pos[0],pos[1],0..2]
    grayval = dotprod(rgbval,[0.3,0.59,0.11])*[1,1,1]
    y[pos[0],pos[1],0..2] = lerp(grayval, rgbval, weight)
end

img_in = imread("flowers.jpg")
img_out = zeros(size(img_in))
parallel_do(size(img_out,0..1),img_in,img_out,[256,128],0.5,10,focus_bw)
imshow(img_out)
```

## Example

With eleven lines of code, you obtain a beautiful Focal Black & White effect.

## Overview of some new features in Quasar

This document lists a number of new features that were introduced in Quasar in January 2014.

## Object-oriented programming

The implementation of object-oriented programming in Quasar is far from complete; however, a number of new concepts have been introduced:

**Unification of static and dynamic classes:** Previously, there were static class types (`type myclass : {mutable} class`) and dynamic object types (`myobj = object()`). In many cases, the set of properties (and their corresponding types) of an `object()` is known in advance. To enjoy the advantages of type inference, there are now also *dynamic* class types:

```
type Bird : dynamic class
    name : string
    color : vec3
end
```

The dynamic class types are similar to classes in Python. At run-time, it is possible to add fields or methods:

```
bird = Bird()
bird.position = [0, 0, 10]
bird.speed = [1, 1, 0]
bird.is_flying = false
bird.start_flying = () -> bird.is_flying = true
```

Alternatively, member functions can be implemented statically (similar to mutable or immutable classes):

```
function [] = start_flying(self : Bird)
    self.is_flying = true
end
```

Dynamic classes are also useful for interoperability with other languages, particularly when the program is run within the Quasar interpreter. The dynamic classes implement Mono/.NET dynamic typing, which means that imported libraries (e.g. through `import "lib.dll"`) can now use and inspect the object properties more easily. Dynamic classes are also frequently used by the UI library (`Quasar.UI.dll`). Thanks to the static typing of the predefined members, efficient code can be generated.

One limitation is that dynamic classes cannot be used from within `__kernel__` or `__device__` functions. As compensation, the dynamic classes are also a bit lighter (in terms of run-time overhead), because there is no multi-device (CPU/GPU/…) management overhead: it is known a priori that the dynamic objects will “live” in the CPU memory.

Also see GitHub issue #88 for some earlier thoughts.

**Parametric types:** In earlier versions of Quasar, generic types could be obtained by not specifying the types of the members of a class:

```
type stack : mutable class
    tab
    pointer
end
```

However, this limits the type inference, because the compiler cannot make any assumptions w.r.t. the type of `tab` or `pointer`. When objects of the type `stack` are used within a for-loop, the automatic loop parallelizer will complain that insufficient information is available on the types of `tab` and `pointer`.

To solve this issue, types can now be parametric:

```
type stack[T] : mutable class
    tab : vec[T]
    pointer : int
end
```

An object of the type `stack` can then be constructed as follows:

```
obj = stack[int]
obj = stack[stack[cscalar]]
```

Parametric classes are similar to template classes in C++. For the Quasar back-ends, the implementation of parametric types is completely analogous to C++: for each instantiation of the parametric type, a `struct` is generated.

It is also possible to define methods for parametric classes:

```
function [] = __device__ push[T](self : stack[T], item : T)
    cnt = (self.pointer += 1) % atomic add for thread safety
    self.tab[cnt - 1] = item
end
```

Methods for parametric classes can be `__device__` functions as well, so that they can be used on both the CPU and the GPU. In the future, this will allow us to create thread-safe and lock-free implementations of common data types, such as sets, lists, stacks and dictionaries, within Quasar. The internal implementation of parametric types and methods in Quasar (i.e. the runtime) uses a combination of erasure and reification.
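As a rough analogue (not Quasar machinery), the parametric `stack[T]` corresponds to a generic class in other languages; a minimal Python sketch:

```python
from typing import Generic, TypeVar

T = TypeVar("T")

class Stack(Generic[T]):
    # Rough Python analogue of the parametric Quasar class stack[T]:
    # a fixed-capacity buffer (tab) plus a fill pointer.
    def __init__(self, capacity: int):
        self.tab = [None] * capacity
        self.pointer = 0

    def push(self, item: T) -> None:
        # The Quasar version increments the pointer atomically for
        # thread safety; this plain Python sketch is single-threaded.
        self.pointer += 1
        self.tab[self.pointer - 1] = item

s = Stack[int](4)
s.push(7)
s.push(9)
```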

**Inheritance:** Inherited classes can be defined as follows:

```
type bird : class
    name : string
    color : vec3
end

type duck : bird
    ...
end
```

Inheritance is allowed for all three class types (mutable, immutable and dynamic).

Note: multiple inheritance is currently not supported. (Multiple inheritance has the problem that special “precedence rules” are required to determine which method is used when multiple base classes define a certain method; in a dynamic context, this would create substantial overhead.)

**Constructors:** Defining a constructor follows the same pattern that we used to define methods. For the above stack class, we have:

```
% Default constructor
function y = stack[T]()
    y = stack[T](tab:=vec[T](100), pointer:=0)
end

% Constructor with int parameter
function y = stack[T](capacity : int)
    y = stack[T](tab:=vec[T](capacity), pointer:=0)
end

% Constructor with vec[T] parameter
function y = stack[T](items : vec[T])
    y = stack[T](tab:=copy(items), pointer:=0)
end
```

Note that the constructor itself creates an instance of the type, rather than this being done automatically. Consequently, it is possible to return a `null` value as well:

```
function y : ^stack[T] = stack[T](capacity : int)
    if capacity > 1024
        y = null % Capacity too large, no can do...
    else
        y = stack[T](tab:=vec[T](capacity), pointer:=0)
    endif
end
```

In C++/Java this is not possible: the constructor always returns the `this` object, which is often seen as a disadvantage.

A constructor that is intended to be used on the GPU (or on the CPU in native mode) can simply be defined by adding the `__device__` modifier:

```
function y = __device__ stack[T](items : vec[T])
    y = stack[T](tab:=copy(items), pointer:=0)
end
```

Note #1: instead of `stack[T]()`, we could have used any other name, such as `make_stack[T]()`. Using the type name to identify the constructor has two advantages:

- the compiler knows that this method is intended to be used to create objects of this class;
- non-uniformity (`new_stack[T]()`, `make_stack[T]()`, `create_stack()`, …) is avoided.

Note #2: there are no destructors (yet). Because of the automatic memory management, this is not a big issue right now.

## Type inference enhancements

**Looking ‘through’ functions (type reconstruction):** In earlier releases, the compiler could not determine the return types of functions very well, which could lead to problems with the automatic loop parallelizer:

```
function y = imfilter(x, kernel)
    ...
end

% Warning - type of y unknown
y = imfilter(imread("x.png")[:,:,1])
assert(type(y,"scalar")) % Gives compilation error!
```

Here, the compiler cannot determine the type of `y`, even though it is known that `imread("x.png")[:,:,1]` is a matrix.

In the newest version, the compiler attempts to perform type inference for the `imfilter` function, knowing the types of its arguments. This does not make it possible to determine the return type of `imfilter` in general, but it *does* for this specific case.

Note that type reconstruction can create an additional burden for the compiler (especially when the function contains many calls that require recursive type reconstruction). However, type reconstruction is only used when the type of at least one of the output parameters of a function could not be determined.

**Members of dynamic objects:** The members of many dynamic objects (e.g. `qform`, `qplot`) are now statically typed. This also greatly improves the type inference in a number of places.

## High-level operations inside kernel functions

Automatic memory management *on* the computation device is a new feature that greatly improves the expressiveness of Quasar programs. Typically, the programmer wants to use (non-fixed-length) vector or matrix expressions within a for-loop (or a kernel function). Until now, this resulted in a compilation error, *“function cannot be used within the context of a kernel function”* or *“loop parallelization not possible because of function XX”*. The transparent handling of vector or matrix expressions within kernel functions requires some special (and sophisticated) handling on the Quasar compiler and runtime sides. In particular, what is needed is *dynamic kernel memory*: memory that is allocated on the GPU (or CPU) during the operation of the kernel. The dynamic memory is disposed (freed) either when the kernel function terminates or at a later point.

There are a few use cases for dynamic kernel memory:

- When the algorithm needs to process several small-sized (`3x3`) to medium-sized (e.g. `64x64`) matrices, for example a kernel function that performs matrix operations for every pixel in the image. The size of the matrices may or may not be known in advance.
- For efficient handling of multivariate functions that are applied to (non-overlapping or overlapping) image *blocks*.
- When the algorithm works with dynamic data structures such as linked lists or trees, it is often necessary to allocate “nodes” on the fly.
- To use some sort of “scratch” memory that does not fit into the GPU shared memory (note: the GPU shared memory is 32K, but this needs to be shared between all threads; for 1024 threads this leaves 32 bytes of private memory per thread). Dynamic memory does not have such a stringent limitation. Moreover, dynamic memory is not shared between threads and is disposed either 1) immediately when the memory is no longer needed or 2) when a GPU/CPU thread exits. Correspondingly, when 1024 threads each use 32K, this requires less than 32MB, because the threads are *logically* in parallel, but not *physically*.

In all these cases, dynamic memory can be used simply by calling the `zeros`, `ones`, `eye` or `uninit` functions. One may also use slicing operators (e.g. `A[0..9, 2]`) to extract a sub-matrix. The slicing operations then take the current boundary access mode (e.g. mirroring, circular) into account.

### Examples

The following program transposes the `16x16` blocks of an image, creating a cool tiling effect. First, a kernel function version is given; then, a loop version. Both versions are equivalent: in fact, the second version is internally converted to the first one.

#### Kernel version

```
function [] = __kernel__ kernel (x : mat, y : mat, B : int, pos : ivec2)
    r1 = pos[0]*B..pos[0]*B+B-1      % creates a dynamically allocated vector
    r2 = pos[1]*B..pos[1]*B+B-1      % creates a dynamically allocated vector
    y[r1, r2] = transpose(x[r1, r2]) % matrix transpose (creates a dynamically allocated matrix)
end

x = imread("lena_big.tif")[:,:,1]
y = zeros(size(x))
B = 16 % block size
parallel_do(size(x,0..1) / B,x,y,B,kernel)
```

#### Loop version

```
x = imread("lena_big.tif")[:,:,1]
y = zeros(size(x))
B = 16 % block size

#pragma force_parallel
for m = 0..B..size(x,0)-1
    for n = 0..B..size(x,1)-1
        A = x[m..m+B-1,n..n+B-1]            % creates a dynamically allocated matrix
        y[m..m+B-1,n..n+B-1] = transpose(A) % matrix transpose
    end
end
```
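The tile-wise transposition itself can be sketched in plain Python (a serial reference version on nested lists; names are illustrative):

```python
def transpose_blocks(x, B):
    # Transpose every BxB tile of the 2D nested list x (serial reference version)
    rows, cols = len(x), len(x[0])
    y = [[0] * cols for _ in range(rows)]
    for m in range(0, rows, B):       # tile origin, row direction
        for n in range(0, cols, B):   # tile origin, column direction
            for i in range(B):
                for j in range(B):
                    y[m + i][n + j] = x[m + j][n + i]
    return y

x = [[r * 4 + c for c in range(4)] for r in range(4)]
y = transpose_blocks(x, 2)   # 2x2 tiles, each transposed in place
```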

### Memory models

To accommodate the widest range of algorithms, two memory models are currently provided (more may be added in the future).

**Concurrent memory model:** In the concurrent memory model, the computation device (e.g. GPU) autonomously manages a separate memory heap that is reserved for dynamic objects. The size of the heap can be configured in Quasar and is typically 32MB. The concurrent memory model is extremely efficient when many threads (e.g. > 512) request dynamic memory at the *same time*. The memory allocation is done by a specialized parallel allocation algorithm that differs significantly from traditional sequential allocators.

For efficiency, there are some internal limitations on the size of the allocated blocks:

- The minimum size is 1024 bytes (everything smaller is rounded up to 1024 bytes). The minimum size also limits the number of objects that can be allocated.
- The maximum size is 32768 bytes. For larger allocations, please see the *cooperative memory model*.

**Cooperative memory model:** In the cooperative memory model, the kernel function requests memory directly from the Quasar allocator. This way, there are no limitations on the size of the allocated memory. Also, the allocated memory is automatically garbage collected. Because the GPU cannot launch callbacks to the CPU, this memory model requires the kernel function to be executed on the CPU.

Advantages:

- The maximum block size and the total amount of allocated memory only depend on the available system resources.

Limitations:

- The Quasar memory allocator uses locking (to a limited extent), so simultaneous memory allocations on all processor cores may be expensive.
- The memory is disposed only when the kernel function exits. This is done to internally limit the number of callbacks from kernel function code to host code. Suppose that you have a `1024x1024` grayscale image and that 256 bytes are allocated per thread: this would require `256MB` of RAM! In such cases, you should use the concurrent memory model (which does not have this problem).

*Selection between the memory models.*

### Features

- Device functions can also use dynamic memory. These functions may even return objects that are dynamically allocated.
- **The following built-in functions are supported and can now be used from within kernel and device functions:** `zeros`, `czeros`, `ones`, `uninit`, `eye`, `copy`, `reshape`, `repmat`, `shuffledims`, `seq`, `linspace`, `real`, `imag`, `complex`, the mathematical functions, matrix/matrix multiplication and matrix/vector multiplication.

### Performance considerations

- *Global memory access*: code relying on dynamic memory may be slow (for linear filters on GPU: 4x-8x slower), not because of the allocation algorithms, but because of the global memory accesses. However, it all depends on what you want to do: for example, for non-overlapping block-based processing (e.g., blocks of a fixed size), dynamic kernel memory is an excellent choice.
- *Static vs. dynamic allocation*: when the size of the matrices is known in advance, static allocation (e.g. outside the kernel function) may be used as well. The dynamic allocation approach relieves the programmer from writing code to pre-allocate memory and to calculate its size as a function of the data dimensions. The cost of calling the functions `uninit` and `zeros` is negligible compared to the global memory access times (one memory allocation is comparable to 4-8 memory accesses on average; 16-32 bytes is still small compared to the typical sizes of allocated memory blocks). Because dynamic memory is disposed, whenever possible, as soon as a particular thread exits, the maximum amount of dynamic memory in use at any given time is much smaller than the amount of memory required for pre-allocation.
- Use the `vecX` types for vectors of length 2 to 16 whenever your algorithm allows it. This completely avoids using global memory, by using registers instead. Once a vector of length 17 is created, it is allocated as dynamic kernel memory.
- Avoid writing code that leads to thread divergence: in CUDA, instructions execute in warps of 32 threads, and a warp must execute every instruction together. Control flow instructions (`if`, `match`, `repeat`, `while`) can negatively affect performance by causing threads of the same warp to diverge, that is, to follow different execution paths. The different execution paths must then be serialized, because all threads of a warp share a program counter; consequently, the total number of instructions executed for this warp increases. When all the different execution paths have completed, the threads converge back to the same execution path.

To obtain the best performance in cases where the control flow depends on the block position (`blkpos`), the controlling condition should be written so as to minimize the number of divergent warps.

## Nested parallelism

It is desirable to be able to specify parallelism at all stages of the computation: for example, within a parallel loop, it must be possible to declare another parallel loop, and so on. Until now, parallel loops could only be placed at the top level (in a host function), and multiple levels of parallelism had to be expressed using multi-dimensional perfect for-loops. A new feature is that `__kernel__` and `__device__` functions can now also use the `parallel_do` (and `serial_do`) functions. The top-level host function may for example spawn 8 threads, each of which in turn spawns 64 threads (after some algorithm-specific initialization steps). This approach has several advantages:

- More flexibility in expressing the algorithms.
- The nested kernel functions are (or will be) mapped onto CUDA dynamic parallelism on Kepler devices such as the GTX 780 and GTX Titan. (Note: this requires one of these cards to be effective.)
- When a `parallel_do` is placed in a `__device__` function that is called directly from the host code (CPU computation device), the `parallel_do` is accelerated using OpenMP.
- The high-level matrix operations from the previous section automatically take advantage of the nested parallelism.

Notes:

- There is no guarantee that the CPU/GPU will effectively perform the nested operations in parallel. However, future GPUs may be expected to become more efficient at handling parallelism on different levels.

Limitations:

- Nested kernel functions may not use shared memory (they can access the shared memory through the calling function, however), and they may not use thread synchronization either.
- Currently, only one built-in parameter is supported for nested kernel functions: `pos` (and not `blkpos`, `blkidx` or `blkdim`).

## Example

The following program showcases the nested parallelism, the improved type inference and the automatic usage of dynamic kernel memory:

```
function y = gamma_correction(x, gamma)
    y = uninit(size(x))
    % Note: #pragma force_parallel is required here, otherwise
    % the compiler will just use dynamic typing.
    #pragma force_parallel
    for m=0..size(x,0)-1
        for n=0..size(x,1)-1
            for k=0..size(x,2)-1
                y[m,n,k] = 512 * (x[m,n,k]/512)^gamma
            end
        end
    end
end

function [] = main()
    x = imread("lena_big.tif")
    y = gamma_correction(x, 0.5)

    #pragma force_parallel
    for m=0..size(y,0)-1
        for c=0..size(y,2)-1
            row = y[m,:,c]
            y[m,:,c] = row[numel(row)-1..-1..0]
        end
    end

    imshow(y)
end
```
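The arithmetic of this example can be checked with a small serial Python sketch (nested lists instead of Quasar cubes; function names are illustrative):

```python
def gamma_correction(x, gamma):
    # Elementwise 512 * (v/512)**gamma over a nested-list image (rows x cols x channels)
    return [[[512 * (v / 512) ** gamma for v in px] for px in row] for row in x]

def flip_rows(y):
    # Reverse each image row, like the slice assignment row[numel(row)-1..-1..0]
    return [list(reversed(row)) for row in y]

img = [[[256.0] * 3, [512.0] * 3]]   # one row, two pixels, three channels
out = flip_rows(gamma_correction(img, 0.5))
```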

The pragmas are added here just to illustrate that the corresponding loops need to be parallelized; using them is optional. Note:

- The complete program contains no explicit typing.
- The return type of `gamma_correction` is not specified either; nevertheless, the compiler is able to deduce that `type(y,"cube")` holds.
- The second for-loop (inside the `main` function) uses slicing operations (`:` and `..`). The assignment `row = y[m,:,c]` leads to dynamic kernel memory allocation.
- The vector operations inside the second for-loop automatically express nested parallelism and can be mapped onto CUDA dynamic parallelism.

## OpenCL And Function Pointers

## Introduction

Quasar has a nice mechanism to compose algorithms in a generic way, based on function types.

For example, we can define a function that reads an RGB color value from an image as follows:

` RGB_color_image = __device__ (pos : ivec2) -> img[pos[0], pos[1], 0..2]`

Now suppose we are dealing with images in some imaginary Yab color space, where

```
Y = R + 2*G + B        G = (Y + a + b)/4
a = G - R       or     R = (Y - 3*a + b)/4
b = G - B              B = (Y + a - 3*b)/4
```
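These forward/inverse pairs can be verified numerically; a quick sketch in Python (function names are illustrative):

```python
def rgb_to_yab(r, g, b):
    # Forward transform (left-hand column above)
    return (r + 2 * g + b, g - r, g - b)

def yab_to_rgb(y, a, b):
    # Inverse transform (right-hand column above)
    return ((y - 3 * a + b) / 4, (y + a + b) / 4, (y + a - 3 * b) / 4)

# Round trip: converting to Yab and back recovers the RGB value
rgb = (0.2, 0.5, 0.8)
recovered = yab_to_rgb(*rgb_to_yab(*rgb))
```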

we can define a similar read-out function that automatically converts Yab to RGB:

```
RGB_Yab_image = __device__ (pos : ivec2) ->
    [dotprod(img[pos[0],pos[1],0..2], [1/4,-3/4,1/4]),
     dotprod(img[pos[0],pos[1],0..2], [1/4,1/4,1/4]),
     dotprod(img[pos[0],pos[1],0..2], [1/4,1/4,-3/4])]
```

Consequently, both functions have the same type:

```
RGB_color_image : [__device__ ivec2 -> vec3]
RGB_Yab_image : [__device__ ivec2 -> vec3]
```

Then we can build an algorithm that works with both RGB and Yab images:

```
function [] = __kernel__ brightness_contrast_enhancer(
        brightness : scalar,
        contrast : scalar,
        x : [__device__ ivec2 -> vec3],
        y : cube,
        pos : ivec2)
    y[pos[0],pos[1],0..2] = x(pos)*contrast + brightness
end

match input_fmt with
| "RGB" ->
    fn_ptr = RGB_color_image
| "Yab" ->
    fn_ptr = RGB_Yab_image
| _ ->
    error("Sorry, input format is currently not defined")
end

y = zeros(512,512,3)
parallel_do(size(y),brightness,contrast,fn_ptr,y,brightness_contrast_enhancer)
```

Although this approach is very convenient and also allows different algorithms to be constructed easily (for example for the YCbCr and Lab color spaces), there are a number of disadvantages:

- The C implementation typically requires the use of **function pointers**. However, OpenCL currently does not support function pointers, so this kind of program cannot be executed on OpenCL-capable hardware.
- Although CUDA supports function pointers, in some circumstances they result in an internal compiler error (an NVCC bug). These cases are very hard to fix.
- In CUDA, kernel functions that use function pointers may be 2x slower than the same code without function pointers (e.g. with the function inlined).

## Manual solution

The (manual) solution is to use **function specialization**:

```
match input_fmt with
| "RGB" ->
    kernel_fn = $specialize(brightness_contrast_enhancer, fn_ptr==RGB_color_image)
| "Yab" ->
    kernel_fn = $specialize(brightness_contrast_enhancer, fn_ptr==RGB_Yab_image)
| _ ->
    error("Sorry, input format is currently not defined")
end

y = zeros(512,512,3)
parallel_do(size(y),brightness,contrast,y,kernel_fn)
```

Here, the function `brightness_contrast_enhancer` is specialized for `RGB_color_image` and `RGB_Yab_image`, respectively. These functions are then simply substituted into the kernel function code, effectively eliminating the function pointers.

## Automatic solution

The Quasar compiler now has an option `UseFunctionPointers`, which can take the following values:

- *Always*: function pointers are always used (this causes more compact code to be generated).
- *SmartlyAvoid*: function pointers are avoided where possible (less compact code).
- *Error*: an error is generated if a function pointer cannot be avoided.

In the example of the manual solution, the function pointer cannot be avoided. However, the code block can be rewritten as follows:

```
y = zeros(512,512,3)
match input_fmt with
| "RGB" ->
parallel_do(size(y),brightness,contrast,RGB_color_image,y,brightness_contrast_enhancer)
| "Yab" ->
parallel_do(size(y),brightness,contrast,RGB_Yab_image,y,brightness_contrast_enhancer)
| _ ->
error("Sorry, input format is currently not defined")
end
```

The compiler is then able to automatically specialize the function `brightness_contrast_enhancer` for `RGB_color_image` and `RGB_Yab_image` (in the *SmartlyAvoid* and *Error* modes).

## Generic Programming – Opening New Frontiers

To solve a limitation of Quasar, in which `__kernel__` functions in some circumstances needed to be duplicated for different container types (e.g. `vec[int8]`, `vec[scalar]`, `vec[cscalar]`), there is now finally support for generic programming.

Consider the following program, which extracts the diagonal elements of a matrix and which is supposed to deal with arguments of either type `mat` or type `cmat`:

```
function y : vec = diag(x : mat)
    assert(size(x,0)==size(x,1))
    N = size(x,0)
    y = zeros(N)
    parallel_do(size(y), __kernel__
        (x:mat, y:vec, pos:int) -> y[pos] = x[pos,pos])
end

function y : cvec = diag(x : cmat)
    assert(size(x,0)==size(x,1))
    N = size(x,0)
    y = czeros(N)
    parallel_do(size(y), __kernel__
        (x:cmat, y:cvec, pos:int) -> y[pos] = x[pos,pos])
end
```

Although function overloading here solves part of the problem (at least from the user’s perspective), the function `diag` is still duplicated. In general, we would like to write functions that “work” irrespective of their underlying type.

The solution is to use *generic programming*. In Quasar, this is fairly easy to do:

```
function y = diag[T](x : mat[T])
    assert(size(x,0)==size(x,1))
    N = size(x,0)
    y = vec[T](N)
    parallel_do(size(y), __kernel__
        (pos) -> y[pos] = x[pos,pos])
end
```

As you can see, the types in the function signature have simply been omitted. The same holds for the `__kernel__` function.

In this example, the type parameter `T` is required because it is needed for the construction of the vector `y` (through the `vec[T]` constructor). If `T==scalar`, `vec[T]` reduces to `zeros`, while if `T==cscalar`, `vec[T]` reduces to `czeros` (a complex-valued zero vector). In case the type parameter is not required, it can be dropped, as in the following example:

```
function [] = copy_mat(x, y)
    assert(size(x)==size(y))
    parallel_do(size(y), __kernel__
        (pos) -> y[pos] = x[pos])
end
```

Remarkably, this is still a generic function in Quasar; no special syntax is needed here.

Note that in previous versions of Quasar, all kernel function parameters needed to be explicitly *typed*. This is no longer the case: the compiler deduces the parameter types from the calls to `diag` by applying the internal type inference mechanism. The same holds for `__device__` functions.

When `diag` is called with two different types of parameters (for example once with `x:mat` and a second time with `x:cmat`), the compiler will make two generic instantiations of `diag`. Internally, the compiler may either:

- keep the generic definition (*type erasure*):

```
function y = diag(x)
```

- or make two instances of `diag` (*reification*):

```
function y : vec = diag(x : mat)
function y : cvec = diag(x : cmat)
```

The compiler combines these two techniques in a transparent way, such that:

- for kernel functions, explicit code is generated for the specific data types;
- for less performance-critical host code, type erasure is used (to avoid code duplication).
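As a loose analogue from a dynamic language's perspective (not Quasar machinery), reification resembles keeping one dedicated body per concrete type next to a generic fallback, which Python's `functools.singledispatch` can mimic:

```python
from functools import singledispatch

# One generic (type-erased) fallback body...
@singledispatch
def describe(x):
    return "generic"

# ...plus dedicated ("reified") bodies, one per concrete type.
@describe.register
def _(x: int):
    return "int instance"

@describe.register
def _(x: complex):
    return "complex instance"
```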

The selection of the code to run is made at *compile-time*, so correspondingly the Spectroscope Debugger needs special support for this.

Of course, when the `diag` function is called with a variable whose type cannot be determined at compile time, a compiler error is generated:

```
The type of the arguments ('op') needs to be fully defined for this function call!
```

This is then similar to the original handling of kernel functions.

## Extensions

There are several extensions possible to fine-tune the behavior of the generic code.

### Type classes

Type classes allow the type range of the input parameters to be narrowed. For example:

```
function y = diag(x : [mat|cmat])
```

This construction only allows variables of the types `mat` and `cmat` to be passed to the function. This is useful when it is already known in advance which types are relevant (in this case a real-valued or complex-valued matrix).

Equivalently, type class aliases can be defined. The type:

```
type AllInt : [int|int8|int16|int32|uint8|uint32|uint64]
```

groups all integer types that exist in Quasar. Type classes are also useful for defining reductions:

```
type RealNumber: [scalar|cube|AllInt|cube[AllInt]]
type ComplexNumber: [cscalar|ccube]
reduction (x : RealNumber) -> real(x) = x
```

Without type classes, the reduction would need to be written four times, once for each element of the `RealNumber` type class.

### Type parameters

### Levels of genericity

There are three levels of genericity (for which generic instances can be constructed):

- *Type constraints*: a type constraint binds the type of an input argument of the function.
- *Value constraints*: a value constraint gives an explicit value to an input argument.
- *Logic predicates* that are not type or value constraints.

As an example, consider the following generic function:

```
function y = __device__ soft_thresholding(x, T)
if abs(x)>=T
y = (abs(x) - T) * (x / abs(x))
else
y = 0
endif
end
reduction x : scalar -> abs(x) = x where x >= 0
```

Now, we can make a specialization of this function to a specific type:

```
soft_thresholding_real = $specialize(soft_thresholding,
type(x,"scalar") && type(T, "scalar"))
```

But also for a fixed threshold:

`soft_thresholding_T = $specialize(soft_thresholding,T==10)`

We can even go one step further and specify that `x>0`

:

`soft_thresholding_P = $specialize(soft_thresholding,x>0)`

Everything combined, we get:

```
soft_thresholding_E = $specialize(soft_thresholding,
type(x,"scalar") && type(T,"scalar") && T==10 && x>0)
```

Based on this knowledge (and the above reduction), the compiler will then generate the following function:

```
function y = __device__ soft_thresholding_E(x : scalar, T : scalar)
if x >= 10
y = x - 10
else
y = 0
endif
end
```

There are two ways of performing this type of specialization:

- Using the `$specialize` function. Note that this approach is only recommended for testing.
- Alternatively, the specializations can be performed automatically, using the `assert` function from the calling function:

```
function [] = __kernel__ denoising(x : mat, y : mat)
assert(x[pos] > 0)
y[pos] = soft_thresholding(x[pos], 10)
end
```

## Reductions with where-clauses

Recently, reduction where-clauses have been implemented. The where clause is a condition that determines at runtime (or at compile time) whether a given reduction may be applied. There are two main use cases for where clauses:

- To avoid invalid results: In some circumstances, applying certain reductions may lead to invalid results (for example a real-valued sqrt function applied to a complex-valued input, derivative of tan(x) in pi/2…)
- For optimization purposes.

For example:

```
reduction (x : scalar) -> abs(x) = x where x >= 0
reduction (x : scalar) -> abs(x) = -x where x < 0
```

In case the compiler has no information on the sign of x, the following mapping is applied:

`abs(x) -> x >= 0 ? x : (x < 0 ? -x : abs(x))`

And the evaluation of the where clauses of the reduction is performed at runtime. However, when the compiler has information on x (e.g. `assert(x <= -1)`

), the mapping will be much simpler:

`abs(x) -> -x`

Note that the `abs(.)`

function is a trivial example, in practice this could be more complicated:

```
reduction (x : scalar) -> some_op(x, a) = superfast_op(x, a) where 0 <= a && a < 1
reduction (x : scalar) -> some_op(x, a) = accurate_op(x, a) where 1 <= a
```
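At a call site, an assertion then allows the compiler to select the appropriate variant at compile time. A sketch, reusing the hypothetical `some_op` from above:

```
assert(0 <= a && a < 1)
y = some_op(x, a)  % where-clause resolved at compile time: superfast_op(x, a) is substituted
```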

## Assert the Truth, Unassert the Untruth

Quasar has a logic system that is centered around the `assert` function and that can be useful for several reasons:

- Assertions can be used for testing a specified condition, resulting in a runtime error (`error`) if the condition is not met:
`assert(positiveNumber > 0, "positiveNumber became negative while it shouldn't")`

- Assertions can also help the optimization system. For example, the type of variables can be “asserted” using type assertions:
`assert(type(cubicle, "cube[cube]"))`
The compiler then verifies the type of the variable `cubicle`; if it is not known at this stage, the knowledge is inserted into the compiler, resulting in the compilation message:
`assert.q - Line 4: [info] 'cubicle' is now assumed to be of type 'cube[cube]'.`

At runtime, the assert function just behaves like usual, resulting in an error if the condition is not met.

- Assertions are useful in combination with reduction where-clauses:
`reduction (x : scalar) -> abs(x) = x where x >= 0`
If we previously assert that `x` is a positive number, then this assertion will eliminate the runtime check for `x >= 0`.
- Assertions can be used to cut branches:

```
assert(x > 0 && x < 1)
if x < 0
...
endif
```

Here, the compiler will determine that the `if`-block will never be executed, so it will remove the entire content of the `if`-block, resulting in the compilation message:
`assert.q - Line 10: [info] if-branch is cut due to the assertions `x > 0 && x < 1`.`
Similarly, pre-processor branches can be constructed with this approach.
- Assertions can be combined with generic function specialization. More about this later.

It is not possible to fool the compiler. For example, if the compiler can determine at compile-time that an assertion will never be met, an error is generated, and it will not even be possible to run the program.

## Logic system

The Quasar compiler now has a propositional logic system that is able to “reason” about previous assertions. Also, different assertions can be combined using the logical operators AND (`&&`), OR (`||`) and NOT (`!`).

There are three meta functions that help with assertions:

- `$check(proposition)` checks whether `proposition` can be satisfied, given the previous set of assertions, resulting in one of three possible values: `"Valid"`, `"Satisfiable"` or `"Unsatisfiable"`.
- `$assump(variable)` lists all assertions that are currently known about a variable, including the implicit type predicates that are obtained through type inference. Note that the result of `$assump` is an expression, so for visualization it may be necessary to convert it to a textual representation using `$str(.)` (to prevent the expression from being evaluated).
- `$simplify(expr)` simplifies logic expressions based on the knowledge that is inserted through assertions.

## Types of assertions

There are different types of assertions that can be combined in a transparent way.

### Equalities

The most simple cases of assertions are the equality assertions `a==b`

. For example:

```
symbolic a, b
assert(a==4 && b==6)
assert($check(a==5)=="Unsatisfiable")
assert($check(a==4)=="Valid")
assert($check(a!=4)=="Unsatisfiable")
assert($check(b==6)=="Valid")
assert($check(b==3)=="Unsatisfiable")
assert($check(b!=6)=="Unsatisfiable")
assert($check(a==4 && b==6)=="Valid")
assert($check(a==4 && b==5)=="Unsatisfiable")
assert($check(a==4 && b!=6)=="Unsatisfiable")
assert($check(a==4 || b==6)=="Valid")
assert($check(a==4 || b==7)=="Valid")
assert($check(a==3 || b==6)=="Valid")
assert($check(a==3 || b==5)=="Unsatisfiable")
assert($check(a!=4 || b==6)=="Valid")
print $str($assump(a)),",",$str($assump(b)) % prints (a==4),(b==6)
```

Here, we use `symbolic` to declare symbolic variables (variables that are not to be “evaluated”, i.e., translated into their actual value, since they do not have a specific value). Next, the function `assert` is used to test whether the `$check(.)` function works correctly (self-checking).

### Inequalities

The propositional logic system can also work with **inequalities**:

```
symbolic a
assert(a>2 && a<4)
assert($check(a>1)=="Valid")
assert($check(a>3)=="Satisfiable")
assert($check(a<3)=="Satisfiable")
assert($check(a<2)=="Unsatisfiable")
assert($check(a>4)=="Unsatisfiable")
assert($check(a<=2)=="Unsatisfiable")
assert($check(a>=2)=="Valid")
assert($check(a<=3)=="Satisfiable")
assert($check(!(a>3))=="Satisfiable")
```

### Type assertions

As in the above example:

`assert(type(cubicle, "cube[cube]"))`

Please note that assertions should not be used with the intention of variable type declaration. To declare the type of certain variables there is a more straightforward approach:

`cubicle : cube[cube]`

Type assertions *can* be used in functions that accept generic `??` arguments, for example to ensure that a `cube[cube]` is passed depending on another parameter.
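For instance, a minimal sketch of this use case (the function `process` and its parameters are hypothetical):

```
function [] = process(data : ??, multichannel)
if multichannel
% in this mode, a cell matrix of cubes is required
assert(type(data, "cube[cube]"))
endif
end
```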

### User-defined properties of variables

It is also possible to define “properties” of variables, using a symbolic declaration. For example:

`symbolic is_a_hero, Jan_Aelterman`

Then you can assert:

`assert(is_a_hero(Jan_Aelterman))`

Correspondingly, if you perform the test:

```
print $check(is_a_hero(Jan_Aelterman)) % Prints: Valid
print $check(!is_a_hero(Jan_Aelterman)) % Prints: Unsatisfiable
```

If you then try to assert the opposite:

`assert(!is_a_hero(Jan_Aelterman))`

The compiler will complain:

```
assert.q - Line 119: NO NO NO I don't believe this, can't be true!
Assertion '!(is_a_hero(Jan_Aelterman))' is contradictory with 'is_a_hero(Jan_Aelterman)'
```

## Unassert

In some cases, it is necessary to undo certain assertions that were previously made. For this task, the function `unassert` can be used:

`unassert(propositions)`

This function only has a meaning at compile-time; at run-time nothing needs to be done.

For example, if you wish to reconsider the assertion `is_a_hero(Jan_Aelterman)`

you can write:

```
unassert(is_a_hero(Jan_Aelterman))
print $check(is_a_hero(Jan_Aelterman)) % Prints: most likely not
print $check(!is_a_hero(Jan_Aelterman)) % Prints: very likely
```

Alternatively you could have written:

```
unassert(!is_a_hero(Jan_Aelterman))
print $check(is_a_hero(Jan_Aelterman)) % Prints: Valid
print $check(!is_a_hero(Jan_Aelterman)) % Prints: Unsatisfiable
```

## Boundary access modes in Quasar

In earlier versions of Quasar, the boundary extension modes (such as `'mirror`

, `'circular`

) only affected the `__kernel__`

and `__device__`

functions.

To improve transparency, this has recently changed. This has the consequence that the following **get** access modes needed to be supported by the runtime:

```
(no modifier) % =error (default) or zero extension (kernel, device function)
safe % zero extension
mirror % mirroring near the boundaries
circular % circular boundary extension
clamped % keep boundary values
unchecked % results undefined when reading outside
checked % error when reading outside
```
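For example (a small sketch; the file name is just a placeholder), the get access mode is selected with a type modifier and determines the result of out-of-bounds reads:

```
A : mat'clamped = imread("image.tif")[:,:,1]
print A[-1,-1]  % clamped read: the boundary value A[0,0] is returned
B : mat'mirror = A
print B[-1,-1]  % mirrored read near the boundary, no error is raised
```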

Implementation details: there is a bit of work involved, because it needs to be done for all data types (`int8`, `int16`, `int32`, `uint8`, `uint16`, `uint32`, `single`, `double`, UDT, `object`, …), for different dimensions (`vec`, `mat`, `cube`), and for both matrix getters and slice accessors. This is perhaps the reason that you will not see this feature implemented in other programming languages: 5 x 10 x 3 x 2 = 300 combinations (= functions to be written). Luckily, the use of generics in C# alleviates the problem, reducing it (after a bit of research) to 6 combinations (!) where each algorithm has 3 generic parameters. The same holds for the CUDA computation engine.

Note that the modifier `unchecked` should be used with care: only when you are 100% sure that the function is working properly and that there are no memory accesses outside the matrix. A good approach is to use `checked` first, and once you find that no errors occur, switch to `unchecked` in order to gain a little speed-up (typically 20%-30% on memory accesses).

Now, I would like to point out that the access modifiers are not part of the type of the object itself, as the following example illustrates:

```
A : mat'circular = ones(10, 10)
B = A % boundary access mode: B : mat'circular
C : mat'unchecked = A
```

Here, both B and C will hold a reference to the matrix A. However, B will copy the access modifier from A (through type inference) and C will override the access modifier of A. The result is that the access modifiers for A, B and C are circular, circular and unchecked, respectively. Even though there is only one matrix involved, there are effectively two ways of accessing the data in this matrix.

Now, to make things even more complicated, there are also **put** access modes. But for implementation complexity reasons (and, more importantly, to avoid unforeseen data races in parallel algorithms), the number of options has been reduced to three:

```
safe (default) % ignore writing outside the boundaries
unchecked % writing outside the boundaries = crash!
checked % error when writing outside the boundaries
```

This means that `'circular`, `'mirror`, and `'clamped` are mapped to `'safe` when writing values to the matrix. For example:

```
A : vec'circular = [1, 2, 3]
A[-1] = 0 % Value neglected
```

The advantage will then be that you can write code such as:

```
A : mat'circular = imread("lena_big.tif")[:,:,1]
B = zeros(size(A))
B[-127..128,-127..128] = A[-127..128,-127..128]
```

## Construction of Cell matrices

In Quasar, it is possible to construct cell vectors/matrices, similar to MATLAB:

`A = `1,2,3j,zeros(4,4)´`

(Matlab-syntax uses braces as in `{1,2,3j,zeros(4,4)}`

).

Now, I have found that there are **three problems** with the cell construction syntax:

- The prime (´) cannot be represented with the ANSI character set and requires UTF-8 encoding. The Redshift code editor uses UTF-8 by default, but when editing Quasar code with other editors, problems can occur.
- The prime (´) is not present on QWERTY keyboards (in contrast to the Belgian AZERTY keyboard). I suspect this is one of the reasons that the cell construction syntax is rarely used.
- Even the documentation system has problems with the prime symbol, which may be why you cannot see it right now. In other words, it is an *evil* symbol and all evil should be extinct.

To solve these problems, the Quasar parser now also accepts an apostrophe `'`

(ASCII character 39, or in hex 27h) for closing:

`A = `1,2,3j,zeros(4,4)'`

Because the apostrophe character exists in ANSI and on QWERTY keyboards, the use of cell matrix construction is greatly simplified.

**Note**: the old-fashioned alternative was to construct cell matrices using the function `cell`

, or `vec[vec]`

, `vec[mat]`

, `vec[cube]`

, … For example:

```
A = cell(4)
A[0] = 1j
A[1] = 2j
A[2] = 3j
A[3] = 4j
```

Note that this notation is not very elegant, compared to ``A = `1j,2j,3j,4j'``. Also, it does not allow the compiler to fully determine the type of `A` (the compiler will find `type(A) == "vec[??]"` rather than `type(A) == "cvec"`). In the following section, we will discuss type inference in more detail.

## Type inference

Another new feature of the compiler is that it attempts to infer the type of cell matrices. In earlier versions, all cell matrices defined with the above syntax had type `vec[??]`. Now this has changed, as illustrated by the following example:

```
a = `[1, 2],[1, 2, 3]'
print type(a) % Prints vec[vec[int]]
b = `(x->2*x), (x->3*x), (x->4*x)'
print type(b) % Prints [ [??->??] ]
c = ` `[1, 2],[1,2]',`[1, 2, 3],[4, 5, 6]' '
print type(c) % Prints vec[vec[vec[int]]]
d = ` [ [2, 1], [1, 2] ], [ [4, 3], [3, 4] ]'
print type(d) % Prints vec[mat]
e = `(x:int->2*x), (x:int->3*x), (x:int->4*x)'
print type(e) % Prints vec[ [int->int] ]
```

This allows cell matrices that are constructed with the above syntax to be used from kernel functions. A simple example:

```
d = `eye(4), ones(4,4)'
parallel_do(size(d[0]), d,
__kernel__ (d : vec[mat], pos : ivec2) -> d[0][pos] += d[1][pos])
print d[0]
```

The output is:

```
[ [2,1,1,1],
[1,2,1,1],
[1,1,2,1],
[1,1,1,2] ]
```

## Dynamic Memory Allocation on CPU/GPU

In some algorithms, it is desirable to dynamically allocate memory inside `__kernel__` or `__device__` functions, for example:

- When the algorithm processes blocks of data for which the maximum size is not known in advance.
- When the amount of memory that an individual thread uses is too large to fit in the shared memory or in the registers. The shared memory of the GPU is typically 32K, which has to be shared between all threads in one block. For single-precision floating point vectors `vec` or matrices `mat` and for 1024 threads per block, the maximum amount of shared memory per thread is 32K/(1024*4) = 8 elements. The size of the register memory is of the same order: 32K for CUDA compute architecture 2.0.

## Example: brute-force median filter

So, suppose that we want to calculate a *brute-force median filter* for an image (note that there exist much more efficient algorithms based on image histograms, see `immedfilt.q`). The filter could be implemented as follows:

- we extract a local window per pixel in the image (for example of size `13x13`);
- the local window is then passed to a generic median function, which sorts the intensities in the local window and returns the median.

The problem is that there may not be enough register memory for holding a local window of this size. 1024 threads x 13 x 13 x 4 = 692K!

The solution is then to use a new Quasar runtime & compiler feature: *dynamic kernel memory*. In practice, this is actually very simple: first, ensure that the compiler setting “kernel dynamic memory support” is enabled. Second, matrices can then be allocated through the regular matrix functions `zeros`, `complex(zeros(.))`, `uninit` and `ones`.

For the median filter, the implementation could be as follows:

```
% Function: median
% Computes the median of an array of numbers
function y = __device__ median(x : vec)
% to be completed
end
% Function: immedfilt_kernel
% Naive implementation of a median filter on images
function y = __kernel__ immedfilt_kernel(x : mat, y : mat, W : int, pos : ivec2)
% Allocate dynamic memory (note that the size depends on W,
% which is a parameter for this function)
r = zeros((W*2)^2)
for m=-W..W-1
for n=-W..W-1
r[(W+m)*(2*W)+W+n] = x[pos+[m,n]]
end
end
% Compute the median of the elements in the vector r
y[pos] = median(r)
end
```

For `W=4`

this algorithm is illustrated in the figure below:

**Figure 1**. dynamic memory allocation inside a kernel function.
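The `median` device function is left open above; one possible sketch is a simple selection sort (assuming the MATLAB-style helpers `numel` and `int` are available; fine for small windows, but not the fastest choice):

```
function y = __device__ median(x : vec)
n = numel(x)
% selection sort of the elements of x
for i=0..n-2
k = i
for j=i+1..n-1
if x[j] < x[k]
k = j
endif
end
% swap x[i] and x[k]
t = x[i]
x[i] = x[k]
x[k] = t
end
y = x[int(n/2)]  % middle element of the sorted data
end
```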

## Parallel memory allocation algorithm

To support dynamic memory, a special parallel memory allocator was designed. The allocator has the following properties:

- The allocation/disposal is *distributed in space* and does not use lists/free-lists of any sort.
- The allocation algorithm is designed for speed and correctness.

To accomplish such a design, a number of limitations were needed:

- The minimal memory block that can be allocated is 1024 bytes. If the block size is smaller, the size is rounded up to 1024 bytes.
- When you try to allocate a block whose size is not a pow-2 multiple of 1024 bytes (i.e. `1024*2^N` with `N` integer), the size is rounded up to a pow-2 multiple of 1024 bytes.
- The maximal memory block that can be allocated is 32768 bytes (= 2^15 bytes). Taking into account that this can be done per pixel in an image, this is actually quite a lot!
- The total amount of memory that can be allocated from inside a kernel function is also limited (typically 16 MB). This restriction is mainly to ensure program correctness and to keep memory free for other processes in the system.

It is possible to compute an upper bound for the amount of memory that will be allocated at a given point in time. Suppose that we have a kernel function that allocates a cube of size `M*N*K`; then:

`max_memory = NUM_MULTIPROC * MAX_RESIDENT_BLOCKS * prod(blkdim) * 4*M*N*K`

Where `prod(blkdim)`

is the number of elements in one block, `MAX_RESIDENT_BLOCKS`

is the maximal number of resident blocks per multi-processor and `NUM_MULTIPROC`

is the number of multiprocessors.

So, suppose that we allocate a matrix of size `8x8`

on a Geforce 660Ti then:

`max_memory = 5 * 16 * 512 * 4 * 8 * 8 = 10.4 MB`

This is still much smaller than what would be needed if one would consider pre-allocation (in this case this number would depend on the image dimensions!)

## Comparison to CUDA malloc

CUDA has built-in `malloc(.)` and `free(.)` functions that can be called from device/kernel functions; however, after a few performance tests and seeing warnings on CUDA forums, I decided not to use them. This is the result of a comparison between the Quasar dynamic memory allocation algorithm and that of NVidia:

```
Granularity: 1024
Memory size: 33554432
Maximum block size: 32768
Start - test routine
Operation took 183.832443 msec [Quasar]
Operation took 1471.210693 msec [CUDA malloc]
Success
```

I obtained similar results for other tests. As you can see, the memory allocation is about **8 times** faster using the new approach than with the NVidia allocator.

## Why it is better to avoid dynamic memory

Even though the memory allocation is quite fast, to obtain the best performance, it is better to avoid dynamic memory:

- The main issue is that kernel functions using dynamic memory also require several read/write accesses to the global memory. Because dynamically allocated memory typically has a size of hundreds of KBs, the data will not fit into the cache (of size 16KB to 48KB). Correspondingly: *the cost of using dynamic memory is in the associated global memory accesses!*
- Please note that the compiler-level handling of dynamic memory is currently in development. As long as the memory is “consumed” locally, as in the above example, i.e. not written to external data structures, there should not be a problem.

## Matrix Conditional Assignment

In MATLAB, it is fairly simple to assign to a subset of a matrix, for example, the values that satisfy a given condition. For example, saturation can be obtained as follows:

```
A[A < 0] = 0
A[A > 1] = 1
```

In Quasar, this can be achieved with:

`A = saturate(A)`

However, the situation can be more complex, in which case there is no direct equivalent of the MATLAB syntax. For example,

`A[A > B] = C`

where `A`

, `B`

, `C`

are all matrices. The trick is to define a reduction (now in `system.q`

):

```
type matrix_type : [vec|mat|cube|cvec|cmat|ccube]
reduction (x : matrix_type, a : matrix_type, b, c) -> (x[a > b] = c) = (x += (c - x) .* (a > b))
reduction (x : matrix_type, a : matrix_type, b, c) -> (x[a < b] = c) = (x += (c - x) .* (a < b))
reduction (x : matrix_type, a : matrix_type, b, c) -> (x[a <= b] = c) = (x += (c - x) .* (a <= b))
reduction (x : matrix_type, a : matrix_type, b, c) -> (x[a >= b] = c) = (x += (c - x) .* (a >= b))
reduction (x : matrix_type, a : matrix_type, b, c) -> (x[a == b] = c) = (x += (c - x) .* (a == b))
reduction (x : matrix_type, a : matrix_type, b, c) -> (x[a != b] = c) = (x += (c - x) .* (a != b))
reduction (x : matrix_type, a : matrix_type, b, c) -> (x[a && b] = c) = (x += (c - x) .* (a && b))
reduction (x : matrix_type, a : matrix_type, b, c) -> (x[a || b] = c) = (x += (c - x) .* (a || b))
```

The first line defines a “general” matrix type, which is then used for the subsequent reductions. The reductions simply work on patterns of the form:

`x[a #op# b] = c`

and replace them by the appropriate Quasar expression. The last two reductions are a trick to get the conditional assignment also working with boolean expressions, such as:

`A[A<-0.1 || A>0.1] = 5`

Note that, on the other hand:

`B = A[A<-0.1]`

will currently result in a runtime error (this syntax is not defined yet).