Computer Vision Demo: 3D breakdancer

This demonstration shows real-time GPU processing of 8 Full HD video streams recording a breakdancer. A visual hull algorithm reconstructs a volumetric image from 2D silhouette images (each taken from a different camera view). A virtual touch sensor detects feet and hands touching the disco floor. The complete demo, consisting of I/O, processing and graphics, was developed entirely in Quasar.
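To give a flavour of the input this algorithm works with, here is a minimal silhouette-extraction sketch in Python/NumPy (the demo itself is written in Quasar; the function name and threshold here are illustrative): one binary foreground mask per camera, obtained by background subtraction.

```python
import numpy as np

def silhouette(frame, background, threshold=30.0):
    """Binary foreground mask: pixels that differ enough from the background."""
    diff = np.abs(frame.astype(np.float32) - background.astype(np.float32))
    return diff.max(axis=2) > threshold  # max over the color channels

# Each of the 8 cameras yields one such mask; the visual hull is the set of
# 3D points whose projection lands inside the mask in every view.
background = np.zeros((1080, 1920, 3), dtype=np.uint8)
frame = background.copy()
frame[400:700, 800:1000] = 200           # a synthetic "dancer" blob
print(silhouette(frame, background).sum(), "foreground pixels")
```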

Computer Vision Demos: SLAM, 3D reconstruction, object detection and tracking

Combining information from different sensors (sensor fusion); estimating your own location while building a map of your environment (Simultaneous Localization And Mapping: SLAM); detecting obstacles such as bikes and pedestrians. These are all core components of autonomous driving, drone steering and robotics.

Quasar excels in this application domain: the programming framework makes it easy to develop and test computation- and data-intensive algorithms on CPU and GPU, without being limited to a single platform. The applications shown in this video were developed on a desktop system (with Windows or Linux) and have been successfully deployed on NVidia mobile platforms such as the NVidia Jetson TK1 with an ARM processor.

You can read more about this Quasar application in our blog.

Simulating galaxy collisions

Newton knew it a long time ago: all objects attract each other to some extent or other. From the tiny scales of molecular dynamics to the gigantic masses of the stars, wonders exist all around us. A better understanding of the history and future of such systems teaches us about the world around us. However, there are a lot of particles interacting, and the number of pairwise interactions grows quadratically with the number of particles. We efficiently simulate such N-body systems using the power of general purpose GPU processing. As particles further away have drastically less influence, we cluster distant points together when evaluating the force on a given point.

The building and traversal of the octree is done in parallel using the Quasar framework. Quasar also allows multiple GPUs to work together, processing the dataset with little overhead. For example, two GTX 980s beat a single one by 70%!
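As a rough illustration of the clustering idea, not of the Quasar implementation itself, here is a generic Barnes-Hut style opening-angle test in Python; the constants are typical textbook values:

```python
import numpy as np

THETA = 0.5   # opening-angle threshold (typical Barnes-Hut value)
G = 1.0       # gravitational constant in simulation units
EPS = 1e-2    # softening length, avoids singular forces at tiny distances

def accel_from_cluster(pos, com, mass, size):
    """Acceleration at `pos` due to a particle cluster, if it is far enough."""
    d = com - pos
    r = np.sqrt(d @ d + EPS**2)
    if size / r < THETA:               # cluster looks small: approximate it
        return G * mass * d / r**3     # as a point mass at its centre of mass
    return None                        # too close: descend into the octree

# A compact, heavy cluster at distance ~10 passes the opening-angle test:
print(accel_from_cluster(np.zeros(3), np.array([10.0, 0.0, 0.0]), 1e4, 1.0))
```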

This video shows the simulation of a collision between two galaxies; over 80,000 particles are simulated.

View interpolation from stereo

In the future, 3D will be an integral part of our lives. The ultimate goal is the development of holographic 3D solutions. By studying these challenges from all possible angles, imec aims to bring the advantages of holographic 3D technology to domains such as culture, health and the economy.

The video shows an example of view interpolation: on the left the first view, on the right the second view, and in the middle the generated view. The top row shows the color frames; the bottom row shows the depth images.
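Purely as an illustration of the principle (the actual method also handles occlusions and blends both views), here is a naive depth-based forward-warping sketch in Python/NumPy; the shift direction depends on the disparity convention:

```python
import numpy as np

def interpolate_middle(left, disparity, alpha=0.5):
    """Forward-warp the left view by `alpha` times the per-pixel disparity."""
    h, w = disparity.shape
    middle = np.zeros_like(left)
    for y in range(h):
        for x in range(w):
            xt = int(round(x - alpha * disparity[y, x]))
            if 0 <= xt < w:
                middle[y, xt] = left[y, x]
    return middle

# Tiny example: a bright column shifts half its disparity towards the middle.
left = np.zeros((4, 8, 3), dtype=np.uint8)
left[:, 5] = 255
print(interpolate_middle(left, np.full((4, 8), 2.0))[:, :, 0])
```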

Real-time interactive parameter tuning for video processing

Quasar offers both the ability to write compact, computationally efficient code and to interact with rich, programmable graphical user interfaces. Using graphical sliders, parameters can easily be changed and different parameter configurations tested quickly. This approach gives the algorithm developer instant feedback, allowing him/her to improve the algorithm directly.
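An analogous workflow can be sketched in Python with OpenCV trackbars (the demo itself uses Quasar's own GUI facilities); here a single slider tunes the strength of a live blur filter:

```python
import cv2

cap = cv2.VideoCapture(0)                       # webcam input
cv2.namedWindow("tuned")
cv2.createTrackbar("blur", "tuned", 1, 30, lambda v: None)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    k = 2 * cv2.getTrackbarPos("blur", "tuned") + 1   # odd kernel size
    cv2.imshow("tuned", cv2.GaussianBlur(frame, (k, k), 0))
    if cv2.waitKey(1) == 27:                    # Esc quits
        break
cap.release()
cv2.destroyAllWindows()
```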

Real-time video processing

Quasar enables real-time video enhancement, such as contrast enhancement, artifact removal (e.g., blocking artifacts due to compression) and color adjustment.
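As a hedged example of one such enhancement step (not the Quasar algorithm itself), here is adaptive contrast enhancement (CLAHE) applied to the luminance channel with Python/OpenCV:

```python
import cv2

def enhance_contrast(bgr):
    """Apply CLAHE to the L channel of the image in Lab colour space."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    merged = cv2.merge((clahe.apply(l), a, b))
    return cv2.cvtColor(merged, cv2.COLOR_LAB2BGR)
```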

In the left half, the original unprocessed video sequence is shown. In the right half, the result of the Quasar video enhancement algorithm is displayed.

Cell tracking

A major issue for many biological researchers is the sheer volume of the data captured. It is produced far faster than it can possibly be processed manually, so automatic approaches are needed to extract the important information from the data. Recent research on surface wound healing, and on cell movement in general, requires tracking cells through time and space in order to evaluate the effectiveness of various stimuli.

The video shows a phase contrast microscopy time-lapse sequence of a cell culture. Initially, a small set of cells is manually annotated. After this annotation step, the selected cells are tracked through the sequence.
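A minimal sketch of the tracking step in Python/NumPy, assuming per-frame cell detections are available and using naive nearest-neighbour matching (the actual tracker is considerably more robust):

```python
import numpy as np

def track_step(tracks, detections):
    """Move every tracked cell to its nearest detection in the new frame."""
    updated = []
    for p in tracks:
        d = np.linalg.norm(detections - p, axis=1)   # distance to detections
        updated.append(detections[np.argmin(d)])     # pick the closest one
    return np.array(updated)

tracks = np.array([[10.0, 10.0], [50.0, 40.0]])      # annotated cells
detections = np.array([[12.0, 11.0], [48.0, 43.0], [90.0, 90.0]])
print(track_step(tracks, detections))
```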

MRI reconstruction

Like compressive sensing (CS), parallel imaging (pMRI) is an MRI acquisition technique that accelerates the scan by skipping parts of the planned image spectrum. Instead of using prior models to fill in the missing data, however, multiple receiving antennas are used within the MRI system (instead of just one). This allows the missing data to be reconstructed as the well-posed inversion of a linear system. That makes pMRI and CS complementary techniques that can be used jointly to great mutual benefit. The image shows a comparison between the reconstruction from the 4 coils separately, the reconstruction from the 4 coils using pMRI and CS separately, and the reconstruction from the 4 coils using pMRI and CS jointly, with the COMPASS technique.
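The well-posed inversion can be made concrete with a toy SENSE-style unfolding in Python/NumPy; the coil sensitivities here are random stand-ins for the real calibrated ones:

```python
import numpy as np

rng = np.random.default_rng(0)
# 4 coils, 2x undersampling: each aliased pixel mixes two true pixels,
# weighted by the (complex) coil sensitivities.
S = rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))
x_true = np.array([1.0 + 0.5j, -0.3 + 2.0j])   # the two true pixel values
y = S @ x_true                                  # aliased coil measurements

# Well-posed inversion: 4 equations, 2 unknowns, solved in least squares.
x_hat, *_ = np.linalg.lstsq(S, y, rcond=None)
print(np.allclose(x_hat, x_true))               # True
```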

You can read more about the GPU acceleration with Quasar in our blog.

The video shows the reconstruction result after each iteration of the method.

Real-time optical flow

Tracking an object through time is often still a hard task for a computer. One option is to use optical flow, which finds correspondences between two image frames: which pixel moves where? For more information, have a look at our blog post.

The video shows the color coded optical flow of the webcam input.
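A comparable visualisation can be sketched in Python with OpenCV's Farneback optical flow (the demo itself runs in Quasar): direction is mapped to hue and magnitude to brightness.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hsv = np.zeros((*gray.shape, 3), dtype=np.uint8)
    hsv[..., 0] = ang * 180 / np.pi / 2          # hue encodes direction
    hsv[..., 1] = 255
    hsv[..., 2] = cv2.normalize(mag, None, 0, 255,
                                cv2.NORM_MINMAX).astype(np.uint8)
    cv2.imshow("flow", cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR))
    prev_gray = gray
    if cv2.waitKey(1) == 27:                     # Esc quits
        break
cap.release()
cv2.destroyAllWindows()
```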

Ultrasound segmentation

Automatically annotating medical images can help to monitor the (change in) health of a patient. However, medical images are typically challenging to analyze automatically, due to noise, clutter, the presence of other organs, etc. Fortunately, medical applications can exploit prior knowledge, for example the type of noise or the expected appearance and shape of an organ. This prior knowledge can be incorporated in the segmentation method, albeit at a computational cost. In order to cope with this computational burden, we use GPU acceleration through Quasar.

The video shows a shape-regularized segmentation of an ultrasound sequence.
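As a very crude sketch of the idea in Python/SciPy, assuming an intensity-based data term and using morphological smoothing as a simple stand-in for the actual shape prior:

```python
import numpy as np
from scipy import ndimage

def segment(image, n_iter=10):
    """Alternate a data-driven threshold with morphological smoothing."""
    mask = image > image.mean()                 # initial guess
    for _ in range(n_iter):
        if mask.all() or not mask.any():        # degenerate split: stop
            break
        mu_in, mu_out = image[mask].mean(), image[~mask].mean()
        mask = image > (mu_in + mu_out) / 2     # data term
        mask = ndimage.binary_opening(mask, iterations=2)   # smooth the
        mask = ndimage.binary_closing(mask, iterations=2)   # contour
    return mask
```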

Interactive visualisation

A real-time lens blurring application made in Quasar, based on an RGB image and a depth image. A non-stationary blur is applied, where the blur parameters are calculated from the mouse position and the depth level at that position. The resulting effect is similar to using a digital camera with a very large aperture.
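A simplified sketch of the non-stationary blur in Python/SciPy, assuming the common approximation of blending between a small stack of pre-blurred images:

```python
import numpy as np
from scipy import ndimage

def lens_blur(rgb, depth, mouse_xy, max_sigma=8.0, n_levels=8):
    """Blend between pre-blurred copies according to |depth - focus depth|."""
    focus = depth[mouse_xy[1], mouse_xy[0]]     # depth under the mouse cursor
    sigmas = np.linspace(0.0, max_sigma, n_levels)
    stack = [rgb if s == 0 else ndimage.gaussian_filter(rgb, sigma=(s, s, 0))
             for s in sigmas]
    defocus = np.abs(depth - focus)             # larger = blurrier
    level = (defocus / (defocus.max() + 1e-6) * (n_levels - 1)).astype(int)
    out = np.zeros_like(rgb)
    for i in range(n_levels):                   # pick the matching blur level
        out[level == i] = stack[i][level == i]
    return out
```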

Volumetric rendering

This is a short demonstration of a 3D volumetric rendering algorithm written in Quasar. The algorithm uses volume ray casting with trilinear interpolation and simple lighting.
The video was recorded directly in MPEG-4 using the new Quasar video capturing technology.
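A minimal orthographic ray-casting sketch in Python/SciPy, with trilinear sampling and front-to-back compositing but without the lighting of the actual renderer:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def raycast(volume, n_steps=128):
    """March rays along the z axis and composite emission front to back."""
    d, h, w = volume.shape
    img = np.zeros((h, w))
    alpha = np.zeros((h, w))
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    for z in np.linspace(0, d - 1, n_steps):
        sample = map_coordinates(volume, [np.full_like(yy, z), yy, xx],
                                 order=1)        # trilinear interpolation
        a = np.clip(sample, 0, 1) * 0.05         # opacity transfer function
        img += (1 - alpha) * a * sample          # composite emission
        alpha += (1 - alpha) * a
    return img
```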

This video shows a 3D volumetric rendering of a low dose (very noisy) CT volume.

Depth-video denoising

Depth images are often corrupted due to the reflective properties of objects standing between the subject and the camera. By increasing the denoising level of the algorithm, these artifacts can be suppressed.
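As a simple stand-in for the actual algorithm, a median filter whose window grows with the chosen denoising level illustrates the idea (Python/SciPy):

```python
import numpy as np
from scipy import ndimage

def denoise_depth(depth, level=1):
    """Median-filter the depth map; a higher level gives stronger smoothing."""
    return ndimage.median_filter(depth, size=2 * level + 1)

noisy = np.random.default_rng(0).normal(5.0, 0.5, (64, 64))
print(denoise_depth(noisy, level=3).std() < noisy.std())   # smoother output
```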

More information on this topic can be found in our blog.

When the program runs, first a flat 2D image is shown on the left, together with the depth image on the right. By adjusting the “3D” slider, the flat image is extruded to a 3D surface.

Deep Learning

In Phenovision, immense volumes of maize plant images are acquired. We require a binary segmentation of these color images in order to construct a 3D model of the maize plants in question. Initial attempts at segmentation were based on linear combinations of the color channels and thresholds. The application of Convolutional Neural Nets (CNNs) generalizes this concept: CNNs consist of several layers of convolutions with pre-trained filter banks, interspersed with activation functions, often simply taken to be ReLUs (rectified linear units). In the ideal case, the CNN response map contains values lower than 0 for all background pixels, and values higher than 1 for all maize plant pixels.
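A toy illustration of this structure in Python/NumPy, with random filters standing in for the trained filter banks and a midpoint threshold on the response map:

```python
import numpy as np
from scipy.ndimage import convolve

rng = np.random.default_rng(0)

def conv_relu(x, filters):
    """One CNN layer: convolve each filter over all input channels, then ReLU."""
    out = np.stack([sum(convolve(x[c], f[c]) for c in range(x.shape[0]))
                    for f in filters])
    return np.maximum(out, 0.0)                 # ReLU activation

image = rng.standard_normal((3, 64, 64))        # stand-in for an RGB image
layer1 = conv_relu(image, rng.standard_normal((8, 3, 3, 3)))    # 8 filters
response = conv_relu(layer1, rng.standard_normal((1, 8, 3, 3)))[0]
segmentation = response > 0.5   # midpoint between the ideal 0 / 1 responses
```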


This video shows the training phase of our deep learning CNN. The top part shows the error and the output images, whereas the bottom part of the video shows the input image and the classification of the input frame (green: classified as background, blue: classified as foreground, red: erroneous pixel classification).

Voxel carving and 3D-modelling

In order to acquire a 3D model of the maize plants, we perform voxel carving to find a visual hull of the plant. This allows us to fit a parametric model in a later step, at which point we can easily perform automated measurements and even follow the evolution through time. Voxel carving works by testing, for each candidate in the voxel cube, whether it corresponds to a plant pixel in all cameras. A perfect target for parallelization in Quasar!
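A hedged sketch of that per-voxel test in Python/NumPy, assuming the 3x4 camera projection matrices and binary silhouette masks are given:

```python
import numpy as np

def carve(voxels, projections, silhouettes):
    """Boolean 'inside the visual hull' flag per voxel (voxels: N x 3)."""
    keep = np.ones(len(voxels), dtype=bool)
    hom = np.hstack([voxels, np.ones((len(voxels), 1))])  # homogeneous coords
    for P, sil in zip(projections, silhouettes):          # one pass per camera
        uvw = hom @ P.T                                   # project into image
        u = (uvw[:, 0] / uvw[:, 2]).astype(int)
        v = (uvw[:, 1] / uvw[:, 2]).astype(int)
        inside = (0 <= u) & (u < sil.shape[1]) & (0 <= v) & (v < sil.shape[0])
        keep &= inside                                    # off-screen: carve
        keep[inside] &= sil[v[inside], u[inside]]         # background: carve
    return keep
```

Every voxel is tested independently of the others, which is exactly why this loop maps so well onto the GPU.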

Compared to multi-core CPU processing, we obtained speed-up factors of 20-30x on an NVidia Geforce GTX 980 GPU! The GTX 980 is able to perform segmentation on 7 input images at full scale and voxel carving on a 1024^3 cube within 20 seconds – well within the margins of the required imaging speed. To put this into perspective: on the CPU we need to process the images in half resolution and carve a 256^3 cube to achieve the same throughput!