mvIMPACT Acquire SDK C++
Image Processing

General

mvIMPACT Acquire has a broad spectrum of image processing filters at its disposal to ensure that the resulting processed image meets the requirements of even the most demanding applications. These filters range from rudimentary filters that are often absolutely necessary (e.g. de-bayering) to filters providing convenience (e.g. mirror) or optimization functionality (e.g. flat-field correction).

Most of these filters are applied on the host and thus introduce additional CPU load when enabled. However, these filters use highly optimized algorithms that run as fast as possible on a given machine. Not every filter is supported on every operating system.

The following filters are provided; their availability varies by operating system (Windows 32-bit, Windows 64-bit, Linux x86, Linux x86_64, Linux ARMhf, Linux ARMsf, Linux ARM64):

  • Format Reinterpreter
  • Defective Pixels
  • Dark Current
  • Flat Field
  • Tap Sort
  • Gain Offset Knee
  • Gain
  • Offset
  • Knee
  • Polarized Data Extraction
  • PolarizedDataExtractionMode = PseudoColorRepresentation
  • Mirror
  • Bayer
  • Sharpen
  • Saturation
  • Color Twist
  • LUT
  • Threshold
  • Linear Interpolation
  • Cubic Interpolation
  • Soft-Scaler
  • Channel Split
  • Watermark
  • Rotation
  • Format Conversion

Note
Some of the filters will require additional RAM when enabled. The amount of RAM needed by each filter is documented in the individual sections for each filter further down in this chapter. The way the memory is allocated in general can be influenced by an application by writing to the mvIMPACT::acquire::SystemSettings::imageProcessingOptimization property.

Depending on the value of this property the memory consumption can either be larger but static (thus less heap fragmentation will occur) or low but dynamic, resulting in a smaller memory footprint but a higher risk of heap fragmentation. Depending on the application, this property can have a massive influence on overall system stability and performance.

When configured to use a smaller memory footprint (this is the default for backward compatibility), e.g. when doing de-bayering on the host system, the overall memory consumption will not increase significantly. However, as buffers are allocated as needed and freed as fast as possible, over time the heap memory might become fragmented until no more buffers of the needed size can be allocated, causing the application to stop functioning. This is especially true when the application itself works with the heap a lot and uses many small blocks. In such a scenario it might therefore be beneficial to use a more static heap usage approach. This has the drawback that the overall memory consumption of the process will increase by a certain amount depending on various parameters:

  • the number of devices operated in parallel
  • the number of request buffers allocated for each device
  • the size of the individual request buffers
  • the number of enabled filters per device (host debayering, scaler, etc.)

If this is acceptable, the static heap approach will result in a more robust application.

The image processing filters of mvIMPACT Acquire are chained in a strict order that cannot be changed. This order is also reflected by the order of properties in wxPropView and is depicted in the following figure:

Figure 1: Image Processing Filter Chain

Image Processing Mode

From the picture above it can be seen that an image can take several paths through the image processing pipeline! Via the property mvIMPACT::acquire::SystemSettings::imageProcessingMode an application can influence whether an image shall be processed or not. This can be useful e.g. if a viewer application applies a lot of processing to the image data on the host and the host system is therefore not capable of processing every image delivered by the device in the current configuration. When both the result of the processing and the full theoretical frame rate are of interest, but it is not important that every image gets processed and displayed, the device driver can be configured in a way that only some of the images get processed while the others are acquired but returned to the user without processing.

This is done by decoupling the image processing from the acquisition engine. As this requires at least one additional thread per device driver instance, it is not done by default but must be enabled explicitly by an application by setting the property mvIMPACT::acquire::Device::userControlledImageProcessingEnable to true BEFORE the device is initialised. If this is done, additional properties become available under SystemSettings/ImageProcessing. The most important one is the ImageProcessingMode property just mentioned, which can be used to influence the behaviour of the image processing pipeline. Without enabling user controlled image processing (the default behaviour), the image processing on the host is done in the thread context of the acquisition engine.
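
The decoupled mode described above can be sketched conceptually as follows. This is NOT SDK code but a minimal, self-contained illustration of the scheduling idea: a frame is always delivered to the application, and is only processed when the (hypothetical) worker thread running the filter chain is currently idle.

```cpp
// Conceptual sketch only (not driver code): frames are delivered at the
// full acquisition rate; processing happens opportunistically.
#include <cassert>

struct Frame
{
    int id;
    bool processed;
};

// Hypothetical dispatcher: processes a frame only when the worker thread
// that runs the host-side filter chain is currently idle.
Frame deliverFrame( int id, bool processingThreadIdle )
{
    Frame f{ id, false };
    if( processingThreadIdle )
    {
        f.processed = true; // ran through the host-side filter chain
    }
    return f; // delivered either way, preserving the full frame rate
}
```

An application using this mode must therefore be prepared to receive both processed and unprocessed requests.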

Pixel Formats Supported By The Individual Filters

Note
Not all devices support all filters! Also, not every filter is available on every platform!

Every filter in the above chain has different properties and different requirements. For example, it is obvious that the Bayer, sharpen and saturation filter, which is responsible for debayering mosaic data, will not accept already debayered pixel formats as input, but its output will most probably be a debayered pixel format.

The following list is a comprehensive table with the pixel formats accepted and produced by all Image Processing filters of mvIMPACT Acquire:

For each pixel format the table records whether it is accepted as input and/or produced as output by each of the filters (Format Reinterpreter, Defective Pixels, Dark Current, Flat Field, Tap Sort, Gain Offset Knee, Polarized Data Extraction, Mirror, Bayer, Sharpen, Saturation, Color Twist, LUT, Soft-Scaler, Channel Split, Watermark, Rotation and Format Conversion), together with the driver version in which support for the combination was first added. The pixel formats covered are: Mono8, Mono10, Mono12, Mono12Packed_v1, Mono12Packed_v2, Mono14, Mono16, BGR888Packed, BGR101010Packed_v2, RGB888Packed, RGB101010Packed, RGB121212Packed, RGB141414Packed, RGB161616Packed, RGBx888Packed, RGB888Planar, RGBx888Planar, YUV411_UYYVYY_Packed, YUV422Packed, YUV422Planar, YUV422_10Packed, YUV422_UYVYPacked, YUV422_UYVY_10Packed, YUV444_UYVPacked, YUV444_UYV_10Packed, YUV444Packed and YUV444_10Packed.

Figure 2: Pixel Formats Accepted and Produced by Image Processing Filters

Note for RGB888Planar: this is the output pixel format if PolarizedDataExtractionMode is set to PseudoColorRepresentation.

Note
The version numbers in the table represent the driver version in which initial support was added for a certain filter/format combination.

If a pixel format is selected to be processed by a filter which does not support that particular pixel format, an appropriate error message will be logged. Please note that Bayer pixel formats are handled like mono formats of the same bit depth.

For more information about each individual filter, refer to the sections below.


Individual Filters

Format Reinterpreter Filter

Some CameraLink cameras transmit data in a way that is not standard compliant, e.g. using the 1X8 mono mode while effectively transmitting RGB data. This filter can be used to treat such a format so that it can be post-processed and displayed correctly. This is done by modifying the parameters describing the buffer and NOT by modifying the actual pixel data itself. For example, if a device sends out RGB data but transmits it using a mono format described by a certain standard, after passing this filter the mono buffer will have a width divided by 3 while preserving the original pitch, and the buffer will have 3 channels instead of 1. This allows correct post-processing and display of the data.

In addition to that by using the MonoX_To_MonoX modes an application can modify the Bayer mosaic parity attribute of a buffer. It is possible to either attach a parity to a mono buffer, remove the current attribute or change the current one.

All filters applied AFTER this filter will see the changed attributes thus treat this buffer as defined by the output format of the filter.
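
The reinterpretation described above can be sketched as follows. This is a self-contained illustration, not the SDK implementation; the descriptor struct is hypothetical, but it mirrors the rule in the text: only the parameters describing the buffer change, never the pixel data.

```cpp
// Illustrative sketch (not the driver implementation) of reinterpreting a
// mono buffer that actually carries RGB data.
#include <cassert>

struct BufferDescriptor
{
    int width;        // pixels per line
    int height;       // number of lines
    int pitch;        // bytes per line (unchanged by the reinterpretation)
    int channelCount;
};

// A mono line of W pixels becomes W/3 RGB pixels with 3 channels; height
// and pitch stay exactly as transmitted by the device.
BufferDescriptor reinterpretMono8AsRGB888( const BufferDescriptor& mono )
{
    BufferDescriptor rgb = mono;
    rgb.width = mono.width / 3; // width shrinks by the channel count
    rgb.channelCount = 3;       // pixel data itself is untouched
    return rgb;
}
```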

The following properties can be used to configure this filter:

Memory Consumption

This filter requires NO additional memory.

Defective Pixels Filter

Due to manufacturing imperfections of the sensors, some pixels may turn out to be defective. They may either be brighter than they should be (leaky pixels, detectable in dark images) or darker than expected (cold pixels, detectable in bright images). The Defective Pixel Correction filter replaces those faulty pixels with either the average or the median of the surrounding pixels, depending on the replacement method chosen by the user. The Defective Pixel Correction filter is a pixel-wise filter, which means that the correction is made for each pixel individually. Because of this, it will only work if all pixels of a requested image have been calibrated. More information about defective pixel correction may be found in the use cases section of the product manuals.
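
The two replacement methods can be sketched as follows (illustrative only, not the driver code; the driver additionally decides which neighbours qualify):

```cpp
// Illustrative sketch of the two replacement strategies for a defective
// pixel, given the values of its surrounding pixels.
#include <algorithm>
#include <cassert>
#include <vector>

// Average replacement: mean of the neighbour values.
int replaceByAverage( const std::vector<int>& neighbours )
{
    int sum = 0;
    for( int v : neighbours )
    {
        sum += v;
    }
    return sum / static_cast<int>( neighbours.size() );
}

// Median replacement: middle value of the sorted neighbour values, which
// is robust against a single outlier among the neighbours.
int replaceByMedian( std::vector<int> neighbours )
{
    std::sort( neighbours.begin(), neighbours.end() );
    return neighbours[neighbours.size() / 2];
}
```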

The following properties can be used to configure this filter:

Memory Consumption

This filter requires NO additional memory.

Dark Current Correction Filter

Dark current is a characteristic of image sensors, which describes the undesired effect of image sensors delivering signals even in conditions of total darkness. Usually the source of this phenomenon is thermal noise in the sensor circuitry. This signal overlays the "true" image and, depending on the temperature of the sensor and the exposure time, may lead to visible distortion of the desired image. The dark current correction is a pixel-wise correction in which the dark current correction image removes the dark current from the original image. To achieve the best results, it is recommended to calibrate the camera at the same temperature and with the same exposure settings as the original image. The Dark Current Correction filter is a pixel-wise filter, which means that the correction is made for each pixel individually. Because of this, it will only work if all pixels of a requested image have been calibrated. More information about dark current calibration may be found in the use cases section of the product manuals.

Note
If an image is requested with an AOI outside the calibrated region of the sensor, the filter will not work at all! In such cases messages like the following will be written to the configured log output:
Cannot process data. The ROI of the input image(100, 100(200x200)) does not intersect with the ROI of the correction image(0, 0(100x100))
If the AOI needs to be moved within a sub-region of the sensor, the whole sub-region (or even better the whole sensor) must be calibrated!
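
The pixel-wise correction itself can be sketched as follows (illustrative only, assuming a simple subtraction of the dark current calibration image with clamping at 0; the driver's actual algorithm may differ in detail):

```cpp
// Illustrative dark current correction sketch: subtract the calibration
// image pixel by pixel and clamp, since a pixel cannot become negative.
#include <cassert>
#include <cstddef>
#include <vector>

std::vector<int> darkCurrentCorrect( const std::vector<int>& raw,
                                     const std::vector<int>& dark )
{
    std::vector<int> out( raw.size() );
    for( std::size_t i = 0; i < raw.size(); i++ )
    {
        const int v = raw[i] - dark[i]; // remove the dark current contribution
        out[i] = v < 0 ? 0 : v;         // clamp at 0
    }
    return out;
}
```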

The following properties can be used to configure this filter:

Memory Consumption

When this filter is enabled or set into calibration mode it requires 1 additional buffer (independent of the number of allocated request objects) of

width * height * 4 

bytes.

Flat-Field Correction Filter (FFC)

Due to slight differences between pixels during manufacturing of the sensors, each pixel has its own distinctive properties, which may differentiate its characteristics (e.g. spectral sensitivity) from those of its neighbouring pixels. Furthermore, due to poor choices of lenses or poor lighting conditions, additional illumination effects may influence the image acquisition in a negative way (e.g. vignetting). The Flat-Field Correction filter copes with exactly these problems. Each pixel is calibrated by placing a uniform white or gray "calibration plate" in front of the camera and taking a picture with a saturation of 50%-75%. The filter then applies the necessary factors to each pixel so that the resulting image becomes a truly uniform flat field. The Flat-Field Correction filter is a pixel-wise filter, which means that the correction is made for each pixel individually. Because of this, it will only work if all pixels of a requested image have been calibrated. More information about flat-field calibration may be found in the use cases section of the product manuals.

Note
If an image is requested with an AOI outside the calibrated region of the sensor, the filter will not work at all! In such cases messages like the following will be written to the configured log output:
Cannot process data. The ROI of the input image(100, 100(200x200)) does not intersect with the ROI of the correction image(0, 0(100x100))
If the AOI needs to be moved within a sub-region of the sensor, the whole sub-region (or even better the whole sensor) must be calibrated!
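
The core idea of the correction can be sketched as follows (illustrative only, not the driver code): from the calibration image a per-pixel gain factor is derived so that a uniform scene yields a uniform output.

```cpp
// Illustrative flat-field calibration sketch: each pixel gets the gain
// factor mean(flat)/flat[i], so bright pixels are attenuated and dark
// pixels are boosted towards the image mean.
#include <cassert>
#include <cstddef>
#include <vector>

std::vector<double> flatFieldGains( const std::vector<double>& flat )
{
    double mean = 0.0;
    for( double v : flat )
    {
        mean += v;
    }
    mean /= static_cast<double>( flat.size() );
    std::vector<double> gains( flat.size() );
    for( std::size_t i = 0; i < flat.size(); i++ )
    {
        gains[i] = mean / flat[i]; // multiply each captured pixel by this factor
    }
    return gains;
}
```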

The following properties can be used to configure this filter:

AOI Handling

Calibration AOI

The flat field filter offers an interface to work with a calibration AOI as well as a correction AOI. A calibration AOI can be defined when calibrating the flat field filter by setting the CalibrationAoiMode to amUseAoi. The 4 properties defining the calibration AOI will then be taken into account during the calibration: only pixels inside this AOI contribute to the calculation of the calibration values. The correction data however will contain data for the complete image, but the correction factor for pixels outside the AOI will be set to 1. So after the calibration, and after switching on the filter, this results in a correction image that will NOT affect the pixels OUTSIDE the calibrated AOI but only those inside. Whenever the camera image is later configured such that for each pixel transmitted by the camera there is a corresponding entry in the calibration data (inside or outside the calibrated area), the correction will be performed.

Correction AOI

In addition to the calibration AOI the flat field filter also offers an interface to work with a correction AOI INSIDE the valid calibration data (inside or outside the calibration AOI). This AOI can be used to define a sub-region of the image that shall be corrected by the filter. The rest of the image will not be processed. As the calibration data covers the full image, the correction AOI can be used to save processing time, e.g. by not processing those parts of the image that have not been calibrated, or by using a larger section of the image for calibration while effectively correcting only a section of it later.

Memory Consumption

When this filter is enabled or set into calibration mode it requires 1 additional buffer (independent of the number of allocated request objects) of

width(of the calibrated area) * height(of the calibrated area) * 4 

bytes.

Tap-Sort Filter

Some devices have sensors that are divided into two or more logical areas, called "taps", for various reasons (better performance, design limitations etc.). This logical division, however, leads to the sensor data being delivered out of order, but in a very consistent way. To reconstruct the original image from this data, it has to be "unscrambled" and sorted, and this is done by the Tap-Sort filter.
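
As an illustration, one common two-tap geometry can be unscrambled as sketched below. This is an assumption for the example only: tap A delivers the left half of each line left-to-right, tap B the right half right-to-left, and both arrive pixel interleaved. Real devices use various geometries, which is why the filter is configurable.

```cpp
// Illustrative tap sort for one hypothetical two-tap line geometry.
#include <cassert>
#include <cstddef>
#include <vector>

std::vector<int> sortTwoTapLine( const std::vector<int>& scrambled )
{
    const std::size_t w = scrambled.size();
    std::vector<int> sorted( w );
    for( std::size_t i = 0; i < w / 2; i++ )
    {
        sorted[i] = scrambled[2 * i];             // tap A: left half, forward
        sorted[w - 1 - i] = scrambled[2 * i + 1]; // tap B: right half, reversed
    }
    return sorted;
}
```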

The following properties can be used to configure this filter:

Memory Consumption

When active this filter might consume 1 buffer per allocated request object. The size of this buffer is equal to the size of the input buffer. Various tap sort operations, however, can be applied in place; then NO additional memory will be consumed.

Gain, Offset, Knee Filter

The following properties and functions can be used to configure this filter:

Memory Consumption

This filter requires NO additional memory.

Polarized Data Extraction

This filter, if enabled, will reorganize the pixel data of the input image. Depending on the selected mode, either all the pixels will simply be reorganized in memory or a single new pixel value will be calculated from a given window, thus reducing the overall dimensions of the source image.

The following properties and functions can be used to configure this filter:

Memory Consumption

When active this filter will consume 1 buffer per allocated request object of either the same size as the input image or less, depending on the selected extraction method.

Mirror Filter

This is a convenience filter which, if enabled, flips the image horizontally (when the LeftRight option is selected) or vertically (when the TopDown option is selected). In case both the TopDown and LeftRight options are selected, this filter effectively performs a 180-degree rotation of the captured image. For images consisting of multiple planes this can also be configured differently for each plane (e.g. only the blue channel of an RGB image can be flipped).
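
The two mirror operations can be sketched as follows (illustrative only, for a single-plane W x H image stored row-major; applying both yields the 180-degree rotation mentioned above):

```cpp
// Illustrative sketch of the two mirror operations on a row-major image.
#include <algorithm>
#include <cassert>
#include <vector>

void mirrorLeftRight( std::vector<int>& img, int w, int h )
{
    for( int y = 0; y < h; y++ )
    {
        // reverse each line in place
        std::reverse( img.begin() + y * w, img.begin() + ( y + 1 ) * w );
    }
}

void mirrorTopDown( std::vector<int>& img, int w, int h )
{
    for( int y = 0; y < h / 2; y++ )
    {
        // swap line y with its counterpart from the bottom
        std::swap_ranges( img.begin() + y * w, img.begin() + ( y + 1 ) * w,
                          img.begin() + ( h - 1 - y ) * w );
    }
}
```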

The following properties and functions can be used to configure this filter:

Memory Consumption

This filter requires NO additional memory.

Bayer, Sharpen, Saturation Filter

This filter is responsible for debayering Bayer-mosaic data provided by the sensor and for applying filters that digitally sharpen the image and provide saturation control for specific pixel formats. The Bayer mosaic conversion can be applied by different algorithms, each with its specific advantages.
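
One of the simplest conceivable de-bayering schemes is sketched below for illustration: a 2x2 "superpixel" conversion in which every RGGB block yields one RGB pixel (halving the resolution). This is not necessarily one of the driver's algorithms; it merely shows what "converting a mosaic into RGB" means. The RGGB parity is an assumption of the example.

```cpp
// Illustrative 2x2 superpixel de-bayering sketch (parity assumed RGGB).
#include <cassert>
#include <vector>

struct RGB
{
    int r, g, b;
};

// raw: W x H Bayer mosaic, row-major; returns a (W/2) x (H/2) RGB image.
std::vector<RGB> debayerSuperPixel( const std::vector<int>& raw, int w, int h )
{
    std::vector<RGB> out;
    for( int y = 0; y < h; y += 2 )
    {
        for( int x = 0; x < w; x += 2 )
        {
            RGB p;
            p.r = raw[y * w + x];                                      // R sample
            p.g = ( raw[y * w + x + 1] + raw[( y + 1 ) * w + x] ) / 2; // two G samples averaged
            p.b = raw[( y + 1 ) * w + x + 1];                          // B sample
            out.push_back( p );
        }
    }
    return out;
}
```

The memory note below ("3 times the size of the input buffer") follows directly from this kind of conversion: one mono sample per pixel becomes three colour samples.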

The following properties and functions can be used to configure this filter:

Memory Consumption

When active this filter will consume

  • 1 buffer per allocated request object when an image gets de-bayered; in this case the needed buffer will be 3 times the size of the input buffer, as mono data gets converted into RGB data
  • 1 buffer per allocated request object when an image is sharpened afterwards
  • NO additional buffers for the saturation operation

Color Twist Filter

This filter applies color correction transformations to the captured image data to adapt the sensor characteristic to the spectral characteristic of the human eye. Different sensors have different sensitivities in various parts of the spectrum. By measuring these sensor specific characteristics, it is possible to compensate for them by applying an inverse transformation, which is the color correction matrix provided for each individual sensor. This color correction matrix is used in the linear part of the signal path.

For most sensors supplied by MATRIX VISION, sensor-specific color correction matrices are supplied to achieve optimal color fidelity. These can also be selected and enabled within this filter.

The formula used for the color correction matrix is as follows:
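
In its general form this is a 3x3 linear transform applied to each RGB pixel, [R' G' B']^T = M * [R G B]^T (a sketch of the standard colour correction form; the actual matrix coefficients are sensor specific, measured by the vendor, and any values used below are hypothetical):

```cpp
// Illustrative 3x3 colour correction ("color twist") transform sketch.
#include <cassert>

struct ColorTriplet
{
    double r, g, b;
};

ColorTriplet applyColorTwist( const double m[3][3], const ColorTriplet& in )
{
    ColorTriplet out;
    out.r = m[0][0] * in.r + m[0][1] * in.g + m[0][2] * in.b;
    out.g = m[1][0] * in.r + m[1][1] * in.g + m[1][2] * in.b;
    out.b = m[2][0] * in.r + m[2][1] * in.g + m[2][2] * in.b;
    return out;
}
```

With the identity matrix the transform leaves the pixel unchanged; a real sensor matrix redistributes energy between the channels.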

The following properties and functions can be used to configure this filter:

Memory Consumption

This filter requires NO additional memory.

Gamma Filter

Gamma correction is a non-linear operation on image data which is nowadays used to compensate for the likewise non-linear properties of human vision. When gamma correction is applied to an image, the encoding bits of this image are more evenly distributed between bright and dark areas of the image, and as a result the human eye perceives more information content. This filter can only be turned on or off and applies a fixed gamma value to the image.

Attention
This filter has not yet been published officially. Its interface might change without prior notice and it might only be available in special builds. Use it with care!
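
The shape of a gamma curve can be sketched as follows (an assumption for illustration: the common form out = maxVal * (in / maxVal)^(1/gamma); the fixed gamma value used by the filter is not documented here):

```cpp
// Illustrative gamma correction curve: brightens mid-tones for gamma > 1
// while leaving black (0) and white (maxVal) unchanged.
#include <cassert>
#include <cmath>

double gammaCorrect( double in, double maxVal, double gamma )
{
    return maxVal * std::pow( in / maxVal, 1.0 / gamma );
}
```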

Memory Consumption

This filter requires NO additional memory.

Look-Up Table Filter (LUT)

This filter is responsible for the Look-Up table functionality. By using a Look-Up table, each pixel value may be mapped to a different one, providing the mechanism to transform the input range of brightness to another range of brightness defined by the user.
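
The mechanism can be sketched in a few lines (illustrative only, for 8-bit data): each input pixel value serves as an index into a user defined table of output values.

```cpp
// Illustrative LUT application sketch: one table lookup per pixel.
#include <array>
#include <cassert>
#include <vector>

void applyLUT( std::vector<int>& img, const std::array<int, 256>& lut )
{
    for( int& v : img )
    {
        v = lut[v]; // direct mapping of input brightness to output brightness
    }
}
```

For example, a table with lut[i] = 255 - i inverts the image.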

The following properties and functions can be used to configure this filter:

Memory Consumption

This filter requires NO additional memory.

Soft Scaler Filter

This filter can re-scale the image to a user defined width and height using different interpolation algorithms.
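
The simplest of those interpolation algorithms, nearest neighbour, can be sketched as follows (illustrative only, single-channel row-major data):

```cpp
// Illustrative nearest neighbour scaling sketch: each destination pixel
// takes the value of the closest source pixel.
#include <cassert>
#include <cstddef>
#include <vector>

std::vector<int> scaleNearest( const std::vector<int>& src, int w, int h,
                               int dstW, int dstH )
{
    std::vector<int> dst( static_cast<std::size_t>( dstW ) * dstH );
    for( int y = 0; y < dstH; y++ )
    {
        const int srcY = y * h / dstH; // nearest source line
        for( int x = 0; x < dstW; x++ )
        {
            const int srcX = x * w / dstW; // nearest source column
            dst[y * dstW + x] = src[srcY * w + srcX];
        }
    }
    return dst;
}
```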

The following properties and functions can be used to configure this filter:

Memory Consumption

When active this filter will consume 1 buffer per allocated request object. The size of this buffer depends on the desired scaled width and height.

Channel Split Filter

The channel split filter, as the name implies, is able to split the image into separate channels and concatenate them into a single image. The channels may be concatenated vertically or horizontally, depending on the settings chosen by the user. Individual channels can also be extracted from a multi-channel image.

Converting packed data to planar formats

This filter can be of great help if packed data comes from a device while planar data is needed. E.g. YUV411 or YUV422 packed data can be converted into a planar representation using this filter. Especially processing functions that supply a line pitch will benefit from this approach. Even though the filter will output a single channel mono buffer, this data can easily be re-interpreted in different ways. E.g. YUV411_UYYVYY_Packed data can be converted into YUV420 planar as defined by the FourCC code I420 like this:

  • ChannelSplitEnable = bTrue
  • ChannelSplitMode = csmVertical

The result will be a Mono8 image with 1.5 times the height of the original image. However, in memory the data is arranged exactly as needed for the I420 format, thus

6 bytes 6 bytes etc.
Cb(1,2,3,4) Y(1) Y(2) Cr(1,2,3,4) Y(3) Y(4) Cb(5,6,7,8) Y(5) Y(6) Cr(5,6,7,8) Y(7) Y(8) etc.
.................. Cb(n,n+1,n+2,n+3) Y(n) Y(n+1) Cr(n,n+1,n+2,n+3) Y(n+2) Y(n+3)

will become

Y(1) Y(2) Y(3) Y(4) Y(5) Y(6) Y(7) Y(8) etc.
.................. Y(n) Y(n+1)
Cb(1,2,3,4) Cb(5,6,7,8) etc.
.................. Cb(n,n+1,n+2,n+3)
Cr(1,2,3,4) Cr(5,6,7,8) etc.
.................. Cr(n,n+1,n+2,n+3)

thus simply treating the Mono8 buffer in a different way allows it to be used as a YUV420 planar buffer!
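
The rearrangement shown above can be sketched as follows (illustrative only, not the filter implementation; one 6-byte UYYVYY group carries one Cb, one Cr and four Y samples, matching the packed layout in the first diagram):

```cpp
// Illustrative sketch: splitting YUV411_UYYVYY_Packed data into separate
// Y, Cb and Cr planes - the memory layout the I420 FourCC expects.
#include <cassert>
#include <cstddef>
#include <vector>

struct Planes
{
    std::vector<int> y, cb, cr;
};

// packed: groups of 6 bytes: Cb Y Y Cr Y Y (each chroma shared by 4 pixels)
Planes splitUYYVYY( const std::vector<int>& packed )
{
    Planes p;
    for( std::size_t i = 0; i + 5 < packed.size(); i += 6 )
    {
        p.cb.push_back( packed[i] );
        p.y.push_back( packed[i + 1] );
        p.y.push_back( packed[i + 2] );
        p.cr.push_back( packed[i + 3] );
        p.y.push_back( packed[i + 4] );
        p.y.push_back( packed[i + 5] );
    }
    return p;
}
```

Laying the three planes out back to back in one contiguous buffer reproduces exactly the Mono8 image with 1.5 times the original height described above.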

The following properties and functions can be used to configure this filter:

Memory Consumption

When active this filter will consume 1 buffer per allocated request object of the same size as the buffer fed into it from any of the previous stages. The only exception is when RGB planar data is fed into the filter and vertical splitting is selected; then no additional memory will be consumed.

Watermark Generator Filter

The watermark generator filter is a simple overlay filter which superimposes a simple image (a crosshair) over the original captured image data.

Attention
No official interface has been published for this filter yet. An application can bind to the properties from within the WatermarkGenerator lists of properties manually if needed.

Memory Consumption

This filter requires NO additional memory.

Rotation Filter

The rotation filter is a convenience filter which rotates the original captured image.

This filter allows rotating an image counterclockwise by a precise value in degrees (e.g. 36.756). The full ROI of the original image will be preserved, which for most angles results in the output image of this filter being larger than the input image. Empty regions in the corners of the rotated image will be painted grey. Currently only nearest neighbour interpolation is supported.
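
The output dimensions follow from the bounding box of the rotated input, which can be sketched as follows (illustrative only; this bounding box is also what determines the buffer size mentioned under Memory Consumption):

```cpp
// Illustrative calculation of the bounding box of a W x H image rotated
// by the given angle in degrees.
#include <cassert>
#include <cmath>

void rotatedBoundingBox( int w, int h, double degrees, int& outW, int& outH )
{
    const double rad = degrees * 3.14159265358979323846 / 180.0;
    const double c = std::fabs( std::cos( rad ) );
    const double s = std::fabs( std::sin( rad ) );
    outW = static_cast<int>( std::ceil( w * c + h * s ) );
    outH = static_cast<int>( std::ceil( w * s + h * c ) );
}
```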

The following properties and functions can be used to configure this filter:

Memory Consumption

When active this filter will consume 1 buffer per allocated request object. Depending on the rotation angle the size of this buffer might vary, as the buffer is allocated with the dimensions of the bounding box of the rotated image.

Image Format Conversion Filter

This filter is located at the very end of the chain of filters. It allows converting the image that has been delivered either by the device or as the result of one or more of the processing filters into any other pixel format supported by mvIMPACT Acquire. This might e.g. help to fulfil the requirements of an application that demands a certain pixel format to execute its algorithms.

The following properties and functions can be used to configure this filter:

Memory Consumption

When active this filter might consume 1 buffer per allocated request object of the same size as the resulting buffer passed to the application. Depending on the combination of input and output format of this filter, sometimes no additional memory will be consumed. This is the case whenever the operation can happen in place.

Final Output Stage

This is not an actual filter but a final node that is passed by the request before it is handed back to the user. There are 2 situations in which one additional copy operation for the image data is needed, and these are handled here:

  • When the device works with fixed-size DMA memory and cannot accept user allocated chunks of memory, the data must be copied if a user supplied buffer has been attached to the request and has not already been filled (e.g. while converting raw Bayer data coming from the device into the user supplied RGB buffer)
  • Some devices use an internal ring pool of DMA data. This will sometimes require copying the data even if no user supplied buffer has been attached to the request

Memory Consumption

When any of the above conditions are met this stage will consume 1 buffer per allocated request object of the same size as the resulting buffer passed to the application.

Post Processing Data Already Captured

Sometimes it might be desirable to apply a certain filter discussed here at a later moment in time, or to an image that has already been captured and stored on disc. As the filter chain is part of the acquisition driver's capture chain this is not directly possible, but it can be achieved using the mvVirtualDevice driver. This driver is capable of capturing images from a user definable folder on disc. Various other properties can be used to further configure the resulting frames, e.g. the AOI can be specified as well as a certain Bayer parity. See the classes mvIMPACT::acquire::ImageDestination and mvIMPACT::acquire::CameraSettingsVirtualDevice for details.

Especially when combined with the FreeImage library as described in the Use Cases section of the mvVirtualDevice drivers manual the most common file formats are supported.

mvIMPACT image processing (deprecated)

Warning
The mvIMPACT image processing library is not recommended for new designs. It will no longer be supported!

When additional image processing of the captured images is required, the mvIMPACT image processing library can be used in connection with this capture interface. All image processing functions, however, expect a special image buffer format.

The device can deliver this format natively, but the functions returning this data format are not included in the interface by default, thus allowing usage of the capture interface alone without the need to install the complete image processing library as well. To include the functions that return mvIMPACT image processing library compliant buffers to the user application, the main header of the library must be included BEFORE the capture interface headers:

#include <mvIMPACT.h>

If this include order is used, the class mvIMPACT::acquire::Request will provide 2 additional versions of the method mvIMPACT::acquire::Request::getIMPACTImage that can be used to obtain images in an image processing library compatible format.

#include <mvIMPACT_acquire.h>