MATRIX VISION - mvBlueFOX Technical Documentation
Use cases

Table of Contents

Introducing acquisition / recording possibilities

There are several use cases concerning the acquisition / recording possibilities of the camera:



Generating very long exposure times

Since mvIMPACT Acquire 1.10.65

Very long exposure times are possible with mvBlueFOX. For this purpose a special trigger/IO mode is used.

You can do this as follows (pseudo code):

TriggerMode = OnHighExpose
TriggerSource = DigOUT0 - DigOUT3
Attention
In the standard mvBlueFOX, DigOUT2 and DigOUT3 are internal signals; however, they can be used for this purpose.
Note
Make sure that you set the ImageRequestTimeout_ms either to 0 (infinite) or to a reasonable value that is larger than the actual exposure time. Otherwise you will end up with timeouts, because the buffer timeout is smaller than the time needed for exposing, transferring and capturing the image:
ImageRequestTimeout_ms = 0 # or reasonable value

Now request a single image:

imageRequestSingle

Then the digital output is set and reset. Between these two instructions you can insert code that waits for the desired exposure time.

// The DigOUT which was chosen in TriggerSource
DigitalOutput* pOut = getOutput(digital output)
pOut->set();

// Wait as long as the exposure should continue.

pOut->reset();

Afterwards you will get the image.

If you toggle the state of the corresponding output twice (set, then reset), this will also work in wxPropView.
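The same sequence can be implemented with the mvIMPACT Acquire C++ API. The following is a minimal sketch only; the class, property and enum names used here (e.g. CameraSettingsBlueFOX, ctmOnHighExpose, ctsDigOut0, IOSubSystemBlueFOX) should be verified against the SDK headers of your driver version:

#include <chrono>
#include <thread>
#include <mvIMPACT_CPP/mvIMPACT_acquire.h>

using namespace mvIMPACT::acquire;

// Sketch: acquire one image with a very long, output-controlled exposure.
void acquireLongExposureImage( Device* pDev, int exposure_ms )
{
    CameraSettingsBlueFOX cs( pDev );
    cs.triggerMode.write( ctmOnHighExpose );   // TriggerMode = OnHighExpose
    cs.triggerSource.write( ctsDigOut0 );      // TriggerSource = DigOUT0

    SystemSettings ss( pDev );
    ss.imageRequestTimeout_ms.write( 0 );      // 0 = infinite, avoids buffer timeouts

    FunctionInterface fi( pDev );
    fi.imageRequestSingle();                   // request a single image

    // The DigOUT selected as TriggerSource controls the exposure.
    IOSubSystemBlueFOX io( pDev );
    DigitalOutput* pOut = io.output( 0 );
    pOut->set();                                                             // exposure starts
    std::this_thread::sleep_for( std::chrono::milliseconds( exposure_ms ) ); // desired exposure time
    pOut->reset();                                                           // exposure ends

    // Wait for the exposed and transferred image.
    const int requestNr = fi.imageRequestWaitFor( 20000 );
    if( fi.isRequestNrValid( requestNr ) )
    {
        Request* pRequest = fi.getRequest( requestNr );
        // ... process the image data here ...
        pRequest->unlock();
    }
}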



Using VLC Media Player

With the DirectShow interface, MATRIX VISION devices become (acquisition) video devices for the VLC Media Player.

Figure 1: VLC Media Player with a connected device via DirectShow

System requirements

It is necessary that the following drivers and programs are installed on the host device (laptop or PC):

  • Windows 7, 32 bit or 64 bit
  • up-to-date VLC Media Player, 32 bit or 64 bit (here: version 2.0.6)
  • up-to-date MATRIX VISION driver, 32 bit or 64 bit (here: version 2.5.6)
Note
Using Windows 10: VLC Media Player versions below 2.2.0 have been tested successfully.

Installing VLC Media Player

  1. Download a suitable version of the VLC Media Player from the VLC Media Player website mentioned below.
  2. Run the setup.
  3. Follow the installation process and use the default settings.

A restart of the system is not required.

See also
http://www.videolan.org/

Setting up MV device for DirectShow

Note
Please be sure to register the MV device for DirectShow with the right version of mvDeviceConfigure, i.e. if you have installed the 32 bit version of the VLC Media Player, you have to register the MV device with the 32 bit version of mvDeviceConfigure ("C:/Program Files/MATRIX VISION/mvIMPACT Acquire/bin")!
  1. Connect the MV device to the host device directly or via GigE switch using an Ethernet cable.
  2. Power the camera using a power supply at the power connector.
  3. Wait until the status LED turns blue.
  4. Open the tool mvDeviceConfigure,
  5. set a friendly name,
  6. and register the MV device for DirectShow.
Note
In some cases it could be necessary to repeat step 5.

Working with VLC Media Player

  1. Start VLC Media Player.
  2. Click on "Media -> Open Capture Device..." .

    Figure 2: Open Capture Device...


  3. Select the tab "Device Selection" .
  4. In the section "Video device name" , select the friendly name of the MV device:

    Figure 3: Video device name


  5. Finally, click on "Play" .
    After a short delay you will see the live image of the camera.



Improving the acquisition / image quality

There are several use cases concerning the acquisition / image quality of the camera:



Correcting image errors of a sensor

Image sensors have image errors for different reasons, e.g. random process deviations or technical limitations of the sensors. MATRIX VISION provides several procedures to correct these errors; by default these are host-based calculations.

The provided image correction procedures are

  1. Defective Pixels Correction,
  2. Dark Current Correction, and
  3. Flat-Field Correction.
Note
If you execute all correction procedures, you have to keep this order. All gray value settings of the corrections below assume an 8-bit image.
Figure 1: Host-based image corrections

The path "Setting -> Base -> ImageProcessing -> ..." indicates that these corrections are host-based corrections.

Before starting consider the following hints:

  • To correct the complete image, you have to make sure no user defined AOI has been selected: Right-click "Restore Default" on the device's AOI parameters W and H in "Setting -> Base -> Camera".
  • You have several options to save the correction data. The chapter Storing and restoring settings describes the different ways.
See also
There is a white paper about image error corrections with extended information available on our website: http://www.matrix-vision.com/tl_files/mv11/Glossary/art_image_errors_sensors_en.pdf

Defective Pixels Correction

Due to random process deviations, not all pixels in an image sensor array will react in the same way to a given light condition. These variations are known as blemishes or defective pixels.

There are two types of defective pixels:

  1. leaky pixel (in the dark)
    which indicates pixels that produce a higher read out code than average
  2. cold pixel (in standard light conditions)
    which indicates pixels that produce a lower read out code than average when the sensor is exposed (e.g. caused by dust particles on the sensor)
Note
Please use either an 8 bit Mono or Bayer image format when correcting the image. After the correction, all image formats will be corrected.

Correcting leaky pixels

To correct the leaky pixels, the following steps are necessary:

  1. Set gain ("Setting -> Base -> Camera -> GenICam -> Analog Control" "Gain = 0 dB") and exposure time "Setting -> Base -> Camera -> GenICam -> Acquisition Control" "ExposureTime = 360 msec" to the given operating conditions
    The total number of defective pixels found in the array depends on the gain and the exposure time.
  2. Black out the lens completely
  3. Set the (Filter-) "Mode = Calibrate leaky pixel"
  4. Snap an image ("Acquire" with "Acquisition Mode = SingleFrame")
  5. To activate the correction, choose one of the neighbor replace methods: "Replace 3x1 average" or "Replace 3x3 median"
  6. Save the settings including the correction data via "Action -> Capture Settings -> Save Active Device Settings"
    (Settings can be saved in the Windows registry or in a file)
Note
After restarting the camera, you have to reload the capture settings accordingly.

The filter checks:

Pixel > LeakyPixelDeviation_ADCLimit // (default value: 50) 

All pixels above this value are considered to be leaky pixels.

Correcting cold pixels

To correct the cold pixels, the following steps are necessary:

  1. You will need a uniform sensor illumination approx. 50 - 70 % saturation (which means an average gray value between 128 and 180)
  2. Set the (Filter-) "Mode = Calibrate cold pixel" (Figure 2)
  3. Snap an image ("Acquire" with "Acquisition Mode = SingleFrame")
  4. To activate the correction, choose one of the neighbor replace methods: "Replace 3x1 average" or "Replace 3x3 median"
  5. Save the settings including the correction data via "Action -> Capture Settings -> Save Active Device Settings"
    (Settings can be saved in the Windows registry or in a file)
Note
After restarting the camera, you have to reload the capture settings accordingly.

The filter checks:

Pixel < T[cold] // (default value: 15 %) 

// T[cold] = deviation of the average gray value (ColdPixelDeviation_pc)

All pixels below this value are considered to have a lower dynamic response than normal.

Figure 2: Image corrections: DefectivePixelsFilter
Note
Repeating the defective pixel corrections will accumulate the correction data which leads to a higher value in "DefectivePixelsFound". If you want to reset the correction data or repeat the correction process you have to set the (Filter-) "Mode = Reset Calibration Data".
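The same calibration can also be scripted. The following is a minimal sketch with the mvIMPACT Acquire C++ API, assuming the ImageProcessing property defectivePixelsFilterMode and the enum names used below exist as written (please check the SDK headers):

#include <mvIMPACT_CPP/mvIMPACT_acquire.h>

using namespace mvIMPACT::acquire;

// Sketch: calibrate leaky pixels with the lens blacked out, then activate a replace method.
void calibrateLeakyPixels( Device* pDev )
{
    ImageProcessing ip( pDev );
    ip.defectivePixelsFilterMode.write( dpfmCalibrateLeakyPixel ); // "Calibrate leaky pixel"

    FunctionInterface fi( pDev );
    fi.imageRequestSingle();                                       // snap one calibration image
    const int requestNr = fi.imageRequestWaitFor( 10000 );
    if( fi.isRequestNrValid( requestNr ) )
    {
        fi.getRequest( requestNr )->unlock();
    }

    ip.defectivePixelsFilterMode.write( dpfm3x3Median );           // "Replace 3x3 median"
}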

Dark Current Correction

Dark current is a characteristic of image sensors: they deliver a signal even in total darkness, because thermal energy, for example, spontaneously generates charge carriers. This signal overlays the image information. Dark current depends on two circumstances:

  1. Exposure time
    The longer the exposure, the greater the dark current contribution, i.e. with long exposure times the dark current itself could lead to an overexposed sensor chip
  2. Temperature
    By cooling the sensor chip, the dark current production can be reduced significantly (the dark current is roughly halved for every 6 °C of cooling)

Correcting Dark Current

The dark current correction is a pixel-wise correction where the dark current correction image is used to remove the dark current from the original image. To get a good result it is necessary to snap the original and the dark current images with the same exposure time and temperature.

Note
Dark current snaps generally show noise.

To correct the dark current, the following steps are necessary:

  1. Black out the lens completely
  2. Set "OffsetAutoCalibration = Off" (Figure 3)
  3. If applicable, change Offset_pc until you see an amplitude in the histogram (Figure 4)
  4. Set exposure time according to the application
  5. Set the (Filter-) "Mode = Calibrate"
  6. Snap an image ("Acquire" with "Acquisition Mode = SingleFrame")
  7. Finally, you have to activate the correction: Set the (Filter-) "Mode = On"
  8. Save the settings including the correction data via "Action -> Capture Settings -> Save Active Device Settings"
    (Settings can be saved in the Windows registry or in a file)

The filter snaps a number of images and averages the dark current images to one correction image.

Note
After restarting the camera, you have to reload the capture settings accordingly.
Figure 3: Image corrections (screenshot mvBlueFOX): OffsetAutoCalibration = Off
Figure 4: Image corrections: Offset histogram
Figure 5: Image corrections: Dark current
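In source code the dark current correction can be driven in the same way. A sketch, assuming the ImageProcessing property darkCurrentFilterMode with the enum values shown here (check the SDK headers):

#include <mvIMPACT_CPP/mvIMPACT_acquire.h>

using namespace mvIMPACT::acquire;

// Sketch: calibrate the dark current correction image, then switch the filter on.
void calibrateDarkCurrent( Device* pDev )
{
    ImageProcessing ip( pDev );
    ip.darkCurrentFilterMode.write( dcfmCalibrateDarkCurrent ); // lens blacked out
    // ... snap the calibration images here; the filter averages several dark frames ...
    ip.darkCurrentFilterMode.write( dcfmOn );                   // activate the correction
}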

Flat-Field Correction

Each pixel of a sensor chip is a single detector with its own properties. In particular, this applies to the sensitivity and, where applicable, the spectral sensitivity. To solve this problem (including lens and illumination variations), a plain and equally "colored" calibration plate (e.g. white or gray) is snapped as a flat-field, which is then used to correct the original image. The optics must not be changed between the flat-field correction and the later application. To reduce errors while doing the flat-field correction, a saturation between 50 % and 75 % of the flat-field in the histogram is convenient.

Note
Flat-field correction can also be used as a destructive watermark and works for all f-stops.

To perform a flat-field correction, the following steps are necessary:

  1. You need a plain and equally "colored" calibration plate (e.g. white or gray)
  2. No single pixel may be saturated, which is why we recommend setting the maximum gray level in the brightest area to at most 75 % of the gray scale (i.e. to gray values below 190 when using 8-bit values)
  3. Choose a BayerXY in "Setting -> Base -> Camera -> GenICam -> Image Format Control -> PixelFormat".
  4. Set the (Filter-) "Mode = Calibrate" (Figure 6)
  5. Start a Live snap ("Acquire" with "Acquisition Mode = Continuous")
  6. Finally, you have to activate the correction: Set the (Filter-) "Mode = On"
  7. Save the settings including the correction data via "Action -> Capture Settings -> Save Active Device Settings"
    (Settings can be saved in the Windows registry or in a file)
Note
After restarting the camera, you have to reload the capture settings accordingly.

The filter snaps a number of images (according to the value of the CalibrationImageCount, e.g. 5) and averages the flat-field images to one correction image.

Figure 6: Image corrections: Host-based flat field correction
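A corresponding sketch for the flat-field correction; the ImageProcessing property names (flatFieldFilterMode, flatFieldFilterCalibrationImageCount) are assumptions based on the wxPropView labels and should be verified against the SDK:

#include <mvIMPACT_CPP/mvIMPACT_acquire.h>

using namespace mvIMPACT::acquire;

// Sketch: calibrate the flat-field correction from several live images, then activate it.
void calibrateFlatField( Device* pDev )
{
    ImageProcessing ip( pDev );
    ip.flatFieldFilterCalibrationImageCount.write( 5 );        // number of averaged images
    ip.flatFieldFilterMode.write( fffmCalibrateFlatField );    // "Calibrate"
    // ... acquire continuously while the camera looks at the calibration plate ...
    ip.flatFieldFilterMode.write( fffmOn );                    // activate the correction
}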



Optimizing the color fidelity of the camera

The purpose of this chapter is to optimize the color image of a camera, so that it looks as natural as possible on different displays and for human vision.

This implies some linear and nonlinear operations (e.g. display color space or Gamma viewing LUT) which are normally not necessary or recommended for machine vision algorithms. A standard monitor offers, for example, several display modes like sRGB, "Adobe RGB", etc., which reproduce the very same camera color differently.

It should also be noted that users can choose either

  • camera based settings and adjustments or
  • host based settings and adjustments or
  • a combination of both.

Camera based settings are advantageous because they achieve the highest calculation precision (independent of the transmission bit depth), the lowest latency (all calculations are performed on the fly in the FPGA) and a low CPU load (the host is not burdened with these tasks).

Host based settings save transmission bandwidth at the expense of accuracy or latency and CPU load. Especially performing gain, offset, and white balance in the camera while outputting RAW data to the host can be recommended.

Of course host based settings can be used with all families of cameras (e.g. also mvBlueFOX).

Host based settings are:

  • look-up table (LUTOperations)
  • color correction (ColorTwist)

To show the different color behaviors, we take a color chart as a starting point:

Figure 1: Color chart as a starting point

If we take a SingleFrame image without any color optimizations, an image can be like this:

Figure 2: SingleFrame snap without color optimization
Figure 3: Corresponding histogram of the horizontal white to black profile

As you can see,

  • saturation is missing,
  • white is more light gray,
  • black is more dark gray,
  • etc.
Note
You have to keep in mind that there are two types of images: the one generated in the camera and the other one displayed on the computer monitor. Up-to-date monitors offer different display modes with different color spaces (e.g. sRGB). According to the chosen color space, the display of the colors is different.

The following figure shows the way to a perfect colored image

Figure 4: The way to a perfect colored image

including these process steps:

  1. Do a Gamma correction (Luminance),
  2. make a White balance and
  3. Improve the Contrast.
  4. Improve Saturation, and use a "color correction matrix" for both
    1. the sensor and / or
    2. the monitor.

The following sections will describe the single steps in detail.

Step 1: Gamma correction (Luminance)

First of all, a Gamma correction (Luminance) can be performed to adapt the image to the way humans perceive light and color.

For this, you can change either

  • the exposure time,
  • the aperture or
  • the gain.

You can change the gain via wxPropView like the following way:

  1. Click on "Setting -> Base -> Camera". There you can find

    1. "AutoGainControl" and
    2. "AutoExposeControl".
    Figure 5: wxPropView: Setting -> Base -> Camera

    You can turn them "On" or "Off". With the auto controls enabled you can set limits for the automatic control; with them disabled you can set the exact value.

After gamma correction, the image will look like this:

Figure 6: After gamma correction
Figure 7: Corresponding histogram after gamma correction
Note
As mentioned above, you can do a gamma correction via ("Setting -> Base -> ImageProcessing -> LUTOperations"):

Figure 8: LUTOperations dialog

Just set "LUTEnable" to "On" and adapt the individual LUTs (LUT-0, LUT-1, etc.).
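A minimal sketch of the same LUTOperations gamma setup in source code; the property names (LUTEnable, LUTMode, getLUTParameter, gamma, gammaStartThreshold) are taken from the wxPropView labels and should be verified against the SDK headers:

#include <mvIMPACT_CPP/mvIMPACT_acquire.h>

using namespace mvIMPACT::acquire;

// Sketch: host-based gamma correction via the LUTOperations filter.
void enableGammaLUT( Device* pDev )
{
    ImageProcessing ip( pDev );
    ip.LUTEnable.write( bTrue );
    ip.LUTMode.write( LUTmGamma );
    LUTParameters& lut = ip.getLUTParameter( 0 ); // LUT-0
    lut.gamma.write( 2.2 );
    lut.gammaStartThreshold.write( 12 );          // avoid lifting noise in the darkest areas
}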

Step 2: White Balance

As you can see in the histogram, the colors red and blue are below green. Using green as a reference, we can optimize the white balance via "Setting -> Base -> ImageProcessing" ("WhiteBalanceCalibration"):

Please have a look at White balance of a camera device (color version) for more information about automatic white balance.

To adapt the single colors you can use the "WhiteBalanceSettings-1".

After optimizing white balance, the image will look like this:

Figure 9: After white balance
Figure 10: Corresponding histogram after white balance
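The same adjustment can be made in source code. A sketch, assuming the ImageProcessing white balance properties are named as below (please verify against the SDK headers):

#include <mvIMPACT_CPP/mvIMPACT_acquire.h>

using namespace mvIMPACT::acquire;

// Sketch: host-based white balance using the user setting "WhiteBalanceSettings-1".
void setupWhiteBalance( Device* pDev )
{
    ImageProcessing ip( pDev );
    ip.whiteBalance.write( wbpUser1 );                 // use "WhiteBalanceSettings-1"
    ip.whiteBalanceCalibration.write( wbcmNextFrame ); // calibrate on the next (grey) frame
    // ... or adjust the gains of the user setting manually:
    WhiteBalanceSettings& wbs = ip.getWBUserSetting( 0 );
    wbs.redGain.write( 1.25 );
    wbs.blueGain.write( 1.80 );
}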

Step 3: Contrast

Still, black appears more as a dark gray. To optimize the contrast you can use "Setting -> Base -> ImageProcessing -> LUTControl" as shown in Figure 8.

The image will look like this now:

Figure 11: After adapting contrast
Figure 12: Corresponding histogram after adapting contrast

Step 4: Saturation and Color Correction Matrix (CCM)

Still saturation is missing. To change this, the "Color Transformation Control" can be used ("Setting -> Base -> ImageProcessing -> ColorTwist"):

  1. Click on "Color Twist Enable" and
  2. click on "Wizard" to start the saturation via "Color Transformation Control" wizard tool (since firmware version 1.4.57).

    Figure 13: Selected Color Twist Enable and click on wizard will start wizard tool
  3. Now, you can adjust the saturation e.g. "1.1".

    Figure 14: Saturation via Color Transformation Control dialog
  4. Afterwards, click on "Enable".
  5. Since driver version 2.2.2, it is possible to set the special color correction matrices at
    1. the input (sensor),
    2. the output side (monitor) and
    3. the saturation itself using this wizard.
  6. Select the specific input and output matrix and
  7. click on "Enable".
  8. As you can see, the correction is done by the host ("Host Color Correction Controls").
    Note
    It is not possible to save the settings of the "Host Color Correction Controls" in the mvBlueFOX. Unlike in the case of Figure 14, the buttons to write the "Device Color Correction Controls" to the mvBlueFOX are not active.
  9. Finally, click on "Apply".

After the saturation, the image will look like this:

Figure 15: After adapting saturation
Figure 16: Corresponding histogram after adapting saturation
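The saturation adjustment made by the wizard corresponds to writing a color twist matrix. The matrix below is the standard YUV-based saturation matrix for a saturation factor K; the property names colorTwistEnable and colorTwistRow0..2 are assumptions modeled on the wxPropView labels:

#include <vector>
#include <mvIMPACT_CPP/mvIMPACT_acquire.h>

using namespace mvIMPACT::acquire;

// Sketch: host-based saturation (K = 1.1 corresponds to the example above).
void setSaturation( Device* pDev, double K )
{
    ImageProcessing ip( pDev );
    ip.colorTwistEnable.write( bTrue );
    // Rows of the 3x4 color twist matrix (last column is the offset).
    const std::vector<double> row0 = { 0.299 + 0.701 * K, 0.587 * ( 1.0 - K ), 0.114 * ( 1.0 - K ), 0.0 };
    const std::vector<double> row1 = { 0.299 * ( 1.0 - K ), 0.587 + 0.413 * K, 0.114 * ( 1.0 - K ), 0.0 };
    const std::vector<double> row2 = { 0.299 * ( 1.0 - K ), 0.587 * ( 1.0 - K ), 0.114 + 0.886 * K, 0.0 };
    ip.colorTwistRow0.write( row0 );
    ip.colorTwistRow1.write( row1 );
    ip.colorTwistRow2.write( row2 );
}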



Working with triggers

There are several use cases concerning trigger:



Using external trigger with CMOS sensors

Scenario

The CMOS sensors used in mvBlueFOX cameras support the following trigger modes:

If an external trigger signal occurs (e.g. high), the sensor will start to expose and read out one image. Now, if the trigger signal is still high, the sensor will start to expose and read out the next image (see Figure 1, upper part). This will lead to an acquisition just like using a continuous trigger.

Figure 1: External Trigger with CMOS sensors
  • ttrig = Time from trigger (internal or external) to integration start.

If you want to avoid this effect, you have to adjust the trigger signal. As you can see in Figure 1 (lower part), the high period of the trigger signal has to be smaller than the time an image will need (texpose + treadout).

Example

External synchronized image acquisition (high active)

Note
Using mvBlueFOX-MLC or mvBlueFOX-IGC, you have to select DigIn0 as the trigger source, because the camera has only one opto-coupled input. Only the TTL model of the mvBlueFOX-MLC has two I/O's.
  • Trigger modes
    • OnHighLevel:
      The high level of the trigger has to be shorter than the frame time. In this case, the sensor will take exactly one image. If the high time is longer, images will be acquired at the maximum possible frequency of the sensor for as long as the level stays high. The first image will start with the low-high edge of the signal. The integration time from the exposure register will be used.
    • OnLowLevel:
      The first image will start with the high-low edge of the signal.
    • OnHighExpose
      This mode is like OnHighLevel; however, the high time of the signal is used as the exposure time.
See also
Block diagrams with example circuits of the opto-isolated digital inputs and outputs can be found in Dimensions and connectors.
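A minimal sketch of the external, level-controlled trigger setup in source code; the enum names (ctmOnHighLevel, ctsDigIn0) are assumed to match the SDK headers:

#include <mvIMPACT_CPP/mvIMPACT_acquire.h>

using namespace mvIMPACT::acquire;

// Sketch: one image per high pulse on the opto-coupled input of an mvBlueFOX-MLC/-IGC.
void setupExternalTrigger( Device* pDev )
{
    CameraSettingsBlueFOX cs( pDev );
    cs.triggerMode.write( ctmOnHighLevel ); // high time shorter than the frame time -> exactly one image
    cs.triggerSource.write( ctsDigIn0 );    // the single opto-coupled input
    cs.expose_us.write( 10000 );            // exposure register used in OnHighLevel mode
}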



Working with HDR (High Dynamic Range Control)

There are several use cases concerning High Dynamic Range Control:



Adjusting sensor -x00w

Introduction

The HDR (High Dynamic Range) mode of the sensor -x00w increases the usable contrast range. This is achieved by dividing the integration time into two or three phases. The exposure time proportion of the phases can be set independently. Furthermore, you can set how much signal is accumulated in each phase.

Functionality

Figure 1: Diagram of the -x00w sensor's HDR mode

Description

  • "Phase 0"
    • During T1 all pixels are integrated until they reach the defined signal level of Knee Point 1.
    • If one pixel reaches the level, the integration will be stopped.
    • During T1 no pixel can reach a level higher than P1.
  • "Phase 1"
    • During T2 all pixels are integrated until they reach the defined signal level of Knee Point 2.
    • T2 is always smaller than T1 so that the percentage compared to the total exposure time is lower.
    • Thus, the signal increase during T2 is lower than during T1.
    • The max. signal level of Knee Point 2 is higher than of Knee Point 1.
  • "Phase 2"
    • During T3 all pixels are integrated until possible saturation.
    • T3 is always smaller than T2, so that the percentage compared to the total exposure time is again lower here.
    • Thus, the signal increase during T3 is lower than during T2.

In this way, darker pixels can be integrated during the complete integration time and the sensor reaches its full sensitivity. Pixels that are limited at the Knee Points lose a part of their integration time; the brighter they are, the more integration time they lose.

Figure 2: Integration time of different bright pixels

In the diagram you can see the signal curves of three pixels of different brightness. The slope depends on the light intensity and is therefore constant per pixel here (provided that the light intensity is constant over time). Because the very bright pixel is limited early at the signal levels S1 and S2, its total integration time is lower compared to the dark pixel. In practice, the parts of the integration time differ considerably: T1, for example, is 95 % of Ttotal, T2 only 4 % and T3 only 1 %. Thus, a strong attenuation of the very bright pixels can be achieved. However, if the integration thresholds are divided into three equal parts, i.e. S2 = 2 x S1 and S3 = 3 x S1, a pixel needs roughly a hundred times more brightness for the step from S2 to S3 than for the step from 0 to S1.

Using HDR with mvBlueFOX-x00w

Figure 3 shows the usage of the HDR mode. Here, an image sequence was created with integration times between 10 us and 100 ms. You can see the three slopes of the HDR mode. The "waves" result from the rounding during the three exposure phases; these can only be adjusted with a resolution of one sensor line period.

Figure 3: wxPropView HDR screenshot

Notes about the usage of the HDR mode with mvBlueFOX-x00w

  • In the HDR mode, the basic amplification is reduced by approx. 0.7 in order to utilize a larger dynamic range of the sensor.
  • If the manual gain is raised, this effect will be reversed.
  • Exposure times that are too low make no sense. A sensible lower limit is reached when the exposure time of the third phase reaches its possible minimum (one line period).

Possible settings using mvBlueFOX-x00w

Possible settings of the mvBlueFOX-x00w in HDR mode are:

"HDREnable":

  • "Off": Standard mode
  • "On": HDR mode on, reduced amplification:

-"HDRMode":

  • "Fixed": Fixed setting with 2 Knee Points. modulation Phase 0 .. 33% / 1 .. 66% / 2 .. 100%
  • "Fixed0": Phase 1 exposure 12.5% , Phase 2 31.25% of total exposure
  • "Fixed1": Phase 1 exposure 6.25% , Phase 2 1.56% of total exposure
  • "Fixed2": Phase 1 exposure 3.12% , Phase 2 0.78% of total exposure
  • "Fixed3": Phase 1 exposure 1.56% , Phase 2 0.39% of total exposure
  • "Fixed4": Phase 1 exposure 0.78% , Phase 2 0.195% of total exposure
  • "Fixed5": Phase 1 exposure 0.39% , Phase 2 0.049% of total exposure

"User": Variable setting of the Knee Point (1..2), threshold and exposure time proportion

  • "HDRKneePointCount": Number of Knee Points (1..2)
  • "HDRKneePoints"
    • "HDRKneePoint-0"
      • "HDRExposure_ppm": Proportion of Phase 0 compared to total exposure in parts per million (ppm)
      • "HDRControlVoltage_mV": Control voltage for exposure threshold of first Knee Point (3030mV is equivalent to approx. 33%)
    • "HDRKneePoint-1"
      • "HDRExposure_ppm": Proportion of Phase 1 compared to total exposure in parts per million (ppm)
      • "HDRControlVoltage_mV": Control voltage for exposure threshold of first Knee Point (2630mV is equivalent to approx. 66%)



Adjusting sensor -x02d (-1012d)

Introduction

The HDR (High Dynamic Range) mode of the Aptina sensor increases the usable contrast range. This is achieved by dividing the integration time in three phases. The exposure time proportion of the three phases can be set independently.

Functionality

To exceed the typical dynamic range, images are captured with 3 different exposure times at given ratios. The figure shows a multiple exposure capture using 3 different exposure times.

Figure 1: Multiple exposure capture using 3 different exposure times
Note
The longest exposure time (T1) represents the Exposure_us parameter you can set in wxPropView.

Afterwards, the signal is fully linearized before going through a compander to be output as a piece-wise linear signal, as the next figure shows.

Figure 2: Piece-wise linear signal

Description

Exposure ratios can be controlled by the program. Two ratios are used: R1 = T1/T2 and R2 = T2/T3.

Increasing R1 and R2 will increase the dynamic range of the sensor at the cost of lower signal-to-noise ratio (and vice versa).

Possible settings

Possible settings of the mvBlueFOX-x02d in HDR mode are:

  • "HDREnable":
    • "Off": Standard mode
    • "On": HDR mode on, reduced amplification
      • "HDRMode":
        • "Fixed": Fixed setting with exposure-time-ratios: T1 -> T2 ratio / T2 -> T3 ratio
        • "Fixed0": 8 / 4
        • "Fixed1": 4 / 8
        • "Fixed2": 8 / 8
        • "Fixed3": 8 / 16
        • "Fixed4": 16 / 16
        • "Fixed5": 16 / 32
Figure 3: wxPropView - Working with the HDR mode



Working with LUTs

There are several use cases concerning LUTs (Look-Up-Tables):



Introducing LUTs

Introduction

Look-Up-Tables (LUTs) are used to transform input data into a desired output format. For example, if you want to invert an 8 bit image, a Look-Up-Table will look like the following:

Figure 1: Look-Up-Table which inverts a pixel of an 8 bit mono image

I.e., a pixel which is white in the input image (value 255) will become black (value 0) in the output image.

All MATRIX VISION devices use a hardware based LUT which means that

  • no host CPU load is needed and
  • the LUT operations are independent of the transmission bit depth.

Setting the hardware based LUTs via LUT Control

Note
The mvBlueFOX cameras also feature a hardware based LUT. Although you have to set the LUT via "Setting -> Base -> ImageProcessing -> LUTOperations", you can choose where the processing takes place. For this purpose there is the parameter LUTImplementation: just select either "Software" or "Hardware".

Setting the Host based LUTs via LUTOperations

Host based LUTs are also available via "Setting -> Base -> ImageProcessing -> LUTOperations". Here, the changes will affect the 8 bit image data and the processing needs the CPU of the host system.


Three "LUTMode"s are available:

  • "Gamma"
    You can use "Gamma" to lift darker image areas and to flatten the brighter ones. This compensates the contrast of the object. The calculation is described here. It makes sense to set the "GammaStartThreshold" higher than 0 to avoid a too extreme lift or noise in the darker areas.
  • "Interpolated"
    With "Interpolated" you can set the key points of a characteristic line. You can defined the number of key points. The following figure shows the behavior of all 3 LUTInterpolationModes with 3 key points:
Figure 2: LUTMode "Interpolated" -> LUTInterpolationMode
  • "Direct"
    With "Direct" you can set the LUT values directly.

Example 1: Inverting an Image

To get an inverted 8 bit mono image like shown in Figure 1, you can set the LUT using wxPropView. After starting wxPropView and using the device,

  1. Set "LUTEnable" to "On" in "Setting -> Base -> ImageProcessing -> LUTOperations".
  2. Afterwards, set "LUTMode" to "Direct".
  3. Right-click on "LUTs -> LUT-0 -> DirectValues[256]" and select "Set Multiple Elements... -> Via A User Defined Value Range".
    This is one way to get an inverted result. It is also possible to use the "LUTMode" - "Interpolated".
  4. Now you can set the range from 0 to 255 and the values from 255 to 0 as shown in Figure 3.
Figure 3: Inverting an image using wxPropView with LUTMode "Direct"
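The same inversion can be set up in source code. A sketch, assuming the wxPropView property names (LUTEnable, LUTMode, DirectValues) map to the API as written here:

#include <vector>
#include <mvIMPACT_CPP/mvIMPACT_acquire.h>

using namespace mvIMPACT::acquire;

// Sketch: invert an 8 bit mono image via a direct LUT.
void setupInvertLUT( Device* pDev )
{
    ImageProcessing ip( pDev );
    ip.LUTEnable.write( bTrue );
    ip.LUTMode.write( LUTmDirect );
    LUTParameters& lut = ip.getLUTParameter( 0 ); // LUT-0
    std::vector<int> values( 256 );
    for( int i = 0; i < 256; i++ )
    {
        values[i] = 255 - i;                      // 0 -> 255, ..., 255 -> 0
    }
    lut.directValues.write( values );
}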



Saving data on the device

Note
As described in Storing and restoring settings, it is also possible to save the settings as an XML file on the host system. You can find further information about for example the XML compatibilities of the different driver versions in the mvIMPACT Acquire SDK manuals and the according setting classes: https://www.matrix-vision.com/manuals/SDK_CPP/classmvIMPACT_1_1acquire_1_1FunctionInterface.html (C++)

There are several use cases concerning device memory:



Creating user data entries

Basics about user data

It is possible to save arbitrary user specific data in the hardware's non-volatile memory. The number of possible entries depends on the length of the individual entries as well as the size of the device's non-volatile memory reserved for storing them:

  • mvBlueFOX,
  • mvBlueFOX-M,
  • mvBlueFOX-MLC,
  • mvBlueFOX3, and
  • mvBlueCOUGAR-X

currently offer 512 bytes of user accessible non-volatile memory, of which 12 bytes are needed to store header information, leaving 500 bytes for user specific data.

One entry will currently consume:
1 + <length_of_name (up to 255 chars)> + 2 + <length_of_data (up to 65535 bytes)> + 1 (access mode) bytes

as well as an optional:
1 + <length_of_password> bytes per entry if a password has been defined for this particular entry

It is possible to save either String or Binary data in the data property of each entry. When storing binary data, please note that this data will internally be stored in Base64 format, thus the amount of memory required is 4/3 times the binary data size.

The UserData can be accessed and created using wxPropView (the device has to be closed). In the section "UserData" you will find the entries and the following methods:

  • "CreateUserDataEntry"
  • "DeleteUserDataEntry"
  • "WriteDataToHardware"
Figure 1: wxPropView - section "UserData -> Entries"

To create a user data entry, you have to

  • Right click on "CreateUserDataEntry"
  • Select "Execute" from the popup menu.
    An entry will be created.
  • In "Entries" click on the entry you want to adjust and modify the data fields.
    To permanently commit a modification made with the keyboard the ENTER key must be pressed.
  • To save the data on the device, you have to execute "WriteDataToHardware". Please have a look at the "Output" tab in the lower right section of the screen as shown in Figure 2, to see if the write process returned with no errors. If an error occurs a message box will pop up.
Figure 2: wxPropView - analysis tool "Output"

Coding sample

If you want to use the UserData e.g. as a dongle mechanism (with binary data), wxPropView is not the right tool. In this case you have to handle the user data programmatically.

See also
mvIMPACT::acquire::UserDataEntry in mvIMPACT_Acquire_API_CPP_manual.chm.
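The following is only a rough sketch of such a program. The accessor pDev->getUserData() as well as the entry property names and method signatures are assumptions; please check the UserData / UserDataEntry classes in the SDK manual referenced above before using them:

#include <string>
#include <mvIMPACT_CPP/mvIMPACT_acquire.h>

using namespace mvIMPACT::acquire;

// Sketch: create a user data entry and write it to the device's non-volatile memory.
void writeUserDataEntry( Device* pDev )
{
    UserData& userData = pDev->getUserData();              // assumption: accessor name
    UserDataEntry entry = userData.createUserDataEntry();  // assumption: returns the new entry
    entry.name.write( "licenseKey" );                      // hypothetical entry name
    entry.data.write( "my-binary-or-string-payload" );     // String or (Base64 encoded) Binary data
    userData.writeToHardware();                            // corresponds to "WriteDataToHardware"
}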



Working with device features

There are several use cases concerning device features:



Working with several cameras simultaneously

There are several use cases concerning multiple cameras:



Using 2 mvBlueFOX-MLC cameras in Master-Slave mode

Scenario

If you want to have a synchronized stereo camera array (e.g. mvBlueFOX-MLC-202dG) with a rolling shutter master camera (e.g. mvBlueFOX-MLC-202dC), you can solve this task as follows:

  1. Please check that all mvBlueFOX cameras are using firmware version 1.12.16 or newer.
  2. Now, open wxPropView and set the master camera:

    Figure 1: wxPropView - Master camera outputs at DigOut 0 a frame synchronous V-Sync pulse


    Note
    Alternatively, it is also possible to use the Hardware Real-Time Controller (HRTC) to set the master camera. The following sample shows the HRTC program which sets the trigger signal and the digital output.
    The sample will lead to a constant frame rate of approx. 16.7 fps (50000 us + 10000 us = 60000 us per cycle; 1 / 60000 us = 16.67 Hz).

    Figure 2: wxPropView - HRTC program sets the trigger signal and the digital output


    Do not forget to set HRTC as the trigger source for the master camera.

    Figure 3: wxPropView - HRTC is the trigger source for the master camera


  3. Then, set the slave with wxPropView :

    Figure 4: wxPropView - Slave camera with TriggerMode "OnHighLevel" at DigIn 0

Connection using -UOW versions (opto-isolated inputs and outputs)

The connection of the mvBlueFOX cameras should be like this:

Figure 5: Connection with opto-isolated digital inputs and outputs


Symbol   Comment                   Input voltage    Min    Typ     Max    Unit
Uext.    External power                             3.3            30     V
Rout     Resistor digital output                           2              kOhm
Rin      Resistor digital input    3.3 V .. 5 V            0              kOhm
                                   12 V                    0.68           kOhm
                                   24 V                    2              kOhm

You can add further slaves.

Connection using -UTW versions (TTL inputs and outputs)

The connection of the mvBlueFOX cameras should be like this:

Figure 6: Connection with TTL digital inputs and outputs

For this case we offer a synchronization cable called "KS-MLC-IO-TTL 00.5".

Note
No further slaves are possible.
See also



Synchronize the cameras to expose at the same time

This can be achieved by connecting the same external trigger signal to one of the digital inputs of each camera, as shown in the following figure:

Figure 1: Electrical setup for sync. cameras

Each camera then has to be configured for external trigger, roughly as in the image below:

Figure 2: wxPropView - Setup for sync. cameras

This assumes that the image acquisition shall start with the rising edge of the trigger signal. Every camera must be configured like this. Each rising edge of the external trigger signal will then start the exposure of a new image at the same time on each camera. Every trigger signal that occurs during the exposure of an image will be silently discarded.
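A sketch of the per-camera configuration in source code, assuming the enum names ctmOnRisingEdge and ctsDigIn0 from the SDK headers:

#include <mvIMPACT_CPP/mvIMPACT_acquire.h>

using namespace mvIMPACT::acquire;

// Sketch: configure one camera to start its exposure on the rising edge of the shared trigger line.
void configureForSharedTrigger( Device* pDev )
{
    CameraSettingsBlueFOX cs( pDev );
    cs.triggerMode.write( ctmOnRisingEdge ); // exposure starts with the rising edge
    cs.triggerSource.write( ctsDigIn0 );     // the digital input all cameras share
    cs.expose_us.write( 5000 );              // use the same exposure time on every camera
}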



Working with the Hardware Real-Time Controller (HRTC)

Note
Please have a look at the Hardware Real-Time Controller (HRTC) chapter for basic information.

There are several use cases concerning the Hardware Real-Time Controller (HRTC):

  • "Using single camera":
  • "Using multiple cameras":



Single camera samples (HRTC)

Note
Please have a look at the Hardware Real-Time Controller (HRTC) chapter for basic information.

Using a single camera, the following samples are available:



Achieve a defined image frequency (HRTC)

Note
Please have a look at the Hardware Real-Time Controller (HRTC) chapter for basic information.

With the use of the HRTC, any feasible frequency with microsecond (us) accuracy is possible. The program to achieve this must roughly look like this (with the trigger mode set to ctmOnRisingEdge):

0. WaitClocks( <frame time in us> - <trigger pulse width in us>) )
1. TriggerSet 1
2. WaitClocks( <trigger pulse width in us> )
3. TriggerReset
4. Jump 0

So to get e.g. exactly 10 images per second from the camera, the program would look like this (of course the exposure time must then be smaller than or equal to the frame time in normal shutter mode):

0. WaitClocks 99000
1. TriggerSet 1
2. WaitClocks 1000
3. TriggerReset
4. Jump 0
Figure 1: wxPropView - Entering the sample "Achieve a defined image frequency"
See also
Download this sample as an rtp file: Frequency10Hz.rtp. To open the file in wxPropView, click on "Digital I/O -> HardwareRealTimeController -> Filename" and select the downloaded file. Afterwards, click on "int Load( )" to load the HRTC program.
Note
Please note the max. frame rate of the corresponding sensor!

To see a code sample (in C++) showing how this can be implemented in an application, see the description of the class mvIMPACT::acquire::RTCtrProgram (C++ developers).
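As an illustration, the 10 Hz program above could be written roughly as follows with mvIMPACT::acquire::RTCtrProgram. This is a sketch only; the step accessors and op-code enum names are assumptions modeled on the wxPropView labels and must be checked against the class documentation:

#include <mvIMPACT_CPP/mvIMPACT_acquire.h>

using namespace mvIMPACT::acquire;

// Sketch: build and start the 10 Hz HRTC program from the listing above.
void setup10HzProgram( Device* pDev )
{
    CameraSettingsBlueFOX cs( pDev );
    cs.triggerMode.write( ctmOnRisingEdge );
    cs.triggerSource.write( ctsRTCtrl );                  // the HRTC generates the trigger

    IOSubSystemBlueFOX io( pDev );
    RTCtrProgram* pProgram = io.getRTCtrProgram( 0 );
    pProgram->setProgramSize( 5 );

    RTCtrProgramStep* pStep = pProgram->programStep( 0 ); // 0. WaitClocks 99000
    pStep->opCode.write( rtctrlProgWaitClocks );
    pStep->clocks_us.write( 99000 );

    pStep = pProgram->programStep( 1 );                   // 1. TriggerSet 1
    pStep->opCode.write( rtctrlProgTriggerSet );
    pStep->frameID.write( 1 );

    pStep = pProgram->programStep( 2 );                   // 2. WaitClocks 1000
    pStep->opCode.write( rtctrlProgWaitClocks );
    pStep->clocks_us.write( 1000 );

    pStep = pProgram->programStep( 3 );                   // 3. TriggerReset
    pStep->opCode.write( rtctrlProgTriggerReset );

    pStep = pProgram->programStep( 4 );                   // 4. Jump 0
    pStep->opCode.write( rtctrlProgJumpLoc );
    pStep->address.write( 0 );

    pProgram->mode.write( rtctrlModeRun );                // assumption: enum name to start the program
}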

Delay the external trigger signal (HRTC)

Note
Please have a look at the Hardware Real-Time Controller (HRTC) chapter for basic information.
0. WaitDigin DigIn0->On
1. WaitClocks <delay time>
2. TriggerSet 0
3. WaitClocks <trigger pulse width>
4. TriggerReset
5. Jump 0

<trigger pulse width> should not be less than 100 us.
Figure 1: Delay the external trigger signal

As soon as digital input 0 switches on (step 0), the HRTC waits for the <delay time> (step 1) and then triggers the image exposure (steps 2 to 4). The exposure time is taken from the exposure setting of the camera. Step (5) jumps back to the beginning in order to wait for the next incoming signal.

Note
WaitDigIn waits for a state.
There has to be a waiting period between TriggerSet and TriggerReset.
If you are waiting for an external edge in an HRTC sequence like
WaitDigIn[On,Ignore]
WaitDigIn[Off,Ignore]
the minimum pulse width which can be detected by the HRTC is 5 us.

Creating double acquisitions (HRTC)

Note
Please have a look at the Hardware Real-Time Controller (HRTC) chapter for basic information.

If you need a double acquisition, i.e. take two images in a very short time interval, you can achieve this by using the HRTC.

With the following HRTC code, you will

  • trigger the first image using TriggerSet and, immediately after TriggerReset,
  • start the exposure of the second image with ExposeSet.
  • Now, you have to wait until the first image has been read out and then
  • issue the second TriggerSet.

The ExposureTime was set to 200 us.

0    WaitDigin  DigitalInputs[0] - On
1    TriggerSet  1
2    WaitClocks 200
3    TriggerReset
4    WaitClocks 5
5    ExposeSet
6    WaitClocks 60000
7    TriggerSet 2
8    WaitClocks 100
9    TriggerReset
10   ExposeReset
11   WaitClocks 60000
12   Jump 0



Take two images after one external trigger (HRTC)

Note
Please have a look at the Hardware Real-Time Controller (HRTC) chapter for basic information.
0. WaitDigin DigIn0->Off
1. TriggerSet 1
2. WaitClocks <trigger pulse width>
3. TriggerReset
4. WaitClocks <time between 2 acquisitions - 10us> (= WC1)
5. TriggerSet 2
6. WaitClocks <trigger pulse width>
7. TriggerReset
8. Jump 0

<trigger pulse width> should not be less than 100 us.
Figure 1: Take two images after one external trigger

This program generates two internal trigger signals after digital input 0 goes low. The time between these internal trigger signals is defined by step (4). Each image gets a different frame ID: the first one has the number 1, defined in command (1), and the second image will have the number 2. The application can query the frame ID of each image, so it is always known which image is the first one and which is the second one.


Take two images with different expose times after an external trigger (HRTC)

Note
Please have a look at the Hardware Real-Time Controller (HRTC) chapter for basic information.

The following code shows the solution in combination with a CCD model of the camera. With CCD models you have to set the exposure time using the trigger width.

0.  WaitDigin DigIn0->Off
1.  ExposeSet
2.  WaitClocks <expose time image1 - 10us> (= WC1)
3.  TriggerSet 1
4.  WaitClocks <trigger pulse width>
5.  TriggerReset
6.  ExposeReset
7.  WaitClocks <time between 2 acquisitions - expose time image1 - 10us> (= WC2)
8.  ExposeSet
9.  WaitClocks <expose time image2 - 10us> (= WC3)
10. TriggerSet 2
11. WaitClocks <trigger pulse width>
12. TriggerReset
13. ExposeReset
14. Jump 0

<trigger pulse width> should not be less than 100 us.
Figure 1: Take two images with different expose times after an external trigger
Note
Due to the internal loop to wait for a trigger signal, the WaitClocks call between "TriggerSet 1" and "TriggerReset" constitutes 100 us. For this reason, the trigger signal cannot be missed.
Before the ExposeReset you have to call TriggerReset, otherwise the normal flow will continue and the image data will be lost!
The sensor exposure time after the TriggerSet is 0 us.
See also
Download this sample as an rtp file: 2Images2DifferentExposureTimes.rtp with two consecutive exposure times (10ms / 20ms). To open the file in wxPropView, click on "Digital I/O -> HardwareRealTimeController -> Filename" and select the downloaded file. Afterwards, click on "int Load( )" to load the HRTC program. There are timeouts added in line 4 and line 14 to illustrate the different exposure times.

Using a CMOS model (e.g. the mvBlueFOX-MLC205), a sample with four consecutive exposure times (10ms / 20ms / 40ms / 80ms) triggered just by one hardware input signal would look like this:

0.  WaitDigin DigIn0->On
1.  TriggerSet
2.  WaitClocks 10000 (= 10 ms)
3.  TriggerReset
4.  WaitClocks 1000000 (= 1 s)
5.  TriggerSet
6.  WaitClocks 20000 (= 20 ms)
7.  TriggerReset
8.  WaitClocks 1000000 (= 1 s)
9.  TriggerSet
10. WaitClocks 40000 (= 40 ms)
11. TriggerReset
12. WaitClocks 1000000 (= 1 s)
13. TriggerSet
14. WaitClocks 80000 (= 80 ms)
15. TriggerReset
16. WaitClocks 1000000 (= 1 s)
17. Jump 0
See also
This second sample is also available as an rtp file: MLC205_four_images_diff_exp.rtp.



Edge controlled triggering (HRTC)

Note
Please have a look at the Hardware Real-Time Controller (HRTC) chapter for basic information.

To achieve edge controlled triggering, you can use the HRTC. Please follow these steps:

  1. First of all, you have to set the TriggerMode to OnHighLevel .
  2. Then, set the TriggerSource to RTCtrl .
Figure 1: wxPropView - TriggerMode and TriggerSource

Afterwards you have to configure the HRTC program:

  1. The HRTC program waits for a rising edge at the digital input 0 (step 1).
  2. If there is a rising edge, the trigger will be set (step 2).
  3. After a short wait time (step 3),
  4. the trigger will be reset (step 4).
  5. Now, the HRTC program waits for a falling edge at the digital input 0 (step 5).
  6. If there is a falling edge, the program will jump back to step 0 (step 6).
Note
The waiting time at step 0 is necessary to debounce the signal level at the input (the duration should be shorter than the frame time).
Figure 2: wxPropView - Edge controlled triggering using HRTC
See also
Download this sample as a capture settings file: MLC200wG_HRTC_TriggerFromHighLevelToEdgeControl.xml. How you can work with capture settings is described in the following chapter.

To see a code sample (in C++) showing how this can be implemented in an application, see the description of the class mvIMPACT::acquire::RTCtrProgram (C++ developers).

Multiple camera samples (HRTC)

Note
Please have a look at the Hardware Real-Time Controller (HRTC) chapter for basic information.

Using multiple cameras, the following samples are available:

Delay the expose start of the following camera (HRTC)

Note
Please have a look at the Hardware Real-Time Controller (HRTC) chapter for basic information.
The use case Synchronize the cameras to expose at the same time shows how you have to connect the cameras.

If a defined delay between the cameras is necessary, the HRTC can do the synchronization work.

In this case, one camera must be the master. The external trigger signal that will start the acquisition must be connected to one of the camera's digital inputs. One of the digital outputs will then be connected to the digital input of the next camera. So camera one uses its digital output to trigger camera two. How to connect the cameras to one another can also be seen in the following image:

Figure 1: Connection diagram for a defined delay from the exposure start of one camera relative to another

Assume that the external trigger is connected to digital input 0 of camera one and that digital output 0 of camera one is connected to digital input 0 of camera two. Each additional camera will then be connected to its predecessor in the same way camera two is connected to camera one. The HRTC of camera one then has to be programmed roughly like this:

0. WaitDigin DigIn0->On
1. TriggerSet 0
2. WaitClocks <trigger pulse width>
3. TriggerReset
4. WaitClocks <delay time>
5. SetDigout DigOut0->On
6. WaitClocks 100us
7. SetDigout DigOut0->Off
8. Jump 0

<trigger pulse width> should not be less than 100 us.

When the cameras are set up to start the exposure on the rising edge of the signal, <delay time> is of course the desired delay time minus <trigger pulse width>.

If more than two cameras shall be connected like this, every camera except the last one must run a program like the one discussed above. The delay times of course can vary.

Figure 2: Delay the expose start of the following camera