MATRIX VISION - mvBlueFOX3 Technical Documentation
Use cases

Table of Contents

GenICam to mvIMPACT Acquire code generator

Using the code generator

Any GenICam™ compliant device for which a GenICam™ GenTL compliant capture driver exists in mvIMPACT Acquire can be used via the mvIMPACT Acquire interface. However, which features a device supports cannot be known until the device has been initialised and its GenICam™ XML file has been processed, so it is not possible to provide a complete, static C++ wrapper for every device.

Therefore an interface code generator has been embedded into each driver capable of handling arbitrary devices. This code generator can be used to create a convenient C++ interface file that allows access to every feature offered by a device.

Warning
A code generated interface can result in incompatibility issues, because the interface name will be constructed from the version and product information that comes with the GenICam XML file (see comment in code sample below). To avoid incompatibility, please use the common interface from the namespace mvIMPACT::acquire::GenICam whenever possible.

To access the features needed to generate a C++ wrapper interface, a device has to be initialized first. Code can only be generated for the interface layout selected when the device was opened. If interfaces shall be created for more than one interface layout, the steps describing the creation of the wrapper files must be repeated for each interface layout.

Once the device has been opened the code generator can be accessed by navigating to "System Settings -> CodeGeneration".

Figure 1: wxPropView - Code Generation section


To generate code, first of all an appropriate file name should be selected. To prevent file name clashes, keep the following hints in mind when choosing a file name:

  • If several devices from different families or different vendors shall later be included in an application, each device or device family will need its own header file; thus the files should either be organized in different subfolders or must have unique names.
  • If a device shall be used with different interface layouts, again different header files must be generated.

If only a single device family is involved but 2 interface layouts will be used later a suitable file name might be "mvIMPACT_acquire_GenICam_Wrapper_DeviceSpecific.h".

For a more complex application involving different device families using the GenICam interface layout only, something like this might make sense:

  • mvIMPACT_acquire_GenICam_Wrapper_MyDeviceA.h
  • mvIMPACT_acquire_GenICam_Wrapper_MyDeviceB.h
  • mvIMPACT_acquire_GenICam_Wrapper_MyDeviceC.h
  • ...

Once a file name has been selected the code generator can be invoked by executing the "int GenerateCode()" method:

Figure 2: wxPropView - GenerateCode() method


The result of the code generator run will be written into the "LastResult" property afterwards:

Figure 3: wxPropView - LastResult property


Using the result of the code generator in an application

Each header file generated by the code generator will include "mvIMPACT_CPP/mvIMPACT_acquire.h", thus when an application is compiled with automatically generated files, these header files must have access to this file. This can easily be achieved by setting up the build environment / Makefile appropriately.

To avoid problems with multiple inclusion, the file will use an include guard built from the file name.

Within each header file the generated data types will reside in a sub-namespace of "mvIMPACT::acquire" in order to avoid name clashes when working with several different generated files in the same application. The namespace will automatically be generated from the ModelName tag and the file version tags in the device's GenICam XML file and the interface layout. For a device with a ModelName tag mvBlueIntelligentDevice and a file version of 1.1.0 something like this will be created:

namespace mvIMPACT {
   namespace acquire {
      namespace DeviceSpecific { // the name of the interface layout used during the process of code creation
         namespace MATRIX_VISION_mvBlueIntelligentDevice_1 { // this name will be constructed from the version and product
                                                             // information that comes with the GenICam XML file. As defined
                                                             // by the GenICam standard, different major versions of a device's
                                                             // XML file may not be compatible, thus different interface files should be created here


// all code will reside in this inner namespace

         } // namespace MATRIX_VISION_mvBlueIntelligentDevice_1
      } // namespace DeviceSpecific
   } // namespace acquire
} // namespace mvIMPACT

In the application the generated header files can be used like normal include files:

#include <string>
#include <mvIMPACT_CPP/mvIMPACT_acquire.h>
#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam_Wrapper_DeviceSpecific.h>

Now, to access data types from the header files, the namespaces must of course be taken into account. When there is just a single automatically generated interface, the easiest approach is probably an appropriate using statement:

using namespace mvIMPACT::acquire::DeviceSpecific::MATRIX_VISION_mvBlueIntelligentDevice_1;

If several files created for different devices shall be used and these devices define similar features in slightly different ways, this might however result in name clashes and/or unexpected behaviour. In that case the namespaces should be specified explicitly when creating data type instances from the header file in the application:

//-----------------------------------------------------------------------------
void fn( Device* pDev )
//-----------------------------------------------------------------------------
{
  mvIMPACT::acquire::DeviceSpecific::MATRIX_VISION_mvBlueIntelligentDevice_1::DeviceControl dc( pDev );
  if( dc.timestampLatch.isValid() )
  {
    dc.timestampLatch.call();
  }
}

When working with a using statement the same code can be written like this as well:

//-----------------------------------------------------------------------------
void fn( Device* pDev )
//-----------------------------------------------------------------------------
{
  DeviceControl dc( pDev );
  if( dc.timestampLatch.isValid() )
  {
    dc.timestampLatch.call();
  }
}



Introducing acquisition / recording possibilities

There are several use cases concerning the acquisition / recording possibilities of the camera:



Acquiring a number of images

As described in the chapter Acquisition Control, if you want to acquire a number of images, set "Setting -> Base -> Camera -> Acquisition Control -> Acquisition Mode" to "MultiFrame" and set the "Acquisition Frame Count".

Afterwards, if you start the acquisition via the "Acquire" button, the camera will deliver the specified number of images.

The "MultiFrame" functionality can be combined with an external signal to start the acquisition.

There are several ways and combinations possible, e.g.:

  • A trigger starts the acquisition (Figure 1).
  • A trigger starts the acquisition start event and a second trigger starts the grabbing itself (Figure 2).
Figure 1: Starting an acquisition after one trigger event

For this scenario, you have to use the "Setting -> Base -> Camera -> Acquisition Control -> Trigger Selector" "AcquisitionStart".

The following figure shows how to set up the scenario shown in Figure 1 with wxPropView:

Figure 2: wxPropView - Setting acquisition of a number of images started by an external signal

A rising edge at line 4 will start the acquisition of 20 images.
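
For reference, a minimal C++ sketch of this configuration using the GenICam interface layout might look like the following. This is a hedged sketch, not taken verbatim from an official sample; adapt the trigger line and frame count to your setup:

#include <mvIMPACT_CPP/mvIMPACT_acquire.h>
#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

using namespace mvIMPACT::acquire;

// Sketch: acquire 20 frames after a rising edge on Line4 (assumes the device
// offers the 'AcquisitionStart' trigger and the 'MultiFrame' acquisition mode).
void setupTriggeredMultiFrame( Device* pDev )
{
  GenICam::AcquisitionControl ac( pDev );
  ac.acquisitionMode.writeS( "MultiFrame" );
  ac.acquisitionFrameCount.write( 20 );
  ac.triggerSelector.writeS( "AcquisitionStart" );
  ac.triggerMode.writeS( "On" );
  ac.triggerSource.writeS( "Line4" );
  ac.triggerActivation.writeS( "RisingEdge" );
}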



Recording sequences with pre-trigger

What is pre-trigger?

With pre-trigger it is possible to record frames before and after a trigger event.

How it works

To use this functionality you have to set the "mv Acquisition Memory Mode" to "mvPretrigger". Then you can define the number of frames which will be recorded before the trigger event occurs:

Figure 1: wxPropView - setting pre-trigger

Afterwards, you have to define the "AcquisitionStart" or "AcquisitionActive" trigger event which starts the regular camera streaming. In Figure 1 this is Line4.

Now, start the camera by pressing "Live" and generate the acquisition event.

The camera will output the buffered pre-trigger frames as fast as possible, followed by the frames acquired in live mode, until the frame rate is back in sync:

Figure 2: wxPropView - recorded / output images
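
A hedged C++ sketch of this configuration could look like the following. The property names mvAcquisitionMemoryMode and mvPretriggerFrameCount are assumed to be available in AcquisitionControl on devices supporting this feature:

#include <mvIMPACT_CPP/mvIMPACT_acquire.h>
#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

using namespace mvIMPACT::acquire;

// Sketch: record 10 frames before the acquisition start trigger on Line4.
void setupPretrigger( Device* pDev )
{
  GenICam::AcquisitionControl ac( pDev );
  ac.mvAcquisitionMemoryMode.writeS( "mvPretrigger" );
  ac.mvPretriggerFrameCount.write( 10 );         // frames kept before the trigger event (assumed property name)
  ac.triggerSelector.writeS( "AcquisitionStart" );
  ac.triggerMode.writeS( "On" );
  ac.triggerSource.writeS( "Line4" );
  ac.triggerActivation.writeS( "RisingEdge" );
}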



Creating acquisition sequences (Sequencer Control)

Introduction

As mentioned in GenICam and Advanced Features section of this manual, the Sequencer Mode is a feature to define feature sets with specific settings. The sets are activated by a user-defined trigger source and event.

Note
At the moment, the Sequencer Mode is only available for MATRIX VISION cameras with CCD sensors and Sony's CMOS sensors. Please consult the "Device Feature and Property List" of each sensor to get a summary of the actually supported features.

The following features are currently available for using them inside the sequencer control:

  • BinningHorizontal
  • BinningVertical
  • CounterDuration (can be used to configure a certain set of sequencer parameters to be applied for the next CounterDuration frames)
  • DecimationHorizontal
  • DecimationVertical
  • ExposureTime
  • Gain
  • Height
  • OffsetX
  • OffsetY
  • Width
  • UserOutputValueAll
  • UserOutputValueAllMask
  • Multiple conditional sequencer paths.

The Sequencer Control uses Counter And Timer Control Counter1 to work. If you already preset Counter1 and you save a new acquisition sequence, the settings of Counter1 will be overwritten.

Note
Configured sequencer programs are stored as part of the User Sets like any other feature.

Creating a sequence using the Sequencer Control in wxPropView

In this sample, we define an acquisition sequence with five different exposure times on the device, whereby the last step is repeated five times. We also assign the digital outputs 0..3 to the sets accordingly - these can, for example, be used as flash signals. All the configuration is done on the device itself, so after finishing the configuration and starting the acquisition, the device will apply the parameter changes when necessary. The host application then only needs to acquire the images. This results in a much higher overall frame rate compared to applying these changes on a frame-by-frame basis from the host application.

  • 1000 us
  • 5000 us
  • 10000 us
  • 20000 us
  • 50000 us (5x)

This will result in the following flow diagram:

Figure 1: Working diagram of the sample

As a consequence the following exposure times will be used to expose images inside an endless loop once the sequencer has been started:

  • Frame 1: 1000 us
  • Frame 2: 5000 us
  • Frame 3: 10000 us
  • Frame 4: 20000 us
  • Frame 5: 50000 us
  • Frame 6: 50000 us
  • Frame 7: 50000 us
  • Frame 8: 50000 us
  • Frame 9: 50000 us
  • Frame 10: 1000 us
  • Frame 11: 5000 us
  • Frame 12: 10000 us
  • ...

So the actual sequence that will be executed on the device later on will look like this:

while( sequencerMode == On )
{
  take 1 image using set 0
  take 1 image using set 1
  take 1 image using set 2
  take 1 image using set 3
  take 5 images using set 4
}
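
Expressed as a hedged C++ sketch using the SequencerControl interface (error handling omitted; whether CounterDuration has to be enabled as a sequencer feature and how Counter1 is linked to the sets may depend on the device/firmware):

#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

using namespace mvIMPACT::acquire;

GenICam::SequencerControl sc( pDev );
GenICam::AcquisitionControl ac( pDev );
GenICam::CounterAndTimerControl ctc( pDev );

sc.sequencerMode.writeS( "Off" );
sc.sequencerConfigurationMode.writeS( "On" );
sc.sequencerFeatureSelector.writeS( "ExposureTime" );
sc.sequencerFeatureEnable.write( bTrue );
sc.sequencerFeatureSelector.writeS( "CounterDuration" );
sc.sequencerFeatureEnable.write( bTrue );

const double exposureTimes_us[5] = { 1000., 5000., 10000., 20000., 50000. };
for( int64_type i = 0; i < 5; i++ )
{
  sc.sequencerSetSelector.write( i );
  ac.exposureTime.write( exposureTimes_us[i] );
  sc.sequencerSetNext.write( ( i + 1 ) % 5 ); // set 4 jumps back to set 0
  if( i == 4 )
  {
    // repeat the last set 5 times before jumping back to set 0
    sc.sequencerTriggerSource.writeS( "Counter1End" );
    ctc.counterSelector.writeS( "Counter1" );
    ctc.counterDuration.write( 5 );
  }
  sc.sequencerSetSave.call();
}
sc.sequencerSetStart.write( 0 );
sc.sequencerConfigurationMode.writeS( "Off" );
sc.sequencerMode.writeS( "On" );
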
  • There are 2 C++ examples called GenICamSequencerUsage and GenICamSequencerParameterChangeAtRuntime that show how to control the sequencer from an application. They can be found in the Examples section of the C++ interface documentation

The following steps are needed to configure the device as desired:

Note
This is the SFNC way of creating an acquisition sequence and consequently the way you have to program it. However, wxPropView offers a wizard to define an acquisition sequence in a much easier way.
  1. First, switch into the "Configuration Mode": "Sequencer Configuration Mode" = "On". Only then can the sequencer of a device be configured.

    Figure 2: wxPropView - Sequencer Configuration Mode = On


  2. Set the "Sequencer Feature Selector", if it is not already active (pink box, figure 3): "Sequencer Feature Selector" = "ExposureTime" and "Sequencer Feature Enable" = "1".
  3. Set the "Sequencer Feature Selector" for the duration counter (pink box, figure 4): "Sequencer Feature Selector" = "CounterDuration" and "Sequencer Feature Enable" = "1".
  4. Then, each sequencer set must be selected by the "Sequencer Set Selector" (orange box, figure 3): "Sequencer Set Selector" = "0".

    Figure 3: wxPropView - Sequencer set 0


  5. Set the following sequencer set using "Sequencer Set Next" (brown box, figure 3): "Sequencer Set Next" = "1".
  6. Set the "Exposure Time" (red box, figure 3): "Exposure Time" = "1000".
  7. Finally, save the "Sequencer Set" (green box, figure 3): "int SequencerSetSave()". This ends the configuration of this sequencer set and all the relevant parameters have been stored inside the device's RAM.
  8. Set the "UserOutputValueAllMask" (purple box, figure 4) to a suitable value. In this case we want to use all UserOutputs, so we set it to "0xf".
  9. Set the "UserOutputValueAll" (red box, figure 4): "UserOutputValueAll" = "0x1".

    Figure 4: wxPropView - DigitalIOControl - Set UserOutputValueAll of Line3


  10. Repeat these steps for the following 3 sequencer sets (Exposure Times 5000, 10000, 20000; UserOutputValueAll 0x2, 0x4, 0x8).
  11. For the last sequencer set, set the desired sequencer set with "Sequencer Set Selector" (orange box, figure 5): "Sequencer Set Selector" = "4".
  12. Set the following sequencer set with "Sequencer Set Next" and trigger source with "Sequencer Trigger Source" (brown box, figure 5):
    "Sequencer Set Next" = "0". This will close the loop of sequencer sets by jumping from here back to the first one.
    "Sequencer Trigger Source" = "Counter1End".
  13. Set the "Exposure Time" (red box, figure 5): "Exposure Time" = "50000".

    Figure 5: wxPropView - Sequencer set 4


  14. Set the "Counter Duration" in "Counter And Timer Control" (red box, figure 6): "Counter Duration" = "5".

    Figure 6: wxPropView - "Sequencer Mode" = "On"


  15. As there are only four UserOutputs we decided not to show sequencer set "4" on the output lines.
  16. Finally, save the "Sequencer Set" (green box, figure 3): "int SequencerSetSave()".
  17. Leave the "Configuration Mode" (red box, figure 4): "Sequencer Configuration Mode" = "Off".
  18. Activate the "Sequencer Mode" (red box, figure 4): "Sequencer Mode" = "On".

    Figure 7: wxPropView - "Sequencer Mode" = "On"


Note
The "Sequencer Mode" will overwrite the current device settings.

You will now see that the sequencer sets are processed endlessly. Via the chunk data (activate chunk data via "Setting -> Base -> Camera -> GenICam -> Chunk Data Control" - activate "Chunk Mode Active"), the "Info Plot" of the analysis grid can be used to visualize the exposure times:

Figure 7: wxPropView - Info Plot shows the exposure times

Adapting the active time on the output lines via logic gates

If you do not want to see the whole active time of a given sequencer set on the output lines but only its exposure time, you can combine your signal with the logic gates in mvLogicGateControl. Figure 8 shows sample settings for Line3:

Figure 8: wxPropView - mvLogicGateControl

This produces the following output on the output lines:

Figure 9: UserOutputs via mvLogicGateControl on output lines

These signals could be used as flash signals for separate sequencer sets.

You can program this as follows:

#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

GenICam::DigitalIOControl dio( pDev );
dio.userOutputValueAllMask.write( 0xF );
dio.userOutputValueAll.write( 0x1 ); // 0x2, 0x4, 0x8, 0x0

GenICam::mvLogicGateControl mvlgc( pDev );
mvlgc.mvLogicGateANDSelector.writeS("mvLogicGateAND1");
mvlgc.mvLogicGateANDSource1.writeS("UserOutput0");
mvlgc.mvLogicGateANDSource2.writeS("ExposureActive");
mvlgc.mvLogicGateORSelector.writeS("mvLogicGateOR1");
mvlgc.mvLogicGateORSource1.writeS("mvLogicGateAND1Output");
mvlgc.mvLogicGateORSource2.writeS("Off");
mvlgc.mvLogicGateORSource3.writeS("Off");
mvlgc.mvLogicGateORSource4.writeS("Off");

dio.lineSelector.writeS("Line0");
dio.lineSource.writeS("mvLogicGateOR1Output");

// To output the UserOutputs directly on the output lines would be like this:
// dio.lineSource.writeS("UserOutput0");

Using the Sequencer Control wizard

Since
mvIMPACT Acquire 2.18.0

wxPropView offers a wizard for the Sequencer Control usage:

Figure 10: wxPropView - Wizard button

The wizard can be used to get a comfortable overview of the settings of the sequencer sets and to create and edit the sequencer sets in a much easier way:

Figure 11: wxPropView - Sequencer Control wizard

Just

  • select the desired properties,
  • select the desired Set tabs,
  • set the properties in the table directly, and
  • "Auto-Assign Displays To Sets", if you like to show each sequence set in a different display.

Do not forget to save the settings at the end.

Working with sequence paths

Since
mvIMPACT Acquire 2.28.0

It is possible to define sets with a maximum of two active paths. The following diagram shows two paths defined in "Set 0". "Path 0" ("SequencerPathSelector = 0") is the "standard" path, which in this configuration loops, and "Path 1" ("SequencerPathSelector = 1") will jump to "Set 1" after it has been activated via a "RisingEdge" ("SequencerTriggerActivation = RisingEdge") signal at "UserOutput0" ("SequencerTriggerSource = UserOutput0"). "UserOutput0" can be connected, for example, to one of the digital input lines of the camera:

Figure 12: Working diagram of a sample with sequence paths

There are some specifications concerning the sequencer path feature:

  • A path is inactive as soon as the "SequencerTriggerSource" is Off.
  • If none of the paths is triggered or both paths are inactive, the sequencer will remain in the current set.
  • If both paths were triggered, the path whose trigger happened first will be followed.
  • As the next sequencer set's parameters (like ExposureTime) need to be prepared beforehand, the set sequence might not seem straightforward at first glance. The sequencer will always need one frame to switch to the new set; this frame will use the already prepared set.

Programming a sequence with paths using the Sequencer Control

First the sequencer has to be configured.

  #include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

  GenICam::SequencerControl sc( pDev );
  GenICam::AcquisitionControl ac( pDev );
  TDMR_ERROR result = DMR_NO_ERROR;

  // general sequencer settings
  sc.sequencerMode.writeS( "Off" );
  sc.sequencerConfigurationMode.writeS( "On" );
  sc.sequencerFeatureSelector.writeS( "ExposureTime" );
  sc.sequencerFeatureEnable.write( bTrue );
  sc.sequencerFeatureSelector.writeS( "CounterDuration" );
  sc.sequencerFeatureEnable.write( bFalse );
  sc.sequencerFeatureSelector.writeS( "Gain" );
  sc.sequencerFeatureEnable.write( bFalse );
  sc.sequencerSetStart.write( 0 );

  // set0
  sc.sequencerSetSelector.write( 0LL );
  ac.exposureTime.write( 1000 );
  // set0 path0
  sc.sequencerPathSelector.write( 0LL );
  sc.sequencerTriggerSource.writeS( "ExposureEnd" );
  sc.sequencerSetNext.write( 0LL );
  // set0 path1
  sc.sequencerPathSelector.write( 1LL );
  sc.sequencerTriggerSource.writeS( "UserOutput0" );
  sc.sequencerTriggerActivation.writeS( "RisingEdge" );
  sc.sequencerSetNext.write( 1LL );
  // save set
  if( ( result = static_cast<TDMR_ERROR>( sc.sequencerSetSave.call() ) ) != DMR_NO_ERROR )
  {
    std::cout << "An error was returned while calling function '" << sc.sequencerSetSave.displayName() << "' on device " << pDev->serial.read()
              << "(" << pDev->product.read() << "): " << ImpactAcquireException::getErrorCodeAsString( result ) << endl;
  }

  // set1
  sc.sequencerSetSelector.write( 1LL );
  ac.exposureTime.write( 5000 );
  // set1 path0
  sc.sequencerPathSelector.write( 0LL );
  sc.sequencerTriggerSource.writeS( "ExposureEnd" );
  sc.sequencerSetNext.write( 0LL );
  // set1 path1
  sc.sequencerPathSelector.write( 1LL );
  sc.sequencerTriggerSource.writeS( "Off" );
  // save set
  if( ( result = static_cast<TDMR_ERROR>( sc.sequencerSetSave.call() ) ) != DMR_NO_ERROR )
  {
    std::cout << "An error was returned while calling function '" << sc.sequencerSetSave.displayName() << "' on device " << pDev->serial.read()
              << "(" << pDev->product.read() << "): " << ImpactAcquireException::getErrorCodeAsString( result ) << endl;
  }

  // final general sequencer settings
  sc.sequencerConfigurationMode.writeS( "Off" );
  sc.sequencerMode.writeS( "On" );

Then it can later be triggered during runtime.

  GenICam::DigitalIOControl dic( pDev );
  dic.userOutputSelector.write( 0 );
  dic.userOutputValue.write( bTrue );
  dic.userOutputValue.write( bFalse );

This will set an internal event that will cause the sequencer to use set0-path1 at the next possible time, i.e. the next time we are in set0.

There is a C++ example called GenICamSequencerUsageWithPaths that shows how to control the sequencer with paths from an application. It can be found in the Examples section of the C++ interface documentation



Generating very long exposure times

Basics

At the moment the exposure time is limited to a maximum of between 1 and 20 seconds, depending on certain internal sensor register restrictions, so each device might report a different maximum exposure time.

Since
mvIMPACT Acquire 2.28.0

Firmware version 2.28 contained a major overhaul here, so updating to at least this version can result in a much higher maximum exposure time. However, current sensor controllers can be configured to use even longer exposure times if needed, by using one of the device's timers to create an external exposure signal that is fed back into the sensor. This use case explains how this can be done.

This approach of setting up long exposure times requires the sensor of the camera to allow the configuration of an external signal to define the length of the exposure time, so only devices offering the ExposureMode TriggerWidth can be used for this setup.

Note
The maximum exposure time in microseconds that can be achieved in this configuration is the maximum value offered by the timer used.

With GenICam compliant devices that support all the needed features the setup is roughly like this:

  1. Select "Setting -> Base -> Camera -> GenICam -> Counter And Timer Control -> Timer Selector -> Timer 1" and .
  2. set "Timer Trigger Source" = "UserOutput0".
  3. set "Timer Trigger Activation" = "RisingEdge".
    I.e. a rising edge on UserOutput0 will start Timer1.
  4. Then set the "Timer Duration" property to the desired exposure time in us.
  5. In "Setting -> Base -> Camera -> GenICam -> Acquisition Control" set the Trigger Selector = "FrameStart".
    I.e. the acquisition of one frame will start when
  6. Timer1 is active: "Trigger Source" = "Timer1Active".
  7. Exposure time will be the trigger width: "Exposure Mode" = "TriggerWidth".

The following diagram illustrates all the signals involved in this configuration:

Figure 1: Long exposure times using GenICam

To start the acquisition of one frame a rising edge must be detected on UserOutput0 in this example but other configurations are possible as well.

Setting up the device

The easiest way to define a long exposure time is by using a single timer. The length of the timer's active signal is then used as the trigger signal and the sensor is configured to expose while the trigger signal is active. This allows defining the exposure time with microsecond precision up to the maximum value of the timer register. With a 32 bit timer register this results in a maximum exposure time of roughly 4295 seconds (roughly 71.5 minutes). When writing code, e.g. in C#, this could look like this:

private static void configureDevice(Device pDev, int exposureSec, GenICam.DigitalIOControl ioc)
{
  try
  {
    // establish access to the CounterAndTimerControl interface
    GenICam.CounterAndTimerControl ctc = new mv.impact.acquire.GenICam.CounterAndTimerControl(pDev);
    // set TimerSelector to Timer1 and TimerTriggerSource to UserOutput0
    ctc.timerSelector.writeS("Timer1");
    ctc.timerTriggerSource.writeS("UserOutput0");
    ctc.timerTriggerActivation.writeS("RisingEdge");

    // Set timer duration for Timer1 to value from user input
    ctc.timerDuration.write(exposureSec * 1000000);

    // set userOutputSelector to UserOutput0 and set UserOutput0 to inactive. We will later generate a pulse here to initiate the exposure
    ioc.userOutputSelector.writeS("UserOutput0");
    ioc.userOutputValue.write(TBoolean.bFalse);

    // establish access to the AcquisitionControl interface
    GenICam.AcquisitionControl ac = new mv.impact.acquire.GenICam.AcquisitionControl(pDev);
    // set TriggerSelector to FrameStart and try to set ExposureMode to TriggerWidth
    ac.triggerSelector.writeS("FrameStart");
    // set TriggerSource for FrameStart to Timer1Active and activate TriggerMode
    ac.triggerSource.writeS("Timer1Active");
    ac.triggerMode.writeS("On");

    // expose as long as we have a high level from Timer1
    ac.exposureMode.writeS("TriggerWidth");
  }
  catch (Exception e)
  {
    Console.WriteLine("ERROR: Selected device does not support all features needed for this long time exposure approach: {0}, terminating...", e.Message);
    System.Environment.Exit(1);
  }
}
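
Once configured, each exposure is started by generating a rising edge on UserOutput0. A hedged C++ sketch of this runtime step (the C# interface exposes the same property names):

// Sketch: toggle UserOutput0 to produce the rising edge that starts Timer1
// and therefore the exposure of one frame.
GenICam::DigitalIOControl ioc( pDev );
ioc.userOutputSelector.writeS( "UserOutput0" );
ioc.userOutputValue.write( bTrue );  // rising edge -> Timer1 starts
ioc.userOutputValue.write( bFalse ); // reset so the next frame can be triggered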
Note
Make sure that you adjust the ImageRequestTimeout_ms either to 0 (infinite)(this is the default value) or to a reasonable value that is larger than the actual exposure time in order not to end up with timeouts resulting from the buffer timeout being smaller than the actual time needed for exposing, transferring and capturing the image:
ImageRequestTimeout_ms = 0 # or reasonable value
See also
Counter And Timer Control
Digital I/O Control
Acquisition Control



Working with multiple AOIs (mv Multi Area Mode)

Since
mvIMPACT Acquire 2.18.3

Introduction

A special feature of Pregius sensors (a.k.a. IMX) from Sony is the possibility to define multiple AOIs (Areas of Interest - a.k.a. ROIs - Regions of Interest) and to transfer them at the same time. Because many applications just need one or several specific parts of an image to be checked, this functionality can increase the frame rate.

Once activated, the "mv Multi Area Mode" allows you, depending on the sensor, to define up to eight AOIs (mvArea0 to mvArea7) in one image. There are several parameters in combination with the AOIs, which are illustrated in the following figure:

Figure 1: Multiple AOIs principle

The "Resulting Offset X" and "Resulting Offset Y" indicates the starting point of the specific AOI in the output image. To complete the rectangular output image, the "missing" areas are filled up with the image data horizontally and vertically. We recommend to use the wizard as a starting point - the wizard provides a live preview of the final merged output image.

Using wxPropView

To create multiple AOIs with wxPropView, follow these steps:

  1. Start wxPropView and
  2. connect to the camera.
  3. Then change in "Setting -> Base -> Camera -> GenICam -> Image Format Control" the
    "mv Multi Area Mode"
    to "mvMultiAreasCombined".
    Afterwards, "mv Area Selector" is available.
  4. Now, select the area a.k.a. AOI you want to create via "mv Area Selector", e.g. "mvArea3" and
  5. set the parameters "mv Area Width", "mv Area Height", "mv Area Offset X", and "mv Area Offset Y" to your needs.
  6. Activate the area a.k.a. AOI by checking the box of "mv Area Enable".

    Figure 2: wxPropView - Multiple AOIs

  7. Finally, start the acquisition by clicking the button "Acquire".

Using the Multi AOI wizard

Since
mvIMPACT Acquire 2.19.0

wxPropView offers a wizard for the Multi AOI usage:

Figure 3: wxPropView - Wizard menu

The wizard can be used to get a comfortable overview of the settings of the AOIs and to create and set the AOIs in a much easier way:

Figure 4: wxPropView - Multi AOI wizard

Just

  • select the desired mvArea tabs,
  • set the properties like offset, width, and height in the table directly, and
  • confirm the changes at the end using the Ok button.

The live image shows the created AOIs and the merged or "missing" areas which are used to get the final rectangular output image.

Figure 5: wxPropView - Multi AOI wizard - Live image

Programming multiple AOIs

#include <mvIMPACT_CPP/mvIMPACT_acquire.h>
#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

... 
    GenICam::ImageFormatControl ifc( pDev );
    ifc.mvMultiAreaMode.writeS( "mvMultiAreasCombined" );
    ifc.mvAreaSelector.writeS( "mvArea0" );
    ifc.mvAreaEnable.write( bTrue );
    ifc.mvAreaOffsetX.write( 0 );
    ifc.mvAreaOffsetY.write( 0 );
    ifc.mvAreaWidth.write( 256 );
    ifc.mvAreaHeight.write( 152 );
    ifc.mvAreaSelector.writeS( "mvArea1" );
    ifc.mvAreaEnable.write( bFalse );
    ifc.mvAreaSelector.writeS( "mvArea2" );
    ifc.mvAreaEnable.write( bFalse );
    ifc.mvAreaSelector.writeS( "mvArea3" );
    ifc.mvAreaEnable.write( bTrue );
    ifc.mvAreaOffsetX.write( 1448 );
    ifc.mvAreaOffsetY.write( 912 );
    ifc.mvAreaWidth.write( 256 );
    ifc.mvAreaHeight.write( 152 );
...



Working with burst mode buffer

If you want to acquire a number of images at the sensor's maximum frame rate while at the same time transferring the images at a lower frame rate, you can use the internal memory of the mvBlueFOX3.

Figure 1: Principle of burst mode buffering of images
Note
The maximum buffer size can be found in "Setting -> Base -> Camera -> GenICam -> Acquisition Control -> mv Acquisition Memory Max Frame Count".

To create a burst mode buffering of images, please follow these steps:

  1. Set image acquisition parameter ("Setting -> Base -> Camera -> GenICam -> Acquisition Control -> mv Acquisition Frame Rate Limit Mode") to "mvDeviceMaxSensorThroughput".
  2. Finally, set the acquisition parameter "mv Acquisition Frame Rate Enable" to "Off".

    Figure 2: wxPropView - Setting the bandwidth using "mv Acquisition Frame Rate Limit Mode"

Alternatively, you can set up the burst mode via the desired input frame rate and the desired output bandwidth:

  1. Set image acquisition parameters to the desired input frames per second value ("Setting -> Base -> Camera -> GenICam -> Acquisition Control").

    Figure 3: wxPropView - Setting the "Acquisition Frame Rate"

  2. Set the bandwidth control to the desired output bandwidth: in "Setting -> Base -> Camera -> GenICam -> Device Control -> Device Link Selector", set "Device Link Throughput Limit Mode" to "On" and
  3. set the desired "Device Link Throughput Limit" in bytes per second (Bps).

    Figure 4: wxPropView - Setting the bandwidth using "Device Link Throughput Limit Mode"

Now, the camera will buffer the burst number of images in its internal memory and read them out at the configured output frame rate.

Triggered frame burst mode

With the trigger selector "FrameBurstStart", you can also start a frame burst acquisition by a trigger. A defined number of images ("AcquisitionBurstFrameCount") will be acquired directly one after the other. With "mv Acquisition Frame Rate Limit Mode" set to "mvDeviceMaxSensorThroughput", there will be hardly any gap between these images.

As shown in figure 5, "FrameBurstStart" can be triggered by a software trigger, too.

Figure 5: wxPropView - Setting the frame burst mode triggered by software
Figure 6: Principle of FrameBurstStart
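
A hedged C++ sketch of a software-triggered frame burst, assuming the device offers the FrameBurstStart trigger and the listed properties (mvAcquisitionFrameRateLimitMode is a MATRIX VISION specific feature):

GenICam::AcquisitionControl ac( pDev );
ac.mvAcquisitionFrameRateLimitMode.writeS( "mvDeviceMaxSensorThroughput" );
ac.acquisitionBurstFrameCount.write( 10 );      // images per burst
ac.triggerSelector.writeS( "FrameBurstStart" );
ac.triggerMode.writeS( "On" );
ac.triggerSource.writeS( "Software" );
// later, whenever a burst shall be acquired:
ac.triggerSoftware.call();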



Using the SmartFrameRecall feature

Since
mvIMPACT Acquire 2.18.0

Introduction

The SmartFrameRecall is a new, FPGA based smart feature which takes the data handling of industrial cameras to a new level.

So far, the entire amount of data had to be transferred to the host PC, whereby the packetizer in the camera split the data into packets and distributed them to the two Gigabit Ethernet lines. On the host PC, the data was merged again, reduced to an AOI, and this AOI was then possibly processed (Figure 1).

Figure 1: Data handling so far

This procedure has several disadvantages:

  • Both data lines (in the case of Dual GigE cameras, for example) are required for each camera, which
    • leads to high cabling effort.
  • A high-end PC is needed to process the data, which
    • leads to high power consumption.
  • In USB 3 multi-camera solutions (depending on the resolution) each camera requires a separate connection line to the host PC, which
    • limits the cabling possibilities and the possible distances, and makes an installation more complex (without the possibility to use hubs, for example).
  • Last but not least, the frame rates are limited by the bandwidth.

The SmartFrameRecall is a new data handling approach which buffers the high-resolution images in the camera and only transfers thumbnails. You or your software then decide on the host PC which AOI should be sent in full resolution (Figure 2).

Figure 2: SmartFrameRecall working method

This approach allows

  • higher bandwidths,
  • less CPU load and power consumption,
  • higher frame rates, and
  • less complex cabling.
Figure 3: Connection advantages in combination with SmartFrameRecall

Implementing a SmartFrameRecall application

First of all, clarify if the SmartFrameRecall makes sense to be used in your application:

  • Taking a short look at the thumbnails, could you specify which frames don't interest you at all or which portions of the frame suffice for your further processing?

If you can, your application can do so too, and then SmartFrameRecall is the right framework for you. To use the SmartFrameRecall follow these steps:

  • Activate the Chunk Data Control:
#include <mvIMPACT_CPP/mvIMPACT_acquire.h>
#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

... 
GenICam::ChunkDataControl cdc( pDev );
cdc.chunkModeActive.write( bTrue );
cdc.chunkSelector.writeS( "Image" );
cdc.chunkEnable.write( bTrue );
cdc.chunkSelector.writeS( "mvCustomIdentifier" );
cdc.chunkEnable.write( bTrue );
...

It is necessary to activate the chunk data so that your application can easily distinguish frames belonging to the normal stream from the ones requested by the host application.

  • Reduce the size of the streamed images. This can be done using Decimation, both horizontally and vertically. E.g. when setting both decimation values to 16, a normal image will only consume 1/(16*16), i.e. 1/256th, of the bandwidth:
...
GenICam::ImageFormatControl ifc( pDev );
ifc.decimationHorizontal.write( 16 );
ifc.decimationVertical.write( 16 );
...
  • Make sure that the resulting image width is a multiple of 8! If this is not the case the SmartFrameRecall feature cannot be activated.
  • Activate the SmartFrameRecall feature:
...

GenICam::AcquisitionControl ac( pDev );
ac.mvSmartFrameRecallEnable.write( bTrue );
...

This will configure the device's internal memory to store each frame (that gets transmitted to the host) in full resolution. These images can be requested by an application when needed. As soon as the memory is full, the oldest image will be removed from the memory whenever a new one becomes ready (FIFO).

  • Analyze the images of the reduced data stream.
  • If necessary, request the desired image in full resolution:
...
struct ThreadParameter
{
    Device* pDev;
    GenICam::CustomCommandGenerator ccg;
    ...
};
...
unsigned int DMR_CALL liveThread( void* pData )
{
    ThreadParameter* pThreadParameter = reinterpret_cast<ThreadParameter*>( pData );
    ...
    pThreadParameter->ccg.requestTransmission( pRequest, x, y, w, h, rtmFullResolution, cnt );
    ...
}
...

The last parameter of requestTransmission will be written into the chunk mvCustomIdentifier so that your application can recognize the requested frame.

  • Finally, do your analysis/processing with the requested image in full resolution.

We provide



Using VLC Media Player

With the DirectShow interface, MATRIX VISION devices become (acquisition) video devices for the VLC Media Player.

Figure 1: VLC Media Player with a connected device via DirectShow

System requirements

It is necessary that the following drivers and programs are installed on the host device (laptop or PC):

  • Windows 7, 32 bit or 64 bit
  • up-to-date VLC Media Player, 32 bit or 64 bit (here: version 2.0.6)
  • up-to-date MATRIX VISION driver, 32 bit or 64 bit (here: version 2.5.6)
Attention
Using Windows 10: VLC Media Player versions 2.2.0 or older have been tested successfully. Newer versions do NOT work with mvIMPACT Acquire! There are some bug tickets in the VLC repository that might be related (at the time of writing, none of these seem to have a fix).

Installing VLC Media Player

  1. Download a suitable version of the VLC Media Player from the VLC Media Player website mentioned below.
  2. Run the setup.
  3. Follow the installation process and use the default settings.

A restart of the system is not required.

See also
http://www.videolan.org/

Setting up MV device for DirectShow

Note
Please be sure to register the MV device for DirectShow with the right version of mvDeviceConfigure . I.e. if you have installed the 32 bit version of the VLC Media Player, you have to register the MV device with the 32 bit version of mvDeviceConfigure ("C:/Program Files/MATRIX VISION/mvIMPACT Acquire/bin") !
  1. Connect the MV device to the host device directly or via GigE switch using an Ethernet cable.
  2. Power the camera using a power supply at the power connector.
  3. Wait until the status LED turns blue.
  4. Open the tool mvDeviceConfigure ,
  5. set a friendly name ,
  6. and register the MV device for DirectShow .
Note
In some cases it could be necessary to repeat step 5.

Working with VLC Media Player

  1. Start VLC Media Player.
  2. Click on "Media -> Open Capture Device..." .

    Figure 2: Open Capture Device...

  3. Select the tab "Device Selection" .
  4. In the section "Video device name" , select the friendly name of the MV device:
    Figure 3: Video device name

  5. Finally, click on "Play" .
    After a short delay you will see the live image of the camera.



Using the linescan mode

Both CMOS sensors from e2v offer a line scan mode. One line (gray scale sensor) or two lines (in the case of the color sensor) can be selected to be read out of the full frame height of 1024 or 1200 lines. This line or these lines are grouped into a pseudo frame of selectable height in the internal buffer of the camera.

Complete instructions for using the line scan mode are provided here:

System requirements

  • mvBlueCOUGAR-X
    • "firmware version" at least "1.6.32.0"
  • mvBlueFOX3
    • "firmware version" at least "1.6.139.0"

Initial situation and settings

Generally, line scan cameras are suitable for inspections of moving, continuous materials. In order for the line scan camera to acquire the line at the right time, an incremental encoder, for example at a conveyor belt, triggers the line scan camera. Normally, an incremental encoder does this using a specific frequency ratio like 1:1, which means that there is a signal for every line. However, during the adjustment of a line trigger application or while choosing a specific incremental encoder you have to keep this frequency ratio in mind.

Note
Using timers and counters it is possible to skip trigger signals.

In line scan mode, the camera adds the single lines to one image with a height of max. 1024 or 1200 lines (according to the used sensor). The images are provided without gaps.

Note
While using the line scan mode with a gray scale sensor, one trigger signal will lead to one acquired image line. Using a color sensor, one trigger signal will lead to two acquired image lines.

Due to aberrations at the edges of the lens, you should set an offset in the Y direction ("Offset Y", see the following figure), generally around half of the sensor's height (a.k.a. the sensor's Y center). With Offset Y you can adjust the scan line in the direction of motion.

Figure 1: Sensor's view and settings with a sensor with max. height of 1024 pixels/lines (e.g. -x02e / -1013)

Scenarios

With regards to the external trigger signals provided by an incremental encoder, there are two possible scenarios:

  1. A conveyor belt runs continuously and so does the incremental encoder, or - like in a reverse vending machine,
  2. a single item is analyzed and the conveyor belt and so the incremental encoder stops after the inspection and restarts for the next item.

In the first scenario you can use the standard settings of the MATRIX VISION devices. Please have a look at Sample 1: "Triggered linescan acquisition with exposure time of 250 us", which shows how you can set up the line scan mode with continuous materials and signals from the encoder. However, it is absolutely required that the external trigger is always present. During a trigger interruption, controlling the camera or communicating with it is not possible.

In the second scenario, the external trigger stops. If there is a heartbeat functionality in the system (e.g. with GigE Vision), there can be a halt of the system. Please have a look at the sample Triggered line scan acquisition with a specified number of image blocks and pausing trigger signals which shows how you can handle pausing trigger signals.

Sample 1: Triggered linescan acquisition with exposure time of 250 us

This sample will show you how to use the line scan mode of the sensors -1013 and -1013GE using an external trigger provided by an incremental encoder which sends a trigger for every line (1:1).

Note
You can also use the sensor -1020 . However, the sensor is slower due to the higher number of pixels.

In this sample, we chose an exposure time of "250 us" and to ease the calculations we used "1000 px image height".

Note
To get suitable image results, it might be necessary to increase the gain or the illumination.

These settings result in a max. "frame rate" of "2.5 frames per second".

To adjust the opto-mechanics (focus, distance, illumination, etc.), you can use the area mode of the sensor. That's a main advantage of an area sensor with line scan mode compared to a line scan camera!

You will need the following pins of the mvBlueFOX3:

Pin | Signal (Standard version) | Description
1   | GND                       | Common ground
2   | 12V .. 24V                | Power supply
4   | Opto DigIn0 (Line4)       | The output signal A of the incremental encoder

Setting the application in wxPropView

Summary of our sample:

Property name                            | wxPropView Setting | GenICam Control      | Comment
Device Scan Type                         | line scan          | Device Control       |
Height (in pixels)                       | 1000               | Image Format Control |
Offset Y (in pixels)                     | 500                | Image Format Control |
Exposure Time (in microseconds)          | 250.000            | Acquisition Control  |
Trigger Mode                             | On                 | Acquisition Control  |
Trigger Source                           | Line4              | Acquisition Control  |
ImageRequestTimeout_ms (in milliseconds) | 0 ms               | -                    | This is necessary, otherwise there will be error counts and no frames are created.


Figure 2: Settings in wxPropView
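
The same settings expressed as a hedged C++ sketch. The exact enumeration strings, the trigger selector to use, and the availability of BasicDeviceSettings::imageRequestTimeout_ms should be verified in wxPropView for your device:

GenICam::DeviceControl dc( pDev );
dc.deviceScanType.writeS( "Linescan" );

GenICam::ImageFormatControl ifc( pDev );
ifc.height.write( 1000 );   // pseudo frame height in lines
ifc.offsetY.write( 500 );   // scan line roughly at the sensor's Y center

GenICam::AcquisitionControl ac( pDev );
ac.exposureTime.write( 250. );
ac.triggerMode.writeS( "On" );      // select the appropriate trigger beforehand
ac.triggerSource.writeS( "Line4" );

// driver side: disable the request timeout, otherwise error counts occur
BasicDeviceSettings bds( pDev );
bds.imageRequestTimeout_ms.write( 0 ); // 0 = infinite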

Sample 2: Triggered line scan acquisition with a specified number of image blocks and pausing trigger signals

This section will provide you with some information you have to keep in mind while working with pausing triggers and specified number of image blocks.

First of all, when using an mvBlueCOUGAR-X or mvBlueCOUGAR-XD it is necessary to disable the heartbeat of the GigE Vision control protocol (GVCP) ("Device Link Heartbeat Mode = On"), otherwise a paused trigger signal can be misinterpreted as a lost connection:

Figure 3: wxPropView - Disabling the heartbeat

Secondly, since the conveyor belt stops at some point, the trigger will do so, too. Make sure that the trigger signal is available until the last image block has been received.

Thirdly, if you know the number of image blocks, you can use the MultiFrame functionality (in "Setting -> Base -> Camera -> GenICam -> Acquisition Control" set "Acquisition Mode = MultiFrame" and "Acquisition Frame Count"). This will acquire the specified number of image blocks and stop the acquisition afterwards.



Working with Event Control

The mvBlueFOX3 camera generates Event notifications. An Event is a message that is sent to the host application to notify it of the occurrence of an internal event. With "Setting -> Base -> Camera -> GenICam -> Event Control" you can handle these notifications.

At the moment, it is possible to handle

  • Exposure End (= sensor's exposure end)
  • Line 4 (= DigIn0) Rising Edge
  • Line 5 (= DigIn1) Rising Edge
  • Frame End (= the camera is ready for a new trigger)

Setting Event notifications using wxPropView

To activate the notifications, just

  1. Select the Event via "Setting -> Base -> Camera -> GenICam -> Event Control -> Event Selector", e.g. ExposureEnd .
  2. Set the "Event Notification" to "On" .

Afterwards, it is possible to attach a custom callback that gets called whenever the property is modified. E.g. if you want to attach a callback to the Frame ID after the exposure was finished, you have to

  1. select "Setting -> Base -> Camera -> GenICam -> Event Control -> Event Exposure End Data -> Event Exposure End Frame ID",
  2. right-click on the property, and
  3. click on "Attach Callback".

    Figure 1: wxPropView - "Attach Callback" to Event Exposure End Frame ID

Now, you can track the property modifications in the output window:

Figure 2: wxPropView - Output window with the Event notifications

You can find a detailed Callback code example in the C++ API manual.



Improving the acquisition / image quality

There are several use cases concerning the acquisition / image quality of the camera:



Correcting image errors of a sensor

Due to random process deviations, technical limitations of the sensors, etc., there are different reasons why image sensors have image errors. MATRIX VISION provides several procedures to correct these errors; by default these are host-based calculations.

The provided image correction procedures are

  1. Defective Pixels Correction,
  2. Dark Current Correction, and
  3. Flat-Field Correction.
Note
If you execute all correction procedures, you have to keep this order. All gray value settings of the corrections below assume an 8-bit image.
Figure 1: Host-based image corrections

The path "Setting -> Base -> ImageProcessing -> ..." indicates that these corrections are host-based corrections.

Before starting consider the following hints:

  • To correct the complete image, you have to make sure no user defined AOI has been selected: right-click "Restore Default" on the device's AOI parameters Width and Height, or use "Setting -> Base -> Camera -> GenICam -> Image Format Control" when using the GenICam interface layout.
  • You have several options to save the correction data. The chapter Storing and restoring settings describes the different ways.
See also
There is a white paper about image error corrections with extended information available on our website: http://www.matrix-vision.com/tl_files/mv11/Glossary/art_image_errors_sensors_en.pdf

Defective Pixels Correction

Due to random process deviations, not all pixels in an image sensor array will react in the same way to a given light condition. These variations are known as blemishes or defective pixels.

There are three types of defective pixels:

  1. leaky pixel (in the dark)
    which indicates pixels that produce a higher read out code than the average
  2. hot pixel (in standard light conditions)
    which indicates pixels that produce a higher non-proportional read out code when temperatures are rising
  3. cold pixel (in standard light conditions)
    which indicates pixels that produce a lower read out code than average when the sensor is exposed (e.g. caused by dust particles on the sensor)
Note
Please use either a Mono or raw Bayer image format when detecting defective pixel data in the image.

To correct the defective pixels various substitution methods exist:

  1. "Replace 3x1 average"
    which substitutes the detected defective pixels with the average value from the left and right neighboring pixel (3x1)
  2. "Replace 3x3 median"
    which substitutes the detected defective pixels with the median value calculated from the nearest neighboring in a 3 by 3 region
  3. "Replace 3x3 Filtered Data Averaged"
    which substitutes and treats the detected defective pixels as if they have been processed with a 3 by 3 filter algorithm before reaching this filter
    Only recommended for devices which do not offer a defective pixel compensation; packed RGB or packed YUV444 data is needed. See enumeration value dpfmReplaceDefectivePixelAfter3x3Filter in the corresponding API manual for additional details about this algorithm and when and why it is needed

Correcting leaky pixels

To correct leaky pixels the following steps are necessary:

  1. Set gain ("Setting -> Base -> Camera -> GenICam -> Analog Control -> Gain = 0 dB") and exposure time ("Setting -> Base -> Camera -> GenICam -> Acquisition Control -> ExposureTime = 360 msec") to the given operating conditions
    The total number of defective pixels found in the array depends on the gain and the exposure time.
  2. Black out the lens completely
  3. Set the (Filter-) "Mode = Calibrate leaky pixel"
  4. Snap an image (e.g. by pressing Acquire in wxPropView with "Acquisition Mode = SingleFrame")
  5. To activate the correction, choose one of the substitution methods mentioned above
  6. Save the settings including the correction data via "Action -> Capture Settings -> Save Active Device Settings"
    (Settings can be saved in the Windows registry or in a file)
Note
After having re-started the camera you have to reload the capture settings!

The filter checks:

Pixel > LeakyPixelDeviation_ADCLimit // (default value: 50)

All pixels above this value are considered as leaky pixel.
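
In an application, the host-based filter can be configured via the mvIMPACT::acquire::ImageProcessing class; the same pattern applies to the hot and cold pixel calibration modes described below. A hedged sketch (the enumeration names are assumptions derived from dpfmReplaceDefectivePixelAfter3x3Filter mentioned above):

#include <mvIMPACT_CPP/mvIMPACT_acquire.h>

using namespace mvIMPACT::acquire;

// Hedged sketch: host-based defective pixel calibration and correction.
ImageProcessing ip( pDev );
ip.defectivePixelsFilterMode.write( dpfmCalibrateLeakyPixel ); // assumed enum name
// ... snap one image with the lens blacked out ...
ip.defectivePixelsFilterMode.write( dpfm3x1Average );          // activate "Replace 3x1 average"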

Correcting hot pixels

To correct hot pixels the following steps are necessary:

  1. Set gain ("Setting -> Base -> Camera -> GenICam -> Analog Control -> Gain = 0 dB") and exposure time ("Setting -> Base -> Camera -> GenICam -> Acquisition Control -> ExposureTime = 360 msec") to the given operating conditions
    The total number of defective pixels found in the array depends on the gain and the exposure time.
  2. Black out the lens completely
  3. Set the (Filter-) "Mode = Calibrate Hot Pixel"
  4. Snap an image (e.g. by pressing Acquire in wxPropView with "Acquisition Mode = SingleFrame")
  5. To activate the correction, choose one of the substitution methods mentioned above
  6. Save the settings including the correction data via "Action -> Capture Settings -> Save Active Device Settings"
    (Settings can be saved in the Windows registry or in a file)
Note
After having re-started the camera you have to reload the capture settings!

The filter checks:

Pixel > T[hot] // (default value: 15 %)

// T[hot] = deviation of the average gray value

Correcting cold pixels

To correct cold pixels the following steps are necessary:

  1. You will need a uniform sensor illumination of approx. 50 - 70 % saturation (which means an average gray value between 128 and 180)
  2. Set the (Filter-) "Mode = Calibrate cold pixel" (Figure 2)
  3. Snap an image (e.g. by pressing Acquire in wxPropView with "Acquisition Mode = SingleFrame")
  4. To activate the correction, choose one of the substitution methods mentioned above
  5. Save the settings including the correction data via "Action -> Capture Settings -> Save Active Device Settings"
    (Settings can be saved in the Windows registry or in a file)
Note
After having re-started the camera you have to reload the capture settings!

The filter checks:

Pixel < T[cold] // (default value: 15 %)

// T[cold] = deviation of the average gray value

All pixels below this value have a dynamic below normal behavior.

Figure 2: Image corrections: DefectivePixelsFilter
Note
Repeating the defective pixel corrections will accumulate the correction data which leads to a higher value in "DefectivePixelsFound". If you want to reset the correction data or repeat the correction process you have to set the (Filter-) "Mode = Reset Calibration Data".

Storing pixel data on the device

To save and load the defective pixel data, appropriate functions are available:

  • int mvDefectivePixelDataLoad( void )
  • int mvDefectivePixelDataSave( void )

The section "Setting -> Base -> ImageProcessing -> DefectivePixelsFilter" was also extended (see Figure 2a). First, the DefectivePixelsFound indicates the number of found defective pixels. The coordinates are available through the properties DefectivePixelOffsetX and DefectivePixelOffsetY now. In addition to that it is possible to edit, add and delete these values manually (via right-click on the "DefectivePixelOffset" and select "Append Value" or "Delete Last Value"). Second, with the function

  • int mvDefectivePixelReadFromDevice( void )
  • int mvDefectivePixelWriteToDevice( void )

you can exchange the data of the filter with the camera and vice versa.

Figure 2a: Image corrections: DefectivePixelsFilter (since driver version 2.17.1 and firmware version 2.12.406)

Just right-click on mvDefectivePixelWriteToDevice and click on "Execute" to write the data to the camera (and hand over the data to the mvDefectivePixelCorrectionControl features section). To permanently store the data inside the camera's non-volatile memory, mvDefectivePixelDataSave must be called afterwards as well!

Figure 2b: Defective pixel data are written to the camera (since driver version 2.17.1 and firmware version 2.12.406)

When the camera is opened, the driver will load the defective pixel data from the camera. If pixel data is already available in the filter (via calibration), you can nevertheless load the values from the camera. In this case the values will be merged with the existing ones, i.e. new ones are added and duplicates are removed.

Dark Current Correction

Dark current is a characteristic of image sensors, which means that image sensors also deliver signals in total darkness, caused, for example, by warmth which spontaneously creates charge carriers. This signal overlays the image information. Dark current depends on two circumstances:

  1. Exposure time
    The longer the exposure, the greater the dark current contribution. I.e. when using long exposure times, the dark current itself could lead to an overexposed sensor chip.
  2. Temperature
    By cooling the sensor chip, dark current production can be reduced significantly (approx. every 6 °C of cooling cuts the dark current in half).

Correcting Dark Current

The dark current correction is a pixel wise correction where the dark current correction image removes the dark current from the original image. To get a better result it is necessary to snap the original and the dark current images with the same exposure time and temperature.

Note
Dark current snaps generally show noise.

To correct the dark current pixels, the following steps are necessary:

  1. Black out the lens completely
  2. Set exposure time according to the application
  3. Set the number of images for calibration in "Setting -> Base -> ImageProcessing -> DarkCurrentFilter -> CalibrationImageCount" (Figure 3).
  4. Set "Setting -> Base -> ImageProcessing -> DarkCurrentFilter -> Mode" to "Calibrate" (Figure 3)
  5. Snap an image ("Acquire" with "Acquisition Mode = SingleFrame")
  6. Finally, you have to activate the correction: Set "Setting -> Base -> ImageProcessing -> DarkCurrentFilter -> Mode" to "On"
  7. Save the settings including the correction data via "Action -> Capture Settings -> Save Active Device Settings"
    (Settings can be saved in the Windows registry or in a file)

The filter snaps a number of images and averages the dark current images to one correction image.

Note
After having re-started the camera you have to reload the capture settings!
Figure 3: Image corrections: CalibrationImageCount
Figure 4: Image corrections: Calibrate
Figure 5: Image corrections: Dark current
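
A hedged C++ sketch of the host-based dark current correction (property and enumeration names are assumptions following the same ImageProcessing naming scheme):

ImageProcessing ip( pDev );
ip.darkCurrentFilterCalibrationImageCount.write( 5 );
ip.darkCurrentFilterMode.write( dcfmCalibrateDarkCurrent ); // assumed enum name
// ... snap the calibration images with the lens blacked out ...
ip.darkCurrentFilterMode.write( dcfmOn ); // activate the correction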

Flat-Field Correction

Each pixel of a sensor chip is a single detector with its own properties. In particular, this pertains to its sensitivity and, where applicable, its spectral sensitivity. To solve this problem (including lens and illumination variations), a plain and equally "colored" calibration plate (e.g. white or gray) is snapped as a flat-field, which will then be used to correct the original image. Between the flat-field correction and the future application the optics must not be changed. To reduce errors while doing the flat-field correction, a saturation between 50 % and 75 % of the flat-field in the histogram is convenient.

Note
Flat-field correction can also be used as a destructive watermark and works for all f-stops.

To perform a flat-field correction, the following steps are necessary:

  1. You need a plain and equally "colored" calibration plate (e.g. white or gray)
  2. No single pixel may be saturated - that's why we recommend setting the maximum gray level in the brightest area to max. 75 % of the gray scale (i.e., to gray values below 190 when using 8-bit values)
  3. Choose a BayerXY in "Setting -> Base -> Camera -> GenICam -> Image Format Control -> PixelFormat".
  4. Set the (Filter-) "Mode = Calibrate" (Figure 6)
  5. Start a Live snap ("Acquire" with "Acquisition Mode = Continuous")
  6. Finally, you have to activate the correction: Set the (Filter-) "Mode = On"
  7. Save the settings including the correction data via "Action -> Capture Settings -> Save Active Device Settings"
    (Settings can be saved in the Windows registry or in a file)
Note
After having re-started the camera you have to reload the saved capture settings.

The filter snaps a number of images (according to the value of the CalibrationImageCount, e.g. 5) and averages the flat-field images to one correction image.

Figure 6: Image corrections: Host-based flat field correction
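
The host-based flat-field correction can be configured from an application in a similar way. This is a minimal sketch, assuming the flatFieldFilterMode and flatFieldFilterCalibrationImageCount properties of mvIMPACT::acquire::ImageProcessing are available and pDev is the already opened device:

#include <mvIMPACT_CPP/mvIMPACT_acquire.h>

  // more code
  mvIMPACT::acquire::ImageProcessing ip( pDev );
  ip.flatFieldFilterCalibrationImageCount.write( 5 ); // number of flat-field images averaged
  ip.flatFieldFilterMode.writeS( "Calibrate" );       // snap the flat-field images now
  // ... acquire the calibration images of the plain calibration plate here ...
  ip.flatFieldFilterMode.writeS( "On" );              // activate the correction afterwards
  // more code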



Optimizing the color fidelity of the camera

The purpose of this chapter is to optimize the color image of a camera so that it looks as natural as possible on different displays and for human vision.

This implies some linear and nonlinear operations (e.g. display color space or Gamma viewing LUT) which are normally not necessary or recommended for machine vision algorithms. A standard monitor offers, for example, several display modes like sRGB, "Adobe RGB", etc., which reproduce the very same camera color differently.

It should also be noted that users can choose either

  • camera based settings and adjustments or
  • host based settings and adjustments or
  • a combination of both.

Camera based settings are advantageous to achieve the highest calculation precision independent of the transmission bit depth, the lowest latency (because all calculations are performed in the FPGA on the fly), and a low CPU load (because the host is not invoked with these tasks). These camera based settings are used in the steps described below.

Host based settings save transmission bandwidth at the expense of accuracy or latency and CPU load. In particular, performing gain, offset, and white balance in the camera while outputting RAW data to the host can be recommended.

Of course host based settings can be used with all families of cameras (e.g. also mvBlueFOX).

Host based settings are:

  • look-up table (LUTOperations)
  • color correction (ColorTwist)

To show the different color behaviors, we take a color chart as a starting point:

Figure 1: Color chart as a starting point

If we take a SingleFrame image without any color optimizations, the image may look like this:

Figure 2: SingleFrame snap without color optimization
Figure 3: Corresponding histogram of the horizontal white to black profile

As you can see,

  • saturation is missing,
  • white appears as a light gray,
  • black appears as a dark gray,
  • etc.
Note
You have to keep in mind that there are two types of images: the one generated in the camera and the other one displayed on the computer monitor. Up-to-date monitors offer different display modes with different color spaces (e.g. sRGB). According to the chosen color space, the display of the colors is different.

The following figure shows the way to a perfect colored image

Figure 4: The way to a perfect colored image

including these process steps:

  1. Do a Gamma correction (Luminance),
  2. make a White balance and
  3. Improve the Contrast.
  4. Improve Saturation, and use a "color correction matrix" for both
    1. the sensor and / or
    2. the monitor.

The following sections will describe the single steps in detail.

Step 1: Gamma correction (Luminance)

First of all, a Gamma correction (Luminance) can be performed to adapt the image to the way humans perceive light and color.

For this, you can change either

  • the exposure time,
  • the aperture or
  • the gain.

You can do this via wxPropView in the following way:

  1. Click on "Setting -> Base -> Camera -> GenICam -> LUT Control -> LUT Selector".
  2. Afterwards, click on "Wizard" to start the LUT Control wizard tool.
    The wizard will load the data from the camera.

    Figure 5: Selected LUT Selector and click on wizard will start wizard tool
    Figure 6: LUT Control
  3. Now, click on the "Gamma..." button
  4. and enter e.g. "2.2" as the Gamma value:

    Figure 7: Gamma Parameter Setup
  5. Then, click on "Copy to..." and select "All" and
  6. click on "Enable All".
  7. Finally, click on Synchronize and play the settings back to the device (via "Cache -> Device").

    Figure 8: Synchronize

After gamma correction, the image will look like this:

Figure 9: After gamma correction
Figure 10: Corresponding histogram after gamma correction
Note
As mentioned above, you can do a gamma correction via ("Setting -> Base -> ImageProcessing -> LUTControl"). Here, the changes will affect the 8 bit image data and the processing needs the CPU of the host system:

Figure 11: LUTControl dialog

Just set "LUTEnable" to "On" and adapt the single LUTs like (LUT-0, LUT-1, etc.).

Step 2: White Balance

As you can see in the histogram, the colors red and blue are above green. Using green as a reference, we can optimize the white balance via "Setting -> Base -> Camera -> GenICam -> Analog Control -> Balance Ratio Selector" ("Balance White Auto" has to be "Off"):

  1. Just select "Blue" and
  2. adjust the "Balance Ratio" value until the blue line reaches the green one.

    Figure 12: Optimizing white balance
  3. Repeat this for "Red".

After optimizing white balance, the image will look like this:

Figure 13: After white balance
Figure 14: Corresponding histogram after white balance
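
The same white balance adjustment can also be scripted with the mvIMPACT Acquire C++ interface. This is a minimal sketch using the GenICam::AnalogControl features named above; the ratio values are example values only and have to be adjusted until the blue and red lines match the green one:

#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

  // more code
  GenICam::AnalogControl anc( pDev );
  anc.balanceWhiteAuto.writeS( "Off" );
  anc.balanceRatioSelector.writeS( "Blue" );
  anc.balanceRatio.write( 1.31 ); // example value
  anc.balanceRatioSelector.writeS( "Red" );
  anc.balanceRatio.write( 1.24 ); // example value
  // more code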

Step 3: Contrast

Still, black is more a darker gray. To optimize the contrast you can use "Setting -> Base -> Camera -> GenICam -> Analog Control -> Black Level Selector":

  1. Select "DigitalAll" and
  2. adjust the "Black Level" value until black seems to be black.

    Figure 15: Back level adjustment

The image will look like this now:

Figure 16: After adapting contrast
Figure 17: Corresponding histogram after adapting contrast
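
The black level adjustment described above can be done from code as well. A minimal sketch using the GenICam::AnalogControl features named in this step (the value is an example only):

  // more code
  GenICam::AnalogControl anc( pDev );
  anc.blackLevelSelector.writeS( "DigitalAll" );
  anc.blackLevel.write( -8. ); // example value; adjust until black appears black
  // more code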

Step 4: Saturation and Color Correction Matrix (CCM)

Still saturation is missing. To change this, the "Color Transformation Control" can be used ("Setting -> Base -> Camera -> GenICam -> Color Transformation Control"):

  1. Click on "Color Transformation Enable" and
  2. click on "Wizard" to start the saturation via "Color Transformation Control" wizard tool (since firmware version 1.4.57).

    Figure 18: Selected Color Transformation Enable and click on wizard will start wizard tool
  3. Now, you can adjust the saturation e.g. "1.1".

    Figure 19: Saturation Via Color Transformation Control dialog
  4. Afterwards, click on "Enable".
  5. Since driver version 2.2.2, it is possible to set the special color correction matrices at
    1. the input (sensor),
    2. the output side (monitor) and
    3. the saturation itself using this wizard.
  6. Select the specific input and output matrix and
  7. click on "Enable".
  8. As you can see, the correction is done by the host by default ("Host Color Correction Controls"). However, you can decide, if the color correction is done by the device by clicking on "Write To Device And Switch Off Host Processing". The wizard will take the settings of the "Host Color Correction Controls" and will save it in the device.
  9. Finally, click on "Apply".

After the saturation, the image will look like this:

Figure 20: After adapting saturation
Figure 21: Corresponding histogram after adapting saturation
Note
As mentioned above, you can change the saturation and the color correction matrices via ("Setting -> Base -> ImageProcessing -> ColorTwist"). Here, the changes will affect the 8 bit image data and the processing needs the CPU of the host system:
Figure 22: ColorTwist dialog
Figure 23: Input and output color correction matrix
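
For the host-based variant, a minimal code sketch could look like this. It assumes the colorTwistEnable property and the setSaturation() convenience function of mvIMPACT::acquire::ImageProcessing, which calculates the color twist matrix from a saturation value K:

#include <mvIMPACT_CPP/mvIMPACT_acquire.h>

  // more code
  mvIMPACT::acquire::ImageProcessing ip( pDev );
  ip.colorTwistEnable.write( bTrue );
  ip.setSaturation( 1.1 ); // K = 1.1 as in the wizard example above
  // more code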



Reducing noise by frame averaging

What is frame average?

As the name suggests, the functionality averages the gray values of each pixel using the information of subsequent frames. This can be used to

  • reduce the noise in an image and
  • compensate motion in an image.

MATRIX VISION implemented two modes of the frame averaging:

  • "mvNTo1" and
  • "mvNTo1Sum".

These modes are FPGA functions which do not need any CPU load on the host system. However, these modes are only available for the following cameras:

  • mvBlueFOX3-2xxx family.

mvNTo1

To get an averaged pixel, this mode takes the gray value of each pixel of the specified number of subsequent frames ("mv Frame Average Frame Count") and calculates the average. E.g. Averaging of pixel [0,0] using 8 frames ("mv Frame Average Frame Count = 8") will be as follows:

                 Gray_value[0,0,image1] + Gray_value[0,0,image2] + ... + Gray_value[0,0,image8]
Pixel_avg[0,0] = ------------------------------------------------------------------------------
                                                     8

Using wxPropView

Using the frame average mode "mvNTo1", you have to do the following step:

  1. Start wxPropView and
  2. connect to the camera.
  3. Then specify in "Setting -> Base -> Camera -> GenICam -> Device Control" which processing unit of the camera should do the frame averaging, e.g. unit 0 should do the frame averaging
    "mv Device Processing Unit Selector = 0"
    "mv Device Processing Unit = mvFrameAverage".
    Afterwards, "mv Frame Average Control" is available.
  4. Now open "Setting -> Base -> Camera -> GenICam -> mv Frame Average Control" and
  5. select "mv Frame Average Mode = mvNto1".
  6. Select the number of frames you want to use for averaging ("mv Frame Average Frame Count"), e.g. 8.
    This, of course, reduces the frame rate. If you have a frame rate of 8 frames per second and a "mv Frame Average Frame Count" of 8 frames, this will result in a frame rate of 1 Hz.
  7. Activate frame averaging by setting "mv Frame Average Frame Enable = 1".
Figure 1: wxPropView: Setting frame average mode "mvNTo1"

mvNTo1Sum

In this mode, the gray values of each pixel are summed up over the specified number of subsequent frames ("mv Frame Average Frame Count"). E.g. summing up pixel [0,0] using 8 frames ("mv Frame Average Frame Count = 8") will be as follows:

Pixel_avg[0,0] = Gray_value[0,0,image1] + Gray_value[0,0,image2] + ... + Gray_value[0,0,image8]
  1. Start wxPropView and
  2. connect to the camera.
  3. Then specify in "Setting -> Base -> Camera -> GenICam -> Device Control" which processing unit of the camera should do the frame averaging, e.g. unit 0 should do the frame averaging
    "mv Device Processing Unit Selector = 0"
    "mv Device Processing Unit = mvFrameAverage".
    Afterwards, "mv Frame Average Control" is available.
  4. Now open "Setting -> Base -> Camera -> GenICam -> mv Frame Average Control" and
  5. select "mv Frame Average Mode = mvNTo1Sum".
  6. Select the number of frames you want to use for summing up ("mv Frame Average Frame Count"), e.g. 8.
    This, of course, reduces the frame rate. If you have a frame rate of 8 frames per second and a "mv Frame Average Frame Count" of 8 frames, this will result in a frame rate of 1 Hz.
  7. Activate the mode by setting "mv Frame Average Frame Enable = 1".



Optimizing the bandwidth

Limiting the bandwidth of the imaging device

It is possible to limit the used bandwidth in the following way:

Since
mvIMPACT Acquire 2.25.0
  1. In "Setting -> Base -> Camera -> GenICam -> Device Control -> Device Link Selector" set property "Device Link Throughput Limit Mode" to "On".
  2. Now, you can set the bandwidth with "Device Link Throughput Limit" to your desired bandwidth in bits per second

    Figure 1: wxPropView - Setting Device Link Throughput Limit
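
The same limit can be applied from an application. A minimal sketch, assuming the Device Link Throughput features are available via GenICam::DeviceControl (the limit value is an example in bits per second):

#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

  // more code
  GenICam::DeviceControl dc( pDev );
  dc.deviceLinkSelector.write( 0 );
  dc.deviceLinkThroughputLimitMode.writeS( "On" );
  dc.deviceLinkThroughputLimit.write( 200000000LL ); // example: 200 MBit/s
  // more code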




Setting a flicker-free auto expose and auto gain

Introduction

In order to prevent oscillations it is important to adapt the camera frequency to the frequency of AC light.

This is, for example, in

  • Europe: 50 Hz (100 brightness fluctuations per second), whereas in
  • the USA, Japan and other countries it is 60 Hz.

This means the camera must strictly be coupled to this frequency. In conjunction with auto exposure this can only be maintained by using a timer based generation of external trigger pulses. This is a behavior of both sensor types: CCD and CMOS.

Note
It is not enough to use "Setting -> Base -> Camera -> GenICam -> Acquisition Control -> Acquisition Frame Rate" for this, as there are small fluctuations in this frame rate if the exposure time changes. These fluctuations lead to oscillations (see settings marked with red boxes in Figure 1). The "Acquisition Frame Rate" will only provide the exact frame rate if auto exposure is turned off.

As shown in Figure 1, it is possible to set ("mv Exposure Auto Mode") which part of the camera handles the auto expose (device or the sensor itself; pink boxes). Using "mvSensor" as "mv Exposure Auto Mode", it is possible to avoid oscillations in some cases. The reason for this behavior is that you can set more parameters like "mv Exposure Auto Delay Images" in contrast to "mvDevice". However, as mentioned above it is recommended to use a timer based trigger when using auto expose together with continuous acquisition.

Figure 1: wxPropView - Auto expose is turned on and the frame rate is set to 25 fps

Example of using a timer for external trigger

Figure 2 shows how to generate a 25 Hz signal, which triggers the camera:

  • "Setting -> Base -> Camera -> GenICam -> Counter & Timer Control -> Timer Selector -> Timer 1":
    • "Timer Trigger Source" = "Timer1End"
    • "Timer Duration" = "40000"
                   1
      FPS_max =  -----     = 25
                 40000 us
      
  • "Setting -> Base -> Camera -> GenICam -> Acquisition Control -> Trigger Selector -> FrameStart":
    • "Trigger Mode" = "On"
    • "Trigger Source" = "Timer1End"
Figure 2: wxPropView - 25 Hz timer for external trigger
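
Translated into the mvIMPACT Acquire C++ interface, the 25 Hz timer trigger of Figure 2 could be set up like this (a sketch using the GenICam::CounterAndTimerControl and GenICam::AcquisitionControl classes; pDev is the already opened device):

#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

  // more code
  GenICam::CounterAndTimerControl ctc( pDev );
  ctc.timerSelector.writeS( "Timer1" );
  ctc.timerTriggerSource.writeS( "Timer1End" ); // Timer1 re-triggers itself
  ctc.timerDuration.write( 40000. );            // 40000 us -> 25 Hz

  GenICam::AcquisitionControl ac( pDev );
  ac.triggerSelector.writeS( "FrameStart" );
  ac.triggerMode.writeS( "On" );
  ac.triggerSource.writeS( "Timer1End" );
  // more code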

No oscillation occurs, regardless of DC ambient vs. AC indoor light.

This operation mode is known as flicker-free or flicker-less operation.

What it mainly does is to adjust the frame frequency to precisely the frequency of the power line. Usually the line frequency is very stable and therefore the beat frequency between the two signals is very low; probably in the range of < 0.1 Hz.

The fact that we do not know the phase relation between the two frequencies means that we scan the alternating ambient light source with our camera. The shorter the exposure time, the more we see a slow change in brightness.

Using AutoExposure/AutoGain can completely eliminate this change because the frequency of the change is very low. That means it is valid to calculate a brightness difference in one picture and apply it to the next one, because the change is still valid there; we thereby fulfill the Nyquist theorem.

Using an arbitrary scanning frequency like 20 fps, or whatever your algorithm and data flow will accept, is wrong in this respect and leads to oscillations and undesired flicker.

When pointing the camera at a 60 Hz display with a flashing backlight, an oscillation of 10 Hz can of course be seen.

Figure 3: wxPropView - Intensity plot while pointing the camera to a 60 Hz display

Conclusion

To avoid oscillations, it is necessary to adapt the camera frequency to the frequency of AC light. When using auto expose a flicker-free mode (timer based external trigger) is needed. If the camera is used throughout the world it is necessary that the frequency of AC light can be set in the software and the software adapts the camera to this specific environment.



Working with binning

With binning it is possible to combine adjacent pixels vertically and/or horizontally. Depending on the sensor, up to 16 adjacent pixels can be combined.

See also
https://www.matrix-vision.com/manuals/SDK_CPP/Binning_modes.png

Binning will brighten the image at the expense of resolution. This is a neat solution for low-light applications that require low noise.

The following results were achieved with the mvBlueFOX3-2124G, however, binning is also available with the mvBlueCOUGAR-X camera family.

Exposure [in us] Binning Gain [in dB] Averager Image
2500 - 0 -
2500 - 30 -
2500 2H 2V 30 -
2500 2H 2V 30 Averaging using 24 frames

The last image shows that you can reduce the noise caused by the increased gain by using frame averaging.
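
In an application, the 2H 2V binning used in the table above could be selected like this (a sketch, assuming the standard binningHorizontal and binningVertical features are available via GenICam::ImageFormatControl):

  // more code
  GenICam::ImageFormatControl ifc( pDev );
  ifc.binningHorizontal.write( 2 );
  ifc.binningVertical.write( 2 );
  // more code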



Minimizing sensor pattern of mvBlueFOX3-1100G

Sometimes the gray scale version of Aptina's MT9J003 sensor shows structures comparable to the Bayer patterns of color sensors. This pattern is particularly apparent in scaled images:

Figure 1: Bayer pattern like structures of the MT9J003 (gray scale version)

To minimize this pattern, you can balance the sensor pattern since firmware version 2.3.70.0. This procedure works like the white balancing used with color sensors. For this reason the same terms are used (red, blue, green).

The balance reference is the "green" pixel value from the "blue-green" line of the sensor.

See: Output sequence of color sensors (RGB Bayer)

I.e. all gray scale values of these "green" pixels are averaged.

With "Setting -> Base -> Camera -> GenICam -> Analog Control -> Balance Ration Selector" you can select each color to set the "Balance Ratio":

  • "Red" averages the "red" pixel values from the "red-green" line of the sensor.
  • "Green" averages the "green" pixel values from the "red-green" line of the sensor, too.
  • "Blue" averages the "blue" pixel values from the "blue-green" line of the sensor.

I.e. there are 4 average values (reference value, red value, green value, blue value). The lowest value remains unchanged; the other values are increased using the respective "Balance Ratio".

However, by using the property "Balance White Auto" you can balance the sensor automatically:

Figure 2: Balance White Auto

After balancing, we recommend saving these settings to a UserSet.

Figure 3: Calibrated sensor



Working with the dual gain feature of mvBlueFOX3-2071/2071a

Introduction

The IMX425/428 used in the mvBlueFOX3-2071/2071a are Pregius sensors of the third generation.

Those sensors feature a dual gain mode, i.e. after a trigger event, for example, different image areas can be amplified differently at the same time.

To activate the dual gain mode it is necessary to enable the multi area mode by using the mvMultiAreaMode property. At least two different AOIs have to be defined.

Note
Both AOIs must not overlap or touch each other!

Once the AOIs are configured the GainSelector has to be set to one of the newly implemented options called mvHorizontalZone0 and mvHorizontalZone1.

The gain value of the different zones can be specified using the "Gain Selector" and the corresponding gain property. Possible zones are

  • mvHorizontalZone0
  • mvHorizontalZone1

If this mode is selected you can set the "mv Gain Horizontal Zone Divider". This property indicates where the zones are divided horizontally when more than two AOIs are configured. In this case e.g. 25% means that the upper 25% of the image is defined by the gain value of mvHorizontalZone0 and the lower 75% is defined by the gain value of mvHorizontalZone1.

Note
Some sensors may only allow changing the gain at certain positions, e.g. the last line of a defined ROI. In this case the first possible switching point above the actual line will be used.

Setting the dual gain using wxPropView

To activate the dual gain, just

  1. Use the Multi AOI Wizard to adjust the different AOIs. (They must not overlap or touch each other!)
  2. Select mvMultiZone in "Setting -> Base -> Camera -> GenICam -> Analog Control -> mvGainMode".
  3. Select mvHorizontalZone0 in "Setting -> Base -> Camera -> GenICam -> Analog Control -> GainSelector".
  4. Adjust the gain value for the first AOI in "Setting -> Base -> Camera -> GenICam -> Analog Control -> GainSelector -> Gain".
  5. Adjust the gain divider position "Setting -> Base -> Camera -> GenICam -> Analog Control -> GainSelector -> mvGainHorizontalZoneDivider".
  6. Select mvHorizontalZone1 in "Setting -> Base -> Camera -> GenICam -> Analog Control -> GainSelector".
  7. Adjust the gain value for the second AOI in "Setting -> Base -> Camera -> GenICam -> Analog Control -> GainSelector -> Gain".
Figure 1: wxPropView - configuring multiple AOIs
Figure 2: wxPropView - configuring the gain of the first zone
Figure 3: Example image with two different amplified zones

Programming the dual gain mode

As an example the IMX425 sensor is used for the sample. The goal is to configure three AOIs which have a similar height. Since the AOIs must not overlap or touch each other, it is important to increase the offset of the next AOI by the smallest increment size, which is 8 in this case.

#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

  // more code
    GenICam::ImageFormatControl ifc( pDev );
    
    ifc.mvMultiAreaMode.writeS( "mvMultiAreasCombined" );

    ifc.mvAreaSelector.writeS( "mvArea0" );
    ifc.mvAreaOffsetX.write( 0 );
    ifc.mvAreaOffsetY.write( 0 );
    ifc.mvAreaWidth.write( ifc.mvAreaWidth.getMaxValue( ) );
    ifc.mvAreaHeight.write( 360 );
    ifc.mvAreaEnable.writeS( "1" );

    ifc.mvAreaSelector.writeS( "mvArea1" );
    ifc.mvAreaOffsetX.write( 0 );
    ifc.mvAreaOffsetY.write( 368 );
    ifc.mvAreaWidth.write( ifc.mvAreaWidth.getMaxValue( ) );
    ifc.mvAreaHeight.write( 360 );
    ifc.mvAreaEnable.writeS( "1" );

    ifc.mvAreaSelector.writeS( "mvArea2" );
    ifc.mvAreaOffsetX.write( 0 );
    ifc.mvAreaOffsetY.write( 736 );
    ifc.mvAreaWidth.write( ifc.mvAreaWidth.getMaxValue( ) );
    ifc.mvAreaHeight.write( 360 );
    ifc.mvAreaEnable.writeS( "1" );

    GenICam::AnalogControl anc( pDev );
    anc.mvGainMode.writeS( "mvMultiZone" );

    anc.gainSelector.writeS( "mvHorizontalZone0" );
    anc.gain.write( 12 );
    anc.mvGainHorizontalZoneDivider.write( 80 );

    anc.gainSelector.writeS( "mvHorizontalZone1" );
    anc.gain.write( 0 );
  // more code



Working with triggers

There are several use cases concerning trigger:



Processing triggers from an incremental encoder

Basics

The following figure shows the principle of an incremental encoder:

Figure 1: Principle of an incremental encoder

This incremental encoder will send an A, B, and Z pulse. With these pulses there are several ways to synchronize an image with an incremental encoder.

Using Encoder Control

To create an external trigger event by an incremental encoder, please follow these steps:

  1. Connect the incremental encoder output signal A, for example, to the digital input 0 ("Line4") of the mvBlueFOX3.
    This line counts the forward pulses of the incremental encoder.
  2. Depending on the signal quality, it could be necessary to set a debouncing filter at the input (red box in Figure 3):
    Adapt in "Setting -> Base -> Camera -> GenICam -> Digital I/O Control" the "Line Selector" to "Line4" and set "mv Line Debounce Time Falling Edge" and "mv Line Debounce Time Rising Edge" according to your needs.
  3. Set the trigger "Setting -> Base -> Camera -> GenICam -> Acquisition Control -> Trigger Selector" "FrameStart" to the "Encoder0" ("Trigger Source") signal.
  4. Then set "Setting -> Base -> Camera -> GenICam -> Encoder Control -> Encoder Selector" to "Encoder0" and
  5. adapt the parameters to your needs.
    See also
    Encoder Control
Figure 2: wxPropView settings
Note
The max. possible frequency is 5 kHz.

Programming the Encoder Control

#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

  // more code
  GenICam::EncoderControl ec( pDev );
  ec.encoderSelector.writeS( "Encoder0" );
  ec.encoderSourceA.writeS( "Line4" );
  ec.encoderMode.writeS( "FourPhase" );
  ec.encoderOutputMode.writeS( "PositionUp" );
  // more code

Using Counter

It is also possible to use Counter and CounterEnd as the trigger event for synchronizing images with an incremental encoder.

To create an external trigger event by an incremental encoder, please follow these steps:

  1. Connect the incremental encoder output signal A, for example, to the digital input 0 ("Line4") of the mvBlueFOX3.
    This line counts the forward pulses of the incremental encoder.
  2. Set "Setting -> Base -> Camera -> GenICam -> Counter and Timer Control -> Counter Selector" to "Counter1" and
  3. "Counter Event Source" to "Line4" to count the number of pulses e.g. as per revolution (e.g. "Counter Duration" to 3600).
  4. Then set the trigger "Setting -> Base -> Camera -> GenICam -> Acquisition Control -> Trigger Selector" "FrameStart" to the "Counter1End" ("Trigger Source") signal.
Figure 3: wxPropView setting

To reset "Counter1" at Zero degrees, you can connect the digital input 1 ("Line5") to the encoder Z signal.



Generating a pulse width modulation (PWM)

Basics

To dim a laser line generator, for example, you have to generate a pulse width modulation (PWM).

For this, you will need

  • 2 timers and
  • the active signal of the second timer at an output line

Programming the pulse width modulation

You will need two timers and you have to set a trigger.

  • Timer1 defines the interval between two triggers.
  • Timer2 generates the trigger pulse at the end of Timer1.

The following sample shows a trigger

  • which is generated every second and
  • the pulse width is 10 ms:
#include <mvIMPACT_CPP/mvIMPACT_acquire.h>
#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

... 

// Master: Set timers to trig image: Start after queue is filled
    GenICam::CounterAndTimerControl catcMaster(pDev);
    catcMaster.timerSelector.writeS( "Timer1" );
    catcMaster.timerDelay.write( 0. );
    catcMaster.timerDuration.write( 1000000. );
    catcMaster.timerTriggerSource.writeS( "Timer1End" );

    catcMaster.timerSelector.writeS( "Timer2" );
    catcMaster.timerDelay.write( 0. );
    catcMaster.timerDuration.write( 10000. );
    catcMaster.timerTriggerSource.writeS( "Timer1End" );
See also
Counter And Timer Control
Note
Make sure that the Timer1 interval is larger than the processing time. Otherwise, images will be lost.

Now, the two timers will work like the following figure illustrates, which means

  • Timer1 is the trigger event and
  • Timer2 the trigger pulse width:
Figure 1: Timers

The timers are defined, now you have to set the digital output, e.g. "Line 0":

// Set Digital I/O
    GenICam::DigitalIOControl io(pDev);
    io.lineSelector.writeS( "Line0" );
    io.lineSource.writeS( "Timer2Active" );
See also
Digital I/O Control

This signal has to be connected with the digital inputs of the application.

Programming the pulse width modulation with wxPropView

The following figures show how you can set the timers using the GUI tool wxPropView:

  1. Setting of Timer1 (blue box) on the master camera:

    Figure 2: wxPropView - Setting of Timer1

  2. Setting of Timer2 (purple on the master camera):

    Figure 3: wxPropView - Setting of Timer2

  3. Assigning timer to DigOut (orange box in Figure 2).



Outputting a pulse at every other external trigger

To do this, please follow these steps:

  1. Switch "Trigger Mode" to "On" and
  2. Select the "Trigger Source", e.g. "Line5".
  3. Use "Counter1" and count the number of input trigger by setting the "Counter Duration" to "2".
  4. Afterwards, start "Timer1" at the end of "Counter1":

    Figure 1: wxPropView - Setting the sample

The "Timer1" appears every second image.

Now, you can assign "Timer1Active" to a digital output e.g. "Line3":

Figure 2: Assigning the digital output
Note
You can delay the pulse if needed.
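
A sketch of the same configuration with the mvIMPACT Acquire C++ interface (the timer duration is an example value for the output pulse width in us):

  // more code
  GenICam::AcquisitionControl ac( pDev );
  ac.triggerSelector.writeS( "FrameStart" );
  ac.triggerMode.writeS( "On" );
  ac.triggerSource.writeS( "Line5" );

  GenICam::CounterAndTimerControl ctc( pDev );
  ctc.counterSelector.writeS( "Counter1" );
  ctc.counterEventSource.writeS( "Line5" ); // count the incoming trigger signals
  ctc.counterDuration.write( 2 );           // Counter1End fires at every 2nd trigger

  ctc.timerSelector.writeS( "Timer1" );
  ctc.timerTriggerSource.writeS( "Counter1End" );
  ctc.timerDuration.write( 1000. );         // pulse width in us (example)

  GenICam::DigitalIOControl io( pDev );
  io.lineSelector.writeS( "Line3" );
  io.lineSource.writeS( "Timer1Active" );
  // more code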



Creating different exposure times for consecutive images

If you want to create a sequence of exposure times, you have to trigger the camera "externally" via pulse width:

  1. Use Timer and Counter to build a sequence of different pulse widths.
  2. Use the Counter for the time between the exposures (with respect to the readout times).
  3. Afterwards, use an AND gate followed by OR gate to combine different exposure times.
Note
Please be sure that the sensor can output the complete image during Counter1 or Counter2. Otherwise, only one integration time will be used.
Figure 1: wxPropView - Logic gate principle

You can set this sample in wxPropView. E.g. the sensor makes 22.7 frames per second in Continuous Mode. This means that the sensor needs 44 ms to output the complete image.

    1
--------- = approx. 44 ms
  22.7

We take 55 ms to be sure. Now, as different exposure times we take 1 ms (Timer1) and 5 ms (Timer2). To get the 55 ms, we have to add 54000 us (Counter1) and 50000 us (Counter2).

Finally, you have to set the logic gate as shown in the figure:

Figure 2: wxPropView - Logic gate setting
Note
Because there are 4 counters and 2 timers you can only add one further exposure time using one counter as a timer.

So if you want other sequences, you have to use the counters and timers in a flexible way as shown in the next sample:

Sequence with 4 times exposure A followed by 1 time exposure B

If you have an external trigger, you can use the counter and timer to create longer exposure sequences.

For example, if you want a sequence with 4 times exposure A followed by 1 time exposure B you can count the trigger events. That means practically:

  1. Use Counter1 to count 5 trigger signals, then
  2. Counter1End issues (re-triggers) Timer2 for the long exposure time.
  3. Every trigger issues Timer1 for the short exposure.
  4. Afterwards, an AND gate followed by OR gate combines the different exposure times.

In wxPropView it will look like this:

Figure 3: wxPropView - Logic gate setting 2



Detecting overtriggering

Scenario

The image acquisition of a camera consists of two steps:

  • exposure of the sensor and
  • readout of the sensor data

If a trigger signal arrives during these steps, it will be skipped:

Figure 1: Trigger counter increases but the start exposure counter not

To notice overtriggering, you can use counters:

  • One counter counts the incoming trigger signals, the
  • second counter counts the ExposureStart signals.

Using the chunk data you can overlay the counters in the live image.

Setting the overtrigger detector using wxPropView

First of all, we have to set the trigger in "Setting -> Base -> Camera -> GenICam -> Acquisition Control" with following settings:

Property name wxPropView Setting
Trigger Selector FrameStart
Trigger Mode On
Trigger Source Line4
Trigger Activation RisingEdge
Exposure Mode Timed

This trigger will start an acquisition after a rising edge signal on line 4 (= DigIn0 ).

Now, set the two counters. Both counters (Counter1 and Counter2) will be reset and start after the acquisition (AcquisitionStart) has started.

While Counter1 increases with every ExposureStart event (see figure above for the event and acquisition details) ...

Figure 2: Setting Counter1

... Counter2 increases with every RisingEdge of the trigger signal:

Figure 3: Setting Counter2

Now, you can check if the trigger signal is skipped (when a rising edge signal is active during readout) or not by comparing the two counters.

Enable the inclusion of the selected chunk data ("Chunk Mode Active = 1") in the payload of the image in "Setting -> Base -> Camera -> GenICam -> Chunk Data Control":

Figure 4: Enable chunk data
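
The two counters and the chunk mode can also be configured from an application. This is a minimal sketch using GenICam::CounterAndTimerControl and GenICam::ChunkDataControl; the reset/start behavior at AcquisitionStart shown in figures 2 and 3 is not repeated here:

  // more code
  GenICam::CounterAndTimerControl ctc( pDev );
  ctc.counterSelector.writeS( "Counter1" );
  ctc.counterEventSource.writeS( "ExposureStart" ); // counts started exposures
  ctc.counterSelector.writeS( "Counter2" );
  ctc.counterEventSource.writeS( "Line4" );         // counts incoming trigger edges

  GenICam::ChunkDataControl cdc( pDev );
  cdc.chunkModeActive.write( bTrue ); // include the selected chunk data in the image payload
  // more code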

Activate the info overlay in the display area. Right-click on the live display and select: "Request Info Overlay"

Figure 5: Show chunk data

The following figure shows that no trigger signal is skipped:

Figure 6: Trigger Signal counter equals ExposureStart counter

The following figure shows that the acquisition is overtriggered:

Figure 7: Trigger Signal counter is higher than ExposureStart counter



Triggering of an indefinite sequence with precise starting time

Scenario

Especially in the medical area, there are applications where a triggered acquisition is started, for example, with a foot switch. Following challenges have to be solved in combination with these applications:

  • The user wants the acquired image immediately (precise starting time).
  • It is not known, when the user stops the acquisition (indefinite sequence).

Using AcquisitionStart as the trigger source, it could take between 10 and 40 ms until the camera acquires the first frame. That is not really an immediate acquisition start. It is recommended to use FrameStart as the trigger source instead. However, depending on when the trigger event occurs, there will be a timeout during the first frame in nearly all cases.

You can avoid this using a timer which generates a "high" every 100 us and which is connected to the trigger input Line4 using a logical AND gate. I.e. if the timer is "high" and there is a trigger signal at Line4 then the logical conjunction is true. The AND gate result is then connected as TriggerSource of the FrameStart trigger using a logical OR gate. I.e. as soon as the logical AND conjunction is true, the trigger source is true and the image acquisition will start.

The following figure illustrates the settings:

Figure 1: Schematic illustration of the settings

With this setting, there is still a possible, yet acceptable, time delay of approx. 100 to 130 us.

Creating the use case using wxPropView

First of all, we have to set the timer in "Setting -> Base -> Camera -> GenICam -> Counter And Timer Control" with following settings:

Property name wxPropView Setting
Timer Selector Timer1
Timer Trigger Source Timer1End
Timer Duration 100.000

Afterwards, we have to set the logical gates in "Setting -> Base -> Camera -> GenICam -> mv Logic Gate Control" with following settings:

Property name wxPropView Setting
mv Logic Gate AND Selector mvLogicGateAND1
mv Logic Gate AND Source 1 Line4
mv Logic Gate AND Source 2 Timer1Active
mv Logic Gate OR Selector mvLogicGateOR1
mv Logic Gate OR Source 1 mvLogicGateAND1Output

Finally, we have to set the trigger in "Setting -> Base -> Camera -> GenICam -> Acquisition Control" with following settings:

Property name wxPropView Setting
Trigger Selector FrameStart
Trigger Mode On
Trigger Source mvLogicGateAND1Output
Trigger Activation RisingEdge
Exposure Mode Timed
Figure 2: Sample settings
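
For reference, the same configuration written as code might look like the sketch below. The class name GenICam::mvLogicGateControl and its property names are assumptions derived from the "mv Logic Gate Control" category shown above; please verify them against the generated interface for your device:

#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

  // more code
  GenICam::CounterAndTimerControl ctc( pDev );
  ctc.timerSelector.writeS( "Timer1" );
  ctc.timerTriggerSource.writeS( "Timer1End" );
  ctc.timerDuration.write( 100. ); // generates a "high" every 100 us

  GenICam::mvLogicGateControl lgc( pDev ); // assumption: wrapper class for "mv Logic Gate Control"
  lgc.mvLogicGateANDSelector.writeS( "mvLogicGateAND1" );
  lgc.mvLogicGateANDSource1.writeS( "Line4" );
  lgc.mvLogicGateANDSource2.writeS( "Timer1Active" );
  lgc.mvLogicGateORSelector.writeS( "mvLogicGateOR1" );
  lgc.mvLogicGateORSource1.writeS( "mvLogicGateAND1Output" );

  GenICam::AcquisitionControl ac( pDev );
  ac.triggerSelector.writeS( "FrameStart" );
  ac.triggerMode.writeS( "On" );
  ac.triggerSource.writeS( "mvLogicGateAND1Output" );
  ac.triggerActivation.writeS( "RisingEdge" );
  // more code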



Working with I/Os

There are several use cases concerning I/Os:



Controlling strobe or flash at the outputs

Of course, the mvBlueFOX3 supports strobe or flash lights. However, there are several things you have to keep in mind when using strobes or flash:

  1. Be sure that the illumination fits with the movement of the device under test.
  2. Bright illumination and careful control of exposure time are usually required.
  3. To minimize blur in the image, short exposure times are needed.

Alternatively, you can use a flash with short burn times. For this, you can control the flash using the camera. The following figures show how you can do this using wxPropView:

  1. Select in "Setting -> Base -> Camera -> Digital I/O Control" the output line with the "Line Selector" to which the strobe or flash is connected.
  2. Now, set the "Line Source" to "mvExposureAndAcquisitionActive".
    This means that the signal will be high for the exposure time and only while acquisition of the camera is active.
Figure 1: Setting the "Line Source" to "mvExposureAndAcquisitionActive"
Note
This can be combined with an external trigger.
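
In code this corresponds to the following sketch (using GenICam::DigitalIOControl; "Line0" is just an example output):

  // more code
  GenICam::DigitalIOControl io( pDev );
  io.lineSelector.writeS( "Line0" ); // output the strobe or flash is connected to (example)
  io.lineSource.writeS( "mvExposureAndAcquisitionActive" );
  // more code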

Compensating delay of strobe or flash

Normally, the input circuitry of a flash has a delay (e.g. due to low pass filtering). Using "ExposureActive" to fire the strobe would therefore illuminate the scene delayed with respect to the exposure of the sensor. Figure 2 shows the problem:

Figure 2: Flash delay with "ExposureActive"

To solve this issue, you can use following procedure:

  1. Do not use "ExposureActive" for triggering strobe.
  2. Build flash signal with Timer,
  3. trigger Timer with external trigger (e.g. "Line5").
  4. Use "Trigger Delay" to delay exposure of the sensor accordingly.

In wxPropView it will look like this:

Figure 3: Working with Timer and "Trigger Delay"



Creating a debouncing filter at the inputs

In some cases, it is necessary to eliminate noise on trigger lines. This can become necessary when either

  • the edges of a trigger signal are not perfect in terms of slope or
  • if, because of the nature of the trigger signal source, multiple trigger edges are generated within a very short period of time even if there has just been a single trigger event.

The latter one is also called bouncing.

Bouncing is the tendency of any two metal contacts in an electronic device to generate multiple signals as the contacts close or open; now debouncing is any kind of hardware device or software that ensures that only a single signal will be acted upon for a single opening or closing of a contact.

To address problems that can arise from these kinds of trigger signals MATRIX VISION offers debouncing filters at the digital inputs of a device.

The debouncing filters can be found under "Setting -> Base -> Camera -> GenICam -> Digital I/O Control -> LineSelector" (red box in Figure 1) for each digital input:

Figure 1: wxPropView - Configuring Digital Input Debounce Times

Each digital input (LineMode equals Input) that can be selected via the LineSelector property will offer its own property to configure the debouncing time for falling edge trigger signals ("mv Line Debounce Time Falling Edge") and rising edge ("mv Line Debounce Time Rising Edge") trigger signals.

The line debounce time can be configured in microseconds with a maximum value of up to 5000 microseconds. Internally, each time an edge is detected at the corresponding digital input, a timer is started (orange * in figures 2 and 3) that is reset whenever the signal applied to the input falls below the threshold again. Only if the signal stays at a constant level for a full period of the defined mvLineDebounceTime will the input signal be considered a valid trigger signal.

Note
Of course this mechanism delays the image acquisition by the debounce time.
Figure 2: mvLineDebounceTimeRisingEdge Behaviour
Figure 3: mvLineDebounceTimeFallingEdge Behaviour
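
A minimal code sketch of the debounce configuration, assuming the mvLineDebounceTimeRisingEdge and mvLineDebounceTimeFallingEdge properties are available via GenICam::DigitalIOControl (the value is an example in microseconds):

  // more code
  GenICam::DigitalIOControl io( pDev );
  io.lineSelector.writeS( "Line4" );            // digital input 0
  io.mvLineDebounceTimeRisingEdge.write( 100 ); // example value in us
  io.mvLineDebounceTimeFallingEdge.write( 100 );
  // more code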



Working with HDR (High Dynamic Range Control)

There are several use cases concerning High Dynamic Range Control:



Adjusting sensor -x02d (-1012d)

Introduction

The HDR (High Dynamic Range) mode of the Aptina sensor increases the usable contrast range. This is achieved by dividing the integration time into three phases. The exposure time proportion of the three phases can be set independently.

Functionality

To exceed the typical dynamic range, images are captured with 3 different exposure times using given ratios. The figure shows a multiple exposure capture using 3 different exposure times.

Figure 1: Multiple exposure capture using 3 different exposure times
Note
The longest exposure time (T1) represents the Exposure_us parameter you can set in wxPropView.

Afterwards, the signal is fully linearized before going through a compander to be output as a piece-wise linear signal. The next figure shows this.

Figure 2: Piece-wise linear signal

Description

Exposure ratios can be controlled by the program. Two ratios are used: R1 = T1/T2 and R2 = T2/T3.

Increasing R1 and R2 will increase the dynamic range of the sensor at the cost of lower signal-to-noise ratio (and vice versa).



Adjusting sensor -x02e (-1013) / -x04e (-1020)

Introduction

The HDR (High Dynamic Range) mode of the e2v sensors increases the usable contrast range. This is achieved by adjusting the logarithmic response of the pixels.

Functionality

MATRIX VISION offers the "mv Linear Logarithmic Mode" to use the HDR mode of the e2v sensors. With this mode you can set the low voltage of the reset signal at pixel level.

Figure 1: Knee-Point of the e2v HDR mode shifts the linear / logarithmic level

You can find the "mv Linear Logarithmic Mode" in "Setting -> Base -> Camera -> GenICam -> Analog Control":

Figure 2: wxPropView - mv Linear Logarithmic Mode

The following figure shows the measured curves at 2 ms and 20 ms exposure time with four different "mv Linear Logarithmic Mode" settings:

Figure 3: Measured curves

The curves are at Gain 1, lambda = 670 nm (40 nm width), room temperature, nominal power supply values, on a 100 x 100 pixel centered area.

"mv Linear Logarithmic Mode" value Dynamic max (dB)
T = 2 ms T = 20 ms
4 47 65
5 74 93
6 85 104
7 92 111



Adjusting sensor -1031C

Introduction

The AR0331 Aptina sensor supports two dynamic modes:

  • linear dynamic mode which is activated by default.
    This mode uses two interleaved reset-exposure pointers to create a 16 bit linear dynamic range. The first twelve bits are used for the long exposure (T1) while the remaining four bits are used for the short exposure (T2 = T1 / Ratio). With wxPropView you can shift through the bits.
  • high dynamic mode (HDR) which compresses the 16 bit value to 12 bits using Adaptive Local Tone Mapping (ALTM) or by companding to 12 or 14 bits (figure 1).
Figure 1: Compression from 16 to 12 bits

The HDR can be combined with

  • image average for low noise images,
  • the camera's auto exposure, and
  • the camera's auto gain.

Adaptive Local Tone Mapping (ALTM)

The Adaptive Local Tone Mapping is used to compress the HDR image so that it can be nicely displayed on a low dynamic range display (i.e. LCD with a contrast ratio about 1000:1). The AR0331 does ALTM internally and fully automatic. The sensor also compensates both motion artifacts which occur because of the two exposures and noise artifacts which occur because of the clipping.

Enabling the HDR mode with wxPropView

Using the HDR mode, you have to do the following step:

  1. Start wxPropView and
  2. connect to the camera.
  3. Then in "Setting -> Base -> Camera -> GenICam -> mv High Dynamic Range Control" you can enable the "mv HDR Enable".

In this case

  • the Adaptive Local Tone Mapping is on,
  • the motion compensation is off (but it can be switched on via "mv HDR Motion Compensation Enable"),
  • the adaptive color difference noise filtering is on, and
  • you can set the exposure ratio:
    • mvRatio4x (~ 84dB)
    • mvRatio8x (~ 90dB)
    • mvRatio16x (~ 96dB, recommended)
    • mvRatio32x (~ 100dB)
Figure 2: wxPropView - mv High Dynamic Range Control



Working with LUTs

There are several use cases concerning LUTs (Look-Up-Tables):



Introducing LUTs

Introduction

Look-Up-Tables (LUT) are used to transform input data into a desirable output format. For example, if you want to invert an 8 bit image, a Look-Up-Table will look like the following:

Figure 1: Look-Up-Table which inverts a pixel of an 8 bit mono image

I.e., a pixel which is white in the input image (value 255) will become black (value 0) in the output image.

All MATRIX VISION devices use a hardware based LUT which means that

  • no host CPU load is needed and
  • the LUT operations are independent of the transmission bit depth.

Setting the hardware based LUTs via LUT Control

On the mvBlueFOX3 using wxPropView, you will find the LUT Control via "Setting -> Base -> Camera -> GenICam -> LUT Control".

wxPropView offers a wizard for the LUT Control usage:

  1. Click on "Setting -> Base -> Camera -> GenICam -> LUT Control".
    Now, the "Wizard" button becomes active.

    Figure 2: wxPropView - LUT Control wizard button

  2. Click on the "Wizard" button to start the LUT Control wizard tool.
    The wizard will load the LUT data from the camera.

    Figure 3: wxPropView - LUT Control wizard dialog

It is easy to change settings like the Gamma value of the Luminance or of each color channel (in combination with a color sensor of course) with the help of the wizard. You can also invert the values of each pixel with the wizard. It is not possible to set a LUT mode and the "mv LUT Mapping" is fixed.

Make your changes and do not forget to

  1. click on "Copy to..." and select "All" or the color channel you need, to
  2. click on "Enable All", and finally, to
  3. click on Synchronize and play the settings back to the device (via "Cache -> Device").
Note
If you select "Enable All" without entering any value the image will be inverted.

Setting the Host based LUTs via LUTOperations

Host based LUTs are also available via "Setting -> Base -> ImageProcessing -> LUTOperations"). Here, the changes will affect the 8 bit image data and the processing needs the CPU of the host system.

Three "LUTMode"s are available:

  • "Gamma"
    You can use "Gamma" to lift darker image areas and to flatten the brighter ones. This compensates the contrast of the object. The calculation is described here. It makes sense to set the "GammaStartThreshold" higher than 0 to avoid a too extreme lift or noise in the darker areas.
  • "Interpolated"
    With "Interpolated" you can set the key points of a characteristic line. You can defined the number of key points. The following figure shows the behavior of all 3 LUTInterpolationModes with 3 key points:
Figure 4: LUTMode "Interpolated" -> LUTInterpolationMode
  • "Direct"
    With "Direct" you can set the LUT values directly.

Example 1: Inverting an Image

To get an inverted 8 bit mono image like shown in Figure 1, you can set the LUT using wxPropView. After starting wxPropView and using the device,

  1. Set "LUTEnable" to "On" in "Setting -> Base -> ImageProcessing -> LUTOperations".
  2. Afterwards, set "LUTMode" to "Direct".
  3. Right-click on "LUTs -> LUT-0 -> DirectValues[256]" and select "Set Multiple Elements... -> Via A User Defined Value Range".
    This is one way to get an inverted result. It is also possible to use the "LUTMode" - "Interpolated".
  4. Now you can set the range from 0 to 255 and the values from 255 to 0 as shown in Figure 5.
Figure 5: Inverting an image using wxPropView with LUTMode "Direct"
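
The same inversion can be programmed with the host-based LUT. This is a minimal sketch, assuming the LUTEnable, LUTMode and getLUTParameter() members of mvIMPACT::acquire::ImageProcessing and the directValues property of the returned LUTParameters object:

#include <vector>
#include <mvIMPACT_CPP/mvIMPACT_acquire.h>

  // more code
  mvIMPACT::acquire::ImageProcessing ip( pDev );
  ip.LUTEnable.write( bTrue );   // i.e. "On"
  ip.LUTMode.writeS( "Direct" );
  std::vector<int> invertedValues( 256 );
  for( int i = 0; i < 256; i++ )
  {
    invertedValues[i] = 255 - i; // invert each 8 bit gray value
  }
  ip.getLUTParameter( 0 ).directValues.write( invertedValues ); // LUT-0
  // more code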



Working with LUTValueAll

Working with the LUTValueAll feature requires a detailed understanding of both Endianness and the camera's internal format for storing LUT data. LUTValueAll typically references the same section in the camera's memory as accessing the LUT via the features LUTIndex and LUTValue.

LUT data can either be written to a device like this (C++ syntax):

const size_t LUT_VALUE_COUNT = 256;
int64_type LUTData[LUT_VALUE_COUNT]; // fill e.g. via getLUTDataToWriteToTheDevice()
mvIMPACT::acquire::GenICam::LUTControl lut(getDevicePointerFromSomewhere());
for(int64_type i=0; i< static_cast<int64_type>(LUT_VALUE_COUNT); i++ )
{
  lut.LUTIndex.write( i );
  lut.LUTValue.write( LUTData[i] );
}

When using this approach all the Endianness related issues will be handled completely by the GenICam runtime library. So this code is straightforward and easy to understand but might be slower than desired as it requires a lot of direct register accesses to the device.

In order to allow a fast efficient way to read/write LUT data from/to a device the LUTValueAll feature has been introduced. When using this feature the complete LUT can be written to a device like this:

const size_t LUT_VALUE_COUNT = 256;
int LUTData[LUT_VALUE_COUNT]; // fill e.g. via getLUTDataToWriteToTheDevice()
mvIMPACT::acquire::GenICam::LUTControl lut(getDevicePointerFromSomewhere());
std::string buf(reinterpret_cast<std::string::value_type*>(&LUTData), sizeof(LUTData));
lut.LUTValueAll.writeBinary( buf );

BUT as this simply writes a raw block of memory to the device it suddenly becomes important to know exactly how the LUT data is stored inside the camera. This includes:

  • The size of one individual LUT entry (this could be anything from 1 up to 8 bytes)
  • The Endianness of the device
  • The Endianness of the host system used for sending/receiving the LUT data

The first item has impact on how the memory must be allocated for receiving/sending LUT data. For example when the LUT data on the device uses a 'one 32-bit integer per LUT entry with 256 entries' layout then of course this is needed on the host system:

const size_t LUT_VALUE_COUNT = 256;
int LUTData[LUT_VALUE_COUNT];

When the Endianness of the host system differs from the Endianness used by the device the application communicates with, the data assembled on the host might require Endianness swapping before sending it. For the example from above this would e.g. require something like this:

#define SWAP_32(l) \
  ((((l) & 0xff000000) >> 24) | \
   (((l) & 0x00ff0000) >> 8)  | \
   (((l) & 0x0000ff00) << 8)  | \
   (((l) & 0x000000ff) << 24))

void fn()
{
 const size_t LUT_VALUE_COUNT = 256;
 int LUTData[LUT_VALUE_COUNT]; // fill e.g. via getLUTDataToWriteToTheDevice()
 mvIMPACT::acquire::GenICam::LUTControl lut(getDevicePointerFromSomewhere());
 for( size_t i=0; i<LUT_VALUE_COUNT; i++ )
 {
  LUTData[i] = SWAP_32(LUTData[i]);
 }
 std::string buf(reinterpret_cast<std::string::value_type*>(&LUTData), sizeof(LUTData));
 lut.LUTValueAll.writeBinary( buf );
}

For details on how the LUT memory is organized for certain sensors please refer to the Sensor overview. Please note that all mvBlueCOUGAR-S, mvBlueCOUGAR-X and mvBlueCOUGAR-XD devices use Big Endian while almost any Windows or Linux distribution on the market uses Little Endian, thus swapping of the data will most certainly be necessary when using the LUTValueAll feature.



Implementing a hardware-based binarization

If you would like to get binarized images from the camera, you can use the hardware-based Look-Up-Tables (LUT) which you can access via "Setting -> Base -> Camera -> GenICam -> LUT Control".

To get binarized images from the camera, please follow these steps:

  1. Set up the camera and the scenery, e.g.

    Figure 1: Scenery

  2. Open the LUT wizard via the menu "Wizards -> LUT Control...".
  3. Export the current LUT as a "*.csv" file.

The "*.csv" file contains just one column for the output gray scale values. Each row of the "*.csv" represents the input gray scale value. In our example, the binarization threshold is 1024 in a 12-to-9 bit LUT. I.e., we have 4096 (= 12 bit) input values (= rows) and 512 (= 9 bit) output values (column values). To binarize the image according to the threshold, you have to

  1. set all values below the binarization threshold to 0.
  2. Set all values above the binarization threshold to 511:

    Figure 2: The binarization LUT

  3. Now, save the "*.csv" file and
  4. import it via the LUT Control wizard.
  5. Click on synchronize and
  6. finally check "Enable".

Afterwards the camera will output binarized images like the following:

Figure 3: Binarized image
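
If you prefer to generate the "*.csv" file programmatically instead of editing the exported one, a small standalone helper like the following sketch would produce the 12-to-9 bit binarization LUT described above (file name and threshold are examples):

#include <fstream>

int main( void )
{
  // one output value per line, one line per 12 bit input value
  std::ofstream csv( "binarization_lut.csv" );
  for( int inputValue = 0; inputValue < 4096; inputValue++ )
  {
    csv << ( ( inputValue < 1024 ) ? 0 : 511 ) << "\n";
  }
  return 0;
}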



Saving data on the device

Note
As described in Storing and restoring settings, it is also possible to save the settings as an XML file on the host system. You can find further information about for example the XML compatibilities of the different driver versions in the mvIMPACT Acquire SDK manuals and the according setting classes: https://www.matrix-vision.com/manuals/SDK_CPP/classmvIMPACT_1_1acquire_1_1FunctionInterface.html (C++)

There are several use cases concerning device memory:



Creating user data entries

Basics about user data

It is possible to save arbitrary user specific data in the hardware's non-volatile memory. The number of possible entries depends on the length of the individual entries as well as the size of the device's non-volatile memory reserved for storing them:

  • mvBlueFOX,
  • mvBlueFOX-M,
  • mvBlueFOX-MLC,
  • mvBlueFOX3, and
  • mvBlueCOUGAR-X

currently offer 512 bytes of user accessible non-volatile memory of which 12 bytes are needed to store header information leaving 500 bytes for user specific data.

One entry will currently consume:
1 + <length_of_name (up to 255 chars)> + 2 + <length_of_data (up to 65535 bytes)> + 1 (access mode) bytes

as well as an optional:
1 + <length_of_password> bytes per entry if a password has been defined for this particular entry

It is possible to save either String or Binary data in the data property of each entry. When storing binary data please note that this data will internally be stored in Base64 format, thus the amount of memory required is 4/3 times the binary data size.

The UserData can be accessed and created using wxPropView (the device has to be closed). In the section "UserData" you will find the entries and the following methods:

  • "CreateUserDataEntry"
  • "DeleteUserDataEntry"
  • "WriteDataToHardware"
Figure 1: wxPropView - section "UserData -> Entries"

To create a user data entry, you have to

  • Right click on "CreateUserDataEntry"
  • Select "Execute" from the popup menu.
    An entry will be created.
  • In "Entries" click on the entry you want to adjust and modify the data fields.
    To permanently commit a modification made with the keyboard the ENTER key must be pressed.
  • To save the data on the device, you have to execute "WriteDataToHardware". Please have a look at the "Output" tab in the lower right section of the screen as shown in Figure 2, to see if the write process returned with no errors. If an error occurs a message box will pop up.
Figure 2: wxPropView - analysis tool "Output"

Coding sample

If you e.g. want to use the UserData as a dongle mechanism (with binary data), it is not advisable to use wxPropView. In this case you have to program the handling of the user data.

See also
mvIMPACT::acquire::UserDataEntry in mvIMPACT_Acquire_API_CPP_manual.chm.



Creating user set entries

With the mvBlueFOX3 it is possible to store up to five configuration sets (4 user sets plus one factory default) in the camera.

This feature is similar to the storing settings functionality, which saves the settings in the registry. However, as mentioned before the user sets are stored in the camera.

The user set stores the settings permanently in the camera and is independent of the computer which is used.

Additionally, you can select, which user set comes up after hard reset.

Attention
The storage of user data in the registry can still override user set data!
User sets are cleared after firmware change.

List of ignored properties

Following properties are not stored in the user set:

                    - DecimationHorizontalMode
                    - DecimationVerticalMode
                    - DeviceTLType
                    - DeviceUserID
                    - EventExposureEndData
                    - EventFrameEndData
                    - EventLine4RisingEdgeData
                    - EventLine5RisingEdgeData
                    - EventTestData
                    - FileAccessBuffer
                    - FileAccessLength
                    - FileAccessOffset
                    - FileOpenMode
                    - LUTIndex
                    - LUTValueAll
                    - mvADCGain
                    - mvDefectivePixelCount
                    - mvDefectivePixelOffsetX
                    - mvDefectivePixelOffsetY
                    - mvDeviceClockPLLPhaseShift
                    - mvDevicePowerMode
                    - mvDeviceStandbyTimeoutEnable
                    - mvDigitalGainOffset
                    - mvFFCAutoLoadMode
                    - mvI2cInterfaceASCIIBuffer
                    - mvI2cInterfaceBinaryBuffer
                    - mvI2cInterfaceBytesToRead
                    - mvI2cInterfaceBytesToWrite
                    - mvPreGain
                    - mvSerialInterfaceASCIIBuffer
                    - mvSerialInterfaceBinaryBuffer
                    - mvSerialInterfaceBytesToRead
                    - mvSerialInterfaceBytesToWrite
                    - mvUserData
                    - mvVRamp
                    - UserSetDefault

Working with the user sets

You can find the user set control in "Setting -> Base -> Camera -> GenICam -> User Set Control":

Figure 1: User Set Control

With "User Set Selector" you can select the user set ("Default", "UserSet1 - UserSet4"). To save or load the specific user set, you have two functions:

  • "int UserSetLoad()" and
  • "int UserSetSave()".

"User Set Default" is the property, where you can select the user set, which comes up after hard reset.

Finally, with "mv User Data" you have the possibility to store arbitrary user data.



Working with the UserFile section (Flash memory)

The mvBlueFOX3 offers a 64 KByte section in the Flash memory that can be used to upload a custom file (UserFile).

To read or write this file you can use the following GenICam File Access Control and its interfaces:

  • IDevFileStream (read)
  • ODevFileStream (write)
Attention
The UserFile is lost each time a firmware update is applied to the device.

Using wxPropView

wxPropView offers a wizard for the File Access Control usage:

  1. Click on "Setting -> Base -> Camera -> GenICam -> File Access Control -> File Selector -> File Operator Selector".
    Now, the "Wizard" button becomes active.

    Figure 1: wxPropView - UserFile wizard

  2. Click on the "Wizard" button.
    Now, a dialog appears where you can choose either to upload or download a file.

    Figure 2: wxPropView - Download / Upload dialog

  3. Make your choice and click on "OK".
    Now, a dialog appears where you can select the File.

    Figure 3: wxPropView - Download / Upload dialog

  4. Select "UserFile" follow the instructions.

Manually control the file access from an application (C++)

The header providing the file access related classes must be included into the application:

#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam_FileStream.h>

A write access will then look like this:

const string fileNameDevice("UserFile");

// uploading a file
mvIMPACT::acquire::GenICam::ODevFileStream file;
file.open( pDev, fileNameDevice.c_str() );

if( !file.fail() )
{
  // Handle the successful upload.
}
else
{
  // Handle the error.
}
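
The open call only establishes the connection to the UserFile; the content still has to be written to the stream. Assuming ODevFileStream behaves like a standard std::ostream (which the class is modeled after), transferring the content of a local file could look like this sketch (the local file name is just an example):

#include <fstream>

// continue the example above: stream a local file into the device's UserFile
std::ifstream localFile( "myUserFile.bin", std::ios::binary ); // hypothetical local file
if( !file.fail() && localFile.is_open() )
{
  file << localFile.rdbuf(); // transfer the content to the device
  file.close();              // flushes the remaining data to the UserFile section
}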

A read access will look like:

const string fileNameDevice("UserFile");

// downloading a file works in a similar way
mvIMPACT::acquire::GenICam::IDevFileStream file;
file.open( pDev, fileNameDevice.c_str() );

if( !file.fail() )
{
  // Handle the successful download.
}
else
{
  // Handle the error.
}

You can find a detailed code example in the C++ API manual in the documentation of the classes mvIMPACT::acquire::GenICam::IDevFileStream and mvIMPACT::acquire::GenICam::ODevFileStream.

Working with device features

There are several use cases concerning device features:



Reset timestamp by hardware

This feature can be used for precise control of the timestamp

  • of a single camera or
  • of a multitude of cameras which shall be synchronized.

The latter can be achieved by the following steps (a code sketch for the camera-side settings follows after the figure):

  1. Define the input line ("TriggerSource") to reset the timestamp, e.g. "Line5" and
  2. set the "Trigger Selector" to "mvTimestampReset".
  3. Connect all input lines of all cameras together.
  4. Finally, use one output of one camera to generate the reset edge:

    Figure 1: wxPropView - Setting the sample
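
On each camera this corresponds to the following settings, shown here as a minimal C++ sketch using the same AcquisitionControl features as in the C# example further below (the input line "Line5" is just an example):

#include <mvIMPACT_CPP/mvIMPACT_acquire.h>
#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

... 

    // Reset the timestamp on an edge at the selected input line
    GenICam::AcquisitionControl ac( pDev );
    ac.triggerSelector.writeS( "mvTimestampReset" );
    ac.triggerSource.writeS( "Line5" ); // example input line
    ac.triggerMode.writeS( "On" );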

Note
Be aware of the drift of the individual timestamps.

The timestamp is generated by the FPGA in the camera, which itself is clocked by a crystal oscillator. This is done independently in each camera and is by default not synchronized among cameras or with the host system.

The typical stability of crystal oscillators is in the range of 100 ppm (parts per million).

This means that for longer operation times (say, in excess of hours) the timestamps of individual cameras tend to drift against each other and against the time in the operating system of the host.

Customers wishing to use the individual camera timestamps for synchronization and identification of images in multi-camera systems will in the meantime have to reset all timestamps either by hardware signal or by command and regularly resynchronize or check the drift algorithmically, in order to make sure that the drift stays below half an image frame time.



Synchronizing camera timestamps without IEEE 1588

Introduction

Camera timestamps are a recommended GenICam / SFNC feature to add the information when an image was taken (more precisely: when the exposure of the image started).

Without additional synchronization it is merely a camera-individual timer with a vendor-specific increment and implementation-dependent accuracy. Each camera starts its own timestamp at zero, and there are no means to adjust or synchronize them among cameras or host PCs. There is an ongoing effort to widely establish the precision timestamp according to "IEEE 1588" in GigE cameras. This involves cameras which are able to perform the required synchronization as well as specific network hardware, driver software and procedures to establish and maintain the synchronization.

There are many applications which do not or cannot profit from "IEEE 1588" but have certain synchronization needs. Solutions for these scenarios are described below.

Resetting timestamp using mvTimestampReset

First of all, the standard does not provide a hardware means to reset the timestamp in a camera other than power-cycling it. Therefore MATRIX VISION has created its own mechanism, mvTimestampReset, to reset the timestamp via a hardware input.

Figure 1: mvTimestampReset

This can be used elegantly for synchronization purposes by wiring an input of all cameras together and resetting all camera timestamps at the beginning with a defined signal edge from the process. From this reset on, all cameras start at zero local time and increment their timestamps independently, so that we achieve a basic accuracy limited only by the drift of the clock main frequency (e.g. a 1 MHz oscillator in the FPGA) over time.

In order to compensate for this drift we can, in addition, reset the timestamp every second or minute or so and count the reset pulse itself with a counter in each camera. Assuming this reset pulse is generated by the master camera itself by means of a timer and output as the hardware reset signal for all cameras, we can now count the reset pulses with all cameras and put both the count and the reset timestamp into the images as so-called chunk data.

We thus have achieved a synchronized timestamp with the precision of the master camera among all connected cameras.

Settings required are shown using MATRIX VISION’s wxPropView tool:

Figure 2: Reset the timestamp every second

An example of the chunk data attached to the image can be seen below. The timestamp is in µs and Counter1 counts the reset pulses, which in this case are generated by the camera itself via Timer1.

Figure 3: ChunkData

The counter can be reset at the beginning of the acquisition by setting the corresponding counter reset property. Of course, all of this is independent of whether the camera is acquiring images in triggered or continuous mode.
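
A minimal C++ sketch of the additional per-camera settings described above could look as follows. The 1 s reset pulse itself is generated with two timers exactly as in the master camera example of the section "Creating synchronized acquisitions using timers" below; the counter selector value and the chunk data feature names are assumptions based on standard SFNC naming and should be verified against the device's feature tree:

#include <mvIMPACT_CPP/mvIMPACT_acquire.h>
#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

... 

    // Count the reset pulses arriving at the input line (assumption: "Counter1", input "Line5");
    // the mvTimestampReset trigger itself is configured as shown in the sketch above.
    GenICam::CounterAndTimerControl ctc( pDev );
    ctc.counterSelector.writeS( "Counter1" );
    ctc.counterEventSource.writeS( "Line5" );

    // Attach the chunk data (timestamp, counter value) to every image
    GenICam::ChunkDataControl cdc( pDev );
    cdc.chunkModeActive.write( bTrue );
    // enable the desired chunks via chunkSelector / chunkEnable (exact names depend on the device)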

Synchronizing timestamp using a pulse-per-second signal

In order to eliminate the unknown drifts of different devices, a PPS (pulse per second) signal can be fed into each camera using a PC with NTP (network time protocol software), GPS devices or even a looped-back camera timer.

From these pulses the device will find out how long one second is. When a device detects that it is no longer running precisely, it will adapt its internal clock, leading to a "stabilized oscillator".

The devices would then maintain their timestamp differences over long times and stay synchronized. However, the initial difference between the timers - from before the PPS was used - remains. If you aim to eliminate that as well, you can use mvTimestampReset up front with the same PPS input signal. In an application this can be configured like this (C# syntax):

// --------------------------------------------------------------------------
bool waitForNextPulse(Device pDev, String triggerLine)
// --------------------------------------------------------------------------
{
    GenICam.CounterAndTimerControl ctc = new GenICam.CounterAndTimerControl(pDev);
    ctc.counterEventSource.writeS(triggerLine);
    long momentaryValue = ctc.counterValue.read();
    for (int i=0; i<12; i++)
    {
        System.Threading.Thread.Sleep(100);
        if (momentaryValue != ctc.counterValue.read()) 
        {
            return true;
        }
    }
    return false;
}

// --------------------------------------------------------------------------
void SetupPPS(Device[] pDevs)
// --------------------------------------------------------------------------
{
    string TriggerLine = "Line4";
    
    if (!waitForNextPulse(pDevs[0],TriggerLine)) 
    {
        Console.WriteLine("No pulse seems to be present");
        return;
    }

    // Now configure all the devices to reset their timestamp with each pulse coming
    // in on the trigger line that the PPS signal is connected to.
    foreach(Device aDevice in pDevs) 
    {
        GenICam.AcquisitionControl ac = new GenICam.AcquisitionControl(aDevice);
        ac.triggerSelector.writeS("mvTimestampReset");
        ac.triggerSource.writeS(TriggerLine);
        ac.triggerMode.writeS("On");
    }

    // wait for the next pulse that will then reset the timestamps of all the devices
    if (!waitForNextPulse(pDevs[0],TriggerLine)) 
    {
        Console.WriteLine("the pulses aren't coming fast enough ...");
        return;
    }

    // Now switch off the reset of the timestamp again. All devices did restart their
    // timestamp counters and will stay in sync using the PPS signal now
    foreach(Device aDevice in pDevs)
    {
        GenICam.AcquisitionControl ac = new GenICam.AcquisitionControl(aDevice);
        ac.triggerMode.writeS("Off");
        GenICam.DeviceControl dc = new GenICam.DeviceControl(aDevice);
        dc.mvTimestampPPSSync.writeS("Line4");
    }
}

Using a looped-back camera timer for the PPS signal

To reduce the amount of hardware needed you might want to sacrifice some timestamp validity and use one of the cameras as a master clock. This can be done like this (a code sketch follows after the notes below):

  • setting Timer1 to duration 1s
  • starting Timer2 with every Timer1End and generating a short pulse (duration = 1000 us)
  • placing Timer2Active on one of the digital I/O lines and using that as the source for PPS signal
Figure 4: Setup for looped-back Timer

Setting Timer1 to 1 s seems like an easy task, but due to some internal dependencies you should be careful here. At the moment, two different timer implementations are present in our products:

  • Type 1: For mvBlueCOUGAR-X cameras with sensors other than the IMX family from Sony, please set the duration to the theoretical value of 1000000 us.
  • Type 2: For all other cameras, please use a duration of 999997 us, since the self-triggering will consume the other 3 us.
  • Please refrain from switching on PPSSync inside the master camera, since (at least in Type 1 cameras) this will lead to an unstable feedback loop.
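
A minimal C++ sketch of this looped-back timer setup, using the same CounterAndTimerControl and DigitalIOControl features as in the synchronized-acquisition example later in this chapter (the output line and the durations are examples, see the notes above), could look like this:

#include <mvIMPACT_CPP/mvIMPACT_acquire.h>
#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

... 

    // Master camera: Timer1 defines the 1 s period (use 999997 us for Type 2 devices, see above)
    GenICam::CounterAndTimerControl ctc( pDev );
    ctc.timerSelector.writeS( "Timer1" );
    ctc.timerDuration.write( 1000000. );
    ctc.timerTriggerSource.writeS( "Timer1End" ); // restarts itself

    // Timer2 generates a short pulse at every Timer1End
    ctc.timerSelector.writeS( "Timer2" );
    ctc.timerDuration.write( 1000. );             // 1000 us pulse
    ctc.timerTriggerSource.writeS( "Timer1End" );

    // Place the pulse on a digital output and wire it to the PPS inputs of all cameras
    GenICam::DigitalIOControl io( pDev );
    io.lineSelector.writeS( "Line0" );            // example output line
    io.lineSource.writeS( "Timer2Active" );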



Using the standby mode

System requirements

  • "Firmware version" at least "1.6.188.0"
  • "mvIMPACT Acquire driver version" at least "2.10.1"

Using mvDeviceStandbyTimeout:

  • "Firmware version" at least "2.12.406.0"
  • "mvIMPACT Acquire driver version" at least "2.17.1"

Introduction

It is possible to switch the mvBlueFOX3 into a power down mode (standby) either

  • by changing the property "mvDevicePowerMode" to "mvStandby" or
  • by enabling the automatic power down mode by setting the property "mvDeviceStandbyTimeoutEnable" to "bTrue" and by specifying the timeout using the property "mvDeviceStandbyTimeout".

The latter will switch the camera to the power down mode as soon as no register has been read (i.e. the camera is neither being used nor acquiring images) for the time period (in seconds) specified by the property "mvDeviceStandbyTimeout".

Note
As long as the device is kept open by an active driver instance, the driver will periodically read small chunks of data from the device to keep it awake when auto standby is active. However, if the application terminates or crashes, the device will automatically move into standby mode after the specified timeout has elapsed.

The power down mode has the following characteristics:

If you change the state back to power on, it will take about 7 seconds for the camera to wake up. During the wake-up, all user settings will be restored, except for the LUTs. Afterwards, the LED will turn green again.

Programming the power down mode

You can set the power down mode via the Device Control:

#include <mvIMPACT_CPP/mvIMPACT_acquire.h>
#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

... 

    GenICam::DeviceControl device(pDev);
    device.mvDevicePowerMode.writeS( "mvStandby" );

Or switch to the power down mode automatically via "mv Device Standby Timeout":

#include <mvIMPACT_CPP/mvIMPACT_acquire.h>
#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

... 

    GenICam::DeviceControl device(pDev);
    device.mvDeviceStandbyTimeoutEnable.write( bTrue );
    device.mvDeviceStandbyTimeout.write( 10 );
Note
If the power mode of the camera is set to "mvStandby" and the process which operates the camera stops for any reason, the camera will automatically wake up again the next time a process detects it.

Changing the power down mode with wxPropView

To use the power down mode, you have to do the following steps:

  1. Start wxPropView and
  2. connect to the camera.
  3. Then in "Setting -> Base -> Camera -> GenICam -> Device Control" you can set the power mode "mv Device Power Mode" to "mvActive" or "mvStandby".
Figure 1: wxPropView: mvDevice Power Mode

Or switch to the power down mode automatically via "mv Device Standby Timeout":

  1. Start wxPropView and
  2. connect to the camera.
  3. Then in "Setting -> Base -> Camera -> GenICam -> Device Control" you can enable the standby timeout "mv Device Standby Timeout Enable" and
  4. set the time in seconds via "mv Device Standby Timeout" after which the camera switches to standby if no register was read.
Figure 2: wxPropView: mv Device Standby Timeout Enable



Working with the serial interface (mv Serial Interface Control)

Introduction

As mentioned in GenICam and Advanced Features section of this manual, the mv Serial Interface Control is a feature which allows an easy integration of motor lenses or other peripherals based on RS232.

  • Available message buffer size: 128 Bytes
Note
Use the Power GND for the RS232 signal.

Setting up the device

Follow these steps to prepare the camera for the communication via RS232:

Figure 1: wxPropView - mv Serial Interface Control
  1. Start wxPropView
  2. Connect to the camera
  3. Under "Setting -> Base -> Camera -> GenICam -> mv Serial Interface Control" activate the serial interface by enabling
    "mv Serial Interface Enable" (1).
    Afterwards "mv Serial Interface Control" is available.
  4. Set up the connection settings to your needs (2).
  5. To test the settings you can send a test message (3).
  6. Send messages by executing the function "int mvSerialInterfaceWrite( void )" by either clicking on the 3 dots next to the function name or by right-clicking on the command and then selecting "Execute" from the context menu (4).

If you are listening to the RS232 serial line using a tool like PuTTY with matching settings...

Figure 2: PuTTY - Setting up the serial interface

you will see the test message:

Figure 3: PuTTY - Receiving the test message

Programming the serial interface

#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

  // more code
  GenICam::mvSerialInterfaceControl sic( pDev );
  sic.mvSerialInterfaceEnable.write( bTrue );                  // activate "mv Serial Interface Control"
  sic.mvSerialInterfaceBaudRate.writeS( "Hz_115200" );         // set up the connection parameters
  sic.mvSerialInterfaceASCIIBuffer.writeS( "Test Test Test" ); // message to transmit
  sic.mvSerialInterfaceWrite.call();                           // send the buffer via RS232
  // more code
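
Receiving data works analogously using the buffer and length properties of the control ("mvSerialInterfaceBytesToRead", "mvSerialInterfaceASCIIBuffer"). The read command is assumed here to be mvSerialInterfaceRead, analogous to the I2C interface described in the next section - a rough sketch:

  // more code
  sic.mvSerialInterfaceBytesToRead.write( 16 );                       // number of bytes expected
  sic.mvSerialInterfaceRead.call();                                   // read from the RS232 line
  const std::string reply = sic.mvSerialInterfaceASCIIBuffer.readS(); // received data
  // more code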



Working with the I2C interface (mv I2C Interface Control)

Introduction

As mentioned in the GenICam and Advanced Features section of this manual, the mv I2C Interface Control is a feature which allows communication with custom-specific peripherals via I2C.

Setting up the device

Follow these steps to prepare the camera for the communication via I2C:

Figure 1: wxPropView - mv I2C Interface Control
  1. Start wxPropView
  2. Connect to the camera
  3. Under "Setting -> Base -> Camera -> GenICam -> mv I2c Interface Control" activate the serial interface by enabling
    "mv I2C Interface Enable" (1).
    Afterwards "mv I2C Interface Control" is available.
  4. Set up the connection settings to your needs (2).
    E.g. to get the temperature of the sensor set "mv I2C Interface Device Address", depending on the device, either to "0x30" or "0x32".
    Afterwards there are two ways to set the resolution of the temperature value (0.5°C [0], 0.25°C [1], 0.125°C [2], or 0.0625°C [3]):

    Using the BinaryBuffer:
    1. Set "mv I2C Interface Device Sub Address" to "0x08".
    2. Set "mv I2C Interface Binary Buffer" e.g. to "1" i.e. 0.25°C (3).
    3. Set "mv I2C Interface Bytes To Write" to "1" to send one byte from the binary buffer (4).
    4. Send messages by executing the function "int mvI2CInterfaceWrite( void )" by either clicking on the 3 dots next to the function name or by right-clicking on the command and then selecting "Execute" from the context menu (5).


    Without the BinaryBuffer (which is faster):
    1. Set "mv I2C Interface Device Sub Address" to "0x10801" i.e. 0x1xxxx means 16Bit SubAddress; message: register 8 to 1 (resolution 0.25°C).
    2. Disable the "mv I2C Interface Bytes To Write" using "0" (4).
    3. Send messages by executing the function "int mvI2CInterfaceWrite( void )" by either clicking on the 3 dots next to the function name or by right-clicking on the command and then selecting "Execute" from the context menu (5).

You can now read the temperature the following way:

  1. Set "mv I2C Interface Device Sub Address" to "0x05".
  2. Set "mv I2C Interface Bytes To Read" to "0x02", i.e. two bytes.
  3. Send messages by executing the function "int mvI2CInterfaceRead( void )" by either clicking on the 3 dots next to the function name or by right-clicking on the command and then selecting "Execute" from the context menu (6).

Programming the I2C interface

#include <string>

#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

  // more code

  GenICam::mvI2cInterfaceControl iic( pDev );
  iic.mvI2cInterfaceEnable.write( bTrue );
  iic.mvI2cInterfaceSpeed.writeS( "kHz_400" );
  iic.mvI2cInterfaceDeviceAddress.write( 0x30 );

  // Set up the I2C communication (select the temperature resolution as described above)

  // Using the BinaryBuffer
  iic.mvI2cInterfaceDeviceSubAddress.write( 0x08 );
  std::string wrBuf( 1, '\x01' );                      // one byte of payload: value 1 (0.25 degC)
  iic.mvI2cInterfaceBinaryBuffer.writeBinary( wrBuf );
  iic.mvI2cInterfaceBytesToWrite.write( 1 );           // send one byte from the binary buffer
  iic.mvI2cInterfaceWrite.call();

  // Without the BinaryBuffer (faster, sub-address and payload combined)
  iic.mvI2cInterfaceDeviceSubAddress.write( 0x10800 );
  iic.mvI2cInterfaceBytesToWrite.write( 0 );
  iic.mvI2cInterfaceWrite.call();

  // Read the temperature (two bytes from register 0x05)

  iic.mvI2cInterfaceDeviceSubAddress.write( 0x05 );
  iic.mvI2cInterfaceBytesToRead.write( 2 );
  iic.mvI2cInterfaceRead.call();

  const std::string i2cReadBinaryData = iic.mvI2cInterfaceBinaryBuffer.readBinary();

  // more code
See also
GenICamI2cUsage.cs sample in the sample folder of the mvIMPACT Acquire SDK installation.



Working with several cameras simultaneously

There are several use cases concerning multiple cameras:



Creating synchronized acquisitions using timers

Basics

Getting images from several cameras exactly at the same time is a major task in

  • 3D image acquisitions
    (the images must be acquired at the same time using two cameras) or
  • acquisitions of larger objects
    (if more than one camera is required to span over the complete image, like in the textile and printing industry).

To solve this task, the mvBlueFOX3 offers timers that can be used to generate a pulse at regular intervals. This pulse can be routed to a digital output, and the digital output can be connected to the digital input of one or more cameras to be used as a trigger.

Connecting the hardware

One camera is used as master (M), which generates the trigger signal. The other ones receive the trigger signal and act as slaves (S).

Connecting the cameras

The connection of the mvBlueFOX3 cameras should be like this:

Figure 1: Master - Slave connecting
Symbol   Comment                           Min    Typ    Max    Unit
Uext.    External power (input voltage)    3.3           30     V
Rout     Resistor digital output                  2             kOhm

Programming the acquisition

You will need two timers and you have to set a trigger.

Start timer

Two timers are used for the "start timer". Timer1 defines the interval between two triggers. Timer2 generates the trigger pulse at the end of Timer1.

The following sample shows a trigger

  • which is generated every second and
  • whose pulse width is 10 ms:
#include <mvIMPACT_CPP/mvIMPACT_acquire.h>
#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

... 

// Master: Set timers to trigger the image acquisition: start after the queue is filled
    GenICam::CounterAndTimerControl catcMaster(pDev);
    catcMaster.timerSelector.writeS( "Timer1" );
    catcMaster.timerDelay.write( 0. );
    catcMaster.timerDuration.write( 1000000. );
    catcMaster.timerTriggerSource.writeS( "Timer1End" );

    catcMaster.timerSelector.writeS( "Timer2" );
    catcMaster.timerDelay.write( 0. );
    catcMaster.timerDuration.write( 10000. );
    catcMaster.timerTriggerSource.writeS( "Timer1End" );
See also
Counter And Timer Control
Note
Make sure that the Timer1 interval is larger than the processing time; otherwise, images will be lost.

The timers are defined; now you have to do the following steps:

  1. Set the digital output, e.g. "Line0",
  2. connect the digital output with the inputs of the slave cameras, and finally
  3. set the trigger source to the digital input, e.g. "Line4".

Set digital I/O

In this step, the signal has to be connected to the digital output, e.g. "Line0":

// Set Digital I/O
    GenICam::DigitalIOControl io(pDev);
    io.lineSelector.writeS( "Line0" );
    io.lineSource.writeS( "Timer2Active" );
See also
Digital I/O Control

This signal has to be connected with the digital inputs of the slave cameras as shown in Figure 1 and 2.

Set trigger

"If you want to use Master - Slave":

// Set Trigger of Master camera
    GenICam::AcquisitionControl ac(pDev);
    ac.triggerSelector.writeS( "FrameStart" );
    ac.triggerMode.writeS( "On" );
    ac.triggerSource.writeS( "Timer1Start" );
// or ac.triggerSource.writeS( "Timer1End" );

// Set Trigger of Slave camera (run for each slave device; here pDev refers to the slave.
// A separate variable name is used so that both snippets can be placed in the same scope.)
    GenICam::AcquisitionControl acSlave(pDev);
    acSlave.triggerSelector.writeS( "FrameStart" );
    acSlave.triggerMode.writeS( "On" );
    acSlave.triggerSource.writeS( "Line4" );
    acSlave.triggerActivation.writeS( "RisingEdge" );
See also
Acquisition Control

Now, the two timers will work as the following figure illustrates, which means

  • Timer1 defines the trigger event and
  • Timer2 defines the trigger pulse width:
Figure 2: Timers

By the way, this is a simple "pulse width modulation (PWM)" example.

Setting the synchronized acquisition using wxPropView

The following figures show how you can set the timers and the trigger using the GUI tool wxPropView:

  1. Setting of Timer1 (blue box) on the master camera:

    Figure 3: wxPropView - Setting of Timer1 on the master camera

  2. Setting of Timer2 (purple box) on the master camera:

    Figure 4: wxPropView - Setting of Timer2 on the master camera

  3. Setting the trigger of the slave camera(s)
    - The red box in Figure 5 is showing "Master - Slave", which means that the master is triggered internally and the slave camera is set as shown in Figure 4.
  4. Assigning timer to DigOut (orange box in Figure 3).
Figure 5: Trigger setting of the master camera using "Master - Slave"