MATRIX VISION - mvBlueCOUGAR-X/-XD Technical Documentation
Use cases

Table of Contents

GenICam to mvIMPACT Acquire code generator

Using the code generator

Any GenICam™ compliant device for which a GenICam™ GenTL compliant capture driver exists in mvIMPACT Acquire can be used through the mvIMPACT Acquire interface. However, which features a device supports cannot be known until the device has been initialised and its GenICam™ XML file has been processed, so it is not possible to provide a complete, static C++ wrapper for every device.

Therefore an interface code generator has been embedded into each driver capable of handling arbitrary devices. This code generator can be used to create a convenient C++ interface file that allows access to every feature offered by a device.

Warning
A code generated interface can result in incompatibility issues, because the interface name will be constructed from the version and product information that comes with the GenICam XML file (see comment in code sample below). To avoid incompatibility, please use the common interface from the namespace mvIMPACT::acquire::GenICam whenever possible.

To access the features needed to generate a C++ wrapper interface, a device must be initialized. Code can only be generated for the interface layout that was selected when the device was opened. If wrapper files shall be created for more than one interface layout, the steps described below must be repeated for each interface layout.

Once the device has been opened the code generator can be accessed by navigating to "System Settings -> CodeGeneration".

Figure 1: wxPropView - Code Generation section


To generate code, first select an appropriate file name. To prevent file name clashes, keep the following hints in mind when choosing a file name:

  • If several devices from different families or different vendors shall later be used in one application, each device or device family will need its own header file. Either organize the files in different subfolders or give them unique names.
  • If a device shall be used with different interface layouts, a separate header file must be generated for each layout.

If only a single device family is involved but two interface layouts will be used later, a suitable file name might be "mvIMPACT_acquire_GenICam_Wrapper_DeviceSpecific.h".

For a more complex application involving several device families that all use the GenICam interface layout, something like this might make sense:

  • mvIMPACT_acquire_GenICam_Wrapper_MyDeviceA.h
  • mvIMPACT_acquire_GenICam_Wrapper_MyDeviceB.h
  • mvIMPACT_acquire_GenICam_Wrapper_MyDeviceC.h
  • ...

Once a file name has been selected the code generator can be invoked by executing the "int GenerateCode()" method:

Figure 2: wxPropView - GenerateCode() method


The result of the code generator run will be written into the "LastResult" property afterwards:

Figure 3: wxPropView - LastResult property


Using the result of the code generator in an application

Each header file generated by the code generator includes "mvIMPACT_CPP/mvIMPACT_acquire.h". When an application is compiled with automatically generated files, these header files must therefore have access to this file. This can easily be achieved by setting up the build environment / Makefile appropriately.

To avoid problems with multiple inclusion, each file uses an include guard built from the file name.

Within each header file the generated data types will reside in a sub-namespace of "mvIMPACT::acquire" in order to avoid name clashes when working with several generated files in the same application. The namespace is generated automatically from the ModelName tag and the file version tags in the device's GenICam XML file as well as the interface layout. For a device with a ModelName tag mvBlueIntelligentDevice and a file version of 1.1.0 something like this will be created:

namespace mvIMPACT {
   namespace acquire {
      namespace DeviceSpecific { // the name of the interface layout used during the process of code creation
         namespace MATRIX_VISION_mvBlueIntelligentDevice_1 { // this name will be constructed from the version and product
                                                             // information that comes with the GenICam XML file. As defined
                                                              // by the GenICam standard, different major versions of a device's
                                                              // XML file may not be compatible, thus different interface files should be created here


// all code will reside in this inner namespace

         } // namespace MATRIX_VISION_mvBlueIntelligentDevice_1
      } // namespace DeviceSpecific
   } // namespace acquire
} // namespace mvIMPACT

In the application the generated header files can be used like normal include files:

#include <string>
#include <mvIMPACT_CPP/mvIMPACT_acquire.h>
#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam_Wrapper_DeviceSpecific.h>

To access data types from the header files, the namespaces must of course be taken into account. When only a single automatically created interface is used, the easiest approach is probably an appropriate using statement:

using namespace mvIMPACT::acquire::DeviceSpecific::MATRIX_VISION_mvBlueIntelligentDevice_1;

If several files created from different devices shall be used and these devices define similar features in slightly different ways, this might result in name clashes and/or unexpected behaviour. In that case the namespaces should be specified explicitly when creating instances of the data types in the application:

//-----------------------------------------------------------------------------
void fn( Device* pDev )
//-----------------------------------------------------------------------------
{
  mvIMPACT::acquire::DeviceSpecific::MATRIX_VISION_mvBlueIntelligentDevice_1::DeviceControl dc( pDev );
  if( dc.timestampLatch.isValid() )
  {
    dc.timestampLatch.call();
  }
}

When working with a using statement the same code can be written like this as well:

//-----------------------------------------------------------------------------
void fn( Device* pDev )
//-----------------------------------------------------------------------------
{
  DeviceControl dc( pDev );
  if( dc.timestampLatch.isValid() )
  {
    dc.timestampLatch.call();
  }
}



Introducing acquisition / recording possibilities

There are several use cases concerning the acquisition / recording possibilities of the camera:



Acquiring a number of images

As described in the chapter Acquisition Control, if you want to acquire a fixed number of images, set "Setting -> Base -> Camera -> Acquisition Control -> Acquisition Mode" to "MultiFrame" and set the "Acquisition Frame Count".

Afterwards, if you start the acquisition via the "Acquire" button, the camera will deliver the specified number of images.

The "MultiFrame" functionality can be combined with an external signal to start the acquisition.

There are several ways and combinations possible, e.g.:

  • A trigger starts the acquisition (Figure 1).
  • A trigger starts the acquisition start event and a second trigger starts the grabbing itself (Figure 2).
Figure 1: Starting an acquisition after one trigger event

For this scenario, you have to use the "Setting -> Base -> Camera -> Acquisition Control -> Trigger Selector" "AcquisitionStart".

The following figure shows how to configure the scenario shown in Figure 1 with wxPropView:

Figure 2: wxPropView - Setting acquisition of a number of images started by an external signal

A rising edge at line 4 will start the acquisition of 20 images.
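A minimal C++ sketch of this configuration using the GenICam interface layout could look like this (property names follow the mvIMPACT::acquire::GenICam wrapper; feature availability depends on the device):

#include <mvIMPACT_CPP/mvIMPACT_acquire.h>
#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

// assuming 'using namespace mvIMPACT::acquire;' and an initialised Device* pDev
GenICam::AcquisitionControl ac( pDev );
ac.acquisitionMode.writeS( "MultiFrame" );       // acquire a fixed number of images per acquisition
ac.acquisitionFrameCount.write( 20 );            // 20 images
ac.triggerSelector.writeS( "AcquisitionStart" ); // the external signal starts the whole acquisition
ac.triggerMode.writeS( "On" );
ac.triggerSource.writeS( "Line4" );
ac.triggerActivation.writeS( "RisingEdge" );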



Recording sequences in the camera

Introduction

Besides the mvPretrigger, there are two further mv Acquisition Memory Modes available:

  1. mvRecord
    which uses frame rate and trigger settings.
  2. mvPlayback
    which disregards the frame rate and trigger settings and outputs the images as fast as possible (throttling by bandwidth setting).

To use mvRecord, just

  1. set mv Acquisition Memory Mode to mvRecord.
  2. Start the camera with "Use" and "Acquire".

The camera will record mv Acquisition Memory Max Frame Count images into the memory.

To use mvPlayback, just

  1. set mv Acquisition Memory Mode to mvPlayback.
  2. Start the camera with "Use" and "Acquire".

The camera will play back (transfer) mv Acquisition Memory Max Frame Count images into the PC's memory.

Note
mv Acquisition Memory Max Frame Count can be increased by reducing the image height.
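For reference, a minimal C++ sketch using the GenICam interface layout (the mv-specific property names mvAcquisitionMemoryMode and mvAcquisitionMemoryMaxFrameCount are assumed to match the wxPropView names above; availability depends on device and firmware):

// assuming 'using namespace mvIMPACT::acquire;' and an initialised Device* pDev
GenICam::AcquisitionControl ac( pDev );
ac.mvAcquisitionMemoryMode.writeS( "mvRecord" ); // or "mvPlayback"
// the number of frames the camera can hold internally
std::cout << "Max. frame count: " << ac.mvAcquisitionMemoryMaxFrameCount.readS() << std::endl;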



Recording sequences with pre-trigger

What is pre-trigger?

With pre-trigger it is possible to record frames before and after a trigger event.

How it works

To use this functionality, set the mv Acquisition Memory Mode to "mvPretrigger". You can then define the number of frames which will be recorded before the trigger event occurs:

Figure 1: wxPropView - setting pre-trigger

Afterwards, you have to define the trigger event for "AcquisitionStart" or "AcquisitionActive". In figure 1 this is a trigger on Line4, which starts the regular camera streaming.

Now, start the camera by pressing "Live" and generate the acquisition event.

The camera will output the pre-trigger frames as fast as possible, followed by the live frames as fast as possible until the frame rate is in sync:

Figure 2: wxPropView - recorded / output images
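A possible C++ configuration sketch (the frame count property is assumed here to be called mvPretriggerFrameCount; please check the device's feature list for the exact name):

// assuming 'using namespace mvIMPACT::acquire;' and an initialised Device* pDev
GenICam::AcquisitionControl ac( pDev );
ac.mvAcquisitionMemoryMode.writeS( "mvPretrigger" );
ac.mvPretriggerFrameCount.write( 10 );            // frames to keep from before the trigger event (assumed name)
ac.triggerSelector.writeS( "AcquisitionActive" ); // or "AcquisitionStart"
ac.triggerMode.writeS( "On" );
ac.triggerSource.writeS( "Line4" );               // the event that starts the regular streaming
ac.triggerActivation.writeS( "RisingEdge" );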



Creating acquisition sequences (Sequencer Control)

Introduction

As mentioned in GenICam and Advanced Features section of this manual, the Sequencer Mode is a feature to define feature sets with specific settings. The sets are activated by a user-defined trigger source and event.

Note
At the moment, the Sequencer Mode is only available for MATRIX VISION cameras with CCD sensors and Sony's CMOS sensors. Please consult the "Device Feature and Property List"s for a summary of the features actually supported by each sensor.

The following features can currently be used inside the sequencer control:

  • BinningHorizontal
  • BinningVertical
  • CounterDuration (can be used to configure a certain set of sequencer parameters to be applied for the next CounterDuration frames)
  • DecimationHorizontal
  • DecimationVertical
  • ExposureTime
  • Gain
  • Height
  • OffsetX
  • OffsetY
  • Width
  • UserOutputValueAll
  • UserOutputValueAllMask
  • Multiple conditional sequencer paths.

The Sequencer Control uses Counter1 of the Counter And Timer Control to work. If you have already configured Counter1 and then save a new acquisition sequence, the settings of Counter1 will be overwritten.

Note
Configured sequencer programs are stored as part of the User Sets like any other feature.

Creating a sequence using the Sequencer Control in wxPropView

In this sample, we define an acquisition sequence with five different exposure times on the device, whereby the last step is repeated five times. We also assign the digital outputs 0..3 to the sets accordingly - these can be used as flash signals, for example. All the configuration is done on the device itself, so after finishing the configuration and starting the acquisition the device will apply the parameter changes when necessary. The host application then only needs to acquire the images. This results in a much higher overall frame rate compared to applying these changes on a frame-by-frame basis from the host application. The exposure times for the five sets are:

  • 1000 us
  • 5000 us
  • 10000 us
  • 20000 us
  • 50000 us (5x)

This will result in the following flow diagram:

Figure 1: Working diagram of the sample

As a consequence the following exposure times will be used to expose images inside an endless loop once the sequencer has been started:

  • Frame 1: 1000 us
  • Frame 2: 5000 us
  • Frame 3: 10000 us
  • Frame 4: 20000 us
  • Frame 5: 50000 us
  • Frame 6: 50000 us
  • Frame 7: 50000 us
  • Frame 8: 50000 us
  • Frame 9: 50000 us
  • Frame 10: 1000 us
  • Frame 11: 5000 us
  • Frame 12: 10000 us
  • ...

So the actual sequence that will be executed on the device later on will look like this:

while( sequencerMode == On )
{
  take 1 image using set 0
  take 1 image using set 1
  take 1 image using set 2
  take 1 image using set 3
  take 5 images using set 4
}
  • There are two C++ examples called GenICamSequencerUsage and GenICamSequencerParameterChangeAtRuntime that show how to control the sequencer from an application. They can be found in the Examples section of the C++ interface documentation.

The following steps are needed to configure the device as desired:

Note
This is the SFNC way of creating an acquisition sequence and consequently the way you have to program it. However, wxPropView offers a wizard to define an acquisition sequence in a much easier way.
  1. First, switch into the "Configuration Mode": "Sequencer Configuration Mode" = "On". Only then can the sequencer on a device be configured.

    Figure 2: wxPropView - Sequencer Configuration Mode = On


  2. Set the "Sequencer Feature Selector", if it is not already active (pink box, figure 3): "Sequencer Feature Selector" = "ExposureTime" and "Sequencer Feature Enable" = "1".
  3. Set the "Sequencer Feature Selector" for the duration counter (pink box, figure 4): "Sequencer Feature Selector" = "CounterDuration" and "Sequencer Feature Enable" = "1".
  4. Then, each sequencer set must be selected by the "Sequencer Set Selector" (orange box, figure 3): "Sequencer Set Selector" = "0".

    Figure 3: wxPropView - Sequencer set 0


  5. Set the following sequencer set using "Sequencer Set Next" (brown box, figure 3): "Sequencer Set Next" = "1".
  6. Set the "Exposure Time" (red box, figure 3): "Exposure Time" = "1000".
  7. Finally, save the "Sequencer Set" (green box, figure 3): "int SequencerSetSave()". This ends the configuration of this sequencer set and all the relevant parameters have been stored inside the device's RAM.
  8. Set the "UserOutputValueAllMask" (purple box, figure 4) to a suitable value. In this case we want to use all UserOutputs, so we set it to "0xf".
  9. Set the "UserOutputValueAll" (red box, figure 4): "UserOutputValueAll" = "0x1".

    Figure 4: wxPropView - DigitalIOControl - Set UserOutputValueAll of Line3


  10. Repeat these steps for the following 3 sequencer sets (Exposure Times 5000, 10000, 20000; UserOutputValueAll 0x2, 0x4, 0x8).
  11. For the last sequencer set, set the desired sequencer set with "Sequencer Set Selector" (orange box, figure 5): "Sequencer Set Selector" = "4".
  12. Set the following sequencer set with "Sequencer Set Next" and trigger source with "Sequencer Trigger Source" (brown box, figure 5):
    "Sequencer Set Next" = "0". This will close the loop of sequencer sets by jumping from here back to the first one.
    "Sequencer Trigger Source" = "Counter1End".
  13. Set the "Exposure Time" (red box, figure 5): "Exposure Time" = "50000".

    Figure 5: wxPropView - Sequencer set 4


  14. Set the "Counter Duration" in "Counter And Timer Control" (red box, figure 6): "Counter Duration" = "5".

    Figure 6: wxPropView - "Sequencer Mode" = "On"


  15. As there are only four UserOutputs we decided not to show sequencer set "4" on the output lines.
  16. Finally, save the "Sequencer Set" (green box, figure 3): "int SequencerSetSave()".
  17. Leave the "Configuration Mode" (red box, figure 4): "Sequencer Configuration Mode" = "Off".
  18. Activate the "Sequencer Mode" (red box, figure 4): "Sequencer Mode" = "On".

    Figure 7: wxPropView - "Sequencer Mode" = "On"


Note
The "Sequencer Mode" will overwrite the current device settings.

You will now see that the sequencer sets are processed endlessly. Via the chunk data (activate chunk data via "Setting -> Base -> Camera -> GenICam -> Chunk Data Control" - activate "Chunk Mode Active"), the "Info Plot" of the analysis grid can be used to visualize the exposure times:

Figure 7: wxPropView - Info Plot shows the exposure times
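The same configuration can also be performed from an application. The following condensed C++ sketch outlines only the first and the last sequencer set (sets 1 to 3 are configured analogously, and the digital output configuration is omitted); it follows the SequencerControl pattern used in the path example further below and in the GenICamSequencerUsage example:

// assuming 'using namespace mvIMPACT::acquire;' and an initialised Device* pDev
GenICam::SequencerControl sc( pDev );
GenICam::AcquisitionControl ac( pDev );
GenICam::CounterAndTimerControl ctc( pDev );

sc.sequencerMode.writeS( "Off" );
sc.sequencerConfigurationMode.writeS( "On" );
sc.sequencerFeatureSelector.writeS( "ExposureTime" );
sc.sequencerFeatureEnable.write( bTrue );
sc.sequencerFeatureSelector.writeS( "CounterDuration" );
sc.sequencerFeatureEnable.write( bTrue );
sc.sequencerSetStart.write( 0LL );

// set 0: 1000 us, then continue with set 1
sc.sequencerSetSelector.write( 0LL );
ac.exposureTime.write( 1000 );
sc.sequencerSetNext.write( 1LL );
sc.sequencerSetSave.call();

// ... sets 1 to 3 (5000, 10000, 20000 us) analogously ...

// set 4: 50000 us, repeated 5 times via Counter1, then back to set 0
sc.sequencerSetSelector.write( 4LL );
ac.exposureTime.write( 50000 );
ctc.counterSelector.writeS( "Counter1" );
ctc.counterDuration.write( 5 );
sc.sequencerTriggerSource.writeS( "Counter1End" );
sc.sequencerSetNext.write( 0LL );
sc.sequencerSetSave.call();

sc.sequencerConfigurationMode.writeS( "Off" );
sc.sequencerMode.writeS( "On" );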

Adapting the active time on the output lines via logic gates

If you do not want to see the whole active time of a given sequencer set on the output line but only its exposure time, you can combine your signal with the logic gates in mvLogicGateControl. Figure 8 shows sample settings for Line3:

Figure 8: wxPropView - mvLogicGateControl

This produces the following output on the output lines:

Figure 9: UserOutputs via mvLogicGateControl on output lines

These signals could be used as flash signals for separate sequencer sets.

You can program this as follows:

#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

GenICam::DigitalIOControl dio( pDev );
dio.userOutputValueAllMask.write( 0xF );
dio.userOutputValueAll.write( 0x1 ); // 0x2, 0x4, 0x8, 0x0

GenICam::mvLogicGateControl mvlgc( pDev );
mvlgc.mvLogicGateANDSelector.writeS("mvLogicGateAND1");
mvlgc.mvLogicGateANDSource1.writeS("UserOutput0");
mvlgc.mvLogicGateANDSource2.writeS("ExposureActive");
mvlgc.mvLogicGateORSelector.writeS("mvLogicGateOR1");
mvlgc.mvLogicGateORSource1.writeS("mvLogicGateAND1Output");
mvlgc.mvLogicGateORSource2.writeS("Off");
mvlgc.mvLogicGateORSource3.writeS("Off");
mvlgc.mvLogicGateORSource4.writeS("Off");

dio.lineSelector.writeS("Line0");
dio.lineSource.writeS("mvLogicGateOR1Output");

// To output the UserOutputs directly on the output lines would be like this:
// dio.lineSource.writeS("UserOutput0");

Using the Sequencer Control wizard

Since
mvIMPACT Acquire 2.18.0

wxPropView offers a wizard for the Sequencer Control usage:

Figure 10: wxPropView - Wizard button

The wizard can be used to get a convenient overview of the sequencer set settings and to create and configure sequencer sets in a much easier way:

Figure 11: wxPropView - Sequencer Control wizard

Just

  • select the desired properties,
  • select the desired Set tabs,
  • set the properties in the table directly, and
  • "Auto-Assign Displays To Sets", if you like to show each sequence set in a different display.

Do not forget to save the settings at the end.

Working with sequence paths

Since
mvIMPACT Acquire 2.28.0

It is possible to define sets with a maximum of two active paths. The following diagram shows that two paths are defined in "Set 0". "Path 0" ("SequencerPathSelector = 0") is the "standard" path, which in this configuration loops, and "Path 1" ("SequencerPathSelector = 1") will jump to "Set 1" once it is activated via a "RisingEdge" ("SequencerTriggerActivation = RisingEdge") signal at "UserOutput0" ("SequencerTriggerSource = UserOutput0"). "UserOutput0" can be connected, for example, to one of the digital input lines of the camera:

Figure 12: Working diagram of a sample with sequence paths

There are some specifications concerning the sequencer path feature:

  • A path is inactive as soon as the "SequencerTriggerSource" is Off.
  • If none of the paths is triggered or both paths are inactive, the sequencer will remain in the current set.
  • If both paths were triggered, the path whose trigger happened first will be followed.
  • As the parameters of the next sequencer set (like ExposureTime) need to be prepared beforehand, the set sequence might not seem straightforward at first glance. The sequencer always needs one frame to switch to the new set; this frame will still use the set that was already prepared.

Programming a sequence with paths using the Sequencer Control

First the sequencer has to be configured.

  #include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

  GenICam::SequencerControl sc( pDev );
  GenICam::AcquisitionControl ac( pDev );
  TDMR_ERROR result = DMR_NO_ERROR;

  // general sequencer settings
  sc.sequencerMode.writeS( "Off" );
  sc.sequencerConfigurationMode.writeS( "On" );
  sc.sequencerFeatureSelector.writeS( "ExposureTime" );
  sc.sequencerFeatureEnable.write( bTrue );
  sc.sequencerFeatureSelector.writeS( "CounterDuration" );
  sc.sequencerFeatureEnable.write( bFalse );
  sc.sequencerFeatureSelector.writeS( "Gain" );
  sc.sequencerFeatureEnable.write( bFalse );
  sc.sequencerSetStart.write( 0 );

  // set0
  sc.sequencerSetSelector.write( 0LL );
  ac.exposureTime.write( 1000 );
  // set0 path0
  sc.sequencerPathSelector.write( 0LL );
  sc.sequencerTriggerSource.writeS( "ExposureEnd" );
  sc.sequencerSetNext.write( 0LL );
  // set0 path1
  sc.sequencerPathSelector.write( 1LL );
  sc.sequencerTriggerSource.writeS( "UserOutput0" );
  sc.sequencerTriggerActivation.writeS( "RisingEdge" );
  sc.sequencerSetNext.write( 1LL );
  // save set
  if( ( result = static_cast<TDMR_ERROR>( sc.sequencerSetSave.call() ) ) != DMR_NO_ERROR )
  {
    std::cout << "An error was returned while calling function '" << sc.sequencerSetSave.displayName() << "' on device " << pDev->serial.read()
              << "(" << pDev->product.read() << "): " << ImpactAcquireException::getErrorCodeAsString( result ) << endl;
  }

  // set1
  sc.sequencerSetSelector.write( 1LL );
  ac.exposureTime.write( 5000 );
  // set1 path0
  sc.sequencerPathSelector.write( 0LL );
  sc.sequencerTriggerSource.writeS( "ExposureEnd" );
  sc.sequencerSetNext.write( 0LL );
  // set1 path1
  sc.sequencerPathSelector.write( 1LL );
  sc.sequencerTriggerSource.writeS( "Off" );
  // save set
  if( ( result = static_cast<TDMR_ERROR>( sc.sequencerSetSave.call() ) ) != DMR_NO_ERROR )
  {
    std::cout << "An error was returned while calling function '" << sc.sequencerSetSave.displayName() << "' on device " << pDev->serial.read()
              << "(" << pDev->product.read() << "): " << ImpactAcquireException::getErrorCodeAsString( result ) << endl;
  }

  // final general sequencer settings
  sc.sequencerConfigurationMode.writeS( "Off" );
  sc.sequencerMode.writeS( "On" );

Then it can later be triggered during runtime.

  GenICam::DigitalIOControl dic( pDev );
  dic.userOutputSelector.write( 0 );
  dic.userOutputValue.write( bTrue );
  dic.userOutputValue.write( bFalse );

This will set an internal event that will cause the sequencer to use set0-path1 at the next possible time, i.e. the next time we are in set0.

There is a C++ example called GenICamSequencerUsageWithPaths that shows how to control the sequencer with paths from an application. It can be found in the Examples section of the C++ interface documentation.



Generating very long exposure times

Basics

At the moment the exposure time is limited to a maximum of between 1 and 20 seconds, depending on certain internal sensor register restrictions, so each device might report a different maximum exposure time.

Since
mvIMPACT Acquire 2.28.0

Firmware version 2.28 contained a major overhaul here, so updating to at least this version can result in a much higher maximum exposure time. However, current sensor controllers can be configured to use even longer exposure times if needed by using one of the device's timers to create an external exposure signal that is fed back into the sensor. This use case explains how this can be done.

This approach of setting up long exposure times requires the sensor of the camera to allow the configuration of an external signal to define the length of the exposure time, so only devices offering the ExposureMode TriggerWidth can be used for this setup.

Note
The maximum exposure time in microseconds that can be achieved in this configuration is the maximum value offered by the timer used.

With GenICam compliant devices that support all the needed features the setup is roughly like this:

  1. Select "Setting -> Base -> Camera -> GenICam -> Counter And Timer Control -> Timer Selector -> Timer 1" and .
  2. set "Timer Trigger Source" = "UserOutput0".
  3. set "Timer Trigger Activation" = "RisingEdge".
    I.e. a rising edge on UserOutput0 will start Timer1.
  4. Then set the "Timer Duration" property to the desired exposure time in us.
  5. In "Setting -> Base -> Camera -> GenICam -> Acquisition Control" set the Trigger Selector = "FrameStart".
    I.e. the acquisition of one frame will start when
  6. Timer1 is active: "Trigger Source" = "Timer1Active".
  7. Exposure time will be the trigger width: "Exposure Mode" = "TriggerWidth".

The following diagram illustrates all the signals involved in this configuration:

Figure 1: Long exposure times using GenICam

To start the acquisition of one frame a rising edge must be detected on UserOutput0 in this example but other configurations are possible as well.

Setting up the device

The easiest way to define a long exposure time is to use a single timer. The length of the timer's active signal is then used as the trigger signal and the sensor is configured to expose while the trigger signal is active. This allows defining the exposure time with microsecond precision up to the maximum value of the timer register. With a 32 bit timer register this results in a maximum exposure time of roughly 4295 seconds (roughly 71.5 minutes). When writing code, e.g. in C#, this could look like this:

private static void configureDevice(Device pDev, int exposureSec, GenICam.DigitalIOControl ioc)
{
  try
  {
    // establish access to the CounterAndTimerControl interface
    GenICam.CounterAndTimerControl ctc = new mv.impact.acquire.GenICam.CounterAndTimerControl(pDev);
    // set TimerSelector to Timer1 and TimerTriggerSource to UserOutput0
    ctc.timerSelector.writeS("Timer1");
    ctc.timerTriggerSource.writeS("UserOutput0");
    ctc.timerTriggerActivation.writeS("RisingEdge");

    // Set timer duration for Timer1 to value from user input
    ctc.timerDuration.write(exposureSec * 1000000);

    // set userOutputSelector to UserOutput0 and set UserOutput0 to inactive. We will later generate a pulse here to initiate the exposure
    ioc.userOutputSelector.writeS("UserOutput0");
    ioc.userOutputValue.write(TBoolean.bFalse);

    // establish access to the AcquisitionControl interface
    GenICam.AcquisitionControl ac = new mv.impact.acquire.GenICam.AcquisitionControl(pDev);
    // set TriggerSelector to FrameStart and try to set ExposureMode to TriggerWidth
    ac.triggerSelector.writeS("FrameStart");
    // set TriggerSource for FrameStart to Timer1Active and activate TriggerMode
    ac.triggerSource.writeS("Timer1Active");
    ac.triggerMode.writeS("On");

    // expose as long as we have a high level from Timer1
    ac.exposureMode.writeS("TriggerWidth");
  }
  catch (Exception e)
  {
    Console.WriteLine("ERROR: Selected device does not support all features needed for this long time exposure approach: {0}, terminating...", e.Message);
    System.Environment.Exit(1);
  }
}
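To actually start an exposure at runtime, a rising edge must then be generated on UserOutput0. A minimal sketch of this step (shown here in C++ using the GenICam wrapper, analogous to the C# configuration above):

// assuming 'using namespace mvIMPACT::acquire;' and an initialised Device* pDev
GenICam::DigitalIOControl ioc( pDev );
ioc.userOutputSelector.writeS( "UserOutput0" );
ioc.userOutputValue.write( bTrue );  // rising edge starts Timer1 and therefore the exposure
ioc.userOutputValue.write( bFalse ); // reset the output for the next exposure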
Note
Make sure that you adjust ImageRequestTimeout_ms either to 0 (infinite, which is the default value) or to a reasonable value larger than the actual exposure time, in order not to end up with timeouts caused by the buffer timeout being smaller than the time needed for exposing, transferring and capturing the image:
ImageRequestTimeout_ms = 0 # or reasonable value
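In C++ the timeout could, for example, be adjusted via the BasicDeviceSettings class (a sketch, assuming an initialised Device* pDev):

// assuming 'using namespace mvIMPACT::acquire;'
BasicDeviceSettings bds( pDev );
bds.imageRequestTimeout_ms.write( 0 ); // 0 = wait indefinitely for each requested image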
See also
Counter And Timer Control
Digital I/O Control
Acquisition Control



Working with multiple AOIs (mv Multi Area Mode)

Since
mvIMPACT Acquire 2.18.3

Introduction

A special feature of the Pregius sensors (a.k.a. IMX) from Sony is the possibility to define multiple AOIs (Areas of Interest - a.k.a. ROIs - Regions of Interest) and to transfer them at the same time. Because many applications only need to check one or several specific parts of an image, this functionality can increase the frame rate.

Once activated, the "mv Multi Area Mode" allows you, depending on the sensor, to define up to eight AOIs (mvArea0 to mvArea7) in one image. There are several parameters in combination with the AOIs which are illustrated in the following figure:

Figure 1: Multiple AOIs principle

The "Resulting Offset X" and "Resulting Offset Y" indicates the starting point of the specific AOI in the output image. To complete the rectangular output image, the "missing" areas are filled up with the image data horizontally and vertically. We recommend to use the wizard as a starting point - the wizard provides a live preview of the final merged output image.

Using wxPropView

To create multiple AOIs with wxPropView, you have to do the following steps:

  1. Start wxPropView and
  2. connect to the camera.
  3. Then change in "Setting -> Base -> Camera -> GenICam -> Image Format Control" the
    "mv Multi Area Mode"
    to "mvMultiAreasCombined".
    Afterwards, "mv Area Selector" is available.
  4. Now, select the area a.k.a. AOI you want to create via "mv Area Selector", e.g. "mvArea3" and
  5. set the parameters "mv Area Width", "mv Area Height", "mv Area Offset X", and "mv Area Offset Y" to your needs.
  6. Activate the area a.k.a. AOI by checking the box of "mv Area Enable".

    Figure 2: wxPropView - Multiple AOIs

  7. Finally, start the acquisition by clicking the button "Acquire".

Using the Multi AOI wizard

Since
mvIMPACT Acquire 2.19.0

wxPropView offers a wizard for the Multi AOI usage:

Figure 3: wxPropView - Wizard menu

The wizard can be used to get a convenient overview of the AOI settings and to create and configure the AOIs in a much easier way:

Figure 4: wxPropView - Multi AOI wizard

Just

  • select the desired mvArea tabs,
  • set the properties like offset, width, and height in the table directly, and
  • confirm the changes at the end using the Ok button.

The live image shows the created AOIs and the merged or "missing" areas which are used to get the final rectangular output image.

Figure 4: wxPropView - Multi AOI wizard - Live image

Programming multiple AOIs

#include <mvIMPACT_CPP/mvIMPACT_acquire.h>
#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

... 
    GenICam::ImageFormatControl ifc( pDev );
    ifc.mvMultiAreaMode.writeS( "mvMultiAreasCombined" );
    // mvArea0: enabled, 256 x 152 pixels at offset (0, 0)
    ifc.mvAreaSelector.writeS( "mvArea0" );
    ifc.mvAreaEnable.write( bTrue );
    ifc.mvAreaOffsetX.write( 0 );
    ifc.mvAreaOffsetY.write( 0 );
    ifc.mvAreaWidth.write( 256 );
    ifc.mvAreaHeight.write( 152 );
    // mvArea1 and mvArea2: disabled
    ifc.mvAreaSelector.writeS( "mvArea1" );
    ifc.mvAreaEnable.write( bFalse );
    ifc.mvAreaSelector.writeS( "mvArea2" );
    ifc.mvAreaEnable.write( bFalse );
    // mvArea3: enabled, 256 x 152 pixels, offsets reset first, then moved to (1448, 912)
    ifc.mvAreaSelector.writeS( "mvArea3" );
    ifc.mvAreaEnable.write( bTrue );
    ifc.mvAreaOffsetX.write( 0 );
    ifc.mvAreaOffsetY.write( 0 );
    ifc.mvAreaWidth.write( 256 );
    ifc.mvAreaHeight.write( 152 );
    ifc.mvAreaOffsetX.write( 1448 );
    ifc.mvAreaOffsetY.write( 912 );
...



Working with burst mode buffer

If you want to acquire a number of images at the sensor's maximum frame rate while transferring them at a lower frame rate, you can use the internal memory of the mvBlueCOUGAR-X.

Figure 1: Principle of burst mode buffering of images
Note
The maximum buffer size can be found in "Setting -> Base -> Camera -> GenICam -> Acquisition Control -> mv Acquisition Memory Max Frame Count".

To create a burst mode buffering of images, please follow these steps:

  1. Set image acquisition parameter ("Setting -> Base -> Camera -> GenICam -> Acquisition Control -> mv Acquisition Frame Rate Limit Mode") to "mvDeviceMaxSensorThroughput".
  2. Finally, set the acquisition parameter "mv Acquisition Frame Rate Enable" to "Off".

    Figure 2: wxPropView - Setting the bandwidth using "mv Acquisition Frame Rate Limit Mode"

Alternatively, you can configure the burst mode via the desired input frame rate and the desired output bandwidth:

  1. Set image acquisition parameters to the desired input frames per second value ("Setting -> Base -> Camera -> GenICam -> Acquisition Control").

    Figure 3: wxPropView - Setting the "Acquisition Frame Rate"

  2. Set the bandwidth control to the desired output bandwidth: set "Setting -> Base -> Camera -> GenICam -> Device Control -> Device Link Selector -> Device Link Throughput Limit Mode" to "On" and
  3. set the desired "Device Link Throughput Limit" in bits per second.

    Figure 4: wxPropView - Setting the bandwidth using "Device Link Throughput Limit Mode"

Now, the camera will buffer the burst of images in its internal memory and read them out at the configured output frame rate.
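A C++ sketch of the second approach (DeviceControl property names as defined by the SFNC; a sketch rather than a complete program):

// assuming 'using namespace mvIMPACT::acquire;' and an initialised Device* pDev
GenICam::AcquisitionControl ac( pDev );
GenICam::DeviceControl dc( pDev );
ac.acquisitionFrameRate.write( 100.0 );            // desired input frame rate at the sensor
                                                   // (the acquisition frame rate must be enabled, see figure 3)
dc.deviceLinkThroughputLimitMode.writeS( "On" );   // limit the streamed bandwidth on the link
dc.deviceLinkThroughputLimit.write( 200000000LL ); // desired output bandwidth in the unit reported by the device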

See also
Limiting the bandwidth of the imaging device

Triggered frame burst mode

With the triggerSelector "FrameBurstStart", you can also start a frame burst acquisition by a trigger. A defined number of images ("AcquisitionBurstFrameCount") will be acquired directly one after the other. With the "mv Acquisition Frame Rate Limit Mode" set to mvDeviceMaxSensorThroughput, there will be hardly any gap between these images.

As shown in figure 5, "FrameBurstStart" can also be triggered by a software trigger.

Figure 5: wxPropView - Setting the frame burst mode triggered by software
Figure 6: Principle of FrameBurstStart
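A corresponding C++ sketch for a software-triggered frame burst (standard SFNC feature names; a sketch, not taken verbatim from an example):

// assuming 'using namespace mvIMPACT::acquire;' and an initialised Device* pDev
GenICam::AcquisitionControl ac( pDev );
ac.triggerSelector.writeS( "FrameBurstStart" );
ac.triggerMode.writeS( "On" );
ac.triggerSource.writeS( "Software" );
ac.acquisitionBurstFrameCount.write( 10 ); // images acquired per burst
// later, while the acquisition is running:
ac.triggerSoftware.call();                 // issues one burst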



Using the SmartFrameRecall feature

Since
mvIMPACT Acquire 2.18.0

Introduction

The SmartFrameRecall is a new, FPGA based smart feature which takes the data handling of industrial cameras to a new level.

So far, the entire amount of data had to be transferred to the host PC, whereby the packetizer in the camera split the data into packets and distributed them to the two Gigabit Ethernet lines. On the host PC, the data was merged again, reduced to an AOI, and this AOI was then processed (Figure 1).

Figure 1: Data handling so far

This procedure has several disadvantages:

  • Both data lines (in case of Dual GigE cameras for example) for each camera are required which
    • leads to high cabling efforts.
  • A high end PC is needed to process the data which
    • leads to high power consumptions.
  • In USB 3 multi-camera solutions (depending on the resolution) each camera requires a separate connection line to the host PC which
    • limits the cabling possibilities, the possible distances, and makes an installation more complex (without the possibilities to use hubs, for example).
  • Last but not least, the frame rates are limited by the available bandwidth.

The SmartFrameRecall is a new data handling approach which buffers the hi-res images in the camera and only transfers thumbnails. You or your software decides on the host PC which AOI should be sent to the host PC (Figure 2).

Figure 2: SmartFrameRecall working method

This approach allows

  • higher bandwidths,
  • less CPU load and power consumption,
  • higher frame rates, and
  • less complex cabling.
Figure 3: Connection advantages in combination with SmartFrameRecall

Implementing a SmartFrameRecall application

First of all, clarify if the SmartFrameRecall makes sense to be used in your application:

  • Taking a short look at the thumbnails, could you specify which frames are of no interest at all or which portions of a frame are sufficient for further processing?

If you can, your application can do so too, and then the SmartFrameRecall is the right approach for you. To use the SmartFrameRecall, follow these steps:

  • Activate the Chunk Data Control:
#include <mvIMPACT_CPP/mvIMPACT_acquire.h>
#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

... 
GenICam::ChunkDataControl cdc( pDev );
cdc.chunkModeActive.write( bTrue );
cdc.chunkSelector.writeS( "Image" );
cdc.chunkEnable.write( bTrue );
cdc.chunkSelector.writeS( "mvCustomIdentifier" );
cdc.chunkEnable.write( bTrue );
...

It is necessary to activate the chunk data so that your application can easily distinguish frames belonging to the normal stream from the ones requested by the host application.

  • Reduce the size of the streamed images. This can be done using Decimation, both horizontally and vertically. E.g. with decimation set to 16 in both directions, a normal image will only consume 1/(16*16), i.e. 1/256th, of the bandwidth:
...
GenICam::ImageFormatControl ifc( pDev );
ifc.decimationHorizontal.write( 16 );
ifc.decimationVertical.write( 16 );
...
  • Make sure that the resulting image width is a multiple of 8! If this is not the case the SmartFrameRecall feature cannot be activated.
  • Activate the SmartFrameRecall feature:
...

GenICam::AcquisitionControl ac( pDev );
ac.mvSmartFrameRecallEnable.write( bTrue );
...

This will configure the device's internal memory to store each frame (that gets transmitted to the host) in full resolution. These images can be requested by an application when needed. As soon as the memory is full, the oldest image will be removed from the memory whenever a new one becomes ready (FIFO).

  • Analyze the images of the reduced data stream.
  • If necessary, request the desired image in full resolution:
...
struct ThreadParameter
{
    Device* pDev;
    GenICam::CustomCommandGenerator ccg;
    ...
};
...
unsigned int DMR_CALL liveThread( void* pData )
{
    ThreadParameter* pThreadParameter = reinterpret_cast<ThreadParameter*>( pData );
    ...
    pThreadParameter->ccg.requestTransmission( pRequest, x, y, w, h, rtmFullResolution, cnt );
    ...
}
...

The last parameter of requestTransmission will be written into the chunk mvCustomIdentifier so that your application can recognize the request.

  • Finally, do your analysis/processing with the requested full-resolution image.

We provide



Using VLC Media Player

With the DirectShow interface, MATRIX VISION devices become (acquisition) video devices for the VLC Media Player.

Figure 1: VLC Media Player with a connected device via DirectShow

System requirements

It is necessary that the following drivers and programs are installed on the host device (laptop or PC):

  • Windows 7, 32 bit or 64 bit
  • up-to-date VLC Media Player, 32 bit or 64 bit (here: version 2.0.6)
  • up-to-date MATRIX VISION driver, 32 bit or 64 bit (here: version 2.5.6)
Attention
Using Windows 10: VLC Media Player versions 2.2.0 or older have been tested successfully. Newer versions do NOT work with mvIMPACT Acquire! There are some bug tickets in the VLC repository that might be related (at the time of writing none of these seem to have a fix).

Installing VLC Media Player

  1. Download a suitable version of the VLC Media Player from the VLC Media Player website mentioned below.
  2. Run the setup.
  3. Follow the installation process and use the default settings.

A restart of the system is not required.

See also
http://www.videolan.org/

Setting up MV device for DirectShow

Note
Please be sure to register the MV device for DirectShow with the right version of mvDeviceConfigure. I.e. if you have installed the 32 bit version of the VLC Media Player, you have to register the MV device with the 32 bit version of mvDeviceConfigure ("C:/Program Files/MATRIX VISION/mvIMPACT Acquire/bin")!
  1. Connect the MV device to the host device directly or via GigE switch using an Ethernet cable.
  2. Power the camera using a power supply at the power connector.
  3. Wait until the status LED turns blue.
  4. Open the tool mvDeviceConfigure,
  5. set a friendly name,
  6. and register the MV device for DirectShow.
Note
In some cases it could be necessary to repeat step 5.

Working with VLC Media Player

  1. Start VLC Media Player.
  2. Click on "Media -> Open Capture Device..." .

    Figure 2: Open Capture Device...

  3. Select the tab "Device Selection" .
  4. In the section "Video device name" , select the friendly name of the MV device:
    Figure 3: Video device name

  5. Finally, click on "Play" .
    After a short delay you will see the live image of the camera.



Using the linescan mode

Both CMOS sensors from e2v offer a line scan mode. One line (gray scale sensor) or two lines (color sensor) can be selected to be read out of the full sensor height of 1024 or 1200 lines. This line or these lines are grouped into a pseudo frame of selectable height in the internal buffer of the camera.

Complete instructions for using the line scan mode are provided here:

System requirements

  • mvBlueCOUGAR-X
    • "firmware version" at least "1.6.32.0"
  • mvBlueFOX3
    • "firmware version" at least "1.6.139.0"
  • mvBlueLYNX-X
    • free running line scan mode is available; the complete line scan functionality will be available in one of the next firmware updates

Initial situation and settings

Generally, line scan cameras are suitable for inspecting moving, continuous materials. So that the line scan camera acquires each line at the right time, an incremental encoder, for example at a conveyor belt, triggers the line scan camera. Normally, an incremental encoder does this using a specific ratio like 1:1, which means that there is a signal for every line. However, when adjusting a line trigger application or choosing a specific incremental encoder, you have to keep this ratio in mind.

Note
Using timers and counters it is possible to skip trigger signals.

In line scan mode, the camera combines the single lines into one image with a height of max. 1024 or 1200 lines (depending on the sensor used). The images are provided without gaps.

Note
While using the line scan mode with a gray scale sensor, one trigger signal will lead to one acquired image line. Using a color sensor, one trigger signal will lead to two acquired image lines.

Due to aberrations at the edges of the lens, you should set an offset in the Y direction ("Offset Y", see the following figure), generally around half of the sensor's height (i.e. the sensor's Y center). With Offset Y you can adjust the scan line in the direction of motion.

Figure 1: Sensor's view and settings with a sensor with max. height of 1024 pixels/lines (e.g. -x02e / -1013)

Scenarios

With regards to the external trigger signals provided by an incremental encoder, there are two possible scenarios:

  1. A conveyor belt runs continuously and so does the incremental encoder, or - as in a reverse vending machine -
  2. a single item is analyzed and the conveyor belt, and thus the incremental encoder, stops after the inspection and restarts for the next item.

In the first scenario you can use the standard settings of the MATRIX VISION devices. Please have a look at Sample 1: Triggered linescan acquisition with exposure time of 250 us, which shows how to set up the line scan mode with continuous materials and signals from the encoder. However, it is absolutely required that the external trigger is always present. During a trigger interruption, controlling or communicating with the camera is not possible.

In the second scenario, the external trigger stops. If there is a heartbeat functionality in the system (e.g. with GigE Vision), the system can come to a halt. Please have a look at Sample 2: Triggered line scan acquisition with a specified number of image blocks and pausing trigger signals, which shows how you can handle pausing trigger signals.

Sample 1: Triggered linescan acquisition with exposure time of 250 us

This sample will show you how to use the line scan mode of the sensors -x02e and -x02eGE using an external trigger provided by an incremental encoder which sends a "trigger for every line" (1:1).

Note
You can also use the sensor -x04e. However, the sensor is slower due to the higher number of pixels.

In this sample, we chose an exposure time of "250 us" and, to ease the calculations, an image height of "1000 px".

Note
To get suitable image results, it might be necessary to increase the gain or the illumination.

These settings result in a max. "frame rate" of "2.5 frames per second".

To adjust the opto-mechanics (focus, distance, illumination, etc.), you can use the area mode of the sensor. That's a main advantage of an area sensor with line scan mode compared to a line scan camera!

You will need the following pins of the mvBlueCOUGAR-X:

Pin | Signal (Standard version) | Description
1   | GND                       | Common ground
2   | 12V .. 24V                | Power supply
4   | Opto DigIn0 (Line4)       | Output signal A of the incremental encoder

Setting the application in wxPropView

Summary of our sample:

Property name                            | wxPropView Setting | GenICam Control      | Comment
Device Scan Type                         | line scan          | Device Control       |
Height (in pixels)                       | 1000               | Image Format Control |
Offset Y (in pixels)                     | 500                | Image Format Control |
Exposure Time (in microseconds)          | 250.000            | Acquisition Control  |
Trigger Mode                             | On                 | Acquisition Control  |
Trigger Source                           | Line4              | Acquisition Control  |
ImageRequestTimeout_ms (in milliseconds) | 0 ms               | -                    | This is necessary, otherwise there will be error counts and no frames will be created.


Figure 2: Settings in wxPropView

Sample 2: Triggered line scan acquisition with a specified number of image blocks and pausing trigger signals

This section will provide you with some information you have to keep in mind while working with pausing triggers and specified number of image blocks.

First of all, when using the mvBlueCOUGAR-X or mvBlueCOUGAR-XD it is necessary to disable the heartbeat of the GigE Vision control protocol (GVCP) ("Device Link Heartbeat Mode = On"), otherwise a paused trigger signal can be misinterpreted as a lost connection:

Figure 3: wxPropView - Disabling the heartbeat

Secondly, since the conveyor belt stops at some point, the trigger will do so, too. Make sure that the trigger signal is available until the last image block has been received.

Thirdly, if you know the number of image blocks, you can use the MultiFrame functionality (in "Setting -> Base -> Camera -> GenICam -> Acquisition Control" set "Acquisition Mode = MultiFrame" and the "Acquisition Frame Count"). This will acquire the specified number of image blocks and stop the acquisition afterwards.



Working with Event Control

The mvBlueCOUGAR-X camera generates Event notifications. An Event is a message that is sent to the host application to notify it of the occurrence of an internal event. With "Setting -> Base -> Camera -> GenICam -> Event Control" you can handle these notifications.

At the moment, it is possible to handle

  • Exposure End (= sensor's exposure end)
  • Line 4 (= DigIn0) Rising Edge
  • Line 5 (= DigIn1) Rising Edge
  • Frame End (= the camera is ready for a new trigger)

Setting Event notifications using wxPropView

To activate the notifications, just

  1. Select the Event via "Setting -> Base -> Camera -> GenICam -> Event Control -> Event Selector", e.g. ExposureEnd .
  2. Set the "Event Notification" to "On" .

Afterwards, it is possible to attach a custom callback that gets called whenever the property is modified. E.g. if you want to attach a callback to the Frame ID after the exposure was finished, you have to

  1. select "Setting -> Base -> Camera -> GenICam -> Event Control -> Event Exposure End Data -> Event Exposure End Frame ID",
  2. right-click on the property, and
  3. click on "Attach Callback".

    Figure 1: wxPropView - "Attach Callback" to Event Exposure End Frame ID

Now, you can track the property modifications in the output window:

Figure 2: wxPropView - Output window with the Event notifications

You can find a detailed Callback code example in the C++ API manual.
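From an application, the event can be enabled via the EventControl category and the corresponding data can then be read or monitored via a callback. A minimal C++ sketch (property names as shown in wxPropView above):

// assuming 'using namespace mvIMPACT::acquire;' and an initialised Device* pDev
GenICam::EventControl ec( pDev );
ec.eventSelector.writeS( "ExposureEnd" );
ec.eventNotification.writeS( "On" );
// the frame ID of the last ExposureEnd event can then be read at any time:
std::cout << "Last ExposureEnd frame ID: " << ec.eventExposureEndFrameID.readS() << std::endl;

To be notified automatically, a callback can be attached to eventExposureEndFrameID using the ComponentCallback helper class of the C++ API; see the Callback example mentioned above for the complete pattern.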



Improving the acquisition / image quality

There are several use cases concerning the acquisition / image quality of the camera:



Correcting image errors of a sensor

There are different reasons why image sensors have image errors, e.g. random process deviations or technical limitations of the sensors. MATRIX VISION provides several procedures to correct these errors; by default these are host-based calculations.

However, the mvBlueCOUGAR-X, for example, also supports a camera-based Flat-Field Correction, which can save tens of percent of CPU load and lowers latency.

The provided image correction procedures are

  1. Defective Pixels Correction,
  2. Dark Current Correction, and
  3. Flat-Field Correction.
Note
If you execute all correction procedures, you have to keep this order. All gray value settings of the corrections below assume an 8-bit image.
Figure 1: Host-based image corrections

The path "Setting -> Base -> ImageProcessing -> ..." indicates that these corrections are host-based corrections.

Before starting consider the following hints:

  • To correct the complete image, you have to make sure no user defined AOI has been selected: right-click "Restore Default" on the device's AOI parameters Width and Height, or use "Setting -> Base -> Camera -> GenICam -> Image Format Control" when working with the GenICam interface layout.
  • You have several options to save the correction data. The chapter Storing and restoring settings describes the different ways.
See also
There is a white paper about image error corrections with extended information available on our website: http://www.matrix-vision.com/tl_files/mv11/Glossary/art_image_errors_sensors_en.pdf

Defective Pixels Correction

Due to random process deviations, not all pixels in an image sensor array will react in the same way to a given light condition. These variations are known as blemishes or defective pixels.

There are two types of defective pixels:

  1. leaky pixel (in the dark)
    which indicates pixels that produce a higher read out code than average
  2. cold pixel (in standard light conditions)
    which indicates pixels that produce a lower read out code than average when the sensor is exposed (e.g. caused by dust particles on the sensor)
Note
Please use either an 8 bit Mono or Bayer image format when correcting the image. After the correction, all image formats will be corrected.

Correcting leaky pixels

To correct the leaky pixels, the following steps are necessary:

  1. Set the gain ("Setting -> Base -> Camera -> GenICam -> Analog Control" "Gain = 0 dB") and the exposure time ("Setting -> Base -> Camera -> GenICam -> Acquisition Control" "ExposureTime = 360 msec") according to the given operating conditions.
    The total number of defective pixels found in the array depends on the gain and the exposure time.
  2. Black out the lens completely
  3. Set the (Filter-) "Mode = Calibrate leaky pixel"
  4. Snap an image ("Acquire" with "Acquisition Mode = SingleFrame")
  5. To activate the correction, choose one of the neighbor replace methods: "Replace 3x1 average" or "Replace 3x3 median"
  6. Save the settings including the correction data via "Action -> Capture Settings -> Save Active Device Settings"
    (Settings can be saved in the Windows registry or in a file)
Note
After having restarted the camera you have to reload the capture settings again.

The filter checks:

Pixel > LeakyPixelDeviation_ADCLimit // (default value: 50) 

All pixels above this value are considered as leaky pixel.

Correcting cold pixels

To correct the cold pixels, the following steps are necessary:

  1. You will need a uniform sensor illumination of approx. 50 - 70 % saturation (which means an average gray value between 128 and 180)
  2. Set the (Filter-) "Mode = Calibrate cold pixel" (Figure 2)
  3. Snap an image ("Acquire" with "Acquisition Mode = SingleFrame")
  4. To activate the correction, choose one of the neighbor replace methods: "Replace 3x1 average" or "Replace 3x3 median"
  5. Save the settings including the correction data via "Action -> Capture Settings -> Save Active Device Settings"
    (Settings can be saved in the Windows registry or in a file)
Note
After having restarted the camera you have to reload the capture settings again.

The filter checks:

Pixel < T[cold] // (default value: 15 %) 

// T[cold] = deviation of the average gray value (ColdPixelDeviation_pc)

All pixels below this value show a response below normal behavior and are considered cold pixels.

Figure 2: Image corrections: DefectivePixelsFilter
Note
Repeating the defective pixel corrections will accumulate the correction data which leads to a higher value in "DefectivePixelsFound". If you want to reset the correction data or repeat the correction process you have to set the (Filter-) "Mode = Reset Calibration Data".
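For completeness, the host-based filter can also be configured from code via the ImageProcessing class. The following is only a sketch: the mode strings are taken from the wxPropView display shown above and may differ from the exact enumeration names documented in the mvIMPACT Acquire API:

// assuming 'using namespace mvIMPACT::acquire;' and an initialised Device* pDev
ImageProcessing ip( pDev );
// 1. calibrate with the lens blacked out (leaky pixels) or with uniform illumination (cold pixels)
ip.defectivePixelsFilterMode.writeS( "Calibrate leaky pixel" );
// ... snap one image ...
// 2. activate one of the replace methods for the following acquisitions
ip.defectivePixelsFilterMode.writeS( "Replace 3x1 average" );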

Storing pixel data on the device

To save and load the defective pixel data, appropriate functions are available:

  • int mvDefectivePixelDataLoad( void )
  • int mvDefectivePixelDataSave( void )

The section "Setting -> Base -> ImageProcessing -> DefectivePixelsFilter" was also extended (see Figure 2a). First, the DefectivePixelsFound indicates the number of found defective pixels. The coordinates are available through the properties DefectivePixelOffsetX and DefectivePixelOffsetY now. In addition to that it is possible to edit, add and delete these values manually (via right-click on the "DefectivePixelOffset" and select "Append Value" or "Delete Last Value"). Second, with the function

  • int mvDefectivePixelReadFromDevice( void )
  • int mvDefectivePixelWriteToDevice( void )

you can exchange the data between the filter and the camera and vice versa.

Figure 2a: Image corrections: DefectivePixelsFilter (since driver version 2.17.1 and firmware version 2.12.406)

Just right-click on mvDefectivePixelWriteToDevice and click on "Execute" to write the data to the camera (and hand over the data to the mv Defective Pixel Correction Control). To permanently store the data inside the camera's non-volatile memory, mvDefectivePixelDataSave must be called afterwards as well!

Figure 2b: Defective pixel data are written to the camera (since driver version 2.17.1 and firmware version 2.12.406)

When the camera is opened, the defective pixel data will be loaded from the camera. If pixels are already available in the filter (via calibration), you can nevertheless load the values from the camera. In this case the values will be merged with the existing ones, i.e. new ones are added and duplicates are removed.

Dark Current Correction

Dark current is a characteristic of image sensors: image sensors also deliver signals in total darkness, caused for example by heat, which spontaneously creates charge carriers. This signal overlays the image information. Dark current depends on two circumstances:

  1. Exposure time
    The longer the exposure, the greater the dark current part. I.e. using long exposure times, the dark current itself could lead to an overexposed sensor chip
  2. Temperature
    By cooling the sensor chip the dark current production can be greatly reduced (the dark current is roughly halved for every 6 °C of cooling)

Correcting Dark Current

The dark current correction is a pixel-wise correction where the dark current correction image removes the dark current from the original image. To get a good result it is necessary to snap the original and the dark current images with the same exposure time and temperature.

Note
Dark current snaps generally show noise.

To correct the dark current, the following steps are necessary:

  1. Black out the lens completely
  2. Set exposure time according to the application
  3. Set the number of images for calibration in "Setting -> Base -> ImageProcessing -> DarkCurrentFilter -> CalibrationImageCount" (Figure 3).
  4. Set "Setting -> Base -> ImageProcessing -> DarkCurrentFilter -> Mode" to "Calibrate" (Figure 3)
  5. Snap an image ("Acquire" with "Acquisition Mode = SingleFrame")
  6. Finally, you have to activate the correction: Set "Setting -> Base -> ImageProcessing -> DarkCurrentFilter -> Mode" to "On"
  7. Save the settings including the correction data via "Action -> Capture Settings -> Save Active Device Settings"
    (Settings can be saved in the Windows registry or in a file)

The filter snaps a number of images and averages the dark current images to one correction image.

Note
After having restarted the camera you have to reload the capture settings again.
Figure 3: Image corrections: CalibrationImageCount
Figure 4: Image corrections: Calibrate
Figure 5: Image corrections: Dark current

Flat-Field Correction

Each pixel of a sensor chip is a single detector with its own properties. In particular, this pertains to its sensitivity or, as the case may be, its spectral sensitivity. To compensate for this (including lens and illumination variations), a plain and uniformly "colored" calibration plate (e.g. white or gray) is snapped as a flat-field, which is then used to correct the original image. Between the flat-field correction and the future application you must not change the optics. To reduce errors while doing the flat-field correction, a flat-field saturation between 50 % and 75 % in the histogram is convenient.

Note
Flat-field correction can also be used as a destructive watermark and works for all f-stops.

To perform a flat-field correction, the following steps are necessary:

  1. You need a plain and equally "colored" calibration plate (e.g. white or gray)
  2. No single pixel may be saturated - that's why we recommend setting the maximum gray level in the brightest area to max. 75 % of the gray scale (i.e., to gray values below 190 when using 8-bit values)
  3. Choose a BayerXY in "Setting -> Base -> Camera -> GenICam -> Image Format Control -> PixelFormat".
  4. Set the (Filter-) "Mode = Calibrate" (Figure 6)
  5. Start a Live snap ("Acquire" with "Acquisition Mode = Continuous")
  6. Finally, you have to activate the correction: Set the (Filter-) "Mode = On"
  7. Save the settings including the correction data via "Action -> Capture Settings -> Save Active Device Settings"
    (Settings can be saved in the Windows registry or in a file)
Note
After restarting the camera you have to load the saved capture settings again.

The filter snaps a number of images (according to the value of the CalibrationImageCount, e.g. 5) and averages the flat-field images to one correction image.

Figure 6: Image corrections: Host-based flat field correction
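
Analogously, the host-based flat-field filter can be configured from code. The following is a minimal sketch, assuming the mvIMPACT::acquire::ImageProcessing class exposes the filter via flatFieldFilterCalibrationImageCount and flatFieldFilterMode (names derived from the property tree above, please verify against your mvIMPACT Acquire version):

#include <mvIMPACT_CPP/mvIMPACT_acquire.h>

  // more code
  ImageProcessing ip( pDev );
  // average e.g. 5 flat-field images during calibration
  ip.flatFieldFilterCalibrationImageCount.write( 5 );
  ip.flatFieldFilterMode.writeS( "Calibrate" );
  // ... acquire the calibration images of the plain calibration plate, then activate the correction
  ip.flatFieldFilterMode.writeS( "On" );
  // more code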

Camera-based Flat-Field Correction

The camera-based Flat-Field Correction feature supports the full AOI and a 14 bit to 14 bit correction (with 12 bit coefficients). This enables a pixel-to-pixel correction, saves a considerable amount of host CPU load, and lowers latency. To reduce noise, a number of "mv Flat-Field Correction Calibration Images" can be averaged; a correction image is then calculated in the camera. This may take some time depending on the number of images; while it is running, the camera's LED blinks green. One correction image can be stored for all user settings.

The camera-based Flat-Field Correction is independent of the offset and uses a run-mode trigger (e.g. external trigger).

You can set the "camera-based flat field correction" in the following way:

  1. Set "mv Flat-Field Correction Calibration Image Count" to, for example, 5.
  2. This will average 5 images before calculating the FFC factors to reduce impact of noise.
  3. Stop "Continuous" acquisition mode, then right click on "int mvFFCCalibrate()" -> "Execute".
  4. Finally, you have to activate the correction: Set the "mv Flat-Field Correction Enable = 1".

Depending on the sensor, this needs some time, because the data is stored in the internal flash memory (the yellow LED lights up).

Figure 7: wxPropView - settings

Example

Figure 8: wxPropView - inhomogeneous light / pixel histogram
Figure 9: wxPropView - inhomogeneous light / pixel histogram (horizontal)
Figure 10: wxPropView - compensated light / pixel histogram
Figure 11: wxPropView - compensated light / pixel histogram (horizontal)



Optimizing the color fidelity of the camera

The purpose of this chapter is to optimize the color image of a camera so that it looks as natural as possible on different displays and for human vision.

This implies some linear and nonlinear operations (e.g. display color space or Gamma viewing LUT) which are normally not necessary or recommended for machine vision algorithms. A standard monitor offers, for example, several display modes like sRGB, "Adobe RGB", etc., which reproduce the very same camera color differently.

It should also be noted that users can choose either

  • camera based settings and adjustments or
  • host based settings and adjustments or
  • a combination of both.

Camera based settings are advantageous because they achieve the highest calculation precision independent of the transmission bit depth, the lowest latency (all calculations are performed in the FPGA on the fly), and a low CPU load, because the host is not burdened with these tasks.

Host based settings save transmission bandwidth at the expense of accuracy or latency and CPU load. In particular, performing gain, offset, and white balance in the camera while outputting RAW data to the host can be recommended.

Of course host based settings can be used with all families of cameras (e.g. also mvBlueFOX).

Host based settings are:

  • look-up table (LUTOperations)
  • color correction (ColorTwist)

The block diagram of the camera shows where the different settings can be found in the data flow of the image data.

To show the different color behaviors, we take a color chart as a starting point:

Figure 1: Color chart as a starting point

If we take a SingleFrame image without any color optimizations, an image can be like this:

Figure 2: SingleFrame snap without color optimization
Figure 3: Corresponding histogram of the horizontal white to black profile

As you can see,

  • saturation is missing,
  • white is more light gray,
  • black is more dark gray,
  • etc.
Note
You have to keep in mind that there are two types of images: the one generated in the camera and the other one displayed on the computer monitor. Up-to-date monitors offer different display modes with different color spaces (e.g. sRGB). According to the chosen color space, the display of the colors is different.

The following figure shows the way to a perfect colored image

Figure 4: The way to a perfect colored image

including these process steps:

  1. Do a Gamma correction (Luminance),
  2. make a White balance and
  3. Improve the Contrast.
  4. Improve Saturation, and use a "color correction matrix" for both
    1. the sensor and / or
    2. the monitor.

The following sections will describe the single steps in detail.

Step 1: Gamma correction (Luminance)

First of all, a Gamma correction (Luminance) can be performed to adapt the image to the way humans perceive light and color.

For this, you can change either

  • the exposure time,
  • the aperture or
  • the gain.

You can set the Gamma value via wxPropView in the following way:

  1. Click on "Setting -> Base -> Camera -> GenICam -> LUT Control -> LUT Selector".
  2. Afterwards, click on "Wizard" to start the LUT Control wizard tool.
    The wizard will load the data from the camera.

    Figure 5: Selected LUT Selector and click on wizard will start wizard tool
    Figure 6: LUT Control
  3. Now, click on the "Gamma..." button
  4. and enter e.g. "2.2" as the Gamma value:

    Figure 7: Gamma Parameter Setup
  5. Then, click on "Copy to..." and select "All" and
  6. and click on "Enable All".
  7. Finally, click on Synchronize and play the settings back to the device (via "Cache -> Device").

    Figure 8: Synchronize

After gamma correction, the image will look like this:

Figure 9: After gamma correction
Figure 10: Corresponding histogram after gamma correction
Note
As mentioned above, you can do a gamma correction via ("Setting -> Base -> ImageProcessing -> LUTControl"). Here, the changes will affect the 8 bit image data and the processing needs the CPU of the host system:

Figure 11: LUTControl dialog

Just set "LUTEnable" to "On" and adapt the single LUTs like (LUT-0, LUT-1, etc.).

Step 2: White Balance

As you can see in the histogram, the colors red and blue are above green. Using green as a reference, we can optimize the white balance via "Setting -> Base -> Camera -> GenICam -> Analog Control -> Balance Ratio Selector" ("Balance White Auto" has to be "Off"):

  1. Just select "Blue" and
  2. adjust the "Balance Ratio" value until the blue line reaches the green one.

    Figure 12: Optimizing white balance
  3. Repeat this for "Red".

After optimizing white balance, the image will look like this:

Figure 13: After white balance
Figure 14: Corresponding histogram after white balance
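
The white balance adjustment can also be done programmatically. A minimal sketch, assuming the GenICam wrapper exposes the standard Analog Control features balanceWhiteAuto, balanceRatioSelector and balanceRatio (the ratio values below are arbitrary examples):

#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

  // more code
  GenICam::AnalogControl anc( pDev );
  anc.balanceWhiteAuto.writeS( "Off" );
  // adjust the blue channel until it matches the green reference
  anc.balanceRatioSelector.writeS( "Blue" );
  anc.balanceRatio.write( 0.95 );
  // repeat for the red channel
  anc.balanceRatioSelector.writeS( "Red" );
  anc.balanceRatio.write( 1.10 );
  // more code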

Step 3: Contrast

Still, black is rendered as a darker gray. To optimize the contrast you can use "Setting -> Base -> Camera -> GenICam -> Analog Control -> Black Level Selector":

  1. Select "DigitalAll" and
  2. adjust the "Black Level" value until black seems to be black.

    Figure 15: Black level adjustment

The image will look like this now:

Figure 16: After adapting contrast
Figure 17: Corresponding histogram after adapting contrast
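
The black level adjustment can be scripted as well. A minimal sketch, assuming the GenICam wrapper exposes the standard Analog Control features blackLevelSelector and blackLevel (the value is an arbitrary example):

#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

  // more code
  GenICam::AnalogControl anc( pDev );
  anc.blackLevelSelector.writeS( "DigitalAll" );
  // lower the black level until black really appears black
  anc.blackLevel.write( -10. );
  // more code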

Step 4: Saturation and Color Correction Matrix (CCM)

Still saturation is missing. To change this, the "Color Transformation Control" can be used ("Setting -> Base -> Camera -> GenICam -> Color Transformation Control"):

  1. Click on "Color Transformation Enable" and
  2. click on "Wizard" to start the saturation via "Color Transformation Control" wizard tool (since firmware version 1.4.57).

    Figure 18: Selected Color Transformation Enable and click on wizard will start wizard tool
  3. Now, you can adjust the saturation e.g. "1.1".

    Figure 19: Saturation Via Color Transformation Control dialog
  4. Afterwards, click on "Enable".
  5. Since driver version 2.2.2, it is possible to set the special color correction matrices at
    1. the input (sensor),
    2. the output side (monitor) and
    3. the saturation itself using this wizard.
  6. Select the specific input and output matrix and
  7. click on "Enable".
  8. As you can see, the correction is done by the host by default ("Host Color Correction Controls"). However, you can have the color correction done by the device instead by clicking on "Write To Device And Switch Off Host Processing". The wizard will then take the settings of the "Host Color Correction Controls" and save them in the device.
  9. Finally, click on "Apply".

After the saturation, the image will look like this:

Figure 20: After adapting saturation
Figure 21: Corresponding histogram after adapting saturation
Note
As mentioned above, you can change the saturation and the color correction matrices via ("Setting -> Base -> ImageProcessing -> ColorTwist"). Here, the changes will affect the 8 bit image data and the processing needs the CPU of the host system:
Figure 22: ColorTwist dialog
Figure 23: Input and output color correction matrix
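
For completeness, enabling the color transformation from code could be sketched as follows, assuming the wrapper exposes the SFNC Color Transformation Control features (the coefficient shown is an arbitrary example):

#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

  // more code
  GenICam::ColorTransformationControl ctc( pDev );
  ctc.colorTransformationEnable.writeS( "1" );
  // individual matrix coefficients can be written via the value selector
  ctc.colorTransformationValueSelector.writeS( "Gain00" );
  ctc.colorTransformationValue.write( 1.2 );
  // more code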



Reducing noise by frame averaging

What is frame average?

As the name suggests, this functionality averages the gray values of each pixel over subsequent frames. This can be used to

  • reduce the noise in an image and
  • compensate motion in an image.

MATRIX VISION has implemented a dynamic version of the frame averaging in the FPGA, which does not need any CPU power of the host system. However, this mode is only available for the

  • mvBlueCOUGAR-X2xx cameras with larger FPGAs.

Dynamic mode of mvBlueCOUGAR-X2xx

This mode uses an adaptive recursive filter with an average slope. The slope sets the amount of new image versus averaged image in relation to the gray scale variation of the pixel. With it, static noise can be removed at full bit depth and full frame rate:

Figure 2: Frame average: Functional principle

This method is well known and is used in the same or a similar way in all flat screen televisions. The amount of de-noising can be set with the slope factor: the smaller the value, the greater the feedback and therefore the de-noising but also the motion blur in the image:

Slope: 256 = 100 % pixel_difference = 100 % signal_in
Figure 3: Frame average: Slope

There are no delays with this option because de-noising is recursive and the SignalOUT is extracted before the frame memory.

Using wxPropView

To use the dynamic frame average mode, you have to do the following steps:

  1. Start wxPropView and
  2. connect to the camera.
  3. Then specify in "Setting -> Base -> Camera -> GenICam -> Device Control" which processing unit of the camera should do the frame averaging, e.g. unit 0 should do the frame averaging
    "mv Device Processing Unit Selector = 0"
    "mv Device Processing Unit = mvFrameAverage".
    Afterwards, "mv Frame Average Control" is available.
  4. Now open "Setting -> Base -> Camera -> GenICam -> mv Frame Average Control" and
  5. set the slope, e.g. 5000: "mv Frame Average Slope = 5000".
  6. Activate frame averaging by setting "mv Frame Average Enable = 1".
Figure 4: wxPropView: Setting the dynamic frame average
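
The same configuration can be sketched in code. The class and property names below are assumed from the GenICam feature names above (mvDeviceProcessingUnitSelector, mvFrameAverageSlope, mvFrameAverageEnable) and should be checked against the generated wrapper:

#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

  // more code
  GenICam::DeviceControl dc( pDev );
  // let processing unit 0 perform the frame averaging
  dc.mvDeviceProcessingUnitSelector.write( 0 );
  dc.mvDeviceProcessingUnit.writeS( "mvFrameAverage" );

  GenICam::mvFrameAverageControl fac( pDev );
  fac.mvFrameAverageSlope.write( 5000 );
  fac.mvFrameAverageEnable.writeS( "1" );
  // more code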

For "static images", setting average slope to small numbers (10 .. 1000) gives the best noise enhancement at the expense of motion blur.

Figure 5: wxPropView - noisy image

For "dynamic images", setting average slope to higher values (1000 .. 5000) reduces motion blur at the expense of noise enhancement. This is especially needed in video application to enhance noise.

Figure 6: wxPropView - reduced noise in image



Optimizing the bandwidth

Calculating the needed bandwidth

In some applications, for example, when multiple cameras share a single network path, you have to keep the bandwidth in mind.

You can calculate the needed bandwidth with the following formula:

Network traffic calculator (interactive calculator in the online documentation)
Inputs: extended block ID (GEV 2.x), image width (pixels), image height (pixels), bytes per pixel, network MTU (max. value of the device with the smallest MTU).
Outputs: number of payload packets "on-wire", network traffic in bytes "on-wire", theoretical max. frame rate at Dual Gigabit Ethernet, at Gigabit Ethernet and at 100 MBit.

Note
Within a GigE network you have a bandwidth of 125 MByte/s; a 100 MBit network has 12.5 MByte/s. The result of this formula is a rough guideline only. Some additional bandwidth is needed by the communication protocol and some other non GigE Vision related network traffic. Apart from that not every network controller can cope with a full 1 GBit/s stream of data, thus "real" results may vary.
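
The calculation behind the calculator can be approximated in a few lines of code. This is a rough sketch only: the assumed header and framing sizes are estimates and not necessarily the exact values used by the calculator above; they also depend on the GEV configuration:

#include <cmath>
#include <cstdio>

int main()
{
    // example values: 1600 x 1200 pixels, 1 byte per pixel, MTU 1500
    const double imageWidth    = 1600.0;
    const double imageHeight   = 1200.0;
    const double bytesPerPixel = 1.0;
    const double mtu           = 1500.0;
    // estimated header sizes (IP + UDP + GVSP inside the MTU; plus Ethernet framing,
    // preamble, FCS and inter-frame gap on the wire)
    const double headersInsideMTU  = 36.0;
    const double overheadPerPacket = 74.0;

    const double payload          = imageWidth * imageHeight * bytesPerPixel;
    const double payloadPerPacket = mtu - headersInsideMTU;
    const double packets          = std::ceil( payload / payloadPerPacket );
    const double bytesOnWire      = payload + packets * overheadPerPacket;
    // Gigabit Ethernet transports roughly 125000000 bytes/s on the wire
    std::printf( "packets: %.0f, bytes on wire: %.0f, theoretical max. fps (GigE): %.1f\n",
                 packets, bytesOnWire, 125000000.0 / bytesOnWire );
    return 0;
}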

Limiting the bandwidth of the imaging device

It is possible to limit the used bandwidth in the following way:

Since
mvIMPACT Acquire 2.25.0
  1. In "Setting -> Base -> Camera -> GenICam -> Device Control -> Device Link Selector" set property "Device Link Throughput Limit Mode" to "On".
  2. Now, you can set the bandwidth with "Device Link Throughput Limit" to your desired bandwidth in bits per second

    Figure 1: wxPropView - Setting Device Link Throughput Limit
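
Programmatically, the same limit could be set like this (a sketch assuming the wrapper exposes the SFNC Device Control features deviceLinkThroughputLimitMode and deviceLinkThroughputLimit; the value is an example):

#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

  // more code
  GenICam::DeviceControl dc( pDev );
  dc.deviceLinkThroughputLimitMode.writeS( "On" );
  // limit the link to e.g. 400 MBit/s (value in bits per second)
  dc.deviceLinkThroughputLimit.write( 400000000LL );
  // more code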


Since
mvIMPACT Acquire < 2.25.0
  1. In "Setting -> Base -> Camera -> GenICam -> Transport Layer Control -> Gev Stream Channel Selector" set property "mv Gev SCBW Control" to "mvGevSCBW".

    Figure 2: wxPropView - Setting mvGevSCBW to mvGevSCBWControl


  2. Now, you can set the bandwidth with "mvGevSCBW". E.g. 10000 for 10 MB.
    According to the image size and acquisition settings, the frame rate will be adjusted.

    Figure 3: wxPropView - Setting bandwidth size

In contrast to this smart bandwidth control mechanism of mvBlueCOUGAR-X cameras, with other cameras you have to know and optimize the Inter-Packet Delay of the camera to avoid congestion in the switch (the loss of packets is an indicator of congestion). You can determine the Inter-Packet Delay with the following calculator:

Inter-Packet Delay calculator (interactive calculator in the online documentation)
Inputs: extended block ID (GEV 2.x), GevSCPS (bytes), PayloadSize (bytes), frames per second, overall bandwidth (bytes), GevTimestampTickFrequency (Hz).
Output: GevSCPD (inter-packet delay in timestamp ticks).



Setting a flicker-free auto expose and auto gain

Introduction

In order to prevent oscillations it is important to adapt the camera frequency to the frequency of AC light.

This is, for example, in

  • Europe 50 Hz (100 brightness fluctuations per second), whereas in
  • the USA, Japan and other countries it is 60 Hz (120 fluctuations per second).

This means the camera must strictly be coupled to this frequency. In conjunction with auto exposure this can only be maintained by using a timer based generation of external trigger pulses. This is a behavior of both sensor types: CCD and CMOS.

Note
It is not enough to use "Setting -> Base -> Camera -> GenICam -> Acquisition Control -> Acquisition Frame Rate" for this, as there are small fluctuations in this frame rate if the exposure time changes. These fluctuations lead to oscillations (see settings marked with red boxes in Figure 1). The "Acquisition Frame Rate" will only provide the exact frame rate if auto exposure is turned off.

As shown in Figure 1, it is possible to set ("mv Exposure Auto Mode") which part of the camera handles the auto expose (device or the sensor itself; pink boxes). Using "mvSensor" as "mv Exposure Auto Mode", it is possible to avoid oscillations in some cases. The reason for this behavior is that you can set more parameters like "mv Exposure Auto Delay Images" in contrast to "mvDevice". However, as mentioned above it is recommended to use a timer based trigger when using auto expose together with continuous acquisition.

Figure 1: wxPropView - Auto expose is turned on and the frame rate is set to 25 fps

Example of using a timer for external trigger

Figure 2 shows how to generate a 25 Hz signal, which triggers the camera:

  • "Setting -> Base -> Camera -> GenICam -> Counter & Timer Control -> Timer Selector -> Timer 1":
    • "Timer Trigger Source" = "Timer1End"
    • "Timer Duration" = "40000"
                   1
      FPS_max =  -----     = 25
                 40000 us
      
  • "Setting -> Base -> Camera -> GenICam -> Acquisition Control -> Trigger Selector -> FrameStart":
    • "Trigger Mode" = "On"
    • "Trigger Source" = "Timer1End"
Figure 2: wxPropView - 25 Hz timer for external trigger
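
In code, the timer based 25 Hz trigger could be set up as follows. This is a sketch following the Counter And Timer Control example later in this document; the Acquisition Control trigger properties are assumed to be exposed as triggerSelector, triggerMode and triggerSource:

#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

  // more code
  GenICam::CounterAndTimerControl ctc( pDev );
  // free running 40000 us timer -> 25 Hz
  ctc.timerSelector.writeS( "Timer1" );
  ctc.timerTriggerSource.writeS( "Timer1End" );
  ctc.timerDuration.write( 40000. );

  GenICam::AcquisitionControl ac( pDev );
  ac.triggerSelector.writeS( "FrameStart" );
  ac.triggerMode.writeS( "On" );
  ac.triggerSource.writeS( "Timer1End" );
  // more code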

No oscillation occurs, regardless of DC ambient vs. AC indoor light.

This operation mode is known as flicker-free or flicker-less operation.

What it mainly does is adjust the frame frequency precisely to the frequency of the power line. Usually the line frequency is very stable, and therefore the difference (beat) frequency of the two signals is very low, probably in the range of < 0.1 Hz.

The fact that we do not know the phase relation between the two frequencies means that we scan the alternating ambient light source with our camera. The shorter the exposure time, the more we see a slow change in brightness.

Using AutoExposure/AutoGain can completely eliminate this change because the frequency of the change is very low. That means it is valid to calculate a brightness difference in one picture and apply it to the next one, because the change is still valid in the next one; we thereby fulfill the Nyquist theorem.

Using an arbitrary scanning frequency like 20 fps, or whatever your algorithm and data flow accepts, is wrong in this respect and leads to oscillations and undesired flicker.

When pointing the camera at a 60 Hz display with a flashing backlight, an oscillation of 10 Hz can of course be seen.

Figure 3: wxPropView - Intensity plot while pointing the camera to a 60 Hz display

Conclusion

To avoid oscillations, it is necessary to adapt the camera frequency to the frequency of AC light. When using auto expose a flicker-free mode (timer based external trigger) is needed. If the camera is used throughout the world it is necessary that the frequency of AC light can be set in the software and the software adapts the camera to this specific environment.



Working with binning

With binning it is possible to combine adjacent pixels vertically and/or horizontally. Depending on the sensor, up to 16 adjacent pixels can be combined.

See also
https://www.matrix-vision.com/manuals/SDK_CPP/Binning_modes.png

Binning brightens the image at the expense of resolution. This is a neat solution for low-light applications that require low noise.

The following results were achieved with the mvBlueFOX3-2124G, however, binning is also available with the mvBlueCOUGAR-X camera family.

Exposure [in us] Binning Gain [in dB] Averager Image
2500 - 0 -
2500 - 30 -
2500 2H 2V 30 -
2500 2H 2V 30 Averaging using 24 frames

The last image shows that you can reduce the noise caused by the increased gain by using frame averaging.
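
A minimal sketch for enabling 2H 2V binning programmatically, assuming the wrapper exposes the SFNC Image Format Control features binningHorizontal and binningVertical (exact availability depends on the sensor):

#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

  // more code
  GenICam::ImageFormatControl ifc( pDev );
  // combine 2 adjacent pixels horizontally and vertically
  ifc.binningHorizontal.write( 2 );
  ifc.binningVertical.write( 2 );
  // more code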



Working with Sony's 4 Tap CCD sensors

To achieve high frame rates, Sony's high-resolution sensors offer a built-in four channel output a.k.a Taps. I.e., the sensor area is divided into 4 parts in 4-Tap mode and 2 parts in 2-Tap mode:

Figure 1: Tap areas in 4-Tap mode (left) and 2-Tap mode (right)

Each Tap has its own gain control and for this reason each image area could behave differently. The following image shows an uncalibrated image where you can see the four different Tap areas of the sensor:

Figure 2: Extreme settings show sensor Tap areas

Therefore it is necessary to calibrate the Taps to get a good image. We can use wxPropView to calibrate the camera.

Note
Before calibrating, you should fix the acquisition parameters like number of Taps, device clock frequency, pixel format and exposure time.
  1. Open wxPropView and
  2. open the Analog Control ("Setting -> Base -> Camera -> GenICam -> Analog Control").
  3. Now, you can adapt the Gain of each Tap ("AnalogTap1", "AnalogTap2", "AnalogTap3", "AnalogTap4"):

    Figure 3: wxPropView - Setting the gain of each Tap area (you will find the exposure time under acquisition control)

In our example the calibrated image would look like this:

Figure 4: wxPropView - Calibrated Taps
Note
When using 2 Taps, adjusting "AnalogTap3" and "AnalogTap4" has no effect.
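
The per-Tap calibration can also be scripted. A sketch assuming the gain selector accepts the "AnalogTapN" entries shown above (the gain values are arbitrary examples):

#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

  // more code
  GenICam::AnalogControl anc( pDev );
  anc.gainSelector.writeS( "AnalogTap1" );
  anc.gain.write( 2.0 );
  anc.gainSelector.writeS( "AnalogTap2" );
  anc.gain.write( 1.8 );
  // ... repeat for "AnalogTap3" and "AnalogTap4" in 4-Tap mode
  // more code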

The mvBlueCOUGAR-XD sensor range contains three 4-Taps sensors by Sony:



Minimizing sensor pattern of mvBlueCOUGAR-X1010G

Sometimes the gray scale version of Aptina's sensor MT9J003 shows structures comparable to the Bayer patterns of color sensors. This pattern is particularly apparent in scaled images:

Figure 1: Bayer pattern like structures of the MT9J003(gray scale version)

To minimize this pattern, you can balance the sensor patterns since firmware version 2.3.70.0. This procedure works like the white balancing used with color sensors; for this reason the same terms are used (red, blue, green).

The balance reference is the "green" pixel value from the "blue-green" line of the sensor.

See: Output sequence of color sensors (RGB Bayer)

I.e. all gray scale values of these "green" pixels are averaged.

With "Setting -> Base -> Camera -> GenICam -> Analog Control -> Balance Ration Selector" you can select each color to set the "Balance Ratio":

  • "Red" averages the "red" pixel values from the "red-green" line of the sensor.
  • "Green" averages the "green" pixel values from the "red-green" line of the sensor, too.
  • "Blue" averages the "blue" pixel values from the "blue-green" line of the sensor.

I.e. there are 4 average values (reference value, red value, green value, blue value). The lowest value will be unchanged, the other values are increased using each "Balance Ratio".

However, by using the property "Balance White Auto" you can balance the sensor automatically:

Figure 2: Balance White Auto

After balancing, we recommend saving these settings to a UserSet.

Figure 3: Calibrated sensor



Working with the dual exposure feature ("mvMultiZone") of mvBlueCOUGAR-XD107

Introduction

The IMX420 used in the mvBlueCOUGAR-XD107 is a Pregius sensor of the third generation. This sensor features a dual exposure mode, i.e. after a trigger event, for example, different image areas can be exposed differently at the same time.

To activate the dual exposure mode we introduced a new exposure mode called mvMultiZone.

If this mode is selected you can set the "mv Exposure Horizontal Zone Divider". This property indicates where the zones are divided horizontally. The step size is 12.5%.

The exposure time of the different zones can be specified using the "Exposure Time Selector". Possible zones are

  • mvHorizontalZone0
  • mvHorizontalZone1

Setting the dual exposure using wxPropView

To activate the dual exposure, just

  1. Select mvMultiZone in "Setting -> Base -> Camera -> GenICam -> Acquisition Control -> Exposure Mode".
  2. Set the "Exposure Time" of each zone by selecting the specific "Exposure Time Selector".
  3. Finally, specify the "mv Exposure Horizontal Zone Divider" (e.g. 50, i.e. you have two equal image halfs).
Figure 1: wxPropView - new Exposure Mode
Figure 2: Example image with two differently exposed zones

Programming the dual exposure

#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

  // more code
  GenICam::AcquisitionControl ac( pDev );
  // enable the dual exposure feature
  ac.exposureMode.writeS( "mvMultiZone" );
  // select the second zone and set its exposure time to 5000 us
  ac.exposureTimeSelector.writeS( "mvHorizontalZone1" );
  ac.exposureTime.write( 5000 );
  // divide the image horizontally into two equal halves (50 %)
  ac.exposureHorizontalDivider.write( 50 );
  // more code



Working with the dual gain feature of mvBlueCOUGAR-XD107/XD1031 and mvBlueCOUGAR-X102m/X107

Introduction

The IMX420/425/428 used in the mvBlueCOUGAR-XD107/XD1031 and mvBlueCOUGAR-X102m/X107 are Pregius sensors of the third generation.

Those sensors feature a dual gain mode, i.e. after a trigger event, for example, different image areas can be amplified differently at the same time.

To activate the dual gain mode it is necessary to enable the multi area mode by using the mvMultiAreaMode property. At least two different AOIs have to be defined.

Note
Both AOIs must not overlap or touch each other!

Once the AOIs are configured, the GainSelector has to be set to one of the newly implemented options called mvHorizontalZone0 and mvHorizontalZone1.

The gain value of the different zones can be specified using the "Gain Selector" and the corresponding gain property. Possible zones are

  • mvHorizontalZone0
  • mvHorizontalZone1

If this mode is selected you can set the "mv Gain Horizontal Zone Divider". This property indicates where the zones are divided horizontally once more than two AOIs are configured. In this case the e.g. 25% means that the upper 25% of the image are defined by the gain value of mvHorizontalZone0 and the lower 75% are defined by the gain value of mvHorizontalZone1.

Note
Some sensors may only allow changing the gain at certain positions, e.g. the last line of a defined ROI. In this case the first possible switching point above the actual line will be used.

Setting the dual gain using wxPropView

To activate the dual gain, just

  1. Use the Multi AOI Wizard to adjust the different AOIs. (They must not overlap or touch each other!)
  2. Select mvMultiZone in "Setting -> Base -> Camera -> GenICam -> Analog Control -> mvGainMode".
  3. Select mvHorizontalZone0 in "Setting -> Base -> Camera -> GenICam -> Analog Control -> GainSelector".
  4. Adjust the gain value for the first AOI in "Setting -> Base -> Camera -> GenICam -> Analog Control -> GainSelector -> Gain".
  5. Adjust the gain divider position "Setting -> Base -> Camera -> GenICam -> Analog Control -> GainSelector -> mvGainHorizontalZoneDivider".
  6. Select mvHorizontalZone1 in "Setting -> Base -> Camera -> GenICam -> Analog Control -> GainSelector".
  7. Adjust the gain value for the second AOI in "Setting -> Base -> Camera -> GenICam -> Analog Control -> GainSelector -> Gain".
Figure 1: wxPropView - configuring multiple AOIs
Figure 2: wxPropView - configuring the gain of the first zone
Figure 3: Example image with two different amplified zones

Programming the dual gain mode

As an example, the IMX425 sensor is used for this sample. The goal is to configure three AOIs which have a similar height. Since the AOIs must not overlap or touch each other, it is important to increase the offset of the next AOI by the smallest increment size, which is 8 in this case.

#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

  // more code
    GenICam::ImageFormatControl ifc( pDev );
    
    // combine the individual areas into one multi-AOI image
    ifc.mvMultiAreaMode.writeS( "mvMultiAreasCombined" );

    ifc.mvAreaSelector.writeS( "mvArea0" );
    ifc.mvAreaOffsetX.write( 0 );
    ifc.mvAreaOffsetY.write( 0 );
    ifc.mvAreaWidth.write( ifc.mvAreaWidth.getMaxValue( ) );
    ifc.mvAreaHeight.write( 360 );
    ifc.mvAreaEnable.writeS( "1" );

    ifc.mvAreaSelector.writeS( "mvArea1" );
    ifc.mvAreaOffsetX.write( 0 );
    ifc.mvAreaOffsetY.write( 368 );
    ifc.mvAreaWidth.write( ifc.mvAreaWidth.getMaxValue( ) );
    ifc.mvAreaHeight.write( 360 );
    ifc.mvAreaEnable.writeS( "1" );

    ifc.mvAreaSelector.writeS( "mvArea2" );
    ifc.mvAreaOffsetX.write( 0 );
    ifc.mvAreaOffsetY.write( 736 );
    ifc.mvAreaWidth.write( ifc.mvAreaWidth.getMaxValue( ) );
    ifc.mvAreaHeight.write( 360 );
    ifc.mvAreaEnable.writeS( "1" );

    GenICam::AnalogControl anc( pDev );
    // enable the multi-zone gain mode
    anc.mvGainMode.writeS( "mvMultiZone" );

    anc.gainSelector.writeS( "mvHorizontalZone0" );
    anc.gain.write( 12 );
    anc.mvGainHorizontalZoneDivider.write( 80 );

    anc.gainSelector.writeS( "mvHorizontalZone1" );
    anc.gain.write( 0 );
  // more code



Working with the dual ADC feature of mvBlueCOUGAR-XD107

Introduction

The IMX420 used in the mvBlueCOUGAR-XD107C/G is a Pregius sensor of the third generation. This sensor features a dual ADC mode which allows to increase the dynamic range of the sensor.

To activate the dual ADC mode it is sufficient to use the "mv Dual ADC Mode" property.

Enabling the dual ADC mode using wxPropView

To activate the dual ADC mode for a color model, just

  1. Change the pixel format to "BayerRG16" "Setting -> Base -> Camera -> GenICam -> Image Format Control -> PixelFormat".
  2. Enable "mvDualAdcMode" "Setting -> Base -> Camera -> GenICam -> Image Format Control -> mvDualAdcMode".

To activate the dual ADC mode for a monochrome model, just

  1. Change the pixel format to "Mono16" "Setting -> Base -> Camera -> GenICam -> Image Format Control -> PixelFormat".
  2. Enable "mvDualAdcMode" "Setting -> Base -> Camera -> GenICam -> Image Format Control -> mvDualAdcMode".
Note
Changing the pixel format to a pixel format which utilizes more than one byte per pixel (e.g. BayerRG16 or Mono16) is necessary to present the higher dynamic range in the image. The maximum frame rate will be halved because of the multi byte pixel format. Please be aware that in case of histogram analysis this mode might not be the best option since the transition between both ADCs might show a slight gain offset error.
Figure 1: wxPropView - enabling the dual ADC mode

Programming the dual ADC mode

#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

  // more code
    GenICam::ImageFormatControl ifc( pDev );
    ifc.pixelFormat.writeS("BayerRG16");
    ifc.mvDualAdcMode.writeS( "1" );   
  // more code



Working with triggers

There are several use cases concerning trigger:



Processing triggers from an incremental encoder

Basics

The following figure shows the principle of an incremental encoder:

Figure 1: Principle of an incremental encoder

This incremental encoder sends A, B, and Z pulses. With these pulses there are several ways to synchronize an image acquisition with the incremental encoder.

Using Encoder Control

To create an external trigger event by an incremental encoder, please follow these steps:

  1. Connect the incremental encoder output signal A, for example, to the digital input 0 ("Line4") of the mvBlueCOUGAR-X .
    This line counts the forward pulses of the incremental encoder.
  2. Depending on the signal quality, it could be necessary to set a debouncing filter at the input (red box in Figure 3):
    Adapt in "Setting -> Base -> Camera -> GenICam -> Digital I/O Control" the "Line Selector" to "Line4" and set "mv Line Debounce Time Falling Edge" and "mv Line Debounce Time Rising Edge" according to your needs.
  3. Set the trigger "Setting -> Base -> Camera -> GenICam -> Acquisition Control -> Trigger Selector" "FrameStart" to the "Encoder0" ("Trigger Source") signal.
  4. Then set "Setting -> Base -> Camera -> GenICam -> Encoder Control -> Encoder Selector" to "Encoder0" and
  5. adapt the parameters to your needs.
    See also
    Encoder Control
Figure 2: wxPropView settings
Note
The max. possible frequency is 5 kHz.

Programming the Encoder Control

#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

  // more code
  GenICam::EncoderControl ec( pDev );
  ec.encoderSelector.writeS( "Encoder0" );
  ec.encoderSourceA.writeS( "Line4" );
  ec.encoderMode.writeS( "FourPhase" );
  ec.encoderOutputMode.writeS( "PositionUp" );
  // more code

Using Counter

It is also possible to use Counter and CounterEnd as the trigger event for synchronizing images with an incremental encoder.

To create an external trigger event by an incremental encoder, please follow these steps:

  1. Connect the incremental encoder output signal A, for example, to the digital input 0 ("Line4") of the mvBlueCOUGAR-X .
    This line counts the forward pulses of the incremental encoder.
  2. Set "Setting -> Base -> Camera -> GenICam -> Counter and Timer Control -> Counter Selector" to "Counter1" and
  3. "Counter Event Source" to "Line4" to count the number of pulses e.g. as per revolution (e.g. "Counter Duration" to 3600).
  4. Then set the trigger "Setting -> Base -> Camera -> GenICam -> Acquisition Control -> Trigger Selector" "FrameStart" to the "Counter1End" ("Trigger Source") signal.
Figure 3: wxPropView setting

To reset "Counter1" at Zero degrees, you can connect the digital input 1 ("Line5") to the encoder Z signal.



Generating a pulse width modulation (PWM)

Basics

To dim a laser line generator, for example, you have to generate a pulse width modulation (PWM).

For this, you will need

  • 2 timers and
  • the active signal of the second timer at an output line

Programming the pulse width modulation

You will need two timers and you have to set a trigger.

  • Timer1 defines the interval between two triggers.
  • Timer2 generates the trigger pulse at the end of Timer1.

The following sample shows a trigger

  • which is generated every second and
  • whose pulse width is 10 ms:
#include <mvIMPACT_CPP/mvIMPACT_acquire.h>
#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

... 

// Master: Set timers to trigger the image: Start after queue is filled
    GenICam::CounterAndTimerControl catcMaster(pDev);
    // Timer1 defines the trigger interval: 1000000 us = 1 s, re-started by its own end
    catcMaster.timerSelector.writeS( "Timer1" );
    catcMaster.timerDelay.write( 0. );
    catcMaster.timerDuration.write( 1000000. );
    catcMaster.timerTriggerSource.writeS( "Timer1End" );

    // Timer2 defines the pulse width: 10000 us = 10 ms, started at the end of Timer1
    catcMaster.timerSelector.writeS( "Timer2" );
    catcMaster.timerDelay.write( 0. );
    catcMaster.timerDuration.write( 10000. );
    catcMaster.timerTriggerSource.writeS( "Timer1End" );
See also
Counter And Timer Control
Note
Make sure that the Timer1 interval is larger than the processing time. Otherwise, images will be lost.

Now, the two timers will work like the following figure illustrates, which means

  • Timer1 is the trigger event and
  • Timer2 the trigger pulse width:
Figure 1: Timers

The timers are defined, now you have to set the digital output, e.g. "Line 0":

// Set Digital I/O
    GenICam::DigitalIOControl io(pDev);
    io.lineSelector.writeS( "Line0" );
    io.lineSource.writeS( "Timer2Active" );
See also
Digital I/O Control

This signal has to be connected with the digital inputs of the application.

Programming the pulse width modulation with wxPropView

The following figures show how you can set the timers using the GUI tool wxPropView:

  1. Setting of Timer1 (blue box) on the master camera:

    Figure 2: wxPropView - Setting of Timer1

  2. Setting of Timer2 (purple box) on the master camera:

    Figure 3: wxPropView - Setting of Timer2

  3. Assigning timer to DigOut (orange box in Figure 2).



Outputting a pulse at every other external trigger

To do this, please follow these steps:

  1. Switch "Trigger Mode" to "On" and
  2. Select the "Trigger Source", e.g. "Line5".
  3. Use "Counter1" and count the number of input trigger by setting the "Counter Duration" to "2".
  4. Afterwards, start "Timer1" at the end of "Counter1":

    Figure 1: wxPropView - Setting the sample

The "Timer1" appears every second image.

Now, you can assign "Timer1Active" to a digital output e.g. "Line3":

Figure 2: Assigning the digital output
Note
You can delay the pulse if needed.



Creating different exposure times for consecutive images

If you want to create a sequence of exposure times, you have to trigger the camera "externally" via pulse width:

  1. Use Timer and Counter to build a sequence of different pulse widths.
  2. Use the Counter for the time between the exposures (with respect to the readout times).
  3. Afterwards, use an AND gate followed by an OR gate to combine the different exposure times.
Note
Please be sure that the sensor can output the complete image during Counter1 or Counter2. Otherwise, only one integration time will be used.
Figure 1: wxPropView - Logic gate principle

You can set up this sample in wxPropView. E.g. the sensor delivers 22.7 frames per second in Continuous Mode. This means that the sensor needs approx. 44 ms to output the complete image.

    1
--------- = approx. 44 ms
  22.7

We take 55 ms to be sure. Now, as different exposure times we take 1 ms (Timer1) and 5 ms (Timer2). To get the 55 ms, we have to add 54000 us (Counter1) and 50000 us (Counter2).

Finally, you have to set the logic gate as shown in the figure:

Figure 2: wxPropView - Logic gate setting
Note
Because there are 4 counters and 2 timers you can only add one further exposure time using one counter as a timer.

So if you want other sequences, you have to use the counters and timers in a flexible way, as shown in the next sample:

Sequence with 4 times exposure A followed by 1 time exposure B

If you have an external trigger, you can use the counter and timer to create longer exposure sequences.

For example, if you want a sequence with 4 times exposure A followed by 1 time exposure B you can count the trigger events. That means practically:

  1. Use Counter1 to count 5 trigger signals and then
  2. issue Timer2 for the long exposure time (re-triggered by Counter1End).
  3. Every trigger issues Timer1 for the short exposure.
  4. Afterwards, an AND gate followed by an OR gate combines the different exposure times.

In wxPropView it will look like this:

Figure 3: wxPropView - Logic gate setting 2



Detecting overtriggering

Scenario

The image acquisition of a camera consists of two steps:

  • exposure of the sensor and
  • readout of the sensor data

If a trigger signal arrives during these steps, it will be skipped:

Figure 1: Trigger counter increases but the start exposure counter not

To detect overtriggering, you can use counters:

  • One counter counts the incoming trigger signals, the
  • second counter counts the ExposureStart signals.

Using the chunk data you can overlay the counters in the live image.

Setting the overtrigger detector using wxPropView

First of all, we have to set the trigger in "Setting -> Base -> Camera -> GenICam -> Acquisition Control" with following settings:

Property name wxPropView Setting
Trigger Selector FrameStart
Trigger Mode On
Trigger Source Line4
Trigger Activation RisingEdge
Exposure Mode Timed

This trigger will start an acquisition after a rising edge signal on line 4 (= DigIn0 ).

Now, set the two counters. Both counters (Counter1 and Counter2) will be reset and start after the acquisition (AcquisitionStart) has started.

While Counter1 increases with every ExposureStart event (see figure above for the event and acquisition details) ...

Figure 2: Setting Counter1

... Counter2 increases with every RisingEdge of the trigger signal:

Figure 3: Setting Counter2

Now, you can check if the trigger signal is skipped (when a rising edge signal is active during readout) or not by comparing the two counters.

Enable the inclusion of the selected chunk data ("Chunk Mode Active = 1") in the payload of the image in "Setting -> Base -> Camera -> GenICam -> Chunk Data Control":

Figure 4: Enable chunk data

Activate the info overlay in the display area. Right-click on the live display and select: "Request Info Overlay"

Figure 5: Show chunk data
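
For reference, the counter and chunk configuration could be scripted roughly as follows. This is a sketch; the counter and chunk properties are assumed to follow the SFNC names CounterSelector, CounterEventSource, CounterEventActivation and ChunkModeActive, and the counter reset/start configuration is omitted:

#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

  // more code
  GenICam::CounterAndTimerControl ctc( pDev );
  // Counter1 counts ExposureStart events
  ctc.counterSelector.writeS( "Counter1" );
  ctc.counterEventSource.writeS( "ExposureStart" );
  // Counter2 counts rising edges of the trigger signal on Line4
  ctc.counterSelector.writeS( "Counter2" );
  ctc.counterEventSource.writeS( "Line4" );
  ctc.counterEventActivation.writeS( "RisingEdge" );

  // include chunk data in the payload so both counter values can be compared per frame
  GenICam::ChunkDataControl cdc( pDev );
  cdc.chunkModeActive.writeS( "1" );
  // more code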

The following figure shows that no trigger signal is skipped:

Figure 6: Trigger Signal counter equals ExposureStart counter

The following figure shows that the acquisition is overtriggered:

Figure 7: Trigger Signal counter is higher than ExposureStart counter



Triggering of an indefinite sequence with precise starting time

Scenario

Especially in the medical area, there are applications where a triggered acquisition is started, for example, with a foot switch. The following challenges have to be solved in combination with these applications:

  • The user wants the acquired image immediately (precise starting time).
  • It is not known, when the user stops the acquisition (indefinite sequence).

Using AcquisitionStart as the trigger source, it could take between 10 and 40 ms until the camera acquires the first frame. That is not really an immediate acquisition start. It is recommended to use FrameStart as the trigger source instead. However, depending on when the trigger event occurs, there will be a timeout during the first frame in nearly all cases.

You can avoid this by using a timer which generates a "high" every 100 us and which is combined with the trigger input Line4 by a logical AND gate. I.e. if the timer is "high" and there is a trigger signal at Line4, the logical conjunction is true. The result of the AND gate is then connected as the TriggerSource of the FrameStart trigger using a logical OR gate. I.e. as soon as the logical AND conjunction is true, the trigger source becomes true and the image acquisition starts.

The following figure illustrates the settings:

Figure 1: Schematic illustration of the settings

With this setting, there is still an acceptable time delay of approx. 100 to 130 us.

Creating the use case using wxPropView

First of all, we have to set the timer in "Setting -> Base -> Camera -> GenICam -> Counter And Timer Control" with following settings:

Property name wxPropView Setting
Timer Selector Timer1
Timer Trigger Source Timer1End
Timer Duration 100.000

Afterwards, we have to set the logical gates in "Setting -> Base -> Camera -> GenICam -> mv Logic Gate Control" with following settings:

Property name wxPropView Setting
mv Logic Gate AND Selector mvLogicGateAND1
mv Logic Gate AND Source 1 Line4
mv Logic Gate AND Source 2 Timer1Active
mv Logic Gate OR Selector mvLogicGateOR1
mv Logic Gate OR Source 1 mvLogicGateAND1Output

Finally, we have to set the trigger in "Setting -> Base -> Camera -> GenICam -> Acquisition Control" with following settings:

Property name wxPropView Setting
Trigger Selector FrameStart
Trigger Mode On
Trigger Source mvLogicGateAND1Output
Trigger Activation RisingEdge
Exposure Mode Timed
Figure 2: Sample settings
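
The same configuration expressed in code might look as follows. The mv Logic Gate Control class and property names are assumed from the GenICam feature names in the tables above and must be checked against the generated wrapper; the 100 us timer duration follows the text above:

#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

  // more code
  GenICam::CounterAndTimerControl ctc( pDev );
  // free running timer generating a "high" every 100 us
  ctc.timerSelector.writeS( "Timer1" );
  ctc.timerTriggerSource.writeS( "Timer1End" );
  ctc.timerDuration.write( 100. );

  GenICam::mvLogicGateControl lgc( pDev );
  lgc.mvLogicGateANDSelector.writeS( "mvLogicGateAND1" );
  lgc.mvLogicGateANDSource1.writeS( "Line4" );
  lgc.mvLogicGateANDSource2.writeS( "Timer1Active" );
  lgc.mvLogicGateORSelector.writeS( "mvLogicGateOR1" );
  lgc.mvLogicGateORSource1.writeS( "mvLogicGateAND1Output" );

  GenICam::AcquisitionControl ac( pDev );
  ac.triggerSelector.writeS( "FrameStart" );
  ac.triggerMode.writeS( "On" );
  ac.triggerSource.writeS( "mvLogicGateAND1Output" );
  ac.triggerActivation.writeS( "RisingEdge" );
  // more code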



Working with I/Os

There are several use cases concerning I/Os:



Controlling strobe or flash at the outputs

Of course, the mvBlueCOUGAR-X supports strobe or flash lights. However, there are several things you have to keep in mind when using strobes or flash:

  1. Be sure that the illumination fits with the movement of the device under test.
  2. Bright illumination and careful control of exposure time are usually required.
  3. To compensate blur in the image, short exposure times are needed.

Alternatively, you can use a flash with short burn times. For this, you can control the flash using the camera. The following figures show how you can do this using wxPropView:

  1. Select in "Setting -> Base -> Camera -> Digital I/O Control" the output line with the "Line Selector" to which the strobe or flash is connected.
  2. Now, set the "Line Source" to "mvExposureAndAcquisitionActive".
    This means that the signal will be high for the exposure time and only while acquisition of the camera is active.
Figure 1: Setting the "Line Source" to "mvExposureAndAcquisitionActive"
Note
This can be combined using an external trigger.
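
In code, following the Digital I/O Control example used elsewhere in this document, this could look like the following (the output line is an example):

#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

  // more code
  GenICam::DigitalIOControl io( pDev );
  // drive the flash output for the duration of the exposure while acquisition is active
  io.lineSelector.writeS( "Line0" );
  io.lineSource.writeS( "mvExposureAndAcquisitionActive" );
  // more code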

Compensating delay of strobe or flash

Normally, the input circuitry of a flash has a delay (e.g. due to low pass filtering). Using "ExposureActive" to fire the strobe would therefore illuminate the scene delayed with respect to the exposure of the sensor. Figure 2 shows the problem:

Figure 2: Flash delay with "ExposureActive"

To solve this issue, you can use following procedure:

  1. Do not use "ExposureActive" for triggering strobe.
  2. Build flash signal with Timer,
  3. trigger Timer with external trigger (e.g. "Line5").
  4. Use "Trigger Delay" to delay exposure of the sensor accordingly.

In wxPropView it will look like this:

Figure 3: Working with Timer and "Trigger Delay"



Creating a debouncing filter at the inputs

In some cases, it is necessary to eliminate noise on trigger lines. This can become necessary when either

  • the edges of a trigger signal are not perfect in terms of slope or
  • if, because of the nature of the trigger signal source, multiple trigger edges are generated within a very short period of time even if there has just been a single trigger event.

The latter one is also called bouncing.

Bouncing is the tendency of any two metal contacts in an electronic device to generate multiple signals as the contacts close or open; debouncing is any kind of hardware or software that ensures that only a single signal will be acted upon for a single opening or closing of a contact.

To address problems that can arise from these kinds of trigger signals MATRIX VISION offers debouncing filters at the digital inputs of a device.

The debouncing filters can be found under "Setting -> Base -> Camera -> GenICam -> Digital I/O Control -> LineSelector" (red box in Figure 1) for each digital input:

Figure 1: wxPropView - Configuring Digital Input Debounce Times

Each digital input (LineMode equals Input) that can be selected via the LineSelector property will offer its own property to configure the debouncing time for falling edge trigger signals ("mv Line Debounce Time Falling Edge") and rising edge ("mv Line Debounce Time Rising Edge") trigger signals.

The line debounce time can be configured in microseconds with a maximum value of up to 5000 microseconds. Internally, each time an edge is detected at the corresponding digital input, a timer is started (orange * in figures 2 and 3) that is reset whenever the signal applied to the input falls below the threshold again. Only if the signal stays at a constant level for a full period of the defined mvLineDebounceTime will the input signal be considered a valid trigger signal.

Note
Of course this mechanism delays the image acquisition by the debounce time.
Figure 2: mvLineDebounceTimeRisingEdge Behaviour
Figure 3: mvLineDebounceTimeFallingEdge Behaviour
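
A sketch of the same configuration in code, assuming the wrapper exposes the properties as mvLineDebounceTimeRisingEdge / mvLineDebounceTimeFallingEdge (values in microseconds, example values):

#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

  // more code
  GenICam::DigitalIOControl io( pDev );
  io.lineSelector.writeS( "Line4" );
  // ignore glitches shorter than 100 us on both edges
  io.mvLineDebounceTimeRisingEdge.write( 100 );
  io.mvLineDebounceTimeFallingEdge.write( 100 );
  // more code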



Using motorized lenses with mvBlueCOUGAR-XD

Introduction

In contrast to machine vision applications with constant lightings and fixed focal lengths, outdoor applications such as traffic monitoring, security, or sports demand to control image brightness, field of view, zoom, focus, or iris. Lenses with motors offer the possibility to remotely manipulate these settings and the mvBlueCOUGAR-XD offers the possibility to control motorized lenses.

Types of motorized lenses and their controls

Motorized lenses differ by the fact what element is motorized and how:

  • Zoom
  • Focus
  • Iris
    • Motor
    • Video
    • DC

Lenses with motors differ by the voltage they accept and by certain wiring specialties. Driving voltages may be between 3 and 12V DC, wiring may be 2 wires per motor (a.k.a. bipolar) or with one wire per motor and common ground. Some lenses offer potentiometers so that the actual position can be measured by a resistance. These potentiometers are not supported by the mvBlueCOUGAR-XD camera.

List of usable lenses

Motorized lenses differ by the max. sensor diameter they support and by resolution limits. mvBlueCOUGAR-XD cameras use sensors with 2/3" to 1" diameter. Lens/camera combinations must be selected having these properties in mind. The following is a list of usable lenses. It is provided for reference only. Exclusion from this list does not mean that the product is not usable with the camera per se.

Manufacturer Details Motorized Iris Motorized Focus Motorized Zoom Video Iris
KOWA Motorized LMZ-series up to 1" and 5 MPix resolution X X X X
Goyo Optical GAZ series 2/3" – 1" X X X
Linos Mevis motorized X X
Schneider Optics Cinegon/Xenoplan: motorized iris X X
Computar 2/3" M6Z series X X X
Fujinon 2/3" and 1" lenses X X X X

Connecting the motorized lens to a camera

Connecting the direct drive lens motors

mvBlueCOUGAR-XD offers two connectors at the back. Use the female one on the right side for lens connection. Pinning is shown in the table on the left side below:

Figure 1: mvBlueCOUGAR-XD connectors
  Female connector
Pin. Signal
1 Opto DigIn2 (line6)
2 Opto GND
3 Opto DigIn3 (line7)
4 Focus+
5 Focus-
6 Zoom+
7 Zoom-
8 Iris+
9 Iris-
10 Channel4+
11 Channel4-
12 GND
Figure 2: Wiring of full motorized zoom lens

The image above shows a typical lens wiring; the three independent motors can be seen.

  • Connect Pin4 (Focus+) to CN Pin4.
  • Connect Pin5 (Focus-) to CN Pin3.
  • etc.

mvBlueCOUGAR-XD can deliver up to 100mA current with a selectable voltage to the outputs Focus, Zoom and Iris. Please note that the voltage applied is independent of the supply voltage of the camera. Channel 4 can be left open.

Connecting the video iris

The mvBlueCOUGAR-XD camera generates a video-like signal containing average brightness information and sync signals to drive a video iris type lens. The diaphragm of the lens closes at increasing brightness, keeping the resulting overall brightness reaching the sensor constant.

Advantage of the video iris compared to AutoExposure: a bigger range of brightness variation and no smear in the CCD, because extremely bright light is blocked from hitting the sensor; however, it is slower than AutoExposure and AutoGain, which are also supported by the camera. The pinning of the standardized video iris connector (4 pin EIAJ) is shown below.

Figure 3: Pinning of the video iris

Use the square 4 pin iris connector of the camera to directly connect the video iris.

Controlling the lens via viewer or the API (aka mvIMPACT Acquire)

Usage of the lens control wizard of wxPropView is recommended for setup:

Figure 4: Lens control wizard

Select the "Drive Level" voltage according to the lens type. Focus, Zoom and Iris buttons drive the motors at a selectable speed.

"Video Iris" can be selected to open or completely close the Iris (for setup) and for auto mode.

Note
Additional settings such as level (sensitivity) and/or ALC (peak or average) may be possible directly at the lens (via potentiometer).
ALC settings have no effect due to the digital video signal!
Consult the manual of the lens for more details.

Using AGC/AEC & mvIrisAuto may lead to oscillating brightness.

Setting the video iris (example)

Purpose is to bring the video iris into a usable range so that during operation it can open if brightness goes down and further close if brightness goes up.

  1. Open the iris by using the mvIrisOpen command: This opens the iris to the min. F number (f/N) supported by the lens, e.g. f/1.2 (see lens manual).
  2. Set the exposure of the camera so that the image is not saturated.
  3. Set the exposure to 4 x this minimum; the auto iris will then move the lens to f/2.4.
  4. Setting the working exposure to 16 x the minimum will move the auto iris to f/4.8.

Controlling the lens via 3rd party libraries or APIs

The properties for "mv Lens Control" are MATRIX VISION specific but appear in the camera’s XML-file according to GigE Vision standards and SFNC thanks to the standard. This makes it possible to use the features from third party applications or programs without problems.

Figure 5: wxPropView - mv Lens Control

The screenshot below shows how the properties appear under MVTec HALCON’s image acquisition assistant:

Figure 6: HALCON’s image acquisition assistant

The next screenshot shows the respective HDevelop example under Halcon for the same settings:

Figure 7: HDevelop example



Working with HDR (High Dynamic Range Control)

There are several use cases concerning High Dynamic Range Control:



Adjusting sensor -x00w

Introduction

The HDR (High Dynamic Range) mode of the sensor -x00w increases the usable contrast range. This is achieved by dividing the integration time into two or three phases. The exposure time proportion of the phases can be set independently. Furthermore, it can be set how much signal each phase contributes.

Functionality

Figure 1: Diagram of the -x00w sensor's HDR mode

Description

  • "Phase 0"
    • During T1 all pixels are integrated until they reach the defined signal level of Knee Point 1.
    • If one pixel reaches the level, the integration will be stopped.
    • During T1 no pixel can reach a level higher than P1.
  • "Phase 1"
    • During T2 all pixels are integrated until they reach the defined signal level of Knee Point 2.
    • T2 is always smaller than T1 so that the percentage compared to the total exposure time is lower.
    • Thus, the signal increase during T2 is lower than during T1.
    • The max. signal level of Knee Point 2 is higher than of Knee Point 1.
  • "Phase 2"
    • During T3 all pixels are integrated until possible saturation.
    • T3 is always smaller than T2, so that the percentage compared to the total exposure time is again lower here.
    • Thus, the signal increase during T3 is lower than during T2.

For this reason, darker pixels can be integrated during the complete integration time and the sensor reaches its full sensitivity. Pixels which are limited at the Knee Points lose a part of their integration time; the brighter they are, the more they lose.

Figure 2: Integration time of different bright pixels

In the diagram you can see the signal curves of three pixels of different brightness. The slope depends on the light intensity and is therefore constant per pixel (provided that the light intensity is temporally constant). Since the very bright pixel soon reaches the signal levels S1 and S2, its effective integration time is lower compared to the dark pixel. In practice, the parts of the integration time are very different: T1, for example, is 95% of Ttotal, T2 only 4% and T3 only 1%. Thus, a strong attenuation of the very bright pixels can be achieved. However, if the signal thresholds are divided into three equal parts, i.e. S2 = 2 x S1 and S3 = 3 x S1, a pixel needs roughly a hundredfold brightness for the step from S2 to S3 compared to the step from 0 to S1.

Using HDR with mvBlueCOUGAR-X-x00w

Figure 3 shows the usage of the HDR mode. You can reach the HDR settings via "Setting -> Base -> Camera -> GenICam -> mv High Dynamic Range Control".

Figure 3: wxPropView HDR screenshot

Notes about the usage of the HDR mode with mvBlueCOUGAR-X-x00w

  • In the HDR mode, the basic amplification is reduced by a factor of approx. 0.7 to utilize a large dynamic range of the sensor.
  • If the manual gain is raised, this effect is reversed.
  • Exposure times which are too low make no sense. A sensible lower limit is reached when the exposure time of the third phase reaches its possible minimum (one line period).

Possible settings using mvBlueCOUGAR-X-x00w

Possible settings of the mvBlueCOUGAR-X-x00w in HDR mode are:

"mv HDR Enable":

  • "Off": Standard mode
  • "On": HDR mode on, reduced amplification:
  • "mv HDR Preset":
    • "mvDualKneePoint0": Fixed setting with 2 Knee Points. modulation Phase 0 .. 33% / 1 .. 66% / 2 .. 100%
    • "mvDualKneePoint1": Phase 1 exposure 12.5% , Phase 2 31.25% of total exposure
    • "mvDualKneePoint2": Phase 1 exposure 6.25% , Phase 2 1.56% of total exposure
    • "mvDualKneePoint3": Phase 1 exposure 3.12% , Phase 2 0.78% of total exposure
    • "mvDualKneePoint4": Phase 1 exposure 1.56% , Phase 2 0.39% of total exposure
    • "mvDualKneePoint5": Phase 1 exposure 0.78% , Phase 2 0.195% of total exposure
    • "mvDualKneePoint6": Phase 1 exposure 0.39% , Phase 2 0.049% of total exposure
    • "mvDualKneePointUser": Variable setting of the Knee Point (1..2), threshold and exposure time proportion
  • "mv HDR Selector":
    • "mv HDR Voltage first knee point": Control voltage for exposure threshold of first Knee Point (3030mV is equivalent to approx. 33%)
    • "mv HDR Voltage second knee point": Control voltage for exposure threshold of first Knee Point (2630mV is equivalent to approx. 66%)
    • "mv HDR Exposure first knee point": Proportion of Phase 0 compared to total exposure in parts per million (ppm)
    • "mv HDR Exposure second knee point": Proportion of Phase 1 compared to total exposure in parts per million (ppm)



Adjusting sensor -x02d (-1012d)

Introduction

The HDR (High Dynamic Range) mode of the Aptina sensor increases the usable contrast range. This is achieved by dividing the integration time in three phases. The exposure time proportion of the three phases can be set independently.

Functionality

To exceed the typical dynamic range, images are captured at 3 different exposure times with given ratios. The figure shows a multiple exposure capture using 3 different exposure times.

Figure 1: Multiple exposure capture using 3 different exposure times
Note
The longest exposure time (T1) represents the Exposure_us parameter you can set in wxPropView.

Afterwards, the signal is fully linearized before going through a compander to be output as a piece-wise linear signal. The next figure shows this.

Figure 2: Piece-wise linear signal

Description

Exposure ratios can be controlled by the program. Two ratios are used: R1 = T1/T2 and R2 = T2/T3.

Increasing R1 and R2 will increase the dynamic range of the sensor at the cost of lower signal-to-noise ratio (and vice versa).

Possible settings

Possible settings of the mvBlueCOUGAR-X-x02d in HDR mode are:

  • "mv HDR Enable":
    • "Off": Standard mode
    • "On": HDR mode on, reduced amplification
      • "mv HDR Preset":
        • "mvDualKneePoint0": Fixed setting with exposure-time-ratios: T1 -> T2 ratio / T2 -> T3 ratio
        • "mvDualKneePoint1": 8 / 4
        • "mvDualKneePoint2": 4 / 8
        • "mvDualKneePoint3": 8 / 8
        • "mvDualKneePoint4": 8 / 16
        • "mvDualKneePoint5": 16 / 16
        • "mvDualKneePoint6": 16 / 32
Figure 3: wxPropView - Working with the HDR mode



Adjusting sensor -x02e (-1013) / -x04e (-1020)

Introduction

The HDR (High Dynamic Range) mode of the e2v sensors increases the usable contrast range. This is achieved by adjusting the logarithmic response of the pixels.

Functionality

MATRIX VISION offers the "mv Linear Logarithmic Mode" to use the HDR mode of the e2v sensors. With this mode you can set the low voltage of the reset signal at pixel level.

Figure 1: Knee-Point of the e2v HDR mode shifts the linear / logarithmic level

You can find the "mv Linear Logarithmic Mode" in "Setting -> Base -> Camera -> GenICam -> Analog Control":

Figure 2: wxPropView - mv Linear Logarithmic Mode

The following figure shows the measured curves at 2 ms and 20 ms exposure time with four different "mv Linear Logarithmic Mode" settings:

Figure 3: Measured curves

The curves were measured at Gain 1, lambda = 670 nm (40 nm width), room temperature, and nominal power supply values, on a centered 100 x 100 pixel area.

"mv Linear Logarithmic Mode" value Dynamic max (dB)
T = 2 ms T = 20 ms
4 47 65
5 74 93
6 85 104
7 92 111
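
From an application the mode can be set via the Analog Control category (C++ syntax). This is only a sketch: it assumes the GenICam wrapper exposes the feature under the name mvLinearLogarithmicMode, as displayed in wxPropView, so please verify the exact name against the C++ API manual:

#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

// assuming 'pDev' has been opened using the GenICam interface layout
GenICam::AnalogControl anc( pDev );
anc.mvLinearLogarithmicMode.write( 6 ); // e.g. value 6 from the table above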



Working with LUTs

There are several use cases concerning LUTs (Look-Up-Tables):



Introducing LUTs

Introduction

Look-Up-Tables (LUT) are used to transform input data into a desirable output format. For example, if you want to invert an 8 bit image, a Look-Up-Table will look like the following:

Figure 1: Look-Up-Table which inverts a pixel of an 8 bit mono image

I.e., a pixel which is white in the input image (value 255) will become black (value 0) in the output image.

All MATRIX VISION devices use a hardware based LUT which means that

  • no host CPU load is needed and
  • the LUT operations are independent of the transmission bit depth.

Setting the hardware based LUTs via LUT Control

On the mvBlueCOUGAR-X using wxPropView, you will find the LUT Control via "Setting -> Base -> Camera -> GenICam -> LUT Control".

wxPropView offers a wizard for the LUT Control usage:

  1. Click on "Setting -> Base -> Camera -> GenICam -> LUT Control".
    Now, the "Wizard" button becomes active.

    Figure 2: wxPropView - LUT Control wizard button

  2. Click on the "Wizard" button to start the LUT Control wizard tool.
    The wizard will load the LUT data from the camera.

    Figure 3: wxPropView - LUT Control wizard dialog

With the help of the wizard it is easy to change settings like the Gamma value of the luminance or of each color channel (in combination with a color sensor, of course). You can also invert the values of each pixel. Within the wizard it is not possible to set a LUT mode, and the "mv LUT Mapping" is fixed.

Make your changes and do not forget to

  1. click on "Copy to..." and select "All" or the color channel you need, to
  2. click on "Enable All", and finally, to
  3. click on Synchronize and play the settings back to the device (via "Cache -> Device").
Note
If you select "Enable All" without entering any value the image will be inverted.

Setting the Host based LUTs via LUTOperations

Host-based LUTs are also available via "Setting -> Base -> ImageProcessing -> LUTOperations". Here, the changes will affect the 8 bit image data and the processing needs the CPU of the host system.

Three "LUTMode"s are available:

  • "Gamma"
    You can use "Gamma" to lift darker image areas and to flatten the brighter ones. This compensates the contrast of the object. The calculation is described here. It makes sense to set the "GammaStartThreshold" higher than 0 to avoid a too extreme lift or noise in the darker areas.
  • "Interpolated"
    With "Interpolated" you can set the key points of a characteristic line. You can defined the number of key points. The following figure shows the behavior of all 3 LUTInterpolationModes with 3 key points:
Figure 4: LUTMode "Interpolated" -> LUTInterpolationMode
  • "Direct"
    With "Direct" you can set the LUT values directly.

Example 1: Inverting an Image

To get an inverted 8 bit mono image like the one shown in Figure 1, you can set the LUT using wxPropView. After starting wxPropView and opening the device,

  1. Set "LUTEnable" to "On" in "Setting -> Base -> ImageProcessing -> LUTOperations".
  2. Afterwards, set "LUTMode" to "Direct".
  3. Right-click on "LUTs -> LUT-0 -> DirectValues[256]" and select "Set Multiple Elements... -> Via A User Defined Value Range".
    This is one way to get an inverted result. It is also possible to use the "LUTMode" - "Interpolated".
  4. Now you can set the range from 0 to 255 and the values from 255 to 0 as shown in Figure 5.
Figure 5: Inverting an image using wxPropView with LUTMode "Direct"
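
The same inversion can also be configured from an application working on the host-based LUT. This is only a sketch: the member names below mirror the property names displayed in wxPropView ("LUTEnable", "LUTMode", "DirectValues") and should be verified against the mvIMPACT Acquire C++ API manual:

#include <mvIMPACT_CPP/mvIMPACT_acquire.h>
#include <vector>

// assuming 'pDev' has been opened
mvIMPACT::acquire::ImageProcessing ip( pDev );
ip.LUTEnable.write( bTrue );   // "LUTEnable = On"
ip.LUTMode.writeS( "Direct" ); // "LUTMode = Direct"
std::vector<int> invertedLUT( 256 );
for( int i = 0; i < 256; i++ )
{
  invertedLUT[i] = 255 - i; // white (255) becomes black (0) and vice versa
}
ip.getLUTParameter( 0 ).directValues.write( invertedLUT ); // "LUT-0 -> DirectValues[256]"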



Working with LUTValueAll

Working with the LUTValueAll feature requires a detailed understanding of both Endianness and the camera's internal format for storing LUT data. LUTValueAll typically references the same section in the camera's memory as accessing the LUT via the features LUTIndex and LUTValue does.

LUT data can either be written to a device like this (C++ syntax):

const size_t LUT_VALUE_COUNT = 256;
int64_type LUTData[LUT_VALUE_COUNT];
getLUTDataToWriteToTheDevice( LUTData, LUT_VALUE_COUNT ); // placeholder: fill 'LUTData' with the values that shall be written
mvIMPACT::acquire::GenICam::LUTControl lut(getDevicePointerFromSomewhere());
for(int64_type i=0; i< static_cast<int64_type>(LUT_VALUE_COUNT); i++ )
{
  lut.LUTIndex.write( i );
  lut.LUTValue.write( LUTData[i] );
}

When using this approach, all Endianness related issues will be handled completely by the GenICam runtime library. This code is straightforward and easy to understand, but it might be slower than desired as it requires a lot of direct register accesses to the device.

In order to allow a fast and efficient way to read/write LUT data from/to a device, the LUTValueAll feature has been introduced. When using this feature the complete LUT can be written to a device like this:

const size_t LUT_VALUE_COUNT = 256;
int LUTData[LUT_VALUE_COUNT];
getLUTDataToWriteToTheDevice( LUTData, LUT_VALUE_COUNT ); // placeholder: fill 'LUTData' with the values that shall be written
mvIMPACT::acquire::GenICam::LUTControl lut(getDevicePointerFromSomewhere());
std::string buf(reinterpret_cast<std::string::value_type*>(&LUTData), sizeof(LUTData));
lut.LUTValueAll.writeBinary( buf );

However, as this simply writes a raw block of memory to the device, it suddenly becomes important to know exactly how the LUT data is stored inside the camera. This includes:

  • The size of one individual LUT entry (this could be anything from 1 up to 8 bytes)
  • The Endianness of the device
  • The Endianness of the host system used for sending/receiving the LUT data

The first item affects how the memory must be allocated for receiving/sending LUT data. For example, when the LUT data on the device uses a 'one 32-bit integer per LUT entry with 256 entries' layout, then this is needed on the host system:

const size_t LUT_VALUE_COUNT = 256;
int LUTData[LUT_VALUE_COUNT];

When the Endianness of the host system differs from the Endianness used by the device the application communicates with, then data assembled on the host might require Endianness swapping before sending. For the example from above this would e.g. require something like this:

#define SWAP_32(l) \
  ((((l) & 0xff000000) >> 24) | \
   (((l) & 0x00ff0000) >> 8)  | \
   (((l) & 0x0000ff00) << 8)  | \
   (((l) & 0x000000ff) << 24))

void fn()
{
 const size_t LUT_VALUE_COUNT = 256;
 int LUTData[LUT_VALUE_COUNT];
 getLUTDataToWriteToTheDevice( LUTData, LUT_VALUE_COUNT ); // placeholder: fill 'LUTData' with the values that shall be written
 mvIMPACT::acquire::GenICam::LUTControl lut(getDevicePointerFromSomewhere());
 for( size_t i=0; i<LUT_VALUE_COUNT; i++ )
 {
  LUTData[i] = SWAP_32(LUTData[i]);
 }
 std::string buf(reinterpret_cast<std::string::value_type*>(&LUTData), sizeof(LUTData));
 lut.LUTValueAll.writeBinary( buf );
}

For details on how the LUT memory is organized for certain sensors please refer to the Sensor overview. Please note that all mvBlueCOUGAR-S, mvBlueCOUGAR-X and mvBlueCOUGAR-XD devices use Big Endian while almost any Windows or Linux distribution on the market uses Little Endian, thus swapping the data will most certainly be necessary when using the LUTValueAll feature.



Implementing a hardware-based binarization

If you would like to have binarized images from the camera, you can use the hardware-based Look-Up-Tables (LUT) which you can access via "Setting -> Base -> Camera -> GenICam -> LUT Control".

To get binarized images from the camera, please follow these steps:

  1. Set up the camera and the scenery, e.g.

    Figure 1: Scenery

  2. Open the LUT wizard via the menu "Wizards -> LUT Control...".
  3. Export the current LUT as a "*.csv" file.

The "*.csv" file contains just one column for the output gray scale values. Each row of the "*.csv" represents the input gray scale value. In our example, the binarization threshold is 1024 in a 12-to-9 bit LUT. I.e., we have 4096 (= 12 bit) input values (= rows) and 512 (= 9 bit) output values (column values). To binarize the image according to the threshold, you have to

  1. set all values below the binarization threshold to 0 and
  2. set all values above the binarization threshold to 511:

    Figure 2: The binarization LUT

  3. Now, save the "*.csv" file and
  4. import it via the LUT Control wizard.
  5. Click on "Synchronize" and
  6. finally check "Enable".

Afterwards the camera will output binarized images like the following:

Figure 3: Binarized image
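
The same binarization can also be written to the device from an application by re-using the LUTIndex/LUTValue access shown in the "Working with LUTValueAll" chapter. A minimal sketch for the 12-to-9 bit example above (threshold 1024, maximum output value 511); the LUTEnable handling is an assumption and should be verified for the device in use:

#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

// assuming 'pDev' has been opened using the GenICam interface layout
mvIMPACT::acquire::GenICam::LUTControl lut( pDev );
const int64_type LUT_VALUE_COUNT = 4096; // 12 bit input values
const int64_type threshold = 1024;
for( int64_type i = 0; i < LUT_VALUE_COUNT; i++ )
{
  lut.LUTIndex.write( i );
  lut.LUTValue.write( ( i < threshold ) ? 0 : 511 ); // 9 bit output: black or white only
}
lut.LUTEnable.write( bTrue ); // corresponds to checking "Enable" in the wizard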



Saving data on the device

Note
As described in Storing and restoring settings, it is also possible to save the settings as an XML file on the host system. You can find further information, for example about the XML compatibility of the different driver versions, in the mvIMPACT Acquire SDK manuals and the corresponding setting classes: https://www.matrix-vision.com/manuals/SDK_CPP/classmvIMPACT_1_1acquire_1_1FunctionInterface.html (C++)

There are several use cases concerning device memory:



Creating user data entries

Basics about user data

It is possible to save arbitrary user specific data in the hardware's non-volatile memory. The number of possible entries depends on the length of the individual entries as well as on the size of the device's non-volatile memory reserved for this purpose. The following devices:

  • mvBlueFOX,
  • mvBlueFOX-M,
  • mvBlueFOX-MLC,
  • mvBlueFOX3, and
  • mvBlueCOUGAR-X

currently offer 512 bytes of user accessible non-volatile memory of which 12 bytes are needed to store header information leaving 500 bytes for user specific data.

One entry will currently consume:
1 + <length_of_name (up to 255 chars)> + 2 + <length_of_data (up to 65535 bytes)> + 1 (access mode) bytes

as well as an optional:
1 + <length_of_password> bytes per entry if a password has been defined for this particular entry

It is possible to save either String or Binary data in the data property of each entry. When storing binary data please note that this data will internally be stored in Base64 format, thus the amount of memory required is 4/3 times the binary data size.

The UserData can be accessed and created using wxPropView (the device has to be closed). In the section "UserData" you will find the entries and the following methods:

  • "CreateUserDataEntry"
  • "DeleteUserDataEntry"
  • "WriteDataToHardware"
Figure 1: wxPropView - section "UserData -> Entries"

To create a user data entry, you have to

  • Right click on "CreateUserDataEntry"
  • Select "Execute" from the popup menu.
    An entry will be created.
  • In "Entries" click on the entry you want to adjust and modify the data fields.
    To permanently commit a modification made with the keyboard the ENTER key must be pressed.
  • To save the data on the device, you have to execute "WriteDataToHardware". Please have a look at the "Output" tab in the lower right section of the screen as shown in Figure 2, to see if the write process returned with no errors. If an error occurs a message box will pop up.
Figure 2: wxPropView - analysis tool "Output"

Coding sample

If you e.g. want to use the UserData as a dongle mechanism (with binary data), using wxPropView is not recommended. In this case you have to handle the user data programmatically.

See also
mvIMPACT::acquire::UserDataEntry in mvIMPACT_Acquire_API_CPP_manual.chm.



Creating user set entries

With mvBlueCOUGAR-X it is possible to store up to five configuration sets (4 user plus one factory default) in the camera.

This feature is similar to the storing settings functionality, which saves the settings in the registry. However, as mentioned before, the user sets are stored in the camera itself.

The user set stores the settings permanently in the camera and is independent of the computer which is used.

Additionally, you can select which user set is loaded after a hard reset.

Attention
Settings stored in the registry on the host can still override user set data!
User sets are cleared after a firmware change.

List of ignored properties

The following properties are not stored in the user set:

                    - ActionUnconditionalMode
                    - DeviceLinkHeartbeatMode
                    - DeviceLinkHeartbeatTimeout
                    - DeviceStreamChannelEndianness
                    - DeviceTLType
                    - EventExposureEndData
                    - EventFrameEndData
                    - EventLine4RisingEdgeData
                    - EventLine5RisingEdgeData
                    - FileAccessBuffer
                    - FileAccessLength
                    - FileAccessOffset
                    - FileOpenMode
                    - GevCCP
                    - GevCurrentIPConfigurationDHCP
                    - GevCurrentIPConfigurationPersistentIP
                    - GevDiscoveryAckDelay
                    - GevGVCPExtendedStatusCodes
                    - GevGVCPHeartbeatDisable
                    - GevGVCPPendingAck
                    - GevGVSPExtendedIDMode
                    - GevHeartbeatTimeout
                    - GevMCDA
                    - GevMCPHostPort
                    - GevMCRC
                    - GevMCTT
                    - GevPersistentDefaultGateway
                    - GevPersistentIPAddress
                    - GevPersistentSubnetMask
                    - GevPhysicalLinkConfiguration
                    - GevPrimaryApplicationSwitchoverKey
                    - GevSCCFGAllInTransmission
                    - GevSCCFGExtendedChunkData
                    - GevSCCFGLargeLeaderTrailer
                    - GevSCCFGMultiPart
                    - GevSCCFGPacketResendDestination
                    - GevSCCFGUnconditionalStreaming
                    - GevSCDA
                    - GevSCPD
                    - GevSCPHostPort
                    - GevSCPInterfaceIndex
                    - GevSCPSBigEndian
                    - GevSCPSDoNotFragment
                    - GevSCPSFireTestPacket
                    - GevSCPSPacketSize
                    - LUTIndex
                    - LUTValueAll
                    - PtpControl
                    - PtpEnable
                    - UserSetDefault
                    - mvADCGain
                    - mvDefectivePixelCount
                    - mvDefectivePixelOffsetX
                    - mvDefectivePixelOffsetY
                    - mvDigitalGainOffset
                    - mvFFCAutoLoadMode
                    - mvI2cInterfaceASCIIBuffer
                    - mvI2cInterfaceBinaryBuffer
                    - mvI2cInterfaceBytesToRead
                    - mvI2cInterfaceBytesToWrite
                    - mvPreGain
                    - mvSerialInterfaceASCIIBuffer
                    - mvSerialInterfaceBinaryBuffer
                    - mvSerialInterfaceBytesToRead
                    - mvSerialInterfaceBytesToWrite
                    - mvUserData
                    - mvVRamp

Working with the user sets

You can find the user set control in "Setting -> Base -> Camera -> GenICam -> User Set Control":

Figure 1: User Set Control

With "User Set Selector" you can select the user set ("Default", "UserSet1 - UserSet4"). To save or load the specific user set, you have two functions:

  • "int UserSetLoad()" and
  • "int UserSetSave()".

"User Set Default" is the property, where you can select the user set, which comes up after hard reset.

Finally, with "mv User Data" you have the possibility to store arbitrary user data.
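
From an application the same can be achieved via the User Set Control category (C++ syntax). A minimal sketch, assuming the GenICam wrapper exposes the features under the names shown below:

#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

// assuming 'pDev' has been opened using the GenICam interface layout
GenICam::UserSetControl usc( pDev );
usc.userSetSelector.writeS( "UserSet1" ); // select the set to work with
usc.userSetSave.call();                   // store the current settings in UserSet1
usc.userSetDefault.writeS( "UserSet1" );  // load UserSet1 after the next hard reset
// ... later, to restore the stored settings again:
usc.userSetLoad.call();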



Working with the UserFile section (Flash memory)

The mvBlueCOUGAR-X offers a 64 KByte section in the Flash memory that can be used to store a custom file (UserFile).

To read or write this file you can use the following GenICam File Access Control and its interfaces:

  • IDevFileStream (read)
  • ODevFileStream (write)
Attention
The UserFile is lost each time a firmware update is applied to the device.

Using wxPropView

wxPropView offers a wizard for the File Access Control usage:

  1. Click on "Setting -> Base -> Camera -> GenICam -> File Access Control -> File Selector -> File Operator Selector".
    Now, the "Wizard" button becomes active.

    Figure 1: wxPropView - UserFile wizard

  2. Click on the "Wizard" button.
    Now, a dialog appears where you can choose either to upload or download a file.

    Figure 2: wxPropView - Download / Upload dialog

  3. Make your choice and click on "OK".
    Now, a dialog appears where you can select the File.

    Figure 3: wxPropView - Download / Upload dialog

  4. Select "UserFile" follow the instructions.

Manually control the file access from an application (C++)

The header providing the file access related classes must be included into the application:

#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam_FileStream.h>

A write access then will look like:

const string fileNameDevice("UserFile");

// uploading a file
mvIMPACT::acquire::GenICam::ODevFileStream file;
file.open( pDev, fileNameDevice.c_str() );

if( !file.fail() )
{
  // Handle the successful upload.
}
else
{
  // Handle the error.
}
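
ODevFileStream is modeled on a standard C++ output stream, so after a successful open() the actual file content can be written with the usual stream operators; a minimal sketch (the payload is illustrative, and the upload is completed when the stream is closed):

// inside the '!file.fail()' branch from above
const string dataToUpload( "my application specific data" ); // illustrative payload
file << dataToUpload; // stream the content to the device
file.close();         // close the stream to complete the upload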

A read access will look like:

const string fileNameDevice("UserFile");

// downloading a file works in a similar way
mvIMPACT::acquire::GenICam::IDevFileStream file;
file.open( pDev, fileNameDevice.c_str() );

if( !file.fail() )
{
  // Handle the successful download.
}
else
{
  // Handle the error.
}

You can find a detailed code example in the C++ API manual in the documentation of the classes mvIMPACT::acquire::GenICam::IDevFileStream and mvIMPACT::acquire::GenICam::ODevFileStream.

Working with device features

There are several use cases concerning device features:



Working with the temperature sensors

The mvBlueCOUGAR-X offers two different temperature sensors:

  • on the sensor board (typically lower)
  • on the FPGA board (typically higher)
Figure 1: wxPropView - Device Temperature Selector
Note
Avoid temperatures higher than 80 C by lowering the thermal resistance between the mvBlueCOUGAR-X housing and the connecting structure or by means of active cooling.

It is possible to regulate the temperature of the camera. The limits of this feature are

  • upper limit (<= 255 C) and
  • lower limit (>= 0 C).

Furthermore, a hysteresis (0, 1.5, 3, or 6 C) between switching on and off prevents oscillation. A temperature out of range will set the selected output high (see Figure 2). The output can directly drive a fan or a (modest) heating system (but not both, due to the current limit of the IC).

Figure 2: wxPropView - Temperature I/O setting
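
The temperatures can also be read from an application via the Device Control category (C++ syntax). A minimal sketch; the selector values are illustrative, please check the valid entries of "Device Temperature Selector" in wxPropView:

#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

// assuming 'pDev' has been opened using the GenICam interface layout
GenICam::DeviceControl dc( pDev );
dc.deviceTemperatureSelector.writeS( "Sensor" );    // e.g. the sensor board (illustrative entry)
double sensorTemperature = dc.deviceTemperature.read();
dc.deviceTemperatureSelector.writeS( "Mainboard" ); // e.g. the FPGA/main board (illustrative entry)
double mainboardTemperature = dc.deviceTemperature.read();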



Disabling the heartbeat

The GigE Vision Control Protocol (GVCP) is an application protocol relying on the UDP transport layer protocol. Because UDP is a connectionless protocol, a recovery mechanism has been defined to detect that one of the two participants has been detached without closing the control channel before. A heartbeat message is used to achieve this.

In the default operation mode, these heartbeat messages are exchanged with a user selectable frequency ("Setting -> Base -> Camera -> GenICam -> Device Control -> Device Link Heartbeat Timeout"; orange box in Figure 1). As a result, during this interval a certain register (GevCCP = Control Channel Privilege) is read several times from the device to reset the heartbeat timer interval in order to ensure the control channel stays open.

Figure 1: wxPropView - Transport Layer Control
Figure 2: Screenshot of a network analyzer - GVCP Protocol

If the heartbeat disable bit is set ("Device Link Heartbeat Mode = On"; red box in Figure 1), the transport layer lib will stop reading the GevCCP register periodically. Disabling the heartbeat makes sense in at least two scenarios:

In both scenarios, one participant pauses the connection on purpose.

If the heartbeat disable bit is reset again, the transport layer lib will start reading the CCP register periodically again.
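
From an application the heartbeat handling can be controlled via the Device Control category (C++ syntax). A minimal sketch, following the "Device Link Heartbeat Mode" naming used above:

#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

// assuming 'pDev' has been opened using the GenICam interface layout
GenICam::DeviceControl dc( pDev );
dc.deviceLinkHeartbeatMode.writeS( "On" );  // stop the periodic GevCCP reads (see above)
// ... pause the connection on purpose ...
dc.deviceLinkHeartbeatMode.writeS( "Off" ); // resume the periodic GevCCP reads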



Reset timestamp by hardware

This feature can be used

  • for precise control of the timestamp
    • for one camera or
    • to synchronize the timestamps of a multitude of cameras.

The latter can be achieved with the following steps:

  1. Define the input line ("TriggerSource") to reset the timestamp, e.g. "Line5" and
  2. set the "Trigger Selector" to "mvTimestampReset".
  3. Connect all input lines of all cameras together.
  4. Finally, use one output of one camera to generate the reset edge:

    Figure 1: wxPropView - Setting the sample

Note
Be aware of the drift of the individual timestamps.

The timestamp is generated by the FPGA in the camera, which itself is clocked by a crystal oscillator. This is done independently in each camera and is by default not synchronized among cameras or with the host system.

The typical stability of crystal oscillators is in the range of 100 ppm (parts per million).

I.e. for longer operation times (say in excess of hours) there is a tendency that timestamps of individual cameras drift against each other and against the time in the operating system of the host.

Customers wishing to use the individual camera timestamps for synchronization and identification of images in multi-camera systems will in the meantime have to reset all timestamps either by a hardware signal or by command and regularly resynchronize or check the drift algorithmically, in order to make sure that the drift is less than half an image frame time.



Synchronizing camera timestamps without IEEE 1588

Introduction

Camera timestamps are a recommended GenICam / SFNC feature to add the information when an image was taken (exactly: when the exposure of the image started).

Without additional synchronization it is merely a camera-individual timer with a vendor specific increment and implementation dependent accuracy. Each camera starts its own timestamp beginning with zero and there are no means to adjust or synchronize them among cameras or host PCs. There is an ongoing effort to widely establish the precision timestamp according to "IEEE 1588" in GigE cameras. This involves cameras which are able to perform the required synchronization as well as specific network hardware, driver software and procedures to do and maintain the synchronization.

There are many applications which do not or cannot profit from "IEEE 1588" but have certain synchronization needs. Solutions for these scenarios are described below.

Resetting timestamp using mvTimestampReset

First of all, the standard does not provide a hardware means to reset the timestamp in a camera other than unplugging it and plugging it in again. Therefore MATRIX VISION has created its own mechanism, mvTimestampReset, to reset the timestamp by a hardware input.

Figure 1: mvTimestampReset

This can be used elegantly for synchronization purposes by wiring an input of all cameras together and resetting all camera timestamps at the beginning with a defined signal edge from the process. From this reset on, all cameras start at zero local time and increment their timestamps independently, so that we achieve a basic accuracy only limited by the drift of the clock main frequency (e.g. a 1 MHz oscillator in the FPGA) over time.
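
Configured from code instead of wxPropView, the timestamp reset trigger can be set up like this (C++ syntax, mirroring the C# snippet shown further below; "Line5" is just an example input):

#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

// assuming 'pDev' has been opened using the GenICam interface layout
GenICam::AcquisitionControl ac( pDev );
ac.triggerSelector.writeS( "mvTimestampReset" );
ac.triggerSource.writeS( "Line5" ); // the input line all cameras are wired to
ac.triggerMode.writeS( "On" );      // from now on each edge on this line resets the timestamp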

In order to compensate for this drift we can in addition reset the timestamp every second or minute or so and count the reset pulses with a counter in each camera. Assuming this reset pulse is generated by the master camera itself by means of a timer and output as the hardware reset signal for all cameras, we can now count the reset pulses with all cameras and put both the count and the reset timestamp as so-called chunk data into the images.

We thus have achieved a synchronized timestamp with the precision of the master camera among all connected cameras.

Settings required are shown using MATRIX VISION’s wxPropView tool:

Figure 2: Reset the timestamp every second

An example of the chunk data attached to the image can be seen below. The timestamp is in µs and Counter1 counts the reset pulses, in this case generated by the camera itself via Timer1.

Figure 3: ChunkData

The task of resetting the counter at the beginning of the acquisition can be done by setting the reset property accordingly. Of course this is all independent of whether the camera is acquiring images in triggered or continuous mode.

Synchronizing timestamp using a pulse-per-second signal

In order to eliminate the unknown drifts of different devices, a PPS (pulse per second) signal can be fed into each camera using a PC with NTP (network time protocol software), GPS devices or even a looped-back camera timer.

From these pulses the device will find out how long one second is. When a device detects that it is no longer running precisely it will adapt its internal clock, leading to a "stabilized oscillator".

The devices would then maintain their timestamp differences over long times and stay synchronized. However, the initial difference between the timers - before the PPS was used - remains. If you aim to eliminate that as well, you can use mvTimestampReset up front with the same PPS input signal. In an application this can be configured like this (C# syntax):

// --------------------------------------------------------------------------
bool waitForNextPulse(Device pDev, String triggerLine)
// --------------------------------------------------------------------------
{
    GenICam.CounterAndTimerControl ctc = new GenICam.CounterAndTimerControl(pDev);
    ctc.counterEventSource.writeS(triggerLine);
    long momentaryValue = ctc.counterValue.read();
    for (int i=0; i<12; i++)
    {
        System.Threading.Thread.Sleep(100);
        if (momentaryValue != ctc.counterValue.read()) 
        {
            return true;
        }
    }
    return false;
}

// --------------------------------------------------------------------------
void SetupPPS(Device[] pDevs)
// --------------------------------------------------------------------------
{
    string TriggerLine = "Line4";
    
    if (!waitForNextPulse(pDevs[0],TriggerLine)) 
    {
        Console.WriteLine("No pulse seems to be present");
        return;
    }

    // Now configure all the devices to reset their timestamp with each pulse coming
    // on the trigger line that the PPS signal is connected to.
    foreach(Device aDevice in pDevs) 
    {
        GenICam.AcquisitionControl ac = new GenICam.AcquisitionControl(aDevice);
        ac.triggerSelector.writeS("mvTimestampReset");
        ac.triggerSource.writeS(TriggerLine);
        ac.triggerMode.writeS("On");
    }

    // wait for the next pulse that will then reset the timestamps of all the devices
    if (!waitForNextPulse(pDevs[0],TriggerLine)) 
    {
        Console.WriteLine("the pulses aren't coming fast enough ...");
        return;
    }

    // Now switch off the reset of the timestamp again. All devices did restart their
    // timestamp counters and will stay in sync using the PPS signal now
    foreach(Device aDevice in pDevs)
    {
        GenICam.AcquisitionControl ac = new GenICam.AcquisitionControl(aDevice);
        ac.triggerMode.writeS("Off");
        GenICam.DeviceControl dc = new GenICam.DeviceControl(aDevice);
        dc.mvTimestampPPSSync.writeS("Line4");
    }
}

Using a looped-back camera timer for the PPS signal

To reduce the amount of hardware needed you might want to sacrifice some timestamp validity and use one of the cameras as a master clock. This can be done like this:

  • setting Timer1 to duration 1s
  • starting Timer2 with every Timer1End and generating a short pulse (duration = 1000 us)
  • placing Timer2Active on one of the digital I/O lines and using that as the source for the PPS signal
Figure 4: Setup for looped-back Timer
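
A sketch of this configuration in code (C++ syntax), re-using the timer and digital I/O calls shown in the "Creating synchronized acquisitions using timers" chapter; the line names and the slave device pointer are examples:

#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

// assuming 'pDev' is the master camera, opened using the GenICam interface layout
GenICam::CounterAndTimerControl ctc( pDev );
ctc.timerSelector.writeS( "Timer1" );
ctc.timerTriggerSource.writeS( "Timer1End" ); // let Timer1 restart itself
ctc.timerDuration.write( 1000000. );          // 1 s (999997 us for 'Type 2' cameras, see the notes below)
ctc.timerSelector.writeS( "Timer2" );
ctc.timerTriggerSource.writeS( "Timer1End" ); // start Timer2 whenever Timer1 ends
ctc.timerDuration.write( 1000. );             // 1000 us pulse

GenICam::DigitalIOControl io( pDev );
io.lineSelector.writeS( "Line0" );            // example output line
io.lineSource.writeS( "Timer2Active" );       // put the pulse onto the output

// on each receiving camera ('pSlaveDev' is an example device pointer) enable the PPS
// synchronization on the wired input; see the notes below regarding the master camera itself
GenICam::DeviceControl dc( pSlaveDev );
dc.mvTimestampPPSSync.writeS( "Line4" );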

Setting Timer1 to 1 s seems like an easy task, but due to some internal dependencies you should be careful here. At the moment two different timer implementations are present in our products.

  • Type 1: For mvBlueCOUGAR-X cameras with sensors other than the IMX family from Sony please set the duration to the theoretical value of 1000000 us.
  • Type 2: For all the other cameras please use 999997 us duration since the self-triggering will consume the other 3 us.
  • Please refrain from switching on PPSSync inside the master camera (at least in Type 1 cameras) since this will lead to an unstable feedback loop.



Working with the 3 head model

The 3 head model behaves like a standard mvBlueCOUGAR-X camera and for this reason it can be used with any GigE Vision compliant software. The information of the three sensors is returned as a packed pseudo-RGB image, with each color channel representing one sensor.

When working with trigger signals, a single trigger signal will trigger all sensors simultaneously.

Because of the three sensor heads, you can set the gain and the offset (black level) of each sensor separately in "Setting -> Base -> Camera -> GenICam -> Analog Control -> "

  • "Gain Selector"
  • "Black Level Selector"
Figure 1: wxPropView - Gain and Black Level Selector
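
From an application the per-sensor adjustment can be done via the Analog Control category (C++ syntax). A minimal sketch; the selector entries and values are illustrative, please check the valid entries of "Gain Selector" / "Black Level Selector" in wxPropView:

#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

// assuming 'pDev' has been opened using the GenICam interface layout
GenICam::AnalogControl anc( pDev );
anc.gainSelector.writeS( "AnalogAll" ); // select the sensor/channel to adjust (illustrative entry)
anc.gain.write( 6.0 );                  // gain for the selected sensor (illustrative value)
anc.blackLevelSelector.writeS( "All" ); // select the sensor/channel to adjust (illustrative entry)
anc.blackLevel.write( 8.0 );            // black level (offset) for the selected sensor (illustrative value)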



Working with the serial interface (mv Serial Interface Control)

Introduction

As mentioned in the GenICam and Advanced Features section of this manual, the mv Serial Interface Control is a feature which allows an easy integration of motor lenses or other peripherals based on RS232.

  • Available message buffer size: 1 KByte
Note
Use the Power GND for the RS232 signal.

Setting up the device

Follow these steps to prepare the camera for the communication via RS232:

Figure 1: wxPropView - mv Serial Interface Control
  1. Start wxPropView
  2. Connect to the camera
  3. Under "Setting -> Base -> Camera -> GenICam -> mv Serial Interface Control" activate the serial interface by enabling
    "mv Serial Interface Enable" (1).
    Afterwards "mv Serial Interface Control" is available.
  4. Set up the connection settings to your needs (2).
  5. To test the settings you can send a test message (3).
  6. Send messages by executing the function "int mvSerialInterfaceWrite( void )", either by clicking on the 3 dots next to the function name or by right-clicking on the command and then selecting "Execute" from the context menu (4).

If you listen to the RS232 serial line using a tool like PuTTY with matching settings...

Figure 2: PuTTY - Setting up the serial interface

you will see the test message:

Figure 3: PuTTY - Receiving the test message

Programming the serial interface

#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

  // more code
  GenICam::mvSerialInterfaceControl sic( pDev );
  sic.mvSerialInterfaceBaudRate.writeS( "Hz_115200" );
  sic.mvSerialInterfaceASCIIBuffer.writeS( "Test Test Test" );
  sic.mvSerialInterfaceWrite();
  // more code
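
Receiving data works in a similar way. The following sketch is an assumption based on the read-related properties listed elsewhere in this document (mvSerialInterfaceBytesToRead, mvSerialInterfaceASCIIBuffer); the exact names of the read function and the buffer handling should be verified in wxPropView / the API manual:

  // more code
  sic.mvSerialInterfaceBytesToRead.write( 16 );                           // number of bytes to fetch
  sic.mvSerialInterfaceRead();                                            // transfer them into the buffer (assumed function name)
  const std::string response = sic.mvSerialInterfaceASCIIBuffer.readS();  // the received data
  // more code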



Working with several cameras simultaneously

There are several use cases concerning multiple cameras:



Introducing multicasting

GigE Vision supports streaming data over a network via Multicast to multiple destinations simultaneously. Multicast means that one source streams data to a group of recipients in the same subnet. As long as the recipients are members of this Multicast group they will receive the packets. Other members of the same local subnet will skip the packets (but the packets are still sent and therefore consume bandwidth).

See also
http://en.wikipedia.org/wiki/Multicast

To set up a Multicast environment, GigE Vision introduced camera access types. The most important ones are

  • Control and
  • Read.

With Control, a primary application can be used to set up the camera which will stream the data via Multicast. With Read, you can set up secondary applications which will play back the data stream.

Figure 1: Multicast setup

Because the mvBlueCOUGAR cameras are GigE Vision compliant devices, you can set up a Multicast environment using wxPropView. This use case will show how you can do this.

Sample

On (the primary) application you have to establish "Control" access.

For this,

  1. please start wxPropView and select the camera.
  2. Click on the "Device" section.
  3. Click on "DesiredAccess" and choose "Control".

    Figure 2: wxPropView - Primary application setting DesiredAccess to "Control"

    See also
    desiredAccess and grantedAccess in
    • mvIMPACT::acquire::Device (in mvIMPACT_Acquire_API_CPP_manual.chm)
    • TDeviceAccessMode (in mvIMPACT_Acquire_API_CPP_manual.chm)
  4. Click on "Use".
  5. Now, select the "Setting -> Base -> Camera -> GenICam" section and open the "Transport Layer Control" subsection (using the device specific interface: "System Settings" and "TransportLayer").
  6. In "GevSCDA" enter a Multicast address like "239.255.255.255" .

    Figure 3: wxPropView - Primary application setting GevSCDA in "GevSCDA"

    See also
    http://en.wikipedia.org/wiki/Multicast_address
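
The primary application side of the steps above can also be set up from code (C++ syntax). A minimal sketch; the gevSCDA member name is an assumption mirroring the GenICam feature name, and the address register stores the IP as a 32-bit value:

#include <mvIMPACT_CPP/mvIMPACT_acquire.h>
#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

// 'pDev' is the device pointer obtained from the DeviceManager
pDev->interfaceLayout.write( dilGenICam );
pDev->desiredAccess.write( damControl ); // corresponds to "DesiredAccess = Control"
pDev->open();

mvIMPACT::acquire::GenICam::TransportLayerControl tlc( pDev );
tlc.gevSCDA.write( 0xEFFFFFFF ); // 239.255.255.255 from the example above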

One or more applications running on different machines can then establish "read-only" access to the very same device.

Note
The machines of the secondary applications have to be connected to the same network as the primary application.
  1. Please start wxPropView on the other machine and click on the "Device" section.

    Figure 4: wxPropView - Secondary application setting DesiredAccess to "Read"

  2. Features will not be writable as you can see at the "Transport Layer Control" parameters in Figure 5.

    Figure 5: wxPropView - Secondary application read-only "Transport Layer Control" parameters

  3. Once the primary application starts to request images, the secondary applications will be able to receive these images as well. Please click on "Use" and then "Acquire" ("Acquisition Mode" = "Continuous").

    Figure 6: wxPropView - Secondary application receives images from the primary application

Note
The machine that has "Control" access automatically joins the streaming multicast group of the camera it is controlling. If this is not desired, the "mv Auto Join Multicast Groups" property has to be set to false.
Figure 7: wxPropView - Disable mv Auto Join Multicast Groups to prevent the Control machine to receive multicast streaming data

Attention
GigE Vision (GEV) does not allow packet fragmentation for performance reasons. Because of that, and because of the nature of multicasting and network data transfer in general, it is crucial to select a packet size that every client can handle with respect to its NIC's MTU, as otherwise not all clients will be able to receive the image data. See the chapter

for details.



Using Action Commands

GigE Vision specifies so called Action Commands to trigger an action in multiple devices at roughly the same time.

Note
Due to the nature of Ethernet, this is not as synchronous as a hardware trigger since different network segments can have different latencies. Nevertheless in a switched network, the resulting jitter is acceptable for a broad range of applications and this scheme provides a convenient way to synchronize devices by means of a software command.

Action commands can be unicasted or broadcasted by applications with either exclusive, write or read (only when the device is configured accordingly) access to the device. They can be used e.g. to

  • increment or reset counters
  • reset timers
  • act as trigger sources
  • ...

The most typical scenario is when an application desires to trigger a simultaneous action on multiple devices. This case is shown by the figure below. The application fires a broadcast action command that will reach all the devices on the subnet.

Figure 1: Action command sent as a broadcast to all devices in the subnet


Attention
Due to the nature of Ethernet, an action command at a given time can only leave a single network interface. Thus sending action commands via several different network interfaces to different devices that are e.g. directly connected to a given network interface card in a PC only works sequentially and therefore these commands will NOT reach the devices at the same time. This can only be achieved using Scheduled Action Commands. Depending on whether an action command is unicasted or broadcasted the command will either reach a single device on a given subnet or multiple devices.
Note
The following diagrams assume the connecting line between the PC and the devices to be the GVCP (GigE Vision Control Protocol) control port socket. Therefore these diagrams do not make any assumption about the physical connection between the devices and the PC. However, for the examples to work they must all be connected to the same network interface of the PC using a network switch, as otherwise, as stated above, it would not be possible to send the very same action command using a single network packet to all the devices.

But action commands can also be used by secondary applications. This can even be another device on the same subnet. This is depicted in the following figure.

Figure 2: Action command sent as a broadcast by one device to all other devices in the subnet


Upon reception of an action command, the device will decode the information to identify which internal action signal is requested. An action signal is a device internal signal that can be used as a trigger for functional units inside the device (ex: a frame trigger). It can be routed to all signal sinks of the device.

Each action command message contains information for the device to validate the requested operation:

  1. device_key to authorize the action on this device.
  2. group_key to define a group of devices on which actions have to be executed.
  3. group_mask to be used to filter out some of these devices from the group.

Action commands can only be asserted if the device has an open primary control channel (so if an application has established write or exclusive access to a device) or when unconditional action mode is enabled.

A device can define several action commands which can be selected via the ActionSelector in the device features property tree.

The conditions for an action command to be asserted by the device are:

  1. the device has an open primary control channel or unconditional action mode is enabled.
  2. the device_key in the action command sent by the application and the ActionDeviceKey property of the device must be equal.
  3. the group_key in the action command sent by the application and the ActionGroupKey property for the corresponding action of the device must be equal.
  4. the logical AND-wise operation of group_mask in the action command sent by the application against the ActionGroupMask for the corresponding action of the device must be non-zero. Therefore, they must have at least one common bit set at the same position in the register.

When these 4 conditions are met for at least one action signal configured on the device, then the device internally asserts the requested action. As these conditions could be met on more than one action, the device could assert more than one action signal in parallel. When one of the 4 conditions is not met for any supported action signal, then the device ignores the action command.

The first condition asks for write or exclusive access being established between the application and the device unless the ActionUnconditionalMode has been enabled. When this bit is set, the device can assert the requested action even if no application has established write or exclusive access, as long as the 3 other conditions are met.

Scheduled Action Command

Scheduled Action Commands provide a way to trigger actions in a device at a specific time in the future. The typical use case is depicted in the following diagram:

Figure 3: Principle of scheduled action commands


The transmitter of an action command records the exact time when the source signal is asserted (external signal). This time t0 is incremented by a delta time ΔL and transmitted in an action command to the receivers. The delta time ΔL has to be larger than the longest possible transmission and processing latency of the action command in the network.

If the packet passes the action command filters in the receiver, then the action signal is put into a time queue (the depth of this queue is indicated by the ActionQueueSize property).

When the time of the local clock is greater than or equal to the time of an action signal in the queue, the signal is removed from the queue and asserted. Combined with the timestamp precision of IEEE 1588, which can be sub-microsecond, a Scheduled Action Command provides a way to allow a low-jitter software trigger. If the sender of an action command is not capable of setting a future time in the packet, the action command has a flag to fall back to legacy mode (bit 0 of the flag field). In this mode the signal is asserted the moment the packet passes the action command filters.

Attention
Scheduled action commands are not supported by every device. A device supporting scheduled action commands should also support time stamp synchronization based on IEEE1588. MATRIX VISION devices currently do NOT support these features.

Examples

The following examples illustrate the behavior of action commands in various scenarios. The figure below shows 4 different action commands sent by an application. The content of the action command packet is illustrated on the left side of the figure.

Figure 4: Action command examples


The content of the action command must be examined against the conditions listed above for each supported action signal.

For the first request (ACTION_CMD #1)

            Action Command 1   Device 0                               Device 1
                               ACTION_0     ACTION_1     ACTION_2     ACTION_0
device_key  0x34638452         0x34638452   0x34638452   0x34638452   0x34638452
group_key   0x00000024         0x00000024   0x00000042   0x12341244   0x00000024
group_mask  0x00000003         0x00000001   0xFFFFFFFF   0x00000000   0x00000002

Device 0 receives the request and looks for the 4 conditions.

  1. exclusive or write access has been established between the application and the device (or unconditional action is enabled)
  2. device_key matches
  3. group_key matches
  4. Logical AND-wise comparison of requested group_mask is non-zero

All 4 conditions are met only for ACTION_0, thus the device immediately asserts the internal signal represented by ACTION_0. The same steps are followed by Device 1. Only the group_mask is different, but nevertheless the logical bitwise AND operation produces a non-zero value, leading to the assertion of ACTION_0 by Device 1.

For the second request (ACTION_CMD #2)

            Action Command 2   Device 0                               Device 1
                               ACTION_0     ACTION_1     ACTION_2     ACTION_0
device_key  0x34638452         0x34638452   0x34638452   0x34638452   0x34638452
group_key   0x00000042         0x00000024   0x00000042   0x12341244   0x00000024
group_mask  0x000000F2         0x00000001   0xFFFFFFFF   0x00000000   0x00000002

Looking for the 4 conditions, Device 0 will assert ACTION_1 while Device 1 will not assert any signal because the group_key condition is not met. Therefore, Device 1 ignores the request.

For the third request (ACTION_CMD #3)

            Action Command 3   Device 0                               Device 1
                               ACTION_0     ACTION_1     ACTION_2     ACTION_0
device_key  0x34638452         0x34638452   0x34638452   0x34638452   0x34638452
group_key   0x00000024         0x00000024   0x00000042   0x12341244   0x00000024
group_mask  0x00000002         0x00000001   0xFFFFFFFF   0x00000000   0x00000002

In the third example, the group_mask and group_key of Device 0 do not match with ACTION_CMD #3 for any of the ACTION_0 to ACTION_2. Therefore, Device 0 ignores the request. Device 1 asserts ACTION_0 since the 4 conditions are met.

The ACTION_CMD is flexible enough to accommodate “simultaneous” triggering of the same functional action in functionally different devices.

For instance, let's assume the software trigger of Device 0 can only be associated with its ACTION_3 and that the software trigger of Device 1 can only be associated with its ACTION_1. The action command can still trigger the same functional action in both devices, provided that their respective action group keys and masks are set in order to meet the conditions from the previous list.

For the fourth request (ACTION_CMD #4)

            Action Command 4   Device 0     Device 1
                               ACTION_3     ACTION_1
device_key  0x34638452         0x34638452   0x34638452
group_key   0x00000001         0x00000001   0x00000001
group_mask  0x00000001         0xFFFFFFFF   0xFFFFFFFF

In this case, Device 0 asserts ACTION_3 and Device 1 asserts ACTION_1 since the conditions are met. As a result of this, the software trigger of both devices can be “simultaneously” triggered even though they are associated to different action numbers.

Writing Code Using Action Commands

The following section uses C# code snippets but the same thing can be done using a variety of other programming languages as well.

To set up an action command on the device something like this is needed:

private static void setupActionCommandOnDevice(GenICam.ActionControl ac, Int64 deviceKey, Int64 actionNumber, Int64 groupKey, Int64 groupMask)
{
  if ((deviceKey == 0) && (groupKey == 0) && (groupMask == 0))
  {
    Console.WriteLine("Switching off action {0}.", actionNumber);
  }
  else
  {
    Console.WriteLine("Setting up action {0}. Device key: 0x{1:X8}, group key: 0x{2:X8}, group mask: 0x{3:X8}", actionNumber, deviceKey, groupKey, groupMask);
  }
  ac.actionDeviceKey.write(deviceKey);
  ac.actionSelector.write(actionNumber);
  ac.actionGroupKey.write(groupKey);
  ac.actionGroupMask.write(groupMask);
}
Note
In a typical scenario the deviceKey parameter will only be set up once as there is only one register for it on each device while there can be various action commands.

Now to send action commands to devices connected to a certain interface it might be necessary to locate the correct instance of the InterfaceModule class first. One way to do this would be like this:

private static List<GenICam.InterfaceModule> getGenTLInterfaceListForDevice(Device pDev)
{
  // first get a list of ALL interfaces in the current system
  GenICam.SystemModule systemModule = new GenICam.SystemModule(pDev);
  Dictionary<String, Int64> interfaceIDToIndexMap = new Dictionary<string, long>();
  Int64 interfaceCount = systemModule.interfaceSelector.maxValue + 1;
  for (Int64 i = 0; i < interfaceCount; i++)
  {
    systemModule.interfaceSelector.write(i);
    interfaceIDToIndexMap.Add(systemModule.interfaceID.read(), i);
  }

  // now try to get access to the interfaces the device in question is connected to
  PropertyI64 interfaceID = new PropertyI64();
  DeviceComponentLocator locator = new DeviceComponentLocator(pDev.hDev);
  locator.bindComponent(interfaceID, "InterfaceID");
  if (interfaceID.isValid == false)
  {
    return null;
  }
  ReadOnlyCollection<String> interfacesTheDeviceIsConnectedTo = interfaceID.listOfValidStrings;

  // create an instance of the GenICam.InterfaceModule class for each interface the device is connected to
  List<GenICam.InterfaceModule> interfaces = new List<GenICam.InterfaceModule>();
  foreach (String interfaceIDString in interfacesTheDeviceIsConnectedTo)
  {
    interfaces.Add(new GenICam.InterfaceModule(pDev, interfaceIDToIndexMap[interfaceIDString]));
  }
  return interfaces;
}

Once the desired interface has been located it could be configured to send an action command like this:

private static void setupActionCommandOnInterface(GenICam.InterfaceModule im, Int32 destinationIP, Int64 deviceKey, Int64 groupKey, Int64 groupMask, bool boScheduledAction)
{
  im.mvActionDestinationIPAddress.write(destinationIP);
  im.mvActionDeviceKey.write(deviceKey);
  im.mvActionGroupKey.write(groupKey);
  im.mvActionGroupMask.write(groupMask);
  im.mvActionScheduledTimeEnable.write(boScheduledAction ? TBoolean.bTrue : TBoolean.bFalse);
  // here the desired execution time must also be configured for scheduled action commands if desired by writing to the im.mvActionScheduledTime property
}

Now the interface is set up completely and sending an action command works like this:

private static void sendActionCommand(GenICam.InterfaceModule im)
{
  im.mvActionSend.call();
}

Depending on the value of destinationIP, when actually firing the action command either a broadcast or a unicast message will be generated and sent to either all or one device in the subnet, and depending on whether one or more action commands on the device are set up to assert the command, the devices will react appropriately or silently ignore the command.



Creating synchronized acquisitions using timers

Basics

Getting images from several cameras exactly at the same time is a major task in

  • 3D image acquisitions
    (the images must be acquired at the same time using two cameras) or
  • acquisitions of larger objects
    (if more than one camera is required to span over the complete image, like in the textile and printing industry).

To solve this task, the mvBlueCOUGAR-X offers timers that can be used to generate a pulse at regular intervals. This pulse can be routed to a digital output. The digital output can be connected to the digital input of one or more cameras to use it as a trigger.

Connecting the hardware

Note
We recommend using the cable "KS-BCX-HR12" to connect an external power supply (at pin 2 and 10) and the I/Os.

One camera is used as master (M), which generates the trigger signal. The other ones receive the trigger signal and act as slaves (S).

On the master camera

  1. Connect power supply GND to "pin 1" and "pin 7".
  2. Connect power supply +12 V to "pin 2" and "pin 10 (power supply for the outputs)".
  3. Connect DigOut0 of master ("pin 6") to DigIn0 of master ("pin 4").

On each slave camera

  1. Connect power supply GND to "pin 1" and "pin 7".
  2. Connect power supply +12 V to "pin 2" and "pin 10 (power supply for the outputs)".

Between the cameras

  1. Connect DigOut0 of master ("pin 6") to DigIn0 of slave ("pin 4").
Note
If more than one slave is used, connect the same "pin 6" of master to all "pin 4" of the slaves.
If each camera has its own power supply, then connect all grounds (GND) together.

For the master camera, there are 2 possibilities how it is triggered:

  1. The master camera triggers itself logically (so called "Master - Slave", see Figure 1), or
  2. the master camera uses the external trigger signal, which was created by itself, via digital input (so called "Slave - Slave", see Figure 2).
Figure 1: Master - Slave connecting
Figure 2: Slave - Slave connecting
Note
With "Master - Slave" and according to the delay of the opto-isolated inputs of the slave cameras, you have to adapted the property "Trigger delay" of the master camera to synchronize the cameras exactly.

Programming the acquisition

You will need two timers and you have to set a trigger.

Start timer

Two timers are used for the "start timer". Timer1 defines the interval between two triggers. Timer2 generates the trigger pulse at the end of Timer1.

The following sample shows a trigger

  • which is generated every second and
  • the pulse width is 10 ms:
#include <mvIMPACT_CPP/mvIMPACT_acquire.h>
#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

... 

// Master: Set timers to trigger the image: Start after queue is filled
    GenICam::CounterAndTimerControl catcMaster(pDev);
    catcMaster.timerSelector.writeS( "Timer1" );
    catcMaster.timerDelay.write( 0. );
    catcMaster.timerDuration.write( 1000000. );
    catcMaster.timerTriggerSource.writeS( "Timer1End" );

    catcMaster.timerSelector.writeS( "Timer2" );
    catcMaster.timerDelay.write( 0. );
    catcMaster.timerDuration.write( 10000. );
    catcMaster.timerTriggerSource.writeS( "Timer1End" );
See also
Counter And Timer Control
Note
Make sure the Timer1 interval is larger than the processing time. Otherwise, images will be lost.

The timers are defined, now you have to do following steps:

  1. Set the digital output, e.g. "Line 0",
  2. connect the digital output with the inputs of the slave cameras (and master camera if using "Slave - Slave"), and finally
  3. set the trigger source to the digital input, e.g. "Line4".

Set digital I/O

In this step, the signal has to be connected to the digital output, e.g. "Line0":

// Set Digital I/O
    GenICam::DigitalIOControl io(pDev);
    io.lineSelector.writeS( "Line0" );
    io.lineSource.writeS( "Timer2Active" );
See also
Digital I/O Control

This signal has to be connected with the digital inputs of the slave cameras as shown in Figure 1 and 2.

Set trigger

"If you want to use Master - Slave":

// Set Trigger of Master camera
    GenICam::AcquisitionControl ac(pDev);
    ac.triggerSelector.writeS( "FrameStart" );
    ac.triggerMode.writeS( "On" );
    ac.triggerSource.writeS( "Timer1Start" );
// or ac.triggerSource.writeS( "Timer1End" );
// Set Trigger of Slave camera 
    GenICam::AcquisitionControl ac(pDev);
    ac.triggerSelector.writeS( "FrameStart" );
    ac.triggerMode.writeS( "On" );
    ac.triggerSource.writeS( "Line4" );
    ac.triggerActivation.writeS( "RisingEdge" ); 

"If you want to use Slave - Slave":

// Set Trigger of Master and Slave camera 
    GenICam::AcquisitionControl ac(pDev);
    ac.triggerSelector.writeS( "FrameStart" );
    ac.triggerMode.writeS( "On" );
    ac.triggerSource.writeS( "Line4" );
    ac.triggerActivation.writeS( "RisingEdge" ); 
See also
Acquisition Control

Now, the two timers will work like the following figure illustrates, which means

  • Timer1 is the trigger event and
  • Timer2 the trigger pulse width:
Figure 3:Timers

By the way, this is a simple "pulse width modulation (PWM)" example.

Setting the synchronized acquisition using wxPropView

The following figures show how you can set the timers and trigger using the GUI tool wxPropView:

  1. Setting of Timer1 (blue box) on the master camera:

    Figure 4: wxPropView - Setting of Timer1 on the master camera

  2. Setting of Timer2 (purple box) on the master camera:

    Figure 5: wxPropView - Setting of Timer2 on the master camera

  3. Setting the trigger slave camera(s)
    - The red box in Figure 4 is showing "Slave - Slave", which means that both master and slave camera are connected via digital input.
    - The red box in Figure 6 is showing "Master - Slave", which means that the master is triggered internally and the slave camera is set as shown in Figure 4.
  4. Assigning timer to DigOut (orange box in Figure 4).
Figure 6: Trigger setting of the master camera using "Master - Slave"



Using the primary application switchover functionality

There are scenarios where a second application should take control over a device that is already under the control of another (primary) application (e.g. systems which require redundancy, fault recovery or systems with a higher level management entity).

The switchover procedure will look like this: The primary application

  1. requests (and gets granted) exclusive access,
  2. verifies that the device supports switchover by consulting the GVCP capability register,
  3. sets the control switchover key register,
  4. requests (and gets granted) control access with enabled switchover (this is done without closing the control channel).

Another application that knows the key can then request (and get granted) device control.

You can enable the switchover via "Device -> PrimaryApplicationSwitchoverEnable". Set this register to "On" to allow other applications to take control over the device.

Figure 1: wxPropView - Device PrimaryApplicationSwitchoverEnable

If the control access has been granted, "DesiredAccess", "PrimaryApplicationSwitchoverEnable" and "PrimaryApplicationSwitchoverKey" will become read-only.

Now, in "Setting -> Base -> Camera -> GenICam -> Transport Layer Control" the property "Gev Primary Application Switchover Key" can be used by the control application to define a value that must be specified by an application that wants to take over control over the device. E.g. "666":

Figure 2: wxPropView - Gev Primary Application Switchover Key

The other application now tries to take over control with the correct switchover key and this access is granted. As a result the first application can no longer write to the device (executing "int timestampLatch()" fails with "DEV_ACCESS_DENIED").

Figure 3: wxPropView - Application Switchover
Note
If the other application tries to take over control without specifying the correct switchover key, a "DEV_ACCESS_DENIED" error will appear.

Code samples

The following code samples show how to create functions which

  • bind and set a property,
  • configure the switchover access, and
  • take over control.
//-----------------------------------------------------------------------------
template<typename _Ty, typename _Tx>
void bindAndSetProperty( _Ty& prop, ComponentLocatorBase& locator, const string& propName, const _Tx& value )
//-----------------------------------------------------------------------------
{
  locator.bindComponent( prop, propName );
  if( prop.isValid() )
  {
    try
    {
      prop.write( value );
      cout << "Property '" << propName << "' set to '" << prop.readS() << "'." << endl; // read back the value from the property in order to notice any rounding(e.g. for properties defining a setp width)
    }
    catch( const ImpactAcquireException& e )
    {
      cout << "Failed to write '" << value << "' to property '" << propName << "'. Error message: " << e.getErrorString() << endl;
      displayPropertyData( prop );
    }
  }
  else
  {
    cout << "Property '" << propName << "' is not supported by this device or could not be located." << endl;
  }
}

//-----------------------------------------------------------------------------
void configureControlAccessWithSwitchoverEnabled( Device* pDev, int64_type key )
//-----------------------------------------------------------------------------
{
  try
  {
    ComponentLocator locator(pDev->hDev());
    PropertyIBoolean primaryApplicationSwitchoverEnable;
    bindAndSetProperty( primaryApplicationSwitchoverEnable, locator, "PrimaryApplicationSwitchoverEnable", bTrue );
    pDev->interfaceLayout.write( dilGenICam );
    pDev->desiredAccess.write( damControl );
    pDev->open();
    DeviceComponentLocator devLocator(pDev, dltSetting);
    PropertyI64 gevPrimaryApplicationSwitchoverKey;
    bindAndSetProperty( gevPrimaryApplicationSwitchoverKey, devLocator, "GevPrimaryApplicationSwitchoverKey", key ); // bind the GenICam feature via the device's setting locator
  }
  catch( const ImpactAcquireException& e )
  {
    cout << string(__FUNCTION__) << ": " << e.getErrorString() << "(" << e.getErrorCodeAsString() << ") occurred at line " << e.getErrorOrigin() << endl;
  }
}

//-----------------------------------------------------------------------------
void takeOverControlAccess( Device* pDev, int64_type key, bool boKeepSwitchoverPossible )
//-----------------------------------------------------------------------------
{
  try
  {
    ComponentLocator locator(pDev->hDev());
    PropertyIBoolean primaryApplicationSwitchoverEnable;
    bindAndSetProperty( primaryApplicationSwitchoverEnable, locator, "PrimaryApplicationSwitchoverEnable", boKeepSwitchoverPossible ? bTrue : bFalse );
    PropertyI64 primaryApplicationSwitchoverKey;
    bindAndSetProperty( primaryApplicationSwitchoverKey, locator, "PrimaryApplicationSwitchoverKey", key );
    pDev->interfaceLayout.write( dilGenICam );
    pDev->desiredAccess.write( damControl );
    pDev->open();
  }
  catch( const ImpactAcquireException& e )
  {
    cout << string(__FUNCTION__) << ": " << e.getErrorString() << "(" << e.getErrorCodeAsString() << ") occurred at line " << e.getErrorOrigin() << endl;
  }
}