MATRIX VISION - mvBlueLYNX-M7 Technical Documentation
Use cases

GenICam to mvIMPACT Acquire code generator

Using the code generator

Any GenICam™ compliant device for which a GenICam™ GenTL compliant capture driver exists in mvIMPACT Acquire can be used via the mvIMPACT Acquire interface. However, which features a device supports cannot be known until the device has been initialised and its GenICam™ XML file has been processed, so it is not possible to provide a complete static C++ wrapper for every device.

Therefore an interface code generator has been embedded into each driver capable of handling arbitrary devices. This code generator can be used to create a convenient C++ interface file that allows access to every feature offered by a device.

Warning
A code generated interface can result in incompatibility issues, because the interface name will be constructed from the version and product information that comes with the GenICam XML file (see comment in code sample below). To avoid incompatibility, please use the common interface from the namespace mvIMPACT::acquire::GenICam whenever possible.

To access the features needed to generate a C++ wrapper interface, a device must be initialized. Code can only be generated for the interface layout that was selected when the device was opened. If interfaces shall be created for more than one interface layout, the steps below for creating the wrapper files must be repeated for each interface layout.

Once the device has been opened the code generator can be accessed by navigating to "System Settings -> CodeGeneration".

Figure 1: wxPropView - Code Generation section


To generate code, first of all an appropriate file name should be selected. In order to prevent file name clashes, the following hints should be kept in mind when choosing a file name:

  • If several devices from different families or different vendors shall later be included in one application, each device or device family will need its own header file; thus either the files should be organized in different subfolders or they must have unique names.
  • If a device shall be used with different interface layouts, a separate header file must be generated for each layout.

If only a single device family is involved but two interface layouts will be used later, a suitable file name for the device specific layout might be "mvIMPACT_acquire_GenICam_Wrapper_DeviceSpecific.h".

For a more complex application involving different device families that all use the GenICam interface layout, something like this might make sense:

  • mvIMPACT_acquire_GenICam_Wrapper_MyDeviceA.h
  • mvIMPACT_acquire_GenICam_Wrapper_MyDeviceB.h
  • mvIMPACT_acquire_GenICam_Wrapper_MyDeviceC.h
  • ...

Once a file name has been selected the code generator can be invoked by executing the "int GenerateCode()" method:

Figure 2: wxPropView - GenerateCode() method


The result of the code generator run will be written into the "LastResult" property afterwards:

Figure 3: wxPropView - LastResult property


Using the result of the code generator in an application

Each header file generated by the code generator will include "mvIMPACT_CPP/mvIMPACT_acquire.h", thus when an application is compiled with automatically generated files, these header files must have access to this file. This can easily be achieved by setting up the build environment / Makefile appropriately.

To avoid problems with multiple inclusion, each generated file will use an include guard built from the file name.
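For a file named "mvIMPACT_acquire_GenICam_Wrapper_DeviceSpecific.h" this guard might look roughly like this (a sketch; the exact guard name produced by the generator may differ):

#ifndef mvIMPACT_acquire_GenICam_Wrapper_DeviceSpecificH
#define mvIMPACT_acquire_GenICam_Wrapper_DeviceSpecificH

// ... generated types and wrapper classes ...

#endif // mvIMPACT_acquire_GenICam_Wrapper_DeviceSpecificH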

Within each header file the generated data types will reside in a sub-namespace of "mvIMPACT::acquire" in order to avoid name clashes when working with several generated files in the same application. The namespace name will automatically be derived from the ModelName tag and the file version tags in the device's GenICam XML file as well as from the interface layout. For a device with a ModelName tag of mvBlueIntelligentDevice and a file version of 1.1.0 something like this will be created:

namespace mvIMPACT {
   namespace acquire {
      namespace DeviceSpecific { // the name of the interface layout used during the process of code creation
         namespace MATRIX_VISION_mvBlueIntelligentDevice_1 { // this name will be constructed from the version and product
                                                             // information that comes with the GenICam XML file. As defined
                                                             // by the GenICam standard, different major versions of a device's
                                                             // XML file may not be compatible, thus different interface files should be created


// all code will reside in this inner namespace

         } // namespace MATRIX_VISION_mvBlueIntelligentDevice_1
      } // namespace DeviceSpecific
   } // namespace acquire
} // namespace mvIMPACT

In the application the generated header files can be used like normal include files:

#include <string>
#include <mvIMPACT_CPP/mvIMPACT_acquire.h>
#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam_Wrapper_DeviceSpecific.h>

To access data types from the generated header files, the namespaces must of course be taken into account. When there is just a single automatically created interface, the easiest approach is probably an appropriate using statement:

using namespace mvIMPACT::acquire::DeviceSpecific::MATRIX_VISION_mvBlueIntelligentDevice_1;

If several files created from different devices shall be used, and these devices define similar features in slightly different ways, this might however result in name clashes and/or unexpected behaviour. In that case the namespaces should be specified explicitly when creating instances of the data types in the application:

//-----------------------------------------------------------------------------
void fn( Device* pDev )
//-----------------------------------------------------------------------------
{
  mvIMPACT::acquire::DeviceSpecific::MATRIX_VISION_mvBlueIntelligentDevice_1::DeviceControl dc( pDev );
  if( dc.timestampLatch.isValid() )
  {
    dc.timestampLatch.call();
  }
}

When working with a using statement the same code can be written like this as well:

//-----------------------------------------------------------------------------
void fn( Device* pDev )
//-----------------------------------------------------------------------------
{
  DeviceControl dc( pDev );
  if( dc.timestampLatch.isValid() )
  {
    dc.timestampLatch.call();
  }
}



Correcting image errors of a sensor

Due to random process deviations, technical limitations of the sensors, etc., there are various reasons why image sensors exhibit image errors. MATRIX VISION provides several procedures to correct these errors; by default these are host-based calculations.

The provided image correction procedures are

  1. Defective Pixels Correction,
  2. Dark Current Correction, and
  3. Flat-Field Correction.
Note
If you execute all correction procedures, you have to keep this order. All gray value settings of the corrections below assume an 8-bit image.
Figure 1: Host-based image corrections

The path "Setting -> Base -> ImageProcessing -> ..." indicates that these corrections are host-based corrections.

Before starting consider the following hints:

  • To correct the complete image, you have to make sure no user defined AOI has been selected: right-click "Restore Default" on the device's AOI parameters Width and Height, or use "Setting -> Base -> Camera -> GenICam -> Image Format Control" when using the GenICam interface layout.
  • You have several options to save the correction data. The chapter Storing and restoring settings describes the different ways.
See also
There is a white paper about image error corrections with extended information available on our website: http://www.matrix-vision.com/tl_files/mv11/Glossary/art_image_errors_sensors_en.pdf

Defective Pixels Correction

Due to random process deviations, not all pixels in an image sensor array will react in the same way to a given light condition. These variations are known as blemishes or defective pixels.

There are two types of defective pixels:

  1. leaky pixel (in the dark)
    which indicates pixels that produce a higher read-out code than average
  2. cold pixel (in standard light conditions)
    which indicates pixels that produce a lower read-out code than average when the sensor is exposed (e.g. caused by dust particles on the sensor)
Note
Please use either an 8-bit Mono or Bayer image format when performing the calibration. Afterwards, all image formats will be corrected.

Correcting leaky pixels

To correct the leaky pixels, the following steps are necessary:

  1. Set gain ("Setting -> Base -> Camera -> GenICam -> Analog Control" "Gain = 0 dB") and exposure time ("Setting -> Base -> Camera -> GenICam -> Acquisition Control" "ExposureTime = 360 msec") to the given operating conditions
    The total number of defective pixels found in the array depends on the gain and the exposure time.
  2. Black out the lens completely
  3. Set the (Filter-) "Mode = Calibrate leaky pixel"
  4. Snap an image ("Acquire" with "Acquisition Mode = SingleFrame")
  5. To activate the correction, choose one of the neighbor replace methods: "Replace 3x1 average" or "Replace 3x3 median"
  6. Save the settings including the correction data via "Action -> Capture Settings -> Save Active Device Settings"
    (Settings can be saved in the Windows registry or in a file)
Note
After having restarted the camera you have to reload the capture settings.

The filter checks:

Pixel > LeakyPixelDeviation_ADCLimit // (default value: 50) 

All pixels above this value are considered as leaky pixels.

Correcting cold pixels

To correct the cold pixels, the following steps are necessary:

  1. You will need a uniform sensor illumination of approx. 50 - 70 % saturation (which means an average gray value between 128 and 180)
  2. Set the (Filter-) "Mode = Calibrate cold pixel" (Figure 2)
  3. Snap an image ("Acquire" with "Acquisition Mode = SingleFrame")
  4. To activate the correction, choose one of the neighbor replace methods: "Replace 3x1 average" or "Replace 3x3 median"
  5. Save the settings including the correction data via "Action -> Capture Settings -> Save Active Device Settings"
    (Settings can be saved in the Windows registry or in a file)
Note
After having restarted the camera you have to reload the capture settings.

The filter checks:

Pixel < T[cold] // (default value: 15 %) 

// T[cold] = deviation of the average gray value (ColdPixelDeviation_pc)

All pixels below this value show a dynamic behavior below normal.

Figure 2: Image corrections: DefectivePixelsFilter
Note
Repeating the defective pixel correction will accumulate the correction data, which leads to a higher value in "DefectivePixelsFound". If you want to reset the correction data or repeat the correction process, you have to set the (Filter-) "Mode = Reset Calibration Data".
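The calibration steps above can also be performed programmatically. The following is a minimal sketch using the mvIMPACT Acquire C++ interface; the DefectivePixelsFilter property and enumeration names are assumed to be as shown here, so please verify them against your driver version:

#include <mvIMPACT_CPP/mvIMPACT_acquire.h>

using namespace mvIMPACT::acquire;

void calibrateDefectivePixels( Device* pDev )
{
  ImageProcessing ip( pDev );
  // select the calibration mode (use dpfmCalibrateColdPixel for cold pixels
  // or dpfmResetCalibration to discard previously collected data)
  ip.defectivePixelsFilterMode.write( dpfmCalibrateLeakyPixel );
  // ... snap one image here (see the acquisition examples of the SDK) ...
  // activate the correction by selecting one of the replacement methods
  ip.defectivePixelsFilterMode.write( dpfm3x3Median );
}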

Storing pixel data on the device

To save and load the defective pixel data, appropriate functions are available:

  • int mvDefectivePixelDataLoad( void )
  • int mvDefectivePixelDataSave( void )

The section "Setting -> Base -> ImageProcessing -> DefectivePixelsFilter" was also extended (see Figure 2a). First, the DefectivePixelsFound indicates the number of found defective pixels. The coordinates are available through the properties DefectivePixelOffsetX and DefectivePixelOffsetY now. In addition to that it is possible to edit, add and delete these values manually (via right-click on the "DefectivePixelOffset" and select "Append Value" or "Delete Last Value"). Second, with the function

  • int mvDefectivePixelReadFromDevice( void )
  • int mvDefectivePixelWriteToDevice( void )

you can exchange the data between the filter and the camera and vice versa.

Figure 2a: Image corrections: DefectivePixelsFilter (since driver version 2.17.1 and firmware version 2.12.406)

Just right-click on mvDefectivePixelWriteToDevice and click on "Execute" to write the data to the camera (and hand over the data to the mvDefectivePixelCorrectionControl section). To permanently store the data inside the camera's non-volatile memory, mvDefectivePixelDataSave must be called afterwards as well!

Figure 2b: Defective pixel data are written to the camera (since driver version 2.17.1 and firmware version 2.12.406)

While opening the camera, the driver will load the defective pixel data from the device. If pixel data is already present in the filter (e.g. from a calibration), you can nevertheless load the values from the camera. In this case the values will be merged with the existing ones, i.e. new ones are added and duplicates are removed.

Dark Current Correction

Dark current is a characteristic of image sensors: image sensors deliver a signal even in total darkness, caused for example by heat, which spontaneously creates charge carriers. This signal overlays the image information. Dark current depends on two circumstances:

  1. Exposure time
    The longer the exposure, the greater the dark current part. I.e. using long exposure times, the dark current itself could lead to an overexposed sensor chip.
  2. Temperature
    By cooling the sensor chip the dark current production can be reduced considerably (the dark current is roughly cut in half for every 6 °C of cooling).

Correcting Dark Current

The dark current correction is a pixel-wise correction in which the dark current correction image is subtracted from the original image. To get a good result it is necessary to snap the original and the dark current images with the same exposure time and temperature.

Note
Dark current snaps generally show noise.

To correct the dark current, the following steps are necessary:

  1. Black out the lens completely
  2. Set exposure time according to the application
  3. Set the number of images for calibration in "Setting -> Base -> ImageProcessing -> DarkCurrentFilter -> CalibrationImageCount" (Figure 3).
  4. Set "Setting -> Base -> ImageProcessing -> DarkCurrentFilter -> Mode" to "Calibrate" (Figure 3)
  5. Snap an image ("Acquire" with "Acquisition Mode = SingleFrame")
  6. Finally, you have to activate the correction: Set "Setting -> Base -> ImageProcessing -> DarkCurrentFilter -> Mode" to "On"
  7. Save the settings including the correction data via "Action -> Capture Settings -> Save Active Device Settings"
    (Settings can be saved in the Windows registry or in a file)

The filter snaps a number of images and averages the dark current images into one correction image.

Note
After having restarted the camera you have to reload the capture settings.
Figure 3: Image corrections: CalibrationImageCount
Figure 4: Image corrections: Calibrate
Figure 5: Image corrections: Dark current
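Programmatically, the dark current calibration might look like this (a sketch using the mvIMPACT Acquire C++ interface; the DarkCurrentFilter property and enumeration names are assumed to be as shown here):

#include <mvIMPACT_CPP/mvIMPACT_acquire.h>

using namespace mvIMPACT::acquire;

void calibrateDarkCurrent( Device* pDev )
{
  ImageProcessing ip( pDev );
  ip.darkCurrentFilterCalibrationImageCount.write( 5 ); // images to average
  ip.darkCurrentFilterMode.write( dcfmCalibrateDarkCurrent );
  // ... black out the lens and snap CalibrationImageCount images here ...
  ip.darkCurrentFilterMode.write( dcfmOn ); // activate the correction
}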

Flat-Field Correction

Each pixel of a sensor chip is a single detector with its own properties. In particular, this pertains to its sensitivity and, as the case may be, its spectral sensitivity. To correct for this (including lens and illumination variations), a plain, uniformly "colored" calibration plate (e.g. white or gray) is snapped as a flat-field, which will then be used to correct the original image. The optics must not be changed between the flat-field correction and the actual application. To reduce errors while performing the flat-field correction, a saturation of the flat-field between 50 % and 75 % in the histogram is advisable.

Note
Flat-field correction can also be used as a destructive watermark and works for all f-stops.

To perform a flat-field correction, the following steps are necessary:

  1. You need a plain and equally "colored" calibration plate (e.g. white or gray)
  2. No single pixel may be saturated - that's why we recommend setting the maximum gray level in the brightest area to at most 75 % of the gray scale (i.e. to gray values below 190 when using 8-bit values)
  3. Choose a BayerXY in "Setting -> Base -> Camera -> GenICam -> Image Format Control -> PixelFormat".
  4. Set the (Filter-) "Mode = Calibrate" (Figure 6)
  5. Start a live acquisition ("Acquire" with "Acquisition Mode = Continuous")
  6. Finally, you have to activate the correction: Set the (Filter-) "Mode = On"
  7. Save the settings including the correction data via "Action -> Capture Settings -> Save Active Device Settings"
    (Settings can be saved in the Windows registry or in a file)
Note
After having restarted the camera you have to reload the capture settings.

The filter snaps a number of images (according to the value of CalibrationImageCount, e.g. 5) and averages the flat-field images into one correction image.

Figure 6: Image corrections: Host-based flat field correction
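A corresponding sketch for the flat-field correction (again, the FlatFieldFilter property and enumeration names are assumptions to be checked against your driver version):

#include <mvIMPACT_CPP/mvIMPACT_acquire.h>

using namespace mvIMPACT::acquire;

void calibrateFlatField( Device* pDev )
{
  ImageProcessing ip( pDev );
  ip.flatFieldFilterCalibrationImageCount.write( 5 ); // images to average
  ip.flatFieldFilterMode.write( fffmCalibrateFlatField );
  // ... acquire images of the uniformly lit calibration plate here ...
  ip.flatFieldFilterMode.write( fffmOn ); // activate the correction
}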



Improving the acquisition / image quality

There are several use cases concerning the acquisition / image quality of the camera:



Optimizing the bandwidth

Limiting the bandwidth of the imaging device

It is possible to limit the used bandwidth in the following way:

Since
mvIMPACT Acquire 2.25.0
  1. In "Setting -> Base -> Camera -> GenICam -> Device Control -> Device Link Selector" set property "Device Link Throughput Limit Mode" to "On".
  2. Now you can set the bandwidth with "Device Link Throughput Limit" to the desired value in bits per second.

    Figure 1: wxPropView - Setting Device Link Throughput Limit
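The same limit can be configured programmatically via the GenICam interface layout. A minimal sketch (feature names follow the GenICam SFNC as exposed by mvIMPACT::acquire::GenICam::DeviceControl):

#include <mvIMPACT_CPP/mvIMPACT_acquire.h>
#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

using namespace mvIMPACT::acquire;

void limitBandwidth( Device* pDev )
{
  GenICam::DeviceControl dc( pDev );
  dc.deviceLinkSelector.write( 0 );
  dc.deviceLinkThroughputLimitMode.writeS( "On" );
  dc.deviceLinkThroughputLimit.write( 200000000LL ); // 200 MBit/s, in bits per second
}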




Saving data on the device

Note
As described in Storing and restoring settings, it is also possible to save the settings as an XML file on the host system. You can find further information, for example about the XML compatibilities of the different driver versions, in the mvIMPACT Acquire SDK manuals and the corresponding setting classes: https://www.matrix-vision.com/manuals/SDK_CPP/classmvIMPACT_1_1acquire_1_1FunctionInterface.html (C++)

There are several use cases concerning device memory:



Retrieving user data from local applications

There are two ways to access the UserData which was previously transferred via the vl_transmit library:

  1. Via GUI:
    Please start wxPropView. In the section "Image Setting -> RequestInfo" you will find the UserData.
  2. Via Programming:
    Please have a look at the coding sample.

Coding sample

Note
pRequest points to a C++ mvIMPACT::acquire::Request object.



Working with several cameras simultaneously

There are several use cases concerning multiple cameras:



Introducing multicasting

GigE Vision supports streaming data over a network via multicast to multiple destinations simultaneously. Multicast means that one source streams data to a group of recipients in the same subnet. As long as the recipients are members of this multicast group they will receive the packets. Other members of the same local subnet will discard the packets (but the packets are still sent and therefore consume bandwidth).

See also
http://en.wikipedia.org/wiki/Multicast

To set up a Multicast environment, GigE Vision introduced camera access types. The most important ones are

  • Control and
  • Read.

With Control, a primary application can be used to set up the camera which will stream the data via multicast. With Read, you can set up secondary applications which will play back the data stream.

Figure 1: Multicast setup

Because the mvBlueCOUGAR cameras are GigE Vision compliant devices, you can set up a Multicast environment using wxPropView. This use case will show how you can do this.

Sample

In the primary application you have to establish "Control" access.

For this,

  1. please start wxPropView and select the camera.
  2. Click on the "Device" section.
  3. Click on "DesiredAccess" and choose "Control".

    Figure 2: wxPropView - Primary application setting DesiredAccess to "Control"

    See also
    desiredAccess and grantedAccess in
    • mvIMPACT::acquire::Device (in mvIMPACT_Acquire_API_CPP_manual.chm)
    • TDeviceAccessMode (in mvIMPACT_Acquire_API_CPP_manual.chm)
  4. Click on "Use".
  5. Now, select the "Setting -> Base -> Camera -> GenICam" section and open the "Transport Layer Control" subsection (using the device specific interface: "System Settings" and "TransportLayer").
  6. In "GevSCDA" enter a Multicast address like "239.255.255.255" .

    Figure 3: wxPropView - Primary application setting the multicast address in "GevSCDA"

    See also
    http://en.wikipedia.org/wiki/Multicast_address
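The primary application can also be set up programmatically. A minimal sketch using the GenICam interface layout (note that GevSCDA is an integer property; depending on the driver version the address might have to be written as a 32-bit value instead of the dotted string shown here):

#include <mvIMPACT_CPP/mvIMPACT_acquire.h>
#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

using namespace mvIMPACT::acquire;

void setupPrimaryApplication( Device* pDev )
{
  pDev->desiredAccess.write( damControl ); // request 'Control' access
  pDev->open();
  GenICam::TransportLayerControl tlc( pDev );
  tlc.gevSCDA.writeS( "239.255.255.255" ); // stream channel destination address
}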

One or more applications running on different machines can then establish "read-only" access to the very same device.

Note
The machines of the secondary applications have to be connected to the same network as the primary application.
  1. Please start wxPropView on the other machine, click on the "Device" section, and set "DesiredAccess" to "Read".

    Figure 4: wxPropView - Secondary application setting DesiredAccess to "Read"

  2. Features will not be writable, as you can see from the "Transport Layer Control" parameters in Figure 5.

    Figure 5: wxPropView - Secondary application read-only "Transport Layer Control" parameters

  3. Once the primary application starts to request images, the secondary applications will be able to receive these images as well. Please click on "Use" and then "Acquire" ("Acquisition Mode" = "Continuous").

    Figure 6: wxPropView - Secondary application receives images from the primary application

Note
The machine that has "Control" access automatically joins the streaming multicast group of the camera it is controlling. If this is not desired, the "mv Auto Join Multicast Groups" property has to be set to false.
Figure 7: wxPropView - Disable "mv Auto Join Multicast Groups" to prevent the Control machine from receiving multicast streaming data

Attention
GigE Vision (GEV) does not allow packet fragmentation for performance reasons. Because of that, and due to the nature of multicasting and network data transfer in general, it is crucial to select a packet size that every client can handle with respect to its NIC's MTU, as otherwise not all clients will be able to receive the image data. See the corresponding chapter for details.



Using Action Commands

GigE Vision specifies so-called Action Commands to trigger an action in multiple devices at roughly the same time.

Note
Due to the nature of Ethernet, this is not as synchronous as a hardware trigger since different network segments can have different latencies. Nevertheless in a switched network, the resulting jitter is acceptable for a broad range of applications and this scheme provides a convenient way to synchronize devices by means of a software command.

Action commands can be unicast or broadcast by applications with either exclusive, write, or read (only when the device is configured accordingly) access to the device. They can be used e.g. to

  • increment or reset counters
  • reset timers
  • act as trigger sources
  • ...

The most typical scenario is an application that wants to trigger a simultaneous action on multiple devices. This case is shown in the figure below. The application fires a broadcast action command that will reach all devices on the subnet.

Figure 1: Action command sent as a broadcast to all devices in the subnet


Attention
Due to the nature of Ethernet, an action command can only leave through a single network interface at a given time. Sending action commands via several different network interfaces to different devices that are e.g. directly connected to a given network interface card in a PC therefore only works sequentially, so these commands will NOT reach the devices at the same time. This can only be achieved using Scheduled Action Commands. Depending on whether an action command is unicast or broadcast, the command will reach either a single device or multiple devices on a given subnet.
Note
The following diagrams assume the connecting line between the PC and the devices to be the GVCP (GigE Vision Control Protocol) control port socket. Therefore these diagrams do not make any assumptions about the physical connection between the devices and the PC. However, for the examples to work, all devices must be connected to the same network interface of the PC using a network switch, as otherwise, as stated above, it would not be possible to send the very same action command to all devices using a single network packet.

But action commands can also be sent by secondary applications. The sender can even be another device on the same subnet. This is depicted in the following figure.

Figure 2: Action command sent as a broadcast by one device to all other devices in the subnet


Upon reception of an action command, the device will decode the information to identify which internal action signal is requested. An action signal is a device internal signal that can be used as a trigger for functional units inside the device (e.g. a frame trigger). It can be routed to all signal sinks of the device.

Each action command message contains information for the device to validate the requested operation:

  1. device_key to authorize the action on this device.
  2. group_key to define a group of devices on which actions have to be executed.
  3. group_mask to be used to filter out some of these devices from the group.

Action commands can only be asserted if the device has an open primary control channel (so if an application has established write or exclusive access to a device) or when unconditional action mode is enabled.

A device can define several action commands which can be selected via the ActionSelector in the device features property tree.

The conditions for an action command to be asserted by the device are:

  1. the device has an open primary control channel or unconditional action mode is enabled.
  2. the device_key in the action command sent by the application and the ActionDeviceKey property of the device must be equal.
  3. the group_key in the action command sent by the application and the ActionGroupKey property for the corresponding action of the device must be equal.
  4. the logical AND-wise operation of group_mask in the action command sent by the application against the ActionGroupMask for the corresponding action of the device must be non-zero. Therefore, they must have at least one common bit set at the same position in the register.

When these 4 conditions are met for at least one action signal configured on the device, the device internally asserts the requested action. As these conditions could be met for more than one action, the device could assert more than one action signal in parallel. When one of the 4 conditions is not met for any supported action signal, the device ignores the action command.

The first condition requires write or exclusive access to be established between the application and the device unless the ActionUnconditionalMode has been enabled. When this mode is enabled, the device can assert the requested action even if no application has established write or exclusive access, as long as the 3 other conditions are met.
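The complete rule can be summarized in a few lines of code (an illustrative, hypothetical helper only, not part of any SDK):

#include <cstdint>

// returns true when a device asserts an action signal for an incoming action command
bool actionAsserted( uint32_t cmdDeviceKey, uint32_t cmdGroupKey, uint32_t cmdGroupMask,
                     uint32_t devDeviceKey, uint32_t devGroupKey, uint32_t devGroupMask,
                     bool primaryChannelOpenOrUnconditional )
{
  return primaryChannelOpenOrUnconditional &&      // condition 1
         ( cmdDeviceKey == devDeviceKey ) &&       // condition 2
         ( cmdGroupKey == devGroupKey ) &&         // condition 3
         ( ( cmdGroupMask & devGroupMask ) != 0 ); // condition 4
}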

Scheduled Action Command

Scheduled Action Commands provide a way to trigger actions in a device at a specific time in the future. The typical use case is depicted in the following diagram:

Figure 3: Principle of scheduled action commands


The transmitter of an action command records the exact time when the source signal is asserted (External signal). This time t0 is incremented by a delta time ΔL and transmitted in an action command to the receivers. The delta time ΔL has to be larger than the longest possible transmission and processing latency of the action command in the network.

If the packet passes the action command filters in the receiver, then the action signal is put into a time queue (the depth of this queue is indicated by the ActionQueueSize property).

When the time of the local clock is greater than or equal to the time of an action signal in the queue, the signal is removed from the queue and asserted. Combined with the timestamp precision of IEEE 1588, which can be sub-microsecond, a Scheduled Action Command provides a low-jitter software trigger. If the sender of an action command is not capable of setting a future time in the packet, the action command has a flag to fall back to legacy mode (bit 0 of the flag field). In this mode the signal is asserted the moment the packet passes the action command filters.

Attention
Scheduled action commands are not supported by every device. A device supporting scheduled action commands should also support time stamp synchronization based on IEEE1588. MATRIX VISION devices currently do NOT support these features.

Examples

The following examples illustrate the behavior of action commands in various scenarios. The figure below shows 4 different action commands sent by an application. The content of the action command packet is illustrated on the left side of the figure.

Figure 4: Action command examples


The content of the action command must be examined against the conditions listed above for each supported action signal.

For the first request (ACTION_CMD #1)

              Action Command 1 | Device 0                            | Device 1
                               | ACTION 0    ACTION 1    ACTION 2    | ACTION 0
  device_key  0x34638452       | 0x34638452  0x34638452  0x34638452  | 0x34638452
  group_key   0x00000024       | 0x00000024  0x00000042  0x12341244  | 0x00000024
  group_mask  0x00000003       | 0x00000001  0xFFFFFFFF  0x00000000  | 0x00000002

Device 0 receives the request and checks the 4 conditions.

  1. exclusive or write access has been established between the application and the device (or unconditional action is enabled)
  2. device_key matches
  3. group_key matches
  4. Logical AND-wise comparison of requested group_mask is non-zero

All 4 conditions are met only for ACTION_0, thus the device immediately asserts the internal signal represented by ACTION_0. The same steps are followed by Device 1. Only the group_mask differs, but the logical bitwise AND operation nevertheless produces a non-zero value, leading to the assertion of ACTION_0 by Device 1.

For the second request (ACTION_CMD #2)

              Action Command 2 | Device 0                            | Device 1
                               | ACTION 0    ACTION 1    ACTION 2    | ACTION 0
  device_key  0x34638452       | 0x34638452  0x34638452  0x34638452  | 0x34638452
  group_key   0x00000042       | 0x00000024  0x00000042  0x12341244  | 0x00000024
  group_mask  0x000000F2       | 0x00000001  0xFFFFFFFF  0x00000000  | 0x00000002

Checking the 4 conditions, Device 0 will assert ACTION_1, while Device 1 will not assert any signal because the group_key condition is not met. Therefore, Device 1 ignores the request.

For the third request (ACTION_CMD #3)

              Action Command 3 | Device 0                            | Device 1
                               | ACTION 0    ACTION 1    ACTION 2    | ACTION 0
  device_key  0x34638452       | 0x34638452  0x34638452  0x34638452  | 0x34638452
  group_key   0x00000024       | 0x00000024  0x00000042  0x12341244  | 0x00000024
  group_mask  0x00000002       | 0x00000001  0xFFFFFFFF  0x00000000  | 0x00000002

In the third example, the group_mask and group_key of Device 0 do not match with ACTION_CMD #3 for any of the ACTION_0 to ACTION_2. Therefore, Device 0 ignores the request. Device 1 asserts ACTION_0 since the 4 conditions are met.

The ACTION_CMD is flexible enough to accommodate “simultaneous” triggering of the same functional action in functionally different devices.

For instance, let's assume the software trigger of Device 0 can only be associated with its ACTION_3 and the software trigger of Device 1 can only be associated with its ACTION_1. The action command can still trigger the same functional action in both devices, provided their respective action group keys and masks are set so that the conditions from the list above are met.

For the fourth request (ACTION_CMD #4)

              Action Command 4 | Device 0    | Device 1
                               | ACTION 3    | ACTION 1
  device_key  0x34638452       | 0x34638452  | 0x34638452
  group_key   0x00000001       | 0x00000001  | 0x00000001
  group_mask  0x00000001       | 0xFFFFFFFF  | 0xFFFFFFFF

In this case, Device 0 asserts ACTION_3 and Device 1 asserts ACTION_1 since the conditions are met. As a result, the software triggers of both devices can be "simultaneously" fired even though they are associated with different action numbers.

Writing Code Using Action Commands

The following section uses C# code snippets but the same thing can be done using a variety of other programming languages as well.

To set up an action command on the device something like this is needed:

private static void setupActionCommandOnDevice(GenICam.ActionControl ac, Int64 deviceKey, Int64 actionNumber, Int64 groupKey, Int64 groupMask)
{
  if ((deviceKey == 0) && (groupKey == 0) && (groupMask == 0))
  {
    Console.WriteLine("Switching off action {0}.", actionNumber);
  }
  else
  {
    Console.WriteLine("Setting up action {0}. Device key: 0x{1:X8}, group key: 0x{2:X8}, group mask: 0x{3:X8}", actionNumber, deviceKey, groupKey, groupMask);
  }
  ac.actionDeviceKey.write(deviceKey);
  ac.actionSelector.write(actionNumber);
  ac.actionGroupKey.write(groupKey);
  ac.actionGroupMask.write(groupMask);
}
Note
In a typical scenario the deviceKey parameter will only be set up once, as there is only one register for it on each device, while there can be several different action commands.

Now, to send action commands to devices connected to a certain interface, it might be necessary to locate the correct instance of the InterfaceModule class first. One way to do this would be like this:

private static List<GenICam.InterfaceModule> getGenTLInterfaceListForDevice(Device pDev)
{
  // first get a list of ALL interfaces in the current system
  GenICam.SystemModule systemModule = new GenICam.SystemModule(pDev);
  Dictionary<String, Int64> interfaceIDToIndexMap = new Dictionary<string, long>();
  Int64 interfaceCount = systemModule.interfaceSelector.maxValue + 1;
  for (Int64 i = 0; i < interfaceCount; i++)
  {
    systemModule.interfaceSelector.write(i);
    interfaceIDToIndexMap.Add(systemModule.interfaceID.read(), i);
  }

  // now try to get access to the interfaces the device in question is connected to
  PropertyI64 interfaceID = new PropertyI64();
  DeviceComponentLocator locator = new DeviceComponentLocator(pDev.hDev);
  locator.bindComponent(interfaceID, "InterfaceID");
  if (interfaceID.isValid == false)
  {
    return null;
  }
  ReadOnlyCollection<String> interfacesTheDeviceIsConnectedTo = interfaceID.listOfValidStrings;

  // create an instance of the GenICam.InterfaceModule class for each interface the device is connected to
  List<GenICam.InterfaceModule> interfaces = new List<GenICam.InterfaceModule>();
  foreach (String interfaceIDString in interfacesTheDeviceIsConnectedTo)
  {
    interfaces.Add(new GenICam.InterfaceModule(pDev, interfaceIDToIndexMap[interfaceIDString]));
  }
  return interfaces;
}

Once the desired interface has been located it could be configured to send an action command like this:

private static void setupActionCommandOnInterface(GenICam.InterfaceModule im, Int32 destinationIP, Int64 deviceKey, Int64 groupKey, Int64 groupMask, bool boScheduledAction)
{
  im.mvActionDestinationIPAddress.write(destinationIP);
  im.mvActionDeviceKey.write(deviceKey);
  im.mvActionGroupKey.write(groupKey);
  im.mvActionGroupMask.write(groupMask);
  im.mvActionScheduledTimeEnable.write(boScheduledAction ? TBoolean.bTrue : TBoolean.bFalse);
  // here the desired execution time must also be configured for scheduled action commands if desired by writing to the im.mvActionScheduledTime property
}

Now the interface is set up completely and sending an action command works like this:

private static void sendActionCommand(GenICam.InterfaceModule im)
{
  im.mvActionSend.call();
}

Depending on the value of destinationIP, when actually firing the action command either a broadcast or a unicast message will be generated and sent to either all or just one device in the subnet. Depending on whether one or more action commands on a device are set up to assert the command, the device will either react appropriately or silently ignore the command.



Synchronize the cameras to expose at the same time

This can be achieved by connecting the same external trigger signal to one of the digital inputs of each camera, as shown in the following figure:

Figure 1: Electrical setup for sync. cameras

Each camera then has to be configured for external triggering, for example like in the image below:

Figure 2: wxPropView - Setup for sync. cameras

This assumes that the image acquisition shall start with the rising edge of the trigger signal. Every camera must be configured like this. Each rising edge of the external trigger signal will then start the exposure of a new image at the same time on each camera. Any trigger signal that occurs during the exposure of an image will be silently discarded.
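Using the GenICam interface layout, this configuration corresponds roughly to the following sketch (the trigger source name, here "Line0", is device dependent; check the valid values of TriggerSource on your camera):

#include <mvIMPACT_CPP/mvIMPACT_acquire.h>
#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

using namespace mvIMPACT::acquire;

void setupExternalTrigger( Device* pDev )
{
  GenICam::AcquisitionControl ac( pDev );
  ac.triggerSelector.writeS( "FrameStart" );
  ac.triggerMode.writeS( "On" );
  ac.triggerSource.writeS( "Line0" );          // device dependent
  ac.triggerActivation.writeS( "RisingEdge" ); // start exposure on the rising edge
}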



Working with the Hardware Real-Time Controller (HRTC)

Note
Please have a look at the Hardware Real-Time Controller (HRTC) chapter for basic information.

There are several use cases concerning the Hardware Real-Time Controller (HRTC):

  • "Using single camera":
  • "Using multiple cameras":



Single camera samples (HRTC)

Note
Please have a look at the Hardware Real-Time Controller (HRTC) chapter for basic information.

Using a single camera, the following samples are available:



Achieve a defined image frequency (HRTC)

Note
Please have a look at the Hardware Real-Time Controller (HRTC) chapter for basic information.

With the use of the HRTC, any feasible frequency with microsecond (us) accuracy is possible. The program to achieve this roughly looks like this (with the trigger mode set to ctmOnRisingEdge):

0. WaitClocks( <frame time in us> - <trigger pulse width in us> )
1. TriggerSet 1
2. WaitClocks( <trigger pulse width in us> )
3. TriggerReset
4. Jump 0

So, to get e.g. exactly 10 images per second from the camera, the program would look like this (of course the exposure time must then be smaller than or equal to the frame time in normal shutter mode):

0. WaitClocks 99000
1. TriggerSet 1
2. WaitClocks 1000
3. TriggerReset
4. Jump 0
Figure 1: wxPropView - Entering the sample "Achieve a defined image frequency"
See also
Download this sample as an rtp file: Frequency10Hz.rtp. To open the file in wxPropView, click on "Digital I/O -> HardwareRealTimeController -> Filename" and select the downloaded file. Afterwards, click on "int Load( )" to load the HRTC program.
Note
Please note the max. frame rate of the corresponding sensor!

To see a code sample (in C++) showing how this can be implemented in an application, see the description of the class mvIMPACT::acquire::RTCtrProgram (C++ developers)
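Transcribed to that API, the 10 Hz program above might look roughly like this (a sketch modelled after the RTCtrProgram example in the SDK manual; obtaining the program instance from the device's I/O subsystem as well as the exact property and enumeration names may vary with the SDK version):

#include <mvIMPACT_CPP/mvIMPACT_acquire.h>

using namespace mvIMPACT::acquire;

void setup10HzProgram( RTCtrProgram* pProgram )
{
  pProgram->setProgramSize( 5 );
  RTCtrProgramStep* pStep = pProgram->programStep( 0 );
  pStep->opCode.write( rtctrlProgWaitClocks );   // 0. WaitClocks 99000
  pStep->clocks_us.write( 99000 );
  pStep = pProgram->programStep( 1 );
  pStep->opCode.write( rtctrlProgTriggerSet );   // 1. TriggerSet 1
  pStep->frameID.write( 1 );
  pStep = pProgram->programStep( 2 );
  pStep->opCode.write( rtctrlProgWaitClocks );   // 2. WaitClocks 1000
  pStep->clocks_us.write( 1000 );
  pStep = pProgram->programStep( 3 );
  pStep->opCode.write( rtctrlProgTriggerReset ); // 3. TriggerReset
  pStep = pProgram->programStep( 4 );
  pStep->opCode.write( rtctrlProgJumpLoc );      // 4. Jump 0
  pStep->address.write( 0 );
  pProgram->mode.write( rtctrlModeRun );         // start executing the program
}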

Delay the external trigger signal (HRTC)

Note
Please have a look at the Hardware Real-Time Controller (HRTC) chapter for basic information.
0. WaitDigin DigIn0->On
1. WaitClocks <delay time>
2. TriggerSet 0
3. WaitClocks <trigger pulse width>
4. TriggerReset
5. Jump 0

<trigger pulse width> should not be less than 100 us.
Figure 1: Delay the external trigger signal

As soon as digital input 0 switches to On (step 0), the HRTC waits for the <delay time> (step 1) and then starts the image exposure. The exposure time is taken from the exposure setting of the camera. Step (5) jumps back to the beginning to wait for the next incoming signal.

Note
WaitDigIn waits for a state.
Between TriggerSet and TriggerReset there has to be a waiting period.
If you are waiting for an external edge in an HRTC sequence like
WaitDigIn[On,Ignore]
WaitDigIn[Off,Ignore]
the minimum pulse width which can be detected by the HRTC has to be at least 5 us.

Creating double acquisitions (HRTC)

Note
Please have a look at the Hardware Real-Time Controller (HRTC) chapter for basic information.

If you need a double acquisition, i.e. you want to take two images within a very short time interval, you can achieve this by using the HRTC.

With the following HRTC code, you will

  • take an image using TriggerSet, and after TriggerReset you have to
  • set the camera to ExposeSet immediately.
  • Now you have to wait until the first image has been read out and then
  • set the second TriggerSet.

The ExposureTime was set to 200 us.

0    WaitDigin  DigitalInputs[0] - On
1    TriggerSet  1
2    WaitClocks 200
3    TriggerReset
4    WaitClocks 5
5    ExposeSet
6    WaitClocks 60000
7    TriggerSet 2
8    WaitClocks 100
9    TriggerReset
10   ExposeReset
11   WaitClocks 60000
12   Jump 0



Take two images after one external trigger (HRTC)

Note
Please have a look at the Hardware Real-Time Controller (HRTC) chapter for basic information.
0. WaitDigin DigIn0->Off
1. TriggerSet 1
2. WaitClocks <trigger pulse width>
3. TriggerReset
4. WaitClocks <time between 2 acquisitions - 10us> (= WC1)
5. TriggerSet 2
6. WaitClocks <trigger pulse width>
7. TriggerReset
8. Jump 0

<trigger pulse width> should not be less than 100 us.
Figure 1: Take two images after one external trigger

This program generates two internal trigger signals after digital input 0 goes low. The time between those internal trigger signals is defined by step (4). Each image gets a different frame ID: the first one has the number 1, defined in command (1), and the second image will have the number 2. The application can query the frame ID of each image, so it is always known which image is the first and which is the second.


Take two images with different expose times after an external trigger (HRTC)

Note
Please have a look at the Hardware Real-Time Controller (HRTC) chapter for basic information.

The following code shows the solution in combination with a CCD model of the camera. With CCD models you have to set the exposure time via the trigger pulse width.

0.  WaitDigin DigIn0->Off
1.  ExposeSet
2.  WaitClocks <expose time image1 - 10us> (= WC1)
3.  TriggerSet 1
4.  WaitClocks <trigger pulse width>
5.  TriggerReset
6.  ExposeReset
7.  WaitClocks <time between 2 acquisitions - expose time image1 - 10us> (= WC2)
8.  ExposeSet
9.  WaitClocks <expose time image2 - 10us> (= WC3)
10. TriggerSet 2
11. WaitClocks <trigger pulse width>
12. TriggerReset
13. ExposeReset
14. Jump 0

<trigger pulse width> should not be less than 100 us.
Figure 1: Take two images with different expose times after an external trigger
Note
Due to the internal loop waiting for a trigger signal, the WaitClocks call between "TriggerSet 1" and "TriggerReset" must amount to 100 us. For this reason, the trigger signal cannot be missed.
Before the ExposeReset, you have to call TriggerReset, otherwise the normal flow will continue and the image data will be lost!
The sensor exposure time after the TriggerSet is 0 us.
See also
Download this sample as an rtp file: 2Images2DifferentExposureTimes.rtp with two consecutive exposure times (10ms / 20ms). To open the file in wxPropView, click on "Digital I/O -> HardwareRealTimeController -> Filename" and select the downloaded file. Afterwards, click on "int Load( )" to load the HRTC program. There are timeouts added in line 4 and line 14 to illustrate the different exposure times.

Using a CMOS model (e.g. the mvBlueFOX-MLC205), a sample with four consecutive exposure times (10ms / 20ms / 40ms / 80ms) triggered just by one hardware input signal would look like this:

0.  WaitDigin DigIn0->On
1.  TriggerSet
2.  WaitClocks 10000 (= 10 ms)
3.  TriggerReset
4.  WaitClocks 1000000 (= 1 s)
5.  TriggerSet
6.  WaitClocks 20000 (= 20 ms)
7.  TriggerReset
8.  WaitClocks 1000000 (= 1 s)
9.  TriggerSet
10. WaitClocks 40000 (= 40 ms)
11. TriggerReset
12. WaitClocks 1000000 (= 1 s)
13. TriggerSet
14. WaitClocks 80000 (= 80 ms)
15. TriggerReset
16. WaitClocks 1000000 (= 1 s)
17. Jump 0
See also
This second sample is also available as an rtp file: MLC205_four_images_diff_exp.rtp.



Edge controlled triggering (HRTC)

Note
Please have a look at the Hardware Real-Time Controller (HRTC) chapter for basic information.

To achieve edge-controlled triggering, you can use the HRTC. Please follow these steps:

  1. First of all, you have to set the TriggerMode to OnHighLevel.
  2. Then, set the TriggerSource to RTCtrl.
Figure 1: wxPropView - TriggerMode and TriggerSource

Afterwards you have to configure the HRTC program:

  1. The HRTC program waits for a rising edge at the digital input 0 (step 1).
  2. If there is a rising edge, the trigger will be set (step 2).
  3. After a short wait time (step 3),
  4. the trigger will be reset (step 4).
  5. Now, the HRTC program waits for a falling edge at the digital input 0 (step 5).
  6. If there is a falling edge, the program will jump back to step 0 (step 6).
Note
The waiting time at step 0 is necessary to debounce the signal level at the input (the duration should be shorter than the frame time).
Figure 2: wxPropView - Edge controlled triggering using HRTC
See also
Download this sample as a capture settings file: MLC200wG_HRTC_TriggerFromHighLevelToEdgeControl.xml. How you can work with capture settings is described in the following chapter.

To see a code sample (in C++) showing how this can be implemented in an application, see the description of the class mvIMPACT::acquire::RTCtrProgram (C++ developers)

Multiple camera samples (HRTC)

Note
Please have a look at the Hardware Real-Time Controller (HRTC) chapter for basic information.

Using multiple cameras, the following samples are available:

Delay the expose start of the following camera (HRTC)

Note
Please have a look at the Hardware Real-Time Controller (HRTC) chapter for basic information.
The use case Synchronize the cameras to expose at the same time shows how you have to connect the cameras.

If a defined delay is necessary between the cameras, the HRTC can do the synchronization work.

In this case, one camera must be the master. The external trigger signal that will start the acquisition must be connected to one of the camera's digital inputs. One of its digital outputs will then be connected to the digital input of the next camera. So camera one uses its digital output to trigger camera two. How to connect the cameras to one another can also be seen in the following image:

Figure 1: Connection diagram for a defined delay from the exposure start of one camera relative to another

Assume that the external trigger is connected to digital input 0 of camera one and that digital output 0 of camera one is connected to digital input 0 of camera two. Each additional camera will then be connected to its predecessor like camera two is connected to camera one. The HRTC of camera one then has to be programmed somewhat like this:

0. WaitDigin DigIn0->On
1. TriggerSet 0
2. WaitClocks <trigger pulse width>
3. TriggerReset
4. WaitClocks <delay time>
5. SetDigout DigOut0->On
6. WaitClocks 100us
7. SetDigout DigOut0->Off
8. Jump 0

<trigger pulse width> should not be less than 100 us.

When the cameras are set up to start the exposure on the rising edge of the signal, <delay time> is of course the desired delay time minus <trigger pulse width>.

If more than two cameras shall be connected like this, every camera except the last one must run a program like the one discussed above. The delay times can of course vary.

Figure 2: Delay the expose start of the following camera