MATRIX VISION - mvBlueLYNX-M7 Technical Documentation
Any GenICam™ compliant device for which a GenICam™ GenTL compliant capture driver exists in mvIMPACT Acquire can be used through the mvIMPACT Acquire interface. However, since it cannot be known which features a device supports until the device has been initialised and its GenICam™ XML file has been processed, it is not possible to provide a complete static C++ wrapper for every device.
Therefore, an interface code generator capable of handling arbitrary devices has been embedded into each driver. This code generator can be used to create a convenient C++ interface file that provides access to every feature offered by a device.
To access the features needed to generate a C++ wrapper interface, a device must be initialised. Code can only be generated for the interface layout that was selected when the device was opened. If wrapper files shall be created for more than a single interface layout, the steps explaining the creation of the wrapper files must be repeated for each interface layout.
Once the device has been opened, the code generator can be accessed by navigating to "System Settings -> CodeGeneration".
To generate code, first select an appropriate file name. To prevent file name clashes, keep the following hints in mind when choosing a file name:
If only a single device family is involved but 2 interface layouts will be used later, a suitable file name might be "mvIMPACT_acquire_GenICam_Wrapper_DeviceSpecific.h".
For a more complex application involving different device families using the GenICam interface layout only, something like this might make sense:
Once a file name has been selected the code generator can be invoked by executing the "int GenerateCode()" method:
The result of the code generator run will be written into the "LastResult" property afterwards:
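For applications that need to (re-)generate the wrapper without opening wxPropView, the same feature tree can be driven programmatically through the generic mvIMPACT Acquire C++ property interface. The following is only a sketch: the feature names ("CodeGeneration", "FileName", "GenerateCode", "LastResult") are taken from the text above, but the exact list path and the method binding name are assumptions that should be verified against your driver version.

#include <iostream>
#include <string>
#include <mvIMPACT_CPP/mvIMPACT_acquire.h>

using namespace mvIMPACT::acquire;

// Hypothetical sketch: invoke the embedded code generator from code.
void generateWrapperFile( Device* pDev, const std::string& fileName )
{
    pDev->open(); // the device must be initialised before code can be generated
    // locate the "CodeGeneration" list below the system settings of this device
    DeviceComponentLocator locator( pDev, dltSystemSettings );
    locator.bindSearchBase( locator.searchbase_id(), "CodeGeneration" );
    PropertyS fileNameProp;
    locator.bindComponent( fileNameProp, "FileName" );
    fileNameProp.write( fileName );
    Method generateCode;
    locator.bindComponent( generateCode, "GenerateCode@i" ); // name/signature assumed
    generateCode.call();
    PropertyS lastResult;
    locator.bindComponent( lastResult, "LastResult" );
    std::cout << "Code generator result: " << lastResult.read() << std::endl;
}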
Each header file generated by the code generator will include "mvIMPACT_CPP/mvIMPACT_acquire.h", thus when an application is compiled with automatically generated files, these header files must have access to this file. This can easily be achieved by setting up the build environment / Makefile appropriately.
To avoid problems with multiple inclusion, the file will use an include guard built from the file name.
Within each header file the generated data types will reside in a sub-namespace of "mvIMPACT::acquire" in order to avoid name clashes when working with several different generated files in the same application. The namespace will automatically be generated from the ModelName tag and the file version tags in the device's GenICam XML file as well as the interface layout. For a device with a ModelName tag of mvBlueIntelligentDevice and a file version of 1.1.0 something like this will be created:
namespace mvIMPACT {
namespace acquire {
namespace DeviceSpecific { // the name of the interface layout used during the process of code creation
namespace MATRIX_VISION_mvBlueIntelligentDevice_1 {
// this name will be constructed from the version and product
// information that comes with the GenICam XML file. As defined
// by the GenICam standard, different major versions of a device's
// XML file may not be compatible thus different interface files should be created here

// all code will reside in this inner namespace

} // namespace MATRIX_VISION_mvBlueIntelligentDevice_1
} // namespace DeviceSpecific
} // namespace acquire
} // namespace mvIMPACT
In the application the generated header files can be used like normal include files:
#include <string>
#include <mvIMPACT_CPP/mvIMPACT_acquire.h>
#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam_Wrapper_DeviceSpecific.h>
To access data types from the generated header files, the namespaces must of course be taken into account. When only a single interface file has been created, the easiest approach would probably be an appropriate using statement:
using namespace mvIMPACT::acquire::DeviceSpecific::MATRIX_VISION_mvBlueIntelligentDevice_1;
If several files created from different devices shall be used, and these devices define similar features in slightly different ways, this might result in name clashes and/or unexpected behaviour. In that case the namespaces should be specified explicitly when creating instances of data types from the header file in the application:
//-----------------------------------------------------------------------------
void fn( Device* pDev )
//-----------------------------------------------------------------------------
{
    mvIMPACT::acquire::DeviceSpecific::MATRIX_VISION_mvBlueIntelligentDevice_1::DeviceControl dc( pDev );
    if( dc.timestampLatch.isValid() )
    {
        dc.timestampLatch.call();
    }
}
When working with a using statement the same code can be written like this as well:
//-----------------------------------------------------------------------------
void fn( Device* pDev )
//-----------------------------------------------------------------------------
{
    DeviceControl dc( pDev );
    if( dc.timestampLatch.isValid() )
    {
        dc.timestampLatch.call();
    }
}
Due to random process deviations, technical limitations of the sensors, etc., there are various reasons why image sensors show image errors. MATRIX VISION provides several procedures to correct these errors; by default these are host-based calculations.
The provided image correction procedures are:
The path "Setting -> Base -> ImageProcessing -> ..." indicates that these corrections are host-based corrections.
Before starting, consider the following hints:

Check the Width and Height settings, e.g. in "Setting -> Base -> Camera -> GenICam -> Image Format Control" when using the GenICam interface layout.

Due to random process deviations, not all pixels in an image sensor array will react in the same way to a given light condition. These variations are known as blemishes or defective pixels.
There are two types of defective pixels:
To correct the leaky pixels, the following steps are necessary:

1. Set the gain to minimum ("Gain = 0 dB") and adapt the exposure time ("Setting -> Base -> Camera -> GenICam -> Acquisition Control", e.g. "ExposureTime = 360 msec") to the given operating conditions.
2. Set the (Filter-)"Mode = Calibrate leaky pixel".
3. Snap an image ("Acquisition Mode = SingleFrame").
4. Select the replacement method: "Replace 3x1 average" or "Replace 3x3 median".
5. Save the settings ("Action -> Capture Settings -> Save Active Device Settings").
The filter checks:
Pixel > LeakyPixelDeviation_ADCLimit // (default value: 50)
All pixels above this value are considered leaky pixels.
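In an application the same calibration can be performed through the host-based filter properties of the C++ interface. The sketch below assumes the property and enum names of the mvIMPACT Acquire ImageProcessing class (defectivePixelsFilterMode, defectivePixelsFilterLeakyPixelDeviation_ADCLimit and the dpfm... literals); verify them against your mvIMPACT Acquire version.

#include <mvIMPACT_CPP/mvIMPACT_acquire.h>

using namespace mvIMPACT::acquire;

// Sketch of a host-based leaky pixel calibration, assuming the
// ImageProcessing property names listed in the lead-in paragraph.
void calibrateLeakyPixels( Device* pDev, FunctionInterface& fi )
{
    ImageProcessing ip( pDev );
    ip.defectivePixelsFilterLeakyPixelDeviation_ADCLimit.write( 50 ); // default threshold
    ip.defectivePixelsFilterMode.write( dpfmCalibrateLeakyPixel );
    // snap ONE image in total darkness (gain 0 dB, long exposure time)
    fi.imageRequestSingle();
    const int requestNr = fi.imageRequestWaitFor( -1 );
    if( fi.isRequestNrValid( requestNr ) )
    {
        fi.imageRequestUnlock( requestNr );
    }
    // afterwards select the desired replacement method
    ip.defectivePixelsFilterMode.write( dpfm3x3Median );
}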
To correct the cold pixels, the following steps are necessary:

1. Set the (Filter-)"Mode = Calibrate cold pixel" (Figure 2).
2. Snap an image ("Acquisition Mode = SingleFrame").
3. Select the replacement method: "Replace 3x1 average" or "Replace 3x3 median".
4. Save the settings ("Action -> Capture Settings -> Save Active Device Settings").
The filter checks:
Pixel < T[cold] // (default value: 15 %) // T[cold] = deviation of the average gray value (ColdPixelDeviation_pc)
All pixels below this value have a below-normal dynamic behavior and are considered cold pixels.
"DefectivePixelsFound"
. If you want to reset the correction data or repeat the correction process you have to set the (Filter-) "Mode = Reset Calibration Data"
.To save and load the defective pixel data, appropriate functions are available:
The section "Setting -> Base -> ImageProcessing -> DefectivePixelsFilter" was also extended (see Figure 2a). First, DefectivePixelsFound indicates the number of defective pixels found. Their coordinates are now available through the properties DefectivePixelOffsetX and DefectivePixelOffsetY. In addition, it is possible to edit, add and delete these values manually (right-click on "DefectivePixelOffset" and select "Append Value" or "Delete Last Value"). Second, with the corresponding functions you can exchange the data from the filter with the camera and vice versa.
Just right-click on mvDefectivePixelWriteToDevice and click on "Execute" to write the data to the camera (and hand over the data to the Features_section_mvDefectivePixelCorrectionControl). To store the data permanently in the camera's non-volatile memory, mvDefectivePixelDataSave must be called afterwards as well!
When the camera is opened, the driver will load the defective pixel data from the camera. If pixels are already available in the filter (via calibration), you can nevertheless load the values from the camera. In this case the values will be merged with the existing ones, i.e. new ones are added and duplicates are removed.
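The same data exchange can be triggered from code. Below is a hedged sketch using the generic property interface to call the filter's methods; the method names (mvDefectivePixelWriteToDevice, mvDefectivePixelDataSave) come from the text above, while the binding path and method signature are assumptions.

#include <mvIMPACT_CPP/mvIMPACT_acquire.h>

using namespace mvIMPACT::acquire;

// Hypothetical sketch: push the calibrated defective pixel list into the
// camera; the data must be saved to non-volatile memory afterwards.
void writeDefectivePixelsToCamera( Device* pDev )
{
    DeviceComponentLocator locator( pDev, dltSetting, "Base" );
    locator.bindSearchBase( locator.searchbase_id(), "ImageProcessing/DefectivePixelsFilter" );
    Method writeToDevice;
    locator.bindComponent( writeToDevice, "mvDefectivePixelWriteToDevice@i" ); // signature assumed
    writeToDevice.call();
    // afterwards call the GenICam feature mvDefectivePixelDataSave (e.g. via the
    // generated GenICam wrapper) to store the data permanently in the camera
}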
Dark current is a characteristic of image sensors: image sensors deliver a signal even in total darkness because, for example, thermal energy spontaneously creates charge carriers. This signal overlays the image information. Dark current depends on two circumstances:
The dark current correction is a pixel-wise correction where the dark current correction image removes the dark current from the original image. To get a better result it is necessary to snap the original and the dark current images with the same exposure time and temperature.
To correct the dark current, the following steps are necessary:

1. Snap an image in total darkness ("Acquisition Mode = SingleFrame").
2. Save the settings ("Action -> Capture Settings -> Save Active Device Settings").
The filter snaps a number of images and averages the dark current images to one correction image.
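A corresponding calibration sketch for the dark current filter is shown below, assuming the darkCurrentFilterMode property and the dcfm... enum literals of the mvIMPACT Acquire C++ interface; the image count of 5 is only an example.

#include <mvIMPACT_CPP/mvIMPACT_acquire.h>

using namespace mvIMPACT::acquire;

// Sketch: calibrate the host-based dark current filter, then switch it on.
void calibrateDarkCurrent( Device* pDev, FunctionInterface& fi )
{
    ImageProcessing ip( pDev );
    ip.darkCurrentFilterMode.write( dcfmCalibrateDarkCurrent );
    // snap the calibration images in total darkness; the filter averages them
    for( int i = 0; i < 5; i++ )
    {
        fi.imageRequestSingle();
        const int requestNr = fi.imageRequestWaitFor( -1 );
        if( fi.isRequestNrValid( requestNr ) )
        {
            fi.imageRequestUnlock( requestNr );
        }
    }
    ip.darkCurrentFilterMode.write( dcfmOn ); // apply the correction from now on
}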
Each pixel of a sensor chip is a single detector with its own properties; in particular, this pertains to its sensitivity and, as the case may be, its spectral sensitivity. To correct these variations (including lens and illumination variations), a plain, uniformly "colored" calibration plate (e.g. white or gray) is snapped as a flat-field, which will then be used to correct the original image. Between the flat-field correction and the actual application the optics must not be changed. To reduce errors while performing the flat-field correction, a saturation between 50 % and 75 % of the flat-field in the histogram is recommended.
To make a flat-field correction, the following steps are necessary:

1. Select the pixel format, e.g. BayerXY, in "Setting -> Base -> Camera -> GenICam -> Image Format Control -> PixelFormat".
2. Set the (Filter-)"Mode = Calibrate" (Figure 6).
3. Snap the calibration images ("Acquisition Mode = Continuous").
4. Set the (Filter-)"Mode = On".
5. Save the settings ("Action -> Capture Settings -> Save Active Device Settings").
The filter snaps a number of images (according to the value of CalibrationImageCount, e.g. 5) and averages the flat-field images to one correction image.
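The flat-field calibration could look like this in code. This is a sketch under the assumption that the ImageProcessing class exposes flatFieldFilterCalibrationImageCount, flatFieldFilterMode and the fffm... enum literals; check the names against your SDK headers.

#include <mvIMPACT_CPP/mvIMPACT_acquire.h>

using namespace mvIMPACT::acquire;

// Sketch: calibrate the host-based flat-field filter and enable it.
void calibrateFlatField( Device* pDev, FunctionInterface& fi )
{
    ImageProcessing ip( pDev );
    ip.flatFieldFilterCalibrationImageCount.write( 5 );
    ip.flatFieldFilterMode.write( fffmCalibrateFlatField );
    // continuously acquire images of the uniformly lit calibration plate
    for( int i = 0; i < 5; i++ )
    {
        fi.imageRequestSingle();
        const int requestNr = fi.imageRequestWaitFor( -1 );
        if( fi.isRequestNrValid( requestNr ) )
        {
            fi.imageRequestUnlock( requestNr );
        }
    }
    ip.flatFieldFilterMode.write( fffmOn ); // apply the correction from now on
}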
There are several use cases concerning the acquisition / image quality of the camera:
It is possible to limit the used bandwidth in the following way:

1. In "Setting -> Base -> Camera -> GenICam -> Device Control -> Device Link Selector", set the property "Device Link Throughput Limit Mode" to "On".
2. Now you can set the desired bandwidth in bits per second via "Device Link Throughput Limit".
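In an application using the GenICam interface layout the same limit can be set in code. A minimal sketch, assuming the SFNC-style property names exposed by the mvIMPACT::acquire::GenICam::DeviceControl wrapper class; the limit of 400000000 bits per second is just an example value.

#include <mvIMPACT_CPP/mvIMPACT_acquire.h>
#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

using namespace mvIMPACT::acquire;

// Sketch: limit the streaming bandwidth of a device to 400 Mbit/s.
void limitBandwidth( Device* pDev )
{
    GenICam::DeviceControl dc( pDev );
    dc.deviceLinkThroughputLimitMode.writeS( "On" );
    dc.deviceLinkThroughputLimit.write( 400000000 ); // bits per second
}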
There are several use cases concerning device memory:
There are two ways to get access to the UserData, which was transferred via the vl_transmit library before: In wxPropView, you will find the UserData in "Image Setting -> RequestInfo". In an application, the UserData is available via the request object, where pRequest points to a C++ mvIMPACT::acquire::Request object.
There are several use cases concerning multiple cameras:
GigE Vision supports streaming data over a network via multicast to multiple destinations simultaneously. Multicast means that one source streams data to a group of recipients in the same subnet. As long as the recipients are members of this multicast group they will receive the packets. Other members of the same local subnet will skip the packets (but the packets are still sent and therefore consume bandwidth).
To set up a multicast environment, GigE Vision introduced camera access types. The most important ones are Control and Read. With Control, a primary application can be used to set up the camera which will stream the data via multicast. With Read, you can set up secondary applications which will play back the data stream.
Because the mvBlueCOUGAR cameras are GigE Vision compliant devices, you can set up a Multicast environment using wxPropView. This use case will show how you can do this.
On the primary application you have to establish "Control" access. For this, a multicast address, e.g. "239.255.255.255", has to be set as the stream's destination address. One or more applications running on different machines can then establish "read-only" access to the very same device.
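Programmatically, the same setup could look like the following sketch. The access-mode property (desiredAccess) and the use of GevSCDA as the stream destination address are assumptions based on mvIMPACT Acquire / SFNC naming, not a verbatim part of this manual.

#include <mvIMPACT_CPP/mvIMPACT_acquire.h>
#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

using namespace mvIMPACT::acquire;

// Hypothetical sketch: open the device as the controlling ("primary")
// application and direct the stream to a multicast group.
void openAsMulticastMaster( Device* pDev )
{
    pDev->desiredAccess.write( damControl );
    pDev->open();
    GenICam::TransportLayerControl tlc( pDev );
    tlc.gevSCDA.write( 0xEFFFFFFF ); // 239.255.255.255
}
// Secondary applications would open the same device with damRead
// ("read-only") access to receive the multicast stream.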
GigE Vision specifies so called Action Commands to trigger an action in multiple devices at roughly the same time.
Action commands can be unicasted or broadcasted by applications with either exclusive, write or read (only when the device is configured accordingly) access to the device. They can be used e.g. to:
The most typical scenario is when an application wants to trigger a simultaneous action on multiple devices. This case is shown in the figure below. The application fires a broadcast action command that will reach all the devices on the subnet.
Action commands can also be scheduled for a time in the future (Scheduled Action Commands, see below). Depending on whether an action command is unicasted or broadcasted, the command will either reach a single device on a given subnet or multiple devices. Action commands can also be used by secondary applications; the sender can even be another device on the same subnet. This is depicted in the following figure.
Upon reception of an action command, the device will decode the information to identify which internal action signal is requested. An action signal is a device-internal signal that can be used as a trigger for functional units inside the device (e.g. a frame trigger). It can be routed to all signal sinks of the device.
Each action command message contains information for the device to validate the requested operation:

- device_key to authorize the action on this device
- group_key to define a group of devices on which actions have to be executed
- group_mask to be used to filter out some of these devices from the group

Action commands can only be asserted if the device has an open primary control channel (i.e. if an application has established write or exclusive access to a device) or when unconditional action mode is enabled.
A device can define several action commands which can be selected via the ActionSelector in the device's feature property tree.
The conditions for an action command to be asserted by the device are:

- the device has an open primary control channel or unconditional action mode is enabled (see below)
- the device_key in the action command sent by the application and the ActionDeviceKey property of the device must be equal
- the group_key in the action command sent by the application and the ActionGroupKey property for the corresponding action of the device must be equal
- the bitwise AND of the group_mask in the action command sent by the application against the ActionGroupMask for the corresponding action of the device must be non-zero, i.e. they must have at least one common bit set at the same position in the register

When these 4 conditions are met for at least one action signal configured on the device, the device internally asserts the requested action. As these conditions could be met on more than one action, the device could assert more than one action signal in parallel. When one of the 4 conditions is not met for any supported action signal, the device ignores the action command.
The first condition asks for write or exclusive access being established between the application and the device, unless the ActionUnconditionalMode has been enabled. When this mode is enabled, the device can assert the requested action even if no application has established write or exclusive access, as long as the 3 other conditions are met.
Scheduled Action Commands provide a way to trigger actions in a device at a specific time in the future. The typical use case is depicted in the following diagram:
The transmitter of an action command records the exact time t0 when the source signal is asserted (external signal). This time t0 is incremented by a delta time ΔL and transmitted in an action command to the receivers. The delta time ΔL has to be larger than the longest possible transmission and processing latency of the action command in the network.
If the packet passes the action command filters in the receiver, the action signal is put into a time queue (the depth of this queue is indicated by the ActionQueueSize property).
When the time of the local clock is greater than or equal to the time of an action signal in the queue, the signal is removed from the queue and asserted. Combined with the timestamp precision of IEEE 1588, which can be sub-microsecond, a Scheduled Action Command provides a way to implement a low-jitter software trigger. If the sender of an action command is not capable of setting a future time in the packet, the action command has a flag to fall back to legacy mode (bit 0 of the flag field). In this mode the signal is asserted the moment the packet passes the action command filters.
The following examples illustrate the behavior of action commands in various scenarios. The figure below shows 4 different action commands sent by an application. The content of the action command packet is illustrated on the left side of the figure.
The content of the action command must be examined against the conditions listed above for each supported action signal.
For the first request (ACTION_CMD #1)
|            | Action Command 1 | Device 0 ACTION 0 | Device 0 ACTION 1 | Device 0 ACTION 2 | Device 1 ACTION 0 |
|------------|------------------|-------------------|-------------------|-------------------|-------------------|
| device_key | 0x34638452       | 0x34638452        | 0x34638452        | 0x34638452        | 0x34638452        |
| group_key  | 0x00000024       | 0x00000024        | 0x00000042        | 0x12341244        | 0x00000024        |
| group_mask | 0x00000003       | 0x00000001        | 0xFFFFFFFF        | 0x00000000        | 0x00000002        |
Device 0 receives the request and checks the 4 conditions:

- exclusive or write access has been established between the application and the device (or unconditional action mode is enabled)
- the device_key matches
- the group_key matches
- the bitwise AND of the group_mask is non-zero

All 4 conditions are met only for ACTION_0, thus the device immediately asserts the internal signal represented by ACTION_0. The same steps are followed by Device 1. Only the group_mask is different, but the logical bitwise AND operation nevertheless produces a non-zero value, leading to the assertion of ACTION_0 by Device 1.
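The matching rule can be expressed compactly in code. The small helper below is an illustration, not part of the mvIMPACT Acquire API; it returns true when the three key/mask conditions are met for one action signal, while the access condition has to be checked separately.

#include <cstdint>

// Illustration of the action command matching rule: both keys must be equal
// and the bitwise AND of the masks must be non-zero.
bool actionSignalMatches( std::uint32_t cmdDeviceKey, std::uint32_t cmdGroupKey, std::uint32_t cmdGroupMask,
                          std::uint32_t actionDeviceKey, std::uint32_t actionGroupKey, std::uint32_t actionGroupMask )
{
    return ( cmdDeviceKey == actionDeviceKey ) &&
           ( cmdGroupKey == actionGroupKey ) &&
           ( ( cmdGroupMask & actionGroupMask ) != 0 );
}

For ACTION_CMD #1 and Device 1 this yields (0x00000003 & 0x00000002) = 0x00000002, i.e. non-zero, which is why Device 1 asserts ACTION_0 as well.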
For the second request (ACTION_CMD #2)
|            | Action Command 2 | Device 0 ACTION 0 | Device 0 ACTION 1 | Device 0 ACTION 2 | Device 1 ACTION 0 |
|------------|------------------|-------------------|-------------------|-------------------|-------------------|
| device_key | 0x34638452       | 0x34638452        | 0x34638452        | 0x34638452        | 0x34638452        |
| group_key  | 0x00000042       | 0x00000024        | 0x00000042        | 0x12341244        | 0x00000024        |
| group_mask | 0x000000F2       | 0x00000001        | 0xFFFFFFFF        | 0x00000000        | 0x00000002        |
Checking the 4 conditions, Device 0 will assert ACTION_1 while Device 1 will not assert any signal because the group_key condition is not met. Therefore, Device 1 ignores the request.
For the third request (ACTION_CMD #3)
|            | Action Command 3 | Device 0 ACTION 0 | Device 0 ACTION 1 | Device 0 ACTION 2 | Device 1 ACTION 0 |
|------------|------------------|-------------------|-------------------|-------------------|-------------------|
| device_key | 0x34638452       | 0x34638452        | 0x34638452        | 0x34638452        | 0x34638452        |
| group_key  | 0x00000024       | 0x00000024        | 0x00000042        | 0x12341244        | 0x00000024        |
| group_mask | 0x00000002       | 0x00000001        | 0xFFFFFFFF        | 0x00000000        | 0x00000002        |
In the third example, the group_mask and group_key of Device 0 do not match ACTION_CMD #3 for any of ACTION_0 to ACTION_2. Therefore, Device 0 ignores the request. Device 1 asserts ACTION_0 since the 4 conditions are met.
The ACTION_CMD is flexible enough to accommodate “simultaneous” triggering of the same functional action in functionally different devices.
For instance, let's assume the software trigger of Device 0 can only be associated with its ACTION_3 and the software trigger of Device 1 can only be associated with its ACTION_1. A single action command can still trigger the same functional action on both devices, provided that their respective action group keys and masks are set up to meet the conditions from the list above.
For the fourth request (ACTION_CMD #4)
|            | Action Command 4 | Device 0 ACTION 3 | Device 1 ACTION 1 |
|------------|------------------|-------------------|-------------------|
| device_key | 0x34638452       | 0x34638452        | 0x34638452        |
| group_key  | 0x00000001       | 0x00000001        | 0x00000001        |
| group_mask | 0x00000001       | 0xFFFFFFFF        | 0xFFFFFFFF        |
In this case, Device 0 asserts ACTION_3 and Device 1 asserts ACTION_1 since the conditions are met. As a result of this, the software trigger of both devices can be “simultaneously” triggered even though they are associated to different action numbers.
The following section uses C# code snippets but the same thing can be done using a variety of other programming languages as well.
To set up an action command on the device something like this is needed:
private static void setupActionCommandOnDevice(GenICam.ActionControl ac, Int64 deviceKey, Int64 actionNumber, Int64 groupKey, Int64 groupMask)
{
    if ((deviceKey == 0) && (groupKey == 0) && (groupMask == 0))
    {
        Console.WriteLine("Switching off action {0}.", actionNumber);
    }
    else
    {
        Console.WriteLine("Setting up action {0}. Device key: 0x{1:X8}, group key: 0x{2:X8}, group mask: 0x{3:X8}", actionNumber, deviceKey, groupKey, groupMask);
    }
    ac.actionDeviceKey.write(deviceKey);
    ac.actionSelector.write(actionNumber);
    ac.actionGroupKey.write(groupKey);
    ac.actionGroupMask.write(groupMask);
}
Note: The deviceKey parameter will only be set up once as there is only one register for it on each device while there can be various action commands.

Now, to send action commands to devices connected to a certain interface, it might be necessary to locate the correct instance of the InterfaceModule class first. One way to do this would be:
private static List<GenICam.InterfaceModule> getGenTLInterfaceListForDevice(Device pDev)
{
    // first get a list of ALL interfaces in the current system
    GenICam.SystemModule systemModule = new GenICam.SystemModule(pDev);
    Dictionary<String, Int64> interfaceIDToIndexMap = new Dictionary<string, long>();
    Int64 interfaceCount = systemModule.interfaceSelector.maxValue + 1;
    for (Int64 i = 0; i < interfaceCount; i++)
    {
        systemModule.interfaceSelector.write(i);
        interfaceIDToIndexMap.Add(systemModule.interfaceID.read(), i);
    }
    // now try to get access to the interfaces the device in question is connected to
    PropertyI64 interfaceID = new PropertyI64();
    DeviceComponentLocator locator = new DeviceComponentLocator(pDev.hDev);
    locator.bindComponent(interfaceID, "InterfaceID");
    if (interfaceID.isValid == false)
    {
        return null;
    }
    ReadOnlyCollection<String> interfacesTheDeviceIsConnectedTo = interfaceID.listOfValidStrings;
    // create an instance of the GenICam.InterfaceModule class for each interface the device is connected to
    List<GenICam.InterfaceModule> interfaces = new List<GenICam.InterfaceModule>();
    foreach (String interfaceIDString in interfacesTheDeviceIsConnectedTo)
    {
        interfaces.Add(new GenICam.InterfaceModule(pDev, interfaceIDToIndexMap[interfaceIDString]));
    }
    return interfaces;
}
Once the desired interface has been located it could be configured to send an action command like this:
private static void setupActionCommandOnInterface(GenICam.InterfaceModule im, Int32 destinationIP, Int64 deviceKey, Int64 groupKey, Int64 groupMask, bool boScheduledAction)
{
    im.mvActionDestinationIPAddress.write(destinationIP);
    im.mvActionDeviceKey.write(deviceKey);
    im.mvActionGroupKey.write(groupKey);
    im.mvActionGroupMask.write(groupMask);
    im.mvActionScheduledTimeEnable.write(boScheduledAction ? TBoolean.bTrue : TBoolean.bFalse);
    // here the desired execution time must also be configured for scheduled
    // action commands if desired by writing to the im.mvActionScheduledTime property
}
Now the interface is set up completely and sending an action command works like this:
private static void sendActionCommand(GenICam.InterfaceModule im)
{
    im.mvActionSend.call();
}
Depending on the value of destinationIP when actually firing the action command, either a broadcast or a unicast message will be generated and sent to either all or just one device in the subnet. Depending on whether one or more action commands on a device are set up to assert the command, the device will react appropriately or silently ignore the command.
This can be achieved by connecting the same external trigger signal to one of the digital inputs of each camera, as shown in the following figure:
Each camera then has to be configured for external trigger, as in the image below:
This assumes that the image acquisition shall start with the rising edge of the trigger signal. Every camera must be configured like this. Each rising edge of the external trigger signal will then start the exposure of a new image at the same time on each camera. Every trigger signal that occurs during the exposure of an image will be silently discarded.
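With the GenICam interface layout the configuration shown in the image can also be applied in code. Below is a minimal sketch using the mvIMPACT::acquire::GenICam::AcquisitionControl wrapper; the string values follow SFNC naming, and the trigger source "Line4" is only an example that depends on the actual wiring.

#include <mvIMPACT_CPP/mvIMPACT_acquire.h>
#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

using namespace mvIMPACT::acquire;

// Sketch: configure a camera to start the exposure on the rising edge of an
// external trigger signal.
void setupExternalTrigger( Device* pDev )
{
    GenICam::AcquisitionControl ac( pDev );
    ac.triggerSelector.writeS( "FrameStart" );
    ac.triggerMode.writeS( "On" );
    ac.triggerSource.writeS( "Line4" ); // depends on which input the signal is wired to
    ac.triggerActivation.writeS( "RisingEdge" );
}

Applying this to every camera connected to the shared trigger line reproduces the synchronized setup described above.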
There are several use cases concerning the Hardware Real-Time Controller (HRTC):
Using a single camera, the following samples are available:
With the use of the HRTC, any feasible frequency with microsecond (µs) accuracy is possible. The program to achieve this must roughly look like this (with the trigger mode set to ctmOnRisingEdge):
0. WaitClocks( <frame time in µs> - <trigger pulse width in µs> )
1. TriggerSet 1
2. WaitClocks( <trigger pulse width in µs> )
3. TriggerReset
4. Jump 0
So to get e.g. exactly 10 images per second from the camera, the program would look like this (of course the exposure time must then be smaller than or equal to the frame time in normal shutter mode):
0. WaitClocks 99000
1. TriggerSet 1
2. WaitClocks 1000
3. TriggerReset
4. Jump 0
rtp file: Frequency10Hz.rtp. To open the file in wxPropView, click on "Digital I/O -> HardwareRealTimeController -> Filename" and select the downloaded file. Afterwards, click on "int Load( )" to load the HRTC program.

To see a code sample (in C++) showing how this can be implemented in an application, see the description of the class mvIMPACT::acquire::RTCtrProgram (C++ developers).
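As an illustration, the 10 Hz program above could be set up roughly as follows. This is a sketch based on the mvIMPACT::acquire::RTCtrProgram interface just mentioned; the way the program is obtained (here via IOSubSystemBlueFOX) and the exact enum literals may differ for your device family, so check the class documentation.

#include <mvIMPACT_CPP/mvIMPACT_acquire.h>

using namespace mvIMPACT::acquire;

// Sketch: build the 5-step HRTC program that triggers at exactly 10 Hz.
void setup10HzTriggerProgram( Device* pDev )
{
    IOSubSystemBlueFOX ios( pDev ); // assumed IO subsystem class, see lead-in
    RTCtrProgram* pProgram = ios.getRTCtrProgram( 0 );
    pProgram->setProgramSize( 5 );

    RTCtrProgramStep* pStep = pProgram->programStep( 0 ); // 0. WaitClocks 99000
    pStep->opCode.write( rtctrlProgWaitClocks );
    pStep->clocks_us.write( 99000 );

    pStep = pProgram->programStep( 1 );                   // 1. TriggerSet 1
    pStep->opCode.write( rtctrlProgTriggerSet );
    pStep->frameID.write( 1 );

    pStep = pProgram->programStep( 2 );                   // 2. WaitClocks 1000
    pStep->opCode.write( rtctrlProgWaitClocks );
    pStep->clocks_us.write( 1000 );

    pStep = pProgram->programStep( 3 );                   // 3. TriggerReset
    pStep->opCode.write( rtctrlProgTriggerReset );

    pStep = pProgram->programStep( 4 );                   // 4. Jump 0
    pStep->opCode.write( rtctrlProgJumpLoc );
    pStep->address.write( 0 );

    pProgram->mode.write( rtctrlModeRun );                // start executing the program
}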
0. WaitDigin DigIn0->On
1. WaitClocks <delay time>
2. TriggerSet 0
3. WaitClocks <trigger pulse width>
4. TriggerReset
5. Jump 0

<trigger pulse width> should not be less than 100 µs.
As soon as digital input 0 changes its state (step 0), the HRTC waits for the <delay time> (step 1) and then starts the image exposure. The exposure time is taken from the camera's exposure setting. Step (5) jumps back to the beginning so the program can wait for the next incoming signal.
When waiting for an edge with WaitDigIn[On,Ignore] / WaitDigIn[Off,Ignore], the minimum pulse width which can be detected by the HRTC has to be at least 5 µs.
If you need a double acquisition, i.e. take two images in a very short time interval, you can achieve this by using the HRTC.
With the following HRTC code you will get the double acquisition described above. The ExposureTime was set to 200 µs.
0 WaitDigin DigitalInputs[0] - On
1 TriggerSet 1
2 WaitClocks 200
3 TriggerReset
4 WaitClocks 5
5 ExposeSet
6 WaitClocks 60000
7 TriggerSet 2
8 WaitClocks 100
9 TriggerReset
10 ExposeReset
11 WaitClocks 60000
12 Jump 0
0. WaitDigin DigIn0->Off
1. TriggerSet 1
2. WaitClocks <trigger pulse width>
3. TriggerReset
4. WaitClocks <time between 2 acquisitions - 10us> (= WC1)
5. TriggerSet 2
6. WaitClocks <trigger pulse width>
7. TriggerReset
8. Jump 0

<trigger pulse width> should not be less than 100 µs.
This program generates two internal trigger signals after digital input 0 goes low. The time between those internal trigger signals is defined by step (4). Each image gets a different frame ID: the first one has the number 1, defined in command (1), and the second image will have the number 2. The application can query the frame ID of each image, so it is well known which image is the first one and which is the second one.
The following code shows the solution in combination with a CCD model of the camera. With CCD models you have to set the exposure time using the trigger width.
0. WaitDigin DigIn0->Off
1. ExposeSet
2. WaitClocks <expose time image1 - 10us> (= WC1)
3. TriggerSet 1
4. WaitClocks <trigger pulse width>
5. TriggerReset
6. ExposeReset
7. WaitClocks <time between 2 acquisitions - expose time image1 - 10us> (= WC2)
8. ExposeSet
9. WaitClocks <expose time image2 - 10us> (= WC3)
10. TriggerSet 2
11. WaitClocks <trigger pulse width>
12. TriggerReset
13. ExposeReset
14. Jump 0

<trigger pulse width> should not be less than 100 µs.
rtp file: 2Images2DifferentExposureTimes.rtp with two consecutive exposure times (10 ms / 20 ms). To open the file in wxPropView, click on "Digital I/O -> HardwareRealTimeController -> Filename" and select the downloaded file. Afterwards, click on "int Load( )" to load the HRTC program. There are timeouts added in line 4 and line 14 to illustrate the different exposure times.

Using a CMOS model (e.g. the mvBlueFOX-MLC205), a sample with four consecutive exposure times (10 ms / 20 ms / 40 ms / 80 ms) triggered by just one hardware input signal would look like this:
0. WaitDigin DigIn0->On
1. TriggerSet
2. WaitClocks 10000 (= 10 ms)
3. TriggerReset
4. WaitClocks 1000000 (= 1 s)
5. TriggerSet
6. WaitClocks 20000 (= 20 ms)
7. TriggerReset
8. WaitClocks 1000000 (= 1 s)
9. TriggerSet
10. WaitClocks 40000 (= 40 ms)
11. TriggerReset
12. WaitClocks 1000000 (= 1 s)
13. TriggerSet
14. WaitClocks 80000 (= 80 ms)
15. TriggerReset
16. WaitClocks 1000000 (= 1 s)
17. Jump 0
rtp file: MLC205_four_images_diff_exp.rtp.
To achieve edge-controlled triggering, you can use the HRTC. Please follow these steps:
Afterwards you have to configure the HRTC program:
To see a code sample (in C++) showing how this can be implemented in an application, see the description of the class mvIMPACT::acquire::RTCtrProgram (C++ developers).
Using multiple cameras, the following samples are available:
If a defined delay is necessary between the cameras, the HRTC can do the synchronization work.
In this case, one camera must be the master. The external trigger signal that starts the acquisition must be connected to one of this camera's digital inputs. One of its digital outputs is then connected to the digital input of the next camera, so camera one uses its digital output to trigger camera two. How to connect the cameras to one another can also be seen in the following image:
Assume that the external trigger is connected to digital input 0 of camera one and that digital output 0 of camera one is connected to digital input 0 of camera two. Each additional camera is then connected to its predecessor the same way camera two is connected to camera one. The HRTC of camera one then has to be programmed roughly like this:
0. WaitDigin DigIn0->On
1. TriggerSet 0
2. WaitClocks <trigger pulse width>
3. TriggerReset
4. WaitClocks <delay time>
5. SetDigout DigOut0->On
6. WaitClocks 100us
7. SetDigout DigOut0->Off
8. Jump 0

<trigger pulse width> should not be less than 100 µs.
When the cameras are set up to start the exposure on the rising edge of the signal, <delay time> is of course the desired delay time minus <trigger pulse width>.
If more than two cameras shall be connected like this, every camera except the last one must run a program like the one discussed above. The delay times can of course vary.