mvIMPACT Acquire SDK C++
ContinuousCaptureToAVIFile.cpp

The ContinuousCaptureToAVIFile program is based on the C++03 version of the ContinuousCapture.cpp example. These days it could be rewritten using C++11 and later features for threads etc., but as it runs on Windows systems only, this has not been done. If you are interested in AVI file creation AND modern C++, have a look at the SequenceCapture.cpp example instead. The example starts a thread that continuously captures and displays images from the selected device. In addition, all captured images are written into an AVI stream during the acquisition. This causes some additional CPU load, because every image must be flipped (its line order reversed), as images are stored upside down in the AVI stream in order to be displayed correctly by a conventional software player.

Note
This example is available on Windows only. Moreover, it uses the very old, Windows-only Video for Windows (VfW) API. There are better and more portable ways to create video streams these days; one example is the FFmpeg project. An mvIMPACT Acquire example using this project is ContinuousCaptureFFmpeg.cpp. When starting a new project, using Video for Windows is not recommended. Instead, something more portable and modern should be considered.
Please note that, due to weak data transfer rates on certain systems, e.g. USB transfer and data transfer to the hard disk at the same time, the overall performance might suffer. In that case the SequenceCapture.cpp or ContinuousCaptureFFmpeg.cpp sample might be a more feasible approach. Decoupling the actual encoding of the video data from the acquisition using multiple threads might then be a good idea as well.
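The suggested decoupling can be sketched with standard C++11 threading alone (no mvIMPACT Acquire types involved; `Frame` and `FrameQueue` are hypothetical names introduced here): the acquisition thread copies each captured buffer into a queue and continues immediately, while a second thread drains the queue and performs the slow encoding and disk I/O.

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <utility>
#include <vector>

// Toy frame type standing in for a copied image buffer.
using Frame = std::vector<unsigned char>;

// Minimal thread-safe queue handing frames from an acquisition thread to an
// encoder thread so that slow disk/encoder operations never stall capturing.
class FrameQueue
{
    std::queue<Frame> queue_;
    std::mutex mutex_;
    std::condition_variable cv_;
    bool done_ = false;
public:
    void push( Frame f ) // called by the acquisition thread
    {
        {
            std::lock_guard<std::mutex> lock( mutex_ );
            queue_.push( std::move( f ) );
        }
        cv_.notify_one();
    }
    void finish() // called once the acquisition has stopped
    {
        {
            std::lock_guard<std::mutex> lock( mutex_ );
            done_ = true;
        }
        cv_.notify_all();
    }
    // Called by the encoder thread; returns false once the queue has been
    // drained AND finish() has been called.
    bool pop( Frame& f )
    {
        std::unique_lock<std::mutex> lock( mutex_ );
        cv_.wait( lock, [this] { return !queue_.empty() || done_; } );
        if( queue_.empty() )
        {
            return false;
        }
        f = std::move( queue_.front() );
        queue_.pop();
        return true;
    }
};
```

In the real sample the acquisition thread would push a copy of each request's image data and unlock the request immediately. The unbounded queue shown here trades memory for simplicity; a production version would probably cap the queue size and block or drop frames on overflow.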
Program location
The source file ContinuousCaptureToAVIFile.cpp can be found under:
%INSTALLDIR%\apps\ContinuousCaptureToAVIFile\
Note
If you have installed the package without example applications, this file will not be available. On Windows the sample application can be installed or removed from the target system at any time by simply restarting the installation package.
ContinuousCaptureToAVIFile example:
  1. Opens a MATRIX VISION device.
  2. Captures and displays images continuously.
  3. Writes an AVI stream while capturing.
Console Output
[0]: BF000306 (mvBlueFOX-120C, Family: mvBlueFOX, interface layout: DeviceSpecific)

Please enter the number in front of the listed device followed by [ENTER] to open it: 0
Using device number 0.
No command line parameters specified. Available parameters:
  'outputFile' or 'of' to specify the name of the resulting AVI file
  'frameRate' or 'fr' to specify the frame rate(frames per second for playback) of the resulting AVI file
  'recordingTime' or 'rt' to specify the time in ms the sample shall capture image data. 
                          If this parameter is omitted, the capture process will be aborted after the user pressed a key


USAGE EXAMPLE:
  ContinuousCaptureToAVIFile rt=5000 of=myfile.avi frameRate=25

Using output file '.\output.avi' with 25 frames per second for playback (this has nothing to do with the capture 
frame rate but only affects the frame rate stored in the header of the AVI file)

Please note that the frame rate specified only affects the playback speed for the resulting AVI file.
Devices that support a fixed frame rate should be set to the same rate, but this won't be done in this sample, 
thus the playback speed of the AVI file might differ from the real acquisition speed
Initialising the device. This might take some time...

To use the HRTC to configure the mvBlueFOX to capture with a defined frequency press 'y'.
How it works

This sample is very similar to the continuous acquisition example ContinuousCapture.cpp:

First of all the user is prompted to select the device to be used for this sample:

DeviceManager devMgr;
Device* pDev = getDeviceFromUserInput( devMgr );

The function getDeviceFromUserInput() is included via

#include <apps/Common/exampleHelper.h>

Then, after the device has been initialised successfully, image requests are constantly sent to the driver's request queue and the application waits for the results:

// Send all requests to the capture queue. There can be more than 1 queue for some devices, but for this sample
// we will work with the default capture queue. If a device supports more than one capture or result
// queue, this will be stated in the manual. If nothing is mentioned about it, the device supports one
// queue only. This loop will send all requests currently available to the driver. To modify the number of requests
// use the property mvIMPACT::acquire::SystemSettings::requestCount at runtime or the property
// mvIMPACT::acquire::Device::defaultRequestCount BEFORE opening the device.
TDMR_ERROR result = DMR_NO_ERROR;
while( ( result = static_cast<TDMR_ERROR>( fi.imageRequestSingle() ) ) == DMR_NO_ERROR ) {};
if( result != DEV_NO_FREE_REQUEST_AVAILABLE )
{
    cout << "'FunctionInterface.imageRequestSingle' returned with an unexpected result: " << result
         << "(" << ImpactAcquireException::getErrorCodeAsString( result ) << ")" << endl;
}
// Start the acquisition manually if this was requested (this is to prepare the driver for data capture and tell the device to start streaming data)
if( pThreadParameter->pDev->acquisitionStartStopBehaviour.read() == assbUser )
{
    if( ( result = static_cast<TDMR_ERROR>( fi.acquisitionStart() ) ) != DMR_NO_ERROR )
    {
        cout << "'FunctionInterface.acquisitionStart' returned with an unexpected result: " << result
             << "(" << ImpactAcquireException::getErrorCodeAsString( result ) << ")" << endl;
    }
}
// run thread loop
Request* pRequest = nullptr;
Request* pPreviousRequest = nullptr;
const unsigned int timeout_ms = 500;
while( !boTerminated )
{
    // wait for results from the default capture queue
    int requestNr = fi.imageRequestWaitFor( timeout_ms );
    pRequest = fi.isRequestNrValid( requestNr ) ? fi.getRequest( requestNr ) : nullptr;
    if( pRequest != nullptr )
    {
        if( pRequest->isOK() )
        {
            // do something with the image
        }
        else
        {
            // some error: pRequest->requestResult.readS() will return a string representation
        }
        if( pPreviousRequest != nullptr )
        {
            // this image has been displayed thus the buffer is no longer needed...
            pPreviousRequest->unlock();
        }
        pPreviousRequest = pRequest;
        // send a new image request into the capture queue
        fi.imageRequestSingle();
    }
    else
    {
        // If the error code is -2119(DEV_WAIT_FOR_REQUEST_FAILED), the documentation will provide
        // additional information under TDMR_ERROR in the interface reference
    }
}

With the request number returned by mvIMPACT::acquire::FunctionInterface::imageRequestWaitFor you can gain access to the image buffer:

pRequest = fi.isRequestNrValid( requestNr ) ? fi.getRequest( requestNr ) : nullptr;

The image attached to the request can then be processed and/or displayed if the request does not report an error.

When the image is no longer needed you have to unlock the image buffer, as otherwise the driver will refuse to use it again. This makes sure that no image that is still in use by the application will be overwritten by the device:

pPreviousRequest->unlock();
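Why the *previous* request rather than the current one is unlocked can be illustrated with a toy model (plain C++; `FakeRequest` and `displayAndUnlockPrevious` are hypothetical names, no mvIMPACT types involved): the buffer handed to the display must stay locked until the display has been given a newer one.

```cpp
#include <cassert>
#include <vector>

// Toy stand-in for a driver request: only tracks whether its buffer is locked.
struct FakeRequest
{
    bool locked = false;
};

// One iteration of the capture loop: the newly arrived request is handed to
// the display; the request displayed before it (if any) is unlocked, since
// the display no longer references that buffer. Returns the new display request.
int displayAndUnlockPrevious( std::vector<FakeRequest>& requests, int newNr, int previousNr )
{
    requests[newNr].locked = true; // buffer now referenced by the display
    if( previousNr >= 0 )
    {
        requests[previousNr].locked = false; // safe: no longer displayed
    }
    return newNr;
}
```

Running this loop over a series of arriving requests shows the invariant the sample maintains: at every point exactly the currently displayed buffer remains locked.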

All the AVI related operations are encapsulated in the class AVIWrapper. It is by no means a complete wrapper around the AVI related functions offered by Windows, but it offers everything that is needed to create an AVI file from a sequence of images.

Within the capture thread the individual images are constantly added to the AVI file by the following function:

//-----------------------------------------------------------------------------
void storeImageToStream( FunctionInterface& fi, int requestNr, AVIWrapper* pAVIWrapper )
//-----------------------------------------------------------------------------
{
    if( fi.isRequestNrValid( requestNr ) )
    {
        const Request* pRequest = fi.getRequest( requestNr );
        if( pRequest->isOK() )
        {
            // store to AVI file
            try
            {
                // in a real application this would be done in a separate thread in order to
                // buffer hard disk delays.
                // Unfortunately we have to flip the images as they are stored upside down in the stream...
                // this function only works for image formats where each channel has the same line pitch!
                inplaceHorizontalMirror( pRequest->imageData.read(), pRequest->imageHeight.read(), pRequest->imageLinePitch.read() );
                pAVIWrapper->SaveDataToAVIStream( reinterpret_cast<unsigned char*>( pRequest->imageData.read() ), pRequest->imageSize.read() );
            }
            catch( const AVIException& e )
            {
                cout << "Could not store image to stream(" << string( e.what() ) << ")" << endl;
            }
        }
    }
}
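What inplaceHorizontalMirror() does can be verified on a tiny artificial buffer. The sketch below re-implements the same line-swapping logic with standard C++ only (`flipLines` is a name made up here):

```cpp
#include <cassert>
#include <cstring>
#include <string>

// Reverses the line order of an image buffer in place, mirroring the logic of
// the sample's inplaceHorizontalMirror(): line 0 swaps with line height-1, etc.
// Works only when all channels share the same line pitch.
void flipLines( void* pData, int height, size_t pitch )
{
    char* pUpperLine = static_cast<char*>( pData );
    char* pLowerLine = static_cast<char*>( pData ) + ( ( height - 1 ) * pitch );
    char* pTmpLine = new char[pitch];
    for( int y = 0; y < height / 2; y++ ) // the middle line of an odd-height image stays put
    {
        std::memcpy( pTmpLine, pUpperLine, pitch );
        std::memcpy( pUpperLine, pLowerLine, pitch );
        std::memcpy( pLowerLine, pTmpLine, pitch );
        pUpperLine += pitch;
        pLowerLine -= pitch;
    }
    delete [] pTmpLine;
}
```

A 3-line buffer "AAAABBBBCCCC" with a pitch of 4 bytes becomes "CCCCBBBBAAAA"; the middle line stays in place.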

As can be seen in the code of this function, the image must be flipped in order to be stored correctly in the file. This also influences the moment at which images can be stored: the image currently assigned to the display can't be written to the stream, as certain window messages might cause the display window to redraw it during or after the flipping, which would produce an unstable live display. Because of that, each image is stored AFTER the display has been assigned a new one:

pRequest = fi.getRequest( requestNr );
if( pRequest->isOK() )
{
    display.SetImage( pRequest );
    display.Update();
}
else
{
    cout << "Error: " << pRequest->requestResult.readS() << endl;
}
// As images might be redrawn by the display window, we can't process the image currently
// displayed. In order not to copy the current image, which would cause additional CPU load
// we will flip and store the previous image if available
storeImageToStream( fi, lastRequestNr, pThreadParameter->pAVIWrapper );

The creation of the AVI file itself can be found in the function main() together with a lot of comments in the source code.

To provide an absolute frame rate, the mvBlueFOX offers a Hardware Real-Time Controller (HRTC).

The function

setupBlueFOXFrameRate( Device* pDev, int frameRate_Hz )

shows how to program an HRTC loop.
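The timing encoded in that HRTC program can be sanity-checked in isolation. A sketch of the arithmetic (`waitClocks_us` is a helper name introduced here; the 100 µs trigger pulse width is the value used by the sample):

```cpp
#include <cassert>

const int TRIGGER_PULSE_WIDTH_us = 100; // high time of the trigger signal used by the sample

// Wait time programmed into the HRTC before each trigger pulse: one frame
// period consists of this wait step followed by the trigger pulse itself,
// so wait + pulse = 1 s / frameRate.
int waitClocks_us( int frameRate_Hz )
{
    const int frametime_us = static_cast<int>( 1000000.0 / static_cast<double>( frameRate_Hz ) );
    return frametime_us - TRIGGER_PULSE_WIDTH_us;
}
```

At 25 Hz the program waits 39900 µs, raises the trigger for 100 µs and jumps back to step 0, giving a 40000 µs (i.e. exactly 25 Hz) period.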

Source code
#ifdef _MSC_VER // is Microsoft compiler?
# if _MSC_VER < 1300 // is 'old' VC 6 compiler?
# pragma warning( disable : 4786 ) // 'identifier was truncated to '255' characters in the debug information'
# endif // #if _MSC_VER < 1300
#endif // #ifdef _MSC_VER
#include <windows.h>
#include <process.h>
#include <conio.h>
#include <iostream>
#include <apps/Common/exampleHelper.h>
#include <apps/Common/aviwrapper.h>
using namespace std;
using namespace mvIMPACT::acquire;
static bool s_boTerminated = false;
//-----------------------------------------------------------------------------
struct ThreadParameter
//-----------------------------------------------------------------------------
{
    Device* pDev;
    ImageDisplayWindow displayWindow;
    AVIWrapper* pAVIWrapper;
    ThreadParameter( Device* p, std::string& windowTitle, AVIWrapper* pAVI )
        : pDev( p ), displayWindow( windowTitle ), pAVIWrapper( pAVI ) {}
};
//-----------------------------------------------------------------------------
void displayCommandLineOptions( void )
//-----------------------------------------------------------------------------
{
    cout << "Available parameters:" << endl
         << "  'outputFile' or 'of' to specify the name of the resulting AVI file" << endl
         << "  'frameRate' or 'fr' to specify the frame rate(frames per second for playback) of the resulting AVI file" << endl
         << "  'recordingTime' or 'rt' to specify the time in ms the sample shall capture image data. If this parameter" << endl
         << "                          is omitted, the capture process will be aborted after the user pressed a key" << endl
         << endl
         << "USAGE EXAMPLE:" << endl
         << "  ContinuousCaptureToAVIFile rt=5000 of=myfile.avi frameRate=25" << endl << endl;
}
//-----------------------------------------------------------------------------
void inplaceHorizontalMirror( void* pData, int height, size_t pitch )
//-----------------------------------------------------------------------------
{
    int upperHalfOfLines = height / 2; // the line in the middle (if existent) doesn't need to be processed!
    char* pLowerLine = static_cast<char*>( pData ) + ( ( height - 1 ) * pitch );
    char* pUpperLine = static_cast<char*>( pData );
    char* pTmpLine = new char[pitch];
    for( int y = 0; y < upperHalfOfLines; y++ )
    {
        memcpy( pTmpLine, pUpperLine, pitch );
        memcpy( pUpperLine, pLowerLine, pitch );
        memcpy( pLowerLine, pTmpLine, pitch );
        pUpperLine += pitch;
        pLowerLine -= pitch;
    }
    delete [] pTmpLine;
}
//-----------------------------------------------------------------------------
// Currently only the mvBlueFOX supports HRTC and thus the definition of an
// absolute frame rate during the capture process.
void setupBlueFOXFrameRate( Device* pDev, int frameRate_Hz )
//-----------------------------------------------------------------------------
{
    cout << "To use the HRTC to configure the mvBlueFOX to capture with a defined frequency press 'y'." << endl;
    if( _getch() != 'y' )
    {
        return;
    }
    // mvBlueFOX devices can define a fixed frame frequency
    cout << "Trying to capture at " << frameRate_Hz << " frames per second. Please make sure the device can deliver this frame rate" << endl
         << "as otherwise the resulting AVI stream will be replayed with an incorrect speed" << endl;
    int frametime_us = static_cast<int>( 1000000.0 * ( 1.0 / static_cast<double>( frameRate_Hz ) ) );
    const int TRIGGER_PULSE_WIDTH_us = 100;
    if( frametime_us < 2 * TRIGGER_PULSE_WIDTH_us )
    {
        cout << "frame rate too high (" << frameRate_Hz << "). Using 10 Hz." << endl;
        frametime_us = 100000;
    }
    CameraSettingsBlueFOX bfs( pDev );
    if( bfs.expose_us.read() > frametime_us / 2 )
    {
        cout << "Reducing exposure time from " << bfs.expose_us.read() << " us to " << frametime_us / 2 << " us." << endl
             << "Higher values are possible but require a more sophisticated HRTC program" << endl;
        bfs.expose_us.write( frametime_us / 2 );
    }
    IOSubSystemBlueFOX bfIOs( pDev );
    // define a HRTC program that results in a defined image frequency
    // the hardware real time controller shall be used to trigger an image
    bfs.triggerSource.write( ctsRTCtrl );
    // when the hardware real time controller switches the trigger signal to
    // high the exposure of the image shall start
    bfs.triggerMode.write( ctmOnRisingEdge );
    // error checks
    if( bfIOs.RTCtrProgramCount() == 0 )
    {
        // no HRTC controllers available (this never happens for the mvBlueFOX)
        cout << "This device (" << pDev->product.read() << ") doesn't support HRTC" << endl;
        return;
    }
    RTCtrProgram* pRTCtrlprogram = bfIOs.getRTCtrProgram( 0 );
    if( !pRTCtrlprogram )
    {
        // this should only happen if the system is short of memory
        cout << "Error! No valid program. Short of memory?" << endl;
        return;
    }
    // start of the program
    // we need 5 steps for the program
    pRTCtrlprogram->setProgramSize( 5 );
    // wait a certain amount of time to achieve the desired frequency
    int progStep = 0;
    RTCtrProgramStep* pRTCtrlStep = 0;
    pRTCtrlStep = pRTCtrlprogram->programStep( progStep++ );
    pRTCtrlStep->opCode.write( rtctrlProgWaitClocks );
    pRTCtrlStep->clocks_us.write( frametime_us - TRIGGER_PULSE_WIDTH_us );
    // trigger an image
    pRTCtrlStep = pRTCtrlprogram->programStep( progStep++ );
    pRTCtrlStep->opCode.write( rtctrlProgTriggerSet );
    // high time for the trigger signal (should not be smaller than 100 us)
    pRTCtrlStep = pRTCtrlprogram->programStep( progStep++ );
    pRTCtrlStep->opCode.write( rtctrlProgWaitClocks );
    pRTCtrlStep->clocks_us.write( TRIGGER_PULSE_WIDTH_us );
    // end trigger signal
    pRTCtrlStep = pRTCtrlprogram->programStep( progStep++ );
    pRTCtrlStep->opCode.write( rtctrlProgTriggerReset );
    // restart the program
    pRTCtrlStep = pRTCtrlprogram->programStep( progStep++ );
    pRTCtrlStep->opCode.write( rtctrlProgJumpLoc );
    pRTCtrlStep->address.write( 0 );
    // start the program
    pRTCtrlprogram->mode.write( rtctrlModeRun );
    // Now this camera will deliver images at exactly the desired frequency
    // when it is constantly fed with image requests and the camera can deliver
    // images at this frequency.
}
//-----------------------------------------------------------------------------
void storeImageToStream( FunctionInterface& fi, int requestNr, AVIWrapper* pAVIWrapper )
//-----------------------------------------------------------------------------
{
    if( fi.isRequestNrValid( requestNr ) )
    {
        const Request* pRequest = fi.getRequest( requestNr );
        if( pRequest->isOK() )
        {
            // store to AVI file
            try
            {
                // in a real application this would be done in a separate thread in order to
                // buffer hard disk delays.
                // Unfortunately we have to flip the images as they are stored upside down in the stream...
                // this function only works for image formats where each channel has the same line pitch!
                inplaceHorizontalMirror( pRequest->imageData.read(), pRequest->imageHeight.read(), pRequest->imageLinePitch.read() );
                pAVIWrapper->SaveDataToAVIStream( reinterpret_cast<unsigned char*>( pRequest->imageData.read() ), pRequest->imageSize.read() );
            }
            catch( const AVIException& e )
            {
                cout << "Could not store image to stream(" << string( e.what() ) << ")" << endl;
            }
        }
    }
}
//-----------------------------------------------------------------------------
unsigned int __stdcall liveThread( void* pData )
//-----------------------------------------------------------------------------
{
    ThreadParameter* pThreadParameter = reinterpret_cast<ThreadParameter*>( pData );
    ImageDisplay& display = pThreadParameter->displayWindow.GetImageDisplay();
    // create an interface to the device found
    FunctionInterface fi( pThreadParameter->pDev );
    // Send all requests to the capture queue. There can be more than 1 queue for some devices, but for this sample
    // we will work with the default capture queue. If a device supports more than one capture or result
    // queue, this will be stated in the manual. If nothing is mentioned about it, the device supports one
    // queue only. This loop will send all requests currently available to the driver. To modify the number of requests
    // use the property mvIMPACT::acquire::SystemSettings::requestCount at runtime or the property
    // mvIMPACT::acquire::Device::defaultRequestCount BEFORE opening the device.
    TDMR_ERROR result = DMR_NO_ERROR;
    while( ( result = static_cast<TDMR_ERROR>( fi.imageRequestSingle() ) ) == DMR_NO_ERROR ) {};
    if( result != DEV_NO_FREE_REQUEST_AVAILABLE )
    {
        cout << "'FunctionInterface.imageRequestSingle' returned with an unexpected result: " << result
             << "(" << ImpactAcquireException::getErrorCodeAsString( result ) << ")" << endl;
    }
    manuallyStartAcquisitionIfNeeded( pThreadParameter->pDev, fi );
    const Request* pRequest = 0;
    const unsigned int timeout_ms = 500;
    int requestNr = INVALID_ID;
    // we always have to keep at least 2 images as the display module might want to repaint the image, thus we
    // cannot free it unless we have assigned the display to a new buffer.
    int lastRequestNr = INVALID_ID;
    // run thread loop
    while( !s_boTerminated )
    {
        // wait for results from the default capture queue
        requestNr = fi.imageRequestWaitFor( timeout_ms );
        if( fi.isRequestNrValid( requestNr ) )
        {
            pRequest = fi.getRequest( requestNr );
            if( pRequest->isOK() )
            {
                display.SetImage( pRequest );
                display.Update();
            }
            else
            {
                cout << "Error: " << pRequest->requestResult.readS() << endl;
            }
            // As images might be redrawn by the display window, we can't process the image currently
            // displayed. In order not to copy the current image, which would cause additional CPU load
            // we will flip and store the previous image if available
            storeImageToStream( fi, lastRequestNr, pThreadParameter->pAVIWrapper );
            if( fi.isRequestNrValid( lastRequestNr ) )
            {
                // this image has been displayed thus the buffer is no longer needed...
                fi.imageRequestUnlock( lastRequestNr );
            }
            lastRequestNr = requestNr;
            // send a new image request into the capture queue
            fi.imageRequestSingle();
        }
        //else
        //{
        //    // Please note that slow systems or interface technologies in combination with high resolution sensors
        //    // might need more time to transmit an image than the timeout value which has been passed to imageRequestWaitFor().
        //    // If this is the case simply wait multiple times OR increase the timeout(not recommended as usually not necessary
        //    // and potentially makes the capture thread less responsive) and rebuild this application.
        //    // Once the device is configured for triggered image acquisition and the timeout elapsed before
        //    // the device has been triggered this might happen as well.
        //    // The return code would be -2119(DEV_WAIT_FOR_REQUEST_FAILED) in that case, the documentation will provide
        //    // additional information under TDMR_ERROR in the interface reference.
        //    // If waiting with an infinite timeout(-1) it will be necessary to call 'imageRequestReset' from another thread
        //    // to force 'imageRequestWaitFor' to return when no data is coming from the device/can be captured.
        //    // cout << "imageRequestWaitFor failed (" << requestNr << ", " << ImpactAcquireException::getErrorCodeAsString( requestNr ) << ")"
        //    //      << ", timeout value too small?" << endl;
        //}
    }
    manuallyStopAcquisitionIfNeeded( pThreadParameter->pDev, fi );
    // stop the display from showing freed memory
    display.RemoveImage();
    // try to store the last image into the stream
    storeImageToStream( fi, requestNr, pThreadParameter->pAVIWrapper );
    // In this sample all the next lines are redundant as the device driver will be
    // closed now, but in a real world application a thread like this might be started
    // several times and then it becomes crucial to clean up correctly.
    // free the last potentially locked request
    if( fi.isRequestNrValid( requestNr ) )
    {
        fi.imageRequestUnlock( requestNr );
    }
    // clear all queues
    fi.imageRequestReset( 0, 0 );
    return 0;
}
//-----------------------------------------------------------------------------
int main( int argc, char* argv[] )
//-----------------------------------------------------------------------------
{
    DeviceManager devMgr;
    Device* pDev = getDeviceFromUserInput( devMgr );
    if( !pDev )
    {
        cout << "Unable to continue! Press any key to end the program." << endl;
        return _getch();
    }
    // default parameters
    string fileName( ".\\output.avi" );
    unsigned int frameRate = 25;
    unsigned int recordingTime = 0;
    bool boInvalidCommandLineParameterDetected = false;
    // scan command line
    if( argc > 1 )
    {
        for( int i = 1; i < argc; i++ )
        {
            string param( argv[i] ), key, value;
            string::size_type keyEnd = param.find_first_of( "=" );
            if( ( keyEnd == string::npos ) || ( keyEnd == param.length() - 1 ) )
            {
                cout << "Invalid command line parameter: '" << param << "' (ignored)." << endl;
                boInvalidCommandLineParameterDetected = true;
            }
            else
            {
                key = param.substr( 0, keyEnd );
                value = param.substr( keyEnd + 1 );
                if( ( key == "outputFile" ) || ( key == "of" ) )
                {
                    fileName = value;
                }
                else if( ( key == "frameRate" ) || ( key == "fr" ) )
                {
                    frameRate = static_cast<unsigned int>( atoi( value.c_str() ) );
                }
                else if( ( key == "recordingTime" ) || ( key == "rt" ) )
                {
                    recordingTime = static_cast<unsigned int>( atoi( value.c_str() ) );
                }
                else
                {
                    cout << "Invalid command line parameter: '" << param << "' (ignored)." << endl;
                    boInvalidCommandLineParameterDetected = true;
                }
            }
        }
        if( boInvalidCommandLineParameterDetected )
        {
            displayCommandLineOptions();
        }
    }
    else
    {
        cout << "No command line parameters specified." << endl;
        displayCommandLineOptions();
    }
    cout << endl
         << "PLEASE NOTE THAT THIS EXAMPLE APPLICATION MAKES USE OF A VERY OLD, OUTDATED WINDOWS ONLY API WHICH IS NOT RECOMMENDED FOR NEW PROJECTS!" << endl
         << "There are various other, more portable ways to encode/store a video stream these days. Please consider using the FFmpeg library (see" << endl
         << "'ContinuousCaptureFFmpeg' in the C++ manual) or something similar instead!" << endl
         << endl
         << "Using output file '" << fileName << "' with " << frameRate << " frames per second for playback (this has nothing to do with the capture frame rate but only affects the frame rate stored in the header of the AVI file)" << endl
         << endl
         << "Please note that the frame rate specified only affects the playback speed for the resulting AVI file." << endl
         << "Devices that support a fixed frame rate should be set to the same rate, but this won't be done" << endl
         << "in this sample, thus the playback speed of the AVI file might differ from the real acquisition speed" << endl;
    cout << "Initialising the device. This might take some time..." << endl << endl;
    try
    {
        pDev->open();
    }
    catch( const ImpactAcquireException& e )
    {
        // this e.g. might happen if the same device is already opened in another process...
        cout << "An error occurred while opening device " << pDev->serial.read()
             << "(error code: " << e.getErrorCode() << "). Press any key to end the application..." << endl;
        return _getch();
    }
    int width = 0, height = 0, bitcount = 0;
    try
    {
        // set up the device for AVI output.
        // most codecs only accept RGB888 data with no alpha byte. Make sure that either the driver is
        // operated in RGB888Packed mode or you supply the correct image data converted by hand here.
        // Here we select the color mode satisfying most codecs, so this sample will work in most cases, but not always.
        // For details about the used codec have a look on the net...
        ImageDestination id( pDev );
        id.pixelFormat.write( idpfRGB888Packed );
        // Now we need to find out the dimension of the resulting image. Thus we have to perform a dummy image capture.
        ImageRequestControl irc( pDev );
        FunctionInterface fi( pDev );
        Request* pCurrentCaptureBufferLayout = 0;
        fi.getCurrentCaptureBufferLayout( irc, &pCurrentCaptureBufferLayout );
        // now we have the information needed to configure the AVI stream
        width = pCurrentCaptureBufferLayout->imageWidth.read();
        height = pCurrentCaptureBufferLayout->imageHeight.read();
        bitcount = pCurrentCaptureBufferLayout->imageBytesPerPixel.read() * 8;
    }
    catch( const ImpactAcquireException& e )
    {
        cout << "An exception occurred while configuring the device: " << e.getErrorString() << "(" << e.getErrorCode() << ")." << endl
             << "Unable to continue. Press any key to end the application." << endl << endl;
        return _getch();
    }
    if( pDev->family.read() == "mvBlueFOX" )
    {
        setupBlueFOXFrameRate( pDev, frameRate );
    }
    // Now we have to create and configure the AVI stream
    // create the AVI file builder
    try
    {
        AVIWrapper myAVIWrapper;
        myAVIWrapper.OpenAVIFile( fileName.c_str(), OF_WRITE | OF_CREATE | OF_SHARE_DENY_WRITE );
        // To select from installed compression handlers, pass codecMax as codec to the next function, which is also
        // the default parameter if not specified. Windows will display a dialog to select the codec then.
        // Most codecs only accept RGB888 data with no alpha byte. Make sure that either the driver is
        // operated in RGB888Packed mode or you supply the correct image data converted by hand here.
        cout << "Please select a compression handler from the dialog box" << endl << endl;
        myAVIWrapper.CreateAVIStreamFromDIBs( width, height, bitcount, frameRate, 8000, "myStream" );
        // The remaining work is almost the same as for every other continuous acquisition. We have to start a capture thread
        // and configure the display to show the captured images that will also be written to the stream...
        // start the execution of the 'live' thread.
        unsigned int dwThreadID;
        string windowTitle( "mvIMPACT_acquire sample, Device " + pDev->serial.read() );
        // initialise display window
        // IMPORTANT: It's NOT safe to create multiple display windows in multiple threads!!!
        ThreadParameter threadParam( pDev, windowTitle, &myAVIWrapper );
        HANDLE hThread = ( HANDLE )_beginthreadex( 0, 0, liveThread, ( LPVOID )( &threadParam ), 0, &dwThreadID );
        if( recordingTime == 0 )
        {
            cout << "Press any key to end the application" << endl;
            if( _getch() == EOF )
            {
                cout << "'_getch()' did return EOF..." << endl;
            }
            s_boTerminated = true;
            WaitForSingleObject( hThread, INFINITE );
            CloseHandle( hThread );
        }
        else
        {
            cout << "Recording for " << recordingTime << " ms. Please wait..." << endl;
            Sleep( recordingTime );
            s_boTerminated = true;
            WaitForSingleObject( hThread, INFINITE );
            CloseHandle( hThread );
            cout << "Press any key to end the application" << endl;
            return _getch();
        }
    }
    catch( const AVIException& e )
    {
        cout << "Error while creating AVI stream(" << string( e.what() ) << ")." << endl
             << "Please note that not every codec will accept every pixel format, thus this error might" << endl
             << "appear without changing the destination pixel format within the driver. However the" << endl
             << "format selected in this sample (RGB888Packed) works for the greatest number of codecs." << endl
             << "Unable to continue. Press any key to end the application." << endl;
        return _getch();
    }
    return 0;
}