mvIMPACT Acquire SDK C++
ContinuousCaptureFFmpeg.cpp

The ContinuousCaptureFFmpeg program is a short example that shows how image data acquired by mvIMPACT Acquire can be used to store a video stream. Its acquisition implementation is based on the ContinuousCapture.cpp example.

Since
2.38.0
Note
The source code of this example makes use of the FFmpeg project. Please also have a close look at this section regarding any license issues: FFmpeg.
The following packages are needed to build the code:
Windows
  • FFmpeg-dev
  • FFmpeg-share
Linux
  • libavcodec-dev
  • libavformat-dev
  • libavutil-dev
Note
At least version 3.x of the FFmpeg packages is needed in order to compile this example! Older versions of the FFmpeg API are NOT compatible with this code and compilation will fail! The library versions that can be used with this code are:
  • libavcodec-58, libavformat-58 and libavutil-56 (FFmpeg 4.0 was released on 20.04.2018)
  • libavcodec-57, libavformat-57 and libavutil-55 (FFmpeg 3.0 was released on 15.02.2016)
A C++11-compatible compiler is needed to build this example application! You might need to explicitly enable C++11 support in your Makefile!
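When building with GNU Make on Linux, the required compiler flag and FFmpeg link libraries can typically be obtained via pkg-config. The following fragment is only a sketch; the variable names and the availability of the pkg-config files depend on your system and Makefile:

```make
# Illustrative Makefile fragment (not part of the example's build system):
CXXFLAGS += -std=c++11 $(shell pkg-config --cflags libavcodec libavformat libavutil)
LDLIBS   += $(shell pkg-config --libs libavcodec libavformat libavutil)
```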
ContinuousCaptureFFmpeg example:
  1. Opens a device
  2. Configures the device and tries to set up the output pixel format in a way that the actual video encoder does not need to convert the input data again
  3. Sets up the video encoder
  4. Starts the image acquisition
  5. Stores each frame in the video file
Note
This example is meant to show how to use the FFmpeg API in combination with the mvIMPACT Acquire API. Since version 2.39.0 there is also a video stream API (e.g. the class mvIMPACT::acquire::labs::VideoStream) that wraps most of the difficult FFmpeg related calls and offers a convenient interface. The approach shown here should only be used when the full FFmpeg API is needed.
Please note that weak data transfer rates on certain systems (e.g. USB transfer and data transfer to the hard disk at the same time) might have a bad influence on the overall performance. Decoupling the actual encoding of the video data from the acquisition using multiple threads might be a good idea then. Adjusting the quality or the selected mvIMPACT::acquire::TVideoCodec might help as well.
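A minimal sketch of such a decoupling (not part of the example): the acquisition callback only enqueues a copy of the frame data into a FIFO while a dedicated worker thread dequeues the buffers and performs the potentially slow encoding. All names here are illustrative:

```cpp
// Hypothetical frame queue decoupling acquisition from encoding.
// The capture callback calls push(), the encoder thread loops on pop().
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class FrameQueue
{
    std::queue<std::vector<unsigned char>> queue_;
    std::mutex mutex_;
    std::condition_variable cv_;
    bool done_ = false;
public:
    void push( std::vector<unsigned char> frame )
    {
        {
            std::lock_guard<std::mutex> lock( mutex_ );
            queue_.push( std::move( frame ) );
        }
        cv_.notify_one();
    }
    // Blocks until a frame is available; returns false once the queue
    // has been closed AND fully drained.
    bool pop( std::vector<unsigned char>& frame )
    {
        std::unique_lock<std::mutex> lock( mutex_ );
        cv_.wait( lock, [this] { return done_ || !queue_.empty(); } );
        if( queue_.empty() )
        {
            return false;
        }
        frame = std::move( queue_.front() );
        queue_.pop();
        return true;
    }
    // Called when the acquisition stops; wakes up the encoder thread.
    void close( void )
    {
        {
            std::lock_guard<std::mutex> lock( mutex_ );
            done_ = true;
        }
        cv_.notify_all();
    }
};
```

In a real application the worker thread would feed the dequeued buffers into the FFmpeg encode path; an upper bound on the queue size (dropping or pooling buffers) might be needed to keep memory usage in check when the encoder cannot keep up.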
How it works

First the device and the image processing engine are configured.

Note
Not every video encoder supports every pixel format. Most video encoders expect YUV422 data in either planar or packed format, so mvIMPACT Acquire should be configured in a way that either mvIMPACT::acquire::ibpfYUV422Packed or mvIMPACT::acquire::ibpfYUV422Planar image buffers will be returned to the application. To figure out which formats are supported by which encoder, run the ffmpeg executable using the following parameters:
ffmpeg -h encoder=<NAME OF YOUR CODEC> -v quiet

If for any reason a format is needed that is NOT supported by mvIMPACT Acquire, the application should select the closest matching format and then perform the final conversion on its own.
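One purely illustrative way to implement such a "best match" selection; the format names are plain strings here, not actual mvIMPACT Acquire or FFmpeg identifiers:

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Hypothetical helper: pick the first encoder-preferred format the device
// can deliver directly. If none matches, return the configured fallback,
// which signals that the application must convert the data on its own.
std::string selectPixelFormat( const std::vector<std::string>& deviceFormats,
                               const std::vector<std::string>& encoderPreferences,
                               const std::string& fallback )
{
    for( const auto& wanted : encoderPreferences )
    {
        if( std::find( deviceFormats.begin(), deviceFormats.end(), wanted ) != deviceFormats.end() )
        {
            return wanted; // no extra conversion needed
        }
    }
    return fallback; // application has to perform the final conversion itself
}
```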

// make sure the image resolution matches the video resolution
int64_type imageWidth = {0};
int64_type imageHeight = {0};
try
{
if( pDev->interfaceLayout.read() == dilGenICam )
{
GenICam::UserSetControl usc( pDev );
if( usc.userSetSelector.isValid() && usc.userSetSelector.isWriteable() && usc.userSetLoad.isValid() )
{
usc.userSetSelector.writeS( "Default" );
usc.userSetLoad.call();
}
GenICam::ImageFormatControl ifc( pDev );
if( ifc.width.isValid() && ifc.height.isValid() )
{
imageWidth = ifc.width.read();
imageHeight = ifc.height.read();
}
GenICam::AcquisitionControl acq( pDev );
if( acq.acquisitionFrameRateEnable.isValid() && acq.acquisitionFrameRateEnable.isWriteable() )
{
acq.acquisitionFrameRateEnable.write( bTrue );
acq.acquisitionFrameRate.write( 25.0 );
}
}
else
{
BasicDeviceSettingsWithAOI bds( pDev );
if( bds.aoiWidth.isValid() && bds.aoiHeight.isValid() )
{
imageWidth = bds.aoiWidth.read();
imageHeight = bds.aoiHeight.read();
}
}
ImageDestination id( pDev );
id.pixelFormat.write( idpfYUV422Packed );
}
catch( const ImpactAcquireException& e )
{
// This e.g. might happen if the same device is already opened in another process...
cout << "An error occurred while configuring the device " << pDev->serial.read()
<< " (error code: " << e.getErrorCodeAsString() << ")." << endl
<< "Press [ENTER] to end the application..." << endl;
cin.get();
return 1;
}
Note
In versions of libavcodec (which provides the encoding functionality of FFmpeg) older than 58, it is necessary to register the codecs before usage by calling avcodec_register_all(). This is done in the constructor of the AVIHelper class:
explicit AVIHelper() : pPkt_( nullptr ), pF_( nullptr ), pFrame_( nullptr ), pC_ ( nullptr ), pCodec_( nullptr )
{
#if (LIBAVCODEC_VERSION_MAJOR < 58)
// older versions of libavcodec need to register the codecs before usage
avcodec_register_all();
#endif // #if (LIBAVCODEC_VERSION_MAJOR < 58)
}

Later AVIHelper::startRecordingEngine() is called to configure the encoder and the file access:

pCodec_ = avcodec_find_encoder_by_name( codec_name.c_str() );
if( !pCodec_ )
{
cout << "Codec '" << codec_name << "' not found" << std::endl;
return 1;
}
pC_ = avcodec_alloc_context3( pCodec_ );
if( !pC_ )
{
cout << "Could not allocate video codec context!" << std::endl;
return 1;
}
pPkt_ = av_packet_alloc();
if( !pPkt_ )
{
return 1;
}
// pass the devices resolution to the codec
pC_->width = static_cast<int>( width );
pC_->height = static_cast<int>( height );
// frames per second
pC_->time_base = AVRational {1, 25};
pC_->framerate = AVRational {25, 1};
// emit one intra frame every ten frames
// check frame pict_type before passing frame
// to encoder, if frame->pict_type is AV_PICTURE_TYPE_I
// then gop_size is ignored and the output of encoder
// will always be I frame irrespective to gop_size
pC_->gop_size = 10;
pC_->max_b_frames = 1;
// set the pixel format for the codec
// this has to match the pixel format of the request, otherwise the mapping of the pixel data would not work correctly
pC_->pix_fmt = AV_PIX_FMT_YUV422P;
// put sample parameters
pC_->bit_rate = getOptimalBitRateValue( width, height );
if( pCodec_->id == AV_CODEC_ID_MPEG2VIDEO )
{
av_opt_set( pC_->priv_data, "preset", "slow", 0 );
}
// open it
int ret = avcodec_open2( pC_, pCodec_, NULL );
if( ret < 0 )
{
char buf[AV_ERROR_MAX_STRING_SIZE];
av_strerror( ret, buf, sizeof( buf ) );
cout << "Could not open codec: " << buf << ", ret: " << ret << endl;
return 1;
}
// open the file to store the video data
pF_ = mv_fopen_s( filename.c_str(), "wb" );
if( !pF_ )
{
cout << "Could not open " << filename << std::endl;
return 1;
}
pFrame_ = av_frame_alloc();
if( !pFrame_ )
{
cout << "Could not allocate video frame" << std::endl;
return 1;
}
pFrame_->format = pC_->pix_fmt;
pFrame_->width = pC_->width;
pFrame_->height = pC_->height;
ret = av_frame_get_buffer( pFrame_, 32 );
if( ret < 0 )
{
cout << "Could not allocate the video frame data" << std::endl;
return 1;
}
return 0;

Once a frame has been acquired it is appended to the file.

// get access to the image data of the request
const ImageBuffer* pIB = pReq->getImageBufferDesc().getBuffer();
fflush( stdout );
// make sure the frame data is writable
av_frame_make_writable( pFrame_ );
// map the pixel data from request memory to the frame's pixel data
for( int y = 0; y < pIB->iHeight; y++ )
{
unsigned char* pSrc = reinterpret_cast<unsigned char*>( pIB->vpData ) + y * pIB->pChannels[0].iLinePitch;
uint8_t* pDstY = pFrame_->data[0] + y * pFrame_->linesize[0];
uint8_t* pDstU = pFrame_->data[1] + y * pFrame_->linesize[1];
uint8_t* pDstV = pFrame_->data[2] + y * pFrame_->linesize[2];
for( int x = 0; x < pIB->iWidth / 2; x++ )
{
// YUV422Packed to YUV422Planar
*pDstY++ = *pSrc++;
*pDstU++ = *pSrc++;
*pDstY++ = *pSrc++;
*pDstV++ = *pSrc++;
}
}
// use the frame counter as the presentation timestamp (pts)
pFrame_->pts = cnt;
encode( pC_, pFrame_, pF_ );
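The packed-to-planar mapping can be illustrated in isolation. This hypothetical helper processes a single line of YUV422Packed data (byte order Y0 U Y1 V ...); each U/V sample is shared by a horizontal pair of pixels, which is why the inner loop runs over iWidth / 2 pixel pairs:

```cpp
#include <cstdint>

// Standalone illustration of the loop above: convert one line of
// YUV422Packed data (Y0 U Y1 V ...) into three planar buffers.
void packedToPlanarLine( const uint8_t* pSrc, int width,
                         uint8_t* pDstY, uint8_t* pDstU, uint8_t* pDstV )
{
    for( int x = 0; x < width / 2; x++ )
    {
        *pDstY++ = *pSrc++; // Y0
        *pDstU++ = *pSrc++; // U (shared by the pixel pair)
        *pDstY++ = *pSrc++; // Y1
        *pDstV++ = *pSrc++; // V (shared by the pixel pair)
    }
}
```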

Finally the file is closed and allocated resources are freed by calling AVIHelper::stopRecordingEngine() or the destructor of the AVIHelper class.

if( pC_ && pF_ && pCodec_ && pPkt_ )
{
static const uint8_t s_endcode[4] { 0, 0, 1, 0xb7};
// flush the encoder
encode( pC_, nullptr, pF_ );
// add sequence end code to have a real MPEG file
if( pCodec_->id == AV_CODEC_ID_MPEG2VIDEO )
{
fwrite( s_endcode, 1, sizeof( s_endcode ), pF_ );
}
// close the file
fclose( pF_ );
pF_ = nullptr;
avcodec_free_context( &pC_ );
pC_ = nullptr;
av_frame_free( &pFrame_ );
pFrame_ = nullptr;
av_packet_free( &pPkt_ );
pPkt_ = nullptr;
}
FFmpegIncludePrologue.h
// *INDENT-OFF*
#if defined(_MSC_VER) && (_MSC_VER >= 1400) // is at least VC 2005 compiler?
# pragma warning( push )
# pragma warning( disable : 4244 ) // 'conversion from 'Bla' to 'Blub', possible loss of data
#elif defined(__clang__) || defined(__clang_analyzer__) // check for __clang__ first as clang also defines __GNUC__
# pragma clang diagnostic push
# pragma clang diagnostic ignored "-Wc++11-extensions" // commas at the end of enumerator lists are a C++11 extension
#elif defined(__GNUC__)
# if((__GNUC__ * 100) + __GNUC_MINOR__) >= 406 // is at least gcc 4.6
# pragma GCC diagnostic push
# if((__GNUC__ * 100) + __GNUC_MINOR__) >= 602 // is at least gcc 6.2.0
# pragma GCC diagnostic ignored "-Wpedantic"
# pragma GCC diagnostic ignored "-Wattributes"
# else
# pragma GCC diagnostic ignored "-pedantic"
# endif
# endif
#endif
// *INDENT-ON*
#ifdef __cplusplus
# define __STDC_CONSTANT_MACROS
# define __STDINT_MACROS
extern "C" {
#endif // #ifdef __cplusplus
#ifndef INT64_C
# define INT64_C(c) (c ## LL)
# define UINT64_C(c) (c ## ULL)
#endif
FFmpegIncludeEpilogue.h
#ifdef __cplusplus
}
#endif // #ifdef __cplusplus
// *INDENT-OFF*
#if defined(_MSC_VER) && (_MSC_VER >= 1400) // is at least VC 2005 compiler?
# pragma warning( pop )
#elif defined(__clang__) // check for __clang__ first as clang also defines __GNUC__
# pragma clang diagnostic pop
#elif defined(__GNUC__)
# if((__GNUC__ * 100) + __GNUC_MINOR__) >= 406 // is at least gcc 4.6
# pragma GCC diagnostic pop
# endif
#endif
// *INDENT-ON*
ContinuousCaptureFFmpeg.cpp
// STL includes
#include <algorithm>
#include <chrono>
#include <functional>
#include <iostream>
#include <map>
// FFmpeg includes
#include <apps/Common/FFmpegIncludePrologue.h>
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/opt.h>
#include <libavutil/imgutils.h>
#include <apps/Common/FFmpegIncludeEpilogue.h>
#if defined(LIBAVCODEC_VERSION_MAJOR) && (LIBAVCODEC_VERSION_MAJOR < 57)
# error Outdated libavcodec header package detected! We need at least a 3.x release of the FFmpeg package in order for this code to compile!
#endif
#if defined(LIBAVFORMAT_VERSION_MAJOR) && (LIBAVFORMAT_VERSION_MAJOR < 57)
# error Outdated libavformat header package detected! We need at least a 3.x release of the FFmpeg package in order for this code to compile!
#endif
#if defined(LIBAVUTIL_VERSION_MAJOR) && (LIBAVUTIL_VERSION_MAJOR < 55)
# error Outdated libavutil header package detected! We need at least a 3.x release of the FFmpeg package in order for this code to compile!
#endif
// mvIMPACT Acquire includes
#include <apps/Common/exampleHelper.h>
#ifdef _WIN32
# define USE_DISPLAY
#endif // #ifdef _WIN32
using namespace mvIMPACT::acquire;
using namespace std;
typedef map<string, AVCodecID> StringToCodecMap;
//-----------------------------------------------------------------------------
class AVIHelper
//-----------------------------------------------------------------------------
{
private:
static StringToCodecMap s_supportedCodecs_;
AVPacket* pPkt_;
AVFrame* pFrame_;
AVCodecContext* pC_;
AVPixelFormat pixelFormat_;
AVFormatContext* pFC_;
bool boMustOpenAndCloseFile_;
int64_type firstTimestamp_us_;
string getErrorMessageFromCode( int code )
{
char buf[AV_ERROR_MAX_STRING_SIZE];
av_strerror( code, buf, sizeof( buf ) );
return buf;
}
void encode( AVCodecContext* pEncCtx, AVFrame* pFrame )
{
// send the frame to the encoder
if( !pFrame )
{
cout << "Flushing encoder stream (no frame to encode)" << endl;
}
int ret = avcodec_send_frame( pEncCtx, pFrame );
if( ret < 0 )
{
cout << "Error sending a frame for encoding" << endl;
exit( 1 );
}
while( ret >= 0 )
{
// Initialize the packet with default values
av_init_packet( pPkt_ );
ret = avcodec_receive_packet( pEncCtx, pPkt_ );
if( ( ret == AVERROR( EAGAIN ) ) || ( ret == AVERROR_EOF ) )
{
return;
}
else if( ret < 0 )
{
cout << "Error during encoding" << endl;
exit( 1 );
}
if( ( pPkt_->pts % 100 ) == 0 )
{
cout << "Writing encoded frame " << pPkt_->pts << " (size=" << pPkt_->size << ")" << endl;
}
pPkt_->stream_index = 0;
ret = av_interleaved_write_frame( pFC_, pPkt_ );
if( ret < 0 )
{
cout << "Error while writing video frame" << endl;
}
av_packet_unref( pPkt_ );
}
}
int64_type getOptimalBitRateValue( int64_type width, int64_type height ) const
{
if( ( pC_->framerate.num != FRAME_RATE ) || ( pC_->framerate.den != 1 ) )
{
cout << "Unable to continue! Press [ENTER] to end the application" << endl;
cin.get();
exit( 1 );
}
return width * height * 3;
}
static void populateCodecMap( void )
{
if( s_supportedCodecs_.empty() )
{
s_supportedCodecs_["MPEG2VIDEO"] = AV_CODEC_ID_MPEG2VIDEO;
s_supportedCodecs_["H264"] = AV_CODEC_ID_H264;
s_supportedCodecs_["H265"] = AV_CODEC_ID_H265;
}
}
public:
explicit AVIHelper() : pPkt_( nullptr ), pFrame_( nullptr ), pC_ ( nullptr ), pixelFormat_( AV_PIX_FMT_NONE ),
pFC_( nullptr ), boMustOpenAndCloseFile_( false )
{
#if (LIBAVCODEC_VERSION_MAJOR < 58)
// older versions of libavcodec need to register the codecs before usage
avcodec_register_all();
#endif // #if (LIBAVCODEC_VERSION_MAJOR < 58)
populateCodecMap();
}
~AVIHelper()
{
stopRecordingEngine();
}
enum
{
FRAME_RATE = 25,
DEFAULT_TIMESCALE = 1000000
};
static bool isCodecSupported( const string& codec )
{
populateCodecMap();
return s_supportedCodecs_.find( codec ) != s_supportedCodecs_.end();
}
static const StringToCodecMap& getSupportedCodecs( void )
{
populateCodecMap();
return s_supportedCodecs_;
}
int startRecordingEngine( int64_type width, int64_type height, AVCodecID codecToUse, int crf, int bitrate, const string& fileName )
{
string crf_str = std::to_string( crf );
firstTimestamp_us_ = 0;
string fullFileName( fileName );
int timescale = DEFAULT_TIMESCALE;
pixelFormat_ = AV_PIX_FMT_YUV422P;
switch( codecToUse )
{
case AV_CODEC_ID_MPEG2VIDEO:
fullFileName += ".m2v";
timescale = FRAME_RATE;
break;
case AV_CODEC_ID_H264:
case AV_CODEC_ID_H265:
fullFileName += ".mp4";
break;
default:
cout << "Codec not found!" << endl;
return 1;
}
AVOutputFormat* pOFormat = av_guess_format( NULL, fullFileName.c_str(), NULL );
if( ( avformat_alloc_output_context2( &pFC_, pOFormat, NULL, fullFileName.c_str() ) ) < 0 )
{
return 1;
}
const AVCodec* pCodec = avcodec_find_encoder( codecToUse );
if( !pCodec )
{
cout << "Codec not found" << endl;
return 1;
}
// Set codec in output format
pOFormat->video_codec = codecToUse;
AVStream* pVideoStream = avformat_new_stream( pFC_, pCodec );
pVideoStream->time_base = AVRational {1, timescale};
pVideoStream->avg_frame_rate = { FRAME_RATE, 1 };
pVideoStream->id = pFC_->nb_streams - 1;
pVideoStream->codecpar->codec_id = pOFormat->video_codec;
pVideoStream->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
pVideoStream->codecpar->width = static_cast<int>( width );
pVideoStream->codecpar->height = static_cast<int>( height );
pVideoStream->codecpar->format = pixelFormat_;
if( codecToUse == AV_CODEC_ID_MPEG2VIDEO )
{
pVideoStream->codecpar->bit_rate = bitrate; // 6,000 kBit/s as default maximum bitrate for MPEG2
}
pC_ = avcodec_alloc_context3( pCodec );
if( !pC_ )
{
cout << "Could not allocate video codec context!" << endl;
return 1;
}
// setup specific parameters for the different file formats
pC_->max_b_frames = 1;
AVDictionary* pOptParams = nullptr;
pC_->time_base = pVideoStream->time_base;
pC_->gop_size = 10;
pC_->pkt_timebase = pC_->time_base;
switch( pVideoStream->codecpar->codec_id )
{
case AV_CODEC_ID_H264:
av_opt_set( pC_->priv_data, "crf", crf_str.c_str(), AV_OPT_SEARCH_CHILDREN );
av_opt_set( pC_->priv_data, "preset", "slow", AV_OPT_SEARCH_CHILDREN );
break;
case AV_CODEC_ID_H265:
av_opt_set( pC_->priv_data, "crf", crf_str.c_str(), AV_OPT_SEARCH_CHILDREN );
av_opt_set( pC_->priv_data, "preset", "fast", AV_OPT_SEARCH_CHILDREN );
break;
default:
break;
}
int ret = avcodec_parameters_to_context( pC_, pVideoStream->codecpar );
if( ret != 0 )
{
cout << "Could not pass stream parameters to codec context: " << getErrorMessageFromCode( ret ) << ", ret: " << ret << endl;
}
boMustOpenAndCloseFile_ = !( pOFormat->flags & AVFMT_NOFILE );
if( boMustOpenAndCloseFile_ )
{
avio_open( &pFC_->pb, fullFileName.c_str(), AVIO_FLAG_WRITE );
}
av_dump_format( pFC_, 0, fullFileName.c_str(), 1 );
pPkt_ = av_packet_alloc();
if( !pPkt_ )
{
cout << "Could not allocate AVPacket structure!" << endl;
return 1;
}
// pass the devices resolution to the codec
pC_->width = pVideoStream->codecpar->width;
pC_->height = pVideoStream->codecpar->height;
pC_->framerate = AVRational {FRAME_RATE, 1};
pC_->time_base = pVideoStream->time_base;
// emit one intra frame every ten frames
// check frame pict_type before passing frame
// to encoder, if frame->pict_type is AV_PICTURE_TYPE_I
// then gop_size is ignored and the output of encoder
// will always be I frame irrespective to gop_size
pC_->gop_size = 10;
// set the pixel format for the codec
// this has to match the pixel format of the request, otherwise the mapping of the pixel data would not work correctly
pC_->pix_fmt = pixelFormat_;
// put sample parameters
pC_->bit_rate = getOptimalBitRateValue( width, height );
pC_->strict_std_compliance = FF_COMPLIANCE_UNOFFICIAL;
ret = avcodec_open2( pC_, pCodec, NULL );
if( ret < 0 )
{
cout << "Could not open codec: " << getErrorMessageFromCode( ret ) << ", ret: " << ret << endl;
return 1;
}
pFrame_ = av_frame_alloc();
if( !pFrame_ )
{
cout << "Could not allocate video frame" << endl;
return 1;
}
pFrame_->format = pC_->pix_fmt;
pFrame_->width = pC_->width;
pFrame_->height = pC_->height;
ret = av_frame_get_buffer( pFrame_, 32 );
if( ret < 0 )
{
cout << "Could not allocate the video frame data" << endl;
return 1;
}
const int result = avformat_write_header( pFC_, &pOptParams );
if( pOptParams )
{
av_dict_free( &pOptParams );
pOptParams = nullptr;
}
if( result != 0 )
{
cout << "Could not write file header!" << endl;
return 1;
}
return 0;
}
void writeFrame( shared_ptr<Request> pReq, int cnt )
{
// get access to the image data of the request
const ImageBuffer* pIB = pReq->getImageBufferDesc().getBuffer();
fflush( stdout );
av_frame_make_writable( pFrame_ );
// map the pixel data from request memory to the frame's pixel data
switch( pIB->pixelFormat )
{
case ibpfYUV422Packed:
switch( pixelFormat_ )
{
case AV_PIX_FMT_YUV422P:
for( int y = 0; y < pIB->iHeight; y++ )
{
const unsigned char* pSrc = reinterpret_cast<const unsigned char*>( pIB->vpData ) + y * pIB->pChannels[0].iLinePitch;
uint8_t* pDstY = pFrame_->data[0] + y * pFrame_->linesize[0];
uint8_t* pDstU = pFrame_->data[1] + y * pFrame_->linesize[1];
uint8_t* pDstV = pFrame_->data[2] + y * pFrame_->linesize[2];
for( int x = 0; x < pIB->iWidth / 2; x++ )
{
// YUV422Packed to YUV422Planar/AV_PIX_FMT_YUV422P
*pDstY++ = *pSrc++;
*pDstU++ = *pSrc++;
*pDstY++ = *pSrc++;
*pDstV++ = *pSrc++;
}
}
break;
case AV_PIX_FMT_YUV420P:
for( int y = 0; y < pIB->iHeight; y++ )
{
const unsigned char* pSrc = reinterpret_cast<const unsigned char*>( pIB->vpData ) + y * pIB->pChannels[0].iLinePitch;
uint8_t* pDstY = pFrame_->data[0] + y * pFrame_->linesize[0];
uint8_t* pDstU = pFrame_->data[1] + y * pFrame_->linesize[1];
uint8_t* pDstV = pFrame_->data[2] + y * pFrame_->linesize[2];
for( int x = 0; x < pIB->iWidth / 2; x++ )
{
// YUV422Packed to AV_PIX_FMT_YUV420P (U/V are subsampled vertically)
*pDstY++ = *pSrc++;
*pDstY++ = *pSrc++;
if( ( y % 2 ) == 0 )
{
*pDstU++ = *pSrc++;
*pDstV++ = *pSrc++;
}
}
}
break;
default:
assert( !"Unhandled FFmpeg pixel format detected!" );
break;
}
break;
case ibpfYUV422Planar:
switch( pixelFormat_ )
{
case AV_PIX_FMT_YUV422P:
for( int y = 0; y < pIB->iHeight; y++ )
{
for( int channel = 0; channel < 3; channel++ )
{
memcpy( pFrame_->data[channel] + y * pFrame_->linesize[channel], reinterpret_cast<const unsigned char*>( pIB->vpData ) + y * pIB->pChannels[channel].iLinePitch + pIB->pChannels[channel].iChannelOffset, ( pIB->pChannels[channel].iLinePitch < pFrame_->linesize[channel] ) ? pIB->pChannels[channel].iLinePitch : pFrame_->linesize[channel] );
}
}
break;
case AV_PIX_FMT_YUV420P:
for( int y = 0; y < pIB->iHeight; y++ )
{
for( int channel = 0; channel < 3; channel++ )
{
if( channel == 0 )
{
memcpy( pFrame_->data[channel] + y * pFrame_->linesize[channel], reinterpret_cast<const unsigned char*>( pIB->vpData ) + y * pIB->pChannels[channel].iLinePitch + pIB->pChannels[channel].iChannelOffset, ( pIB->pChannels[channel].iLinePitch < pFrame_->linesize[channel] ) ? pIB->pChannels[channel].iLinePitch : pFrame_->linesize[channel] );
}
else if( ( y % 2 ) == 0 )
{
memcpy( pFrame_->data[channel] + ( y / 2 ) * pFrame_->linesize[channel], reinterpret_cast<const unsigned char*>( pIB->vpData ) + y * pIB->pChannels[channel].iLinePitch + pIB->pChannels[channel].iChannelOffset, ( pIB->pChannels[channel].iLinePitch < pFrame_->linesize[channel] ) ? pIB->pChannels[channel].iLinePitch : pFrame_->linesize[channel] );
}
}
}
break;
default:
assert( !"Unhandled FFmpeg pixel format detected!" );
break;
}
break;
default:
assert( !"Unhandled mvIMPACT Acquire pixel format detected!" );
break;
}
// Presentation time stamp (pts) of each frame in us
if( firstTimestamp_us_ == 0 )
{
firstTimestamp_us_ = pReq->infoTimeStamp_us.read();
}
switch( pC_->codec_id )
{
case AV_CODEC_ID_H264:
case AV_CODEC_ID_H265:
pFrame_->pts = pReq->infoTimeStamp_us.read() - firstTimestamp_us_;
break;
default:
pFrame_->pts = cnt;
break;
}
encode( pC_, pFrame_ );
}
void stopRecordingEngine( void )
{
if( pC_ && pFC_ )
{
// flush the encoder
encode( pC_, nullptr );
av_write_trailer( pFC_ );
if( boMustOpenAndCloseFile_ )
{
avio_close( pFC_->pb );
boMustOpenAndCloseFile_ = false;
}
}
if( pC_ )
{
avcodec_free_context( &pC_ );
pC_ = nullptr;
}
if( pFC_ )
{
avformat_free_context( pFC_ );
pFC_ = nullptr;
}
if( pFrame_ )
{
av_frame_free( &pFrame_ );
pFrame_ = nullptr;
}
if( pPkt_ )
{
av_packet_free( &pPkt_ );
pPkt_ = nullptr;
}
}
};
StringToCodecMap AVIHelper::s_supportedCodecs_ = StringToCodecMap();
//=============================================================================
//================= Data type definitions =====================================
//=============================================================================
//-----------------------------------------------------------------------------
struct ThreadParameter
//-----------------------------------------------------------------------------
{
Device* pDev_;
AVIHelper aviHelper_;
unsigned int requestsCaptured_;
Statistics statistics_;
#ifdef USE_DISPLAY
ImageDisplayWindow displayWindow_;
#endif // #ifdef USE_DISPLAY
explicit ThreadParameter( Device* pDev ) : pDev_( pDev ), aviHelper_(), requestsCaptured_( 0 ), statistics_( pDev )
#ifdef USE_DISPLAY
// initialise display window
// IMPORTANT: It's NOT safe to create multiple display windows in multiple threads!!!
, displayWindow_( "mvIMPACT_acquire sample, Device " + pDev_->serial.read() )
#endif // #ifdef USE_DISPLAY
{}
ThreadParameter( const ThreadParameter& src ) = delete;
ThreadParameter& operator=( const ThreadParameter& rhs ) = delete;
};
//=============================================================================
//================= implementation ============================================
//=============================================================================
//-----------------------------------------------------------------------------
void myThreadCallback( shared_ptr<Request> pRequest, ThreadParameter& threadParameter )
//-----------------------------------------------------------------------------
{
++threadParameter.requestsCaptured_;
// display some statistical information every 100th image
if( threadParameter.requestsCaptured_ % 100 == 0 )
{
const Statistics& s = threadParameter.statistics_;
cout << "Info from " << threadParameter.pDev_->serial.read()
<< ": " << s.framesPerSecond.name() << ": " << s.framesPerSecond.readS()
<< ", " << s.errorCount.name() << ": " << s.errorCount.readS()
<< ", " << s.captureTime_s.name() << ": " << s.captureTime_s.readS() << endl;
}
if( pRequest->isOK() )
{
#ifdef USE_DISPLAY
threadParameter.displayWindow_.GetImageDisplay().SetImage( pRequest );
threadParameter.displayWindow_.GetImageDisplay().Update();
#else
cout << "Image captured: " << pRequest->imageOffsetX.read() << "x" << pRequest->imageOffsetY.read() << "@" << pRequest->imageWidth.read() << "x" << pRequest->imageHeight.read() << endl;
#endif // #ifdef USE_DISPLAY
threadParameter.aviHelper_.writeFrame( pRequest, threadParameter.requestsCaptured_ );
}
else
{
cout << "Error: " << pRequest->requestResult.readS() << endl;
}
}
//-----------------------------------------------------------------------------
void displayCommandLineOptions( void )
//-----------------------------------------------------------------------------
{
cout << "Available parameters:" << endl
<< " 'serial' or 's' to specify the serial number of the device to use" << endl
<< " 'codec' or 'c' to specify the name of the codec to use" << endl
<< " 'recordingTime' or 'rt' to specify the recording time in ms. If not specified pressing ENTER will terminate the recording" << endl
<< " Only H.264/H.265 codec:" << endl
<< " 'constantRateFactor' or 'crf' to specify the ratio between encoding speed and quality (default: 23, lower value results in bigger files)." << endl
<< " Only MPEG2 codec:" << endl
<< " 'bitrate' or 'b' to specify the maximum average bitrate in kBit/s of the MPEG2 video." << endl
<< endl
<< "USAGE EXAMPLE:" << endl
<< " ContinuousCaptureFFmpeg s=VD* codec=H264 rt=1000 crf=18" << endl << endl;
}
//-----------------------------------------------------------------------------
AVCodecID selectCodecFromUserInput( void )
//-----------------------------------------------------------------------------
{
const StringToCodecMap& validCodecs = AVIHelper::getSupportedCodecs();
StringToCodecMap::size_type index = 0;
cout << "Codecs currently supported by this class (FFmpeg supports a lot more, contributions welcome):" << endl;
for( const auto& codec : validCodecs )
{
cout << " [" << index++ << "]: " << codec.first << endl;
}
cout << endl;
cout << "Please select a codec: ";
StringToCodecMap::size_type codecNr = 0;
cin >> codecNr;
// remove the '\n' from the stream
cin.get();
if( codecNr >= validCodecs.size() )
{
return AV_CODEC_ID_NONE;
}
StringToCodecMap::const_iterator it = validCodecs.begin();
for( StringToCodecMap::size_type i = 0; i < codecNr; i++ )
{
++it;
}
return it->second;
}
//-----------------------------------------------------------------------------
void selectQualityFromUserInput( AVCodecID codec, unsigned int& crf, unsigned int& bitrate )
//-----------------------------------------------------------------------------
{
// Default values
switch( codec )
{
case AV_CODEC_ID_H264:
case AV_CODEC_ID_H265:
cout << endl
<< "Please select a constant rate factor (crf) (0-51) for encoding quality (default: 23)" << endl
<< "Lower values will result in better quality, faster encoding but bigger file size: ";
cin >> crf;
cin.get();
if( crf > 51 )
{
cout << "CRF out of range (0-51), using 28..." << endl;
crf = 28;
}
break;
case AV_CODEC_ID_MPEG2VIDEO:
cout << endl
<< "Please select an average bitrate in kBit/s for encoding quality (default: 6000): ";
cin >> bitrate;
cin.get();
if( bitrate > 50000 )
{
cout << "Bitrate out of range (0-50,000), using 6000 kBit/s" << endl;
bitrate = 6000;
}
break;
default:
cout << "Unsupported video codec: " << codec << "! Cannot obtain quality parameters since I don't know what to do with it." << endl;
break;
}
}
//-----------------------------------------------------------------------------
/**
* Please note that this example is meant to show how to use the FFmpeg API in combination
* with the mvIMPACT Acquire API. Since version 2.39.0 there is also a video stream API
* (e.g. the class mvIMPACT::acquire::labs::VideoStream) that wraps most of the difficult
* FFmpeg related calls and offers a convenient interface. The approach shown here should
* only be used when the full FFmpeg API is needed.
*/
int main( int argc, char* argv[] )
//-----------------------------------------------------------------------------
{
DeviceManager devMgr;
Device* pDev = nullptr;
AVCodecID avCodec = AV_CODEC_ID_NONE;
unsigned int recordingTime = 0;
unsigned int crf = 23;
unsigned int bitrate = 6000;
// scan command line
if( argc > 1 )
{
bool boInvalidCommandLineParameterDetected = false;
for( int i = 1; i < argc; i++ )
{
string param( argv[i] ), key, value;
string::size_type keyEnd = param.find_first_of( "=" );
if( ( keyEnd == string::npos ) || ( keyEnd == param.length() - 1 ) )
{
cout << "Invalid command line parameter: '" << param << "' (ignored)." << endl;
boInvalidCommandLineParameterDetected = true;
}
else
{
key = param.substr( 0, keyEnd );
value = param.substr( keyEnd + 1 );
if( ( key == "serial" ) || ( key == "s" ) )
{
pDev = devMgr.getDeviceBySerial( value );
if( pDev->interfaceLayout.isValid() )
{
// if this device offers the 'GenICam' interface switch it on, as this will
// allow better control over GenICam compliant devices
conditionalSetProperty( pDev->interfaceLayout, dilGenICam, true );
}
}
else if( ( key == "codec" ) || ( key == "c" ) )
{
const StringToCodecMap::const_iterator itCodec = AVIHelper::getSupportedCodecs().find( value );
if( itCodec != AVIHelper::getSupportedCodecs().end() )
{
avCodec = itCodec->second;
cout << "Using codec: " << itCodec->first << endl;
}
}
else if( ( key == "recordingTime" ) || ( key == "rt" ) )
{
recordingTime = static_cast<unsigned int>( atoi( value.c_str() ) );
}
else if( ( key == "constantRateFactor" ) || ( key == "crf" ) )
{
crf = static_cast<unsigned int>( atoi( value.c_str() ) );
}
else if( ( key == "bitrate" ) || ( key == "b" ) )
{
bitrate = static_cast<unsigned int>( atoi( value.c_str() ) );
}
else
{
cout << "Invalid command line parameter: '" << param << "' (ignored)." << endl;
boInvalidCommandLineParameterDetected = true;
}
}
}
if( boInvalidCommandLineParameterDetected )
{
displayCommandLineOptions();
}
}
else
{
cout << "No command line parameters specified." << endl;
displayCommandLineOptions();
}
if( pDev == nullptr )
{
pDev = getDeviceFromUserInput( devMgr );
}
if( pDev == nullptr )
{
cout << "Unable to continue! Press [ENTER] to end the application" << endl;
cin.get();
return 1;
}
if( avCodec == AV_CODEC_ID_NONE )
{
avCodec = selectCodecFromUserInput();
selectQualityFromUserInput( avCodec, crf, bitrate );
}
cout << "Initialising the device '" << pDev->serial.read() << "'. This might take some time..." << endl;
try
{
pDev->open();
}
catch( const ImpactAcquireException& e )
{
// this e.g. might happen if the same device is already opened in another process...
cout << "An error occurred while opening the device " << pDev->serial.read()
<< " (error code: " << e.getErrorCodeAsString() << ").";
return 1;
}
// make sure the image resolution matches the video resolution
int64_type imageWidth = {0};
int64_type imageHeight = {0};
try
{
if( pDev->interfaceLayout.read() == dilGenICam )
{
GenICam::UserSetControl usc( pDev );
if( usc.userSetSelector.isValid() && usc.userSetSelector.isWriteable() && usc.userSetLoad.isValid() )
{
usc.userSetSelector.writeS( "Default" );
usc.userSetLoad.call();
}
GenICam::ImageFormatControl ifc( pDev );
if( ifc.width.isValid() && ifc.height.isValid() )
{
imageWidth = ifc.width.read();
imageHeight = ifc.height.read();
}
GenICam::AcquisitionControl acq( pDev );
if( acq.acquisitionFrameRateEnable.isValid() && acq.acquisitionFrameRateEnable.isWriteable() )
{
acq.acquisitionFrameRateEnable.write( bTrue );
acq.acquisitionFrameRate.write( static_cast<double>( AVIHelper::FRAME_RATE ) );
}
}
else
{
BasicDeviceSettingsWithAOI bds( pDev );
if( bds.aoiWidth.isValid() && bds.aoiHeight.isValid() )
{
imageWidth = bds.aoiWidth.read();
imageHeight = bds.aoiHeight.read();
}
}
ImageDestination id( pDev );
id.pixelFormat.write( idpfYUV422Planar );
}
catch( const ImpactAcquireException& e )
{
// This e.g. might happen if the same device is already opened in another process...
cout << "An error occurred while configuring the device " << pDev->serial.read()
<< " (error code: " << e.getErrorCodeAsString() << ")." << endl
<< "Press [ENTER] to end the application..." << endl;
cin.get();
return 1;
}
ThreadParameter threadParam( pDev );
ostringstream oss;
oss << pDev->serial.readS() << "@" << imageWidth << "x" << imageHeight;
helper::RequestProvider requestProvider( pDev );
threadParam.aviHelper_.startRecordingEngine( imageWidth, imageHeight, avCodec, crf, bitrate, oss.str() );
requestProvider.acquisitionStart( myThreadCallback, ref( threadParam ) );
if( recordingTime == 0 )
{
cout << "Press [ENTER] to stop the acquisition thread" << endl;
cin.get();
}
else
{
cout << "Recording for " << recordingTime << "ms now." << endl;
this_thread::sleep_for( chrono::milliseconds( recordingTime ) );
}
requestProvider.acquisitionStop();
threadParam.aviHelper_.stopRecordingEngine();
return 0;
}