Glossario

C

CameraLink (CL) is a popular interface designed specifically for machine vision applications. Developed for high-speed image transfer, it supports pixel depths from 8 to 16 bits per pixel and a maximum pixel clock of 85 MHz.

CameraLink comes in three variants (a rough data-rate calculation is sketched after the list):

  • BASE (max. 24 bits per clock)
  • MEDIUM (max. 48 bits per clock)
  • FULL (max. 64 bits per clock)
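
As a back-of-the-envelope check (my own arithmetic, not taken from the CameraLink specification), the peak payload of each configuration follows directly from the bits transferred per clock cycle and the pixel clock. The short C++ sketch below prints the result for the maximum pixel clock of 85 MHz:

#include <cstdio>

int main()
{
    // Peak payload = bits per clock cycle * pixel clock / 8 (bits -> bytes).
    const double pixelClockMHz = 85.0;   // maximum CameraLink pixel clock
    const struct { const char* name; int bitsPerClock; } configs[] = {
        { "BASE", 24 }, { "MEDIUM", 48 }, { "FULL", 64 },
    };
    for (const auto& c : configs)
    {
        const double peakMBps = c.bitsPerClock * pixelClockMHz / 8.0;
        std::printf("%-6s: %2d bits/clock -> %.0f MB/s peak\n",
                    c.name, c.bitsPerClock, peakMBps);
    }
    return 0;
}

This yields roughly 255 MB/s for BASE, 510 MB/s for MEDIUM and 680 MB/s for FULL, which is why the sustained rates of the frame grabbers listed below stay under these theoretical peaks.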

MATRIX VISION offers the following CameraLink frame grabbers:

  • Models: mvGAMMA-CL, mvTITAN-CL, mvHYPERION-CLb, mvHYPERION-CLe, mvHYPERION-CLm, mvHYPERION-CLf
  • Supported configurations: 1x BASE, 2x BASE, 1x MEDIUM or 1x FULL, depending on the model
  • Driver (mvIMPACT Acquire): Windows® XP, Vista, 7 (32/64 bit) for all models; Linux® (32/64 bit) for the mvHYPERION models
  • Bus: PCI 32 bit / 33 MHz Rev. 2.1 (mvGAMMA-CL, mvTITAN-CL); PCI Express® x1 (mvHYPERION-CLb, mvHYPERION-CLe) or x4 (mvHYPERION-CLm, mvHYPERION-CLf)
  • Continuous data rate: 95 MB/s (mvGAMMA-CL); 100 MB/s (mvTITAN-CL; mvTITAN-CL/110: 110 MB/s); 200 MB/s (mvHYPERION-CLb, mvHYPERION-CLe); 640 MB/s (mvHYPERION-CLm, mvHYPERION-CLf)
  • Max. pixel clock: 66 MHz (mvGAMMA-CL, mvTITAN-CL); 85 MHz (mvHYPERION models)
  • Power over CameraLink: mvHYPERION-CLb, mvHYPERION-CLe, mvHYPERION-CLm, mvHYPERION-CLf

CCD vs. CMOS

Two sensor technologies are currently on the market: CCD and CMOS.

As a rule, CCD sensors offer better image quality, lower electronic noise and no fixed-pattern noise. CMOS sensors, on the other hand, are on average cheaper and offer additional features that cannot be integrated into CCD technology. CCDs have a higher dynamic range, a major advantage in applications with large differences in brightness within the image.

Depending on the application, monochrome or colour sensors can be used; some sensors are only available in one of the two versions. Colour sensors have a filter in front of the matrix of light-sensitive cells so that each cell receives only a specific wavelength. This structure is nevertheless not insensitive to IR radiation, which is why an additional IR-cut filter is needed to avoid incorrect colour reproduction. Because each cell is tied to a specific band, colour sensors provide a lower spatial resolution. If high colour accuracy is required, as in print quality control, or high spatial resolution, 3-chip cameras are recommended, which have a separate chip for each colour (red, green and blue).

A further aspect to consider is the type of exposure. Both CCD and CMOS sensors are available with a global shutter (whole image), while some simpler CMOS sensors normally have a rolling shutter. With fast-moving objects the rolling shutter produces geometric distortions because of the way the exposure works.

Features                         CCD                            CMOS
Signal at pixel output gate      Electron packages              Voltage
Signal at chip output gate       Voltage (analog)               Bits (digital)
Signal at camera output gate     Bits (digital)                 Bits (digital)
Fill factor / aperture           High                           Medium
Amplification interference       None                           Medium
System noise                     Low                            Medium
System complexity                High                           Low
Sensor complexity                Low                            High
Camera components                PCB + different chips + lens   Chip + lens
Research and development costs   Depends on application         Depends on application
System costs                     Depends on application         Depends on application

Performance                      CCD                            CMOS
Responsivity                     Medium                         A bit better
Dynamic range                    High                           Medium
Uniformity of the pixels         High                           Low .. medium
Uniform exposure time            Fast, combined                 Slow
Speed                            Medium .. high                 Higher
Windowing                        Limited                        Extended
Antiblooming                     High .. none                   High
Power supply and pulsing         High voltage, different        Lower voltage, easy

Conclusion

CCD sensors offer better image quality, higher sensitivity and dynamic range, and exposure control that is synchronous for all pixels, while CMOS sensors allow more compact designs, higher frame rates and additional operating modes.

G

GenICam

GenICam stands for "GENeric programming Interface for CAMeras". It is a generic way of accessing and modifying the parameters of a device through a unified interface. A GenICam-compliant device provides a description file either directly (from internal memory), from local storage (hard disk) or from a remote source (web). A GenICam description file is, in effect, a machine-readable manual of the device. It provides human-readable names and value ranges for the parameters that can be read from or written to the device, together with instructions on which command has to be sent to the device when a user reads or modifies a certain parameter. These description files are written in XML.

For more information on this topic, see http://www.genicam.org.

GigE Vision

GigE Vision is a network protocol designed for the communication between an imaging device and an application. The protocol completely describes:

  • device identification
  • data transmission
    • image data
    • additional data
  • read/write of parameters.

GigE Vision uses UDP for data transmission in order to avoid the overhead introduced by TCP.

Note: UDP neither guarantees the order in which packets reach the client nor that the packets reach their destination at all. However, GigE Vision defines a mechanism that allows lost packets to be detected. This enables the acquisition drivers written by the various manufacturers to implement algorithms that reconstruct the images and the other transferred data and, if necessary, to ask the device to resend lost packets until the complete buffer has been reconstructed correctly. For further information: http://www.machinevisiononline.org/public/articles/index.cfm?cat=167

The GigE Vision filter driver, the acquisition drivers developed by MATRIX VISION and all GigE Vision-compliant MATRIX VISION devices support this resend mechanism, and in most cases the buffers can be reconstructed. This of course cannot increase the available bandwidth, so if parts of the transmission line are overloaded for too long, data will still be lost.

Both acquisition drivers allow fine-tuning of the internally used resend algorithm and provide information about lost data and about data that had to be requested more than once. This kind of information and configuration is part of the SDK; further details can be found in the corresponding documentation.
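
As an illustration only (a minimal sketch of the idea, not MATRIX VISION driver code): every GigE Vision stream packet carries a block ID identifying the image buffer and an increasing packet ID, so the receiver can detect gaps in the packet IDs and pass the missing IDs to a resend request.

#include <algorithm>
#include <cstdint>
#include <set>
#include <vector>

// Simplified receiver-side bookkeeping for one image block (buffer).
// Stream packets carry a block ID and an increasing packet ID; a gap in the
// packet IDs indicates lost UDP datagrams, which can then be requested again.
struct BlockReassembly
{
    uint16_t blockId = 0;                // image buffer this object reassembles
    uint32_t highestPacketId = 0;        // highest packet ID seen so far
    std::set<uint32_t> missing;          // packet IDs not received yet

    // Called for every received packet; returns the IDs to ask the device to resend.
    std::vector<uint32_t> onPacket(uint32_t packetId)
    {
        std::vector<uint32_t> resendRequest;
        if (packetId > highestPacketId + 1)
        {
            // Gap detected: everything between the last seen ID and this one is missing.
            for (uint32_t id = highestPacketId + 1; id < packetId; ++id)
            {
                missing.insert(id);
                resendRequest.push_back(id);
            }
        }
        missing.erase(packetId);         // a (re)sent packet fills its gap
        highestPacketId = std::max(highestPacketId, packetId);
        return resendRequest;            // hand these IDs to the resend command
    }

    // The block is complete once no IDs are missing and the last packet has arrived.
    bool complete(uint32_t lastPacketId) const
    {
        return missing.empty() && highestPacketId >= lastPacketId;
    }
};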

H

High Dynamic Range

The HDR (High Dynamic Range) mode increases the usable contrast range. This is achieved by dividing the integration time into two or three phases. The proportion of the total exposure time assigned to each phase can be set independently, and it can also be configured how much signal may be accumulated in each phase.

Functionality

tl_files/mv11/images/support/faq/HDR_mode_01.gif

Figure 1: Diagram of the -x00w sensor's HDR mode

Description

  • Phase 0
    • During T1 all pixels are integrated until they reach the defined signal level of Knee Point 1.
    • If a pixel reaches this level, its integration is stopped.
    • During T1 no pixel can reach a level higher than that of Knee Point 1.
  • Phase 1
    • During T2 all pixels are integrated until they reach the defined signal level of Knee Point 2.
    • T2 is always shorter than T1, so its share of the total exposure time is lower.
    • Thus, the signal increase during T2 is lower than during T1.
    • The maximum signal level of Knee Point 2 is higher than that of Knee Point 1.
  • Phase 2
    • During T3 all pixels are integrated up to possible saturation.
    • T3 is always shorter than T2, so its share of the total exposure time is again lower.
    • Thus, the signal increase during T3 is lower than during T2.

In this way, darker pixels are integrated over the complete integration time and the sensor reaches its full sensitivity. Pixels that are clipped at the Knee Points lose part of their integration time - the brighter they are, the more they lose.

tl_files/mv11/images/support/faq/HDR_mode_02.gif

Figure 2: Integration time of pixels of different brightness

The diagram shows the signal curves of three pixels of different brightness. The slope depends on the light intensity and is therefore constant for each pixel (assuming the light intensity is constant over time).
Since the very bright pixel is clipped early at the signal levels S1 and S2, its total integration time is much shorter than that of the dark pixel. In practice, the phases of the integration time differ strongly: T1, for example, is 95% of Ttotal, T2 only 4% and T3 only 1%. This strongly attenuates very bright pixels. If, however, the integration thresholds are divided into three equal parts, i.e. S2 = 2 x S1 and S3 = 3 x S1, a pixel needs roughly a hundred times more brightness for the step from S2 to S3 than for the step from 0 to S1.
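
The multi-slope behaviour described above can be modelled in a few lines of C++. This is a simplified sketch of the integration, not sensor code; the phase shares (95% / 4% / 1%) and the knee levels at one and two thirds of saturation are the example values from the text:

#include <algorithm>
#include <cstdio>

// Signal of one pixel after the three HDR phases. "intensity" is the signal the
// pixel would accumulate over the whole exposure if it were never clipped
// (1.0 = normal saturation); s1/s2 are the knee levels, ssat the saturation level.
double hdrResponse(double intensity,
                   double t1, double t2, double t3,   // phase shares, t1 + t2 + t3 = 1
                   double s1, double s2, double ssat)
{
    double s = std::min(intensity * t1, s1);   // Phase 0: clipped at Knee Point 1
    s = std::min(s + intensity * t2, s2);      // Phase 1: clipped at Knee Point 2
    s = std::min(s + intensity * t3, ssat);    // Phase 2: clipped at saturation
    return s;
}

int main()
{
    for (double i : { 0.5, 1.0, 10.0, 50.0, 100.0 })
        std::printf("intensity %6.1f -> signal %.3f\n",
                    i, hdrResponse(i, 0.95, 0.04, 0.01, 1.0 / 3, 2.0 / 3, 1.0));
    return 0;
}

With these example values a pixel only saturates at roughly 33 times the intensity that would saturate it in normal operation, which illustrates the extended contrast range.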

Using HDR with CMOS sensor -x00w

Figure 3 shows the HDR mode in use. An image sequence was acquired with integration times between 10 µs and 100 ms. The three slopes of the HDR mode are clearly visible. The "waves" result from rounding of the three exposure phases, which can only be adjusted in steps of one line period of the sensor.

tl_files/mv11/images/support/faq/HDR_mode_03.gif

Figure 3: wxPropView HDR screenshot

Notes about the usage of the HDR mode with mvBlueFOX-200w

  • In the HDR mode, the basic amplification is reduced by approx. 0.7 in order to use the sensor's large dynamic range.
  • If the manual gain is raised, this effect is reversed.
  • Exposure times that are too short make no sense; a sensible lower limit is reached when the exposure time of the third phase reaches its possible minimum (one line period).

Possible settings using mvBlueFOX-200w

HDREnable

  • Off : Standard mode
  • On : HDR mode on, reduced amplification
    • HDRMode:
      • Fixed settings with 2 Knee Points; modulation: Phase 0 up to 33%, Phase 1 up to 66%, Phase 2 up to 100%
        • Fixed0: Phase 1 exposure 12.5% , Phase 2 31.25% of total exposure
        • Fixed1: Phase 1 exposure 6.25% , Phase 2 1.56% of total exposure
        • Fixed2: Phase 1 exposure 3.12% , Phase 2 0.78% of total exposure
        • Fixed3: Phase 1 exposure 1.56% , Phase 2 0.39% of total exposure
        • Fixed4: Phase 1 exposure 0.78% , Phase 2 0.195% of total exposure
        • Fixed5: Phase 1 exposure 0.39% , Phase 2 0.049% of total exposure
      • User: Variable setting of the Knee Point (1..2), threshold and exposure time proportion
        • HDRKneePointCount: Number of Knee Points (1..2)
        • HDRKneePoints
          • HDRKneePoint-0
            • HDRExposure_ppm: Proportion of Phase 1 compared to total exposure in parts per million (ppm)
            • HDRControlVoltage_mV: Control voltage for exposure threshold of first Knee Point (3030mV is equivalent to approx. 33%)
          • HDRKneePoint-1
            • HDRExposure_ppm: Proportion of Phase 2 compared to total exposure in parts per million (ppm)
            • HDRControlVoltage_mV: Control voltage for exposure threshold of the second Knee Point (2630mV is equivalent to approx. 66%)
HRTC

MATRIX VISION's Hardware Real-Time Controller (in short HRTC) is a component of the FPGA which is used for time-critical I/O and acquisition control. For this reason the HRTC supersedes the use of an external PLC for camera and process control in many cases.
An HRTC program consists of a sequence of operating steps, which are processed by the controller. To create these sequences you can use, for example, the GUI tool wxPropView.

tl_files/mv11/images/support/faq/wxPropView_HRTC_programm.jpg

Figure 1: Entering a Hardware Real-Time Controller program in wxPropView

The sample in Figure 1 shows how to achieve a defined image frequency of 10 images per second in five program steps.
For more information, please have a look at the corresponding product manual chapter "HRTC - Hardware Real-Time Controller" (here mvBlueFOX).
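
A hedged illustration of what such a five-step sequence can look like (the actual program in the manual may use different values; the WaitClocks arguments are assumed to be in microseconds, as in the connection example further down in this entry):

0. TriggerSet 0              (generate an internal trigger)
1. WaitClocks 100            (trigger pulse width, 100 µs)
2. TriggerReset
3. WaitClocks 99900          (remainder of the 100 ms period)
4. Jump 0

Steps 0 to 4 then repeat roughly every 100 000 µs, i.e. about 10 times per second.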

Note: To use the HRTC you have to set the trigger mode (TriggerMode) and the trigger source (TriggerSource) (C++ syntax):

CameraSettings->triggerMode = ctmOnRisingEdge;
CameraSettings->triggerSource = ctsRTCtrl;
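
Embedded in a complete mvIMPACT Acquire program this could look roughly like the sketch below; the class and method names used here (DeviceManager, getDevice(), CameraSettingsBlueFOX, write()) reflect my reading of the mvIMPACT Acquire C++ API and should be checked against the API reference:

#include <mvIMPACT_CPP/mvIMPACT_acquire.h>

using namespace mvIMPACT::acquire;

int main()
{
    DeviceManager devMgr;                    // enumerates all MATRIX VISION devices
    if (devMgr.deviceCount() == 0)
        return 1;                            // no device found

    Device* pDev = devMgr.getDevice(0);      // assumption: the first device is an mvBlueFOX
    pDev->open();

    CameraSettingsBlueFOX cs(pDev);          // mvBlueFOX-specific camera settings
    cs.triggerMode.write(ctmOnRisingEdge);   // start the exposure on a rising edge ...
    cs.triggerSource.write(ctsRTCtrl);       // ... generated by the HRTC program
    return 0;
}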

With wxPropView:

tl_files/mv11/images/support/faq/wxPropView_HRTC_setting.jpg

Figure 2: Setting the HRTC as TriggerSource

Areas of use and applications

Current trends in digital cameras are moving towards bus systems like IEEE 1394, USB and Gigabit Ethernet, which are not real-time capable. If an application with digital cameras requires complex trigger and flash control, cameras with I/O and trigger inputs like the mvBlueFOX can be used, or additional separate I/O boards come into operation. When I/O boards are used, some uncertainty due to the latency of the bus systems remains. It is therefore more sensible to move the real-time relevant features into the camera and thus simplify the local system.
Possible applications (excerpt) are:

  • Generation of trigger signals
  • Synchronization of multiple cameras
  • Fast generation of image sequences with different flash and exposure control
  • Dark and light image acquisitions to generate a reference image
  • Exposure control on images with different wave lengths (R/G/IR)

tl_files/mv11/images/support/faq/Server_camera_en.jpg

Figure 3: Processing chain from sensor to server: the more components in between, the greater the losses

Example: Delay the expose start of the following camera

If a defined delay is required between cameras (here mvBlueFOX), the HRTC can do the synchronization work.

In this case, one camera must be the master. The external trigger signal that starts the acquisition must be connected to one of the camera's digital inputs. One of its digital outputs is then connected to the digital input of the next camera, so camera one uses its digital output to trigger camera two. How to connect the cameras to one another can also be seen in the following image:

tl_files/mv11/images/support/faq/HRTC_sample_connection.jpg

Figure 4: Connection diagram for a defined delay from the exposure start of one camera relative to another

Assume that the external trigger is connected to digital input 0 of camera one and that digital output 0 of camera one is connected to digital input 0 of camera two. Each additional camera is then connected to its predecessor in the same way as camera two is connected to camera one. The HRTC of camera one then has to be programmed roughly like this:

0. WaitDigin DigIn0->On
1. TriggerSet 0
2. WaitClocks <trigger pulse width>
3. TriggerReset
4. WaitClocks <delay time>
5. SetDigout DigOut0->On
6. WaitClocks 100µs
7. SetDigout DigOut0->Off
8. Jump 0

<trigger pulse width> should not be less than 100 µs.

When the cameras are set up to start the exposure on the rising edge of the signal, the value of <delay time> is of course the desired delay minus <trigger pulse width>. If more than two cameras are to be connected like this, every camera except the last one must run a program like the one discussed above; the delay times can of course vary.

tl_files/mv11/images/support/faq/HRTC_sample_diagram.jpg

Figure 5: Delay the expose start of the following camera

Products

The following products have an HRTC:

I

Industrial image processing

Industrial image processing or digital image processing means the digital processing of image data. In contrast to the manual work of digital image editing, the following generally applies to digital image processing:

  • specific criteria are extracted from the image data for further processing or analysis
  • the image data themselves are often not needed afterwards and are sometimes not even displayed
  • acquisition and processing take place automatically, using predefined procedures and sequences in closed systems
  • the complexity of the algorithms sometimes demands special hardware to process the image data

Areas of use and applications

Below are some example applications from different areas of use:

Automation, industry

  • checking workpieces for dimensional accuracy and completeness
  • surface checking for correct printing and freedom from defects
  • mounting check of printed circuit boards
  • control and tracing of the flow of material
  • visualization of process flows in a control center

Microscopy (medicine, research)

  • improvement and analysis of light microscopic shots
  • acquisition and calculation of electron microscope images
  • extraction of laser scan images, e.g. in ophthalmology

Medicine

  • eye diagnostics (see above)
  • categorisation of burns
  • acquisition of tooth profiles

Safety engineering

  • compression and recording of image sequences in banks and petrol stations
  • analysis of objects in images (video sensor technology)
  • infrared camera technology
  • recording of cash machine transactions

Traffic engineering

  • surveillance, counting and control of the flow of traffic
  • billing of parking garage fees via licence plate OCR
  • cameras in vehicles: distance measurement, electronic rear-view mirrors, road sign recognition
  • surveillance/inspection of elements at risk (aircraft turbines)

Commerce

  • empties check
  • determination of the flow of customers to control staff deployment

M

MATRIX VISION SDKs / interfaces

mvSDK

  • oldest interface
  • close to hardware
  • creation time-consuming
  • each hardware family with own driver DLL
  • not object-oriented
  • active support is discontinued

mvAcquireControl

  • successor of mvSDK
  • based on grabber.dll (mvSDK runs in the background)
  • C and C++ interface for the user available
  • active support is discontinued 

mvIMPACT Acquire

  • current interface since 2004
  • not compatible with mvSDK or mvAcquireControl
  • object-oriented
  • C and .NET wrapper for the user available
  • applications based on mvIMPACT Acquire are compatible with all product families

mvIMPACT Acquire

mvIMPACT Acquire is the programming interface (API) for MATRIX VISION hardware. The following programming languages are supported:

  • C,
  • C++,
  • C# and
  • VB.NET

R

Rolling Shutter

Some MATRIX VISION industrial cameras and smart cameras use CMOS sensors with a rolling shutter. This means that the exposure of the individual image lines starts and ends at different times (see Figure 1).

tl_files/mv11/images/support/faq/Rolling_shutter_01.gif

Figure 1: Exposure of every line starts and ends at different times

Figure 1 has two sections. The left side shows two states of a sensor with a line block that shifts over the sensor. The white area represents the light-sensitive area; it shifts line by line from the top of the image downwards. If you set the exposure time to 100 lines, for example, this area will have a height of 100 lines. When the integration window shifts to the next line, that line first has to be reset ("Reset line"; blue in Figure 1). At the top of the area a line is read out after it has been exposed for 100 lines ("Read line"; red in Figure 1).
Every line is thus exposed for the same integration time, but the exposure is shifted in time from line to line. The right side of Figure 1 clarifies this with a lines/time diagram: line by line, the start of the exposure ("Exposure time") shifts to the right in the diagram. It also shows the phases of an image acquisition with a rolling shutter: every line performs a reset sequence, an exposure process ("Exposure time") and a transfer process ("Transfer time"). The start of the transfer of the first line and the end of the transfer of the last line define the frame rate. In short:

  
                           1
Frame_rate = ----------------------------
             transfer_time * image_height

If the exposure time is longer than the transfer time, you have to replace the transfer time by the exposure time in the formula.

Exposure effects during horizontal movement

If an object moves horizontally during the exposure, you will get a shifted image. Figure 2 shows how the object moves from left to right. Because the acquisition is delayed line by line, the rolling shutter of the sensor (reduced to three lines for clarity) captures parts of the object at different positions. The composed image in the example shows the resulting shift; in addition, the movement of the object causes slight blur.

tl_files/mv11/images/support/faq/Rolling_shutter_02.gif

Figure 2: Shifted image during horizontal object movement

Looking at the object's movement line by line, it is surprising how short the distance covered per line actually is. For example: with a rolling shutter sensor of 480 lines and a frame rate of 30 Hz (30 images per second), an object moving at 10 meters per second (36 km/h) covers a distance of 0.694 millimeters during every line change.
14400 lines are read out every second:

  
                                      lines
read_lines = 30Hz * 480 lines = 14400 ------
                                        s

This means that a line change happens every 0.0000694 seconds:

  
                 1
line_change = ----- = 0.0000694 s
               14400

If the object has a speed of 10 m/s, it covers a distance of 0.694 mm during every line change:

  
s = v * t 
        m
s = 10 --- * 0.0000694 s
        s
s = 0.000694 m = 0.694 mm
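
The same arithmetic, collected in one small C++ sketch (plain calculation, nothing camera-specific):

#include <cstdio>

int main()
{
    const double frameRateHz   = 30.0;   // images per second
    const int    imageHeight   = 480;    // lines per image
    const double objectSpeedMs = 10.0;   // object speed in m/s (36 km/h)

    const double linesPerSecond  = frameRateHz * imageHeight;        // 14400 lines/s
    const double lineChangeTime  = 1.0 / linesPerSecond;             // ~0.0000694 s
    const double distancePerLine = objectSpeedMs * lineChangeTime;   // ~0.000694 m

    std::printf("line change every %.7f s, object moves %.3f mm per line\n",
                lineChangeTime, distancePerLine * 1000.0);
    return 0;
}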

For this reason a rolling shutter sensor is suited for motion analysis at high frame rates.

Exposure effects during vertical movement

If an object moves vertically during the exposure, the image is compressed or stretched, depending on the direction of the movement. In Figure 3 an object moves vertically against the exposure direction of the sensor (again reduced to three lines for clarity). The example shows that the object is compressed; the composed image confirms this.

tl_files/mv11/images/support/faq/Rolling_shutter_03.gif

Figure 3: Compression during vertical object movement

The rolling shutter effect can also be used to advantage: the line-by-line integration reduces the blur of a fast-moving object, so the integration time can be doubled.

tl_files/mv11/images/support/faq/Rolling_shutter_04.gif

Figure 4: Reduced blur with rolling shutter

Flash control on CMOS sensors with rolling shutter

Flash control on CMOS sensors with a rolling shutter is limited. The exposure time has to be long enough for all lines to be exposed at the same moment (see the "All row integration" area in Figure 5). The flash has to fire within the "All row integration" window, and the flash duration may not be longer than this window.

tl_files/mv11/images/support/faq/Rolling_shutter_05.gif

Figure 5: Possible flash window

In this mode of operation you have to make sure that there is no extraneous light during the exposure of the lines ("Exposure time").
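
A simple way to estimate this flash window (my own sketch of the relationship shown in Figure 5, with assumed example numbers): all lines integrate simultaneously only during the interval in which the last line has already started its exposure and the first line has not yet finished, i.e. for the exposure time minus (number of lines - 1) times the line time.

#include <cstdio>

int main()
{
    const double exposureTime_s = 0.020;   // 20 ms exposure (example value)
    const double lineTime_s     = 30e-6;   // 30 µs per line (example value)
    const int    lines          = 480;     // sensor height in lines

    // All lines overlap in the interval [(lines - 1) * lineTime, exposureTime].
    const double flashWindow_s = exposureTime_s - (lines - 1) * lineTime_s;
    if (flashWindow_s > 0.0)
        std::printf("flash window (\"All row integration\"): %.2f ms\n", flashWindow_s * 1e3);
    else
        std::printf("exposure too short: there is no moment where all lines integrate\n");
    return 0;
}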

S

SNFC

Standard Feature Naming Convention of GenICam.

The latest GenICam properties list can be found here: http://www.emva.org/genicam/genicam%E2%84%A2_document_download

The file is called "GenICam™ Standard Features Naming Convention (PDF)".

U

USB vs. Firewire

In the meantime, many digital cameras with different interfaces have established themselves on the image processing market as a replacement for analog cameras with PC-based frame grabbers. While special frame grabber solutions are still needed at the high end of the performance spectrum, cost-effective USB 2.0 and FireWire solutions have spread at the low end.

Why do we use USB as the interface for our industrial cameras?

Compared to FireWire, USB technology offers some advantages:

  • The USB interface is available on all current laptops, PCs and many embedded industrial systems.
  • The USB interface guarantees a secured transfer.
  • The bandwidth allocation works dynamically (in contrast to DCAM [IEEE 1394-based Digital Camera Specification], which works isochronously).

What advantages does the industrial USB 2.0 camera mvBlueFOX from MATRIX VISION offer?

Besides the general advantages of the USB interface and its easy handling, the mvBlueFOX offers additional advantages:

  • With HRTC (Hardware Real-Time Controller) it is possible to control the digital I/O flexibly.
  • Flexible trigger possibilities are available.
  • The camera supports dynamic camera control (parameters can be changed on the fly).
  • Each image can be acquired with appropriate parameters.
  • An industrial connector is available.
  • The cameras are bus powered.
  • The CCD sensors are precisely aligned, so a positioning accuracy of better than one percent is achieved.
  • The cameras are available as OEM modules.
  • User data can be saved on the camera.
  • The drivers are powerful and provide a wide range of image processing functionality.
  • A uniform driver architecture for all MATRIX VISION products (frame grabbers, cameras) and future products is available.
  • The drivers are available for Linux (also for embedded systems like ARM, PPC etc.).
USB3 Vision

USB3 Vision is a standard for the image processing industry that defines the communication between an image acquisition device and an application via USB 3.0. The protocol completely describes:

  • device discovery
  • data transmission
    • image data
    • additional data
  • read/write of parameters.