
GigEVision_MultiDevice_GrabImages


Captures frame streams from multiple synchronously triggered GigE Vision compliant cameras.

Name | Type | Range | Description
inAddress1 | GevAddress | | GigE Vision identifying address of the first device
inAddress2 | GevAddress | | GigE Vision identifying address of the second device
inAddress3 | GevAddress* | | GigE Vision identifying address of the third device
inAddress4 | GevAddress* | | GigE Vision identifying address of the fourth device
inAddress5 | GevAddress* | | GigE Vision identifying address of the fifth device
inAddress6 | GevAddress* | | GigE Vision identifying address of the sixth device
inPixelFormat | GevPixelFormat | | Requested pixel format in the GenICam pixel naming convention
inInputQueueSize | Integer | 1 - 100 | Number of incoming frames that can be buffered before the application is able to process them
inMaxTimeDiff | Integer | 20 - 3600000 | Maximum time difference between received frames, in milliseconds
inTimeout | Integer* | 200 - 3600000 | Maximum time to wait for the first frame, in milliseconds
outFrame1 | Image? | | Captured frame from the first device
outFrame2 | Image? | | Captured frame from the second device
outFrame3 | Image? | | Captured frame from the third device
outFrame4 | Image? | | Captured frame from the fourth device
outFrame5 | Image? | | Captured frame from the fifth device
outFrame6 | Image? | | Captured frame from the sixth device
diagCurrentTimeDiff | Integer | | Time difference between the receive times of the currently captured frames


This filter is intended for establishing a connection with, and streaming images from, up to six GigE Vision® compliant cameras driven by a single common trigger source (i.e. triggered synchronously). It is designed for systems with two to six cameras configured for an external trigger, in which one signal source triggers all cameras at once and the images resulting from each trigger event must be received with their synchronization maintained.

This filter is equivalent to six combined GigEVision_GrabImage_WithTimeout filters. In addition, it starts streaming in all cameras at once, helps to maintain synchronization between cameras sharing a common trigger signal (e.g. when one of the frames captured after a trigger is dropped due to network errors), and allows a single timeout value to be defined for all cameras at once.

Special care must be taken when connecting multiple cameras to a single computer. Such systems may require additional configuration of the cameras in order to properly utilize the network throughput or the computer hardware. For more information, refer to the Connecting Multiple Devices to a Single Computer section of Connecting Device.

In the first iteration this filter starts acquisition in all configured cameras, preparing them for the first trigger event. In subsequent iterations it waits for a set of frames for no longer than the timeout selected on the inTimeout port (in milliseconds). When the timeout elapses, a Nil value is returned on all outFrame ports instead of images.
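The per-iteration timeout behavior can be sketched as follows. This is an illustrative model, not the filter's implementation: the filter is configured graphically in Adaptive Vision Studio, and names such as `wait_for_first_frame` and the queue of `(camera_index, receive_time, image)` tuples are hypothetical.

```python
import queue

def wait_for_first_frame(frame_queue, timeout_ms):
    """Wait up to timeout_ms for the first frame of a new set.

    Returns a (camera_index, receive_time, image) tuple, or None when
    the timeout elapses -- the case where the filter returns Nil on
    every outFrame port.
    """
    try:
        return frame_queue.get(timeout=timeout_ms / 1000.0)
    except queue.Empty:
        return None
```

In this model a `None` result corresponds to the Nil values placed on all outFrame outputs after a timeout.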

To maintain frame synchronization, the receive time (at the application level) of every frame is tracked internally. Frames are considered to originate from a single trigger event when their receive times fall within the specified interval. When the receive times of two frames differ by more than the specified threshold, they are considered to originate from different trigger events and are not returned by the filter in the same iteration. When a frame is received later than the allowed time difference after the first received frame, its corresponding outFrame output returns Nil instead, and the frame is kept to be returned in the next iteration.
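The grouping rule above can be sketched as a pure function over timestamped frames. This is a simplified model under assumed names (`split_frame_set`, tuples of `(camera_index, receive_time, image)`), not the filter's actual code:

```python
def split_frame_set(frames, num_cameras, max_time_diff):
    """Partition timestamped frames into one synchronized set.

    frames: list of (camera_index, receive_time, image) tuples sorted
    by receive_time.  Frames whose receive time lies within
    max_time_diff of the first frame form the current set; later
    frames are left over, mirroring how the filter keeps them to be
    returned in the next iteration.  Cameras with no frame in the set
    stay None, corresponding to Nil on their outFrame outputs.
    """
    current = [None] * num_cameras
    leftover = []
    if not frames:
        return current, leftover
    t_first = frames[0][1]
    for cam, t, img in frames:
        if t - t_first <= max_time_diff and current[cam] is None:
            current[cam] = img
        else:
            leftover.append((cam, t, img))
    return current, leftover
```

For example, with a 20 ms threshold, two frames received 5 ms apart form one set, while a frame arriving 100 ms after the first is held back for the next iteration.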

The receive time difference threshold must be configured manually for the target system, on the inMaxTimeDiff input. The value must be greater than the maximum frame receive time dispersion observed in the system, but less than the time between two consecutive trigger events. The diagnostic output diagCurrentTimeDiff can be used to determine the frame receive time dispersion in the target system: it returns the difference between the receive times of the frames currently returned on the outFrame outputs. Statistics can be gathered from this output during system tests to empirically determine the minimal acceptable value of the threshold.
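One possible way to turn such statistics into a threshold is sketched below. The function name, the safety margin factor, and the validation rule are illustrative assumptions, not part of the filter; only the two constraints (above the worst observed dispersion, below the trigger period) come from the description above.

```python
def suggest_max_time_diff(diag_samples_ms, trigger_period_ms, margin=2.0):
    """Suggest an inMaxTimeDiff value from diagCurrentTimeDiff samples.

    diag_samples_ms: time differences (ms) observed during a system test.
    The suggested threshold exceeds the worst observed dispersion by a
    safety margin, yet must stay below the interval between two
    consecutive trigger events.
    """
    worst = max(diag_samples_ms)
    candidate = worst * margin
    if candidate >= trigger_period_ms:
        raise ValueError(
            "observed dispersion too close to the trigger period; "
            "trigger events cannot be reliably separated"
        )
    return candidate
```

For instance, if the observed differences peak at 7 ms and triggers arrive once per second, a threshold of 14 ms satisfies both constraints.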

To select devices for the filter, set the appropriate inAddress inputs to their network addresses. At least the inAddress1 and inAddress2 inputs must be set; to use more devices, set the additional optional inAddress inputs. A device address is a textual identifier: an IP address, a MAC address, or a serial number. For information on address types and on selecting a device address, see the Device Manager section in the user manual.

The inPixelFormat input specifies the image pixel format to be set in all devices before acquisition starts. This input may be left empty to keep the formats configured in the cameras earlier. A pixel format is the textual name of an image pixel (color) format and must correspond to one of the format names supported by the device. Use the Device Manager (click the "..." button in the filter properties) to select the format name from the list of formats supported by the device. For information on selecting a pixel format, see the Device Manager section in the user manual.

The filter internally streams images in the selected device's pixel format. This format is then converted to the most appropriate application Image format, in such a way that no data is lost.

When the filter receives a valid set of frames from the cameras, all active outFrame outputs return valid images. When the timeout elapses before the first frame is received, all outFrame outputs return Nil. When the first frame arrives before the timeout but not all frames arrive before the maximum time difference elapses, some outFrame outputs return valid images and the rest return Nil.

For general information about working with GigE Vision devices, please refer to the following article.


  • Interactively select cameras available in your network by defining the inAddress inputs.
  • Choose inPixelFormat from those supported by your camera.
  • Determine and set a proper value for inMaxTimeDiff.
  • Set inTimeout to specify how many milliseconds the filter should wait for the incoming frames.


This filter can throw an exception to report an error. Read how to deal with errors in Error Handling.

List of possible exceptions:

Error type Description
RuntimeError Multiple cameras are not supported in Smart edition

Complexity Level

This filter is available on Basic Complexity Level.

See Also

  • Application Notes - set of documents about connecting devices or establishing communication with Adaptive Vision Studio.