Ensenso_GrabSurface


Header: ThirdPartySdk.h
Namespace: avl

Captures a Surface using Ensenso.

Syntax

bool avl::Ensenso_GrabSurface
(
	Ensenso_State& ioState,
	const atl::Array<avl::EnsensoCameraInformation>& inDevices,
	const atl::Optional<atl::File>& inParametersFile,
	avl::Surface& outSurface
)

Parameters

Name              Direction         Type                                     Default  Description
ioState           input (modified)  Ensenso_State&                                    Object used to maintain the state of the function.
inDevices         input             const Array<EnsensoCameraInformation>&            Structures identifying the devices.
inParametersFile  input             const Optional<File>&                    NIL      Initial global parameters.
outSurface        output            Surface&                                          Captured Surface.

Remarks

Initial parameters

Initial parameters are applied only when capture starts. To change parameters later, either restart the stream or use the appropriate Set/Get filters.

Settings

To obtain settings from the camera:

  • From the NxView Parameters window:
    • Launch NxView.
    • Open the camera.
    • Open the Parameters window (menu Capture -> Parameters...).
    • Adjust the settings as desired.
    • Use the Save... button.
  • From the NxTreeEdit application:
    • Either:
      • Launch Aurora Vision Studio, add an Ensenso_GrabPoint3DGrid filter, and run it, or
      • Launch NxView and open the camera.
    • Launch NxTreeEdit and connect to the desired instance.
    • Adjust the settings as desired, either in NxView or in NxTreeEdit.
    • Right-click /Cameras/BySerialNo/WantedSerialNumber and select Copy value as JSON string.
    • Save the copied value to a plain text file using a text editor.

The settings include all camera parameters, including the Link, Calibration and Parameters nodes. The saved file can then be used in the inCalibrationFile and inSettingsFile arguments.

To obtain the global parameters, follow the NxTreeEdit steps above, but save the global /Parameters node instead.

Surface, Point3DGrid, Point color map

If the global /Parameters/RenderPointMap settings node is set properly (e.g. by using the inParametersFile parameter), the output data will change:

  • To obtain a Surface, RenderPointMap should contain a ViewPose node; without it, a Point3DGrid (a perspective projection from the camera) is output instead.
  • To obtain a Point3DGrid, do not set the ViewPose parameter, and use only one stereoscopic camera.
  • To obtain Images, use the Surface output and make sure RenderPointMap/Texture is set to true.

Currently, only the RenderPointMap node is applied from the global Parameters. Other settings may be applied in future revisions.
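
As an illustration, a parameters file for inParametersFile could contain just the RenderPointMap subtree. The sketch below is hypothetical: only the node names mentioned above (RenderPointMap, Texture, ViewPose) are taken from this page, and the exact tree layout and value types should be copied from NxTreeEdit rather than typed by hand. The ViewPose subtree is omitted here because its contents are camera-specific.

```json
{
  "Parameters": {
    "RenderPointMap": {
      "Texture": true
    }
  }
}
```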

Mixing Grab filters

Internally, data is processed in frames. A frame can contain a Surface, an Image and a Point3DGrid at the same time, or any subset of them. A Grab filter takes the matching data from the current frame; if no such data is available in the current frame, the frame is discarded and the next one is taken. This process repeats until data is found or the filter times out.

If the camera is configured to grab a Surface and an Image, each frame will contain a Surface and an Image, but no Point3DGrid. Using GrabPoint3DGrid in that situation will never output data, because no frame contains a Point3DGrid. Two consecutive GrabSurface calls will discard one Image: the second GrabSurface finds no Surface in the current frame (it was already consumed), so it discards that frame, Image included, and looks in the next frame.

Camera identification

When only one Ensenso camera is connected, the inDevices field can be set to Auto; the first available camera will then be used.

When multiple cameras are connected to the computer, inDevices can be used to pick one or more of them.

Multithreaded environment

This function is not guaranteed to be thread-safe. When used in a multithreaded environment, it has to be synchronized manually.

See Also