DL_SegmentInstances_Deploy


Header: AVLDL.h
Namespace: avl
Module: DeepLearning

Loads a deep learning model and prepares its execution on a specific target device.

Syntax

void avl::DL_SegmentInstances_Deploy
(
	const avl::SegmentInstancesModelDirectory& inModelDirectory,
	const atl::Optional<avl::DeviceKind::Type>& inDeviceType,
	const int inDeviceIndex,
	const atl::Optional<int>& inMaxObjectsCountHint,
	avl::SegmentInstancesModelId& outModelId
)

Parameters

  • inModelDirectory (input) — const SegmentInstancesModelDirectory&
    A Segment Instances model stored in a specific disk directory.
  • inDeviceType (input) — const Optional<DeviceKind::Type>&, default: NIL
    The type of device selected for deploying and executing the model. If not set, a device is chosen depending on the installed version (CPU/GPU) of the Deep Learning add-on.
  • inDeviceIndex (input) — const int, range: ≥ 0, default: 0
    An index of the device selected for deploying and executing the model.
  • inMaxObjectsCountHint (input) — const Optional<int>&, default: NIL
    Prepares the model for execution with a specific inMaxObjectsCount.
  • outModelId (output) — SegmentInstancesModelId&
    Identifier of the deployed model.

Hints

  • In most cases, this filter should be placed in the INITIALIZE section.
  • Executing this filter may take several seconds.
  • This filter should be connected to DL_SegmentInstances through the ModelId ports.
  • You can edit the model directly through inModelDirectory. Alternatively, use the Deep Learning Editor application and copy the path to the created model.
  • If any subsequent DL_SegmentInstances filter using the deployed model has inMaxObjectsCount set to a not-NIL value, it is advisable to set inMaxObjectsCountHint to the maximum of the values assigned to that parameter. Following this guideline ensures optimal memory usage and avoids a performance hit on the first call to DL_SegmentInstances.
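The hints above can be sketched as follows. This is a minimal illustration, not part of the official documentation: the model path and the object-count value are assumptions, and the construction of SegmentInstancesModelDirectory from a string is likewise assumed.

```cpp
#include "AVLDL.h"

// INITIALIZE section: deploy the model once (this may take several seconds).
avl::SegmentInstancesModelId modelId;
avl::DL_SegmentInstances_Deploy(
	avl::SegmentInstancesModelDirectory("C:\\Models\\Instances"), // assumed path
	atl::NIL,                // inDeviceType: let the installed add-on pick CPU/GPU
	0,                       // inDeviceIndex: first device of the selected kind
	atl::Optional<int>(10),  // inMaxObjectsCountHint: maximum inMaxObjectsCount used later
	modelId);

// PROCESS section: pass modelId to DL_SegmentInstances through the ModelId ports,
// keeping its inMaxObjectsCount at or below the hint (10 here).
```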

Remarks

  • Passing NIL as inDeviceType (which is the default) is identical to passing DeviceKind::CUDA on the GPU version of the Deep Learning add-on and DeviceKind::CPU on the CPU version of the Deep Learning add-on.
  • The GPU version of the Deep Learning add-on supports both DeviceKind::CUDA and DeviceKind::CPU as the inDeviceType value.
  • The CPU version of the Deep Learning add-on supports only DeviceKind::CPU as the inDeviceType value.
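For example, forcing CPU execution works with both versions of the add-on. This is a sketch under the same assumptions as above (illustrative model path, assumed string construction of the directory type):

```cpp
avl::SegmentInstancesModelId modelId;
avl::DL_SegmentInstances_Deploy(
	avl::SegmentInstancesModelDirectory("C:\\Models\\Instances"), // assumed path
	avl::DeviceKind::CPU,  // accepted by both the GPU and CPU versions of the add-on
	0,                     // inDeviceIndex
	atl::NIL,              // no inMaxObjectsCountHint
	modelId);
```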

See Also