Loads a deep learning model and prepares its execution on a specific target device.
void avl::DL_DetectFeatures_Deploy ( const avl::DetectFeaturesModelDirectory& inModelDirectory, const atl::Optional<avl::DeviceType::Type>& inTargetDevice, avl::DetectFeaturesModelId& outModelId )
|inModelDirectory||const DetectFeaturesModelDirectory&||A Detect Features model stored in a specific disk directory|
|inTargetDevice||const Optional<DeviceType::Type>&||NIL||A device selected for deploying and executing the model. If not set, the device is chosen according to the installed version (CPU/GPU) of the Deep Learning Add-on.|
|outModelId||DetectFeaturesModelId&||Identifier of the deployed model|
- In most cases, this filter should be placed in the INITIALIZE section.
- Executing this filter may take several seconds.
- This filter should be connected to DL_DetectFeatures through the ModelId ports.
- You can edit the model directly through inModelDirectory. Alternatively, create the model in the Deep Learning Editor application and copy the path to the resulting model directory.
- Passing NIL as inTargetDevice (the default) is identical to passing DeviceType::CUDA on the GPU version of the Deep Learning Add-on, and DeviceType::CPU on the CPU version.
- The GPU version of the Deep Learning Add-on supports both DeviceType::CUDA and DeviceType::CPU as the inTargetDevice value.
- The CPU version of the Deep Learning Add-on supports only DeviceType::CPU as the inTargetDevice value.
- DL_DetectFeatures – Executes a Detect Features model on a single input image.
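The deployment pattern described above can be sketched in C++ as follows. This is a minimal, illustrative sketch only: it assumes the Aurora Vision Library SDK is installed and linkable, the model directory path is hypothetical, and the exact header name and initialization call may differ between SDK versions.

```cpp
// Sketch only -- requires the Aurora Vision Library SDK and the
// Deep Learning Add-on; not compilable without them.
#include <AVL.h>  // assumed umbrella header for the avl/atl namespaces

int main()
{
    // Deploy once, as in the INITIALIZE section of a program.
    // This step may take several seconds.
    avl::DetectFeaturesModelId modelId;
    avl::DL_DetectFeatures_Deploy(
        avl::DetectFeaturesModelDirectory("C:\\Models\\MyModel"), // hypothetical path
        atl::NIL,   // NIL: device follows the installed Add-on version (CPU/GPU)
        modelId);

    // In the main loop, connect modelId to DL_DetectFeatures through
    // its ModelId port and execute it on each input image.

    return 0;
}
```

Deploying once and reusing the returned model identifier avoids repeating the expensive model-loading step on every image.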