Loads a deep learning model and prepares its execution on a specific target device.
|inModelDirectory||ClassifyObjectModelDirectory||A Classify Object model stored in a specific disk directory|
|inTargetDevice||DeviceType*||A device selected for deploying and executing the model. If not set, a device is selected automatically depending on the version (CPU/GPU) of the installed Deep Learning Add-on.|
|outModelId||ClassifyObjectModelId||Identifier of the deployed model|
- In most cases, this filter should be placed in the INITIALIZE section.
- Executing this filter may take several seconds.
- This filter should be connected to DL_ClassifyObject by linking its outModelId output to the corresponding inModelId input.
- You can edit the model directly through the inModelDirectory port. Alternatively, create the model in the Deep Learning Editor application and copy the path to it.
- Passing NIL as inTargetDevice (the default) is equivalent to passing DeviceType::CUDA in the GPU version of the Deep Learning Add-on, and to passing DeviceType::CPU in the CPU version of the Deep Learning Add-on.
- The GPU version of the Deep Learning Add-on supports both DeviceType::CUDA and DeviceType::CPU as the inTargetDevice value.
- The CPU version of the Deep Learning Add-on supports only DeviceType::CPU as the inTargetDevice value.
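The device-defaulting rules above can be summarized in a short sketch. This is illustrative pseudocode only, not the product's API: the function name `resolve_target_device` and the string values standing in for DeviceType and the Add-on build are assumptions made for the example.

```python
def resolve_target_device(in_target_device, addon_version):
    """Mirror the inTargetDevice rules: None stands in for NIL,
    addon_version is "GPU" or "CPU" (the installed Add-on build)."""
    if in_target_device is None:
        # NIL defaults to CUDA on the GPU build and CPU on the CPU build
        return "CUDA" if addon_version == "GPU" else "CPU"
    if addon_version == "CPU" and in_target_device != "CPU":
        # The CPU build supports only DeviceType::CPU
        raise ValueError(
            "CPU version of the Deep Learning Add-on supports only DeviceType::CPU")
    return in_target_device
```

For example, `resolve_target_device(None, "GPU")` yields `"CUDA"`, while requesting `"CUDA"` on a CPU build raises an error.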
This filter is available on Basic Complexity Level.
Disabled in Lite Edition
This filter is disabled in the Lite Edition. It is available only in the full Adaptive Vision Studio Professional version.
- DL_ClassifyObject – Executes a Classify Object model on a single input image.