# DL_ClassifyObject_Deploy

Loads a deep learning model and prepares its execution on a specific target device.

### Syntax

```cpp
void avl::DL_ClassifyObject_Deploy
(
    const avl::ClassifyObjectModelDirectory&    inModelDirectory,
    const atl::Optional<avl::DeviceType::Type>& inTargetDevice,
    avl::ClassifyObjectModelId&                 outModelId
)
```


### Parameters

| Name | Type | Default | Description |
|------|------|---------|-------------|
| inModelDirectory | const ClassifyObjectModelDirectory& | | A Classify Object model stored in a specific disk directory |
| inTargetDevice | const Optional&lt;DeviceType::Type&gt;& | NIL | A device selected for deploying and executing the model. If not set, a device matching the installed Deep Learning Add-on version (CPU/GPU) is selected. |
| outModelId | ClassifyObjectModelId& | | Identifier of the deployed model |

### Hints

- In most cases, this filter should be placed in the INITIALIZE section.
- Executing this filter may take several seconds.
- This filter should be connected to DL_ClassifyObject through the ModelId ports.
- You can edit the model directly through inModelDirectory. Alternatively, use the Deep Learning Editor application and copy the path to the created model.
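The wiring described above can be sketched as follows. This is a minimal illustration only: it assumes the Aurora Vision Library SDK with the Deep Learning Add-on is installed, and the header name, model path, and the exact DL_ClassifyObject call site are assumptions, not taken from this page.

```cpp
// Sketch: assumes AVL headers are available; "AVL.h" and the model path
// are placeholders for your actual installation and model directory.
#include <AVL.h>

int main()
{
    // INITIALIZE stage: deploy once. Passing atl::NIL as inTargetDevice
    // lets the installed Add-on version (CPU or GPU) pick the device.
    avl::ClassifyObjectModelId modelId;
    avl::DL_ClassifyObject_Deploy(
        avl::ClassifyObjectModelDirectory("C:/models/my_classifier"),
        atl::NIL,
        modelId);

    // PROCESS stage: call DL_ClassifyObject repeatedly, passing modelId
    // through its ModelId port (its full signature is not shown here).

    return 0;
}
```

Deploying once and reusing the model identifier avoids repeating the multi-second model-loading cost on every iteration.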

### Remarks

- Passing NIL as inTargetDevice (the default) is equivalent to passing DeviceType::CUDA in the GPU version of the Deep Learning Add-on and DeviceType::CPU in the CPU version of the Add-on.
- The GPU version of the Deep Learning Add-on supports both DeviceType::CUDA and DeviceType::CPU as the inTargetDevice value.
- The CPU version of the Deep Learning Add-on supports only DeviceType::CPU as the inTargetDevice value.