Deep Learning Training API
Table of contents:
- Overview
- Namespaces
- Classes and Types
- Functions
- Handling Events
- Usage Example
- JSON Configuration Example
- Best Practices
- Limitations and Notes
Overview
The Deep Learning API provides a comprehensive framework for training and deploying Deep Learning models, focused on feature detection and anomaly detection tasks. It offers an object-oriented interface that simplifies configuring and managing Deep Learning operations. The whole API declaration is located in the Api.h file under the avl::DeepLearning namespace.
Namespaces
avl::DeepLearning: Main namespace containing all public-facing classes and types.
Classes and Types
- Detect Features Training: avl::DeepLearning::DetectFeaturesTraining
- Anomaly Detection 2 Similarity Based Training: avl::DeepLearning::AnomalyDetection2SimilarityBasedTraining
These are the primary classes that users should interact with for feature detection or anomaly detection training. They are built on top of the TrainingBase class and offer specialized methods and properties for configuring and managing feature detection and anomaly detection workflows.
Constructors
DetectFeaturesTraining();
AnomalyDetection2SimilarityBasedTraining();
Configuration Methods
Training configuration can be performed in two ways:
- Via Set Methods: Configuration can be done using methods like SetDevice, SetNetworkDepth, and other Set* methods. If a specific Set* method is not called, the default value is used (see the sketch after this list).
- Via JSON File: Use the ParseConfigFromFile method to load configuration from a JSON file.
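For illustration, a minimal sketch of the Set* approach is shown below; the parameter types of SetDevice and SetNetworkDepth are assumed from the method names above, so verify the exact signatures in Api.h.
avl::DeepLearning::DetectFeaturesTraining training;
// Configure via Set* methods; anything not set keeps its default value.
// The parameter types below are assumed, not copied from Api.h.
training.SetDevice(avl::DeepLearning::DeviceType::CUDA);
training.SetNetworkDepth(3);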
Enums
- SetType: Specifies the dataset role (Train, Valid, Test, Unknown).
- DeviceType: Defines the hardware device for training (CUDA, CPU).
- ModelTypeId: Identifies the model type (e.g., DetectFeatures, AnomalyDetection2SimilarityBased).
Functions
ParseConfigFromFile
Loads configuration from a JSON file.
void ParseConfigFromFile(const atl::String& jsonConfigFilePath);
An example of a JSON configuration file is shown in the JSON Configuration Example section below.
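A minimal usage sketch; the file path is illustrative and the configuration keys are those shown in the JSON Configuration Example section:
avl::DeepLearning::DetectFeaturesTraining training;
// Load the whole training configuration from a JSON file.
training.ParseConfigFromFile("C:/Configs/detect_features_config.json");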
SetSample
Sets an input sample and its annotation for the training, validation, or test dataset.
Method Signatures
DetectFeaturesTraining
void SetSample(const atl::String& imageFilePath, const Annotation& annotation, SetType type, atl::Optional<const avl::Region&> roi);
void SetSample(const atl::String& imageFilePath, const Annotation& annotation, SetType type);
AnomalyDetection2SimilarityBasedTraining
void SetSample(const atl::String& imageFilePath, const Annotation& annotation, SetType type, atl::Optional<const avl::Region&> roi = atl::NIL);
Parameters
- imageFilePath: Path to the image file to add to the dataset.
- annotation: Annotation object containing a class name and optional region data.
- type: Dataset type (SetType::Train, SetType::Valid, SetType::Test).
- roi: Optional region of interest that limits processing to a specific area of the image.
Annotation
For DetectFeatures, the annotation must include both a class name and region data that specifies the feature's location.
Annotation(const atl::String& className, const avl::Region& data);
For AnomalyDetection2SimilarityBased, the annotation must include a class name. Only the class names "Good" and "Bad" are supported (both must start with a capital letter). If any other class name is used, the deep learning training service will generate an error.
Annotation(const atl::String& className);
Important: Please ensure that samples from both the "Good" and "Bad" classes are included in the training, validation, and test datasets. This is essential for calculating the threshold once training is complete.
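A sketch of adding samples for both workflows; the file paths are illustrative, the region is assumed to be prepared elsewhere, and the namespace qualification of Annotation and SetType is assumed.
// Feature detection: each sample needs a class name and a region mask.
avl::DeepLearning::DetectFeaturesTraining featureTraining;
avl::Region threadRegion; // assumed to already mark the feature's pixels
featureTraining.SetSample("C:/Data/image_001.png",
                          avl::DeepLearning::Annotation("thread", threadRegion),
                          avl::DeepLearning::SetType::Train);
// Anomaly detection: only the class name is required and it must be exactly
// "Good" or "Bad"; include both classes in every dataset split.
avl::DeepLearning::AnomalyDetection2SimilarityBasedTraining anomalyTraining;
anomalyTraining.SetSample("C:/Data/good_001.png",
                          avl::DeepLearning::Annotation("Good"),
                          avl::DeepLearning::SetType::Train);
anomalyTraining.SetSample("C:/Data/bad_001.png",
                          avl::DeepLearning::Annotation("Bad"),
                          avl::DeepLearning::SetType::Valid);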
StartTraining
Begins the training process.
void StartTraining();
SaveModel
Saves the trained model to disk in two formats:
- A model state file (.pte) for internal use and training continuation
- A Weaver model file (.avdlmodel) for deployment in applications
Method Signatures
void SaveModel(const atl::Optional<atl::String>& modelDirectoryPath = atl::NIL, const bool overwritePreviousModel = false);
void SaveModel(const char* modelDirectoryPath, const bool overwritePreviousModel = false);
Parameters
- modelDirectoryPath: Optional path to a directory where the model files will be saved. If not provided (default), models are saved in the default directory: [current working directory]/Model/models/
- overwritePreviousModel: When set to true, any existing model files at the destination will be overwritten. When false (default) and model files already exist, an error is raised.
Helper Methods
After saving, you can retrieve the exact paths to the saved model files using:
- GetModelStateFilePath(): Returns the path to the .pte model state file
- GetModelWeaverFilePath(): Returns the path to the .avdlmodel Weaver model file
Usage Examples
// 1. Save to default location:
training.SaveModel();
// 2. Save to default location and overwrite existing files:
training.SaveModel(atl::NIL, true);
// 3. Save to custom location:
training.SaveModel("C:/My/Models/Path");
// 4. Save to custom location and overwrite existing files:
training.SaveModel("C:/My/Models/Path", true);
// 5. Get saved file paths:
std::cout << "Model State (.pte) saved to: " << training.GetModelStateFilePath().CStr8() << std::endl;
std::cout << "Weaver Model (.avdlmodel) saved to: " << training.GetModelWeaverFilePath().CStr8() << std::endl;
LoadModel
Loads a previously saved model state (.pte) file for inference.
Method Signatures
void LoadModel(const atl::String& modelFilePath);
void LoadModel(const char* modelFilePath);
Parameters
- modelFilePath: Path to the model state file (.pte) that will be loaded for inference operations.
Functionality
Loading a model allows you to:
- Perform inference on new images using a trained model
Usage Examples
// 1. Load a specific model file:
training.LoadModel("C:/My/Models/Path/model.pte");
// 2. Load using the path from a previous save operation (PTE file):
training.SaveModel(); // Save first
training.LoadModel(training.GetModelStateFilePath()); // Load the saved model
Important Notes
- This method loads only the model state (.pte) file used for training and inference within this API
- The Weaver model (.avdlmodel) files created by SaveModel() are for deployment in production applications
- After loading, the model is immediately ready for inference with InferAndGrade()
GetModelStateFilePath & GetModelWeaverFilePath
Helper methods to retrieve the paths of saved model files.
Method Signatures
atl::String GetModelStateFilePath();
atl::String GetModelWeaverFilePath();
Return Values
- GetModelStateFilePath(): Returns the full path to the saved model state file (.pte)
- GetModelWeaverFilePath(): Returns the full path to the saved Weaver model file (.avdlmodel)
Usage
These methods can be called only after SaveModel() to get the exact file paths where the models were saved:
training.SaveModel("./MyModels");
std::cout << "PTE model saved to: " << training.GetModelStateFilePath().CStr8() << std::endl;
std::cout << "Weaver model saved to: " << training.GetModelWeaverFilePath().CStr8() << std::endl;
InferAndGrade
Performs inference and grades the results. If the InferResultReceived method is overridden, it is invoked during the inference process.
void InferAndGrade(
const atl::String& imageFilePath,
const Annotation& annotation,
const atl::Optional<avl::Region>& roi = atl::NIL,
const atl::Optional<atl::Array<atl::String>>& setNames = atl::NIL);
Parameters
- imageFilePath: Path to the image for inference.
- annotation: Annotation with a class name (and optional region) used for grading or context.
- roi (optional): Region of interest. When omitted or set to atl::NIL, the full image is used.
- setNames (optional): Logical grouping/tag list for the evaluation summary (e.g., custom test subsets).
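A minimal call sketch, assuming a trained or loaded AnomalyDetection2SimilarityBasedTraining instance named training; the path and class name are illustrative:
// Grade a new image against its expected class; atl::NIL keeps the full
// image (no ROI) and skips the optional set-name grouping.
training.InferAndGrade("C:/Data/test_001.png", Annotation("Good"), atl::NIL, atl::NIL);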
GetTrainingThreshold
Returns the calculated threshold value for AnomalyDetection2SimilarityBasedTraining. This method is typically used in custom inference logic to compare against inference scores.
float GetTrainingThreshold();
Note: This method is primarily used for AnomalyDetection2SimilarityBasedTraining to obtain the threshold calculated during training.
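A hedged sketch of such custom logic, assuming an AnomalyDetection2SimilarityBasedTraining instance named anomalyTraining whose training has completed; the score value is a placeholder for a result delivered by an InferResultReceived override, and the comparison direction is an assumption to verify for your model.
const float threshold = anomalyTraining.GetTrainingThreshold();
const double score = 0.73; // placeholder for a real inference score
// Assumed convention: a score above the training threshold marks an anomaly.
const bool isAnomalous = score > threshold;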
SetLogCallback
Sets a callback function to receive log messages during training and inference operations.
void SetLogCallback(std::function<void(const atl::String&)> callback);
Parameters
- callback: Function that will be called with log messages from the training process.
Usage Example
auto myCallback = [](const atl::String& msg) {
std::cout << "Log: " << msg.CStr8() << std::endl;
};
training.SetLogCallback(myCallback);
Handling Events
To communicate with the user during training and inference, several events are available:
- TrainingProgressReceived(int currentIteration, int totalIterations, double loss, double trainMetric, double validationMetric): Called to update progress during training.
- InferResultReceived(const atl::Array<avl::Image>&): Invoked when inference results are available for DetectFeaturesTraining.
- InferResultReceived(const atl::String& sampleFilePath, const atl::String& sampleClassName, const atl::Array<avl::Image>&, const atl::Array<double>&): Invoked when inference results are available for AnomalyDetection2SimilarityBasedTraining.
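A sketch of a derived class overriding these events; the void return types and the override specifier are assumed, since the base-class declarations are not reproduced here.
class MyDetectFeaturesTraining : public avl::DeepLearning::DetectFeaturesTraining
{
public:
    // Report progress after each training iteration.
    void TrainingProgressReceived(int currentIteration, int totalIterations,
                                  double loss, double trainMetric,
                                  double validationMetric) override
    {
        std::cout << "Iteration " << currentIteration << "/" << totalIterations
                  << " loss=" << loss
                  << " train=" << trainMetric
                  << " valid=" << validationMetric << std::endl;
    }

    // Handle result images produced during inference.
    void InferResultReceived(const atl::Array<avl::Image>& resultImages) override
    {
        // Inspect, grade, or store the result images here.
        std::cout << "Inference results received." << std::endl;
    }
};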
Usage Example
An example demonstrating how to use the Deep Learning Training API for feature detection is included with the installer. This example covers key concepts such as class inheritance, training progress tracking, inference result handling, and sample data for demonstration purposes.
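For orientation, a condensed sketch of such a flow is shown below (this is not the installer example); the configuration file, image paths, class name, and region preparation are illustrative.
avl::DeepLearning::DetectFeaturesTraining training;
// Configure from a JSON file and subscribe to log messages.
training.ParseConfigFromFile("detect_features_config.json");
training.SetLogCallback([](const atl::String& msg) { std::cout << msg.CStr8() << std::endl; });
// Add annotated samples (the region is assumed to be prepared elsewhere).
avl::Region threadRegion;
training.SetSample("image_001.png", avl::DeepLearning::Annotation("thread", threadRegion), avl::DeepLearning::SetType::Train);
training.SetSample("image_002.png", avl::DeepLearning::Annotation("thread", threadRegion), avl::DeepLearning::SetType::Valid);
// Train, save both model files, and run inference on a new image.
training.StartTraining();
training.SaveModel();
training.InferAndGrade("image_003.png", avl::DeepLearning::Annotation("thread", threadRegion));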
JSON Configuration Example
Below is an example of a JSON configuration file for a feature detection model:
{
"device": "cuda",
"device_id": 0,
"is_continuation": false,
"network_depth": 3,
"iterations": 2,
"min_number_of_tiles": 6,
"need_to_convert_samples": false,
"stop.training_time_s": 0,
"stop.validation_value": 0.0,
"stop.stagnant_iterations": 0,
"feature_size": 96,
"aug.rotation": 0.0,
"aug.scale.min": 1.0,
"aug.scale.max": 1.0,
"aug.shear.vertical": 0.0,
"aug.shear.horizontal": 0.0,
"aug.flip.vertical": false,
"aug.flip.horizontal": false,
"aug.noise": 2.0,
"aug.blur": 0,
"aug.luminance": 0.04,
"aug.contrast": 0.0,
"to_grayscale": false,
"downsample": 2,
"is_mega_tiling": false,
"mega_tile_size": 128,
"class_names": "thread",
"adv.class_names_sep": ";"
}
Best Practices
- Use DetectFeaturesTraining for feature detection tasks instead of directly using TrainingBase.
- Extend DetectFeaturesTraining for custom behavior during training.
- Ensure balanced datasets for training and validation.
- Use callback methods to monitor training progress.
Limitations and Notes
- The ExportQuantizedModel method is not supported for DetectFeaturesTraining.
- Configuration can be done through property setters or by loading a JSON configuration file.
- SaveModel() creates two files: a .pte file for training/inference and a .avdlmodel file for deployment.
- LoadModel() only loads .pte files for inference operations within this API.
- Weaver model files (.avdlmodel) are intended for deployment in production applications, not for loading back into the training API.
- Provide an atl::Optional<avl::Region> ROI to limit the inference processing area; pass atl::NIL (or omit the parameter) to use the full image.
- An Annotation without a region is valid for tasks that don't require pixel masks.
