WEAVER SDK is designed to be used as a part of C/C++ projects developed with Microsoft Visual Studio 2015-2019. Projects using WEAVER SDK must add:
- the %WEAVER_SDK_PATH5_3%\include path to Configuration Properties > C/C++ > General > Additional Include Directories,
- the %WEAVER_SDK_PATH5_3%\lib path to Configuration Properties > Linker > General > Additional Library Directories,
- WeaverApi.lib to Configuration Properties > Linker > Input > Additional Dependencies.
Every program using WEAVER SDK has to load DLL files from %WEAVER_SDK_PATH5_3%\bin\x64. Common ways to ensure that are:
- copying the contents of %WEAVER_SDK_PATH5_3%\bin\x64 next to the final executable file, or
- adding %WEAVER_SDK_PATH5_3%\bin\x64 to the PATH environment variable, or
- copying the contents of %WEAVER_SDK_PATH5_3%\bin\x64 to some directory already listed in the PATH environment variable.
Please note that in rare cases the last two options may lead, in the GPU version, to errors in other applications using CUDA.
The first option may be accomplished by adding xcopy "$(WEAVER_SDK_PATH5_3)\bin\$(PlatformName)" "$(OutDir)" /d /y to Configuration Properties > Build Events > Post-Build Event > Command Line.
A typical use of WEAVER SDK can be divided into several steps:
Deploying a model.
This is done by weaver_model_deploy (C API) or by the weaver::model constructor (C++ API). It requires a path to a Keras model file (saved with weights and architecture) and a target device, used later for running the model. Deploying the model allocates memory on the selected device for model weights and other data. It is also strongly encouraged to set the desired data orders of output tensors at this point. The order should be chosen depending on the "type" of the specific output tensor (e.g. if an image is expected, correct orders would be NCHW or NHWC; if a classification result is expected, the correct order would be NL) and on the algorithm used for parsing the output data. The desired data order can also be set or changed later with weaver_model_set_desired_output_order (C API) or the corresponding weaver::model method (C++ API).
Input data loading and preparing.
This step covers obtaining input data, which is outside the scope of WEAVER SDK. In many cases it is done with external libraries, like Aurora Vision Library or OpenCV. Generally speaking, it comes down to getting raw input data from a file, a camera or another source, and converting the data to a format (e.g. floating point numbers ranging from -1 to 1) expected by the specific network input.
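The conversion can be sketched with a small helper; the [-1, 1] target range is only an assumption here, as the expected range depends on how the specific model was trained, and normalize_to_unit_range is an illustrative name, not a WEAVER function:

```cpp
#include <cstdint>
#include <vector>

// Convert raw 8-bit pixel values (0..255) to floating point values
// in the range [-1, 1], which some network inputs expect.
std::vector<float> normalize_to_unit_range(const std::vector<std::uint8_t>& raw)
{
    std::vector<float> result;
    result.reserve(raw.size());
    for (std::uint8_t v : raw)
        result.push_back(static_cast<float>(v) / 127.5f - 1.0f);
    return result;
}
```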
Creating input tensors.
Input tensors are direct inputs for the model in WEAVER SDK. A tensor may own its memory or map some external (possibly read-only) memory. Please note that tensors do not support byte padding, which in many cases is added in image types from external libraries. A common way to ensure that the data handled by a tensor has no padding is to copy only the meaningful bytes from the source memory to the tensor data. Some external libraries may also provide a function for removing such padding.
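Copying only the meaningful bytes can be sketched as follows, assuming float pixel data and a row stride expressed in float elements; remove_row_padding is an illustrative helper, not part of the WEAVER API:

```cpp
#include <cstddef>
#include <cstring>
#include <vector>

// Copy an image stored with a row stride (possibly larger than
// width * channels) into tightly packed memory with no padding,
// row by row, skipping the padding bytes at the end of each row.
std::vector<float> remove_row_padding(const float* src, std::size_t height,
                                      std::size_t width, std::size_t channels,
                                      std::size_t stride_in_floats)
{
    std::vector<float> packed(height * width * channels);
    const std::size_t row_values = width * channels;
    for (std::size_t y = 0; y < height; ++y)
        std::memcpy(packed.data() + y * row_values,
                    src + y * stride_in_floats,
                    row_values * sizeof(float));
    return packed;
}
```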
Running the model.
This is done with weaver_model_run (C API) or weaver::model::run (C++ API). It performs calculations on the device selected during the model deployment. The first call may take longer than the following ones, as it allocates memory needed for executing layers. Output tensors are created during the call and do not need to be preallocated.
Parsing output tensors.
This step covers extracting meaningful data from the raw output data, which is outside the scope of WEAVER SDK. In most cases it comes down to finding the maximum among confidence values (in the case of classification) or converting the output tensor data to an image (e.g. in the case of autoencoders).
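For classification outputs in the NL order, parsing typically reduces to an argmax per image; argmax_per_image below is an illustrative helper, not a WEAVER function:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// For a batch of classification results stored in the NL order
// (batch_size * class_count values, tightly packed), return the
// index of the highest confidence value for each image.
std::vector<std::size_t> argmax_per_image(const std::vector<float>& data,
                                          std::size_t class_count)
{
    std::vector<std::size_t> best;
    for (std::size_t offset = 0; offset + class_count <= data.size();
         offset += class_count)
    {
        auto first = data.begin() + offset;
        best.push_back(std::max_element(first, first + class_count) - first);
    }
    return best;
}
```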
A tensor holds or maps data used as an input or an output for running a model. Currently, only one data type is supported: single-precision floating point numbers.
A tensor can be created in 3 ways:
- owning and managing its data: with weaver_tensor_create (C API) or weaver::tensor::tensor(const format&) (C++ API),
- mapping preallocated memory, allowing for reading and writing: with weaver_tensor_create_mapped_writable (C API) or weaver::tensor::tensor(const format&, map_writable_data) (C++ API),
- mapping preallocated memory, allowing for reading only: with weaver_tensor_create_mapped (C API) or weaver::tensor::tensor(const format&, map_readonly_data) (C++ API).
In the first case, creating a tensor allocates the required memory, which is deallocated during the tensor destruction. The second and third cases are very similar: they both require a pointer to preallocated memory, which is not deallocated by any operation on the tensor. Please note that the mapped memory must not be deallocated until the mapping tensor is destroyed. The only difference between these two cases is the set of operations allowed on the mapped memory, which is described more closely below.
Accessing the data managed by a tensor is separated into 2 operations:
- accessing the data for reading only, which is done by weaver_tensor_get_data (C API) or its C++ API equivalent,
- accessing the data for writing (and reading), which is done by weaver_tensor_get_writable_data (C API) or its C++ API equivalent.
| | Memory owning | Writable mapping | Read-only mapping |
|---|---|---|---|
| Getting data for reading | Ok, returns a pointer | Ok, returns a pointer | Ok, returns a pointer |
| Getting data for writing (and reading) | Ok, returns a pointer | Ok, returns a pointer | Error: returns the NULL pointer (C API) or throws a weaver::exception (C++ API) |
Please note that the functions from the C API may also fail when passed a corrupted (e.g. NULL) tensor.
Data order and dimensions
Data managed by a tensor is also described with two more parameters (after the type): data order and data dimensions. These two are closely related. The currently supported data orders are NL, NHWC and NCHW, where N is the batch size, L is the length of linear data (e.g. confidence values), H is the height of image-like data, W is the width of image-like data and C is the depth (number of channels) of image-like data. The order of the letters goes from the outermost dimension (which changes the least frequently) to the innermost one (which changes the most frequently). Tensor dimensions are a list of sizes of each dimension, ordered by the data order.
The NL data order is used mostly for classification results, as it denotes tensors containing some linear data, like confidence values. This data order means that memory holds L values, then another L values, and so on, N times. For example, classification results of 3 images into 5 classes, grouped with the NL data order, would yield a tensor with dimensions 3, 5, holding 15 values: 5 confidence values for the first image, then 5 values for the second one and the last 5 values for the third one.
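Under this layout, the confidence values for image n simply start at offset n * L; a trivial helper (nl_offset is an illustrative name, not part of the API):

```cpp
#include <cstddef>

// Offset of the first of the L confidence values belonging to
// image n in a tightly packed NL tensor.
std::size_t nl_offset(std::size_t n, std::size_t L)
{
    return n * L;
}
```

For the example above (3 images, 5 classes), the third image's values occupy offsets 10 to 14.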
The NHWC and NCHW data orders are used mostly for image-like tensors. The first one is similar to images stored in an “interleaved” format, where a single pixel holds values for all channels. The second one is similar to images stored in a “planar” format, where a single image is composed of multiple subsequent “subimages”, one for each channel.
For example, let us assume some sample data consisting of 2 images with 3 channels (red-green-blue, for easy referencing), height equal to 5 and width equal to 7. This data in the NHWC data order would yield a tensor with dimensions 2, 5, 7, 3, holding 210 values. The first 3 values would be the red, green and blue channel values for the top left pixel of the first image. The subsequent 3 values would be the red, green and blue channel values for the second pixel in the first row of the first image, and so on. After the 21 values composing the first row of the first image, another 21 values would be placed, composing the second row, and so on. After the 105 values composing the first image, the second image's data is placed, starting from its top left pixel.
The same data in the NCHW data order would yield a tensor with dimensions 2, 3, 5, 7, holding 210 values. The first 35 values would compose an “image” containing the values of the red channel at each coordinate of the first image, row after row. The subsequent 35 values would compose a similar “image” for the green channel, and a further 35 values would compose an analogous “image” for the blue channel. After these 105 values, which compose the first image, the second image's data is placed, starting from its red channel.
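The two layouts correspond to the following flat-index formulas; nhwc_index and nchw_index are illustrative helpers, not WEAVER functions:

```cpp
#include <cstddef>

// Flat index of value (n, h, w, c) in a tightly packed NHWC tensor
// with dimensions N, H, W, C: the channel index changes fastest.
std::size_t nhwc_index(std::size_t n, std::size_t h, std::size_t w,
                       std::size_t c, std::size_t H, std::size_t W,
                       std::size_t C)
{
    return ((n * H + h) * W + w) * C + c;
}

// Flat index of the same value in the NCHW order: the width index
// changes fastest, and whole channel planes are stored in turn.
std::size_t nchw_index(std::size_t n, std::size_t h, std::size_t w,
                       std::size_t c, std::size_t H, std::size_t W,
                       std::size_t C)
{
    return ((n * C + c) * H + h) * W + w;
}
```

For the example above (H = 5, W = 7, C = 3), the green value of the first image's top left pixel sits at index 1 in NHWC, but at index 35 (the start of the green plane) in NCHW; in both orders the second image starts at index 105.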
Please note that tensors currently do not support byte padding and assume that all values are tightly packed in memory, without any gaps. On the other hand, such padding is common in many external libraries related to image processing. Removing the padding may be done by copying only the meaningful data (without the padding) from the source to a tensor, or by using a specialized function from the library.
Once the application is ready, it is time to prepare a distribution package or an installer. There are several requirements that need to be fulfilled:
- The final executable file of the application needs to have access to the contents of %WEAVER_SDK_PATH5_3%\bin\x64. Common ways to ensure that are described in Project Configuration.
- The computer that the application will run on needs a valid license for the use of the Aurora Vision WEAVER SDK product. Licenses can be managed with the License Manager application, which is installed with Aurora Vision WEAVER SDK.
- A license file (*.avkey) can also be copied manually to the end user's machine without installing Aurora Vision WEAVER SDK. It must be placed in a subdirectory of the AppData system folder. The typical location for the license file is C:\Users\%USERNAME%\AppData\Local\Aurora Vision\Licenses. Remember that the license is valid per machine, so every computer that runs the application needs a separate license file.
- As an alternative to (*.avkey) files, USB Dongle licenses are supported.
Errors in WEAVER SDK are reported with return values (C API) or with weaver::exception exceptions (C++ API). Possible errors are described more closely in the comments to the weaver_status_t enumeration values in WeaverApi.h. In most cases, additional information is also available. It can be retrieved with the weaver_get_last_error_info() function (C API) or the weaver::exception::what() method (C++ API). It should help in solving problems.
Please note that the additional information may be partially encrypted. If the public part does not help in solving an issue, please contact Aurora Vision Support.