
CreateGrayModel


Header: AVL.h
Namespace: avl
Module: MatchingBasic

Creates a model for NCC or SAD template matching.

Applications: Dynamic creation of models in the runtime environment (normally they are created interactively in Studio).

Syntax

C++
void avl::CreateGrayModel
(
	const avl::Image& inImage,
	atl::Optional<const avl::Region&> inTemplateRegion,
	atl::Optional<const avl::Rectangle2D&> inReferenceFrame,
	int inMinPyramidLevel,
	atl::Optional<int> inMaxPyramidLevel,
	float inMinAngle,
	float inMaxAngle,
	float inAnglePrecision,
	float inMinScale,
	float inMaxScale,
	float inScalePrecision,
	avl::GrayModel& outGrayModel,
	atl::Optional<avl::Point2D&> outGrayModelPoint = atl::NIL,
	atl::Array<avl::Image>& diagTemplatePyramid = atl::Dummy< atl::Array<avl::Image> >()
)

Parameters

|  | Name | Type | Range | Default | Description |
|---|---|---|---|---|---|
| Input value | inImage | const Image& |  |  | Image from which the model will be extracted |
| Input value | inTemplateRegion | Optional<const Region&> |  | NIL | Region of the image from which the model will be extracted |
| Input value | inReferenceFrame | Optional<const Rectangle2D&> |  | NIL | Exact position of the model object in the image |
| Input value | inMinPyramidLevel | int | 0 - 12 | 0 | Defines the index of the lowest reduced resolution level used to speed up computations |
| Input value | inMaxPyramidLevel | Optional<int> | 0 - 12 | NIL | Defines the number of reduced resolution levels used to speed up computations |
| Input value | inMinAngle | float |  | 0.0f | Start of the range of possible rotations |
| Input value | inMaxAngle | float |  | 0.0f | End of the range of possible rotations |
| Input value | inAnglePrecision | float | 0.001 - 10.0 | 1.0f | Defines the angular resolution of the matching process |
| Input value | inMinScale | float | 0.0 - | 1.0f | Start of the range of possible scales |
| Input value | inMaxScale | float | 0.0 - | 1.0f | End of the range of possible scales |
| Input value | inScalePrecision | float | 0.001 - 10.0 | 1.0f | Defines the scale resolution of the matching process |
| Output value | outGrayModel | GrayModel& |  |  | Created model that can be used by the LocateMultipleObjects_NCC and LocateMultipleObjects_SAD filters |
| Output value | outGrayModelPoint | Optional<Point2D&> |  | NIL | The middle point of the created model |
| Diagnostic output | diagTemplatePyramid | Array<Image>& |  |  | Visualization of the model at different resolution levels |

Optional Outputs

The computation of the following outputs can be switched off by passing the value atl::NIL to these parameters: outGrayModelPoint.

Read more about Optional Outputs.

Description

The operation creates a Gray Matching model of the object represented by the inTemplateRegion region of the inImage image. The resulting model can be matched against any image using the LocateMultipleObjects_NCC or LocateMultipleObjects_SAD filter.
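For the runtime scenario this filter targets, model creation can be sketched as follows. This is a minimal sketch assuming the AVL SDK headers are available; the file name "template.png" and all parameter values below are illustrative choices, not recommendations:

```cpp
#include "AVL.h"

// Sketch: create a gray model at runtime from a whole image,
// allowing rotations of up to +/-10 degrees and no scaling.
// The file name is hypothetical.
void CreateModelExample()
{
	avl::Image templateImage;
	avl::LoadImage("template.png", false, templateImage);

	avl::GrayModel model;
	avl::Point2D modelPoint;
	avl::CreateGrayModel(
		templateImage,
		atl::NIL,       // inTemplateRegion: use the whole image
		atl::NIL,       // inReferenceFrame: bounding box of the region
		0,              // inMinPyramidLevel
		atl::NIL,       // inMaxPyramidLevel: chosen automatically
		-10.0f, 10.0f,  // inMinAngle, inMaxAngle
		1.0f,           // inAnglePrecision
		1.0f, 1.0f,     // inMinScale, inMaxScale: scaling disabled
		1.0f,           // inScalePrecision
		model,
		modelPoint);

	// 'model' can now be passed to LocateMultipleObjects_NCC
	// or LocateMultipleObjects_SAD.
}
```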

The model consists of a pyramid of iteratively downsampled images, with the original image as the first of them. The inMaxPyramidLevel parameter determines how many additional pyramid images are computed. Greater inMaxPyramidLevel values can speed up the matching process considerably, but the parameter should be set so that the image at the highest pyramid level is not too distorted.
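This trade-off can be made concrete: each pyramid level halves the image, so a reasonable upper bound keeps the smallest pyramid image above some minimum size. The following standalone illustration uses a minimum-side threshold that is an assumption for the example, not the rule the library applies:

```cpp
#include <algorithm>

// Highest pyramid level at which a template of the given size still has
// at least minSide pixels on its shorter side (each level halves the image).
// The minSide threshold is illustrative, not the value used by the library.
int MaxUsablePyramidLevel(int width, int height, int minSide)
{
	int level = 0;
	int side = std::min(width, height);
	while (side / 2 >= minSide)
	{
		side /= 2;
		++level;
	}
	return level;
}
```

For example, a 200x120 template with an 8-pixel minimum side supports levels 0 through 3 (120 → 60 → 30 → 15; a further halving would drop below 8 pixels).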

The inMinAngle and inMaxAngle parameters describe the possible rotation angles of the model, i.e. only those object occurrences whose rotation angles lie in the range [inMinAngle, inMaxAngle] will later be found by LocateMultipleObjects_NCC (or LocateMultipleObjects_SAD). The inAnglePrecision parameter controls the angular resolution of the matching process: the model is created in several rotations, and the angles of consecutive rotations differ by a fixed angle step. This step is obtained by scaling an automatically computed base step according to inAnglePrecision, so the greater inAnglePrecision is, the finer the step, the greater the achievable accuracy, and the lower the chance of missing object occurrences. In practice, however, increasing inAnglePrecision above a certain threshold (unique for every object) does not increase the accuracy, only the computation time.
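One way to read this relationship is that the effective step equals an automatically derived base step divided by inAnglePrecision. The standalone sketch below follows that reading to show how the number of stored rotations grows with precision; both the formula and the base step value are illustrative assumptions, not the library's internals:

```cpp
#include <cmath>

// Number of model rotations covering [minAngle, maxAngle] when the
// effective step is baseStep / precision. The formula and the baseStep
// value are illustrative; the library derives its own step from the
// model geometry.
int RotationCount(float minAngle, float maxAngle, float baseStep, float precision)
{
	const float step = baseStep / precision;
	return static_cast<int>(std::floor((maxAngle - minAngle) / step)) + 1;
}
```

With a hypothetical base step of 2 degrees, a [-10, +10] degree range yields 11 rotations at precision 1.0 and 21 at precision 2.0: doubling the precision roughly doubles both the model size and the matching work.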

The inReferenceFrame parameter defines a characteristic rectangle whose position will be returned by LocateMultipleObjects_NCC (or LocateMultipleObjects_SAD) as an occurrence of the object. By default, it is set to the bounding box of inTemplateRegion.

Remarks

Read more about Local Coordinate Systems in Machine Vision Guide: Local Coordinate Systems.

Additional information about Template Matching can be found in Machine Vision Guide: Template Matching.

Hardware Acceleration

This operation supports automatic parallelization for multicore and multiprocessor systems.

Errors

List of possible exceptions:

| Error type | Description |
|---|---|
| DomainError | Incorrect scale range in CreateGrayModel. |
| DomainError | Minimal pyramid level cannot be greater than maximal pyramid level in CreateGrayModel. |
| DomainError | Region of interest exceeds an input image in CreateGrayModel. |

See Also

  • LocateSingleObject_NCC – Finds a single occurrence of a predefined template on an image by analysing the normalized correlation between pixel values.
  • LocateMultipleObjects_NCC – Finds all occurrences of a predefined template on an image by analysing the normalized correlation between pixel values.
  • LocateSingleObject_SAD – Finds a single occurrence of a predefined template on an image by analysing the Square Average Difference between pixel values.
  • LocateMultipleObjects_SAD – Finds multiple occurrences of a predefined template on an image by analysing the Square Average Difference between pixel values.