CreateEdgeModel


Module: MatchingPro

Creates a model for edge-based template matching.

Applications

  • Dynamic creation of models in the runtime environment (normally they are created interactively in Studio).

Parameters

Name               | Type         | Range        | Description
-------------------|--------------|--------------|------------
inImage            | Image        |              | Image from which the model will be extracted
inTemplateRegion   | Region*      |              | Region of the image from which the model will be extracted
inReferenceFrame   | Rectangle2D* |              | Exact position of the model object in the image
inMinPyramidLevel  | Integer      | 0 - 12       | Defines the index of the lowest reduced resolution level used to speed up computations
inMaxPyramidLevel  | Integer*     | 0 - 12       | Defines the number of reduced resolution levels used to speed up computations
inSmoothingStdDev  | Real         | 0.0 - ∞      | Standard deviation of the Gaussian smoothing applied before edge extraction
inEdgeThreshold    | Real         | 0.0 - ∞      | Higher threshold for edge magnitude
inEdgeHysteresis   | Real         | 0.0 - ∞      | Threshold hysteresis value for edge magnitude
inMinAngle         | Real         |              | Start of the range of possible rotations
inMaxAngle         | Real         |              | End of the range of possible rotations
inAnglePrecision   | Real         | 0.001 - 10.0 | Defines the angular resolution of the matching process
inMinScale         | Real         | 0.0 - ∞      | Start of the range of possible scales
inMaxScale         | Real         | 0.0 - ∞      | End of the range of possible scales
inScalePrecision   | Real         | 0.001 - 10.0 | Defines the scale resolution of the matching process
inEdgeCompleteness | Real         | 0.01 - 1.0   | Determines what fraction of the edges will be present in the created model
outEdgeModel       | EdgeModel?   |              | Created model that can be used by LocateMultipleObjects_Edges
outEdgeModelPoint  | Point2D?     |              | The middle point of the created model
diagEdges          | PathArray?   |              | Visualization of the model edges found at the original resolution
diagEdgePyramid    | ImageArray?  |              | Visualization of the edges found at different resolution levels

Description

The operation creates an Edge Matching model for the object represented by the inTemplateRegion region of the inImage image. The resulting model can be matched against any image using the LocateMultipleObjects_Edges filter.

The model consists of a pyramid of iteratively downsampled images, the original image being the first of them. The inMaxPyramidLevel parameter determines how many additional images of the pyramid are computed. Its value has a great influence on computation speed, so it is highly recommended to set it as high as possible, while ensuring that the model edges lie at least 2^inMaxPyramidLevel pixels away from the inImage frame. However, if it is set too high and no model edges are found on some pyramid level, an error with an appropriate description occurs.
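
For illustration, the following C++ sketch (not the MatchingPro implementation) applies the margin rule described above: given an axis-aligned bounding box of the template edges, it finds the highest pyramid level, capped at the requested inMaxPyramidLevel, for which the edges stay at least 2^level pixels away from the image frame. The Box type and the function name are stand-ins introduced for this sketch.

```cpp
#include <algorithm>

// Hypothetical axis-aligned bounding box of the template edges (not a library type).
struct Box { int x, y, width, height; };

// Highest pyramid level (capped at requestedMaxLevel) for which the template
// edges stay at least 2^level pixels away from the image frame.
int MaxSafePyramidLevel(const Box& edgesBox, int imageWidth, int imageHeight,
                        int requestedMaxLevel)
{
    // Smallest distance from the edges to any side of the image.
    const int margin = std::min(
        std::min(edgesBox.x, edgesBox.y),
        std::min(imageWidth  - (edgesBox.x + edgesBox.width),
                 imageHeight - (edgesBox.y + edgesBox.height)));

    int level = 0;
    while (level < requestedMaxLevel && (1 << (level + 1)) <= margin)
        ++level;  // level + 1 still satisfies the 2^level distance requirement
    return level;
}
```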

The inEdgeThreshold and inEdgeHysteresis parameters control the hysteresis threshold (as in ThresholdImage_Hysteresis) used in the edge extraction phase of model creation (as in DetectEdges_AsRegion). Setting these parameters properly is an important part of using the filter, because only the detected edge pixels determine how well the later matching process will work. The diagEdges and diagEdgePyramid outputs are therefore crucial when experimenting: they show the edges found in the model image and the edge pixels found at different resolution levels, respectively.
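
To make the roles of the two thresholds concrete, here is a minimal hysteresis-thresholding sketch operating on a plain edge-magnitude buffer. It assumes that the lower threshold equals inEdgeThreshold - inEdgeHysteresis and uses 4-connectivity; the filter's actual edge extraction may differ in these details.

```cpp
#include <stack>
#include <vector>

// Marks edge pixels in a width x height magnitude map using hysteresis:
// pixels above edgeThreshold are strong seeds; pixels above
// (edgeThreshold - edgeHysteresis) are kept only if connected to a seed.
std::vector<bool> HysteresisEdges(const std::vector<float>& magnitude,
                                  int width, int height,
                                  float edgeThreshold, float edgeHysteresis)
{
    const float low = edgeThreshold - edgeHysteresis;
    std::vector<bool> edge(magnitude.size(), false);
    std::stack<int> seeds;

    for (int i = 0; i < width * height; ++i)
        if (magnitude[i] >= edgeThreshold) { edge[i] = true; seeds.push(i); }

    const int dx[] = { -1, 1, 0, 0 }, dy[] = { 0, 0, -1, 1 };
    while (!seeds.empty())
    {
        const int i = seeds.top(); seeds.pop();
        const int x = i % width, y = i / width;
        for (int k = 0; k < 4; ++k)
        {
            const int nx = x + dx[k], ny = y + dy[k];
            if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
            const int j = ny * width + nx;
            if (!edge[j] && magnitude[j] >= low) { edge[j] = true; seeds.push(j); }
        }
    }
    return edge;
}
```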

The inMinAngle and inMaxAngle parameters describe the possible rotation angles of the model, i.e. LocateMultipleObjects_Edges will later find only those object occurrences whose rotation angles lie in the range <inMinAngle, inMaxAngle>. The inAnglePrecision parameter controls the angular resolution of the matching process. The model is created in several rotations, and the angles of consecutive rotations differ from each other by a fixed angle step. This step is derived from an automatically computed base step scaled according to inAnglePrecision: the greater inAnglePrecision is, the finer the step, the greater the achievable accuracy and the lower the chance of missing object occurrences (see the sketch below). In practice, however, increasing inAnglePrecision above a certain threshold (unique for every object) does not improve the accuracy; it only increases the computation time.
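
As a rough illustration of the relationship between inAnglePrecision and the angle step, the sketch below enumerates the rotations in which a model could be created. It assumes the actual step is a base step divided by inAnglePrecision, and uses a simple one-pixel-shift heuristic for the base step; neither is necessarily the filter's internal formula.

```cpp
#include <vector>

// Enumerates model rotations in [minAngle, maxAngle] (degrees), assuming
// actual step = base step / anglePrecision (illustrative only).
std::vector<double> ModelRotations(double minAngle, double maxAngle,
                                   double objectRadiusPx, double anglePrecision)
{
    // Assumed base step: the rotation that shifts the outermost model edge
    // (at objectRadiusPx pixels from the center) by about one pixel.
    const double baseStep = 57.29577951 / objectRadiusPx;  // (180 / pi) degrees
    const double step = baseStep / anglePrecision;

    std::vector<double> angles;
    for (double a = minAngle; a <= maxAngle; a += step)
        angles.push_back(a);
    return angles;
}
```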

The inReferenceFrame parameter is a characteristic rectangle whose position will be returned by LocateMultipleObjects_Edges as the occurrence of the object. By default, it is set to the bounding box of the edges found in inTemplateRegion, as illustrated below.
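
The default can be pictured as in the following sketch, which computes the axis-aligned bounding box of the detected edge points; the Point and Rect types are stand-ins for this sketch, not the library's Point2D and Rectangle2D.

```cpp
#include <algorithm>
#include <vector>

// Stand-in types for this sketch.
struct Point { float x, y; };
struct Rect  { float x, y, width, height; };

// Bounding box of the edge points found in the template region --
// the assumed default for inReferenceFrame.
Rect DefaultReferenceFrame(const std::vector<Point>& edgePoints)
{
    if (edgePoints.empty())
        return Rect{ 0.0f, 0.0f, 0.0f, 0.0f };

    float minX = edgePoints[0].x, maxX = edgePoints[0].x;
    float minY = edgePoints[0].y, maxY = edgePoints[0].y;
    for (const Point& p : edgePoints)
    {
        minX = std::min(minX, p.x);  maxX = std::max(maxX, p.x);
        minY = std::min(minY, p.y);  maxY = std::max(maxY, p.y);
    }
    return Rect{ minX, minY, maxX - minX, maxY - minY };
}
```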

Examples

A description of the usage of this filter can be found in the examples and in the tutorial: Dynamic Template Matching.

Remarks

Read more about Local Coordinate Systems in Machine Vision Guide: Local Coordinate Systems.

Additional information about Template Matching can be found in Machine Vision Guide: Template Matching.

Hardware Acceleration

This operation supports automatic parallelization for multicore and multiprocessor systems.

Errors

This filter can throw an exception to report an error. Read how to deal with errors in Error Handling.

List of possible exceptions:

Error type  | Description
------------|------------
DomainError | Incorrect scale range in CreateEdgeModel.
DomainError | Minimal pyramid level cannot be greater than maximal pyramid level in CreateEdgeModel.
DomainError | Region of interest exceeds an input image in CreateEdgeModel.

Complexity Level

This filter is available on Advanced Complexity Level.

See Also

  • LocateSingleObject_Edges – Finds a single occurrence of a predefined template on an image by comparing object edges.