
LocateMultipleObjects_SAD


Header: AVL.h
Namespace: avl
Module: MatchingBasic

Finds multiple occurrences of a predefined template on an image by analysing the Square Average Difference between pixel values.

Applications: Almost always inferior to NCC, so rarely used in real applications.

Syntax

C++
void avl::LocateMultipleObjects_SAD
(
	const avl::Image& inImage,
	atl::Optional<const avl::ShapeRegion&> inSearchRegion,
	atl::Optional<const avl::CoordinateSystem2D&> inSearchRegionAlignment,
	const avl::GrayModel& inGrayModel,
	int inMinPyramidLevel,
	atl::Optional<int> inMaxPyramidLevel,
	bool inIgnoreBoundaryObjects,
	float inMaxDifference,
	float inMinDistance,
	atl::Array<avl::Object2D>& outObjects,
	atl::Optional<int&> outPyramidHeight = atl::NIL,
	atl::Optional<avl::ShapeRegion&> outAlignedSearchRegion = atl::NIL,
	atl::Array<avl::Image>& diagImagePyramid = atl::Dummy<atl::Array<avl::Image>>(),
	atl::Array<avl::Image>& diagMatchPyramid = atl::Dummy<atl::Array<avl::Image>>(),
	atl::Array<atl::Array<float> >& diagScores = atl::Dummy<atl::Array<atl::Array<float>>>()
)

Parameters

Inputs
  inImage (const Image&) – Image on which model occurrences will be searched
  inSearchRegion (Optional<const ShapeRegion&>; default: NIL) – Possible centers of the object occurrences
  inSearchRegionAlignment (Optional<const CoordinateSystem2D&>; default: NIL) – Adjusts the region of interest to the position of the inspected object
  inGrayModel (const GrayModel&) – Model which will be sought
  inMinPyramidLevel (int; range: 0 - 12; default: 0) – Defines the highest resolution level
  inMaxPyramidLevel (Optional<int>; range: 0 - 12; default: 3) – Defines the number of reduced resolution levels that can be used to speed up computations
  inIgnoreBoundaryObjects (bool; default: False) – Flag indicating whether objects crossing the image boundary should be ignored
  inMaxDifference (float; range: ≥ 0.0; default: 5.0f) – Maximum accepted average difference between pixel values
  inMinDistance (float; range: ≥ 0.0; default: 10.0f) – Minimum distance between two matches

Outputs
  outObjects (Array<Object2D>&) – Found objects
  outPyramidHeight (Optional<int&>; default: NIL) – Highest pyramid level used to speed up computations
  outAlignedSearchRegion (Optional<ShapeRegion&>; default: NIL) – Transformed input shape region

Diagnostic outputs
  diagImagePyramid (Array<Image>&) – Pyramid of iteratively downsampled input image
  diagMatchPyramid (Array<Image>&) – Locations found on each pyramid level
  diagScores (Array<Array<float>>&) – Scores of found matches on each pyramid level

Optional Outputs

The computation of the following outputs can be switched off by passing the value atl::NIL to these parameters: outPyramidHeight, outAlignedSearchRegion.

Read more about Optional Outputs.

Description

The operation matches the object model inGrayModel against the input image inImage. The inSearchRegion region restricts the search area: only within this region can the centers of the found objects be located. The inMaxDifference parameter determines the maximum average difference between corresponding pixel values for a valid object occurrence. The inMinDistance parameter determines the minimum distance between any two valid occurrences; if two occurrences lie closer than inMinDistance to each other, only the one with the better score is considered valid.
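The matching criterion can be illustrated with a self-contained sketch (not the actual AVL implementation): the SAD score of a candidate location is the average absolute difference between the template pixels and the image pixels under the template, and a location is accepted when this score does not exceed inMaxDifference:

```cpp
#include <cstdlib>
#include <vector>

// Self-contained sketch of the SAD matching criterion (not the AVL code).
// The score of a candidate location (top, left) is the average absolute
// difference between template pixels and the image pixels under them.
float SadScore(const std::vector<std::vector<int>>& image,
               const std::vector<std::vector<int>>& templ,
               std::size_t top, std::size_t left)
{
    long long sum = 0;
    for (std::size_t y = 0; y < templ.size(); ++y)
        for (std::size_t x = 0; x < templ[y].size(); ++x)
            sum += std::abs(image[top + y][left + x] - templ[y][x]);
    return static_cast<float>(sum) /
           static_cast<float>(templ.size() * templ[0].size());
}
```

For example, a 2×2 template that exactly matches the image window at its true location scores 0.0, while a window shifted by one pixel yields the average of the per-pixel differences.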

The computation time of the filter depends on the size of the model, on the sizes of inImage and inSearchRegion, and also on the value of inMaxDifference. This parameter acts as a score threshold: based on its value, a partial computation can be sufficient to reject a location as a valid object instance. Moreover, an image pyramid is used, so only the highest pyramid level is searched exhaustively, and potential candidates are then validated at lower levels. The inMinPyramidLevel parameter determines the lowest pyramid level used to validate such candidates. Setting this parameter to a value greater than 0 may speed up the computation significantly, especially for high-resolution images, but the accuracy of the found object occurrences may be reduced. A lower inMaxDifference generates fewer potential candidates on the highest level to be verified on lower levels. Note, however, that some valid occurrences can be missed: the score computed on a higher pyramid level can be slightly worse than on lower levels, so an occurrence that would be accepted on the lowest level may be incorrectly rejected on some higher level. The diagMatchPyramid output contains all potential candidates recognized on each pyramid level and can be helpful during the often difficult process of parameter tuning.
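The early-rejection idea behind the score threshold can be sketched as follows (an illustrative sketch over a row-major patch, not the AVL code): because SAD accumulates only non-negative terms, the partial sum can be compared against the budget inMaxDifference × pixelCount and the candidate abandoned as soon as the budget is exceeded:

```cpp
#include <cstdlib>
#include <vector>

// Illustrative sketch of the score-threshold optimization (not the AVL
// code). SAD accumulates non-negative terms, so once the partial sum
// exceeds the budget maxDifference * pixelCount, the candidate location
// can be rejected without visiting the remaining pixels.
bool WithinDifferenceBudget(const std::vector<int>& imagePatch,
                            const std::vector<int>& templ,
                            float maxDifference)
{
    const float budget = maxDifference * static_cast<float>(templ.size());
    float partial = 0.0f;
    for (std::size_t i = 0; i < templ.size(); ++i) {
        partial += static_cast<float>(std::abs(imagePatch[i] - templ[i]));
        if (partial > budget)
            return false;  // early rejection: remaining pixels are skipped
    }
    return true;
}
```

This is why a tighter inMaxDifference not only filters the results but also reduces the work spent on hopeless candidate locations.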

To be able to locate objects which are partially outside the image, the filter assumes that there are only black pixels beyond the image border.

The outObjects.Point array contains the model reference points of the matched object occurrences, and the corresponding outObjects.Angle array contains their rotation angles. The outObjects.Match array combines the position and the angle of each match into a single value of Rectangle2D type. Each element of the outObjects.Alignment array describes the transform that maps geometrical objects defined in the coordinate system of the template image onto the corresponding outObjects.Match position. This array can later be used, e.g., by filters from the 1D Edge Detection or Shape Fitting categories.

The SAD (Sum of Absolute Differences) method can be significantly slower than the NCC (Normalized Cross-Correlation) method. Moreover, it is not invariant to illumination changes, which most applications require. It is therefore highly recommended to use the NCC method instead.
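The lack of illumination invariance is easy to demonstrate on toy 1-D patches (a self-contained sketch, not AVL code): adding a constant brightness offset to the image shifts the SAD score by exactly that offset, while NCC, which subtracts the means and normalizes, is unaffected:

```cpp
#include <cmath>
#include <vector>

// Average absolute difference between two equally sized patches.
float Sad(const std::vector<float>& a, const std::vector<float>& b) {
    float s = 0.0f;
    for (std::size_t i = 0; i < a.size(); ++i) s += std::fabs(a[i] - b[i]);
    return s / static_cast<float>(a.size());
}

// Normalized cross-correlation: mean-subtracted, variance-normalized,
// so a uniform brightness offset cancels out entirely.
float Ncc(const std::vector<float>& a, const std::vector<float>& b) {
    float ma = 0.0f, mb = 0.0f;
    for (std::size_t i = 0; i < a.size(); ++i) { ma += a[i]; mb += b[i]; }
    ma /= static_cast<float>(a.size());
    mb /= static_cast<float>(b.size());
    float num = 0.0f, da = 0.0f, db = 0.0f;
    for (std::size_t i = 0; i < a.size(); ++i) {
        num += (a[i] - ma) * (b[i] - mb);
        da  += (a[i] - ma) * (a[i] - ma);
        db  += (b[i] - mb) * (b[i] - mb);
    }
    return num / std::sqrt(da * db);
}
```

For a template {10, 20, 30, 40} and the same patch brightened by +50, SAD reports an average difference of 50 (a miss under any reasonable inMaxDifference), while NCC still reports a perfect correlation of 1.0.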

Remarks

Read more about Local Coordinate Systems in Machine Vision Guide: Local Coordinate Systems.

Additional information about Template Matching can be found in Machine Vision Guide: Template Matching.

Hardware Acceleration

This operation supports automatic parallelization for multicore and multiprocessor systems.

See Also

  • LocateSingleObject_SAD – Finds a single occurrence of a predefined template on an image by analysing the Square Average Difference between pixel values.
  • LocateMultipleObjects_NCC – Finds all occurrences of a predefined template on an image by analysing the normalized correlation between pixel values.