
AVL.CalibrateWorldPlane_Multigrid

Finds the image to world plane transformation parameters from multiple grids, using sparse world coordinate information.

Namespace: AvlNet
Assembly: AVL.NET.dll

Syntax

C#
public static void CalibrateWorldPlane_Multigrid
(
	IList<IList<AvlNet.AnnotatedPoint2D>> inImageGrids,
	IList<IList<AvlNet.AnnotatedPoint2D>> inLabeledWorldPoints,
	float inGridThickness,
	bool inInvertedWorldY,
	AvlNet.RectificationTransform outTransform
)

Parameters

inImageGrids
	Type: System.Collections.Generic.IList<System.Collections.Generic.IList<AvlNet.AnnotatedPoint2D>>
	Array of annotated calibration grids.

inLabeledWorldPoints
	Type: System.Collections.Generic.IList<System.Collections.Generic.IList<AvlNet.AnnotatedPoint2D>>
	Sparse array of world coordinate points. Annotations need to correspond to those in the inImageGrids input.

inGridThickness
	Type: float. Default value: 0.0f.
	The world plane will be shifted by the given amount in the direction perpendicular to the grid, to compensate for the grid thickness.

inInvertedWorldY
	Type: bool. Default value: False.
	Set to true if the world coordinate system has a right-handed orientation, also known as mathematical or standard.

outTransform
	Type: AvlNet.RectificationTransform.
	The resulting transformation from the distorted image plane to the world plane.

Description

The filter estimates the correspondence between the image plane and a "world plane" – a given planar surface in the observed space. It is capable of using multiple grids with sparse world coordinate information, i.e. grids for which only a few world plane coordinates are known.

The image plane, and thus the inImageGrids points, are assumed to be distorted; to correct for the distortion, camera calibration data (inCameraModel) needs to be provided. The calculated result – outTransform – contains all the information needed for transforming the distorted image plane to the world plane.

Currently, only the pinhole camera model is supported, so the planar correspondence used is a homography.
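As a standard illustration of this model (not taken from the original reference; the internal parameterization of RectificationTransform is not specified here), an undistortion-corrected image point (u, v) is mapped to its world plane point (x_w, y_w) up to a scale factor s by a 3x3 homography H:

s \begin{pmatrix} x_w \\ y_w \\ 1 \end{pmatrix} = H \begin{pmatrix} u \\ v \\ 1 \end{pmatrix}, \qquad H \in \mathbb{R}^{3 \times 3}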

For each grid, the algorithm requires at least four inImageGrids points. At least two inLabeledWorldPoints (in total, among all grids) are required to uniquely determine the outTransform.
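A minimal calling sketch is shown below. It is not taken from the official documentation: DetectGrids and LoadWorldLabels are hypothetical placeholders for whatever produces the annotated grids and their sparse world labels (for example, a calibration grid detection step), and it is assumed that RectificationTransform can be default-constructed and is filled in by the call, following the output-parameter convention of the signature above.

using System.Collections.Generic;
using AvlNet;

static class WorldPlaneCalibrationSketch
{
    // Hypothetical placeholder: annotated image points of each detected grid.
    static IList<IList<AnnotatedPoint2D>> DetectGrids() =>
        new List<IList<AnnotatedPoint2D>>();

    // Hypothetical placeholder: sparse world coordinates whose annotations
    // match the corresponding entries returned by DetectGrids.
    static IList<IList<AnnotatedPoint2D>> LoadWorldLabels() =>
        new List<IList<AnnotatedPoint2D>>();

    static void Calibrate()
    {
        IList<IList<AnnotatedPoint2D>> imageGrids = DetectGrids();
        IList<IList<AnnotatedPoint2D>> labeledWorldPoints = LoadWorldLabels();

        // Assumption: RectificationTransform has a default constructor and is
        // filled by the call, as the output-parameter convention above suggests.
        var transform = new RectificationTransform();

        AVL.CalibrateWorldPlane_Multigrid(
            imageGrids,
            labeledWorldPoints,
            0.0f,   // inGridThickness: the grid lies directly on the world plane
            false,  // inInvertedWorldY: keep the default orientation
            transform);

        // transform now maps the (distorted) image plane to the world plane.
    }
}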

The filter provides a few methods for judging the feasibility of the calculated solution:

  • The outRmsImageError and outRmsWorldError outputs can show whether the error comes from the inImageGrids or the inLabeledWorldPoints data. Model mismatch (i.e. trying to calibrate a non-planar object, e.g. a wavy surface) will also result in increased reprojection errors. A large difference between outRmsImageError and outMaxReprojectionErrors can be a sign of outliers in the inImageGrids input data (see the sketch after this list).
  • The outReprojectionErrorSegments output consists of segments connecting the input image points with the reprojected world points, so it can readily be used to visualize per-point reprojection errors (excluding the errors due to incorrect world plane labeling).
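The sketch below is not an AVL.NET API, and the diagnostic outputs listed above are not part of the overload shown on this page; it only illustrates the arithmetic behind the outlier hint. A single bad point inflates the maximum reprojection error far more than the RMS error, so a large gap between the two statistics points to outliers. The point pairs are hypothetical; in practice they would correspond to the per-point error segments mentioned above.

using System;
using System.Collections.Generic;
using System.Linq;

static class ReprojectionErrorSketch
{
    // Each pair holds an input image point and its reprojected counterpart.
    static (double Rms, double Max) ErrorStatistics(
        IReadOnlyList<((double X, double Y) Input, (double X, double Y) Reprojected)> pairs)
    {
        var distances = pairs
            .Select(p => Math.Sqrt(
                Math.Pow(p.Input.X - p.Reprojected.X, 2) +
                Math.Pow(p.Input.Y - p.Reprojected.Y, 2)))
            .ToList();

        // RMS averages out isolated outliers; the maximum does not.
        double rms = Math.Sqrt(distances.Average(d => d * d));
        return (rms, distances.Max());
    }
}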

Errors

List of possible exceptions:

Error type   Description
DomainError  Array inImageGrids and inLabeledWorldPoints sizes differ
DomainError  inGridSpacing needs to be positive
