Zivid C++ API 2.16.0+46cdaba6-1
Zivid::Experimental::Toolbox Namespace Reference

Classes

class  LocalPointCloudRegistrationResult
 The result of a call to localPointCloudRegistration().
 

Typedefs

using Pose = Zivid::Calibration::Pose
 

Functions

ZIVID_CORE_EXPORT std::ostream & operator<< (std::ostream &stream, const LocalPointCloudRegistrationResult &result)
 Serialize the value to a stream.
 
ZIVID_CORE_EXPORT LocalPointCloudRegistrationResult localPointCloudRegistration (const UnorganizedPointCloud &target, const UnorganizedPointCloud &source, const LocalPointCloudRegistrationParameters &params, const Pose &initialTransform=Pose{ Matrix4x4::identity() })
 Compute alignment transform between two point clouds.
 

Typedef Documentation

◆ Pose

using Zivid::Experimental::Toolbox::Pose = Zivid::Calibration::Pose

Function Documentation

◆ localPointCloudRegistration()

ZIVID_CORE_EXPORT LocalPointCloudRegistrationResult Zivid::Experimental::Toolbox::localPointCloudRegistration ( const UnorganizedPointCloud & target,
const UnorganizedPointCloud & source,
const LocalPointCloudRegistrationParameters & params,
const Pose & initialTransform = Pose{ Matrix4x4::identity() } )

Compute alignment transform between two point clouds.

Given a source point cloud and a target point cloud, this function attempts to compute the transform that must be applied to the source in order to align it with the target. This can be used to create a "stitched" unorganized point cloud of an object by combining data collected from different camera angles.

This function takes an initialTransform argument, which is used as the starting point for computing the transform that best aligns source with target. This initial guess is typically obtained from, for example, reference markers or the robot capture pose, and the function then refines the alignment. If source and target already overlap well, the identity matrix can be passed as initialTransform.

The returned transform represents the total transform needed to align source with target, i.e. it includes both initialTransform and the refinement found by the algorithm.

Performance depends strongly on the number of points in each point cloud. To improve performance, voxel downsample one or both point clouds before passing them to this function. The resulting alignment transform can then be applied to the non-downsampled point clouds to obtain a dense result.
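As a rough illustration of this workflow, the sketch below downsamples both clouds before registration and then applies the resulting transform to the full-resolution source cloud. The include path, the voxelDownsampled(), extend(), and transformed() methods on UnorganizedPointCloud, the transform() accessor on LocalPointCloudRegistrationResult, and the example voxel size are assumptions based on the SDK's usual conventions rather than content of this page; verify them against your SDK version.

    #include <Zivid/Zivid.h>
    // The Toolbox is experimental; this include path is an assumption and may
    // differ in your SDK version.
    #include <Zivid/Experimental/PointCloudRegistration.h>

    #include <iostream>

    // Hypothetical helper: aligns source to target and returns a stitched cloud.
    Zivid::UnorganizedPointCloud stitch(
        const Zivid::UnorganizedPointCloud &target,
        const Zivid::UnorganizedPointCloud &source,
        const Zivid::Calibration::Pose &initialGuess)
    {
        namespace Toolbox = Zivid::Experimental::Toolbox;

        // Voxel downsample both clouds to speed up registration; the voxel size
        // (in mm) and minimum points per voxel are example values, and
        // voxelDownsampled() is assumed to exist on UnorganizedPointCloud.
        const auto targetDown = target.voxelDownsampled(2.0F, 1);
        const auto sourceDown = source.voxelDownsampled(2.0F, 1);

        // Refine the initial guess using default registration parameters.
        const Toolbox::LocalPointCloudRegistrationParameters params;
        const auto result =
            Toolbox::localPointCloudRegistration(targetDown, sourceDown, params, initialGuess);
        std::cout << result << std::endl;

        // The returned transform includes the initial guess, so it can be applied
        // directly to the full-resolution source cloud. extend(), transformed(),
        // result.transform(), and Pose::toMatrix() are assumed accessors.
        Zivid::UnorganizedPointCloud stitched;
        stitched.extend(target);
        stitched.extend(source.transformed(result.transform().toMatrix()));
        return stitched;
    }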

Performance also depends strongly on MaxCorrespondenceDistance. To improve performance, try reducing this value. However, keep it larger than the typical point-to-point distance in the point clouds, and larger than the expected translation error in the initial guess.
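If the default parameters are too slow, a parameters object with a tighter correspondence distance might look like the following sketch. The nested-node construction style, the node name, and the example millimeter value are assumptions based on the SDK's usual settings pattern; check the LocalPointCloudRegistrationParameters reference for the actual members.

    namespace Toolbox = Zivid::Experimental::Toolbox;

    // Assumed construction style: MaxCorrespondenceDistance as a nested node,
    // with an example value of 5 mm.
    const Toolbox::LocalPointCloudRegistrationParameters params{
        Toolbox::LocalPointCloudRegistrationParameters::MaxCorrespondenceDistance{ 5.0 }
    };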

Parameters
    target            The point cloud to align with
    source            The point cloud to be aligned with target
    params            Parameters for the registration process and its convergence criteria
    initialTransform  Initial guess applied to source point cloud before refinement
Returns
Instance of LocalPointCloudRegistrationResult containing the computed alignment transform

◆ operator<<()

ZIVID_CORE_EXPORT std::ostream & Zivid::Experimental::Toolbox::operator<< ( std::ostream & stream,
const LocalPointCloudRegistrationResult & result )

Serialize the value to a stream.