PointCloudRegistration.LocalPointCloudRegistration Method
Compute alignment transform between two point clouds.
Namespace: Zivid.NET.Experimental.Toolbox
Assembly: ZividCoreNET (in ZividCoreNET.dll) Version: 2.17.1.0
Syntax

C#
public static LocalPointCloudRegistrationResult LocalPointCloudRegistration(
    UnorganizedPointCloud target,
    UnorganizedPointCloud source,
    LocalPointCloudRegistrationParameters parameters,
    Pose initialTransform
)

VB
Public Shared Function LocalPointCloudRegistration (
    target As UnorganizedPointCloud,
    source As UnorganizedPointCloud,
    parameters As LocalPointCloudRegistrationParameters,
    initialTransform As Pose
) As LocalPointCloudRegistrationResult

C++
public:
static LocalPointCloudRegistrationResult^ LocalPointCloudRegistration(
    UnorganizedPointCloud^ target,
    UnorganizedPointCloud^ source,
    LocalPointCloudRegistrationParameters^ parameters,
    Pose^ initialTransform
)
Parameters
- target
  Type: Zivid.NET.UnorganizedPointCloud
  The point cloud to align with.
- source
  Type: Zivid.NET.UnorganizedPointCloud
  The point cloud to be aligned with target.
- parameters
  Type: Zivid.NET.Experimental.LocalPointCloudRegistrationParameters
  Parameters for the registration process and its convergence criteria.
- initialTransform
  Type: Zivid.NET.Calibration.Pose
  Initial guess applied to the source point cloud before refinement.
Return Value
Type: LocalPointCloudRegistrationResult
Instance of LocalPointCloudRegistrationResult.
Remarks
Given a `source` point cloud and a `target` point cloud, this function attempts to compute the transform
that must be applied to the `source` in order to align it with the `target`. This can be used to create a
"stitched" unorganized point cloud of an object by combining data collected from different camera angles.
This function takes an `initialTransform` argument that is used as a starting point for computing
the transform that best aligns `source` with `target`. The initial guess is typically obtained from,
for example, reference markers or the robot capture pose, and this function then refines that
alignment. If `source` and `target` already overlap well, the identity transform can be passed as
`initialTransform`.
The returned transform represents the total transform needed to align `source` with `target`, i.e. it
includes both `initialTransform` and the refinement found by the algorithm.
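As a rough C# sketch of the intended call pattern: the static class name `PointCloudRegistration` is taken from this page's title, but the result members (`Converged`, `Transform`), `Pose.ToMatrix()`, the `UnorganizedPointCloud.Transform` and `Extend` methods, and the parameterless `LocalPointCloudRegistrationParameters` constructor are assumptions for illustration and may not match the actual API surface.

using Zivid.NET;
using Zivid.NET.Calibration;
using Zivid.NET.Experimental.Toolbox;

static class RegistrationExample
{
    // `target` and `source` are captures of the same object from two camera
    // angles; `initialGuess` typically comes from robot kinematics or markers.
    public static void AlignAndStitch(
        UnorganizedPointCloud target,
        UnorganizedPointCloud source,
        Pose initialGuess)
    {
        var parameters = new LocalPointCloudRegistrationParameters(); // assumed ctor

        var result = PointCloudRegistration.LocalPointCloudRegistration(
            target, source, parameters, initialGuess);

        // `Converged`, `Transform`, and `ToMatrix` are assumed member names.
        if (result.Converged)
        {
            // The returned transform already includes `initialGuess`, so it
            // is applied directly to the original, untransformed source.
            source.Transform(result.Transform.ToMatrix());
            target.Extend(source); // hypothetical stitching step
        }
    }
}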
Performance depends heavily on the number of points in each point cloud. To improve performance,
voxel-downsample one or both point clouds before passing them to this function. The resulting
alignment transform can then be applied to the non-downsampled point clouds to obtain a dense result.
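For example, registration might run on coarse copies while the dense source keeps full resolution. `VoxelDownsampled(voxelSize, minPointsPerVoxel)` is an assumed .NET member name and signature (modeled on the corresponding C++ API), and `Transform`/`ToMatrix` are likewise assumptions; the voxel size is in the point cloud's units, typically millimeters.

// Register coarse copies to cut runtime, then transform the dense cloud.
var coarseTarget = target.VoxelDownsampled(2.0f, 1); // assumed signature
var coarseSource = source.VoxelDownsampled(2.0f, 1);

var result = PointCloudRegistration.LocalPointCloudRegistration(
    coarseTarget, coarseSource, parameters, initialGuess);

// Apply the alignment found on the coarse clouds to the dense source,
// preserving full point density in the stitched result.
source.Transform(result.Transform.ToMatrix()); // assumed member names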
Performance also depends heavily on `MaxCorrespondenceDistance`; reducing this value can improve
performance. However, keep the value larger than the typical point-to-point distance in the point
clouds, and larger than the expected translation error of the initial guess.
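If `MaxCorrespondenceDistance` is exposed as a settable property on `LocalPointCloudRegistrationParameters` (assumed here; the real type may follow a different settings pattern), tuning it could look like:

// Assumed property name and initializer syntax. The value is in the point
// cloud's units (typically mm): keep it above the typical point-to-point
// spacing and above the expected translation error of the initial guess.
var parameters = new LocalPointCloudRegistrationParameters
{
    MaxCorrespondenceDistance = 5.0,
};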
See Also