Open3D (C++ API)
0.18.0+252c867
Data Structures

class ICPConvergenceCriteria
    Class that defines the convergence criteria of ICP.
class RegistrationResult
class RobustKernel
class TransformationEstimation
class TransformationEstimationPointToPoint
class TransformationEstimationPointToPlane
class TransformationEstimationForColoredICP
class TransformationEstimationForDopplerICP
Enumerations

enum class RobustKernelMethod { L2Loss = 0, L1Loss = 1, HuberLoss = 2, CauchyLoss = 3, GMLoss = 4, TukeyLoss = 5, GeneralizedLoss = 6 }
enum class TransformationEstimationType { Unspecified = 0, PointToPoint = 1, PointToPlane = 2, ColoredICP = 3, DopplerICP = 4 }
Functions

core::Tensor ComputeFPFHFeature(const geometry::PointCloud &input, const utility::optional<int> max_nn, const utility::optional<double> radius)
    Function to compute the FPFH feature for a point cloud.

core::Tensor CorrespondencesFromFeatures(const core::Tensor &source_features, const core::Tensor &target_features, bool mutual_filter = false, float mutual_consistency_ratio = 0.1)
    Function to find correspondences via 1-nearest-neighbor feature matching. The target features are used to construct the nearest-neighbor search structure, which is then queried with the source features.

RegistrationResult EvaluateRegistration(const geometry::PointCloud &source, const geometry::PointCloud &target, double max_correspondence_distance, const core::Tensor &transformation = core::Tensor::Eye(4, core::Float64, core::Device("CPU:0")))
    Function for evaluating registration between point clouds.

RegistrationResult ICP(const geometry::PointCloud &source, const geometry::PointCloud &target, const double max_correspondence_distance, const core::Tensor &init_source_to_target = core::Tensor::Eye(4, core::Float64, core::Device("CPU:0")), const TransformationEstimation &estimation = TransformationEstimationPointToPoint(), const ICPConvergenceCriteria &criteria = ICPConvergenceCriteria(), const double voxel_size = -1.0, const std::function<void(const std::unordered_map<std::string, core::Tensor> &)> &callback_after_iteration = nullptr)
    Function for ICP registration.

RegistrationResult MultiScaleICP(const geometry::PointCloud &source, const geometry::PointCloud &target, const std::vector<double> &voxel_sizes, const std::vector<ICPConvergenceCriteria> &criteria_list, const std::vector<double> &max_correspondence_distances, const core::Tensor &init_source_to_target = core::Tensor::Eye(4, core::Float64, core::Device("CPU:0")), const TransformationEstimation &estimation = TransformationEstimationPointToPoint(), const std::function<void(const std::unordered_map<std::string, core::Tensor> &)> &callback_after_iteration = nullptr)
    Function for multi-scale ICP registration, running ICP from coarse to fine voxel levels.

core::Tensor GetInformationMatrix(const geometry::PointCloud &source, const geometry::PointCloud &target, const double max_correspondence_distance, const core::Tensor &transformation)
    Computes the information matrix from the transformation between the source and target point clouds. Returns a tensor of shape {6, 6}, dtype Float64, on device CPU:0.
core::Tensor open3d::t::pipelines::registration::ComputeFPFHFeature(
        const geometry::PointCloud &input,
        const utility::optional<int> max_nn = 100,
        const utility::optional<double> radius = utility::nullopt)
Function to compute the FPFH feature for a point cloud. It uses KNN search (not recommended on GPU) if only the max_nn parameter is provided, radius search (not recommended on GPU) if only the radius parameter is provided, and hybrid search (recommended) if both are provided.

Parameters
    input     The input point cloud with data type Float32 or Float64.
    max_nn    [optional] Maximum number of neighbors for the neighbor search. [Default = 100]
    radius    [optional] Radius for the neighbor search. [Recommended: ~5x the voxel size]
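A minimal usage sketch, not part of the reference above: the file name, voxel size, and normal-estimation parameters are placeholder assumptions, and ReadPointCloud, VoxelDownSample, and EstimateNormals come from other parts of the Open3D tensor API.

#include "open3d/Open3D.h"

int main() {
    using namespace open3d;

    // Placeholder input; replace with your own point cloud file.
    t::geometry::PointCloud pcd;
    t::io::ReadPointCloud("cloud.pcd", pcd);

    // Down-sample so the recommended radius (~5x voxel size) is meaningful,
    // and estimate normals, which FPFH computation requires.
    const double voxel_size = 0.05;
    pcd = pcd.VoxelDownSample(voxel_size);
    pcd.EstimateNormals(30, voxel_size * 2.0);

    // Hybrid search (recommended): both max_nn and radius are provided.
    core::Tensor fpfh = t::pipelines::registration::ComputeFPFHFeature(
            pcd, 100, voxel_size * 5.0);
    utility::LogInfo("FPFH feature shape: {}", fpfh.GetShape().ToString());
    return 0;
}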
core::Tensor open3d::t::pipelines::registration::CorrespondencesFromFeatures(
        const core::Tensor &source_features,
        const core::Tensor &target_features,
        bool mutual_filter = false,
        float mutual_consistency_ratio = 0.1)
Function to find correspondences via 1-nearest-neighbor feature matching. The target features are used to construct the nearest-neighbor search structure, which is then queried with the source features.

Parameters
    source_features             (N, D) tensor of source features.
    target_features             (M, D) tensor of target features.
    mutual_filter               Boolean flag; if true, only return correspondences (i, j) such that source_features[i] and target_features[j] are mutually nearest neighbors.
    mutual_consistency_ratio    Float threshold used to decide whether the number of correspondences is sufficient. Only used when mutual_filter is set to true.
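A minimal usage sketch combining ComputeFPFHFeature with CorrespondencesFromFeatures; the file names and numeric parameters are placeholder assumptions, not values from this reference.

#include "open3d/Open3D.h"

int main() {
    using namespace open3d;
    namespace treg = t::pipelines::registration;

    // Placeholder inputs; replace with your own point cloud files.
    t::geometry::PointCloud source, target;
    t::io::ReadPointCloud("source.pcd", source);
    t::io::ReadPointCloud("target.pcd", target);

    // Down-sample and estimate normals so FPFH features can be computed.
    const double voxel_size = 0.05;
    source = source.VoxelDownSample(voxel_size);
    target = target.VoxelDownSample(voxel_size);
    source.EstimateNormals(30, voxel_size * 2.0);
    target.EstimateNormals(30, voxel_size * 2.0);

    core::Tensor source_features =
            treg::ComputeFPFHFeature(source, 100, voxel_size * 5.0);
    core::Tensor target_features =
            treg::ComputeFPFHFeature(target, 100, voxel_size * 5.0);

    // Keep only correspondences (i, j) that are mutual nearest neighbors.
    core::Tensor correspondences = treg::CorrespondencesFromFeatures(
            source_features, target_features, /*mutual_filter=*/true,
            /*mutual_consistency_ratio=*/0.1f);
    utility::LogInfo("Found {} correspondences.", correspondences.GetLength());
    return 0;
}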
RegistrationResult open3d::t::pipelines::registration::EvaluateRegistration(
        const geometry::PointCloud &source,
        const geometry::PointCloud &target,
        double max_correspondence_distance,
        const core::Tensor &transformation = core::Tensor::Eye(4, core::Float64, core::Device("CPU:0")))
Function for evaluating registration between point clouds.

Parameters
    source                          The source point cloud. (Float32 or Float64 type).
    target                          The target point cloud. (Float32 or Float64 type).
    max_correspondence_distance     Maximum correspondence points-pair distance.
    transformation                  The 4x4 transformation matrix to transform source to target, of dtype Float64 on the CPU device.
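A minimal usage sketch evaluating the default (identity) alignment; the file names and the 0.05 correspondence distance are placeholder assumptions.

#include "open3d/Open3D.h"

int main() {
    using namespace open3d;
    namespace treg = t::pipelines::registration;

    // Placeholder inputs; replace with your own point cloud files.
    t::geometry::PointCloud source, target;
    t::io::ReadPointCloud("source.pcd", source);
    t::io::ReadPointCloud("target.pcd", target);

    // With the default transformation argument, this evaluates how well the
    // clouds align under the identity transform.
    treg::RegistrationResult result = treg::EvaluateRegistration(
            source, target, /*max_correspondence_distance=*/0.05);

    utility::LogInfo("Fitness: {}, Inlier RMSE: {}", result.fitness_,
                     result.inlier_rmse_);
    return 0;
}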
core::Tensor open3d::t::pipelines::registration::GetInformationMatrix(
        const geometry::PointCloud &source,
        const geometry::PointCloud &target,
        const double max_correspondence_distance,
        const core::Tensor &transformation)
Computes the information matrix from the transformation between the source and target point clouds. It returns the information matrix of shape {6, 6}, of dtype Float64, on device CPU:0.

Parameters
    source                          The source point cloud. (Float32 or Float64 type).
    target                          The target point cloud. (Float32 or Float64 type).
    max_correspondence_distance     Maximum correspondence points-pair distance.
    transformation                  The 4x4 transformation matrix to transform source to target.
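A minimal usage sketch: run ICP first, then compute the information matrix for the estimated pose (e.g. as an edge weight for pose-graph optimization). The file names and the 0.05 correspondence distance are placeholder assumptions.

#include "open3d/Open3D.h"

int main() {
    using namespace open3d;
    namespace treg = t::pipelines::registration;

    // Placeholder inputs; replace with your own point cloud files.
    t::geometry::PointCloud source, target;
    t::io::ReadPointCloud("source.pcd", source);
    t::io::ReadPointCloud("target.pcd", target);

    const double max_correspondence_distance = 0.05;

    // Align with default point-to-point ICP, then compute the information
    // matrix for the resulting transformation.
    treg::RegistrationResult result =
            treg::ICP(source, target, max_correspondence_distance);

    core::Tensor information = treg::GetInformationMatrix(
            source, target, max_correspondence_distance,
            result.transformation_);
    utility::LogInfo("Information matrix:\n{}", information.ToString());
    return 0;
}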
RegistrationResult open3d::t::pipelines::registration::ICP(
        const geometry::PointCloud &source,
        const geometry::PointCloud &target,
        const double max_correspondence_distance,
        const core::Tensor &init_source_to_target = core::Tensor::Eye(4, core::Float64, core::Device("CPU:0")),
        const TransformationEstimation &estimation = TransformationEstimationPointToPoint(),
        const ICPConvergenceCriteria &criteria = ICPConvergenceCriteria(),
        const double voxel_size = -1.0,
        const std::function<void(const std::unordered_map<std::string, core::Tensor> &)> &callback_after_iteration = nullptr)
Function for ICP registration.

Parameters
    source                          The source point cloud. (Float32 or Float64 type).
    target                          The target point cloud. (Float32 or Float64 type).
    max_correspondence_distance     Maximum correspondence points-pair distance.
    init_source_to_target           Initial transformation estimation of type Float64 on CPU.
    estimation                      Estimation method.
    criteria                        Convergence criteria.
    voxel_size                      The input point clouds will be down-sampled to this voxel size. If voxel_size < 0 (the default), the original scale is used; however, down-sampling is highly recommended for performance.
    callback_after_iteration        Optional lambda function that receives a string-to-tensor map of attributes such as "iteration_index", "scale_index", "scale_iteration_index", "inlier_rmse", "fitness", and "transformation" on the CPU device, updated after each iteration.
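A minimal usage sketch of point-to-plane ICP; the file names, correspondence distance, voxel size, normal-estimation parameters, and convergence thresholds are placeholder assumptions.

#include "open3d/Open3D.h"

int main() {
    using namespace open3d;
    namespace treg = t::pipelines::registration;

    // Placeholder inputs; replace with your own point cloud files.
    t::geometry::PointCloud source, target;
    t::io::ReadPointCloud("source.pcd", source);
    t::io::ReadPointCloud("target.pcd", target);

    // Point-to-plane estimation needs target normals.
    target.EstimateNormals(30, 0.1);

    core::Tensor init =
            core::Tensor::Eye(4, core::Float64, core::Device("CPU:0"));

    treg::RegistrationResult result = treg::ICP(
            source, target,
            /*max_correspondence_distance=*/0.07, init,
            treg::TransformationEstimationPointToPlane(),
            treg::ICPConvergenceCriteria(/*relative_fitness=*/1e-6,
                                         /*relative_rmse=*/1e-6,
                                         /*max_iteration=*/30),
            /*voxel_size=*/0.02);

    utility::LogInfo("Fitness: {}, Inlier RMSE: {}", result.fitness_,
                     result.inlier_rmse_);
    utility::LogInfo("Transformation:\n{}", result.transformation_.ToString());
    return 0;
}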
RegistrationResult open3d::t::pipelines::registration::MultiScaleICP(
        const geometry::PointCloud &source,
        const geometry::PointCloud &target,
        const std::vector<double> &voxel_sizes,
        const std::vector<ICPConvergenceCriteria> &criteria_list,
        const std::vector<double> &max_correspondence_distances,
        const core::Tensor &init_source_to_target = core::Tensor::Eye(4, core::Float64, core::Device("CPU:0")),
        const TransformationEstimation &estimation = TransformationEstimationPointToPoint(),
        const std::function<void(const std::unordered_map<std::string, core::Tensor> &)> &callback_after_iteration = nullptr)
Function for multi-scale ICP registration. It runs ICP at multiple voxel levels, from coarse to fine. The vector of ICPConvergenceCriteria (relative fitness, relative RMSE, max iterations) contains the stopping condition for each voxel level. The voxel_sizes, criteria_list, and max_correspondence_distances vectors must have the same length, and voxel_sizes must contain positive values in strictly decreasing order (the lower the voxel size, the higher the resolution). Only the last value of the voxel_sizes vector may be -1, which runs the final scale on the original resolution without down-sampling.

Parameters
    source                          The source point cloud. (Float32 or Float64 type).
    target                          The target point cloud. (Float32 or Float64 type).
    voxel_sizes                     Vector of voxel sizes (double), one per scale.
    criteria_list                   Vector of ICPConvergenceCriteria objects, one per scale.
    max_correspondence_distances    Vector of maximum correspondence points-pair distances (double), one per scale. Must have the same length as voxel_sizes and criteria_list.
    init_source_to_target           Initial transformation estimation of type Float64 on CPU.
    estimation                      Estimation method.
    callback_after_iteration        Optional lambda function that receives a string-to-tensor map of attributes such as "iteration_index", "scale_index", "scale_iteration_index", "inlier_rmse", "fitness", and "transformation" on the CPU device, updated after each iteration.
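A minimal usage sketch of three-scale coarse-to-fine registration; the file names and all per-scale values (voxel sizes, convergence thresholds, correspondence distances) are placeholder assumptions chosen only to illustrate that the three vectors must have matching lengths.

#include <vector>

#include "open3d/Open3D.h"

int main() {
    using namespace open3d;
    namespace treg = t::pipelines::registration;

    // Placeholder inputs; replace with your own point cloud files.
    t::geometry::PointCloud source, target;
    t::io::ReadPointCloud("source.pcd", source);
    t::io::ReadPointCloud("target.pcd", target);

    // Point-to-plane estimation needs target normals.
    target.EstimateNormals(30, 0.1);

    // One entry per scale: strictly decreasing voxel sizes, with matching
    // convergence criteria and correspondence distances.
    std::vector<double> voxel_sizes{0.1, 0.05, 0.025};
    std::vector<treg::ICPConvergenceCriteria> criteria_list{
            treg::ICPConvergenceCriteria(1e-4, 1e-4, 20),
            treg::ICPConvergenceCriteria(1e-5, 1e-5, 15),
            treg::ICPConvergenceCriteria(1e-6, 1e-6, 10)};
    std::vector<double> max_correspondence_distances{0.3, 0.14, 0.07};

    core::Tensor init =
            core::Tensor::Eye(4, core::Float64, core::Device("CPU:0"));

    treg::RegistrationResult result = treg::MultiScaleICP(
            source, target, voxel_sizes, criteria_list,
            max_correspondence_distances, init,
            treg::TransformationEstimationPointToPlane());

    utility::LogInfo("Fitness: {}, Inlier RMSE: {}", result.fitness_,
                     result.inlier_rmse_);
    return 0;
}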