Open3D (C++ API)  0.18.0+252c867
open3d::t::pipelines::odometry Namespace Reference

Data Structures

class  OdometryConvergenceCriteria
 
class  OdometryResult
 
class  OdometryLossParams
 

Enumerations

enum class Method { PointToPlane, Intensity, Hybrid }
 

Functions

OdometryResult RGBDOdometryMultiScalePointToPlane (const RGBDImage &source, const RGBDImage &target, const Tensor &intrinsics, const Tensor &trans, const float depth_scale, const float depth_max, const std::vector< OdometryConvergenceCriteria > &criteria, const OdometryLossParams &params)
 
OdometryResult RGBDOdometryMultiScaleIntensity (const RGBDImage &source, const RGBDImage &target, const Tensor &intrinsic, const Tensor &trans, const float depth_scale, const float depth_max, const std::vector< OdometryConvergenceCriteria > &criteria, const OdometryLossParams &params)
 
OdometryResult RGBDOdometryMultiScaleHybrid (const RGBDImage &source, const RGBDImage &target, const Tensor &intrinsics, const Tensor &trans, const float depth_scale, const float depth_max, const std::vector< OdometryConvergenceCriteria > &criteria, const OdometryLossParams &params)
 
OdometryResult RGBDOdometryMultiScale (const t::geometry::RGBDImage &source, const t::geometry::RGBDImage &target, const core::Tensor &intrinsics, const core::Tensor &init_source_to_target=core::Tensor::Eye(4, core::Float64, core::Device("CPU:0")), const float depth_scale=1000.0f, const float depth_max=3.0f, const std::vector< OdometryConvergenceCriteria > &criteria_list={10, 5, 3}, const Method method=Method::Hybrid, const OdometryLossParams &params=OdometryLossParams())
 Creates an RGBD image pyramid from the original source and target RGBD images and performs hierarchical odometry using the specified method. Suitable for offline odometry where peak performance is not required and previously computed vertex/normal maps are not reused. Input RGBD images hold a depth image (UInt16 or Float32) with a scale factor and a color image (UInt8 x 3).
 
OdometryResult ComputeOdometryResultPointToPlane (const core::Tensor &source_vertex_map, const core::Tensor &target_vertex_map, const core::Tensor &target_normal_map, const core::Tensor &intrinsics, const core::Tensor &init_source_to_target, const float depth_outlier_trunc, const float depth_huber_delta)
 Estimates the 4x4 rigid transformation T from source to target, with inlier rmse and fitness. Performs one iteration of RGBD odometry using loss function \([(V_p - V_q)^T N_p]^2\), where \( V_p \) denotes the vertex at pixel p in the source, \( V_q \) denotes the vertex at pixel q in the target, \( N_p \) denotes the normal at pixel p in the source. q is obtained by transforming p with init_source_to_target then projecting with intrinsics. KinectFusion, ISMAR 2011.
 
OdometryResult ComputeOdometryResultIntensity (const core::Tensor &source_depth, const core::Tensor &target_depth, const core::Tensor &source_intensity, const core::Tensor &target_intensity, const core::Tensor &target_intensity_dx, const core::Tensor &target_intensity_dy, const core::Tensor &source_vertex_map, const core::Tensor &intrinsics, const core::Tensor &init_source_to_target, const float depth_outlier_trunc, const float intensity_huber_delta)
 Estimates the 4x4 rigid transformation T from source to target, with inlier rmse and fitness. Performs one iteration of RGBD odometry using loss function \((I_p - I_q)^2\), where \( I_p \) denotes the intensity at pixel p in the source, \( I_q \) denotes the intensity at pixel q in the target. q is obtained by transforming p with init_source_to_target then projecting with intrinsics. Real-time visual odometry from dense RGB-D images, ICCV Workshops, 2011.
 
OdometryResult ComputeOdometryResultHybrid (const core::Tensor &source_depth, const core::Tensor &target_depth, const core::Tensor &source_intensity, const core::Tensor &target_intensity, const core::Tensor &target_depth_dx, const core::Tensor &target_depth_dy, const core::Tensor &target_intensity_dx, const core::Tensor &target_intensity_dy, const core::Tensor &source_vertex_map, const core::Tensor &intrinsics, const core::Tensor &init_source_to_target, const float depth_outlier_trunc, const float depth_huber_delta, const float intensity_huber_delta)
 Estimates the 4x4 rigid transformation T from source to target, with inlier rmse and fitness. Performs one iteration of RGBD odometry using loss function \((I_p - I_q)^2 + \lambda(D_p - (D_q)')^2\), where \( I_p \) denotes the intensity at pixel p in the source, \( I_q \) denotes the intensity at pixel q in the target, \( D_p \) denotes the depth at pixel p in the source, and \( D_q \) denotes the depth at pixel q in the target. q is obtained by transforming p with init_source_to_target then projecting with intrinsics. Reference: J. Park, Q.Y. Zhou, and V. Koltun, Colored Point Cloud Registration Revisited, ICCV, 2017.
 
core::Tensor ComputeOdometryInformationMatrix (const geometry::Image &source_depth, const geometry::Image &target_depth, const core::Tensor &intrinsic, const core::Tensor &source_to_target, const float dist_thr, const float depth_scale, const float depth_max)
 

Enumeration Type Documentation

◆ Method

Enumerator
PointToPlane 
Intensity 
Hybrid 

Function Documentation

◆ ComputeOdometryInformationMatrix()

core::Tensor open3d::t::pipelines::odometry::ComputeOdometryInformationMatrix ( const geometry::Image &source_depth,
const geometry::Image &target_depth,
const core::Tensor &intrinsic,
const core::Tensor &source_to_target,
const float dist_thr,
const float depth_scale = 1000.0f,
const float depth_max = 3.0f
)

Estimates the 6x6 information matrix from a pair of depth images. The process is akin to information matrix creation for point clouds.
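
A minimal usage sketch, assuming using namespace open3d;, two t::geometry::Image depth images src_depth and dst_depth, a (3, 3) intrinsic tensor, and a (4, 4) source_to_target estimate (e.g., the result of RGBDOdometryMultiScale below); the threshold values are illustrative only.

    // Information matrix between two depth frames given an estimated
    // source-to-target transform (threshold values are illustrative).
    core::Tensor info = t::pipelines::odometry::ComputeOdometryInformationMatrix(
            src_depth, dst_depth, intrinsics, source_to_target,
            /*dist_thr=*/0.07f, /*depth_scale=*/1000.0f, /*depth_max=*/3.0f);
    // `info` is a (6, 6) tensor, usable e.g. as an edge information matrix
    // in pose graph optimization.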

◆ ComputeOdometryResultHybrid()

OdometryResult open3d::t::pipelines::odometry::ComputeOdometryResultHybrid ( const core::Tensor &source_depth,
const core::Tensor &target_depth,
const core::Tensor &source_intensity,
const core::Tensor &target_intensity,
const core::Tensor &target_depth_dx,
const core::Tensor &target_depth_dy,
const core::Tensor &target_intensity_dx,
const core::Tensor &target_intensity_dy,
const core::Tensor &source_vertex_map,
const core::Tensor &intrinsics,
const core::Tensor &init_source_to_target,
const float depth_outlier_trunc,
const float depth_huber_delta,
const float intensity_huber_delta
)

Estimates the 4x4 rigid transformation T from source to target, with inlier rmse and fitness. Performs one iteration of RGBD odometry using loss function \((I_p - I_q)^2 + \lambda(D_p - (D_q)')^2\), where \( I_p \) denotes the intensity at pixel p in the source, \( I_q \) denotes the intensity at pixel q in the target, \( D_p \) denotes the depth at pixel p in the source, and \( D_q \) denotes the depth at pixel q in the target. q is obtained by transforming p with init_source_to_target then projecting with intrinsics. Reference: J. Park, Q.Y. Zhou, and V. Koltun, Colored Point Cloud Registration Revisited, ICCV, 2017.

Parameters
source_depth: (rows, cols, channels=1) Float32 source depth image obtained by PreprocessDepth before calling this function.
target_depth: (rows, cols, channels=1) Float32 target depth image obtained by PreprocessDepth before calling this function.
source_intensity: (rows, cols, channels=1) Float32 source intensity image obtained by RGBToGray before calling this function.
target_intensity: (rows, cols, channels=1) Float32 target intensity image obtained by RGBToGray before calling this function.
target_depth_dx: (rows, cols, channels=1) Float32 target depth gradient image along x-axis obtained by FilterSobel before calling this function.
target_depth_dy: (rows, cols, channels=1) Float32 target depth gradient image along y-axis obtained by FilterSobel before calling this function.
target_intensity_dx: (rows, cols, channels=1) Float32 target intensity gradient image along x-axis obtained by FilterSobel before calling this function.
target_intensity_dy: (rows, cols, channels=1) Float32 target intensity gradient image along y-axis obtained by FilterSobel before calling this function.
source_vertex_map: (rows, cols, channels=3) Float32 source vertex image obtained by CreateVertexMap before calling this function.
intrinsics: (3, 3) intrinsic matrix for projection.
init_source_to_target: (4, 4) initial transformation matrix from source to target.
depth_outlier_trunc: Depth difference threshold used to filter projective associations.
depth_huber_delta: Huber norm parameter used in depth loss.
intensity_huber_delta: Huber norm parameter used in intensity loss.
Returns
odometry result, with a (4, 4) optimized transformation matrix from source to target, inlier RMSE, and fitness.
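
A minimal sketch of one Hybrid iteration, assuming using namespace open3d;, Float32 t::geometry::Image inputs src_depth / dst_depth (depth already clipped and scaled to meters), src_intensity / dst_intensity (grayscale images, e.g. from RGBToGray), and a (3, 3) intrinsics tensor; the numeric thresholds are illustrative, and the final line composes the returned increment with the running estimate, mirroring how RGBDOdometryMultiScale accumulates per-iteration results.

    // Target gradients and source vertex map required by the loss terms.
    auto dst_depth_grad = dst_depth.FilterSobel();          // pair: (dx, dy)
    auto dst_intensity_grad = dst_intensity.FilterSobel();  // pair: (dx, dy)
    t::geometry::Image src_vertex_map = src_depth.CreateVertexMap(intrinsics);

    // One iteration around the current estimate T (here: identity).
    core::Tensor T = core::Tensor::Eye(4, core::Float64, core::Device("CPU:0"));
    auto delta = t::pipelines::odometry::ComputeOdometryResultHybrid(
            src_depth.AsTensor(), dst_depth.AsTensor(),
            src_intensity.AsTensor(), dst_intensity.AsTensor(),
            dst_depth_grad.first.AsTensor(), dst_depth_grad.second.AsTensor(),
            dst_intensity_grad.first.AsTensor(),
            dst_intensity_grad.second.AsTensor(),
            src_vertex_map.AsTensor(), intrinsics, T,
            /*depth_outlier_trunc=*/0.07f, /*depth_huber_delta=*/0.05f,
            /*intensity_huber_delta=*/0.1f);
    T = delta.transformation_.Matmul(T);  // accumulate the incremental update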

◆ ComputeOdometryResultIntensity()

OdometryResult open3d::t::pipelines::odometry::ComputeOdometryResultIntensity ( const core::Tensor &source_depth,
const core::Tensor &target_depth,
const core::Tensor &source_intensity,
const core::Tensor &target_intensity,
const core::Tensor &target_intensity_dx,
const core::Tensor &target_intensity_dy,
const core::Tensor &source_vertex_map,
const core::Tensor &intrinsics,
const core::Tensor &init_source_to_target,
const float depth_outlier_trunc,
const float intensity_huber_delta
)

Estimates the 4x4 rigid transformation T from source to target, with inlier rmse and fitness. Performs one iteration of RGBD odometry using loss function \((I_p - I_q)^2\), where \( I_p \) denotes the intensity at pixel p in the source, \( I_q \) denotes the intensity at pixel q in the target. q is obtained by transforming p with init_source_to_target then projecting with intrinsics. Real-time visual odometry from dense RGB-D images, ICCV Workshops, 2011.

Parameters
source_depth: (rows, cols, channels=1) Float32 source depth image obtained by PreprocessDepth before calling this function.
target_depth: (rows, cols, channels=1) Float32 target depth image obtained by PreprocessDepth before calling this function.
source_intensity: (rows, cols, channels=1) Float32 source intensity image obtained by RGBToGray before calling this function.
target_intensity: (rows, cols, channels=1) Float32 target intensity image obtained by RGBToGray before calling this function.
target_intensity_dx: (rows, cols, channels=1) Float32 target intensity gradient image along x-axis obtained by FilterSobel before calling this function.
target_intensity_dy: (rows, cols, channels=1) Float32 target intensity gradient image along y-axis obtained by FilterSobel before calling this function.
source_vertex_map: (rows, cols, channels=3) Float32 source vertex image obtained by CreateVertexMap before calling this function.
intrinsics: (3, 3) intrinsic matrix for projection.
init_source_to_target: (4, 4) initial transformation matrix from source to target.
depth_outlier_trunc: Depth difference threshold used to filter projective associations.
intensity_huber_delta: Huber norm parameter used in intensity loss.
Returns
odometry result, with a (4, 4) optimized transformation matrix from source to target, inlier RMSE, and fitness.
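
The intensity-only variant takes the same preprocessed inputs minus the depth gradients and the depth Huber delta; a corresponding sketch, reusing the variables and assumptions of the Hybrid sketch above:

    auto delta = t::pipelines::odometry::ComputeOdometryResultIntensity(
            src_depth.AsTensor(), dst_depth.AsTensor(),
            src_intensity.AsTensor(), dst_intensity.AsTensor(),
            dst_intensity_grad.first.AsTensor(),
            dst_intensity_grad.second.AsTensor(),
            src_vertex_map.AsTensor(), intrinsics, T,
            /*depth_outlier_trunc=*/0.07f, /*intensity_huber_delta=*/0.1f);
    T = delta.transformation_.Matmul(T);  // compose as in the Hybrid sketch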

◆ ComputeOdometryResultPointToPlane()

OdometryResult open3d::t::pipelines::odometry::ComputeOdometryResultPointToPlane ( const core::Tensor &source_vertex_map,
const core::Tensor &target_vertex_map,
const core::Tensor &target_normal_map,
const core::Tensor &intrinsics,
const core::Tensor &init_source_to_target,
const float depth_outlier_trunc,
const float depth_huber_delta
)

Estimates the 4x4 rigid transformation T from source to target, with inlier rmse and fitness. Performs one iteration of RGBD odometry using loss function \([(V_p - V_q)^T N_p]^2\), where \( V_p \) denotes the vertex at pixel p in the source, \( V_q \) denotes the vertex at pixel q in the target, \( N_p \) denotes the normal at pixel p in the source. q is obtained by transforming p with init_source_to_target then projecting with intrinsics. KinectFusion, ISMAR 2011.

Parameters
source_vertex_map: (rows, cols, channels=3) Float32 source vertex image obtained by CreateVertexMap before calling this function.
target_vertex_map: (rows, cols, channels=3) Float32 target vertex image obtained by CreateVertexMap before calling this function.
target_normal_map: (rows, cols, channels=3) Float32 target normal image obtained by CreateNormalMap before calling this function.
intrinsics: (3, 3) intrinsic matrix for projection.
init_source_to_target: (4, 4) initial transformation matrix from source to target.
depth_outlier_trunc: Depth difference threshold used to filter projective associations.
depth_huber_delta: Huber norm parameter used in depth loss.
Returns
odometry result, with a (4, 4) optimized transformation matrix from source to target, inlier RMSE, and fitness.
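
A minimal sketch of an iterative point-to-plane refinement built from this single-iteration primitive, assuming using namespace open3d;, Float32 depth images src_depth / dst_depth already scaled to meters, and a (3, 3) intrinsics tensor; the iteration count and thresholds are illustrative, and the composition step treats the returned transformation as a per-iteration increment as in the sketches above.

    // Build the vertex/normal maps this function expects (see parameters above).
    t::geometry::Image src_vertex_map = src_depth.CreateVertexMap(intrinsics);
    t::geometry::Image dst_vertex_map = dst_depth.CreateVertexMap(intrinsics);
    t::geometry::Image dst_normal_map = dst_vertex_map.CreateNormalMap();

    core::Tensor T = core::Tensor::Eye(4, core::Float64, core::Device("CPU:0"));
    for (int iter = 0; iter < 10; ++iter) {
        auto delta = t::pipelines::odometry::ComputeOdometryResultPointToPlane(
                src_vertex_map.AsTensor(), dst_vertex_map.AsTensor(),
                dst_normal_map.AsTensor(), intrinsics, T,
                /*depth_outlier_trunc=*/0.07f, /*depth_huber_delta=*/0.05f);
        T = delta.transformation_.Matmul(T);  // accumulate the update
    }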

◆ RGBDOdometryMultiScale()

OdometryResult open3d::t::pipelines::odometry::RGBDOdometryMultiScale ( const t::geometry::RGBDImage &source,
const t::geometry::RGBDImage &target,
const core::Tensor &intrinsics,
const core::Tensor &init_source_to_target = core::Tensor::Eye(4, core::Float64, core::Device("CPU:0")),
const float depth_scale = 1000.0f,
const float depth_max = 3.0f,
const std::vector< OdometryConvergenceCriteria > &criteria_list = {10, 5, 3},
const Method method = Method::Hybrid,
const OdometryLossParams &params = OdometryLossParams()
)

Creates an RGBD image pyramid from the original source and target RGBD images and performs hierarchical odometry using the specified method. Suitable for offline odometry where peak performance is not required and previously computed vertex/normal maps are not reused. Input RGBD images hold a depth image (UInt16 or Float32) with a scale factor and a color image (UInt8 x 3).

Parameters
source: Source RGBD image.
target: Target RGBD image.
intrinsics: (3, 3) intrinsic matrix for projection, of type core::Float64, on CPU.
init_source_to_target: (4, 4) initial transformation matrix from source to target, of type core::Float64, on CPU.
depth_scale: Converts depth pixel values to meters by dividing by this scale factor.
depth_max: Maximum depth beyond which the depth image is truncated to suppress noisy measurements.
criteria_list: Convergence criteria used to define and terminate the iterations at each pyramid level, ordered from coarse to fine. A plain list of iteration counts (e.g., {10, 5, 3}) also works via the implicit conversion to OdometryConvergenceCriteria.
method: Method used for RGBD odometry.
params: Parameters used in the loss functions, including the outlier rejection threshold and Huber norm parameters.
Returns
odometry result, with a (4, 4) optimized transformation matrix from source to target, inlier RMSE, and fitness.
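
A minimal end-to-end sketch; the file names and intrinsic values are placeholders, and the default arguments are spelled out for clarity.

    #include "open3d/Open3D.h"

    int main() {
        using namespace open3d;

        // Load color/depth pairs (file names are placeholders).
        t::geometry::Image src_color, src_depth, dst_color, dst_depth;
        t::io::ReadImage("source_color.png", src_color);
        t::io::ReadImage("source_depth.png", src_depth);
        t::io::ReadImage("target_color.png", dst_color);
        t::io::ReadImage("target_depth.png", dst_depth);

        t::geometry::RGBDImage source(src_color, src_depth);
        t::geometry::RGBDImage target(dst_color, dst_depth);

        // (3, 3) Float64 pinhole intrinsics on CPU (illustrative values).
        core::Tensor intrinsics = core::Tensor::Init<double>(
                {{525.0, 0.0, 319.5}, {0.0, 525.0, 239.5}, {0.0, 0.0, 1.0}});

        // Identity initialization, three pyramid levels (10/5/3 iterations),
        // hybrid (intensity + depth) odometry with default loss parameters.
        t::pipelines::odometry::OdometryResult result =
                t::pipelines::odometry::RGBDOdometryMultiScale(
                        source, target, intrinsics,
                        core::Tensor::Eye(4, core::Float64,
                                          core::Device("CPU:0")),
                        /*depth_scale=*/1000.0f, /*depth_max=*/3.0f,
                        {10, 5, 3}, t::pipelines::odometry::Method::Hybrid);

        utility::LogInfo("Fitness: {}, inlier RMSE: {}", result.fitness_,
                         result.inlier_rmse_);
        return 0;
    }

result.transformation_ holds the (4, 4) source-to-target transform and can be passed, for example, to ComputeOdometryInformationMatrix above.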

◆ RGBDOdometryMultiScaleHybrid()

OdometryResult open3d::t::pipelines::odometry::RGBDOdometryMultiScaleHybrid ( const RGBDImage &source,
const RGBDImage &target,
const Tensor &intrinsics,
const Tensor &trans,
const float depth_scale,
const float depth_max,
const std::vector< OdometryConvergenceCriteria > &criteria,
const OdometryLossParams &params
)

◆ RGBDOdometryMultiScaleIntensity()

OdometryResult open3d::t::pipelines::odometry::RGBDOdometryMultiScaleIntensity ( const RGBDImage &source,
const RGBDImage &target,
const Tensor &intrinsic,
const Tensor &trans,
const float depth_scale,
const float depth_max,
const std::vector< OdometryConvergenceCriteria > &criteria,
const OdometryLossParams &params
)

◆ RGBDOdometryMultiScalePointToPlane()

OdometryResult open3d::t::pipelines::odometry::RGBDOdometryMultiScalePointToPlane ( const RGBDImage &source,
const RGBDImage &target,
const Tensor &intrinsics,
const Tensor &trans,
const float depth_scale,
const float depth_max,
const std::vector< OdometryConvergenceCriteria > &criteria,
const OdometryLossParams &params
)