|
OdometryResult | RGBDOdometryMultiScalePointToPlane (const RGBDImage &source, const RGBDImage &target, const Tensor &intrinsics, const Tensor &trans, const float depth_scale, const float depth_max, const std::vector< OdometryConvergenceCriteria > &criteria, const OdometryLossParams &params) |
|
OdometryResult | RGBDOdometryMultiScaleIntensity (const RGBDImage &source, const RGBDImage &target, const Tensor &intrinsic, const Tensor &trans, const float depth_scale, const float depth_max, const std::vector< OdometryConvergenceCriteria > &criteria, const OdometryLossParams &params) |
|
OdometryResult | RGBDOdometryMultiScaleHybrid (const RGBDImage &source, const RGBDImage &target, const Tensor &intrinsics, const Tensor &trans, const float depth_scale, const float depth_max, const std::vector< OdometryConvergenceCriteria > &criteria, const OdometryLossParams &params) |
|
OdometryResult | RGBDOdometryMultiScale (const t::geometry::RGBDImage &source, const t::geometry::RGBDImage &target, const core::Tensor &intrinsics, const core::Tensor &init_source_to_target=core::Tensor::Eye(4, core::Float64, core::Device("CPU:0")), const float depth_scale=1000.0f, const float depth_max=3.0f, const std::vector< OdometryConvergenceCriteria > &criteria_list={10, 5, 3}, const Method method=Method::Hybrid, const OdometryLossParams &params=OdometryLossParams()) |
| Create an RGBD image pyramid from the original source and target RGBD images, and perform hierarchical odometry using the specified method. Suitable for offline odometry where we do not need to push performance to the extreme and do not reuse previously computed vertex/normal maps. Input RGBD images hold a depth image (UInt16 or Float32) with a scale factor and a color image (UInt8 x 3).
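A minimal end-to-end sketch of driving this multi-scale front end; the umbrella header open3d/Open3D.h, the file names, and the PrimeSense-style intrinsic values are placeholders for illustration, not part of this API:

    // Sketch only: file names and intrinsic values are placeholders.
    #include "open3d/Open3D.h"

    using namespace open3d;
    using namespace open3d::t::pipelines::odometry;

    int main() {
        t::geometry::Image src_depth, src_color, tgt_depth, tgt_color;
        t::io::ReadImage("source_depth.png", src_depth);  // UInt16 depth
        t::io::ReadImage("source_color.png", src_color);  // UInt8 x 3 color
        t::io::ReadImage("target_depth.png", tgt_depth);
        t::io::ReadImage("target_color.png", tgt_color);

        t::geometry::RGBDImage source(src_color, src_depth);
        t::geometry::RGBDImage target(tgt_color, tgt_depth);

        // (3, 3) pinhole intrinsics; PrimeSense-style values for illustration.
        core::Tensor intrinsics = core::Tensor::Init<double>(
                {{525.0, 0.0, 319.5}, {0.0, 525.0, 239.5}, {0.0, 0.0, 1.0}});

        // Identity initialization, 3-level pyramid with 10/5/3 iterations.
        OdometryResult result = RGBDOdometryMultiScale(
                source, target, intrinsics,
                core::Tensor::Eye(4, core::Float64, core::Device("CPU:0")),
                /*depth_scale=*/1000.0f, /*depth_max=*/3.0f,
                {10, 5, 3}, Method::Hybrid);

        utility::LogInfo("Fitness: {}, inlier RMSE: {}", result.fitness_,
                         result.inlier_rmse_);
        return 0;
    }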
|
|
OdometryResult | ComputeOdometryResultPointToPlane (const core::Tensor &source_vertex_map, const core::Tensor &target_vertex_map, const core::Tensor &target_normal_map, const core::Tensor &intrinsics, const core::Tensor &init_source_to_target, const float depth_outlier_trunc, const float depth_huber_delta) |
| Estimates the 4x4 rigid transformation T from source to target, with inlier rmse and fitness. Performs one iteration of RGBD odometry using the loss function \([(V_p - V_q)^T N_p]^2\), where \( V_p \) denotes the vertex at pixel p in the source, \( V_q \) denotes the vertex at pixel q in the target, and \( N_p \) denotes the normal at pixel p in the source. q is obtained by transforming p with init_source_to_target and then projecting with intrinsics. Reference: KinectFusion, ISMAR 2011.
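A sketch of a single point-to-plane step, assuming the caller has already prepared the (rows, cols, 3) Float32 vertex and normal maps; the wrapper name and the truncation/Huber values below are illustrative only:

    #include "open3d/Open3D.h"

    namespace odo = open3d::t::pipelines::odometry;
    using open3d::core::Tensor;

    // One linearized point-to-plane update. The maps are assumed to be
    // (rows, cols, 3) Float32 tensors prepared beforehand; the wrapper name
    // and numeric thresholds are illustrative.
    odo::OdometryResult PointToPlaneStep(const Tensor &source_vertex_map,
                                         const Tensor &target_vertex_map,
                                         const Tensor &target_normal_map,
                                         const Tensor &intrinsics,
                                         const Tensor &init_source_to_target) {
        return odo::ComputeOdometryResultPointToPlane(
                source_vertex_map, target_vertex_map, target_normal_map,
                intrinsics, init_source_to_target,
                /*depth_outlier_trunc=*/0.07f,
                /*depth_huber_delta=*/0.05f);
    }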
|
|
OdometryResult | ComputeOdometryResultIntensity (const core::Tensor &source_depth, const core::Tensor &target_depth, const core::Tensor &source_intensity, const core::Tensor &target_intensity, const core::Tensor &target_intensity_dx, const core::Tensor &target_intensity_dy, const core::Tensor &source_vertex_map, const core::Tensor &intrinsics, const core::Tensor &init_source_to_target, const float depth_outlier_trunc, const float intensity_huber_delta) |
| Estimates the 4x4 rigid transformation T from source to target, with inlier rmse and fitness. Performs one iteration of RGBD odometry using the loss function \((I_p - I_q)^2\), where \( I_p \) denotes the intensity at pixel p in the source and \( I_q \) denotes the intensity at pixel q in the target. q is obtained by transforming p with init_source_to_target and then projecting with intrinsics. Reference: Real-time visual odometry from dense RGB-D images, ICCV Workshops, 2011.
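Because each call performs only one linearized update, the function is typically iterated with the previous estimate fed back as the next initialization. A sketch of such a loop, assuming all image tensors have already been preprocessed into Float32 maps; the iteration count and thresholds are placeholders:

    #include "open3d/Open3D.h"

    namespace odo = open3d::t::pipelines::odometry;
    using open3d::core::Tensor;

    // Illustrative Gauss-Newton-style loop around the one-step intensity
    // solver. All image tensors are assumed to be preprocessed Float32 maps;
    // the iteration count and thresholds are placeholders.
    Tensor RefineIntensity(const Tensor &source_depth, const Tensor &target_depth,
                           const Tensor &source_intensity,
                           const Tensor &target_intensity,
                           const Tensor &target_intensity_dx,
                           const Tensor &target_intensity_dy,
                           const Tensor &source_vertex_map,
                           const Tensor &intrinsics, Tensor trans) {
        for (int i = 0; i < 10; ++i) {
            odo::OdometryResult res = odo::ComputeOdometryResultIntensity(
                    source_depth, target_depth, source_intensity,
                    target_intensity, target_intensity_dx, target_intensity_dy,
                    source_vertex_map, intrinsics, trans,
                    /*depth_outlier_trunc=*/0.07f,
                    /*intensity_huber_delta=*/0.1f);
            // Per the Returns note, the returned transformation is the full
            // source-to-target estimate, so it becomes the next initialization.
            trans = res.transformation_;
        }
        return trans;
    }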
|
|
OdometryResult | ComputeOdometryResultHybrid (const core::Tensor &source_depth, const core::Tensor &target_depth, const core::Tensor &source_intensity, const core::Tensor &target_intensity, const core::Tensor &target_depth_dx, const core::Tensor &target_depth_dy, const core::Tensor &target_intensity_dx, const core::Tensor &target_intensity_dy, const core::Tensor &source_vertex_map, const core::Tensor &intrinsics, const core::Tensor &init_source_to_target, const float depth_outlier_trunc, const float depth_huber_delta, const float intensity_huber_delta) |
| Estimates the 4x4 rigid transformation T from source to target, with inlier rmse and fitness. Performs one iteration of RGBD odometry using the loss function \((I_p - I_q)^2 + \lambda(D_p - (D_q)')^2\), where \( I_p \) denotes the intensity at pixel p in the source, \( I_q \) denotes the intensity at pixel q in the target, \( D_p \) denotes the depth at pixel p in the source, and \( D_q \) denotes the depth at pixel q in the target. q is obtained by transforming p with init_source_to_target and then projecting with intrinsics. Reference: J. Park, Q.Y. Zhou, and V. Koltun, Colored Point Cloud Registration Revisited, ICCV, 2017.
|
|
core::Tensor | ComputeOdometryInformationMatrix (const geometry::Image &source_depth, const geometry::Image &target_depth, const core::Tensor &intrinsic, const core::Tensor &source_to_target, const float dist_thr, const float depth_scale, const float depth_max) |
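A sketch of using this helper to build the information matrix of a pose-graph edge once odometry has produced a source-to-target estimate; the file names, intrinsic values, and thresholds are placeholders, and the 6x6 shape follows the usual pose-graph convention rather than anything stated above:

    #include "open3d/Open3D.h"

    using namespace open3d;
    namespace odo = open3d::t::pipelines::odometry;

    // Sketch: information matrix for a pose-graph edge between two depth
    // frames, given an already-estimated source-to-target transform. File
    // names, intrinsic values, and thresholds are placeholders.
    core::Tensor EdgeInformation(const core::Tensor &source_to_target) {
        t::geometry::Image source_depth, target_depth;
        t::io::ReadImage("source_depth.png", source_depth);
        t::io::ReadImage("target_depth.png", target_depth);

        core::Tensor intrinsic = core::Tensor::Init<double>(
                {{525.0, 0.0, 319.5}, {0.0, 525.0, 239.5}, {0.0, 0.0, 1.0}});

        // dist_thr: maximum correspondence distance (in meters) for a pixel
        // pair to contribute to the information matrix.
        return odo::ComputeOdometryInformationMatrix(
                source_depth, target_depth, intrinsic, source_to_target,
                /*dist_thr=*/0.07f, /*depth_scale=*/1000.0f,
                /*depth_max=*/3.0f);
    }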
|
OdometryResult open3d::t::pipelines::odometry::ComputeOdometryResultHybrid (const core::Tensor &source_depth, const core::Tensor &target_depth, const core::Tensor &source_intensity, const core::Tensor &target_intensity, const core::Tensor &target_depth_dx, const core::Tensor &target_depth_dy, const core::Tensor &target_intensity_dx, const core::Tensor &target_intensity_dy, const core::Tensor &source_vertex_map, const core::Tensor &intrinsics, const core::Tensor &init_source_to_target, const float depth_outlier_trunc, const float depth_huber_delta, const float intensity_huber_delta)

Estimates the 4x4 rigid transformation T from source to target, with inlier rmse and fitness. Performs one iteration of RGBD odometry using the loss function \((I_p - I_q)^2 + \lambda(D_p - (D_q)')^2\), where \( I_p \) denotes the intensity at pixel p in the source, \( I_q \) denotes the intensity at pixel q in the target, \( D_p \) denotes the depth at pixel p in the source, and \( D_q \) denotes the depth at pixel q in the target. q is obtained by transforming p with init_source_to_target and then projecting with intrinsics. Reference: J. Park, Q.Y. Zhou, and V. Koltun, Colored Point Cloud Registration Revisited, ICCV, 2017.
- Parameters
source_depth | (rows, cols, channels=1) Float32 source depth image obtained by PreprocessDepth before calling this function. |
target_depth | (rows, cols, channels=1) Float32 target depth image obtained by PreprocessDepth before calling this function. |
source_intensity | (rows, cols, channels=1) Float32 source intensity image obtained by RGBToGray before calling this function. |
target_intensity | (rows, cols, channels=1) Float32 target intensity image obtained by RGBToGray before calling this function. |
target_depth_dx | (rows, cols, channels=1) Float32 target depth gradient image along x-axis obtained by FilterSobel before calling this function. |
target_depth_dy | (rows, cols, channels=1) Float32 target depth gradient image along y-axis obtained by FilterSobel before calling this function. |
target_intensity_dx | (rows, cols, channels=1) Float32 target intensity gradient image along x-axis obtained by FilterSobel before calling this function. |
target_intensity_dy | (rows, cols, channels=1) Float32 target intensity gradient image along y-axis obtained by FilterSobel before calling this function. |
source_vertex_map | (rows, cols, channels=3) Float32 source vertex image obtained by CreateVertexMap before calling this function. |
intrinsics | (3, 3) intrinsic matrix for projection. |
init_source_to_target | (4, 4) initial transformation matrix from source to target. |
depth_outlier_trunc | Depth difference threshold used to filter projective associations. |
depth_huber_delta | Huber norm parameter used in depth loss. |
intensity_huber_delta | Huber norm parameter used in intensity loss. |
- Returns
- odometry result, with (4, 4) optimized transformation matrix from source to target, inlier rmse, and fitness.
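Putting the parameter requirements above together, a hedged sketch of one hybrid step including the preprocessing they describe. PreprocessDepth is not shown in this extract, so ClipTransform is used as a stand-in; the exact t::geometry::Image signatures assumed below (FilterSobel returning a dx/dy gradient pair, CreateVertexMap taking the intrinsics) and all numeric thresholds are assumptions:

    #include "open3d/Open3D.h"

    using namespace open3d;
    namespace odo = open3d::t::pipelines::odometry;

    // One hybrid step including the preprocessing named in the parameter list
    // above. ClipTransform stands in for PreprocessDepth; image-method
    // signatures and thresholds are assumptions for illustration.
    odo::OdometryResult HybridStep(const t::geometry::Image &source_depth_raw,
                                   const t::geometry::Image &target_depth_raw,
                                   const t::geometry::Image &source_color,
                                   const t::geometry::Image &target_color,
                                   const core::Tensor &intrinsics,
                                   const core::Tensor &init_source_to_target) {
        const float depth_scale = 1000.0f, depth_max = 3.0f;

        // Depth -> metric Float32 with out-of-range pixels cleared.
        t::geometry::Image source_depth =
                source_depth_raw.ClipTransform(depth_scale, 0.0f, depth_max);
        t::geometry::Image target_depth =
                target_depth_raw.ClipTransform(depth_scale, 0.0f, depth_max);

        // Color -> single-channel Float32 intensity.
        t::geometry::Image source_intensity =
                source_color.To(core::Float32).RGBToGray();
        t::geometry::Image target_intensity =
                target_color.To(core::Float32).RGBToGray();

        // Target-side Sobel gradients of depth and intensity.
        auto target_depth_grad = target_depth.FilterSobel();
        auto target_intensity_grad = target_intensity.FilterSobel();

        // Source vertex map from the preprocessed depth.
        t::geometry::Image source_vertex_map =
                source_depth.CreateVertexMap(intrinsics);

        return odo::ComputeOdometryResultHybrid(
                source_depth.AsTensor(), target_depth.AsTensor(),
                source_intensity.AsTensor(), target_intensity.AsTensor(),
                target_depth_grad.first.AsTensor(),
                target_depth_grad.second.AsTensor(),
                target_intensity_grad.first.AsTensor(),
                target_intensity_grad.second.AsTensor(),
                source_vertex_map.AsTensor(), intrinsics,
                init_source_to_target,
                /*depth_outlier_trunc=*/0.07f,
                /*depth_huber_delta=*/0.05f,
                /*intensity_huber_delta=*/0.1f);
    }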