Customized Integration

You can prototype a new RGB-D volumetric reconstruction algorithm with additional properties (e.g. semantic labels) while maintaining reasonable performance. An example can be found at examples/python/t_reconstruction_system/integrate_custom.py.
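The extra properties are declared when the voxel block grid is allocated. The following is a minimal sketch, where the additional 'label' attribute (and its dtype/channel setup) is an illustrative assumption; the shipped example allocates only 'tsdf', 'weight', and 'color':

```python
import open3d as o3d
import open3d.core as o3c

device = o3c.Device('CUDA:0' if o3c.cuda.is_available() else 'CPU:0')

# Voxel block grid with an extra per-voxel 'label' channel (assumed).
vbg = o3d.t.geometry.VoxelBlockGrid(
    attr_names=('tsdf', 'weight', 'color', 'label'),
    attr_dtypes=(o3c.float32, o3c.float32, o3c.float32, o3c.float32),
    attr_channels=((1), (1), (3), (1)),
    voxel_size=3.0 / 512,
    block_resolution=16,
    block_count=50000,
    device=device)
```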
Activation

The frustum block selection remains the same, but we then manually activate these blocks and obtain their buffer indices in the hash map:
```python
# examples/python/t_reconstruction_system/integrate_custom.py

# Get active frustum block coordinates from input
frustum_block_coords = vbg.compute_unique_block_coordinates(
    depth, intrinsic, extrinsic, config.depth_scale, config.depth_max)
# Activate them in the underlying hash map (may have been inserted)
vbg.hashmap().activate(frustum_block_coords)

# Find buf indices in the underlying engine
buf_indices, masks = vbg.hashmap().find(frustum_block_coords)
```
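The returned masks indicate which queried block coordinates were found in the buffer. A small illustrative check (not part of the example script) could be:

```python
# After activation, every frustum block should be found, so masks is
# expected to be all True; filtering keeps the valid buffer indices.
buf_indices = buf_indices[masks]
print('Active blocks in the current frustum:', buf_indices.shape[0])
```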
Voxel Indices
We can then unroll voxel indices in these blocks into a flattened array, along with their corresponding voxel coordinates.
```python
# examples/python/t_reconstruction_system/integrate_custom.py
voxel_coords, voxel_indices = vbg.voxel_coordinates_and_flattened_indices(
    buf_indices)
```
This completes the preparation. We can now perform customized geometry transformations through the Tensor interface, in the same fashion as we would in NumPy or PyTorch.
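As a quick illustrative sanity check of what the preparation produced (the shapes below are what the API is expected to return, one row per voxel in the activated blocks):

```python
# voxel_coords: (N, 3) float32 voxel centers in world coordinates.
# voxel_indices: (N,) int64 flat indices into the attribute buffers.
print(voxel_coords.shape, voxel_coords.dtype)
print(voxel_indices.shape, voxel_indices.dtype)
```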
Geometry transformation

We first transform the voxel coordinates to the frame's coordinate system, project them to image space, and filter out-of-bound correspondences:
```python
# examples/python/t_reconstruction_system/integrate_custom.py

# Transform voxel centers from world to camera coordinates.
extrinsic_dev = extrinsic.to(device, o3c.float32)
xyz = extrinsic_dev[:3, :3] @ voxel_coords.T() + extrinsic_dev[:3, 3:]

# Project onto the image plane with the pinhole intrinsics.
intrinsic_dev = intrinsic.to(device, o3c.float32)
uvd = intrinsic_dev @ xyz
d = uvd[2]
u = (uvd[0] / d).round().to(o3c.int64)
v = (uvd[1] / d).round().to(o3c.int64)
# (Timing instrumentation from the example script.)
o3d.core.cuda.synchronize()
end = time.time()

start = time.time()
# Discard projections behind the camera or outside the image bounds.
mask_proj = (d > 0) & (u >= 0) & (v >= 0) & (u < depth.columns) & (
    v < depth.rows)

v_proj = v[mask_proj]
u_proj = u[mask_proj]
d_proj = d[mask_proj]
```
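In matrix form, this is the standard pinhole projection: with the extrinsic matrix $[R \mid \mathbf{t}]$ mapping world to camera coordinates and the intrinsic matrix $K$,

$$
d \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K \left( R\,\mathbf{p} + \mathbf{t} \right),
$$

where $\mathbf{p}$ is a voxel center in world coordinates and $d$ is its depth in the camera frame.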
Customized integration

With the data association in hand, we can conduct the integration. In this example, we show conventional TSDF integration written in vectorized Python code:
- Read the associated RGB-D properties from the color/depth images at the projected u, v indices;
- Read the voxels from the voxel buffer arrays (vbg.attribute) at the masked voxel_indices;
- Perform in-place modification.
```python
# examples/python/t_reconstruction_system/integrate_custom.py

# Read depth at the projected pixels and convert to metric units.
depth_readings = depth.as_tensor()[v_proj, u_proj, 0].to(
    o3c.float32) / config.depth_scale
sdf = depth_readings - d_proj

# Keep only voxels with valid depth inside the truncation band.
mask_inlier = (depth_readings > 0) \
    & (depth_readings < config.depth_max) \
    & (sdf >= -trunc)

sdf[sdf >= trunc] = trunc
sdf = sdf / trunc
weight = vbg.attribute('weight').reshape((-1, 1))
tsdf = vbg.attribute('tsdf').reshape((-1, 1))

valid_voxel_indices = voxel_indices[mask_proj][mask_inlier]
w = weight[valid_voxel_indices]
wp = w + 1

# Running weighted-average update of the TSDF values.
tsdf[valid_voxel_indices] \
    = (tsdf[valid_voxel_indices] * w +
       sdf[mask_inlier].reshape(w.shape)) / (wp)
if config.integrate_color:
    color = o3d.t.io.read_image(color_file_names[i]).to(device)
    color_readings = color.as_tensor()[v_proj, u_proj].to(o3c.float32)

    color = vbg.attribute('color').reshape((-1, 3))
    color[valid_voxel_indices] \
        = (color[valid_voxel_indices] * w +
           color_readings[mask_inlier]) / (wp)

weight[valid_voxel_indices] = wp
```
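The same masked, in-place update pattern extends to custom properties. Below is a hypothetical sketch that fuses per-pixel semantic labels, assuming the grid was allocated with an extra 'label' attribute (as sketched earlier) and that label_image is a single-channel image aligned with the depth frame:

```python
# Hypothetical label fusion; 'label' and label_image are illustrative.
label = vbg.attribute('label').reshape((-1, 1))
label_readings = label_image.as_tensor()[v_proj, u_proj].to(o3c.float32)

label[valid_voxel_indices] \
    = (label[valid_voxel_indices] * w +
       label_readings[mask_inlier].reshape(w.shape)) / wp
```

Note that a weighted average is only meaningful for continuous properties; discrete labels would typically need a different update rule (e.g. per-class counters), which still follows the same indexing pattern.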
You may follow the example and adapt it to your customized properties. Open3D supports conversion from and to PyTorch tensors without any memory copy; see PyTorch I/O with DLPack memory map. This can be used to leverage PyTorch's capabilities such as automatic differentiation and other operators.
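A minimal sketch of such a zero-copy round trip (assuming PyTorch is installed; the variable names are illustrative):

```python
import torch
import torch.utils.dlpack

# Wrap the TSDF buffer as a PyTorch tensor without copying memory.
tsdf_torch = torch.utils.dlpack.from_dlpack(
    vbg.attribute('tsdf').reshape((-1, 1)).to_dlpack())

# ... run differentiable PyTorch ops on tsdf_torch here ...

# Convert back to an Open3D tensor, again without a copy.
tsdf_o3d = o3c.Tensor.from_dlpack(torch.utils.dlpack.to_dlpack(tsdf_torch))
```

Because the memory is shared, in-place PyTorch updates write directly into the voxel block grid's attribute buffer.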