Customized Integration¶
You can prototype a new RGB-D volumetric reconstruction algorithm with additional properties (e.g. semantic labels) while maintaining reasonable performance. An example can be found at examples/python/t_reconstruction_system/integrate_custom.py.
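For instance, a voxel block grid can be allocated with an extra per-voxel attribute next to the usual TSDF, weight, and color channels. The snippet below is an illustrative sketch rather than part of the example script; the 'label' attribute name, its dtype, and the grid parameters are assumptions you would adapt to your own property:

```python
import open3d as o3d
import open3d.core as o3c

device = o3c.Device('CUDA:0' if o3d.core.cuda.is_available() else 'CPU:0')

# Hypothetical setup: a VoxelBlockGrid with an extra one-channel 'label'
# attribute in addition to the standard tsdf/weight/color channels.
vbg = o3d.t.geometry.VoxelBlockGrid(
    attr_names=('tsdf', 'weight', 'color', 'label'),
    attr_dtypes=(o3c.float32, o3c.float32, o3c.float32, o3c.int32),
    attr_channels=((1), (1), (3), (1)),
    voxel_size=3.0 / 512,
    block_resolution=16,
    block_count=50000,
    device=device)
```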
Activation¶
The frustum block selection remains the same, but then we manually activate these blocks and obtain their buffer indices in the Hash map:
```python
start = time.time()

# Get active frustum block coordinates from the input depth map
frustum_block_coords = vbg.compute_unique_block_coordinates(
    depth, intrinsic, extrinsic, config.depth_scale, config.depth_max)

# Activate them in the underlying hash map (they may already be inserted)
vbg.hashmap().activate(frustum_block_coords)

# Find their buffer indices in the underlying engine
buf_indices, masks = vbg.hashmap().find(frustum_block_coords)
o3d.core.cuda.synchronize()
end = time.time()
```
Voxel Indices¶
We can then unroll voxel indices in these blocks into a flattened array, along with their corresponding voxel coordinates.
```python
start = time.time()
# Unroll voxel indices in the active blocks into a flattened array,
# together with their corresponding voxel coordinates.
voxel_coords, voxel_indices = vbg.voxel_coordinates_and_flattened_indices(
    buf_indices)
o3d.core.cuda.synchronize()
end = time.time()
```
Up to now we have finished the preparation. We can now perform customized geometry transformation in the Tensor interface, in the same fashion as we would in NumPy or PyTorch.
Geometry transformation¶
We first transform the voxel coordinates to the frame’s coordinate system, project them to the image space, and filter out-of-bound correspondences:
```python
start = time.time()
# Project the voxel coordinates to the depth image and find associations
# (3, N) -> (2, N)
extrinsic_dev = extrinsic.to(device, o3c.float32)
xyz = extrinsic_dev[:3, :3] @ voxel_coords.T() + extrinsic_dev[:3, 3:]

intrinsic_dev = intrinsic.to(device, o3c.float32)
uvd = intrinsic_dev @ xyz
d = uvd[2]
u = (uvd[0] / d).round().to(o3c.int64)
v = (uvd[1] / d).round().to(o3c.int64)

# Filter out-of-bound correspondences
mask_proj = (d > 0) & (u >= 0) & (v >= 0) & (u < depth.columns) & (
    v < depth.rows)

v_proj = v[mask_proj]
u_proj = u[mask_proj]
d_proj = d[mask_proj]
o3d.core.cuda.synchronize()
end = time.time()
```
Customized integration¶
With the data association, we are able to conduct integration. In this example, we show the conventional TSDF integration written in vectorized Python code:
- Read the associated RGB-D properties from the color/depth images at the associated u, v indices;
- Read the voxels from the voxel buffer arrays (vbg.attribute) at masked voxel_indices;
- Perform in-place modification.
```python
start = time.time()
# Read the associated depth measurements at the projected (u, v) locations
depth_readings = depth.as_tensor()[v_proj, u_proj, 0].to(
    o3c.float32) / config.depth_scale
sdf = depth_readings - d_proj

mask_inlier = (depth_readings > 0) \
    & (depth_readings < config.depth_max) \
    & (sdf >= -trunc)

sdf[sdf >= trunc] = trunc
sdf = sdf / trunc

# Read the voxel buffer arrays and select the valid voxels
weight = vbg.attribute('weight').reshape((-1, 1))
tsdf = vbg.attribute('tsdf').reshape((-1, 1))

valid_voxel_indices = voxel_indices[mask_proj][mask_inlier]
w = weight[valid_voxel_indices]
wp = w + 1

# In-place weighted-average TSDF update
tsdf[valid_voxel_indices] \
    = (tsdf[valid_voxel_indices] * w +
       sdf[mask_inlier].reshape(w.shape)) / (wp)

if config.integrate_color:
    color = o3d.t.io.read_image(color_file_names[i]).to(device)
    color_readings = color.as_tensor()[v_proj, u_proj].to(o3c.float32)

    color = vbg.attribute('color').reshape((-1, 3))
    color[valid_voxel_indices] \
        = (color[valid_voxel_indices] * w +
           color_readings[mask_inlier]) / (wp)

weight[valid_voxel_indices] = wp
o3d.core.cuda.synchronize()
end = time.time()

print('Saving to {}...'.format(config.path_npz))
vbg.save(config.path_npz)
print('Saving finished')
```
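The same masked indices can drive the update of any additional property registered on the voxel block grid. The fragment below is a hedged sketch, not part of integrate_custom.py; it assumes a hypothetical one-channel 'label' attribute (as in the constructor sketch above) and a per-pixel o3c.Tensor label_image with the same height and width as the depth image:

```python
# Sketch only: 'label' and label_image are hypothetical and must be
# provided by your own pipeline.
label = vbg.attribute('label').reshape((-1, 1))
label_readings = label_image[v_proj, u_proj].to(o3c.int32)

# Overwrite the label of every valid voxel; a fusion rule such as
# majority voting could be substituted here.
label[valid_voxel_indices] = label_readings[mask_inlier].reshape((-1, 1))
```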
You may follow the example and adapt it to your customized properties. Open3D supports conversion from and to PyTorch tensors without any memory copy; see PyTorch I/O with DLPack memory map. This can be used to leverage PyTorch's capabilities such as automatic differentiation and other operators.
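As a rough illustration of that zero-copy bridge (a sketch, assuming PyTorch is installed with a device configuration compatible with your Open3D build), an Open3D tensor can be wrapped as a torch tensor and back via DLPack:

```python
import open3d.core as o3c
import torch
import torch.utils.dlpack

# Open3D -> PyTorch: both tensors share the same memory.
o3d_tensor = o3c.Tensor([[1.0, 2.0], [3.0, 4.0]], dtype=o3c.float32)
torch_tensor = torch.utils.dlpack.from_dlpack(o3d_tensor.to_dlpack())

# PyTorch -> Open3D, again without copying.
o3d_roundtrip = o3c.Tensor.from_dlpack(torch.utils.dlpack.to_dlpack(torch_tensor))

# In-place edits through one view are visible through the other.
torch_tensor[0, 0] = 100.0
print(o3d_tensor[0, 0].item())  # 100.0
```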