Register fragments
Once the fragments of the scene are created, the next step is to align them in a global space.
Input arguments
This script runs with python run_system.py [config] --register. In [config], ["path_dataset"] should have a subfolder fragments, which stores the fragments as .ply files and their pose graphs as .json files.
The main function runs make_posegraph_for_scene and optimize_posegraph_for_scene. The first function performs pairwise registration. The second function performs multiway registration.
Preprocess point cloud
# examples/python/reconstruction_system/register_fragments.py
def preprocess_point_cloud(pcd, config):
    voxel_size = config["voxel_size"]
    pcd_down = pcd.voxel_down_sample(voxel_size)
    pcd_down.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=voxel_size * 2.0,
                                             max_nn=30))
    pcd_fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        pcd_down,
        o3d.geometry.KDTreeSearchParamHybrid(radius=voxel_size * 5.0,
                                             max_nn=100))
    return (pcd_down, pcd_fpfh)
This function downsamples a point cloud to make it sparser and regularly distributed. Normals and FPFH features are precomputed. See Voxel downsampling, Vertex normal estimation, and Extract geometric feature for more details.
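As a quick sanity check, the function can be run on a single fragment to inspect the effect of downsampling and the size of the FPFH features. The file name and voxel size below are illustrative assumptions, not values taken from the script:

# Sketch: inspect the output of preprocess_point_cloud on one fragment.
# The path and voxel size are placeholders, not the script's actual values.
import open3d as o3d

config = {"voxel_size": 0.05}
pcd = o3d.io.read_point_cloud("fragments/fragment_000.ply")
pcd_down, pcd_fpfh = preprocess_point_cloud(pcd, config)
print("points before/after downsampling:", len(pcd.points), len(pcd_down.points))
print("FPFH feature matrix shape:", pcd_fpfh.data.shape)  # 33 x number of points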
Compute initial registration
# examples/python/reconstruction_system/register_fragments.py
def compute_initial_registration(s, t, source_down, target_down, source_fpfh,
                                 target_fpfh, path_dataset, config):

    if t == s + 1:  # odometry case
        print("Using RGBD odometry")
        pose_graph_frag = o3d.io.read_pose_graph(
            join(path_dataset,
                 config["template_fragment_posegraph_optimized"] % s))
        n_nodes = len(pose_graph_frag.nodes)
        transformation_init = np.linalg.inv(pose_graph_frag.nodes[n_nodes -
                                                                  1].pose)
        (transformation, information) = \
                multiscale_icp(source_down, target_down,
                               [config["voxel_size"]], [50], config,
                               transformation_init)
    else:  # loop closure case
        (success, transformation,
         information) = register_point_cloud_fpfh(source_down, target_down,
                                                  source_fpfh, target_fpfh,
                                                  config)
        if not success:
            print("No reasonable solution. Skip this pair")
            return (False, np.identity(4), np.zeros((6, 6)))
    print(transformation)

    if config["debug_mode"]:
        draw_registration_result(source_down, target_down, transformation)
    return (True, transformation, information)
This function computes a rough alignment between two fragments. If the fragments are neighbors, the rough alignment is determined by aggregating the RGBD odometry obtained from Make fragments. Otherwise, register_point_cloud_fpfh is called to perform global registration. Note that global registration is less reliable according to [Choi2015].
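The helper multiscale_icp used in the odometry branch is defined elsewhere in the reconstruction system and is not shown here. For the single-scale call above, a rough point-to-plane ICP stand-in looks like the following sketch (the function name and threshold values are illustrative assumptions):

# Sketch: approximate stand-in for the single-scale multiscale_icp call above.
# This is not the script's implementation; thresholds are assumptions.
import open3d as o3d

def single_scale_icp_sketch(source, target, voxel_size, init_transformation,
                            max_iter=50):
    distance_threshold = voxel_size * 1.4
    result = o3d.pipelines.registration.registration_icp(
        source, target, distance_threshold, init_transformation,
        o3d.pipelines.registration.TransformationEstimationPointToPlane(),
        o3d.pipelines.registration.ICPConvergenceCriteria(
            max_iteration=max_iter))
    information = o3d.pipelines.registration.get_information_matrix_from_point_clouds(
        source, target, distance_threshold, result.transformation)
    return result.transformation, information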
Pairwise global registration
# examples/python/reconstruction_system/register_fragments.py
def register_point_cloud_fpfh(source, target, source_fpfh, target_fpfh,
                              config):
    o3d.utility.set_verbosity_level(o3d.utility.VerbosityLevel.Debug)
    distance_threshold = config["voxel_size"] * 1.4
    if config["global_registration"] == "fgr":
        result = o3d.pipelines.registration.registration_fgr_based_on_feature_matching(
            source, target, source_fpfh, target_fpfh,
            o3d.pipelines.registration.FastGlobalRegistrationOption(
                maximum_correspondence_distance=distance_threshold))
    if config["global_registration"] == "ransac":
        # Fallback to preset parameters that works better
        result = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
            source, target, source_fpfh, target_fpfh, False,
            distance_threshold,
            o3d.pipelines.registration.TransformationEstimationPointToPoint(
                False), 4,
            [
                o3d.pipelines.registration.
                CorrespondenceCheckerBasedOnEdgeLength(0.9),
                o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(
                    distance_threshold)
            ],
            o3d.pipelines.registration.RANSACConvergenceCriteria(
                1000000, 0.999))
    if (result.transformation.trace() == 4.0):
        return (False, np.identity(4), np.zeros((6, 6)))
    information = o3d.pipelines.registration.get_information_matrix_from_point_clouds(
        source, target, distance_threshold, result.transformation)
    if information[5, 5] / min(len(source.points), len(target.points)) < 0.3:
        return (False, np.identity(4), np.zeros((6, 6)))
    return (True, result.transformation, information)
This function uses RANSAC or Fast Global Registration for pairwise global registration, depending on config["global_registration"]. If the result is the identity transformation, or if the estimated overlap between the two fragments (measured from the information matrix) is too small, the pair is rejected and an identity transformation with a zero information matrix is returned.
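A minimal end-to-end sketch of pairwise global registration between two fragments, using the helpers above (file names, voxel size, and the chosen method are illustrative assumptions):

# Sketch: globally register two fragments with the helpers defined above.
# Paths and configuration values are placeholders.
import open3d as o3d

config = {"voxel_size": 0.05, "global_registration": "ransac"}
source = o3d.io.read_point_cloud("fragments/fragment_000.ply")
target = o3d.io.read_point_cloud("fragments/fragment_003.ply")
source_down, source_fpfh = preprocess_point_cloud(source, config)
target_down, target_fpfh = preprocess_point_cloud(target, config)
success, transformation, information = register_point_cloud_fpfh(
    source_down, target_down, source_fpfh, target_fpfh, config)
if success:
    print(transformation)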
Multiway registration
# examples/python/reconstruction_system/register_fragments.py
def update_posegraph_for_scene(s, t, transformation, information, odometry,
                               pose_graph):
    if t == s + 1:  # odometry case
        odometry = np.dot(transformation, odometry)
        odometry_inv = np.linalg.inv(odometry)
        pose_graph.nodes.append(
            o3d.pipelines.registration.PoseGraphNode(odometry_inv))
        pose_graph.edges.append(
            o3d.pipelines.registration.PoseGraphEdge(s,
                                                     t,
                                                     transformation,
                                                     information,
                                                     uncertain=False))
    else:  # loop closure case
        pose_graph.edges.append(
            o3d.pipelines.registration.PoseGraphEdge(s,
                                                     t,
                                                     transformation,
                                                     information,
                                                     uncertain=True))
    return (odometry, pose_graph)
This script uses the technique demonstrated in Multiway registration. The function update_posegraph_for_scene builds a pose graph for multiway registration of all fragments. Each graph node represents a fragment and its pose, which transforms its geometry into the global space.
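Since each node pose maps its fragment into the global space, the aligned scene can be previewed by transforming every fragment with its node pose, as in the following sketch. The file names follow the dataset layout described above but are assumptions here:

# Sketch: preview the aligned scene by applying each node pose to its fragment.
# Paths are illustrative; the optimized pose graph is written by the next step.
import open3d as o3d

pose_graph = o3d.io.read_pose_graph("scene/global_registration_optimized.json")
combined = o3d.geometry.PointCloud()
for i, node in enumerate(pose_graph.nodes):
    pcd = o3d.io.read_point_cloud("fragments/fragment_%03d.ply" % i)
    combined += pcd.transform(node.pose)
o3d.visualization.draw_geometries([combined])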
Once a pose graph is built, the function optimize_posegraph_for_scene is called for multiway registration.
# examples/python/reconstruction_system/optimize_posegraph.py
def optimize_posegraph_for_scene(path_dataset, config):
    pose_graph_name = join(path_dataset, config["template_global_posegraph"])
    pose_graph_optimized_name = join(
        path_dataset, config["template_global_posegraph_optimized"])
    run_posegraph_optimization(pose_graph_name, pose_graph_optimized_name,
            max_correspondence_distance = config["voxel_size"] * 1.4,
            preference_loop_closure = \
            config["preference_loop_closure_registration"])
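run_posegraph_optimization is a helper defined alongside optimize_posegraph_for_scene that wraps Open3D's pose graph optimizer. Conceptually, it boils down to something like the following sketch (the edge pruning threshold shown is an assumption for illustration):

# Sketch of what run_posegraph_optimization does internally; option values
# other than the two passed-in parameters are illustrative assumptions.
import open3d as o3d

def run_posegraph_optimization_sketch(pose_graph_name, pose_graph_optimized_name,
                                      max_correspondence_distance,
                                      preference_loop_closure):
    pose_graph = o3d.io.read_pose_graph(pose_graph_name)
    method = o3d.pipelines.registration.GlobalOptimizationLevenbergMarquardt()
    criteria = o3d.pipelines.registration.GlobalOptimizationConvergenceCriteria()
    option = o3d.pipelines.registration.GlobalOptimizationOption(
        max_correspondence_distance=max_correspondence_distance,
        edge_prune_threshold=0.25,
        preference_loop_closure=preference_loop_closure,
        reference_node=0)
    o3d.pipelines.registration.global_optimization(pose_graph, method, criteria,
                                                   option)
    o3d.io.write_pose_graph(pose_graph_optimized_name, pose_graph)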
Main registration loop
The function make_posegraph_for_scene below calls all the functions introduced above. The main workflow is: pairwise global registration -> multiway registration.
# examples/python/reconstruction_system/register_fragments.py
def make_posegraph_for_scene(ply_file_names, config):
    pose_graph = o3d.pipelines.registration.PoseGraph()
    odometry = np.identity(4)
    pose_graph.nodes.append(o3d.pipelines.registration.PoseGraphNode(odometry))

    n_files = len(ply_file_names)
    matching_results = {}
    for s in range(n_files):
        for t in range(s + 1, n_files):
            matching_results[s * n_files + t] = matching_result(s, t)

    if config["python_multi_threading"] == True:
        from joblib import Parallel, delayed
        import multiprocessing
        import subprocess
        MAX_THREAD = min(multiprocessing.cpu_count(),
                         max(len(matching_results), 1))
        results = Parallel(n_jobs=MAX_THREAD)(delayed(
            register_point_cloud_pair)(ply_file_names, matching_results[r].s,
                                       matching_results[r].t, config)
                                              for r in matching_results)
        for i, r in enumerate(matching_results):
            matching_results[r].success = results[i][0]
            matching_results[r].transformation = results[i][1]
            matching_results[r].information = results[i][2]
    else:
        for r in matching_results:
            (matching_results[r].success, matching_results[r].transformation,
                    matching_results[r].information) = \
                    register_point_cloud_pair(ply_file_names,
                    matching_results[r].s, matching_results[r].t, config)

    for r in matching_results:
        if matching_results[r].success:
            (odometry, pose_graph) = update_posegraph_for_scene(
                matching_results[r].s, matching_results[r].t,
                matching_results[r].transformation,
                matching_results[r].information, odometry, pose_graph)
    o3d.io.write_pose_graph(
        join(config["path_dataset"], config["template_global_posegraph"]),
        pose_graph)
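The loop above stores the pairwise results in matching_result objects, a small bookkeeping class defined in register_fragments.py; a minimal equivalent looks roughly like the sketch below (not necessarily identical to the script's definition). register_point_cloud_pair (not shown) reads and preprocesses the two fragments and calls compute_initial_registration.

# Sketch: minimal equivalent of the matching_result bookkeeping class used above.
import numpy as np

class matching_result:

    def __init__(self, s, t):
        self.s = s                            # source fragment index
        self.t = t                            # target fragment index
        self.success = False                  # filled in by register_point_cloud_pair
        self.transformation = np.identity(4)  # estimated 4x4 transformation
        self.information = np.identity(6)     # 6x6 information matrix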
Results
The pose graph optimization outputs the following messages:
[GlobalOptimizationLM] Optimizing PoseGraph having 14 nodes and 42 edges.
Line process weight : 55.885667
[Initial ] residual : 7.791139e+04, lambda : 1.205976e+00
[Iteration 00] residual : 6.094275e+02, valid edges : 22, time : 0.001 sec.
[Iteration 01] residual : 4.526879e+02, valid edges : 22, time : 0.000 sec.
[Iteration 02] residual : 4.515039e+02, valid edges : 22, time : 0.000 sec.
[Iteration 03] residual : 4.514832e+02, valid edges : 22, time : 0.000 sec.
[Iteration 04] residual : 4.514825e+02, valid edges : 22, time : 0.000 sec.
Current_residual - new_residual < 1.000000e-06 * current_residual
[GlobalOptimizationLM] total time : 0.003 sec.
[GlobalOptimizationLM] Optimizing PoseGraph having 14 nodes and 35 edges.
Line process weight : 60.762800
[Initial ] residual : 6.336097e+01, lambda : 1.324043e+00
[Iteration 00] residual : 6.334147e+01, valid edges : 22, time : 0.000 sec.
[Iteration 01] residual : 6.334138e+01, valid edges : 22, time : 0.000 sec.
Current_residual - new_residual < 1.000000e-06 * current_residual
[GlobalOptimizationLM] total time : 0.001 sec.
CompensateReferencePoseGraphNode : reference : 0
There are 14 fragments, and the valid matching pairs among them become the 42 edges of the pose graph. During the first optimization pass, edges whose line process weights indicate false positive alignments are detected and pruned, leaving 35 edges. Pose graph optimization then runs again on the pruned graph to achieve tight alignment.