Refine registration#
Input arguments#
This script runs with python run_system.py [config] --refine. In [config], ["path_dataset"] should have a subfolder fragments, which stores the fragments in .ply files and a pose graph in a .json file.
The main function runs local_refinement and optimize_posegraph_for_scene. The first function performs pairwise registration on the pairs detected by Register fragments. The second function performs multiway registration.
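For reference, the sketch below collects the subset of [config] entries that this step reads, taken from the code listings on this page. The values are illustrative placeholders only; use the ones from your own dataset configuration.
# Hypothetical excerpt of [config]; keys are the ones used on this page,
# values are placeholders.
config = {
    "path_dataset": "dataset/my_scene/",
    "voxel_size": 0.05,
    "icp_method": "color",  # "point_to_point", "point_to_plane", "color", or "generalized"
    "python_multi_threading": True,
    "debug_mode": False,
    "preference_loop_closure_registration": 5.0,
    "template_global_posegraph": "scene/global_registration.json",
    "template_global_posegraph_optimized": "scene/global_registration_optimized.json",
    "template_refined_posegraph": "scene/refined_registration.json",
}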
Fine-grained registration#
def multiscale_icp(source,
                   target,
                   voxel_size,
                   max_iter,
                   config,
                   init_transformation=np.identity(4)):
    current_transformation = init_transformation
    for i, scale in enumerate(range(len(max_iter))):  # multi-scale approach
        iter = max_iter[scale]
        distance_threshold = config["voxel_size"] * 1.4
        print("voxel_size {}".format(voxel_size[scale]))
        source_down = source.voxel_down_sample(voxel_size[scale])
        target_down = target.voxel_down_sample(voxel_size[scale])
        if config["icp_method"] == "point_to_point":
            result_icp = o3d.pipelines.registration.registration_icp(
                source_down, target_down, distance_threshold,
                current_transformation,
                o3d.pipelines.registration.TransformationEstimationPointToPoint(),
                o3d.pipelines.registration.ICPConvergenceCriteria(
                    max_iteration=iter))
        else:
            source_down.estimate_normals(
                o3d.geometry.KDTreeSearchParamHybrid(
                    radius=voxel_size[scale] * 2.0, max_nn=30))
            target_down.estimate_normals(
                o3d.geometry.KDTreeSearchParamHybrid(
                    radius=voxel_size[scale] * 2.0, max_nn=30))
            if config["icp_method"] == "point_to_plane":
                result_icp = o3d.pipelines.registration.registration_icp(
                    source_down, target_down, distance_threshold,
                    current_transformation,
                    o3d.pipelines.registration.TransformationEstimationPointToPlane(),
                    o3d.pipelines.registration.ICPConvergenceCriteria(
                        max_iteration=iter))
            if config["icp_method"] == "color":
                # Colored ICP is sensitive to threshold.
                # Fallback to preset distance threshold that works better.
                # TODO: make it adjustable in the upgraded system.
                result_icp = o3d.pipelines.registration.registration_colored_icp(
                    source_down, target_down, voxel_size[scale],
                    current_transformation,
                    o3d.pipelines.registration.TransformationEstimationForColoredICP(),
                    o3d.pipelines.registration.ICPConvergenceCriteria(
                        relative_fitness=1e-6,
                        relative_rmse=1e-6,
                        max_iteration=iter))
            if config["icp_method"] == "generalized":
                result_icp = o3d.pipelines.registration.registration_generalized_icp(
                    source_down, target_down, distance_threshold,
                    current_transformation,
                    o3d.pipelines.registration.TransformationEstimationForGeneralizedICP(),
                    o3d.pipelines.registration.ICPConvergenceCriteria(
                        relative_fitness=1e-6,
                        relative_rmse=1e-6,
                        max_iteration=iter))
        current_transformation = result_icp.transformation
        if i == len(max_iter) - 1:
            information_matrix = o3d.pipelines.registration.get_information_matrix_from_point_clouds(
                source_down, target_down, voxel_size[scale] * 1.4,
                result_icp.transformation)

    if config["debug_mode"]:
        draw_registration_result_original_color(source, target,
                                                result_icp.transformation)
    return (result_icp.transformation, information_matrix)
Several options are provided for the fine-grained registration, selected via config["icp_method"]: point_to_point, point_to_plane, color, and generalized ICP. The color option is recommended since it uses color information to prevent drift. See [Park2017] for details.
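As a usage illustration, the sketch below drives multiscale_icp with a coarse-to-fine schedule, similar to what the local_refinement helper in this script does. The fragment file names, voxel-size schedule, and iteration counts are assumptions chosen for the example, and the snippet assumes the imports and the multiscale_icp definition from the listing above.
import numpy as np
import open3d as o3d

# Hypothetical inputs; in the pipeline these come from the fragments folder.
source = o3d.io.read_point_cloud("fragments/fragment_000.ply")
target = o3d.io.read_point_cloud("fragments/fragment_003.ply")

voxel_size = 0.05  # assumed to match config["voxel_size"]
config = {"voxel_size": voxel_size, "icp_method": "color", "debug_mode": False}

# Coarse-to-fine: large voxels with many iterations first,
# then finer voxels with fewer iterations.
transformation, information = multiscale_icp(
    source, target,
    voxel_size=[voxel_size, voxel_size / 2.0, voxel_size / 4.0],
    max_iter=[50, 30, 14],
    config=config,
    init_transformation=np.identity(4))
print(transformation)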
Multiway registration#
def update_posegraph_for_scene(s, t, transformation, information, odometry,
                               pose_graph):
    if t == s + 1:  # odometry case
        odometry = np.dot(transformation, odometry)
        odometry_inv = np.linalg.inv(odometry)
        pose_graph.nodes.append(
            o3d.pipelines.registration.PoseGraphNode(odometry_inv))
        pose_graph.edges.append(
            o3d.pipelines.registration.PoseGraphEdge(s,
                                                     t,
                                                     transformation,
                                                     information,
                                                     uncertain=False))
    else:  # loop closure case
        pose_graph.edges.append(
            o3d.pipelines.registration.PoseGraphEdge(s,
                                                     t,
                                                     transformation,
                                                     information,
                                                     uncertain=True))
    return (odometry, pose_graph)
This script uses the technique demonstrated in Multiway registration. The function update_posegraph_for_scene builds a pose graph for multiway registration of all fragments. Each graph node represents a fragment and its pose, which transforms the geometry into the global space.
Once the pose graph is built, the function optimize_posegraph_for_scene is called for multiway registration.
def optimize_posegraph_for_scene(path_dataset, config):
    pose_graph_name = join(path_dataset, config["template_global_posegraph"])
    pose_graph_optimized_name = join(
        path_dataset, config["template_global_posegraph_optimized"])
    run_posegraph_optimization(
        pose_graph_name, pose_graph_optimized_name,
        max_correspondence_distance=config["voxel_size"] * 1.4,
        preference_loop_closure=config["preference_loop_closure_registration"])
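The helper run_posegraph_optimization is not listed on this page. A minimal sketch of what such a helper can look like, built on Open3D's global optimization API, is shown below; the body and the edge_prune_threshold value are assumptions, not the script's exact implementation, and it assumes open3d is imported as o3d as in the listings above.
def run_posegraph_optimization(pose_graph_name, pose_graph_optimized_name,
                               max_correspondence_distance,
                               preference_loop_closure):
    # Levenberg-Marquardt global optimization with a line process that can
    # down-weight and prune unreliable loop-closure edges.
    method = o3d.pipelines.registration.GlobalOptimizationLevenbergMarquardt()
    criteria = o3d.pipelines.registration.GlobalOptimizationConvergenceCriteria()
    option = o3d.pipelines.registration.GlobalOptimizationOption(
        max_correspondence_distance=max_correspondence_distance,
        edge_prune_threshold=0.25,  # assumed value
        preference_loop_closure=preference_loop_closure,
        reference_node=0)
    pose_graph = o3d.io.read_pose_graph(pose_graph_name)
    o3d.pipelines.registration.global_optimization(pose_graph, method, criteria,
                                                   option)
    o3d.io.write_pose_graph(pose_graph_optimized_name, pose_graph)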
Main registration loop#
The function make_posegraph_for_refined_scene below calls all the functions introduced above.
def make_posegraph_for_refined_scene(ply_file_names, config):
    pose_graph = o3d.io.read_pose_graph(
        join(config["path_dataset"],
             config["template_global_posegraph_optimized"]))

    n_files = len(ply_file_names)
    matching_results = {}
    for edge in pose_graph.edges:
        s = edge.source_node_id
        t = edge.target_node_id
        matching_results[s * n_files + t] = \
            matching_result(s, t, edge.transformation)

    if config["python_multi_threading"] is True:
        os.environ['OMP_NUM_THREADS'] = '1'
        max_workers = max(
            1, min(multiprocessing.cpu_count() - 1, len(pose_graph.edges)))
        mp_context = multiprocessing.get_context('spawn')
        with mp_context.Pool(processes=max_workers) as pool:
            args = [(ply_file_names, v.s, v.t, v.transformation, config)
                    for k, v in matching_results.items()]
            results = pool.starmap(register_point_cloud_pair, args)

        for i, r in enumerate(matching_results):
            matching_results[r].transformation = results[i][0]
            matching_results[r].information = results[i][1]
    else:
        for r in matching_results:
            (matching_results[r].transformation,
             matching_results[r].information) = \
                register_point_cloud_pair(ply_file_names,
                                          matching_results[r].s,
                                          matching_results[r].t,
                                          matching_results[r].transformation,
                                          config)

    pose_graph_new = o3d.pipelines.registration.PoseGraph()
    odometry = np.identity(4)
    pose_graph_new.nodes.append(
        o3d.pipelines.registration.PoseGraphNode(odometry))
    for r in matching_results:
        (odometry, pose_graph_new) = update_posegraph_for_scene(
            matching_results[r].s, matching_results[r].t,
            matching_results[r].transformation, matching_results[r].information,
            odometry, pose_graph_new)
    print(pose_graph_new)
    o3d.io.write_pose_graph(
        join(config["path_dataset"], config["template_refined_posegraph"]),
        pose_graph_new)
The main workflow is: pairwise local refinement -> multiway registration.
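Two helpers used above, matching_result and register_point_cloud_pair, are not listed on this page. The sketches below show plausible minimal versions, assuming the imports and the multiscale_icp definition from the listings above; the field names mirror how they are accessed in make_posegraph_for_refined_scene, while the ICP schedule is illustrative.
class matching_result:
    # Minimal per-pair container (sketch; the actual helper may differ).
    def __init__(self, s, t, transformation=np.identity(4)):
        self.s = s                            # source fragment id
        self.t = t                            # target fragment id
        self.transformation = transformation  # refined by register_point_cloud_pair
        self.information = np.identity(6)     # 6x6 information matrix for the edge


def register_point_cloud_pair(ply_file_names, s, t, transformation_init, config):
    # Sketch: load the two fragments and refine their relative pose with
    # multiscale_icp; the coarse-to-fine schedule here is illustrative.
    source = o3d.io.read_point_cloud(ply_file_names[s])
    target = o3d.io.read_point_cloud(ply_file_names[t])
    voxel_size = config["voxel_size"]
    (transformation, information) = multiscale_icp(
        source, target, [voxel_size, voxel_size / 2.0, voxel_size / 4.0],
        [50, 30, 14], config, transformation_init)
    return (transformation, information)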
Results#
The pose graph optimization outputs the following messages:
[GlobalOptimizationLM] Optimizing PoseGraph having 14 nodes and 35 edges.
Line process weight : 789.730200
[Initial ] residual : 1.208286e+04, lambda : 1.706306e+01
[Iteration 00] residual : 2.410383e+03, valid edges : 22, time : 0.000 sec.
[Iteration 01] residual : 8.127856e+01, valid edges : 22, time : 0.000 sec.
[Iteration 02] residual : 8.031355e+01, valid edges : 22, time : 0.000 sec.
Delta.norm() < 1.000000e-06 * (x.norm() + 1.000000e-06)
[GlobalOptimizationLM] total time : 0.001 sec.
[GlobalOptimizationLM] Optimizing PoseGraph having 14 nodes and 35 edges.
Line process weight : 789.730200
[Initial ] residual : 8.031355e+01, lambda : 1.716826e+01
Delta.norm() < 1.000000e-06 * (x.norm() + 1.000000e-06)
[GlobalOptimizationLM] total time : 0.000 sec.
CompensateReferencePoseGraphNode : reference : 0
The optimized pose graph has 14 nodes (one per fragment) and 35 edges (matching pairs between fragments). The optimization converges within a few iterations; edges that the line process identifies as false positives are pruned, and pose graph optimization then runs a second time to achieve tight alignment.