System overview
The system has four main steps:
Step 1. Make fragments: build local geometric surfaces (referred to as fragments) from short subsequences of the input RGBD sequence. This part uses RGBD Odometry, Multiway registration, and RGBD integration.
Step 2. Register fragments: the fragments are aligned in a global space to detect loop closure. This part uses Global registration, ICP registration, and Multiway registration.
Step 3. Refine registration: the rough alignments are refined into tighter ones. This part uses ICP registration and Multiway registration.
Step 4. Integrate scene: integrate RGB-D images to generate a mesh model for the scene. This part uses RGBD integration.
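The four steps always execute in this fixed pipeline order, regardless of which stage flags are passed on the command line. A minimal sketch of that flag dispatch (a hypothetical helper for illustration, not the actual run_system.py code):

```python
# Hypothetical sketch: whichever flags are requested, the stages run in
# pipeline order (make -> register -> refine -> integrate).
PIPELINE = ["make", "register", "refine", "integrate"]

def select_stages(flags):
    """Return the requested stages, ordered as the pipeline requires."""
    requested = {f.lstrip("-") for f in flags}
    return [stage for stage in PIPELINE if stage in requested]

print(select_stages(["--integrate", "--make"]))  # ['make', 'integrate']
```

Note that even if --integrate is listed first, fragment making still runs before integration, since each stage consumes the previous stage's output.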
Example dataset
We use the SceneNN dataset to demonstrate the system in this tutorial. Alternatively, there are many other excellent RGBD datasets, such as Redwood data, TUM RGBD data, ICL-NUIM data, and SUN3D data.
The tutorial uses sequence 016 from the SceneNN dataset. This is a quick link to download the RGBD sequence used in this tutorial. Alternatively, you can download the original dataset from the SceneNN oni file archive, and then extract the oni file into a color and depth image sequence using OniParser from the Redwood reconstruction system or other conversion tools. Some helper scripts can be found in reconstruction_system/scripts.
Quick start
Put all color images in the image folder, and all depth images in the depth folder. Run the following commands from the root folder.
cd examples/python/reconstruction_system/
python run_system.py [config_file] [--make] [--register] [--refine] [--integrate]
config_file has parameters and file paths. For example, reconstruction_system/config/tutorial.json has the following content.
{
"name": "Open3D reconstruction tutorial http://open3d.org/docs/release/tutorial/reconstruction_system/system_overview.html",
"path_dataset": "dataset/tutorial/",
"path_intrinsic": "",
"max_depth": 3.0,
"voxel_size": 0.05,
"max_depth_diff": 0.07,
"preference_loop_closure_odometry": 0.1,
"preference_loop_closure_registration": 5.0,
"tsdf_cubic_size": 3.0,
"icp_method": "color",
"global_registration": "ransac",
"python_multi_threading": true
}
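The config file only needs to list the parameters you want to override; the reconstruction system fills in defaults for anything omitted. A minimal sketch of that pattern using only the standard library (the default values below are illustrative assumptions, not the system's actual defaults):

```python
import json

# Illustrative defaults only (assumed for this sketch); the actual
# reconstruction system defines its own defaults for omitted parameters.
DEFAULTS = {
    "n_frames_per_fragment": 100,
    "icp_method": "color",
    "global_registration": "ransac",
    "python_multi_threading": True,
}

def with_defaults(config):
    """Return the config with any missing parameter filled from DEFAULTS."""
    merged = dict(DEFAULTS)
    merged.update(config)  # user-specified values win over defaults
    return merged

# Typical usage: load tutorial.json, then fill in anything it omits.
# with open("config/tutorial.json") as f:
#     config = with_defaults(json.load(f))
```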
We assume that the color images and the depth images are synchronized and registered. "path_intrinsic" specifies the path to a JSON file that stores the camera intrinsic matrix (see Read camera intrinsic for details). If it is not given, the PrimeSense factory setting is used. For your own dataset, use the appropriate camera intrinsic, and visualize a depth image (and likewise the RGBD images) prior to using the system.
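As a sketch of what such an intrinsic file can contain, the well-known PrimeSense factory parameters are 640×480 resolution with fx = fy = 525.0, cx = 319.5, cy = 239.5. The column-major layout of the "intrinsic_matrix" field below is an assumption based on how Open3D serializes pinhole camera intrinsics; verify against a file written by your own Open3D version.

```python
# PrimeSense factory intrinsics for a 640x480 sensor.
# ASSUMPTION: "intrinsic_matrix" is flattened in column-major order,
# i.e. [fx, 0, 0,  0, fy, 0,  cx, cy, 1].
intrinsic_json = {
    "width": 640,
    "height": 480,
    "intrinsic_matrix": [525.0, 0, 0, 0, 525.0, 0, 319.5, 239.5, 1],
}

def focal_and_center(intrinsic):
    """Pull (fx, fy, cx, cy) out of the flattened column-major matrix."""
    m = intrinsic["intrinsic_matrix"]
    return m[0], m[4], m[6], m[7]

print(focal_and_center(intrinsic_json))  # (525.0, 525.0, 319.5, 239.5)
```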
Note
"python_multi_threading": true utilizes joblib to parallelize the system across all CPU cores. With this option, Mac users may encounter an unexpected program termination. To avoid this issue, set this flag to false.
Capture your own dataset
This tutorial provides an example that can record synchronized and aligned RGBD images using the Intel RealSense camera. For more details, please see Capture your own dataset.