open3d.t.geometry.PointCloud#
- class open3d.t.geometry.PointCloud#
A point cloud contains a list of 3D points. The point cloud class stores the attribute data in key-value maps, where the key is a string representing the attribute name and the value is a Tensor containing the attribute data.
The attributes of the point cloud have different levels:
import open3d as o3d

device = o3d.core.Device("CPU:0")
dtype = o3d.core.float32

# Create an empty point cloud
# Use pcd.point to access the points' attributes
pcd = o3d.t.geometry.PointCloud(device)

# Default attribute: "positions".
# This attribute is created by default and is required by all point clouds.
# The shape must be (N, 3). The device of "positions" determines the device
# of the point cloud.
pcd.point.positions = o3d.core.Tensor([[0, 0, 0],
                                       [1, 1, 1],
                                       [2, 2, 2]], dtype, device)

# Common attributes: "normals", "colors".
# Common attributes are used in built-in point cloud operations. The
# spellings must be correct. For example, if "normal" is used instead of
# "normals", some internal operations that expect "normals" will not work.
# "normals" and "colors" must have shape (N, 3) and must be on the same
# device as the point cloud.
pcd.point.normals = o3d.core.Tensor([[0, 0, 1],
                                     [0, 1, 0],
                                     [1, 0, 0]], dtype, device)
pcd.point.colors = o3d.core.Tensor([[0.0, 0.0, 0.0],
                                    [0.1, 0.1, 0.1],
                                    [0.2, 0.2, 0.2]], dtype, device)

# User-defined attributes.
# You can also attach custom attributes. The value tensor must be on the
# same device as the point cloud. There are no restrictions on the shape and
# dtype, e.g.,
pcd.point.intensities = o3d.core.Tensor([0.3, 0.1, 0.4], dtype, device)
pcd.point.labels = o3d.core.Tensor([3, 1, 4], o3d.core.int32, device)
- __init__(*args, **kwargs)#
Overloaded function.
__init__(self: open3d.cpu.pybind.t.geometry.PointCloud, device: open3d.cpu.pybind.core.Device = CPU:0) -> None
Construct an empty pointcloud on the provided device (default: 'CPU:0').
__init__(self: open3d.cpu.pybind.t.geometry.PointCloud, positions: open3d.cpu.pybind.core.Tensor) -> None
__init__(self: open3d.cpu.pybind.t.geometry.PointCloud, map_keys_to_tensors: dict[str, open3d.cpu.pybind.core.Tensor]) -> None
__init__(self: open3d.cpu.pybind.t.geometry.PointCloud, arg0: open3d.cpu.pybind.t.geometry.PointCloud) -> None
Copy constructor
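Example
A minimal sketch of the overloads above; all values are illustrative:
import open3d as o3d

# Empty point cloud on a chosen device.
pcd = o3d.t.geometry.PointCloud(o3d.core.Device("CPU:0"))

# From a positions tensor of shape (N, 3).
positions = o3d.core.Tensor([[0., 0., 0.], [1., 1., 1.]], o3d.core.float32)
pcd = o3d.t.geometry.PointCloud(positions)

# From a map of attribute names to tensors ("positions" is required).
pcd = o3d.t.geometry.PointCloud({
    "positions": positions,
    "colors": o3d.core.Tensor([[1., 0., 0.], [0., 1., 0.]], o3d.core.float32),
})

# Copy constructor.
pcd_copy = o3d.t.geometry.PointCloud(pcd)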
- append(self: open3d.cpu.pybind.t.geometry.PointCloud, arg0: open3d.cpu.pybind.t.geometry.PointCloud) open3d.cpu.pybind.t.geometry.PointCloud #
- clear(self)#
Clear all elements in the geometry.
- Returns:
open3d.t.geometry.Geometry
- clone(self: open3d.cpu.pybind.t.geometry.PointCloud) open3d.cpu.pybind.t.geometry.PointCloud #
Returns a copy of the point cloud on the same device.
- cluster_dbscan(self: open3d.cpu.pybind.t.geometry.PointCloud, eps: float, min_points: int, print_progress: bool = False) open3d.cpu.pybind.core.Tensor #
Cluster PointCloud using the DBSCAN algorithm (Ester et al., 'A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise', 1996). This is a wrapper for a CPU implementation; a copy of the point cloud data and the resulting labels will be made.
- Parameters:
eps – Density parameter that is used to find neighbouring points.
min_points – Minimum number of points to form a cluster.
print_progress (default False) – If True, the progress is visualized in the console.
- Returns:
A tensor of point labels on the same device as the point cloud; -1 indicates noise according to the algorithm.
Example
We use the Redwood dataset for demonstration:
import matplotlib.pyplot as plt

sample_ply_data = o3d.data.PLYPointCloud()
pcd = o3d.t.io.read_point_cloud(sample_ply_data.path)
labels = pcd.cluster_dbscan(eps=0.02, min_points=10, print_progress=True)
max_label = labels.max().item()
colors = plt.get_cmap("tab20")(
    labels.numpy() / (max_label if max_label > 0 else 1))
colors = o3d.core.Tensor(colors[:, :3], o3d.core.float32)
colors[labels < 0] = 0
pcd.point.colors = colors
o3d.visualization.draw([pcd])
- compute_boundary_points(self: open3d.cpu.pybind.t.geometry.PointCloud, radius: float, max_nn: int = 30, angle_threshold: float = 90.0) tuple[open3d.cpu.pybind.t.geometry.PointCloud, open3d.cpu.pybind.core.Tensor] #
Compute the boundary points of a point cloud. The implementation is inspired by the PCL implementation. Reference: https://pointclouds.org/documentation/classpcl_1_1_boundary_estimation.html
- Parameters:
radius – Neighbor search radius parameter.
max_nn (default 30) – Maximum number of neighbors to search.
angle_threshold (default 90.0) – Angle threshold to decide if a point is on the boundary.
- Returns:
Tuple of the boundary point cloud and its boolean mask tensor.
Example
We will load the DemoCropPointCloud dataset and compute its boundary points:
ply_point_cloud = o3d.data.DemoCropPointCloud()
pcd = o3d.t.io.read_point_cloud(ply_point_cloud.point_cloud_path)

# Search parameters (illustrative values).
radius = 0.02
max_nn = 30
boundaries, mask = pcd.compute_boundary_points(radius, max_nn)
boundaries.paint_uniform_color([1.0, 0.0, 0.0])
o3d.visualization.draw([pcd, boundaries])
- compute_convex_hull(self: open3d.cpu.pybind.t.geometry.PointCloud, joggle_inputs: bool = False) open3d.cpu.pybind.t.geometry.TriangleMesh #
Compute the convex hull of a point cloud using qhull. This runs on the CPU.
- Parameters:
joggle_inputs (default False) – Handle precision problems by randomly perturbing the input data. Set to True if perturbing the input is acceptable but you need convex simplicial output. If False, neighboring facets may be merged in case of precision problems. See QHull docs for more details.
- Returns:
TriangleMesh representing the convex hull. This contains an extra vertex property point_indices that contains the index of the corresponding point in the original point cloud.
Example
We will load the Eagle dataset, then compute and display its convex hull:
eagle = o3d.data.EaglePointCloud()
pcd = o3d.t.io.read_point_cloud(eagle.path)
hull = pcd.compute_convex_hull()
o3d.visualization.draw([{'name': 'eagle', 'geometry': pcd},
                        {'name': 'convex hull', 'geometry': hull}])
- compute_metrics(self: open3d.cpu.pybind.t.geometry.PointCloud, pcd2: open3d.cpu.pybind.t.geometry.PointCloud, metrics: list[open3d.cpu.pybind.t.geometry.Metric], params: open3d.cpu.pybind.t.geometry.MetricParameters) open3d.cpu.pybind.core.Tensor #
Compute various metrics between two point clouds. Currently, Chamfer distance, Hausdorff distance and F-Score [Knapitsch2017] are supported. The Chamfer distance is the sum of the mean nearest-neighbor distances from the first point cloud to the second and vice versa. The F-Score at a fixed threshold radius is the harmonic mean of Precision and Recall. Recall is the percentage of points from the first point cloud that have a point of the second point cloud within the threshold radius, while Precision is the percentage of points from the second point cloud that have a point of the first point cloud within the threshold radius.
- Parameters:
pcd2 (t.geometry.PointCloud) – Other point cloud to compare with.
metrics (Sequence[t.geometry.Metric]) – List of Metric values to compute. Multiple metrics can be computed at once for efficiency.
params (t.geometry.MetricParameters) – This holds parameters required by different metrics.
- Returns:
Tensor containing the requested metrics.
Example:
import numpy as np
from open3d.t.geometry import TriangleMesh, PointCloud, Metric, MetricParameters

# box is a cube with one vertex at the origin and a side length 1
pos = TriangleMesh.create_box().vertex.positions
pcd1 = PointCloud(pos.clone())
pcd2 = PointCloud(pos * 1.1)

# (1, 3, 3, 1) vertices are shifted by (0, 0.1, 0.1*sqrt(2), 0.1*sqrt(3))
# respectively
metric_params = MetricParameters(
    fscore_radius=o3d.utility.FloatVector((0.01, 0.11, 0.15, 0.18)))
metrics = pcd1.compute_metrics(
    pcd2, (Metric.ChamferDistance, Metric.HausdorffDistance, Metric.FScore),
    metric_params)
print(metrics)
np.testing.assert_allclose(
    metrics.cpu().numpy(),
    (0.22436734, np.sqrt(3) / 10, 100. / 8, 400. / 8, 700. / 8, 100.),
    rtol=1e-6)
- cpu(self: open3d.cpu.pybind.t.geometry.PointCloud) open3d.cpu.pybind.t.geometry.PointCloud #
Transfer the point cloud to CPU. If the point cloud is already on CPU, no copy will be performed.
- static create_from_depth_image(depth, intrinsics, extrinsics=[[1 0 0 0], [0 1 0 0], [0 0 1 0], [0 0 0 1]], depth_scale=1000.0, depth_max=3.0, stride=1, with_normals=False)#
Factory function to create a pointcloud (with only ‘points’) from a depth image and a camera model.
Given depth value d at (u, v) image coordinate, the corresponding 3d point is:
z = d / depth_scale
x = (u - cx) * z / fx
y = (v - cy) * z / fy
- Parameters:
depth (open3d.t.geometry.Image) – The input depth image should be a uint16_t image.
intrinsics (open3d.core.Tensor) – Intrinsic parameters of the camera.
extrinsics (open3d.core.Tensor, optional, default=4x4 identity) – Extrinsic parameters of the camera.
depth_scale (float, optional, default=1000.0) – The depth is scaled by 1 / depth_scale.
depth_max (float, optional, default=3.0) – Truncated at depth_max distance.
stride (int, optional, default=1) – Sampling factor to support coarse point cloud extraction. Unless normals are requested, there is no low pass filtering, so aliasing is possible for stride>1.
with_normals (bool, optional, default=False) – Also compute normals for the point cloud. If True, the point cloud will only contain points with valid normals. If normals are requested, the depth map is first filtered to ensure smooth normals.
- Returns:
open3d.t.geometry.PointCloud
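Example
A minimal sketch; the intrinsic matrix uses illustrative pinhole values and "depth.png" is a placeholder path for a uint16 depth image:
import open3d as o3d

# Illustrative pinhole intrinsics (fx, fy, cx, cy).
intrinsics = o3d.core.Tensor([[525.0, 0.0, 319.5],
                              [0.0, 525.0, 239.5],
                              [0.0, 0.0, 1.0]], o3d.core.float64)
depth = o3d.t.io.read_image("depth.png")  # placeholder path to a uint16 depth image
pcd = o3d.t.geometry.PointCloud.create_from_depth_image(
    depth, intrinsics, depth_scale=1000.0, depth_max=3.0)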
- static create_from_rgbd_image(rgbd_image, intrinsics, extrinsics=[[1 0 0 0], [0 1 0 0], [0 0 1 0], [0 0 0 1]], depth_scale=1000.0, depth_max=3.0, stride=1, with_normals=False)#
Factory function to create a pointcloud (with properties {‘points’, ‘colors’}) from an RGBD image and a camera model.
Given depth value d at (u, v) image coordinate, the corresponding 3d point is:
z = d / depth_scale
x = (u - cx) * z / fx
y = (v - cy) * z / fy
- Parameters:
rgbd_image (open3d.t.geometry.RGBDImage) – The input RGBD image should have a uint16_t depth image and RGB image with any DType and the same size.
intrinsics (open3d.core.Tensor) – Intrinsic parameters of the camera.
extrinsics (open3d.core.Tensor, optional, default=4x4 identity) – Extrinsic parameters of the camera.
depth_scale (float, optional, default=1000.0) – The depth is scaled by 1 / depth_scale.
depth_max (float, optional, default=3.0) – Truncated at depth_max distance.
stride (int, optional, default=1) – Sampling factor to support coarse point cloud extraction. Unless normals are requested, there is no low pass filtering, so aliasing is possible for stride>1.
with_normals (bool, optional, default=False) – Also compute normals for the point cloud. If True, the point cloud will only contain points with valid normals. If normals are requested, the depth map is first filtered to ensure smooth normals.
- Returns:
open3d.t.geometry.PointCloud
- crop(*args, **kwargs)#
Overloaded function.
- cuda(self: open3d.cpu.pybind.t.geometry.PointCloud, device_id: int = 0) open3d.cpu.pybind.t.geometry.PointCloud #
Transfer the point cloud to a CUDA device. If the point cloud is already on the specified CUDA device, no copy will be performed.
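Example
A minimal sketch of moving a point cloud between devices (requires a CUDA-enabled build for the GPU transfer):
import open3d as o3d

pcd = o3d.t.io.read_point_cloud(o3d.data.PCDPointCloud().path)
pcd_cpu = pcd.cpu()  # no copy if already on CPU
if o3d.core.cuda.is_available():
    pcd_gpu = pcd.cuda(0)  # equivalent to pcd.to(o3d.core.Device("CUDA:0"))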
- estimate_color_gradients(self: open3d.cpu.pybind.t.geometry.PointCloud, max_nn: int | None = 30, radius: float | None = None) None #
Function to estimate point color gradients. It uses KNN search (not recommended on GPU) if only the max_nn parameter is provided, radius search (not recommended on GPU) if only radius is provided, and hybrid search (recommended) if both max_nn and radius are provided.
- estimate_normals(self, max_nn=30, radius=None)#
Function to estimate point normals. If the point cloud already has normals, the estimated normals are oriented consistently with the existing ones. It uses KNN search (not recommended on GPU) if only the max_nn parameter is provided, radius search (not recommended on GPU) if only radius is provided, and hybrid search (recommended) if both max_nn and radius are provided.
- Parameters:
max_nn (Optional[int], optional, default=30) – Neighbor search max neighbors parameter [default = 30].
radius (Optional[float], optional, default=None) – neighbors search radius parameter to use HybridSearch. [Recommended ~1.4x voxel size].
- Returns:
None
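Example
A minimal sketch; the radius value is illustrative:
import open3d as o3d

pcd = o3d.t.io.read_point_cloud(o3d.data.PCDPointCloud().path)
pcd.estimate_normals(max_nn=30, radius=0.1)  # hybrid search (recommended)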
- extrude_linear(self: open3d.cpu.pybind.t.geometry.PointCloud, vector: open3d.cpu.pybind.core.Tensor, scale: float = 1.0, capping: bool = True) open3d.cpu.pybind.t.geometry.LineSet #
Sweeps the point cloud along a direction vector.
- Parameters:
vector (open3d.core.Tensor) – The direction vector.
scale (float) – Scalar factor which essentially scales the direction vector.
- Returns:
A line set with the result of the sweep operation.
Example
This code generates a set of straight lines from a point cloud:
import open3d as o3d
import numpy as np

pcd = o3d.t.geometry.PointCloud(np.random.rand(10, 3))
lines = pcd.extrude_linear([0, 1, 0])
o3d.visualization.draw([{'name': 'lines', 'geometry': lines}])
- extrude_rotation(self: open3d.cpu.pybind.t.geometry.PointCloud, angle: float, axis: open3d.cpu.pybind.core.Tensor, resolution: int = 16, translation: float = 0.0, capping: bool = True) open3d.cpu.pybind.t.geometry.LineSet #
Sweeps the point set rotationally about an axis.
- Parameters:
angle (float) – The rotation angle in degrees.
axis (open3d.core.Tensor) – The rotation axis.
resolution (int) – The resolution defines the number of intermediate sweeps about the rotation axis.
translation (float) – The translation along the rotation axis.
- Returns:
A line set with the result of the sweep operation.
Example
This code generates a number of helices from a point cloud:
import open3d as o3d
import numpy as np

pcd = o3d.t.geometry.PointCloud(np.random.rand(10, 3))
helices = pcd.extrude_rotation(3 * 360, [0, 1, 0], resolution=3 * 16, translation=2)
o3d.visualization.draw([{'name': 'helices', 'geometry': helices}])
- farthest_point_down_sample(self, num_samples)#
Downsample a point cloud into an output point cloud containing a subset of points that are farthest apart. The sampling is performed by iteratively selecting the point farthest from the previously selected points.
- Parameters:
num_samples (int) – Number of points to be sampled.
- Returns:
open3d.t.geometry.PointCloud
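Example
A minimal sketch; the sample count is illustrative:
import open3d as o3d

pcd = o3d.t.io.read_point_cloud(o3d.data.PCDPointCloud().path)
pcd_fps = pcd.farthest_point_down_sample(num_samples=1000)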
- static from_legacy(pcd_legacy: open3d.cpu.pybind.geometry.PointCloud, dtype: open3d.cpu.pybind.core.Dtype = Float32, device: open3d.cpu.pybind.core.Device = CPU:0) open3d.cpu.pybind.t.geometry.PointCloud #
Create a PointCloud from a legacy Open3D PointCloud.
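Example
A minimal sketch of converting between the legacy and tensor-based point cloud classes:
import open3d as o3d

legacy_pcd = o3d.io.read_point_cloud(o3d.data.PCDPointCloud().path)
tpcd = o3d.t.geometry.PointCloud.from_legacy(legacy_pcd)
legacy_again = tpcd.to_legacy()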
- get_axis_aligned_bounding_box(self: open3d.cpu.pybind.t.geometry.PointCloud) open3d.cpu.pybind.t.geometry.AxisAlignedBoundingBox #
Create an axis-aligned bounding box from attribute ‘positions’.
- get_center(self: open3d.cpu.pybind.t.geometry.PointCloud) open3d.cpu.pybind.core.Tensor #
Returns the center for point coordinates.
- get_max_bound(self: open3d.cpu.pybind.t.geometry.PointCloud) open3d.cpu.pybind.core.Tensor #
Returns the max bound for point coordinates.
- get_min_bound(self: open3d.cpu.pybind.t.geometry.PointCloud) open3d.cpu.pybind.core.Tensor #
Returns the min bound for point coordinates.
- get_oriented_bounding_box(self: open3d.cpu.pybind.t.geometry.PointCloud) open3d.cpu.pybind.t.geometry.OrientedBoundingBox #
Create an oriented bounding box from attribute ‘positions’.
- has_valid_material(self: open3d.cpu.pybind.t.geometry.DrawableGeometry) bool #
Returns true if the geometry’s material is valid.
- hidden_point_removal(self, camera_location, radius)#
Removes hidden points from a point cloud and returns a mesh of the remaining points. Based on Katz et al., 'Direct Visibility of Point Sets', 2007. Additional information about the choice of radius for noisy point clouds can be found in Mehra et al., 'Visibility of Noisy Point Cloud Data', 2010. This is a wrapper for a CPU implementation; a copy of the point cloud data and the resulting visible triangle mesh and indices will be made.
- Parameters:
camera_location – All points not visible from that location will be removed.
radius – The radius of the spherical projection.
- Returns:
Tuple of visible triangle mesh and indices of visible points on the same device as the point cloud.
Example
We use the Armadillo mesh to compute the points visible from a given camera:
# Convert mesh to a point cloud and estimate dimensions.
armadillo_data = o3d.data.ArmadilloMesh()
pcd = o3d.io.read_triangle_mesh(
    armadillo_data.path).sample_points_poisson_disk(5000)
diameter = np.linalg.norm(
    np.asarray(pcd.get_max_bound()) - np.asarray(pcd.get_min_bound()))

# Define parameters used for hidden_point_removal.
camera = o3d.core.Tensor([0, 0, diameter], o3d.core.float32)
radius = diameter * 100

# Get all points that are visible from given view point.
pcd = o3d.t.geometry.PointCloud.from_legacy(pcd)
_, pt_map = pcd.hidden_point_removal(camera, radius)
pcd = pcd.select_by_index(pt_map)
o3d.visualization.draw([pcd], point_size=5)
- is_empty(self)#
Returns True iff the geometry is empty.
- Returns:
bool
- normalize_normals(self: open3d.cpu.pybind.t.geometry.PointCloud) open3d.cpu.pybind.t.geometry.PointCloud #
Normalize point normals to length 1.
- orient_normals_consistent_tangent_plane(self: open3d.cpu.pybind.t.geometry.PointCloud, k: int, lambda: float = 0.0, cos_alpha_tol: float = 1.0) None #
Function to consistently orient the normals of a point cloud based on tangent planes.
The algorithm is described in Hoppe et al., “Surface Reconstruction from Unorganized Points”, 1992. Additional information about the choice of lambda and cos_alpha_tol for complex point clouds can be found in Piazza, Valentini, Varetti, “Mesh Reconstruction from Point Cloud”, 2023 (https://eugeniovaretti.github.io/meshreco/Piazza_Valentini_Varetti_MeshReconstructionFromPointCloud_2023.pdf).
- Parameters:
k (int) – Number of neighbors to use for tangent plane estimation.
lambda (float) – A non-negative real parameter that influences the distance metric used to identify the true neighbors of a point in complex geometries. It penalizes the distance between a point and the tangent plane defined by the reference point and its normal vector, helping to mitigate misclassification issues encountered with traditional Euclidean distance metrics.
cos_alpha_tol (float) – Cosine threshold angle used to determine the inclusion boundary of neighbors based on the direction of the normal vector.
Example
We use the Bunny point cloud to compute its normals and orient them consistently. The first reconstruction follows Hoppe's algorithm (raw), whereas the second utilises the lambda and cos_alpha_tol parameters. Due to the high density of the Bunny point cloud available in Open3D, a larger value of the parameter k is employed to test the algorithm. Usually such a dense point cloud is not available, which makes a proper choice of k harder to find; refer to https://eugeniovaretti.github.io/meshreco for these cases:
import open3d as o3d
import numpy as np

# Load point cloud
data = o3d.data.BunnyMesh()

# Case 1, Hoppe (raw):
pcd = o3d.io.read_point_cloud(data.path)
# Compute normals and orient them consistently, using k=100 neighbours
pcd.estimate_normals()
pcd.orient_normals_consistent_tangent_plane(100)
# Create mesh from point cloud using Poisson Algorithm
poisson_mesh = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=8, width=0, scale=1.1, linear_fit=False)[0]
poisson_mesh.paint_uniform_color(np.array([[0.5], [0.5], [0.5]]))
poisson_mesh.compute_vertex_normals()
o3d.visualization.draw_geometries([poisson_mesh])

# Case 2, reconstruction using lambda and cos_alpha_tol parameters:
pcd_robust = o3d.io.read_point_cloud(data.path)
# Compute normals and orient them consistently, using k=100 neighbours
pcd_robust.estimate_normals()
pcd_robust.orient_normals_consistent_tangent_plane(100, 10, 0.5)
# Create mesh from point cloud using Poisson Algorithm
poisson_mesh_robust = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd_robust, depth=8, width=0, scale=1.1, linear_fit=False)[0]
poisson_mesh_robust.paint_uniform_color(np.array([[0.5], [0.5], [0.5]]))
poisson_mesh_robust.compute_vertex_normals()
o3d.visualization.draw_geometries([poisson_mesh_robust])
- orient_normals_to_align_with_direction(self, orientation_reference=[0 0 1])#
Function to orient the normals of a point cloud.
- Parameters:
orientation_reference (open3d.core.Tensor, optional, default=[0 0 1]) – Normals are oriented with respect to orientation_reference.
- Returns:
None
- orient_normals_towards_camera_location(self, camera_location=[0 0 0])#
Function to orient the normals of a point cloud.
- Parameters:
camera_location (open3d.core.Tensor, optional, default=[0 0 0]) – Normals are oriented towards the camera_location.
- Returns:
None
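Example
A minimal sketch for the two normal-orientation helpers above; the direction and camera location are illustrative:
import open3d as o3d

pcd = o3d.t.io.read_point_cloud(o3d.data.PCDPointCloud().path)
pcd.estimate_normals(max_nn=30)
pcd.orient_normals_to_align_with_direction(o3d.core.Tensor([0., 0., 1.], o3d.core.float32))
pcd.orient_normals_towards_camera_location(o3d.core.Tensor([0., 0., 10.], o3d.core.float32))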
- paint_uniform_color(self, color)#
Assigns uniform color to the point cloud.
- Parameters:
color (open3d.core.Tensor) – Color of the pointcloud. Floating color values are clipped between 0.0 and 1.0.
- Returns:
open3d.t.geometry.PointCloud
- pca_partition(self: open3d.cpu.pybind.t.geometry.PointCloud, max_points: int) int #
Partition the point cloud by recursively doing PCA.
This function creates a new point attribute with the name “partition_ids” storing the partition id for each point.
- Parameters:
max_points (int) – The maximum allowed number of points in a partition.
Example
This code partitions a point cloud such that each partition contains at most 20 points:
import open3d as o3d
import numpy as np

pcd = o3d.t.geometry.PointCloud(np.random.rand(100, 3))
num_partitions = pcd.pca_partition(max_points=20)

# Print the partition ids and the number of points for each of them.
print(np.unique(pcd.point.partition_ids.numpy(), return_counts=True))
- project_to_depth_image(self: open3d.cpu.pybind.t.geometry.PointCloud, width: int, height: int, intrinsics: open3d.cpu.pybind.core.Tensor, extrinsics: open3d.cpu.pybind.core.Tensor = [[1 0 0 0], [0 1 0 0], [0 0 1 0], [0 0 0 1]], depth_scale: float = 1000.0, depth_max: float = 3.0) open3d.cpu.pybind.t.geometry.Image #
Project a point cloud to a depth image.
- project_to_rgbd_image(self: open3d.cpu.pybind.t.geometry.PointCloud, width: int, height: int, intrinsics: open3d.cpu.pybind.core.Tensor, extrinsics: open3d.cpu.pybind.core.Tensor = [[1 0 0 0], [0 1 0 0], [0 0 1 0], [0 0 0 1]], depth_scale: float = 1000.0, depth_max: float = 3.0) open3d.cpu.pybind.t.geometry.RGBDImage #
Project a colored point cloud to an RGBD image.
- random_down_sample(self, sampling_ratio)#
Downsample a point cloud by randomly selecting a subset of the points and their attributes.
- Parameters:
sampling_ratio (float) – Sampling ratio, the ratio of sample to total number of points in the pointcloud.
- Returns:
open3d.t.geometry.PointCloud
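Example
A minimal sketch that keeps roughly half of the points:
import open3d as o3d

pcd = o3d.t.io.read_point_cloud(o3d.data.PCDPointCloud().path)
pcd_down = pcd.random_down_sample(sampling_ratio=0.5)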
- remove_duplicated_points(self: open3d.cpu.pybind.t.geometry.PointCloud) tuple[open3d.cpu.pybind.t.geometry.PointCloud, open3d.cpu.pybind.core.Tensor] #
Remove duplicated points and their associated attributes.
- remove_non_finite_points(self: open3d.cpu.pybind.t.geometry.PointCloud, remove_nan: bool = True, remove_infinite: bool = True) tuple[open3d.cpu.pybind.t.geometry.PointCloud, open3d.cpu.pybind.core.Tensor] #
Remove all points from the point cloud that have a NaN entry or an infinite value. The corresponding attributes are also removed.
- Parameters:
remove_nan – Remove NaN values from the PointCloud.
remove_infinite – Remove infinite values from the PointCloud.
- Returns:
Tuple of filtered point cloud and boolean mask tensor for selected values w.r.t. input point cloud.
- remove_radius_outliers(self, nb_points, search_radius)#
Remove points that have less than nb_points neighbors in a sphere of a given search radius.
- Parameters:
nb_points (int) – Number of neighbor points required within the radius.
search_radius (float) – Radius of the sphere.
- Returns:
Tuple of filtered point cloud and boolean mask tensor for selected values w.r.t. the input point cloud: tuple[open3d.t.geometry.PointCloud, open3d.core.Tensor]
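Example
A minimal sketch; the parameter values are illustrative:
import open3d as o3d

pcd = o3d.t.io.read_point_cloud(o3d.data.PCDPointCloud().path)
filtered, mask = pcd.remove_radius_outliers(nb_points=16, search_radius=0.05)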
- remove_statistical_outliers(self: open3d.cpu.pybind.t.geometry.PointCloud, nb_neighbors: int, std_ratio: float) tuple[open3d.cpu.pybind.t.geometry.PointCloud, open3d.cpu.pybind.core.Tensor] #
Remove points that are further away from their nb_neighbors nearest neighbors on average. This function is not recommended to use on GPU.
- Parameters:
nb_neighbors – Number of neighbors around the target point.
std_ratio – Standard deviation ratio.
- Returns:
Tuple of filtered point cloud and boolean mask tensor for selected values w.r.t. input point cloud.
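Example
A minimal sketch; the parameter values are illustrative:
import open3d as o3d

pcd = o3d.t.io.read_point_cloud(o3d.data.PCDPointCloud().path)
filtered, mask = pcd.remove_statistical_outliers(nb_neighbors=20, std_ratio=2.0)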
- rotate(self: open3d.cpu.pybind.t.geometry.PointCloud, R: open3d.cpu.pybind.core.Tensor, center: open3d.cpu.pybind.core.Tensor) open3d.cpu.pybind.t.geometry.PointCloud #
Rotate points and normals (if they exist).
- scale(self: open3d.cpu.pybind.t.geometry.PointCloud, scale: float, center: open3d.cpu.pybind.core.Tensor) open3d.cpu.pybind.t.geometry.PointCloud #
Scale points.
- segment_plane(self: open3d.cpu.pybind.t.geometry.PointCloud, distance_threshold: float = 0.01, ransac_n: int = 3, num_iterations: int = 100, probability: float = 0.999) tuple[open3d.cpu.pybind.core.Tensor, open3d.cpu.pybind.core.Tensor] #
Segments a plane in the point cloud using the RANSAC algorithm. This is a wrapper for a CPU implementation; a copy of the point cloud data, the resulting plane model, and the inlier indices will be made.
- Parameters:
distance_threshold (default 0.01) – Max distance a point can be from the plane model, and still be considered an inlier.
ransac_n (default 3) – Number of initial points to be considered inliers in each iteration.
num_iterations (default 100) – Maximum number of iterations.
probability (default 0.999) – Expected probability of finding the optimal plane.
- Returns:
Tuple of the plane model ax + by + cz + d = 0 and the indices of the plane inliers on the same device as the point cloud.
Example
We use the Redwood dataset to compute its plane model and inliers:
sample_pcd_data = o3d.data.PCDPointCloud()
pcd = o3d.t.io.read_point_cloud(sample_pcd_data.path)
plane_model, inliers = pcd.segment_plane(distance_threshold=0.01,
                                         ransac_n=3,
                                         num_iterations=1000)
inlier_cloud = pcd.select_by_index(inliers)
inlier_cloud = inlier_cloud.paint_uniform_color([1.0, 0, 0])
outlier_cloud = pcd.select_by_index(inliers, invert=True)
o3d.visualization.draw([inlier_cloud, outlier_cloud])
- select_by_index(self, indices, invert=False, remove_duplicates=False)#
Select points from the input point cloud using the given indices and return them as a new point cloud.
- Parameters:
indices (open3d.core.Tensor) – Int64 indexing tensor of shape {n,} containing the indices of the points to select.
invert (bool, optional, default=False) – Set to True to invert the selection of indices, and also ignore the duplicated indices.
remove_duplicates (bool, optional, default=False) – Set to True to remove the duplicated indices.
- Returns:
open3d.t.geometry.PointCloud
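Example
A minimal sketch with illustrative indices:
import open3d as o3d

pcd = o3d.t.io.read_point_cloud(o3d.data.PCDPointCloud().path)
indices = o3d.core.Tensor([0, 2, 4], o3d.core.int64)
selected = pcd.select_by_index(indices)
rest = pcd.select_by_index(indices, invert=True)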
- select_by_mask(self, boolean_mask, invert=False)#
Select points from the input point cloud using a boolean mask and return them as a new point cloud.
- Parameters:
boolean_mask (open3d.core.Tensor) – Boolean indexing tensor of shape {n,} containing True for the points to select.
invert (bool, optional, default=False) – Set to True to invert the selection of indices.
- Returns:
open3d.t.geometry.PointCloud
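Example
A minimal sketch; the threshold on the z coordinate is illustrative:
import open3d as o3d

pcd = o3d.t.io.read_point_cloud(o3d.data.PCDPointCloud().path)
mask = pcd.point.positions[:, 2] > 1.0  # Bool tensor of shape {n,}
selected = pcd.select_by_mask(mask)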
- to(self: open3d.cpu.pybind.t.geometry.PointCloud, device: open3d.cpu.pybind.core.Device, copy: bool = False) open3d.cpu.pybind.t.geometry.PointCloud #
Transfer the point cloud to a specified device.
- to_legacy(self: open3d.cpu.pybind.t.geometry.PointCloud) open3d.cpu.pybind.geometry.PointCloud #
Convert to a legacy Open3D PointCloud.
- transform(self: open3d.cpu.pybind.t.geometry.PointCloud, transformation: open3d.cpu.pybind.core.Tensor) open3d.cpu.pybind.t.geometry.PointCloud #
Transforms the points and normals (if they exist).
- translate(self: open3d.cpu.pybind.t.geometry.PointCloud, translation: open3d.cpu.pybind.core.Tensor, relative: bool = True) open3d.cpu.pybind.t.geometry.PointCloud #
Translates points.
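Example
A minimal sketch of applying a rigid transform and a translation; the matrix and offsets are illustrative:
import open3d as o3d
import numpy as np

pcd = o3d.t.io.read_point_cloud(o3d.data.PCDPointCloud().path)
T = np.eye(4)
T[:3, 3] = [1.0, 0.0, 0.0]  # translate 1 unit along x
pcd = pcd.transform(o3d.core.Tensor(T, o3d.core.float32))
pcd = pcd.translate(o3d.core.Tensor([0., 1., 0.], o3d.core.float32))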
- uniform_down_sample(self, every_k_points)#
Downsamples a point cloud by selecting every kth index point and its attributes.
- Parameters:
every_k_points (int) – Sample rate, the selected point indices are [0, k, 2k, …].
- Returns:
open3d.t.geometry.PointCloud
- voxel_down_sample(self, voxel_size, reduction='mean')#
Downsamples a point cloud with a specified voxel size.
- Parameters:
voxel_size (float) – The size of the voxel used to downsample the point cloud. Must be a positive number.
reduction (str, optional, default='mean') – The approach to pool point properties in a voxel. Currently only "mean" is supported.
- Returns:
A downsampled point cloud with point properties reduced in each voxel.
Example
We will load the Eagle dataset, downsample it, and show the result:
eagle = o3d.data.EaglePointCloud()
pcd = o3d.t.io.read_point_cloud(eagle.path)
pcd_down = pcd.voxel_down_sample(voxel_size=0.05)
o3d.visualization.draw([{'name': 'pcd', 'geometry': pcd},
                        {'name': 'pcd_down', 'geometry': pcd_down}])
- property device#
Returns the device of the geometry.
- property is_cpu#
Returns true if the geometry is on CPU.
- property is_cuda#
Returns true if the geometry is on CUDA.
- property material#
- property point#
Point's attributes: positions, colors, normals, etc.