[1]:
import open3d.core as o3c
import numpy as np
Tensor¶
Tensor is a “view” of a data Blob with shape, stride, and a data pointer. It is a multidimensional and homogeneous matrix containing elements of a single data type. It is used in Open3D to perform numerical operations. It supports GPU operations as well.
Tensor creation¶
Tensors can be created from a list, a numpy array, or another tensor. A tensor of a specific data type and device can be constructed by passing an o3c.Dtype and/or o3c.Device to the constructor. If not passed, the data type is inferred from the data, and the device defaults to CPU. Note that when creating a tensor from a list or numpy array, the underlying memory is not shared and a copy is made.
[1]:
# Tensor from list.
a = o3c.Tensor([0, 1, 2])
print("Created from list:\n{}".format(a))
# Tensor from Numpy.
a = o3c.Tensor(np.array([0, 1, 2]))
print("\nCreated from numpy array:\n{}".format(a))
# Dtype and device inferred from the list.
a_float = o3c.Tensor([0.0, 1.0, 2.0])
print("\nDefault dtype and device:\n{}".format(a_float))
# Specify dtype.
a = o3c.Tensor(np.array([0, 1, 2]), dtype=o3c.Dtype.Float64)
print("\nSpecified data type:\n{}".format(a))
# Specify device.
a = o3c.Tensor(np.array([0, 1, 2]), device=o3c.Device("CUDA:0"))
print("\nSpecified device:\n{}".format(a))
Created from list:
[0 1 2]
Tensor[shape={3}, stride={1}, Int64, CPU:0, 0x564f36b7a520]
Created from numpy array:
[0 1 2]
Tensor[shape={3}, stride={1}, Int64, CPU:0, 0x564f36dd6610]
Default dtype and device:
[0.0 1.0 2.0]
Tensor[shape={3}, stride={1}, Float64, CPU:0, 0x564f36ed7630]
Specified data type:
[0.0 1.0 2.0]
Tensor[shape={3}, stride={1}, Float64, CPU:0, 0x564f369467a0]
Tensor can also be created from another tensor by invoking the copy constructor. This is a shallow copy: the data_ptr is copied, but the memory it points to is not.
[3]:
# Shallow copy constructor.
vals = np.array([1, 2, 3])
src = o3c.Tensor(vals)
dst = src
src[0] += 10
# Changes in one will get reflected in other.
print("Source tensor:\n{}".format(src))
print("\nTarget tensor:\n{}".format(dst))
Source tensor:
[11 2 3]
Tensor[shape={3}, stride={1}, Int64, CPU:0, 0x56324c5a1440]
Target tensor:
[11 2 3]
Tensor[shape={3}, stride={1}, Int64, CPU:0, 0x56324c5a1440]
Properties of a tensor¶
[4]:
vals = np.array(range(24)).reshape((2, 3, 4))
a = o3c.Tensor(vals, dtype=o3c.Dtype.Float64, device=o3c.Device("CUDA:0"))
print(f"a.shape: {a.shape}")
print(f"a.strides: {a.strides}")
print(f"a.dtype: {a.dtype}")
print(f"a.device: {a.device}")
print(f"a.ndim: {a.ndim}")
a.shape: {2, 3, 4}
a.strides: {12, 4, 1}
a.dtype: Dtype.Float64
a.device: CUDA:0
a.ndim: 3
Copy & device transfer¶
We can transfer tensors between the host and devices, and between devices.
[5]:
# Host -> Device.
a_cpu = o3c.Tensor([0, 1, 2])
a_gpu = a_cpu.cuda(0)
print(a_gpu)
# Device -> Host.
a_gpu = o3c.Tensor([0, 1, 2], device=o3c.Device("CUDA:0"))
a_cpu = a_gpu.cpu()
print(a_cpu)
# Device -> another Device.
a_gpu_0 = o3c.Tensor([0, 1, 2], device=o3c.Device("CUDA:0"))
a_gpu_1 = a_gpu_0.cuda(0)
print(a_gpu_1)
[0 1 2]
Tensor[shape={3}, stride={1}, Int64, CUDA:0, 0x7f7733c00000]
[0 1 2]
Tensor[shape={3}, stride={1}, Int64, CPU:0, 0x56324e54e410]
[0 1 2]
Tensor[shape={3}, stride={1}, Int64, CUDA:0, 0x7f7733c00600]
Data Types¶
Open3D defines seven tensor data types.
Data type | dtype | byte_size
---|---|---
Uninitialized Tensor | o3c.Dtype.Undefined | -
32-bit floating point | o3c.Dtype.Float32 | 4
64-bit floating point | o3c.Dtype.Float64 | 8
32-bit integer (signed) | o3c.Dtype.Int32 | 4
64-bit integer (signed) | o3c.Dtype.Int64 | 8
8-bit integer (unsigned) | o3c.Dtype.UInt8 | 1
Boolean | o3c.Dtype.Bool | 1
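Each initialized dtype above can be passed to the Tensor constructor (using the imports from the first cell). A minimal sketch, assuming integer list values are cast to the requested dtype, including Bool:
for dtype in (o3c.Dtype.Float32, o3c.Dtype.Float64, o3c.Dtype.Int32,
              o3c.Dtype.Int64, o3c.Dtype.UInt8, o3c.Dtype.Bool):
    # Construct a small tensor of each dtype and inspect it.
    t = o3c.Tensor([0, 1, 1], dtype=dtype)
    print(t.dtype, t)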
Type casting¶
We can cast a tensor’s data type. Forced casting might result in data loss.
[6]:
# E.g. float -> int
a = o3c.Tensor([0.1, 1.5, 2.7])
b = a.to(o3c.Dtype.Int32)
print(a)
print(b)
[0.1 1.5 2.7]
Tensor[shape={3}, stride={1}, Float64, CPU:0, 0x56324be46b60]
[0 1 2]
Tensor[shape={3}, stride={1}, Int32, CPU:0, 0x56324e54f440]
[7]:
# E.g. int -> float
a = o3c.Tensor([1, 2, 3])
b = a.to(o3c.Dtype.Float32)
print(a)
print(b)
[1 2 3]
Tensor[shape={3}, stride={1}, Int64, CPU:0, 0x56324be46b40]
[1.0 2.0 3.0]
Tensor[shape={3}, stride={1}, Float32, CPU:0, 0x56324b6ff3d0]
Numpy I/O with direct memory map¶
Tensors created by passing a numpy array to the constructor (o3c.Tensor(np.array(...))) do not share memory with the numpy array. To share memory, use o3c.Tensor.from_numpy(...) and o3c.Tensor.numpy(...). Changes in one will be reflected in the other.
[8]:
# Using constructor.
np_a = np.ones((5,), dtype=np.int32)
o3_a = o3c.Tensor(np_a)
print(f"np_a: {np_a}")
print(f"o3_a: {o3_a}")
print("")
# Changes to numpy array will not reflect as memory is not shared.
np_a[0] += 100
o3_a[1] += 200
print(f"np_a: {np_a}")
print(f"o3_a: {o3_a}")
np_a: [1 1 1 1 1]
o3_a: [1 1 1 1 1]
Tensor[shape={5}, stride={1}, Int32, CPU:0, 0x56324e4eb680]
np_a: [101 1 1 1 1]
o3_a: [1 201 1 1 1]
Tensor[shape={5}, stride={1}, Int32, CPU:0, 0x56324e4eb680]
[9]:
# From numpy.
np_a = np.ones((5,), dtype=np.int32)
o3_a = o3c.Tensor.from_numpy(np_a)
# Changes to the numpy array are reflected in the open3d Tensor and vice versa.
np_a[0] += 100
o3_a[1] += 200
print(f"np_a: {np_a}")
print(f"o3_a: {o3_a}")
np_a: [101 201 1 1 1]
o3_a: [101 201 1 1 1]
Tensor[shape={5}, stride={1}, Int32, CPU:0, 0x56324e54edd0]
[10]:
# To numpy.
o3_a = o3c.Tensor([1, 1, 1, 1, 1], dtype=o3c.Dtype.Int32)
np_a = o3_a.numpy()
# Changes to the numpy array are reflected in the open3d Tensor and vice versa.
np_a[0] += 100
o3_a[1] += 200
print(f"np_a: {np_a}")
print(f"o3_a: {o3_a}")
# For CUDA Tensor, call cpu() before calling numpy().
o3_a = o3c.Tensor([1, 1, 1, 1, 1], device=o3c.Device("CUDA:0"))
print(f"\no3_a.cpu().numpy(): {o3_a.cpu().numpy()}")
np_a: [101 201 1 1 1]
o3_a: [101 201 1 1 1]
Tensor[shape={5}, stride={1}, Int32, CPU:0, 0x56324e54ff30]
o3_a.cpu().numpy(): [1 1 1 1 1]
PyTorch I/O with DLPack memory map¶
We can convert tensors from/to DLManagedTensor.
[11]:
import torch
import torch.utils.dlpack
# From PyTorch
th_a = torch.ones((5,)).cuda(0)
o3_a = o3c.Tensor.from_dlpack(torch.utils.dlpack.to_dlpack(th_a))
print(f"th_a: {th_a}")
print(f"o3_a: {o3_a}")
print("")
# Changes to the PyTorch tensor are reflected in the open3d Tensor and vice versa.
th_a[0] = 100
o3_a[1] = 200
print(f"th_a: {th_a}")
print(f"o3_a: {o3_a}")
th_a: tensor([1., 1., 1., 1., 1.], device='cuda:0')
o3_a: [1.0 1.0 1.0 1.0 1.0]
Tensor[shape={5}, stride={1}, Float32, CUDA:0, 0x7f768aa00000]
th_a: tensor([100., 200., 1., 1., 1.], device='cuda:0')
o3_a: [100.0 200.0 1.0 1.0 1.0]
Tensor[shape={5}, stride={1}, Float32, CUDA:0, 0x7f768aa00000]
[12]:
# To PyTorch
o3_a = o3c.Tensor([1, 1, 1, 1, 1], device=o3c.Device("CUDA:0"))
th_a = torch.utils.dlpack.from_dlpack(o3_a.to_dlpack())
o3_a = o3c.Tensor.from_dlpack(torch.utils.dlpack.to_dlpack(th_a))
print(f"th_a: {th_a}")
print(f"o3_a: {o3_a}")
print("")
# Changes to the PyTorch tensor are reflected in the open3d Tensor and vice versa.
th_a[0] = 100
o3_a[1] = 200
print(f"th_a: {th_a}")
print(f"o3_a: {o3_a}")
th_a: tensor([1, 1, 1, 1, 1], device='cuda:0')
o3_a: [1 1 1 1 1]
Tensor[shape={5}, stride={1}, Int64, CUDA:0, 0x7f7733c00200]
th_a: tensor([100, 200, 1, 1, 1], device='cuda:0')
o3_a: [100 200 1 1 1]
Tensor[shape={5}, stride={1}, Int64, CUDA:0, 0x7f7733c00200]
Binary element-wise operation:¶
Supported element-wise binary operations are:

1. Add (+)
2. Sub (-)
3. Mul (*)
4. Div (/)
5. Add_ (+=)
6. Sub_ (-=)
7. Mul_ (*=)
8. Div_ (/=)

Note that the operands must be on the same device, have the same dtype, and be broadcast-compatible. The in-place variants are sketched after the example below.
[13]:
a = o3c.Tensor([1, 1, 1], dtype=o3c.Dtype.Float32)
b = o3c.Tensor([2, 2, 2], dtype=o3c.Dtype.Float32)
print("a + b = {}".format(a + b))
print("a - b = {}".format(a - b))
print("a * b = {}".format(a * b))
print("a / b = {}".format(a / b))
a + b = [3.0 3.0 3.0]
Tensor[shape={3}, stride={1}, Float32, CPU:0, 0x5632ae1b2cf0]
a - b = [-1.0 -1.0 -1.0]
Tensor[shape={3}, stride={1}, Float32, CPU:0, 0x5632ae161590]
a * b = [2.0 2.0 2.0]
Tensor[shape={3}, stride={1}, Float32, CPU:0, 0x56324fc38320]
a / b = [0.5 0.5 0.5]
Tensor[shape={3}, stride={1}, Float32, CPU:0, 0x56324fc8e1c0]
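The in-place variants listed above (Add_, Sub_, Mul_, Div_) write the result back into the left operand instead of allocating a new tensor. A short sketch using the corresponding Python operators:
a = o3c.Tensor([1, 1, 1], dtype=o3c.Dtype.Float32)
b = o3c.Tensor([2, 2, 2], dtype=o3c.Dtype.Float32)
a += b  # Add_: the result is written into a; no new tensor is allocated.
a *= b  # Mul_: likewise in place.
print(a)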
Broadcasting follows the same rules as numpy broadcasting. Automatic type casting is done in a way that avoids data loss.
[14]:
# Automatic broadcasting.
a = o3c.Tensor.ones((2, 3), dtype=o3c.Dtype.Float32)
b = o3c.Tensor.ones((3,), dtype=o3c.Dtype.Float32)
print("a + b = \n{}\n".format(a + b))
# Automatic type casting.
a = a[0]
print("a + 1 = {}".format(a + 1)) # Float + Int -> Float.
print("a + True = {}".format(a + True)) # Float + Bool -> Float.
# Inplace.
a -= True
print("a = {}".format(a))
a + b =
[[2.0 2.0 2.0],
[2.0 2.0 2.0]]
Tensor[shape={2, 3}, stride={3, 1}, Float32, CPU:0, 0x5632ae181cf0]
a + 1 = [2.0 2.0 2.0]
Tensor[shape={3}, stride={1}, Float32, CPU:0, 0x5632ae1568a0]
a + True = [2.0 2.0 2.0]
Tensor[shape={3}, stride={1}, Float32, CPU:0, 0x5632ae1f1070]
a = [0.0 0.0 0.0]
Tensor[shape={3}, stride={1}, Float32, CPU:0, 0x56324d3a1420]
Unary element-wise operation:¶
Supported unary element-wise operations are:

1. sqrt, sqrt_ (inplace)
2. sin, sin_
3. cos, cos_
4. neg, neg_
5. exp, exp_
6. abs, abs_

The operations not shown in the next example are sketched after it.
[15]:
a = o3c.Tensor([4, 9, 16], dtype=o3c.Dtype.Float32)
print("a = {}\n".format(a))
print("a.sqrt = {}\n".format(a.sqrt()))
print("a.sin = {}\n".format(a.sin()))
print("a.cos = {}\n".format(a.cos()))
# Inplace operation
a.sqrt_()
print(a)
a = [4.0 9.0 16.0]
Tensor[shape={3}, stride={1}, Float32, CPU:0, 0x5632ae1c8730]
a.sqrt = [2.0 3.0 4.0]
Tensor[shape={3}, stride={1}, Float32, CPU:0, 0x5632ae1d5fa0]
a.sin = [-0.756802 0.412118 -0.287903]
Tensor[shape={3}, stride={1}, Float32, CPU:0, 0x5632ae176fd0]
a.cos = [-0.653644 -0.91113 -0.957659]
Tensor[shape={3}, stride={1}, Float32, CPU:0, 0x5632ae1ad660]
[2.0 3.0 4.0]
Tensor[shape={3}, stride={1}, Float32, CPU:0, 0x5632ae1c8730]
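The remaining unary operations from the list (neg, exp, abs and their in-place variants) follow the same pattern. A short sketch:
a = o3c.Tensor([-1.0, 0.0, 2.0], dtype=o3c.Dtype.Float32)
print(a.neg())  # Element-wise negation.
print(a.exp())  # Element-wise exponential.
print(a.abs())  # Element-wise absolute value.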
Reduction:¶
Open3D supports the following reduction operations:

1. sum - returns a tensor with the sum of values over a given axis.
2. mean - returns a tensor with the mean of values over a given axis.
3. prod - returns a tensor with the product of values over a given axis.
4. min - returns a tensor of minimum values along a given axis.
5. max - returns a tensor of maximum values along a given axis.
6. argmin - returns a tensor of minimum-value indices over a given axis.
7. argmax - returns a tensor of maximum-value indices over a given axis.

The reductions not shown in the next example are sketched after it.
[16]:
vals = np.array(range(24)).reshape((2, 3, 4))
a = o3c.Tensor(vals)
print("a.sum = {}\n".format(a.sum()))
print("a.min = {}\n".format(a.min()))
print("a.ArgMax = {}\n".format(a.argmax()))
a.sum = 276
Tensor[shape={}, stride={}, Int64, CPU:0, 0x5632ae1640d0]
a.min = 0
Tensor[shape={}, stride={}, Int64, CPU:0, 0x5632ae201e20]
a.ArgMax = 23
Tensor[shape={}, stride={}, Int64, CPU:0, 0x5632ae17f1a0]
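The reductions not shown above (mean, prod, max, argmin) work the same way. A short sketch; it is assumed here that mean requires a floating-point dtype, so the tensor is cast first:
vals = np.array(range(24)).reshape((2, 3, 4))
a = o3c.Tensor(vals)
print(a.prod())    # Product over all elements.
print(a.max())     # Maximum over all elements.
print(a.argmin())  # Index of the minimum in the flattened tensor.
print(a.to(o3c.Dtype.Float64).mean())  # Cast to float before taking the mean.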
[17]:
# With specified dimension.
vals = np.array(range(24)).reshape((2, 3, 4))
a = o3c.Tensor(vals)
print("Along dim=0\n{}".format(a.sum(dim=(0))))
print("Along dim=(0, 2)\n{}\n".format(a.sum(dim=(0, 2))))
# Retention of reduced dimension.
print("Shape without retention : {}".format(a.sum(dim=(0, 2)).shape))
print("Shape with retention : {}".format(a.sum(dim=(0, 2), keepdim=True).shape))
Along dim=0
[[12 14 16 18],
[20 22 24 26],
[28 30 32 34]]
Tensor[shape={3, 4}, stride={4, 1}, Int64, CPU:0, 0x5632accdb2b0]
Along dim=(0, 2)
[60 92 124]
Tensor[shape={3}, stride={1}, Int64, CPU:0, 0x56324fcde960]
Shape without retention : {3}
Shape with retention : {1, 3, 1}
Slicing, indexing, getitem, and setitem¶
Basic slicing is done by passing an integer or a slice object (start:stop:step); index arrays and boolean arrays trigger advanced indexing, which is covered below. Slicing and basic indexing produce a view of the tensor, so any change to the view is also reflected in the original tensor.
[18]:
vals = np.array(range(24)).reshape((2, 3, 4))
a = o3c.Tensor(vals)
print("a = \n{}\n".format(a))
# Indexing __getitem__.
print("a[1, 2] = {}\n".format(a[1, 2]))
# Slicing __getitem__.
print("a[1:] = \n{}\n".format(a[1:]))
# slice object.
print("a[:, 0:3:2, :] = \n{}\n".format(a[:, 0:3:2, :]))
# Combined __getitem__
print("a[:-1, 0:3:2, 2] = \n{}\n".format(a[:-1, 0:3:2, 2]))
a =
[[[0 1 2 3],
[4 5 6 7],
[8 9 10 11]],
[[12 13 14 15],
[16 17 18 19],
[20 21 22 23]]]
Tensor[shape={2, 3, 4}, stride={12, 4, 1}, Int64, CPU:0, 0x56324fd0d920]
a[1, 2] = [20 21 22 23]
Tensor[shape={4}, stride={1}, Int64, CPU:0, 0x56324fd0d9c0]
a[1:] =
[[[12 13 14 15],
[16 17 18 19],
[20 21 22 23]]]
Tensor[shape={1, 3, 4}, stride={12, 4, 1}, Int64, CPU:0, 0x56324fd0d980]
a[:, 0:3:2, :] =
[[[0 1 2 3],
[8 9 10 11]],
[[12 13 14 15],
[20 21 22 23]]]
Tensor[shape={2, 2, 4}, stride={12, 8, 1}, Int64, CPU:0, 0x56324fd0d920]
a[:-1, 0:3:2, 2] =
[[2 10]]
Tensor[shape={1, 2}, stride={12, 8}, Int64, CPU:0, 0x56324fd0d930]
[19]:
vals = np.array(range(24)).reshape((2, 3, 4))
a = o3c.Tensor(vals)
# Changes get reflected.
b = a[:-1, 0:3:2, 2]
b[0] += 100
print("b = {}\n".format(b))
print("a = \n{}".format(a))
b = [[102 110]]
Tensor[shape={1, 2}, stride={12, 8}, Int64, CPU:0, 0x56324fd0da00]
a =
[[[0 1 102 3],
[4 5 6 7],
[8 9 110 11]],
[[12 13 14 15],
[16 17 18 19],
[20 21 22 23]]]
Tensor[shape={2, 3, 4}, stride={12, 4, 1}, Int64, CPU:0, 0x56324fd0d9f0]
[20]:
vals = np.array(range(24)).reshape((2, 3, 4))
a = o3c.Tensor(vals)
# Example __setitem__
a[:, :, 2] += 100
print(a)
[[[0 1 102 3],
[4 5 106 7],
[8 9 110 11]],
[[12 13 114 15],
[16 17 118 19],
[20 21 122 23]]]
Tensor[shape={2, 3, 4}, stride={12, 4, 1}, Int64, CPU:0, 0x56324fc3ec00]
Advanced indexing¶
Advanced indexing is triggered by passing an index array, a boolean array, or a combination of these with integer/slice objects. Note that advanced indexing always returns a copy of the data (in contrast with basic slicing, which returns a view).
Integer array indexing¶
Integer array indexing allows selection of arbitrary items in the tensor based on their dimensional indices. The index arrays passed should be broadcast-compatible (a sketch of this follows the example below).
[21]:
vals = np.array(range(24)).reshape((2, 3, 4))
a = o3c.Tensor(vals)
# Along each dimension, a specific element is selected.
print("a[[0, 1], [1, 2], [1, 0]] = {}\n".format(a[[0, 1], [1, 2], [1, 0]]))
# Changes not reflected as it is a copy.
b = a[[0, 0], [0, 1], [1, 1]]
b[0] += 100
print("b = {}\n".format(b))
print("a[[0, 0], [0, 1], [1, 1]] = {}".format(a[[0, 0], [0, 1], [1, 1]]))
a[[0, 1], [1, 2], [1, 0]] = [5 20]
Tensor[shape={2}, stride={1}, Int64, CPU:0, 0x5632ae171940]
b = [101 5]
Tensor[shape={2}, stride={1}, Int64, CPU:0, 0x56324fcd7d40]
a[[0, 0], [0, 1], [1, 1]] = [1 5]
Tensor[shape={2}, stride={1}, Int64, CPU:0, 0x5632ae1f1070]
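As noted above, the index arrays only need to be broadcast-compatible rather than the same shape. A minimal sketch, assuming numpy-style broadcasting between the index arrays:
vals = np.array(range(24)).reshape((2, 3, 4))
a = o3c.Tensor(vals)
# A (2, 1) index array broadcasts against a (2,) index array, so the
# result gathers a (2, 2) block: rows (0, 1) crossed with columns (0, 2).
print(a[[[0], [1]], [0, 2], 0])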
Combining advanced and basic indexing¶
When there is at least one slice (:), ellipsis (...), or newaxis in the index, the behaviour can be more complicated. It is like concatenating the indexing result for each advanced index element. Under the advanced indexing mode, some preprocessing is done before the operation is sent to the advanced indexing engine:

1. Specific index positions are converted to an index tensor with the specified index.
2. If a slice is not a full slice, the tensor is sliced first; then a full slice is used for the advanced indexing engine.

For example, dst = src[1, 0:2, [1, 2]] is done in two steps:

temp = src[:, 0:2, :]
dst = temp[[1], :, [1, 2]]

There are two parts to the indexing operation: the subspace defined by the basic indexing, and the subspace from the advanced indexing part. There are two cases:

1. The advanced indexes are separated by a slice, ellipsis, or newaxis, for example x[arr1, :, arr2].
2. The advanced indexes are all next to each other, for example x[..., arr1, arr2, :], but not x[arr1, :, 1], since 1 is an advanced index here.
In the first case, the dimensions resulting from the advanced indexing operation come first in the result array, and the subspace dimensions after that. In the second case, the dimensions from the advanced indexing operations are inserted into the result array at the same spot as they were in the initial array.
[22]:
vals = np.array(range(24)).reshape((2, 3, 4))
a = o3c.Tensor(vals)
print("a[1, 0:2, [1, 2]] = \n{}\n".format(a[1, 0:2, [1, 2]]))
# Subtle difference between selection and advanced indexing.
print("a[(0, 1)] = {}\n".format(a[(0, 1)]))
print("a[[0, 1] = \n{}\n".format(a[[0, 1]]))
a = o3c.Tensor(np.array(range(120)).reshape((2, 3, 4, 5)))
# Interleaving slice and advanced indexing.
print("a[1, [[1, 2], [2, 1]], 0:4:2, [3, 4]] = \n{}\n".format(
a[1, [[1, 2], [2, 1]], 0:4:2, [3, 4]]))
a[1, 0:2, [1, 2]] =
[[13 17],
[14 18]]
Tensor[shape={2, 2}, stride={2, 1}, Int64, CPU:0, 0x56324fd0abb0]
a[(0, 1)] = [4 5 6 7]
Tensor[shape={4}, stride={1}, Int64, CPU:0, 0x56324fc3ec20]
a[[0, 1]] =
[[[0 1 2 3],
[4 5 6 7],
[8 9 10 11]],
[[12 13 14 15],
[16 17 18 19],
[20 21 22 23]]]
Tensor[shape={2, 3, 4}, stride={12, 4, 1}, Int64, CPU:0, 0x5632ae46db40]
a[1, [[1, 2], [2, 1]], 0:4:2, [3, 4]] =
[[[83 93],
[104 114]],
[[103 113],
[84 94]]]
Tensor[shape={2, 2, 2}, stride={4, 2, 1}, Int64, CPU:0, 0x5632ae474df0]
Boolean array indexing¶
Advanced indexing is triggered when we pass a boolean array as an index, or when one is returned from comparison operators. The boolean array should have exactly as many dimensions as the dimensions it is supposed to index.
[23]:
a = o3c.Tensor(np.array([1, -1, -2, 3]))
print("a = {}\n".format(a))
# Add constant to all negative numbers.
a[a < 0] += 20
print("a = {}\n".format(a))
a = [1 -1 -2 3]
Tensor[shape={4}, stride={1}, Int64, CPU:0, 0x56324fd0abb0]
a = [1 19 18 3]
Tensor[shape={4}, stride={1}, Int64, CPU:0, 0x56324fd0abb0]
Logical operations¶
Open3D supports the following logical operators:

1. logical_and - returns a tensor with the element-wise logical AND.
2. logical_or - returns a tensor with the element-wise logical OR.
3. logical_xor - returns a tensor with the element-wise logical XOR.
4. logical_not - returns a tensor with the element-wise logical NOT.
5. all - returns true if all elements in the tensor are true.
6. any - returns true if any element in the tensor is true.
7. allclose - returns true if the two tensors are element-wise equal within a tolerance.
8. isclose - returns a tensor with the element-wise allclose operation.
9. issame - returns true if and only if the two tensors are the same (even the same underlying memory).
[24]:
a = o3c.Tensor(np.array([True, False, True, False]))
b = o3c.Tensor(np.array([True, True, False, False]))
print("a AND b = {}".format(a.logical_and(b)))
print("a OR b = {}".format(a.logical_or(b)))
print("a XOR b = {}".format(a.logical_xor(b)))
print("NOT a = {}\n".format(a.logical_not()))
# Only works for boolean tensors.
print("a.any = {}".format(a.any()))
print("a.all = {}\n".format(a.all()))
# If the tensor is not boolean, 0 is treated as False and non-zero as True.
# The tensor will be filled with 0 or 1 cast to the tensor's dtype.
c = o3c.Tensor(np.array([2.0, 0.0, 3.5, 0.0]))
d = o3c.Tensor(np.array([0.0, 3.0, 1.5, 0.0]))
print("c AND d = {}".format(c.logical_and(d)))
a AND b = [True False False False]
Tensor[shape={4}, stride={1}, Bool, CPU:0, 0x56324fc75d30]
a OR b = [True True True False]
Tensor[shape={4}, stride={1}, Bool, CPU:0, 0x56324fc8e1c0]
a XOR b = [False True True False]
Tensor[shape={4}, stride={1}, Bool, CPU:0, 0x5632ae1e0cc0]
NOT a = [False True False True]
Tensor[shape={4}, stride={1}, Bool, CPU:0, 0x56324e54ff10]
a.any = True
a.all = False
c AND d = [False False True False]
Tensor[shape={4}, stride={1}, Bool, CPU:0, 0x56324fd0e030]
[25]:
a = o3c.Tensor(np.array([1, 2, 3, 4]), dtype=o3c.Dtype.Float64)
b = o3c.Tensor(np.array([1, 1.99999, 3, 4]))
# Throws an exception if the device/dtype is not the same.
# Returns False if the shapes are not the same.
print("allclose : {}".format(a.allclose(b)))
# Throws an exception if the device/dtype/shape is not the same.
print("isclose : {}".format(a.isclose(b)))
# Returns False if the device/dtype/shape is not the same.
print("issame : {}".format(a.issame(b)))
allclose : True
isclose : [True True True True]
Tensor[shape={4}, stride={1}, Bool, CPU:0, 0x5632ae1e3800]
issame : False
Comparison operations¶
[26]:
a = o3c.Tensor([0, 1, -1])
b = o3c.Tensor([0, 0, 0])
print("a > b = {}".format(a > b))
print("a >= b = {}".format(a >= b))
print("a < b = {}".format(a < b))
print("a <= b = {}".format(a <= b))
print("a == b = {}".format(a == b))
print("a != b = {}".format(a != b))
# Throws an exception if the device/dtype is not the same.
# If the shapes are not the same, the tensors should be broadcast-compatible.
print("a > b = {}".format(a > b[0]))
a > b = [False True False]
Tensor[shape={3}, stride={1}, Bool, CPU:0, 0x5632ae16edf0]
a >= b = [True True False]
Tensor[shape={3}, stride={1}, Bool, CPU:0, 0x5632ae174480]
a < b = [False False True]
Tensor[shape={3}, stride={1}, Bool, CPU:0, 0x5632ada96370]
a <= b = [True False True]
Tensor[shape={3}, stride={1}, Bool, CPU:0, 0x56324fc9b020]
a == b = [True False False]
Tensor[shape={3}, stride={1}, Bool, CPU:0, 0x56324fcd7d40]
a != b = [False True True]
Tensor[shape={3}, stride={1}, Bool, CPU:0, 0x56324fc75d30]
a > b = [False True False]
Tensor[shape={3}, stride={1}, Bool, CPU:0, 0x56324e54ff10]
Nonzero operations¶
When as_tuple is False (default), nonzero returns a tensor of indices of the elements that are non-zero. Each row in the result contains the indices of a non-zero element in the input. If the input has \(n\) dimensions, then the resulting tensor is of size \((z \times n)\), where \(z\) is the total number of non-zero elements in the input tensor.

When as_tuple is True, nonzero returns a tuple of 1D tensors, one for each dimension of the input, each containing the indices of all non-zero elements of the input. If the input has \(n\) dimensions, then the resulting tuple contains \(n\) tensors of size \(z\), where \(z\) is the total number of non-zero elements in the input tensor.
[27]:
a = o3c.Tensor([[3, 0, 0], [0, 4, 0], [5, 6, 0]])
print("a = \n{}\n".format(a))
print("a.nonzero() = \n{}\n".format(a.nonzero()))
print("a.nonzero(as_tuple = 1) = \n{}".format(a.nonzero(as_tuple=1)))
a =
[[3 0 0],
[0 4 0],
[5 6 0]]
Tensor[shape={3, 3}, stride={3, 1}, Int64, CPU:0, 0x5632ae474d50]
a.nonzero() =
[[0 1 2 2],
[0 1 0 1]]
Tensor[shape={2, 4}, stride={4, 1}, Int64, CPU:0, 0x56324e4eddd0]
a.nonzero(as_tuple = 1) =
[[0 1 2 2]
Tensor[shape={4}, stride={1}, Int64, CPU:0, 0x56324fc3ecd0], [0 1 0 1]
Tensor[shape={4}, stride={1}, Int64, CPU:0, 0x5632ae474ca0]]
TensorList¶
A tensorlist is a list of tensors of the same shape, similar to std::vector<Tensor>. Internally, a tensorlist stores the tensors in one big internal tensor, where the first dimension of the internal tensor is extendable. This enables contiguous storage of, for example, 3D points and colors.
[28]:
vals = np.array(range(24), dtype=np.float32).reshape((2, 3, 4))
# Empty TensorList.
a = o3c.TensorList([3, 4])
print("a = {}".format(a))
# TensorList with single Tensor.
b = o3c.TensorList([3, 4], size=1)
print("b = {}".format(b))
a =
TensorList[size=0, shape={3, 4}, Float32, CPU:0]
b =
TensorList[size=1, shape={3, 4}, Float32, CPU:0]
from_tensor¶
We can create a tensorlist from a single tensor by breaking its first dimension into multiple tensors. The first dimension of the tensor is used as the size dimension of the tensorlist, and the remaining dimensions are used as the element shape of the tensorlist. For example, if the input tensor has shape (2, 3, 4), the resulting tensorlist has size 2 and element shape (3, 4). By default the memory is copied. If inplace=True, the tensorlist shares the same memory with the input tensor, which must be contiguous. The resulting tensorlist is not resizable, and hence we cannot do certain operations like resize, push_back, extend, concatenate, and clear.
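A minimal sketch of the inplace mode described above, assuming from_tensor accepts an inplace keyword as the description states:
t = o3c.Tensor(np.zeros((2, 3, 4), dtype=np.float32))
tl = o3c.TensorList.from_tensor(t, inplace=True)
# tl shares memory with t and is not resizable: resize, push_back,
# extend, concatenate, and clear are disallowed on this tensorlist.
print(tl)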
from_tensors¶
A tensorlist can also be created from a list of tensors. The tensors must have the same shape, dtype, and device. The values will be copied.
[29]:
vals = np.array(range(24), dtype=np.float32).reshape((2, 3, 4))
# TensorList from tensor.
c = o3c.TensorList.from_tensor(o3c.Tensor(vals))
print("from tensor = {}\n".format(c))
# TensorList from multiple tensors.
d = o3c.TensorList.from_tensors([o3c.Tensor(vals[0]), o3c.Tensor(vals[1])])
print("from tensors = {}\n".format(d))
# The operations below are only valid for a resizable tensorlist.
# Concatenate TensorLists.
print("b + c = {}".format(b + c))
print("concat(b, c) = {}\n".format(o3c.TensorList.concat(b, c)))
# Append a Tensor to TensorList.
d.push_back(o3c.Tensor(vals[0]))
print("d = {}\n".format(d))
# Append a TensorList to another TensorList.
d.extend(b)
print("extended d = {}".format(d))
from tensor =
TensorList[size=2, shape={3, 4}, Float32, CPU:0]
from tensors =
TensorList[size=2, shape={3, 4}, Float32, CPU:0]
b + c =
TensorList[size=3, shape={3, 4}, Float32, CPU:0]
concat(b, c) =
TensorList[size=3, shape={3, 4}, Float32, CPU:0]
d =
TensorList[size=3, shape={3, 4}, Float32, CPU:0]
extended d =
TensorList[size=4, shape={3, 4}, Float32, CPU:0]