open3d.ml.tf.dataloaders.TFDataloader¶
class open3d.ml.tf.dataloaders.TFDataloader(*args, dataset=None, model=None, use_cache=True, steps_per_epoch=None, preprocess=None, transform=None, get_batch_gen=None, **kwargs)¶

This class allows you to load datasets for the TensorFlow framework.
Example: This example creates a dataloader for the training split of a dataset (here SemanticKITTI):

    # `dataset` is an Open3D-ML dataset object (e.g. SemanticKITTI) and
    # `model` is a 3D ML model; both are assumed to be defined elsewhere.
    from open3d.ml.tf.dataloaders import TFDataloader

    train_split = TFDataloader(dataset=dataset.get_split('training'),
                               model=model,
                               use_cache=dataset.cfg.use_cache,
                               steps_per_epoch=dataset.cfg.get(
                                   'steps_per_epoch_train', None))
__init__(*args, dataset=None, model=None, use_cache=True, steps_per_epoch=None, preprocess=None, transform=None, get_batch_gen=None, **kwargs)¶

Initializes the class, and includes the following steps:

Checks if a preprocess method is available. If yes, uses the preprocessed data.
Checks if the cache is enabled. If yes, uses data from the cache.
Args:

dataset: The 3D ML dataset class. You can use the base dataset, sample datasets, or a custom dataset.
preprocess: The model's preprocess method.
transform: The model's transform method.
use_cache: Indicates if preprocessed data should be cached.
get_batch_gen: <NTD>
model_cfg: The configuration file of the model.
steps_per_epoch: The number of steps per epoch that indicates the batches of samples to train. If it is None, then the step number will be the number of samples in the data.
Returns:
class: The corresponding class.
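For a more self-contained illustration, the sketch below builds the dataloader from an Open3D-ML dataset and model. The dataset path and the choice of RandLANet are assumptions made for the example, not requirements of this class.

    import open3d.ml.tf as ml3d
    from open3d.ml.tf.dataloaders import TFDataloader

    # Hypothetical dataset location; any Open3D-ML dataset works here.
    dataset = ml3d.datasets.SemanticKITTI(dataset_path='/path/to/SemanticKITTI')
    model = ml3d.models.RandLANet()

    # The model supplies the preprocessing pipeline; caching stores the
    # preprocessed samples so they are computed only once.
    train_split = TFDataloader(dataset=dataset.get_split('training'),
                               model=model,
                               use_cache=True,
                               steps_per_epoch=None)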
get_loader(batch_size=1, num_threads=3)¶

Constructs the TensorFlow dataloader.
- Parameters
batch_size – The batch size to be used for data loading.
num_threads – The number of parallel threads to be used for data loading.
- Returns
The TensorFlow dataloader and the number of steps in one epoch.
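A minimal usage sketch, assuming the train_split dataloader from the earlier example; the returned loader is a regular tf.data pipeline, so it can be iterated directly:

    # get_loader() returns the TensorFlow data pipeline together with the
    # number of steps that make up one epoch.
    train_loader, steps_per_epoch = train_split.get_loader(batch_size=4,
                                                           num_threads=3)

    for step, batch in enumerate(train_loader):
        # `batch` holds the batched tensors produced by the model's batch
        # generator; pass them to the training step here.
        if step + 1 >= steps_per_epoch:
            break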
read_data(index)¶

Returns the data at the given index.
- This does one of the following:
If cache is available, then gets the data from the cache.
If preprocess is available, then gets the preprocessed dataset and then the data.
If neither the cache nor a preprocess method is available, then gets the data directly from the dataset.
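A brief sketch of direct access, again assuming the train_split dataloader from above; the exact structure of the returned sample depends on the dataset and the model's preprocess method:

    # Fetch the first sample; the dataloader transparently decides whether
    # it comes from the cache, the preprocess method, or the raw dataset.
    sample = train_split.read_data(0)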