open3d.ml.torch.pipelines.SemanticSegmentation#
- class open3d.ml.torch.pipelines.SemanticSegmentation(model, dataset=None, name='SemanticSegmentation', batch_size=4, val_batch_size=4, test_batch_size=3, max_epoch=100, learning_rate=0.01, lr_decays=0.95, save_ckpt_freq=20, adam_lr=0.01, scheduler_gamma=0.95, momentum=0.98, main_log_dir='./logs/', device='cuda', split='train', train_sum_dir='train_log', **kwargs)#
This class allows you to perform semantic segmentation for both training and inference using PyTorch. The pipeline has multiple stages: pre-processing, loading the dataset, testing, and inference or training.
- Example:
This example creates a SemanticSegmentation pipeline and performs training using the SemanticKITTI dataset.
    from open3d.ml.torch.dataloaders import TorchDataloader
    from open3d.ml.torch.pipelines import SemanticSegmentation

    # `dataset` and `model` are assumed to be an Open3D-ML dataset and model
    # created beforehand (e.g. SemanticKITTI and RandLANet).
    Mydataset = TorchDataloader(dataset=dataset.get_split('training'))
    pipeline = SemanticSegmentation(model, dataset=Mydataset,
                                    name='MySemanticSegmentation',
                                    batch_size=4, val_batch_size=4, test_batch_size=3,
                                    max_epoch=100, learning_rate=1e-2, lr_decays=0.95,
                                    save_ckpt_freq=20, adam_lr=1e-2, scheduler_gamma=0.95,
                                    momentum=0.98, main_log_dir='./logs/',
                                    device='cuda', split='train', train_sum_dir='train_log')
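Training can then be started with pipeline.run_train() (see the method documentation below).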
- Args:
dataset: The 3D ML dataset class. You can use the base dataset, sample datasets, or a custom dataset.
model: The model to be used for building the pipeline.
name: The name of the current training run.
batch_size: The batch size to be used for training.
val_batch_size: The batch size to be used for validation.
test_batch_size: The batch size to be used for testing.
max_epoch: The maximum number of epochs to be used for training.
learning_rate: The hyperparameter that controls the weight updates during training. Also known as the step size.
lr_decays: The learning rate decay for the training.
save_ckpt_freq: The frequency at which the checkpoint should be saved.
adam_lr: The learning rate to be applied for Adam optimization.
scheduler_gamma: The decaying factor associated with the scheduler.
momentum: The momentum that accelerates the training rate schedule.
main_log_dir: The directory where logs are stored.
device: The device to be used for training.
split: The dataset split to be used. In this example, we have used 'train'.
train_sum_dir: The directory where the training summary is stored.
- Returns:
class: The corresponding class.
- __init__(model, dataset=None, name='SemanticSegmentation', batch_size=4, val_batch_size=4, test_batch_size=3, max_epoch=100, learning_rate=0.01, lr_decays=0.95, save_ckpt_freq=20, adam_lr=0.01, scheduler_gamma=0.95, momentum=0.98, main_log_dir='./logs/', device='cuda', split='train', train_sum_dir='train_log', **kwargs)#
Initialize.
- Parameters:
model – A network model.
dataset – A dataset, or None for inference model.
device – ‘cuda’ or ‘cpu’.
distributed – Whether to use multiple GPUs.
kwargs – Additional keyword arguments.
- Returns:
The corresponding class.
- Return type:
class
- get_3d_summary(results, input_data, epoch, save_gt=True)#
Create visualization for network inputs and outputs.
- Parameters:
results – Model output (see below).
input_data – Model input (see below).
epoch (int) – The current training step (epoch).
save_gt (bool) – Save ground truth (for ‘train’ or ‘valid’ stages).
- RandLaNet:
results (Tensor(B, N, C)): Prediction scores for all classes.
inputs_batch (Dict): Batch of pointclouds and labels with keys:
    'xyz': First element is Tensor(B, N, 3) points.
    'labels': (B, N) labels (optional).
- SparseConvUNet:
results (Tensor(SN, C)): Prediction scores for all classes. SN is the total number of points in the batch.
input_batch (Dict): Batch of pointclouds and labels. Keys should be:
    'point' [Tensor(SN, 3), float]: Concatenated points.
    'batch_lengths' [Tensor(B,), int]: Number of points in each point cloud of the batch.
    'label' [Tensor(SN,), optional]: Concatenated labels.
- Returns:
[Dict] Visualizations of inputs and outputs suitable to save as an Open3D for TensorBoard summary.
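A minimal sketch of the RandLaNet input format described above; the shapes, tensor contents, and the pipeline object are illustrative assumptions:

    import torch

    # Hypothetical shapes: B=2 point clouds, N=4096 points, C=19 classes.
    results = torch.rand(2, 4096, 19)               # per-point class scores
    input_data = {
        'xyz': [torch.rand(2, 4096, 3)],            # first element holds the points
        'labels': torch.randint(0, 19, (2, 4096)),  # optional ground-truth labels
    }
    summary = pipeline.get_3d_summary(results, input_data, epoch=0, save_gt=True)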
- get_batcher(device, split='training')#
Get the batcher to be used based on the device and split.
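For example, assuming pipeline is a constructed SemanticSegmentation instance:

    # Returns the batcher (e.g. DefaultBatcher or ConcatBatcher) that matches
    # the model's configured batching scheme.
    batcher = pipeline.get_batcher(device='cuda', split='training')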
- load_ckpt(ckpt_path=None, is_resume=True)#
Load a checkpoint. Pass the checkpoint path and indicate whether you want to resume training from it.
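A short sketch; the checkpoint path is hypothetical, and passing ckpt_path=None is expected to fall back to the latest checkpoint in the log directory when resuming:

    # Resume training from the most recent checkpoint.
    pipeline.load_ckpt(ckpt_path=None, is_resume=True)

    # Load a specific checkpoint file (hypothetical path) without resuming.
    pipeline.load_ckpt(ckpt_path='./logs/my_ckpt.pth', is_resume=False)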
- run_inference(data)#
Run inference on given data.
- Parameters:
data – Raw data on which to run inference.
- Returns:
Returns the inference results.
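A minimal sketch, assuming the model expects the typical Open3D-ML point cloud keys ('point', 'feat', 'label'); the array shapes are illustrative:

    import numpy as np

    data = {
        'point': np.random.rand(1000, 3).astype(np.float32),  # XYZ coordinates
        'feat': None,                                         # optional per-point features
        'label': np.zeros(1000, dtype=np.int32),              # placeholder labels
    }
    results = pipeline.run_inference(data)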
- run_test()#
Run testing on the test split of the dataset.
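For example:

    # Evaluate the model on the dataset's test split.
    pipeline.run_test()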
- run_train()#
Run training on the training split of the dataset.
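For example:

    # Train for up to max_epoch epochs, validating along the way and saving
    # a checkpoint every save_ckpt_freq epochs.
    pipeline.run_train()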
- save_ckpt(epoch)#
Save a checkpoint at the passed epoch.
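A one-line sketch; the checkpoint location is determined by the pipeline's log directory settings:

    # Write a checkpoint for epoch 20.
    pipeline.save_ckpt(epoch=20)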
- save_config(writer)#
Save the experiment configuration with the TensorBoard summary.
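A sketch using a standard PyTorch SummaryWriter; the log path mirrors the defaults shown above:

    from torch.utils.tensorboard import SummaryWriter

    writer = SummaryWriter('./logs/train_log')
    pipeline.save_config(writer)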
- save_logs(writer, epoch)#
Save logs from the training and send results to TensorBoard.
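Reusing the writer from the previous sketch:

    # Send the metrics accumulated for epoch 5 to TensorBoard.
    pipeline.save_logs(writer, epoch=5)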
- update_tests(sampler, inputs, results)#
Update the test results for the given sampler using the latest inputs and model results.