open3d.ml.tf.models.KPFCNN#

class open3d.ml.tf.models.KPFCNN(*args, **kwargs)#

Class defining KPFCNN.

A model for Semantic Segmentation.

__init__(name='KPFCNN', lbl_values=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19], num_classes=19, ignored_label_inds=[0], ckpt_path=None, batcher='ConcatBatcher', architecture=['simple', 'resnetb', 'resnetb_strided', 'resnetb', 'resnetb', 'resnetb_strided', 'resnetb', 'resnetb', 'resnetb_strided', 'resnetb', 'resnetb', 'resnetb_strided', 'resnetb', 'nearest_upsample', 'unary', 'nearest_upsample', 'unary', 'nearest_upsample', 'unary', 'nearest_upsample', 'unary'], in_radius=4.0, max_in_points=100000, batch_num=8, batch_limit=30000, val_batch_num=8, num_kernel_points=15, first_subsampling_dl=0.06, conv_radius=2.5, deform_radius=6.0, KP_extent=1.2, KP_influence='linear', aggregation_mode='sum', density_parameter=5.0, first_features_dim=128, in_features_dim=2, modulated=False, use_batch_norm=True, batch_norm_momentum=0.02, deform_fitting_mode='point2point', deform_fitting_power=1.0, repulse_extent=1.2, augment_scale_anisotropic=True, augment_symmetries=[True, False, False], augment_rotation='vertical', augment_scale_min=0.8, augment_scale_max=1.2, augment_noise=0.001, augment_color=0.8, in_points_dim=3, fixed_kernel_points='center', num_layers=5, l_relu=0.1, reduce_fc=False, **kwargs)#
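As a usage sketch (not an official example), the model can be constructed through the open3d.ml.tf namespace. The keyword values below simply restate defaults from the signature above; anything dataset-specific (label values, number of classes, voxel size, input feature dimension) would normally be overridden.

```python
# Minimal construction sketch; the values shown restate defaults from the
# signature above and would normally be overridden per dataset.
import open3d.ml.tf as ml3d

model = ml3d.models.KPFCNN(
    name="KPFCNN",
    num_classes=19,
    ignored_label_inds=[0],
    in_radius=4.0,
    first_subsampling_dl=0.06,
    in_features_dim=2,
)
```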
augment_input(stacked_points, batch_inds, is_test)#
big_neighborhood_filter(neighbors, layer)#

Filter neighborhoods with a maximum number of neighbors.

The limit is set to keep XX% of the neighborhoods untouched; it is computed at initialization.
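The NumPy sketch below only illustrates the idea behind such a limit (it is not the model's internal code): choose a cap so that a given fraction of neighborhoods keep all of their neighbors, then truncate the neighbor-index matrix to that cap. The keep ratio and the arrays are made up for the example; the ratio stands in for the "XX%" placeholder above.

```python
# Illustration of the limiting idea only; keep_ratio stands in for the
# "XX%" placeholder and the arrays are made up for the example.
import numpy as np

keep_ratio = 0.8
neighbor_counts = np.array([12, 7, 30, 9, 15])        # neighbors found per point
limit = int(np.percentile(neighbor_counts, keep_ratio * 100))
neighbors = np.random.randint(0, 100, size=(5, 30))   # [num_points, max_neighbors]
filtered = neighbors[:, :limit]                       # keep at most `limit` neighbors
```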

call(flat_inputs, training=False)#
get_batch_gen(dataset, steps_per_epoch=None, batch_size=1)#
get_batch_inds(stacks_len)#

Method computing the batch indices of all points, given the batch element sizes (stack lengths).

Example: From [3, 2, 5], it would return [0, 0, 0, 1, 1, 2, 2, 2, 2, 2]
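A NumPy one-liner (illustrative only; the method itself operates on the model's internal tensors) that reproduces the example mapping above:

```python
# Reproduces the example above with NumPy (illustrative only).
import numpy as np

stacks_len = [3, 2, 5]
batch_inds = np.repeat(np.arange(len(stacks_len)), stacks_len)
print(batch_inds)  # [0 0 0 1 1 2 2 2 2 2]
```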

get_loss(Loss, logits, inputs)#

Runs the loss on the outputs of the model.

Parameters:
  • Loss – The loss object.

  • logits – The model outputs (logits) as returned by call().

  • inputs – The batch inputs containing the ground-truth labels.

Returns:

loss
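A conceptual sketch of how this method fits into a training step. `Loss` and `flat_inputs` are placeholders for the loss object and batched inputs supplied by the surrounding pipeline, and the single `loss` return value follows the docstring above.

```python
# Conceptual sketch; `Loss` and `flat_inputs` are placeholders provided by
# the surrounding training pipeline.
logits = model(flat_inputs, training=True)        # forward pass via call()
loss = model.get_loss(Loss, logits, flat_inputs)  # returns the loss (see above)
```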

get_optimizer(cfg_pipeline)#

Returns an optimizer object for the model.

Parameters:

cfg_pipeline – A Config object with the configuration of the pipeline.

Returns:

Returns a new optimizer object.
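A sketch under assumptions: `cfg_pipeline` is meant to mirror the pipeline section of an Open3D-ML config, constructing a Config directly from a dict is an assumption, and the keys shown (learning rate, momentum) are typical entries rather than a confirmed list of what this method reads.

```python
# Sketch only; the config keys are assumptions, not a confirmed list of
# what get_optimizer actually reads in this version.
from open3d.ml.utils import Config

cfg_pipeline = Config({"learning_rate": 1e-2, "momentum": 0.98})
optimizer = model.get_optimizer(cfg_pipeline)
```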

inference_begin(data)#

Function called right before running inference.

Parameters:

data – A sample from the dataset.

inference_end(results)#

This function is called after inference.

It can be implemented to apply post-processing to the network outputs.

Parameters:

results – The model outputs as returned by the call() function. Post-processing is applied to this object.

Returns:

Returns True if the inference is complete and False otherwise. Returning False can be used to implement inference for large point clouds which require multiple passes.

inference_preprocess()#

This function prepares the inputs for the model.

Returns:

The inputs to be consumed by the call() function of the model.
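Taken together, the three inference hooks above imply the following control flow. This is a sketch of how a pipeline might drive them, not the pipeline's actual code; `data` is assumed to be a single sample from a dataset.

```python
# Sketch of the inference control flow implied by the hooks above.
model.inference_begin(data)                   # `data` is one sample from a dataset
while True:
    inputs = model.inference_preprocess()     # inputs for call()
    results = model(inputs, training=False)   # forward pass (call())
    if model.inference_end(results):          # True once inference is complete
        break
```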

organise_inputs(flat_inputs)#
preprocess(data, attr)#

Data preprocessing function.

This function is called before training to preprocess the data from a dataset.

Parameters:
  • data – A sample from the dataset.

  • attr – The corresponding attributes.

Returns:

Returns the preprocessed data.
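An illustrative call, assuming the usual Open3D-ML sample layout with 'point', 'feat', and 'label' arrays and a small attribute dict; the exact keys expected depend on the dataset class feeding the model, and the attribute values here are hypothetical.

```python
# Illustrative only; the dict keys follow the usual Open3D-ML sample
# layout and the attribute values are hypothetical.
import numpy as np

data = {
    "point": np.random.rand(1000, 3).astype(np.float32),
    "feat": np.random.rand(1000, 3).astype(np.float32),
    "label": np.zeros(1000, dtype=np.int32),
}
attr = {"name": "sample_0000", "split": "training"}
processed = model.preprocess(data, attr)
```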

segmentation_inputs(stacked_points, stacked_features, point_labels, stacks_lengths, batch_inds, object_labels=None)#
stack_batch_inds(stacks_len)#
transform(stacked_points, stacked_colors, point_labels, stacks_lengths, point_inds, cloud_inds, is_test=False)#

Expected shapes of the first four inputs: stacked_points [None, 3], stacked_colors [None, 3], point_labels [None], stacks_lengths [None].

transform_inference(data)#