Callbacks

Model Checkpoint

class tensornet.engine.ops.ModelCheckpoint(path, monitor='val_loss', mode='auto', verbose=0, save_best_only=True, best_value=None)[source]

Store model checkpoint while training.

Parameters
  • path (str) – Path to the directory where the checkpoints will be stored.

  • monitor (str, optional) – Metric to monitor. (default: ‘val_loss’)

  • mode (str, optional) – Comparison mode for monitored quantity. One of {auto, min, max}. (default: ‘auto’)

  • verbose (int, optional) – Verbosity mode, 0 or 1. (default: 0)

  • save_best_only (bool, optional) – If True, only the model with the best value of monitoring quantity will be saved. (default: True)

  • best_value (float, optional) – Best value of the monitored metric; useful when resuming training. This parameter takes effect only when save_best_only is True. (default: None)

__call__(model, current_value, epoch=None, **kwargs)[source]

Compare the current value with the best value and save the model accordingly.

Parameters
  • model (torch.nn.Module) – Model Instance.

  • optimizer (torch.optim, optional) – Optimizer for the model, passed through **kwargs.

  • current_value (float) – Current value of the monitored quantity.

  • epoch (int) – Epoch count.

  • **kwargs – Other keyword arguments.
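
Example (a minimal usage sketch based on the signatures above; the toy model, optimizer, and validation values are placeholders, not part of tensornet, and the import assumes the documented module path):

    import torch
    from tensornet.engine.ops import ModelCheckpoint

    model = torch.nn.Linear(10, 2)                             # placeholder model
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)   # placeholder optimizer

    # Monitor validation loss; lower is better, so use mode='min'.
    checkpoint = ModelCheckpoint('checkpoints/', monitor='val_loss', mode='min', verbose=1)

    for epoch in range(1, 11):
        val_loss = 1.0 / epoch                                 # placeholder validation metric
        # Compares val_loss with the best value seen so far and saves the
        # model only when it improves (save_best_only=True by default).
        checkpoint(model, val_loss, epoch=epoch, optimizer=optimizer)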

TensorBoard

class tensornet.engine.ops.TensorBoard(logdir=None, images=None, device='cpu')[source]

Set up TensorBoard.

Parameters
  • logdir (str, optional) – Save directory location. Default is runs/CURRENT_DATETIME_HOSTNAME, which changes after each run.

  • images (torch.Tensor, optional) – Batch of images for which predictions will be done.

  • device (str or torch.device, optional) – Device where the data will be loaded. (default: ‘cpu’)

write_model(model)[source]

Write the model graph to TensorBoard.

Parameters
  • model (torch.nn.Module) – Model Instance.

write_image(image, image_name)[source]

Write an image to TensorBoard.

Parameters
  • image (torch.Tensor) – Image tensor.

  • image_name (str) – Name of the image to be written.

write_images(model, activation_fn=None, image_name=None)[source]

Write model predictions for the stored batch of images to TensorBoard.

Parameters
  • model (torch.nn.Module) – Model Instance.

  • activation_fn (optional) – Activation function to apply to the model outputs. (default: None)

  • image_name (str, optional) – Name of the image to be written.

write_scalar(scalar, value, step_value)[source]

Write a scalar metric to TensorBoard.

Parameters
  • scalar (str) – Data identifier.

  • value (float or string/blobname) – Value to save.

  • step_value (int) – Global step value to record.
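
Example (a minimal sketch of the wrapper above; the model, image batch, and loss values are placeholders, and the import assumes the documented module path):

    import torch
    from tensornet.engine.ops import TensorBoard

    model = torch.nn.Sequential(                               # placeholder model
        torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU(),
        torch.nn.Flatten(), torch.nn.Linear(8 * 30 * 30, 10)
    )
    images = torch.rand(4, 3, 32, 32)                          # placeholder image batch

    tb = TensorBoard(logdir='runs/demo', images=images, device='cpu')

    tb.write_model(model)                                      # log the model graph
    tb.write_image(images[0], 'sample_input')                  # log a single image
    for step, loss in enumerate([0.9, 0.7, 0.5]):              # placeholder loss curve
        tb.write_scalar('train_loss', loss, step)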

LR Schedulers

tensornet.engine.ops.lr_scheduler.step_lr(optimizer, step_size, gamma=0.1, last_epoch=-1)[source]

Create LR step scheduler.

Parameters
  • optimizer (torch.optim) – Model optimizer.

  • step_size (int) – Frequency for changing learning rate.

  • gamma (float, optional) – Factor for changing learning rate. (default: 0.1)

  • last_epoch (int, optional) – The index of last epoch. (default: -1)

Returns

Learning rate scheduler.

Return type

StepLR
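
Example (a brief sketch; the model, optimizer, and training loop are placeholders, and the import assumes the documented module path):

    import torch
    from tensornet.engine.ops.lr_scheduler import step_lr

    model = torch.nn.Linear(10, 2)                             # placeholder model
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    # Decay the learning rate by a factor of 0.1 every 5 epochs.
    scheduler = step_lr(optimizer, step_size=5, gamma=0.1)

    for epoch in range(15):
        optimizer.step()                                       # stand-in for a real training step
        scheduler.step()                                       # StepLR is stepped once per epoch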

tensornet.engine.ops.lr_scheduler.reduce_lr_on_plateau(optimizer, factor=0.1, patience=10, verbose=False, min_lr=0)[source]

Create LR plateau reduction scheduler.

Parameters
  • optimizer (torch.optim) – Model optimizer.

  • factor (float, optional) – Factor by which the learning rate will be reduced. (default: 0.1)

  • patience (int, optional) – Number of epochs with no improvement after which the learning rate will be reduced. (default: 10)

  • verbose (bool, optional) – If True, prints a message to stdout for each update. (default: False)

  • min_lr (float, optional) – A scalar or a list of scalars. A lower bound on the learning rate of all param groups or each group respectively. (default: 0)

Returns

ReduceLROnPlateau instance.
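
Example (a brief sketch; the model and validation loss values are placeholders, and the import assumes the documented module path):

    import torch
    from tensornet.engine.ops.lr_scheduler import reduce_lr_on_plateau

    model = torch.nn.Linear(10, 2)                             # placeholder model
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    # Reduce the LR by 10x when the monitored value stalls for 3 epochs.
    scheduler = reduce_lr_on_plateau(optimizer, factor=0.1, patience=3, verbose=True)

    for epoch in range(20):
        val_loss = 0.5                                         # placeholder: a metric that stops improving
        scheduler.step(val_loss)                               # step on the monitored metric, once per epoch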

tensornet.engine.ops.lr_scheduler.one_cycle_lr(optimizer, max_lr, epochs, steps_per_epoch, pct_start=0.5, div_factor=10.0, final_div_factor=10000)[source]

Create One Cycle Policy for Learning Rate.

Parameters
  • optimizer (torch.optim) – Model optimizer.

  • max_lr (float) – Upper learning rate boundary in the cycle.

  • epochs (int) – The number of epochs to train for. This is used along with steps_per_epoch in order to infer the total number of steps in the cycle.

  • steps_per_epoch (int) – The number of steps per epoch to train for. This is used along with epochs in order to infer the total number of steps in the cycle.

  • pct_start (float, optional) – The percentage of the cycle (in number of steps) spent increasing the learning rate. (default: 0.5)

  • div_factor (float, optional) – Determines the initial learning rate via initial_lr = max_lr / div_factor. (default: 10.0)

  • final_div_factor (float, optional) – Determines the minimum learning rate via min_lr = initial_lr / final_div_factor. (default: 1e4)

Returns

OneCycleLR instance.
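
Example (a brief sketch; the batch counts and the training loop body are placeholders, and the import assumes the documented module path):

    import torch
    from tensornet.engine.ops.lr_scheduler import one_cycle_lr

    model = torch.nn.Linear(10, 2)                             # placeholder model
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    epochs, steps_per_epoch = 5, 100                           # placeholder sizes
    scheduler = one_cycle_lr(optimizer, max_lr=0.1,
                             epochs=epochs, steps_per_epoch=steps_per_epoch)

    for epoch in range(epochs):
        for step in range(steps_per_epoch):
            optimizer.step()                                   # stand-in for a real training step
            scheduler.step()                                   # OneCycleLR is stepped after every batch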

tensornet.engine.ops.lr_scheduler.cyclic_lr(optimizer, base_lr, max_lr, step_size_up=2000, step_size_down=None, mode='triangular', gamma=1.0, scale_fn=None, scale_mode='cycle', cycle_momentum=True, base_momentum=0.8, max_momentum=0.9, last_epoch=-1, verbose=False)[source]

Create Cyclic LR Policy.

Parameters
  • optimizer (torch.optim) – Model optimizer.

  • base_lr (float) – Lower learning rate boundary in the cycle.

  • max_lr (float) – Upper learning rate boundary in the cycle.

  • step_size_up (int) – Number of training iterations in the increasing half of a cycle. (default: 2000)

  • step_size_down (int) – Number of training iterations in the decreasing half of a cycle. If step_size_down is None, it is set to step_size_up. (default: None)

  • mode (str) – One of triangular, triangular2, exp_range. If scale_fn is not None, this argument is ignored. (default: ‘triangular’)

  • gamma (float) – Constant in ‘exp_range’ scaling function: gamma**(cycle iterations). (default: 1.0)

  • scale_fn – Custom scaling policy defined by a single argument lambda function, where 0 <= scale_fn(x) <= 1 for all x >= 0. If specified, then ‘mode’ is ignored. (default: None)

  • scale_mode (str) – ‘cycle’, ‘iterations’. Defines whether scale_fn is evaluated on cycle number or cycle iterations (training iterations since start of cycle). (default: ‘cycle’)

  • cycle_momentum (bool) – If True, momentum is cycled inversely to learning rate between ‘base_momentum’ and ‘max_momentum’. (default: True)

  • base_momentum (float) – Lower momentum boundaries in the cycle. (default: 0.8)

  • max_momentum (float) – Upper momentum boundaries in the cycle. Functionally, it defines the cycle amplitude (max_momentum - base_momentum). (default: 0.9)

  • last_epoch (int) – The index of the last batch. This parameter is used when resuming a training job. (default: -1)

  • verbose (bool) – If True, prints a message to stdout for each update. (default: False)

Returns

CyclicLR instance.
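
Example (a brief sketch; the iteration counts are placeholders, momentum is set on the optimizer because cycle_momentum=True requires it, and the import assumes the documented module path):

    import torch
    from tensornet.engine.ops.lr_scheduler import cyclic_lr

    model = torch.nn.Linear(10, 2)                             # placeholder model
    optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)

    # Cycle the LR between 0.001 and 0.01, rising for 100 iterations and
    # falling for 100 (step_size_down defaults to step_size_up).
    scheduler = cyclic_lr(optimizer, base_lr=0.001, max_lr=0.01, step_size_up=100)

    for iteration in range(400):                               # two full cycles
        optimizer.step()                                       # stand-in for a real training step
        scheduler.step()                                       # CyclicLR is stepped after every batch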