Optimizers

This module contains helper functions for creating optimizers for model training.

tensornet.models.optimizer.sgd(model: torch.nn.modules.module.Module, learning_rate: float = 0.01, momentum: float = 0, dampening: float = 0, l2_factor: float = 0.0, nesterov: bool = False)[source]

Create an SGD optimizer for the given model.

Parameters
  • model (torch.nn.Module) – Model instance.

  • learning_rate (float, optional) – Learning rate for the optimizer. (default: 0.01)

  • momentum (float, optional) – Momentum factor. (default: 0)

  • dampening (float, optional) – Dampening for momentum. (default: 0)

  • l2_factor (float, optional) – Factor for L2 regularization. (default: 0.0)

  • nesterov (bool, optional) – Enables Nesterov momentum. (default: False)

Returns

SGD optimizer.
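
Example

A minimal usage sketch, assuming a small placeholder model; the nn.Linear model and the hyperparameter values shown here are illustrative only, not part of the API:

    import torch.nn as nn
    from tensornet.models.optimizer import sgd

    model = nn.Linear(10, 2)  # any torch.nn.Module works here
    optimizer = sgd(model, learning_rate=0.01, momentum=0.9, nesterov=True)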

tensornet.models.optimizer.adam(model: torch.nn.modules.module.Module, learning_rate: float = 0.001, betas: Tuple[float, float] = (0.9, 0.999), eps: float = 1e-08, l2_factor: float = 0.0, amsgrad: bool = False)[source]

Create an Adam optimizer for the given model.

Parameters
  • model (torch.nn.Module) – Model instance.

  • learning_rate (float, optional) – Learning rate for the optimizer. (default: 0.001)

  • betas (tuple, optional) – Coefficients used for computing running averages of gradient and its square. (default: (0.9, 0.999))

  • eps (float, optional) – Term added to the denominator to improve numerical stability. (default: 1e-8)

  • l2_factor (float, optional) – Factor for L2 regularization. (default: 0.0)

  • amsgrad (bool, optional) – Whether to use the AMSGrad variant of this algorithm from the paper "On the Convergence of Adam and Beyond". (default: False)

Returns

Adam optimizer.
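
Example

A minimal usage sketch, assuming a small placeholder model; the nn.Linear model and the argument values shown here are illustrative only, not part of the API:

    import torch.nn as nn
    from tensornet.models.optimizer import adam

    model = nn.Linear(10, 2)  # any torch.nn.Module works here
    optimizer = adam(model, learning_rate=0.001, betas=(0.9, 0.999), amsgrad=False)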