analogvnn.backward.BackwardModule#

Module Contents#

Classes#

BackwardModule

Base class for all backward modules.

class analogvnn.backward.BackwardModule.BackwardModule(layer: torch.nn.Module = None)[source]#

Bases: abc.ABC

Base class for all backward modules.

A backward module is a module that can be used to compute the backward gradient of a given function. Given the gradients with respect to a function's outputs, it produces the gradients with respect to that function's inputs.

Variables:
  • _layer (Optional[nn.Module]) – The layer for which the backward gradient is computed.

  • _empty_holder_tensor (Tensor) – A placeholder tensor which always requires gradient for backward gradient computation.

  • _autograd_backward (Type[AutogradBackward]) – The autograd backward function.

  • _disable_autograd_backward (bool) – If True, the autograd backward function is disabled.
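
For orientation, a minimal sketch of a custom backward module is shown below. The subclass name, the gradient-clipping rule, and the choice of layer are illustrative assumptions; only BackwardModule, its constructor, and the forward/backward contract come from this API.

```python
import torch
from torch import Tensor

from analogvnn.backward.BackwardModule import BackwardModule


class ClippedGradBackward(BackwardModule):
    """Hypothetical backward module: identity forward, clipped gradient backward."""

    def forward(self, x: Tensor) -> Tensor:
        # Forward pass: return the input unchanged.
        return x

    def backward(self, grad_output: Tensor) -> Tensor:
        # Backward pass: clamp the incoming gradient to [-1, 1].
        return grad_output.clamp(-1.0, 1.0)


# Attach the module to the layer whose backward gradient it computes.
bw = ClippedGradBackward(layer=torch.nn.Identity())
```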

class AutogradBackward[source]#

Bases: torch.autograd.Function

Optimization and proper calculation of gradients when using the autograd engine.

static forward(ctx: Any, backward_module: BackwardModule, _: torch.Tensor, *args: torch.Tensor, **kwargs: torch.Tensor) → analogvnn.utils.common_types.TENSORS[source]#

Forward pass of the autograd function.

Parameters:
  • ctx – The context of the autograd function.

  • backward_module (BackwardModule) – The backward module.

  • _ (Tensor) – A placeholder tensor which always requires grad.

  • *args (Tensor) – The arguments of the function.

  • **kwargs (Tensor) – The keyword arguments of the function.

Returns:

The output of the function.

Return type:

TENSORS

static backward(ctx: Any, *grad_outputs: torch.Tensor) → Tuple[None, None, analogvnn.utils.common_types.TENSORS][source]#

Backward pass of the autograd function.

Parameters:
  • ctx – The context of the autograd function.

  • *grad_outputs (Tensor) – The gradients of the output of the function.

Returns:

The gradients of the input of the function.

Return type:

TENSORS
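
The forward/backward pair above follows the standard torch.autograd.Function protocol: forward stashes state on ctx, and backward must return one gradient per argument of forward, which is why the documented return type starts with None, None (for the non-differentiable backward_module and placeholder arguments). The sketch below is a generic illustration of that protocol, not the actual AutogradBackward implementation.

```python
import torch


class Square(torch.autograd.Function):
    """Generic torch.autograd.Function sketch: y = x ** 2."""

    @staticmethod
    def forward(ctx, x: torch.Tensor) -> torch.Tensor:
        ctx.save_for_backward(x)  # stash whatever backward() will need
        return x ** 2

    @staticmethod
    def backward(ctx, grad_output: torch.Tensor) -> torch.Tensor:
        (x,) = ctx.saved_tensors
        # Return exactly one gradient per argument of forward().
        return grad_output * 2 * x


x = torch.randn(3, requires_grad=True)
Square.apply(x).sum().backward()  # x.grad is now 2 * x
```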

property layer: Optional[torch.nn.Module][source]#

Gets the layer for which the backward gradient is computed.

Returns:

layer

Return type:

Optional[nn.Module]

_layer: Optional[torch.nn.Module][source]#
_empty_holder_tensor: torch.Tensor[source]#
_autograd_backward: Type[AutogradBackward][source]#
_disable_autograd_backward: bool = False[source]#
__call__: Callable[..., Any][source]#
abstract forward(*inputs: torch.Tensor, **inputs_kwarg: torch.Tensor) → analogvnn.utils.common_types.TENSORS[source]#

Forward pass of the layer.

Parameters:
  • *inputs (Tensor) – The inputs of the layer.

  • **inputs_kwarg (Tensor) – The keyword inputs of the layer.

Returns:

The output of the layer.

Return type:

TENSORS

Raises:

NotImplementedError – If the forward pass is not implemented.

abstract backward(*grad_outputs: torch.Tensor, **grad_output_kwarg: torch.Tensor) → analogvnn.utils.common_types.TENSORS[source]#

Backward pass of the layer.

Parameters:
  • *grad_outputs (Tensor) – The gradients of the output of the layer.

  • **grad_output_kwarg (Tensor) – The keyword gradients of the output of the layer.

Returns:

The gradients of the input of the layer.

Return type:

TENSORS

Raises:

NotImplementedError – If the backward pass is not implemented.
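
As a concrete pairing of the two abstract methods, the hedged sketch below implements a sign activation with a straight-through gradient; the class name and the passthrough rule are illustrative choices, not part of AnalogVNN.

```python
import torch
from torch import Tensor

from analogvnn.backward.BackwardModule import BackwardModule


class SignSTE(BackwardModule):
    """Hypothetical sign activation with a straight-through estimator backward."""

    def forward(self, x: Tensor) -> Tensor:
        self._x = x  # keep the input so backward() can mask the gradient
        return torch.sign(x)

    def backward(self, grad_output: Tensor) -> Tensor:
        # Straight-through estimator: pass the gradient only where |x| <= 1.
        return grad_output * (self._x.abs() <= 1).to(grad_output.dtype)
```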

_call_impl_forward(*args: torch.Tensor, **kwarg: torch.Tensor) → analogvnn.utils.common_types.TENSORS[source]#

Calls the forward pass of the layer.

Parameters:
  • *args (Tensor) – The inputs of the layer.

  • **kwarg (Tensor) – The keyword inputs of the layer.

Returns:

The output of the layer.

Return type:

TENSORS

_call_impl_backward(*grad_output: torch.Tensor, **grad_output_kwarg: torch.Tensor) → analogvnn.utils.common_types.TENSORS[source]#

Calls the backward pass of the layer.

Parameters:
  • *grad_output (Tensor) – The gradients of the output of the layer.

  • **grad_output_kwarg (Tensor) – The keyword gradients of the output of the layer.

Returns:

The gradients of the input of the layer.

Return type:

TENSORS

auto_apply(*args: torch.Tensor, to_apply=True, **kwargs: torch.Tensor) → analogvnn.utils.common_types.TENSORS[source]#

Applies the backward module to the given layer using the proper method.

Parameters:
  • *args (Tensor) – The inputs of the layer.

  • to_apply (bool) – If True and the module is in training mode, AutogradBackward is applied; otherwise the backward module is applied directly.

  • **kwargs (Tensor) – The keyword inputs of the layer.

Returns:

The output of the layer.

Return type:

TENSORS
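
A hedged usage sketch follows, reusing the hypothetical ClippedGradBackward from above; whether the wrapped layer's training flag drives the "in training mode" check here is an assumption.

```python
import torch

bw = ClippedGradBackward(layer=torch.nn.Identity())  # training mode by default

x = torch.randn(2, 4, requires_grad=True)
y = bw.auto_apply(x, to_apply=True)  # training path: routed through AutogradBackward
y.sum().backward()                   # gradients flow through bw.backward()
```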

has_forward() → bool[source]#

Checks if the forward pass is implemented.

Returns:

True if the forward pass is implemented, False otherwise.

Return type:

bool

get_layer() → Optional[torch.nn.Module][source]#

Gets the layer for which the backward gradient is computed.

Returns:

layer

Return type:

Optional[nn.Module]

set_layer(layer: Optional[torch.nn.Module]) → BackwardModule[source]#

Sets the layer for which the backward gradient is computed.

Parameters:

layer (nn.Module) – The layer for which the backward gradient is computed.

Returns:

self

Return type:

BackwardModule

Raises:
  • ValueError – If self is a subclass of nn.Module.

  • ValueError – If the layer is already set.

  • ValueError – If the layer is not an instance of nn.Module.
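
The sketch below illustrates the documented attachment rules; the layers are arbitrary examples and ClippedGradBackward is the hypothetical subclass from above.

```python
import torch

bw = ClippedGradBackward()                # no layer attached yet
bw = bw.set_layer(torch.nn.Linear(4, 4))  # returns self, so calls can be chained

try:
    bw.set_layer(torch.nn.ReLU())         # ValueError: the layer is already set
except ValueError as error:
    print(error)
```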

_set_autograd_backward()[source]#
static set_grad_of(tensor: torch.Tensor, grad: torch.Tensor) → Optional[torch.Tensor][source]#

Sets the gradient of the given tensor.

Parameters:
  • tensor (Tensor) – The tensor.

  • grad (Tensor) – The gradient.

Returns:

The gradient of the tensor.

Return type:

Optional[Tensor]

__getattr__(name: str) → Any[source]#

Gets the attribute of the layer.

Parameters:

name (str) – The name of the attribute.

Returns:

The attribute of the layer.

Return type:

Any

Raises:

AttributeError – If the attribute is not found.
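
This delegation means attributes of the wrapped layer can be read directly from the backward module, as in the short hedged sketch below.

```python
import torch

bw = ClippedGradBackward(layer=torch.nn.Linear(4, 2))  # hypothetical subclass from above
print(bw.weight.shape)  # `weight` is resolved on the wrapped nn.Linear via __getattr__
# A name found on neither the backward module nor the layer raises AttributeError.
```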