analogvnn.nn.activation.ELU

Module Contents

Classes

SELU – Implements the scaled exponential linear unit (SELU) activation function.

ELU – Implements the exponential linear unit (ELU) activation function.

class analogvnn.nn.activation.ELU.SELU(alpha: float = 1.0507, scale_factor: float = 1.0)

Bases: analogvnn.nn.activation.Activation.Activation

Implements the scaled exponential linear unit (SELU) activation function.

Variables:
  • alpha (nn.Parameter) – the alpha parameter; controls the saturation value of the exponential branch for negative inputs.

  • scale_factor (nn.Parameter) – the scale factor parameter; multiplies the entire activation output.

__constants__ = ['alpha', 'scale_factor']
alpha: torch.nn.Parameter
scale_factor: torch.nn.Parameter
forward(x: torch.Tensor) → torch.Tensor

Forward pass of the scaled exponential linear unit (SELU) activation function.

Parameters:

x (Tensor) – the input tensor.

Returns:

the output tensor.

Return type:

Tensor
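
A minimal usage sketch (assuming analogvnn is installed and that Activation subclasses are callable like standard torch.nn.Module instances); it checks the forward pass against the closed-form SELU definition, scale_factor * x for x > 0 and scale_factor * alpha * (exp(x) - 1) otherwise:

```python
import torch
from analogvnn.nn.activation.ELU import SELU

selu = SELU(alpha=1.0507, scale_factor=1.0)
x = torch.tensor([-2.0, -0.5, 0.0, 0.5, 2.0])
y = selu(x)  # assumes Module-style __call__ dispatching to forward()

# Reference: scale_factor * (x for x > 0, alpha * (exp(x) - 1) for x <= 0)
expected = 1.0 * torch.where(x > 0, x, 1.0507 * (torch.exp(x) - 1))
assert torch.allclose(y, expected)
```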

backward(grad_output: Optional[torch.Tensor]) → Optional[torch.Tensor]

Backward pass of the scaled exponential linear unit (SELU) activation function.

Parameters:

grad_output (Optional[Tensor]) – the gradient of the output tensor.

Returns:

the gradient of the input tensor.

Return type:

Optional[Tensor]
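
The backward pass applies the SELU derivative to grad_output. A hedged sketch of that derivative using plain PyTorch autograd, with no analogvnn graph machinery assumed:

```python
import torch

alpha, scale = 1.0507, 1.0
x = torch.tensor([-1.0, 0.5, 2.0], requires_grad=True)
y = scale * torch.where(x > 0, x, alpha * (torch.exp(x) - 1))
y.sum().backward()

# Closed-form derivative: scale for x > 0, scale * alpha * exp(x) for x <= 0
xd = x.detach()
expected = torch.where(xd > 0, torch.full_like(xd, scale), scale * alpha * torch.exp(xd))
assert torch.allclose(x.grad, expected)
```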

static initialise(tensor: torch.Tensor) → torch.Tensor

Initialises the tensor with Xavier (Glorot) uniform initialisation, using the gain associated with the SELU activation function.

Parameters:

tensor (Tensor) – the tensor to be initialized.

Returns:

the initialized tensor.

Return type:

Tensor
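
A sketch of the assumed equivalent call using torch.nn.init directly; the exact body of initialise is not shown here, so treat the gain choice as an assumption consistent with the description above:

```python
import torch
from torch import nn

w = torch.empty(128, 64)
# Xavier (Glorot) uniform with the gain PyTorch associates with SELU;
# assumed to mirror SELU.initialise(w)
w = nn.init.xavier_uniform_(w, gain=nn.init.calculate_gain('selu'))
```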

static initialise_(tensor: torch.Tensor) → torch.Tensor

In-place initialisation of the tensor with Xavier (Glorot) uniform initialisation, using the gain associated with the SELU activation function.

Parameters:

tensor (Tensor) – the tensor to be initialized.

Returns:

the initialized tensor.

Return type:

Tensor
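
A usage sketch for the in-place variant (the layer shape is hypothetical; assumes the static method accepts a layer's weight tensor):

```python
import torch
from torch import nn
from analogvnn.nn.activation.ELU import SELU

layer = nn.Linear(64, 32)          # hypothetical layer feeding into a SELU activation
SELU.initialise_(layer.weight.data)  # initialised in place, tensor also returned
```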

class analogvnn.nn.activation.ELU.ELU(alpha: float = 1.0507)

Bases: SELU

Implements the exponential linear unit (ELU) activation function.

Variables:
  • alpha (nn.Parameter) – the alpha parameter (default: 1.0507).

  • scale_factor (nn.Parameter) – fixed to 1.0, making ELU the unscaled special case of SELU.
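
Since ELU here subclasses SELU with the scale factor fixed to 1, the two should agree when SELU is constructed with scale_factor=1.0. A hedged sketch (assumes both classes are callable as modules):

```python
import torch
from analogvnn.nn.activation.ELU import ELU, SELU

x = torch.linspace(-3, 3, steps=7)
assert torch.allclose(ELU(alpha=1.0507)(x), SELU(alpha=1.0507, scale_factor=1.0)(x))
```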