analogvnn.graph.ForwardGraph

Module Contents

Classes#

ForwardGraph

The forward graph.

class analogvnn.graph.ForwardGraph.ForwardGraph(graph_state: analogvnn.graph.ModelGraphState.ModelGraphState = None)[source]

Bases: analogvnn.graph.AcyclicDirectedGraph.AcyclicDirectedGraph

The forward graph.
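A ForwardGraph is normally created for you as part of a model's graph state rather than instantiated by hand. As a rough sketch of direct construction (it is assumed here that ModelGraphState can be default-constructed; in practice the state is usually owned by the surrounding model graph):

    from analogvnn.graph.ForwardGraph import ForwardGraph
    from analogvnn.graph.ModelGraphState import ModelGraphState

    # Assumed: default-constructible state; normally the model supplies this.
    state = ModelGraphState()
    forward_graph = ForwardGraph(graph_state=state)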

__call__(inputs: analogvnn.utils.common_types.TENSORS, is_training: bool) → analogvnn.graph.ArgsKwargs.ArgsKwargsOutput[source]

Perform a forward pass through the graph.

Parameters:
  • inputs (TENSORS) – Input to the graph

  • is_training (bool) – Whether the pass is run in training mode

Returns:

Output of the graph

Return type:

ArgsKwargsOutput
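A minimal usage sketch, assuming forward_graph is an already-built and compiled ForwardGraph; the input shape is purely illustrative:

    import torch

    x = torch.randn(4, 8)  # illustrative input tensor
    # __call__ runs a forward pass through every node of the graph.
    output = forward_graph(x, is_training=True)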

compile(is_static: bool = True)[source]

Compile the graph.

Parameters:

is_static (bool) – If True, the graph does not change at runtime and can therefore be cached.

Returns:

self.

Return type:

ForwardGraph

Raises:

ValueError – If no forward pass has been performed yet.
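Because compile returns the graph itself, calls can be chained. A hedged sketch, assuming forward_graph already has its nodes connected:

    # is_static=True tells the graph it will not change at runtime,
    # so the compiled ordering can be cached and reused.
    graph = forward_graph.compile(is_static=True)
    output = graph(x, is_training=False)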

calculate(inputs: analogvnn.utils.common_types.TENSORS, is_training: bool = True, **kwargs) → analogvnn.graph.ArgsKwargs.ArgsKwargsOutput[source]

Calculate the output of the graph.

Parameters:
  • inputs (TENSORS) – Input to the graph

  • is_training (bool) – Whether the pass is run in training mode

  • **kwargs – Additional keyword arguments

Returns:

Output of the graph

Return type:

ArgsKwargsOutput
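A short sketch of calling calculate directly; it is assumed (not confirmed by this page) that __call__ delegates here:

    out = forward_graph.calculate(x, is_training=False)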

_pass(from_node: analogvnn.graph.GraphEnum.GraphEnum, *inputs: torch.Tensor) → Dict[analogvnn.graph.GraphEnum.GraphEnum, analogvnn.graph.ArgsKwargs.InputOutput][source]

Perform the forward pass through the graph.

Parameters:
  • from_node (GraphEnum) – The node to start the forward pass from

  • *inputs (Tensor) – Inputs to the graph

Returns:

The input and output of each node

Return type:

Dict[GraphEnum, InputOutput]
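Although _pass is internal, its return value is straightforward to inspect. A hedged sketch, assuming GraphEnum.INPUT marks the graph's entry node as elsewhere in this package:

    from analogvnn.graph.GraphEnum import GraphEnum

    records = forward_graph._pass(GraphEnum.INPUT, x)
    for node, io in records.items():
        # io is an InputOutput record of what went into and came out of the node.
        print(node, io)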

static _detach_tensor(tensor: torch.Tensor) → torch.Tensor[source]

Detach the tensor from the autograd graph.

Parameters:

tensor (torch.Tensor) – Tensor to detach

Returns:

Detached tensor

Return type:

torch.Tensor
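A minimal sketch of what detaching means here; this shows the general torch.Tensor.detach() semantics, not necessarily the library's exact implementation:

    import torch

    from analogvnn.graph.ForwardGraph import ForwardGraph

    t = torch.randn(3, requires_grad=True) * 2  # non-leaf: has a grad_fn
    detached = ForwardGraph._detach_tensor(t)
    # The returned tensor carries no autograd history back to `t`.
    assert detached.grad_fn is None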