Layers

Reusable model layers and wrappers.

SeqLinear

SeqLinear(input_size: int, output_size: int, hidden_size: int = 100, hidden_layer: int = 1, act=Mish, batch_first: bool = True)

Bases: Module

Pointwise MLP applied independently at each sequence position via 1x1 convolutions.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `input_size` | `int` | number of input features | *required* |
| `output_size` | `int` | number of output features | *required* |
| `hidden_size` | `int` | number of hidden units per layer | `100` |
| `hidden_layer` | `int` | number of hidden layers | `1` |
| `act` | | activation function class | `Mish` |
| `batch_first` | `bool` | if True, input shape is `[batch, seq, features]` | `True` |
Source code in tsfast/models/layers.py
def __init__(
    self,
    input_size: int,
    output_size: int,
    hidden_size: int = 100,
    hidden_layer: int = 1,
    act=Mish,
    batch_first: bool = True,
):
    super().__init__()
    self.batch_first = batch_first

    def conv_act(inp, out):
        return nn.Sequential(nn.Conv1d(inp, out, 1), act())

    if hidden_layer < 1:
        self.lin = nn.Conv1d(input_size, output_size, 1)
    else:
        self.lin = nn.Sequential(
            conv_act(input_size, hidden_size),
            *[conv_act(hidden_size, hidden_size) for _ in range(hidden_layer - 1)],
            nn.Conv1d(hidden_size, output_size, 1),
        )
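A kernel-size-1 `Conv1d` over the channel dimension applies the same affine map at every sequence position, which is why it implements a pointwise MLP. A minimal sketch of that equivalence (standalone, not using `SeqLinear` itself; the transpose reflects `Conv1d` expecting `[batch, channels, seq]` while `batch_first` data is `[batch, seq, features]`):

```python
import torch
import torch.nn as nn

# A 1x1 Conv1d with weights copied from a Linear layer produces the same
# per-position outputs (up to float tolerance).
lin = nn.Linear(8, 4)
conv = nn.Conv1d(8, 4, kernel_size=1)
with torch.no_grad():
    conv.weight.copy_(lin.weight.unsqueeze(-1))  # [out, in] -> [out, in, 1]
    conv.bias.copy_(lin.bias)

x = torch.randn(2, 16, 8)                             # [batch, seq, features]
y_lin = lin(x)                                        # Linear broadcasts over seq
y_conv = conv(x.transpose(1, 2)).transpose(1, 2)      # Conv1d wants [batch, ch, seq]

print(torch.allclose(y_lin, y_conv, atol=1e-6))  # True
```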

AR_Model

AR_Model(model: Module, ar: bool = True, model_has_state: bool = False, return_state: bool = False, out_sz: int | None = None)

Bases: Module

Autoregressive model container.

Runs autoregressively when the output sequence is not provided, otherwise uses teacher forcing. Normalization should be handled externally via ScaledModel wrapping.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `model` | `Module` | inner model to wrap | *required* |
| `ar` | `bool` | if True, default to autoregressive mode in forward | `True` |
| `model_has_state` | `bool` | if True, the inner model accepts and returns hidden state | `False` |
| `return_state` | `bool` | if True, return (output, hidden_state) tuple | `False` |
| `out_sz` | `int \| None` | output feature size, used to initialize autoregressive seed | `None` |
Source code in tsfast/models/layers.py
def __init__(
    self,
    model: nn.Module,
    ar: bool = True,
    model_has_state: bool = False,
    return_state: bool = False,
    out_sz: int | None = None,
):
    super().__init__()
    self.model = model
    self.ar = ar
    self.model_has_state = model_has_state
    self.return_state = return_state
    self.out_sz = out_sz
    if return_state and not model_has_state:
        raise ValueError("return_state=True requires model_has_state=True")
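The core idea of the autoregressive mode can be illustrated with a simplified rollout: at each step the model sees the external input together with its own previous prediction, seeded with zeros at the first step. This is a hypothetical sketch of the concept, not `AR_Model.forward` itself, which additionally handles teacher forcing and hidden state:

```python
import torch
import torch.nn as nn

def ar_rollout(model: nn.Module, u: torch.Tensor, out_sz: int) -> torch.Tensor:
    """Hypothetical helper: feed each external input step concatenated with
    the model's previous prediction (zero seed at t=0) back into the model."""
    bs, seq_len, _ = u.shape
    y_prev = u.new_zeros(bs, 1, out_sz)          # autoregressive seed
    outputs = []
    for t in range(seq_len):
        step_in = torch.cat([u[:, t:t + 1], y_prev], dim=-1)
        y_prev = model(step_in)                  # [bs, 1, out_sz]
        outputs.append(y_prev)
    return torch.cat(outputs, dim=1)             # [bs, seq_len, out_sz]

# Tiny pointwise model taking (input features + previous output) per step.
inner = nn.Linear(3 + 2, 2)
u = torch.randn(4, 10, 3)
y = ar_rollout(inner, u, out_sz=2)
print(y.shape)  # torch.Size([4, 10, 2])
```

With teacher forcing, the ground-truth output sequence would replace `y_prev` at each step, which is what the container falls back to when the output sequence is provided.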

SeqAggregation

SeqAggregation(func: Callable = lambda x, dim: x.select(dim, -1), dim: int = 1)

Bases: Module

Aggregation layer that reduces the sequence dimension.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `func` | `Callable` | aggregation function taking (tensor, dim) and returning reduced tensor | `lambda x, dim: x.select(dim, -1)` |
| `dim` | `int` | sequence dimension to aggregate over | `1` |
Source code in tsfast/models/layers.py
def __init__(
    self,
    func: Callable = lambda x, dim: x.select(dim, -1),
    dim: int = 1,
):
    super().__init__()
    self.func = func
    self.dim = dim

forward

forward(x: torch.Tensor) -> torch.Tensor

Apply the aggregation function to the input tensor.

Source code in tsfast/models/layers.py
def forward(self, x: torch.Tensor) -> torch.Tensor:
    "Apply the aggregation function to the input tensor."
    return self.func(x, dim=self.dim)
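The default `func` selects the last time step, reducing `[batch, seq, features]` to `[batch, features]`; swapping in a mean over the sequence dimension gives average pooling instead. A short usage sketch with the aggregation callables shown as plain functions:

```python
import torch

# Default-style reduction: pick the last element along the sequence dim.
last_step = lambda x, dim: x.select(dim, -1)
# Alternative aggregation: average pooling over the sequence dim.
mean_pool = lambda x, dim: x.mean(dim)

x = torch.randn(8, 20, 16)          # [batch, seq, features]
print(last_step(x, 1).shape)        # torch.Size([8, 16])
print(mean_pool(x, 1).shape)        # torch.Size([8, 16])
```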