Layers
Reusable model layers and wrappers.
SeqLinear
SeqLinear(input_size: int, output_size: int, hidden_size: int = 100, hidden_layer: int = 1, act=Mish, batch_first: bool = True)
Bases: Module
Pointwise MLP applied independently at each sequence position via 1x1 convolutions.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `input_size` | `int` | number of input features | required |
| `output_size` | `int` | number of output features | required |
| `hidden_size` | `int` | number of hidden units per layer | `100` |
| `hidden_layer` | `int` | number of hidden layers | `1` |
| `act` | | activation function class | `Mish` |
| `batch_first` | `bool` | if `True`, input and output tensors are shaped (batch, seq, features) | `True` |
Source code in tsfast/models/layers.py
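The core idea behind `SeqLinear` is that a 1x1 convolution applies the same linear map independently at every sequence position. The sketch below demonstrates this equivalence directly with plain PyTorch; it is an illustration of the technique, not tsfast's implementation, and assumes a batch-first layout as in the default above.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(4, 10, 3)  # (batch, seq, features), batch_first layout

# A Conv1d with kernel_size=1 applies one pointwise linear map at each
# sequence position. Conv1d expects (batch, channels, seq), so the
# feature and sequence axes are transposed around the convolution.
conv = nn.Conv1d(in_channels=3, out_channels=5, kernel_size=1)

# Copy the conv weights into an ordinary Linear layer to show equivalence:
# conv.weight has shape (out, in, 1); squeezing gives the (out, in) matrix.
lin = nn.Linear(3, 5)
with torch.no_grad():
    lin.weight.copy_(conv.weight.squeeze(-1))
    lin.bias.copy_(conv.bias)

y_conv = conv(x.transpose(1, 2)).transpose(1, 2)  # (4, 10, 5)
y_lin = lin(x)                                    # (4, 10, 5)
assert torch.allclose(y_conv, y_lin, atol=1e-6)
```

Because the map is pointwise, the layer mixes features but never mixes information across timesteps.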
AR_Model
AR_Model(model: Module, ar: bool = True, model_has_state: bool = False, return_state: bool = False, out_sz: int | None = None)
Bases: Module
Autoregressive model container.
Runs autoregressively when the output sequence is not provided, otherwise uses teacher forcing. Normalization should be handled externally via ScaledModel wrapping.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model` | `Module` | inner model to wrap | required |
| `ar` | `bool` | if `True`, run autoregressively when no output sequence is provided | `True` |
| `model_has_state` | `bool` | if `True`, the wrapped model takes and returns hidden state | `False` |
| `return_state` | `bool` | if `True`, return the hidden state along with the output | `False` |
| `out_sz` | `int \| None` | output feature size, used to initialize the autoregressive seed | `None` |
Source code in tsfast/models/layers.py
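The teacher-forcing vs. free-running distinction described above can be sketched with a minimal stateless wrapper. This is an illustrative toy, not tsfast's `AR_Model`: the class name `TinyARWrapper` and its exact feedback scheme (concatenating the previous output to the input) are assumptions for the example.

```python
import torch
import torch.nn as nn

class TinyARWrapper(nn.Module):
    """Sketch of the autoregressive idea: the inner model sees
    [input_t, y_{t-1}]. With a target sequence it uses teacher forcing;
    without one it feeds back its own predictions step by step."""
    def __init__(self, model: nn.Module, out_sz: int):
        super().__init__()
        self.model, self.out_sz = model, out_sz

    def forward(self, u, y=None):
        if y is not None:
            # Teacher forcing: shift the true outputs right by one step,
            # seeding the first step with zeros.
            y_prev = torch.cat([torch.zeros_like(y[:, :1]), y[:, :-1]], dim=1)
            return self.model(torch.cat([u, y_prev], dim=-1))
        # Free-running: initialize the seed with zeros (out_sz gives its width)
        # and feed each prediction back as the next step's extra input.
        preds, y_prev = [], u.new_zeros(u.shape[0], 1, self.out_sz)
        for t in range(u.shape[1]):
            y_prev = self.model(torch.cat([u[:, t:t+1], y_prev], dim=-1))
            preds.append(y_prev)
        return torch.cat(preds, dim=1)

inner = nn.Linear(3 + 1, 1)  # 3 input features + 1 fed-back output
wrapper = TinyARWrapper(inner, out_sz=1)
u = torch.randn(2, 5, 3)
y = torch.randn(2, 5, 1)
out_tf = wrapper(u, y)  # teacher forcing, shape (2, 5, 1)
out_ar = wrapper(u)     # autoregressive,  shape (2, 5, 1)
```

Teacher forcing processes the whole sequence in one batched call, while the autoregressive path must loop over timesteps, which is why providing the target sequence is much faster during training.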
SeqAggregation
Bases: Module
Aggregation layer that reduces the sequence dimension.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `func` | `Callable` | aggregation function taking (tensor, dim) and returning the reduced tensor | `lambda x, dim: x.select(dim, -1)` |
| `dim` | `int` | sequence dimension to aggregate over | `1` |