Losses and Metrics¶
Loss functions and metrics for training.
nan_mean ¶
Wrap a per-element loss into a NaN-safe, CUDA-graph-compatible mean.
NaN targets are replaced with `fill` (preserving static shapes), and a masked mean ensures that only valid positions contribute to the gradient.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `fn` | `Callable` | per-element function | *required* |
| `fill` | `list \| float` | value to substitute for NaN targets | *required* |
Source code in tsfast/training/losses.py
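The behavior described above can be sketched as follows. This is an illustrative reimplementation, not the tsfast source, and it assumes `inp` and `targ` are tensors of matching shape:

```python
import torch

def nan_mean(fn, fill):
    """Illustrative sketch: NaN-safe masked mean around a per-element loss."""
    def wrapped(inp, targ):
        mask = ~torch.isnan(targ)
        # Replace NaN targets with `fill` so tensor shapes stay static
        # (a requirement for CUDA-graph capture).
        targ_filled = torch.where(mask, targ, torch.full_like(targ, fill))
        per_elem = fn(inp, targ_filled)
        # Masked mean: only valid positions contribute to the gradient.
        return (per_elem * mask).sum() / mask.sum().clamp(min=1)
    return wrapped

mse_nan = nan_mean(lambda i, t: (i - t) ** 2, fill=0.0)
loss = mse_nan(torch.tensor([1.0, 2.0, 3.0]),
               torch.tensor([1.0, float("nan"), 5.0]))  # mean of (0, 4) -> 2.0
```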
mse ¶
ignore_nan ¶
Decorator that removes NaN samples from (inp, targ) before computing a loss.
A sample is removed if any feature in the target is NaN. Reduces tensors to a flat array.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `func` | `Callable` | loss function with signature `(inp, targ) -> Tensor` | *required* |
Source code in tsfast/training/losses.py
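A minimal sketch of the decorator, under the assumption that the leading dimension indexes samples (this is not the tsfast source):

```python
import torch
import torch.nn.functional as F

def ignore_nan(func):
    """Illustrative sketch: drop samples with any NaN target feature."""
    def wrapped(inp, targ):
        # Keep a sample only if every feature of its target is non-NaN.
        keep = ~torch.isnan(targ).flatten(1).any(dim=1)
        # Reduce to flat arrays before handing off to the base loss.
        return func(inp[keep].flatten(), targ[keep].flatten())
    return wrapped

mse = ignore_nan(F.mse_loss)
loss = mse(torch.ones(3, 2),
           torch.tensor([[1.0, 1.0], [float("nan"), 1.0], [3.0, 3.0]]))
# second sample dropped -> mean of (0, 0, 4, 4) = 2.0
```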
float64_func ¶
Decorator that computes a function in float64 and converts the result back.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `func` | `Callable` | function to wrap with float64 promotion | *required* |
Source code in tsfast/training/losses.py
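The promotion pattern can be sketched as below (an illustration, not the tsfast source); it is useful when accumulating squared errors in float32 loses precision:

```python
import torch

def float64_func(func):
    """Illustrative sketch: run `func` in float64, cast the result back."""
    def wrapped(inp, targ):
        out = func(inp.double(), targ.double())
        return out.to(inp.dtype)
    return wrapped

rmse64 = float64_func(lambda i, t: torch.sqrt(torch.mean((i - t) ** 2)))
loss = rmse64(torch.tensor([0.0, 3.0]), torch.tensor([4.0, 3.0]))
# computed in float64, returned as float32: sqrt(mean([16, 0])) ~ 2.828
```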
cut_loss ¶
Loss-function modifier that slices the sequence from `l_cut` to `r_cut`.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `fn` | `Callable` | base loss function to wrap | *required* |
| `l_cut` | `int` | left index to start the slice | `0` |
| `r_cut` | `int \| None` | right index to end the slice (`None` keeps the rest) | `None` |
Source code in tsfast/training/losses.py
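A sketch of the slicing wrapper, assuming a `(batch, seq_len, features)` layout (an illustration, not the tsfast source):

```python
import torch
import torch.nn.functional as F

def cut_loss(fn, l_cut=0, r_cut=None):
    """Illustrative sketch: slice the sequence axis before the base loss."""
    def wrapped(inp, targ):
        # Dropping a prefix lets, e.g., an RNN's transient start-up
        # error be excluded from the loss.
        return fn(inp[:, l_cut:r_cut], targ[:, l_cut:r_cut])
    return wrapped

mse_tail = cut_loss(F.mse_loss, l_cut=2)
loss = mse_tail(torch.zeros(1, 4, 1),
                torch.tensor([[[9.0], [9.0], [1.0], [1.0]]]))
# only the last two steps count: mean([1, 1]) -> 1.0
```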
norm_loss ¶
Loss wrapper that normalizes predictions and targets before computing loss.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `fn` | `Callable` | base loss function to wrap | *required* |
| `norm_stats` | | normalization statistics used to build the scaler | *required* |
| `scaler_cls` | `type \| None` | scaler class to use (defaults to `StandardScaler`) | `None` |
Source code in tsfast/training/losses.py
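The idea can be sketched as below. Treating `norm_stats` as a plain `(mean, std)` pair is an assumption for illustration; the real tsfast scaler interface (e.g. `StandardScaler`) may differ:

```python
import torch
import torch.nn.functional as F

def norm_loss(fn, norm_stats, scaler_cls=None):
    """Illustrative sketch; assumes `norm_stats` is a (mean, std) pair."""
    mean, std = norm_stats
    def wrapped(inp, targ):
        # Measure errors in normalized units so each variable's scale
        # contributes comparably to the loss.
        return fn((inp - mean) / std, (targ - mean) / std)
    return wrapped

mse_norm = norm_loss(F.mse_loss, (torch.tensor(10.0), torch.tensor(2.0)))
loss = mse_norm(torch.tensor([10.0, 12.0]), torch.tensor([10.0, 10.0]))
# errors measured in standard deviations: mean([0, 1]) -> 0.5
```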
weighted_mae ¶
Weighted MAE with log-spaced weights decaying along the sequence axis.
Source code in tsfast/training/losses.py
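One way such a weighting could look, assuming a `(batch, seq_len, features)` layout; the exact decay range (1 down to 0.01) is an assumption, not the tsfast implementation:

```python
import torch

def weighted_mae(inp, targ):
    """Illustrative sketch; the decay range (1 -> 0.01) is an assumption."""
    seq_len = inp.shape[1]
    # Log-spaced weights decaying along the sequence axis, one per time step,
    # so early steps dominate the loss.
    w = torch.logspace(0, -2, seq_len).view(1, -1, 1)
    return (w * (inp - targ).abs()).mean()

loss = weighted_mae(torch.zeros(1, 5, 1), torch.ones(1, 5, 1))
```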
rand_seq_len_loss ¶
`rand_seq_len_loss(fn: Callable, min_idx: int = 1, max_idx: int | None = None, mid_idx: int | None = None) -> Callable`
Loss-function modifier that randomly truncates each sequence in the minibatch individually.
Lengths are drawn from a triangular distribution. This can be slow for very large batch sizes.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `fn` | `Callable` | base loss function to wrap | *required* |
| `min_idx` | `int` | minimum sequence length | `1` |
| `max_idx` | `int \| None` | maximum sequence length (defaults to full sequence) | `None` |
| `mid_idx` | `int \| None` | mode of the triangular distribution (defaults to `min_idx`) | `None` |
Source code in tsfast/training/losses.py
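The per-sample truncation can be sketched as follows (an illustration, not the tsfast source, assuming a `(batch, seq_len, features)` layout):

```python
import random
import torch
import torch.nn.functional as F

def rand_seq_len_loss(fn, min_idx=1, max_idx=None, mid_idx=None):
    """Illustrative sketch: truncate each sample to a random length."""
    def wrapped(inp, targ):
        hi = max_idx if max_idx is not None else inp.shape[1]
        mode = mid_idx if mid_idx is not None else min_idx
        losses = []
        # The per-sample Python loop is what makes this slow for very
        # large batch sizes.
        for b in range(inp.shape[0]):
            n = int(random.triangular(min_idx, hi, mode))
            losses.append(fn(inp[b, :n], targ[b, :n]))
        return torch.stack(losses).mean()
    return wrapped

mse_rand = rand_seq_len_loss(F.mse_loss, min_idx=2)
loss = mse_rand(torch.zeros(4, 10, 1), torch.ones(4, 10, 1))
# constant unit error, so every random truncation yields 1.0
```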
fun_rmse ¶
cos_sim_loss ¶
cos_sim_loss_pow ¶
nrmse ¶
RMSE loss normalized by variance of each target variable.
nrmse_std ¶
RMSE loss normalized by standard deviation of each target variable.
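The pair of normalized metrics can be sketched as below. The axis conventions (per-target-variable normalization over the flattened batch and time dimensions) are assumptions about the tsfast implementation:

```python
import torch

def nrmse(inp, targ):
    """Illustrative sketch: RMSE divided by each target variable's variance."""
    i, t = inp.reshape(-1, inp.shape[-1]), targ.reshape(-1, targ.shape[-1])
    rmse = torch.sqrt(torch.mean((i - t) ** 2, dim=0))
    return (rmse / t.var(dim=0)).mean()

def nrmse_std(inp, targ):
    """Illustrative sketch: RMSE divided by each target variable's std."""
    i, t = inp.reshape(-1, inp.shape[-1]), targ.reshape(-1, targ.shape[-1])
    rmse = torch.sqrt(torch.mean((i - t) ** 2, dim=0))
    return (rmse / t.std(dim=0)).mean()

targ = torch.arange(48, dtype=torch.float32).reshape(2, 8, 3)
# a perfect prediction scores 0 under both metrics
```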