Learner

PyTorch modules for training models on sequential data
from tsfast.datasets import create_dls_test
from tsfast.models import SimpleRNN  # assumed import path for the RNN used below
from fastai.basics import Learner
import torch.nn as nn

dls = create_dls_test()
model = SimpleRNN(1,1)

Loss Functions


source

ignore_nan

 ignore_nan (func)

Decorator that removes NaN values from the tensors before `func` is executed; the tensors are reduced to flat arrays in the process. Apply it to elementwise functions such as `mse`.

import numpy as np
import torch
from fastai.metrics import mse
from fastcore.test import test_close

mse_nan = ignore_nan(mse)  # assumed definition of the NaN-tolerant mse used below

n = 1000
y_t = torch.ones(32,n,6)
y_t[:,20] = np.nan
y_p = torch.ones(32,n,6)*1.1
(~torch.isnan(y_t)).shape
torch.Size([32, 1000, 6])
y_t.shape
torch.Size([32, 1000, 6])
assert torch.isnan(mse(y_p,y_t))
test_close(mse_nan(y_p,y_t),0.01)
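A minimal sketch of what such a decorator can look like (names are illustrative, not the library's code); boolean mask indexing both drops the NaN positions and flattens the tensors:

import torch

def ignore_nan_sketch(func):
    "Hypothetical re-implementation: drop NaN targets, flatten, then apply `func`."
    def wrapper(inp, targ):
        mask = ~torch.isnan(targ)           # True where the target is valid
        return func(inp[mask], targ[mask])  # boolean indexing flattens both tensors
    return wrapper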

source

float64_func

 float64_func (func)

Wrapper that evaluates `func` internally in float64 and casts the result back to the original dtype.

Learner(dls,model,loss_func=float64_func(nn.MSELoss())).fit(1)
epoch train_loss valid_loss time
0 0.055763 0.059929 00:01
UserWarning: Float64 precision not supported on mps:0 device. Using original precision. This may reduce numerical accuracy. Error: Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead.

source

SkipNLoss

 SkipNLoss (fn, n_skip=0)

Loss-Function modifier that skips the first `n_skip` time steps of each sequence.

Learner(dls,model,loss_func=SkipNLoss(nn.MSELoss(),n_skip=30)).fit(1)
epoch train_loss valid_loss time
0 0.051679 0.052046 00:01
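Conceptually, the modifier slices the sequence dimension before delegating to the wrapped loss. A minimal sketch, assuming a (batch, seq_len, features) layout:

class SkipNLossSketch:
    "Hypothetical equivalent: evaluate `fn` only after the first `n_skip` time steps."
    def __init__(self, fn, n_skip=0):
        self.fn, self.n_skip = fn, n_skip
    def __call__(self, inp, targ):
        return self.fn(inp[:, self.n_skip:], targ[:, self.n_skip:])

Skipping the initial steps is useful for RNNs, whose early predictions are dominated by the arbitrary initial hidden state.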

source

CutLoss

 CutLoss (fn, l_cut=0, r_cut=None)

Loss-Function modifier that cuts each sequence to the slice `[l_cut:r_cut]` before applying `fn`, generalizing `SkipNLoss` with an optional right cut.

Learner(dls,model,loss_func=CutLoss(nn.MSELoss(),l_cut=30)).fit(1)
epoch train_loss valid_loss time
0 0.028736 0.018902 00:01
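The same idea with an optional right boundary; a minimal sketch under the same layout assumption:

class CutLossSketch:
    "Hypothetical equivalent: apply `fn` to the slice [l_cut:r_cut] of each sequence."
    def __init__(self, fn, l_cut=0, r_cut=None):
        self.fn, self.l_cut, self.r_cut = fn, l_cut, r_cut
    def __call__(self, inp, targ):
        return self.fn(inp[:, self.l_cut:self.r_cut], targ[:, self.l_cut:self.r_cut])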

source

weighted_mae

 weighted_mae (input, target)
Learner(dls,model,loss_func=SkipNLoss(weighted_mae,n_skip=30)).fit(1)
epoch train_loss valid_loss time
0 0.084046 0.065088 00:01
UserWarning: torch.logspace not supported on mps:0 device. Using cpu. This may reduce numerical performance

source

RandSeqLenLoss

 RandSeqLenLoss (fn, min_idx=1, max_idx=None, mid_idx=None)

Loss-Function modifier that randomly truncates the length of every sequence in the mini-batch individually. Currently slow for very large batch sizes.

Learner(dls,model,loss_func=RandSeqLenLoss(nn.MSELoss())).fit(1)
epoch train_loss valid_loss time
0 0.036072 0.037482 00:07
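Each sequence is scored on a random prefix, so the model cannot overfit to one fixed evaluation length; the per-sequence loop also explains why this is slow for large batches. A rough sketch with a uniform length distribution (`mid_idx`, which presumably skews that distribution, is omitted):

import torch

class RandSeqLenLossSketch:
    "Hypothetical equivalent: evaluate `fn` on a random prefix of every sequence."
    def __init__(self, fn, min_idx=1, max_idx=None):
        self.fn, self.min_idx, self.max_idx = fn, min_idx, max_idx
    def __call__(self, inp, targ):
        max_idx = self.max_idx or targ.shape[1]
        losses = []
        for i in range(targ.shape[0]):  # per-sequence loop: the source of the slowness
            end = int(torch.randint(self.min_idx, max_idx + 1, (1,)))
            losses.append(self.fn(inp[i:i+1, :end], targ[i:i+1, :end]))
        return torch.stack(losses).mean()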

source

fun_rmse

 fun_rmse (inp, targ)

RMSE loss defined as a plain function rather than an `AccumMetric`.

Learner(dls,model,loss_func=nn.MSELoss(),metrics=SkipNLoss(fun_rmse,n_skip=30)).fit(1)
epoch train_loss valid_loss fun_rmse time
0 0.010846 0.010722 0.051925 00:01
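A plain-function RMSE is a one-liner; a sketch of an equivalent definition:

import torch
import torch.nn.functional as F

def fun_rmse_sketch(inp, targ):
    "RMSE as a plain function, usable directly as a loss or wrapped in SkipNLoss."
    return torch.sqrt(F.mse_loss(inp, targ))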

source

cos_sim_loss

 cos_sim_loss (inp, targ)

Cosine similarity loss defined as a plain function rather than an `AccumMetric`.

Learner(dls,model,loss_func=cos_sim_loss,metrics=SkipNLoss(fun_rmse,n_skip=30)).fit(1)
epoch train_loss valid_loss fun_rmse time
0 0.234125 0.254100 0.051972 00:01
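A common formulation of a cosine similarity loss is one minus the similarity; a sketch assuming the similarity is taken along the time axis of (batch, seq_len, features) tensors (the axis choice is an assumption):

import torch.nn.functional as F

def cos_sim_loss_sketch(inp, targ):
    "Hypothetical equivalent: 1 - cosine similarity, averaged over the batch."
    return (1 - F.cosine_similarity(inp, targ, dim=1)).mean()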

source

cos_sim_loss_pow

 cos_sim_loss_pow (inp, targ)

Powered variant of `cos_sim_loss`, defined as a plain function rather than an `AccumMetric`.

Learner(dls,model,loss_func=cos_sim_loss_pow,metrics=SkipNLoss(fun_rmse,n_skip=30)).fit(1)
epoch train_loss valid_loss fun_rmse time
0 0.468299 0.509000 0.051983 00:01

source

nrmse

 nrmse (inp, targ)

RMSE loss function scaled by the variance of each target variable.

dls.one_batch()[0].shape
torch.Size([64, 100, 1])
Learner(dls,model,loss_func=nn.MSELoss(),metrics=SkipNLoss(nrmse,n_skip=30)).fit(1)
epoch train_loss valid_loss nrmse time
0 0.010644 0.010091 0.181790 00:01
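A sketch of what this normalization can look like, assuming per-variable statistics over the batch and time dimensions; `nrmse_std` below divides by the standard deviation instead:

import torch

def nrmse_sketch(inp, targ):
    "Hypothetical equivalent: per-variable RMSE divided by that variable's variance."
    rmse_per_var = torch.sqrt(((inp - targ) ** 2).mean(dim=(0, 1)))
    return (rmse_per_var / targ.var(dim=(0, 1))).mean()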

source

nrmse_std

 nrmse_std (inp, targ)

RMSE loss function scaled by the standard deviation of each target variable.

Learner(dls,model,loss_func=nn.MSELoss(),metrics=SkipNLoss(nrmse_std,n_skip=30)).fit(1)
epoch train_loss valid_loss nrmse_std time
0 0.010193 0.009726 0.078454 00:01

source

mean_vaf

 mean_vaf (inp, targ)
Learner(dls,model,loss_func=nn.MSELoss(),metrics=SkipNLoss(mean_vaf,n_skip=30)).fit(1)
epoch train_loss valid_loss mean_vaf time
0 0.009576 0.009391 97.983543 00:01
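The source omits a docstring. VAF (Variance Accounted For) is a standard system-identification metric reported in percent, where 100 means a perfect fit; the value above (~98) therefore indicates a good model. A sketch of the usual definition, averaged over target variables (assumed, not confirmed by the source):

import torch

def mean_vaf_sketch(inp, targ):
    "Hypothetical equivalent: mean Variance Accounted For, in percent."
    vaf = 100 * (1 - (targ - inp).var(dim=(0, 1)) / targ.var(dim=(0, 1)))
    return vaf.mean()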