from tsfast.datasets.core import *
Learner
PyTorch modules for training models on sequential data
dls = create_dls_test(prediction=True).cpu()
model = SimpleRNN(1,1)
Create Learner Models
Create Learners with different kinds of models, along with fitting parameters and regularizations.
get_inp_out_size
get_inp_out_size (dls)
Returns the input and output size of a time series databunch
test_eq(get_inp_out_size(dls),(2,1))
RNN Learner
The Learners include model-specific optimizations. Removing the first n_skip samples from the loss function, which discards the transient phase, greatly improves training stability.
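The idea behind n_skip can be shown with a toy L1 loss (a purely illustrative sketch, not the tsfast implementation; the function name and list-based sequences are assumptions):

```python
def l1_loss_skip(pred, target, n_skip=0):
    """L1 loss over one sequence, ignoring the first n_skip time steps.

    pred, target: lists of per-time-step values. The skipped prefix is the
    transient phase, where the model's internal state has not yet settled.
    """
    pred, target = pred[n_skip:], target[n_skip:]
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)


# The large error at t=0 (the transient) dominates the plain loss,
# but is excluded once n_skip=1 is set.
full = l1_loss_skip([5.0, 1.0, 1.0], [0.0, 1.0, 1.0])            # 5/3
skipped = l1_loss_skip([5.0, 1.0, 1.0], [0.0, 1.0, 1.0], n_skip=1)  # 0.0
```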
RNNLearner
RNNLearner (dls, loss_func=L1Loss(), metrics=[<function fun_rmse at 0x7fa58be4a9e0>], n_skip=0, num_layers=1, hidden_size=100, stateful=False, opt_func=<function Adam>, cbs=None, linear_layers=0, return_state=False, hidden_p=0.0, input_p=0.0, weight_p=0.0, rnn_type='gru', ret_full_hidden=False, normalization='', **kwargs)
RNNLearner(dls,rnn_type='gru').fit(1,1e-4)
epoch | train_loss | valid_loss | fun_rmse | time |
---|---|---|---|---|
0 | 0.162996 | 0.160148 | 0.202793 | 00:00 |
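What stateful=True means can be illustrated with a toy recurrence (an illustrative stand-in for a GRU cell, not the tsfast implementation; the class and its reset method are assumptions): the hidden state is carried over between calls instead of being reset for every batch.

```python
class StatefulCell:
    """Toy linear recurrence h <- a*h + x that keeps its state across calls."""

    def __init__(self, a=0.5):
        self.a, self.h = a, 0.0

    def reset(self):
        # A stateless learner effectively does this before every batch.
        self.h = 0.0

    def __call__(self, x):
        self.h = self.a * self.h + x
        return self.h


cell = StatefulCell()
y1 = cell(1.0)   # 1.0
y2 = cell(0.0)   # 0.5 -- the state from the previous call still decays here
cell.reset()
y3 = cell(0.0)   # 0.0 -- after a reset, no memory of past inputs remains
```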
RNNLearner(dls,rnn_type='gru',stateful=True).fit(1,1e-4)
epoch | train_loss | valid_loss | fun_rmse | time |
---|---|---|---|---|
0 | 0.161937 | 0.172497 | 0.214227 | 00:00 |
RNNLearner(dls,rnn_type='gru',stateful=True, n_skip=20).fit(1,1e-4)
epoch | train_loss | valid_loss | fun_rmse | time |
---|---|---|---|---|
0 | 0.198024 | 0.207973 | 0.267253 | 00:00 |
TCN Learner
Performs better on multi-input data. Higher beta values yield much smoother predictions. Much faster than RNNs at prediction time.
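The prediction-speed advantage comes from the convolutional structure: with dilations doubling per layer, the receptive field grows exponentially with depth. A minimal sketch of that growth, assuming the standard TCN scheme (dilation 2**i at layer i, fixed kernel size); this is a generic formula, not a tsfast internal:

```python
def tcn_receptive_field(num_layers, kernel_size=2):
    """Receptive field of a dilated causal conv stack with dilations 1, 2, 4, ...

    Each layer i adds (kernel_size - 1) * 2**i time steps of context.
    """
    return 1 + (kernel_size - 1) * sum(2 ** i for i in range(num_layers))


# Six layers with kernel size 2 already cover 64 past time steps.
rf = tcn_receptive_field(6)  # 64
```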
TCNLearner
TCNLearner (dls, num_layers=3, hidden_size=100, loss_func=L1Loss(), metrics=[<function fun_rmse at 0x7fa58be4a9e0>], n_skip=None, opt_func=<function Adam>, cbs=None, hl_depth=1, hl_width=10, act=<class 'torch.nn.modules.activation.Mish'>, bn=False, stateful=False, **kwargs)
TCNLearner(dls,num_layers=6,loss_func=nn.L1Loss()).fit(1)
epoch | train_loss | valid_loss | fun_rmse | time |
---|---|---|---|---|
0 | 0.224532 | 0.101968 | 0.128896 | 00:00 |
CRNN Learner
CRNNLearner
CRNNLearner (dls, loss_func=L1Loss(), metrics=[<function fun_rmse at 0x7fa58be4a9e0>], n_skip=0, opt_func=<function Adam>, cbs=None, num_ft=10, num_cnn_layers=4, num_rnn_layers=2, hs_cnn=10, hs_rnn=10, hidden_p=0, input_p=0, weight_p=0, rnn_type='gru', stateful=False, **kwargs)
CRNNLearner(dls,rnn_type='gru').fit(1,3e-2)
epoch | train_loss | valid_loss | fun_rmse | time |
---|---|---|---|---|
0 | 0.145227 | 0.084660 | 0.121797 | 00:00 |
Autoregressive Learner
AR_TCNLearner
AR_TCNLearner (dls, hl_depth=3, alpha=1, beta=1, early_stop=0, metrics=None, n_skip=None, opt_func=<function Adam>, hl_width=10, act=<class 'torch.nn.modules.activation.Mish'>, bn=False, stateful=False, **kwargs)
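The rollout that makes a learner autoregressive can be sketched as follows (purely illustrative; the function and argument names are assumptions, not the tsfast API): at each step the model's previous output is fed back as part of the next input.

```python
def ar_rollout(step_fn, u, y0, n_steps):
    """Roll a one-step model forward, feeding its own output back in.

    step_fn(u_t, y_prev) -> y_t: one-step predictor.
    u: external input sequence, y0: initial output, n_steps: rollout length.
    """
    ys, y = [], y0
    for t in range(n_steps):
        y = step_fn(u[t], y)  # next output depends on input AND last output
        ys.append(y)
    return ys


# A simple decaying system standing in for a trained model:
trace = ar_rollout(lambda u, y: 0.5 * y + u, [1.0, 0.0, 0.0], 0.0, 3)
# trace == [1.0, 0.5, 0.25]
```

Because errors are fed back into the model during such a rollout, small one-step inaccuracies can compound, which is why these learners expose extra regularization knobs.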
AR_TCNLearner(dls).fit(1)
AR_RNNLearner
AR_RNNLearner (dls, alpha=0, beta=0, early_stop=0, metrics=None, n_skip=0, opt_func=<function Adam>, num_layers=1, hidden_size=100, linear_layers=0, return_state=False, hidden_p=0.0, input_p=0.0, weight_p=0.0, rnn_type='gru', ret_full_hidden=False, stateful=False, normalization='', **kwargs)
AR_RNNLearner(dls).fit(1)
epoch | train_loss | valid_loss | fun_rmse | time |
---|---|---|---|---|
0 | 0.127089 | 0.075756 | 0.095623 | 00:00 |