Learner

PyTorch modules for training models on sequential data
from tsfast.datasets.core import *
dls = create_dls_test(prediction=True).cpu()
model = SimpleRNN(1,1)

Create Learner Models

Create Learners with different kinds of models, with suitable parameters and regularizations.


source

get_inp_out_size

 get_inp_out_size (dls)

Returns the input and output size of a time series dataloaders object

test_eq(get_inp_out_size(dls),(2,1))
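The behavior can be sketched as follows. This is an illustrative stand-in, not the tsfast implementation (which reads the sizes from the dataloaders directly); the nested-list batch layout of shape (batch, seq_len, channels) is an assumption:

```python
def infer_inp_out_size(xb, yb):
    """Infer input/output channel counts from one batch pair,
    each laid out as (batch, seq_len, channels)."""
    # the last axis of x and y holds the channels
    return len(xb[0][0]), len(yb[0][0])

# one batch, one time step, 2 input channels and 1 output channel
xb = [[[0.0, 0.0]]]
yb = [[[0.0]]]
infer_inp_out_size(xb, yb)  # → (2, 1)
```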

RNN Learner

The Learners include model-specific optimizations. Removing the first n_skip samples from the loss computation, which fall into the transient phase of the sequence, greatly improves training stability.
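The n_skip idea can be sketched as a loss that ignores the first few time steps, where the recurrent state has not yet settled. Names here are illustrative, not the tsfast implementation:

```python
def skipped_l1_loss(pred, target, n_skip=0):
    """L1 loss over a sequence, skipping the first n_skip time steps
    so the transient phase does not dominate the gradient."""
    pred, target = pred[n_skip:], target[n_skip:]
    diffs = [abs(p - t) for p, t in zip(pred, target)]
    return sum(diffs) / len(diffs)

pred, target = [0.9, 0.1, 0.2], [0.0, 0.0, 0.0]
skipped_l1_loss(pred, target)            # → 0.4 (transient error included)
skipped_l1_loss(pred, target, n_skip=1)  # → 0.15 (first step ignored)
```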


source

RNNLearner

 RNNLearner (dls, loss_func=L1Loss(), metrics=[<function fun_rmse at
             0x7fa58be4a9e0>], n_skip=0, num_layers=1, hidden_size=100,
             stateful=False, opt_func=<function Adam>, cbs=None,
             linear_layers=0, return_state=False, hidden_p=0.0,
             input_p=0.0, weight_p=0.0, rnn_type='gru',
             ret_full_hidden=False, normalization='', **kwargs)
RNNLearner(dls,rnn_type='gru').fit(1,1e-4)
epoch train_loss valid_loss fun_rmse time
0 0.162996 0.160148 0.202793 00:00
RNNLearner(dls,rnn_type='gru',stateful=True).fit(1,1e-4)
epoch train_loss valid_loss fun_rmse time
0 0.161937 0.172497 0.214227 00:00
RNNLearner(dls,rnn_type='gru',stateful=True, n_skip=20).fit(1,1e-4)
epoch train_loss valid_loss fun_rmse time
0 0.198024 0.207973 0.267253 00:00

TCN Learner

Performs better on multi-input data. Higher beta values yield a much smoother prediction, and inference is much faster than with RNNs.
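In fastai-style regularization, beta typically weights a temporal activation regularization (TAR) term that penalizes step-to-step changes in the outputs, which is why larger beta values push the model towards smoother predictions. A minimal sketch, assuming that convention (not the exact tsfast code):

```python
def tar_penalty(outputs, beta=1.0):
    """Temporal activation regularization: penalize squared differences
    between consecutive outputs, scaled by beta."""
    diffs = [(b - a) ** 2 for a, b in zip(outputs, outputs[1:])]
    return beta * sum(diffs) / len(diffs)

smooth = [0.0, 0.1, 0.2, 0.3]
jumpy  = [0.0, 1.0, 0.0, 1.0]
tar_penalty(smooth) < tar_penalty(jumpy)  # → True: smoother is cheaper
```

Raising beta increases the cost of a jagged output sequence relative to the data-fit loss, so the optimizer trades a little accuracy for smoothness.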


source

TCNLearner

 TCNLearner (dls, num_layers=3, hidden_size=100, loss_func=L1Loss(),
             metrics=[<function fun_rmse at 0x7fa58be4a9e0>], n_skip=None,
             opt_func=<function Adam>, cbs=None, hl_depth=1, hl_width=10,
             act=<class 'torch.nn.modules.activation.Mish'>, bn=False,
             stateful=False, **kwargs)
TCNLearner(dls,num_layers=6,loss_func=nn.L1Loss()).fit(1)
epoch train_loss valid_loss fun_rmse time
0 0.224532 0.101968 0.128896 00:00

CRNN Learner


source

CRNNLearner

 CRNNLearner (dls, loss_func=L1Loss(), metrics=[<function fun_rmse at
              0x7fa58be4a9e0>], n_skip=0, opt_func=<function Adam>,
              cbs=None, num_ft=10, num_cnn_layers=4, num_rnn_layers=2,
              hs_cnn=10, hs_rnn=10, hidden_p=0, input_p=0, weight_p=0,
              rnn_type='gru', stateful=False, **kwargs)
CRNNLearner(dls,rnn_type='gru').fit(1,3e-2)
epoch train_loss valid_loss fun_rmse time
0 0.145227 0.084660 0.121797 00:00

Autoregressive Learner


source

AR_TCNLearner

 AR_TCNLearner (dls, hl_depth=3, alpha=1, beta=1, early_stop=0,
                metrics=None, n_skip=None, opt_func=<function Adam>,
                hl_width=10, act=<class
                'torch.nn.modules.activation.Mish'>, bn=False,
                stateful=False, **kwargs)
AR_TCNLearner(dls).fit(1)
epoch train_loss valid_loss fun_rmse time


source

AR_RNNLearner

 AR_RNNLearner (dls, alpha=0, beta=0, early_stop=0, metrics=None,
                n_skip=0, opt_func=<function Adam>, num_layers=1,
                hidden_size=100, linear_layers=0, return_state=False,
                hidden_p=0.0, input_p=0.0, weight_p=0.0, rnn_type='gru',
                ret_full_hidden=False, stateful=False, normalization='',
                **kwargs)
AR_RNNLearner(dls).fit(1)
epoch train_loss valid_loss fun_rmse time
0 0.127089 0.075756 0.095623 00:00
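The autoregressive setup can be sketched as a rollout loop: each prediction is fed back as the next input instead of using ground-truth samples. The step function below is a hypothetical one-step model standing in for the trained network:

```python
def autoregressive_rollout(step_fn, init_input, n_steps):
    """Roll out an autoregressive model for n_steps: feed each
    prediction back in as the next input."""
    preds, x = [], init_input
    for _ in range(n_steps):
        x = step_fn(x)  # one-step prediction (hypothetical model)
        preds.append(x)
    return preds

# toy step model: the signal decays by half each step
autoregressive_rollout(lambda x: 0.5 * x, 1.0, 3)  # → [0.5, 0.25, 0.125]
```

Because errors compound over the rollout, regularization terms like alpha and beta matter more for autoregressive learners than for one-step-ahead training.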