Models

```python
from nbdev.config import get_config
project_root = get_config().config_file.parent
f_path = project_root / 'test_data/WienerHammerstein'
```
```python
seq = DataBlock(blocks=(SequenceBlock.from_hdf(['u','y'],TensorSequencesInput,clm_shift=[0,-1]),
                        SequenceBlock.from_hdf(['y'],TensorSequencesOutput,clm_shift=[-1])),
                get_items=CreateDict([DfHDFCreateWindows(win_sz=100+1,stp_sz=100,clm='u')]),
                splitter=ApplyToDict(ParentSplitter()))
db = seq.dataloaders(get_hdf_files(f_path))
```
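To sanity-check the pipeline, you can pull one batch and inspect its shapes; a minimal sketch (`one_batch` is the standard fastai `DataLoaders` method):

```python
xb, yb = db.one_batch()
# win_sz=100+1 together with the clm_shift settings above yields
# length-100 training sequences: xb holds the shifted (u, y) inputs,
# yb the next-step y target.
print(xb.shape, yb.shape)
```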
RNNs
Basic RNN
RNN
RNN (input_size, hidden_size, num_layers, hidden_p=0.0, input_p=0.0, weight_p=0.0, rnn_type='gru', ret_full_hidden=False, stateful=False, normalization='', **kwargs)
Inspired by https://arxiv.org/abs/1708.02182 (AWD-LSTM), from which the hidden_p, input_p, and weight_p dropout regularizations are taken.
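The constructor arguments follow the signature above; a hedged shape sketch, assuming batch-first sequence tensors as produced by the data pipeline and an nn.GRU-like (output, hidden) return value:

```python
import torch

rnn = RNN(2, 10, 2, rnn_type='gru')  # 2 input channels, hidden size 10, 2 layers
x = torch.randn(8, 100, 2)           # (batch, seq_len, input_size)
out, hidden = rnn(x)                 # assumption: nn.GRU-like (output, hidden) return
print(out.shape)                     # expected: torch.Size([8, 100, 10])
```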
Sequential_RNN
Sequential_RNN (input_size, hidden_size, num_layers, hidden_p=0.0, input_p=0.0, weight_p=0.0, rnn_type='gru', ret_full_hidden=False, stateful=False, normalization='', **kwargs)
RNN Variant for Sequential Modules
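A plain recurrent layer returns an (output, hidden) tuple, which nn.Sequential cannot chain directly, so a sequential-friendly variant adapts what is passed between layers. A minimal sketch of that idea (illustrative only, not this class's exact implementation):

```python
import torch.nn as nn

class GRUOutputOnly(nn.Module):
    "Sketch: wrap a GRU so nn.Sequential can chain it by dropping the hidden state."
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.rnn = nn.GRU(input_size, hidden_size, batch_first=True)

    def forward(self, x):
        out, _ = self.rnn(x)  # keep only the sequence output
        return out
```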
SimpleRNN
SimpleRNN (input_size, output_size, num_layers=1, hidden_size=100, linear_layers=0, return_state=False, hidden_p=0.0, input_p=0.0, weight_p=0.0, rnn_type='gru', ret_full_hidden=False, stateful=False, normalization='', **kwargs)
Base class for all neural network modules.

Your models should also subclass this class.

Modules can also contain other Modules, allowing them to be nested in a tree structure. You can assign the submodules as regular attributes:

```python
import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))
```

Submodules assigned in this way will be registered, and will also have their parameters converted when you call `to()`, etc.

Note: as per the example above, an `__init__()` call to the parent class must be made before assignment on the child.

training (bool): whether this module is in training or evaluation mode.
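In terms of shapes, SimpleRNN maps a batch-first sequence of input_size channels to a sequence of output_size channels; a minimal sketch (the batch-first convention is an assumption based on the data pipeline above):

```python
import torch

m = SimpleRNN(2, 1, 2)      # 2 inputs, 1 output, 2 recurrent layers
x = torch.randn(8, 100, 2)  # 8 windows of length 100 with 2 channels (u, y)
y = m(x)
print(y.shape)              # expected: torch.Size([8, 100, 1])
```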
```python
model = SimpleRNN(2,1,2,stateful=False,normalization='batchnorm')
lrn = Learner(db,model,loss_func=nn.MSELoss())#.fit(10)
```
```python
model = SimpleRNN(2,1,2,rnn_type='lstm')
lrn = Learner(db,model,loss_func=nn.MSELoss()).fit(1)
```
epoch | train_loss | valid_loss | time |
---|---|---|---|
0 | 0.031366 | 0.013281 | 00:00 |
```python
model = SimpleRNN(2,1,2,rnn_type='gru')
lrn = Learner(db,model,loss_func=nn.MSELoss()).fit(2)
```
epoch | train_loss | valid_loss | time |
---|---|---|---|
0 | 0.018948 | 0.006698 | 00:02 |
1 | 0.010434 | 0.002743 | 00:02 |
Residual RNN
ResidualBlock_RNN
ResidualBlock_RNN (input_size, hidden_size, hidden_p=0.0, input_p=0.0, weight_p=0.0, rnn_type='gru', ret_full_hidden=False, stateful=False, normalization='', **kwargs)
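For intuition, a residual RNN block adds a skip connection around a recurrent layer, so the block learns a correction to its input rather than a full mapping. An illustrative plain-PyTorch sketch of the pattern (not this class's exact implementation, which also supports dropout and normalization):

```python
import torch.nn as nn

class ResidualGRUBlock(nn.Module):
    "Sketch: y = x + GRU(x); the skip requires input_size == hidden_size."
    def __init__(self, size):
        super().__init__()
        self.rnn = nn.GRU(size, size, batch_first=True)

    def forward(self, x):
        out, _ = self.rnn(x)
        return x + out  # skip connection around the recurrent layer
```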
SimpleResidualRNN
SimpleResidualRNN (input_size, output_size, num_blocks=1, hidden_size=100, hidden_p=0.0, input_p=0.0, weight_p=0.0, rnn_type='gru', ret_full_hidden=False, stateful=False, normalization='', **kwargs)
A sequential container.

Modules will be added to it in the order they are passed in the constructor. Alternatively, an OrderedDict of modules can be passed in. The forward() method of Sequential accepts any input and forwards it to the first module it contains. It then "chains" outputs to inputs sequentially for each subsequent module, finally returning the output of the last module.

The value a Sequential provides over manually calling a sequence of modules is that it allows treating the whole container as a single module, such that performing a transformation on the Sequential applies to each of the modules it stores (which are each a registered submodule of the Sequential).

What's the difference between a Sequential and a torch.nn.ModuleList? A ModuleList is exactly what it sounds like: a list for storing Modules! On the other hand, the layers in a Sequential are connected in a cascading way.

Example:

```python
# Using Sequential to create a small model. When `model` is run,
# input will first be passed to `Conv2d(1,20,5)`. The output of
# `Conv2d(1,20,5)` will be used as the input to the first
# `ReLU`; the output of the first `ReLU` will become the input
# for `Conv2d(20,64,5)`. Finally, the output of
# `Conv2d(20,64,5)` will be used as input to the second `ReLU`
model = nn.Sequential(
    nn.Conv2d(1,20,5),
    nn.ReLU(),
    nn.Conv2d(20,64,5),
    nn.ReLU()
)

# Using Sequential with OrderedDict. This is functionally the
# same as the above code
model = nn.Sequential(OrderedDict([
    ('conv1', nn.Conv2d(1,20,5)),
    ('relu1', nn.ReLU()),
    ('conv2', nn.Conv2d(20,64,5)),
    ('relu2', nn.ReLU())
]))
```
```python
model = SimpleResidualRNN(2,1,1,stateful=False,normalization='')
lrn = Learner(db,model,loss_func=nn.MSELoss()).fit(1)
```
epoch | train_loss | valid_loss | time |
---|---|---|---|
0 | 0.016374 | 0.002262 | 00:02 |
Dense RNN
DenseBlock_RNN
DenseBlock_RNN (num_layers, num_input_features, growth_rate, hidden_p=0.0, input_p=0.0, weight_p=0.0, rnn_type='gru', ret_full_hidden=False, stateful=False, normalization='', **kwargs)
DenseLayer_RNN
DenseLayer_RNN (input_size, hidden_size, hidden_p=0.0, input_p=0.0, weight_p=0.0, rnn_type='gru', ret_full_hidden=False, stateful=False, normalization='', **kwargs)
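The dense pattern concatenates each layer's new features onto its input along the channel dimension, so every later layer sees all earlier features and the width grows by growth_rate per layer. An illustrative sketch of the idea (not this class's exact implementation):

```python
import torch
import torch.nn as nn

class DenseGRULayer(nn.Module):
    "Sketch: emit growth_rate new features and concatenate them onto the input."
    def __init__(self, num_input_features, growth_rate):
        super().__init__()
        self.rnn = nn.GRU(num_input_features, growth_rate, batch_first=True)

    def forward(self, x):
        new, _ = self.rnn(x)
        return torch.cat([x, new], dim=-1)  # feature count grows by growth_rate
```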
DenseNet_RNN
DenseNet_RNN (input_size, output_size, growth_rate=32, block_config=(3, 3), num_init_features=32, hidden_p=0.0, input_p=0.0, weight_p=0.0, rnn_type='gru', ret_full_hidden=False, stateful=False, normalization='', **kwargs)
```python
model = DenseNet_RNN(2,1,growth_rate=10,block_config=(1,1),num_init_features=2)
lrn = Learner(db,model,loss_func=nn.MSELoss()).fit(1)
```
epoch | train_loss | valid_loss | time |
---|---|---|---|
0 | 0.119397 | 0.075041 | 00:06 |
Separate RNN
SeperateRNN
SeperateRNN (input_list, output_size, num_layers=1, hidden_size=100, linear_layers=1, hidden_p=0.0, input_p=0.0, weight_p=0.0, rnn_type='gru', ret_full_hidden=False, stateful=False, normalization='', **kwargs)
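SeperateRNN routes groups of input channels through separate recurrent encoders before combining them, which is useful when the inputs differ in character (e.g. the excitation u and the measured output y). An illustrative sketch of the pattern (hypothetical helper, not this class's exact implementation; group_sizes lists the channel count of each group):

```python
import torch
import torch.nn as nn

class SeparateEncoders(nn.Module):
    "Sketch: one GRU per channel group; per-group outputs are concatenated."
    def __init__(self, group_sizes, hidden_size):
        super().__init__()
        self.group_sizes = group_sizes
        self.rnns = nn.ModuleList(nn.GRU(g, hidden_size, batch_first=True)
                                  for g in group_sizes)

    def forward(self, x):
        outs, i = [], 0
        for g, rnn in zip(self.group_sizes, self.rnns):
            out, _ = rnn(x[..., i:i+g])  # this group's channel slice
            outs.append(out)
            i += g
        return torch.cat(outs, dim=-1)
```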