# Models

```python
from tsfast.datasets import create_dls_test

dls = create_dls_test()
```
## CNN

### Conv1D
Conv1D (input_size, output_size, kernel_size=3, activation=<class 'torch.nn.modules.activation.Mish'>, wn=True, bn=False, stride:Union[int,Tuple[int]]=1, padding:Union[str,int,Tuple[int]]=0, dilation:Union[int,Tuple[int]]=1, groups:int=1, bias:bool=True, padding_mode:str='zeros', device=None, dtype=None, **kwargs)
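Judging from the signature, `Conv1D` wraps a standard 1D convolution and adds an activation (Mish by default), optional weight normalization (`wn`) and optional batch norm (`bn`). A minimal construction sketch; the import path and the `(batch, channels, length)` input layout are assumptions here, following `nn.Conv1d`:

```python
import torch
from tsfast.models import Conv1D  # import path assumed from this page's module

# Single Conv1D layer: 1 input channel -> 8 output channels, kernel size 3.
layer = Conv1D(1, 8, kernel_size=3, padding=1)

x = torch.randn(16, 1, 100)  # assumed layout: (batch, channels, length), as in nn.Conv1d
y = layer(x)
print(y.shape)  # under the assumed layout, padding=1 with kernel_size=3 preserves the length
```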
### CNN
CNN (input_size, output_size, hl_depth=1, hl_width=10, act=<class 'torch.nn.modules.activation.Mish'>, bn=False)
Base class for all neural network modules. Your models should also subclass this class.

Modules can also contain other Modules, allowing them to be nested in a tree structure. You can assign the submodules as regular attributes:

```python
import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))
```

Submodules assigned in this way will be registered, and will also have their parameters converted when you call `.to()`, etc.

Note: as per the example above, an `__init__()` call to the parent class must be made before assignment on the child.

`training` (bool): whether this module is in training or evaluation mode.
```python
model = CNN(1,1,hl_depth=3)
lrn = Learner(dls,model,loss_func=nn.MSELoss())
lrn.fit(1)
```
epoch | train_loss | valid_loss | time |
---|---|---|---|
0 | 0.070932 | 0.068136 | 00:00 |
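The trained model can also be applied directly to a batch from `dls` to inspect the shapes it consumes and produces; a quick sketch (exact sizes depend on the test dataset built by `create_dls_test`):

```python
import torch

# Grab one batch from the DataLoaders created above and run the trained model on it.
# The model produces predictions that can be compared elementwise with yb,
# since it was just trained against yb with MSELoss.
xb, yb = dls.one_batch()
with torch.no_grad():
    preds = model(xb)
print(xb.shape, yb.shape, preds.shape)
```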
## TCN

### CausalConv1d
CausalConv1d (in_channels, out_channels, kernel_size, stride=1, dilation=1, groups=1, bias=True, stateful=False)
Applies a 1D convolution over an input signal composed of several input planes.

In the simplest case, the output value of the layer with input size $(N, C_{\text{in}}, L)$ and output $(N, C_{\text{out}}, L_{\text{out}})$ can be precisely described as:

$$\text{out}(N_i, C_{\text{out}_j}) = \text{bias}(C_{\text{out}_j}) + \sum_{k=0}^{C_{\text{in}}-1} \text{weight}(C_{\text{out}_j}, k) \star \text{input}(N_i, k)$$

where $\star$ is the valid [cross-correlation](https://en.wikipedia.org/wiki/Cross-correlation) operator, $N$ is a batch size, $C$ denotes a number of channels, and $L$ is the length of the signal sequence.

This module supports TensorFloat32. On certain ROCm devices, when using float16 inputs this module will use different precision for backward.

- `stride` controls the stride for the cross-correlation, a single number or a one-element tuple.
- `padding` controls the amount of padding applied to the input. It can be either a string (`'valid'` or `'same'`) or a tuple of ints giving the amount of implicit padding applied on both sides.
- `dilation` controls the spacing between the kernel points; also known as the à trous algorithm. [This link](https://github.com/vdumoulin/conv_arithmetic/blob/master/README.md) has a nice visualization of what `dilation` does.
- `groups` controls the connections between inputs and outputs. `in_channels` and `out_channels` must both be divisible by `groups`. For example:
  - At `groups=1`, all inputs are convolved to all outputs.
  - At `groups=2`, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels and producing half the output channels, with both subsequently concatenated.
  - At `groups=in_channels`, each input channel is convolved with its own set of filters (of size $\frac{\text{out\_channels}}{\text{in\_channels}}$).

Note: When `groups == in_channels` and `out_channels == K * in_channels`, where `K` is a positive integer, this operation is also known as a "depthwise convolution". In other words, for an input of size $(N, C_{\text{in}}, L_{\text{in}})$, a depthwise convolution with a depthwise multiplier `K` can be performed with the arguments $(C_\text{in}=C_\text{in}, C_\text{out}=C_\text{in} \times K, \ldots, \text{groups}=C_\text{in})$.

Note: In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting `torch.backends.cudnn.deterministic = True`. See the PyTorch notes on randomness for more information.

Note: `padding='valid'` is the same as no padding. `padding='same'` pads the input so the output has the same shape as the input. However, this mode doesn't support any stride values other than 1.

Note: This module supports complex data types, i.e. `complex32`, `complex64`, `complex128`.

Args:

- `in_channels` (int): Number of channels in the input image
- `out_channels` (int): Number of channels produced by the convolution
- `kernel_size` (int or tuple): Size of the convolving kernel
- `stride` (int or tuple, optional): Stride of the convolution. Default: 1
- `padding` (int, tuple or str, optional): Padding added to both sides of the input. Default: 0
- `dilation` (int or tuple, optional): Spacing between kernel elements. Default: 1
- `groups` (int, optional): Number of blocked connections from input channels to output channels. Default: 1
- `bias` (bool, optional): If `True`, adds a learnable bias to the output. Default: `True`
- `padding_mode` (str, optional): `'zeros'`, `'reflect'`, `'replicate'` or `'circular'`. Default: `'zeros'`

Shape:

- Input: $(N, C_{in}, L_{in})$ or $(C_{in}, L_{in})$
- Output: $(N, C_{out}, L_{out})$ or $(C_{out}, L_{out})$, where

$$L_{out} = \left\lfloor\frac{L_{in} + 2 \times \text{padding} - \text{dilation} \times (\text{kernel\_size} - 1) - 1}{\text{stride}} + 1\right\rfloor$$

Attributes:

- `weight` (Tensor): the learnable weights of the module of shape $(\text{out\_channels}, \frac{\text{in\_channels}}{\text{groups}}, \text{kernel\_size})$. The values of these weights are sampled from $\mathcal{U}(-\sqrt{k}, \sqrt{k})$ where $k = \frac{\text{groups}}{C_\text{in} \cdot \text{kernel\_size}}$.
- `bias` (Tensor): the learnable bias of the module of shape `(out_channels)`. If `bias` is `True`, then the values of these weights are sampled from $\mathcal{U}(-\sqrt{k}, \sqrt{k})$ where $k = \frac{\text{groups}}{C_\text{in} \cdot \text{kernel\_size}}$.

Examples:

```python
>>> m = nn.Conv1d(16, 33, 3, stride=2)
>>> input = torch.randn(20, 16, 50)
>>> output = m(input)
```
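As a quick sanity check of the output-length formula on the example above ($L_{in} = 50$, `kernel_size=3`, `stride=2`, `padding=0`, `dilation=1`):

$$L_{out} = \left\lfloor\frac{50 + 0 - 1 \times (3 - 1) - 1}{2} + 1\right\rfloor = \left\lfloor\frac{47}{2} + 1\right\rfloor = 24$$

so `output` has shape `(20, 33, 24)`.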
### CConv1D
CConv1D (input_size, output_size, kernel_size=2, activation=<class 'torch.nn.modules.activation.Mish'>, wn=True, bn=False, stride=1, dilation=1, groups=1, bias=True, stateful=False, **kwargs)
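`CConv1D` is the causal counterpart of `Conv1D`, built on `CausalConv1d`. The usual way a causal convolution is realized (whether or not tsfast uses exactly this code) is to left-pad the input by `(kernel_size - 1) * dilation` so that each output step depends only on current and past inputs. A minimal sketch of that idea:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MinimalCausalConv1d(nn.Conv1d):
    "Illustrative sketch: nn.Conv1d that left-pads so outputs never see future time steps."
    def forward(self, x):
        pad = (self.kernel_size[0] - 1) * self.dilation[0]
        return super().forward(F.pad(x, (pad, 0)))  # pad only on the left (past) side

conv = MinimalCausalConv1d(1, 4, kernel_size=2, dilation=2)
x = torch.randn(8, 1, 32)   # (batch, channels, length)
print(conv(x).shape)        # torch.Size([8, 4, 32]); length preserved, strictly causal
```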
### TCN_Block
TCN_Block (input_size, output_size, num_layers=1, activation=<class 'torch.nn.modules.activation.Mish'>, wn=True, bn=False, stateful=False, stride=1, dilation=1, groups=1, bias=True, **kwargs)
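`TCN_Block` stacks causal convolution layers. In the usual TCN construction the dilation doubles from one layer to the next, which makes the receptive field grow exponentially with depth. Assuming that schedule (it is not confirmed by the signature above), the receptive field of a block can be computed as follows:

```python
def tcn_receptive_field(kernel_size: int, num_layers: int) -> int:
    """Receptive field of stacked causal convs with dilations 1, 2, 4, ... (doubling per layer).
    The doubling schedule is an assumption about TCN_Block, not taken from the tsfast source."""
    return 1 + (kernel_size - 1) * (2**num_layers - 1)

print(tcn_receptive_field(kernel_size=2, num_layers=4))  # 16 time steps
```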
### TCN
TCN (input_size, output_size, hl_depth=1, hl_width=10, act=<class 'torch.nn.modules.activation.Mish'>, bn=False, stateful=False)
```python
model = TCN(1,1,hl_depth=3)
lrn = Learner(dls,model,loss_func=nn.MSELoss())
lrn.fit(1)
```
epoch | train_loss | valid_loss | time |
---|---|---|---|
0 | 0.328497 | 0.211080 | 00:00 |
### SeperateTCN
SeperateTCN (input_list, output_size, hl_depth=1, hl_width=10, act=<class 'torch.nn.modules.activation.Mish'>, bn=False, stateful=False, final_layer=3)
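`SeperateTCN` appears to build one TCN branch per entry of `input_list` and merge the branches before the final layers; the exact semantics of `input_list` are an assumption here. A hypothetical construction sketch under that assumption:

```python
# Hypothetical: two separate input groups with 1 and 2 channels respectively,
# combined into a single 1-channel output. The meaning of input_list is assumed.
model = SeperateTCN([1, 2], 1, hl_depth=3)
```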
## CRNNs

### CRNN
CRNN (input_size, output_size, num_ft=10, num_cnn_layers=4, num_rnn_layers=2, hs_cnn=10, hs_rnn=10, hidden_p=0, input_p=0, weight_p=0, rnn_type='gru', stateful=False)
```python
model = CRNN(1,1,10)
lrn = Learner(dls,model,loss_func=nn.MSELoss())
lrn.fit(1)
```
epoch | train_loss | valid_loss | time |
---|---|---|---|
0 | 0.056243 | 0.058065 | 00:02 |
```python
model = CRNN(1,1,10,rnn_type='gru')
lrn = Learner(dls,model,loss_func=nn.MSELoss())
lrn.fit(1)
```
epoch | train_loss | valid_loss | time |
---|---|---|---|
0 | 0.199177 | 0.105373 | 00:01 |
### SeperateCRNN
SeperateCRNN (input_list, output_size, num_ft=10, num_cnn_layers=4, num_rnn_layers=2, hs_cnn=10, hs_rnn=10, hidden_p=0, input_p=0, weight_p=0, rnn_type='gru', stateful=False)