Training Transforms

Transforms and augmentations following the __call__(xb, yb) -> (xb, yb) protocol.

prediction_concat

prediction_concat(t_offset: int = 1)

Concatenate y onto x for autoregressive prediction, shortening both by t_offset.

Parameters:

| Name     | Type | Description                                          | Default |
| -------- | ---- | ---------------------------------------------------- | ------- |
| t_offset | int  | number of steps the output is shifted into the past  | 1       |
Source code in tsfast/training/transforms.py
def __init__(self, t_offset: int = 1):
    self.t_offset = t_offset
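
Only the constructor is shown above; the transform's effect happens in `__call__`. The sketch below is a minimal, assumed implementation of that behavior (not the library's exact code): the targets, shifted `t_offset` steps into the past, are concatenated onto the inputs along the signal axis, and both tensors are shortened by `t_offset` so inputs and targets stay aligned.

```python
import torch

def prediction_concat_call(xb, yb, t_offset=1):
    # drop the first t_offset input steps and the last t_offset target
    # steps, then stack the shifted targets onto the input signals
    xb_new = torch.cat([xb[:, t_offset:], yb[:, :-t_offset]], dim=-1)
    yb_new = yb[:, t_offset:]
    return xb_new, yb_new

xb = torch.randn(8, 100, 3)  # (batch, time, input signals)
yb = torch.randn(8, 100, 2)  # (batch, time, output signals)
xb2, yb2 = prediction_concat_call(xb, yb)
```

After the transform, `xb2` carries 3 + 2 = 5 signals over 99 time steps, and `yb2` keeps its 2 signals over the same 99 steps.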

noise

noise(std: float | Tensor = 0.1, mean: float | Tensor = 0.0, p: float = 1.0)

Add normal-distributed noise with per-signal mean and std to the input.

Parameters:

| Name | Type            | Description                                                    | Default |
| ---- | --------------- | -------------------------------------------------------------- | ------- |
| std  | float \| Tensor | standard deviation of the noise per signal (scalar or vector)  | 0.1     |
| mean | float \| Tensor | mean of the noise per signal (scalar or vector)                | 0.0     |
| p    | float           | probability of applying the augmentation                       | 1.0     |
Source code in tsfast/training/transforms.py
def __init__(self, std: float | Tensor = 0.1, mean: float | Tensor = 0.0, p: float = 1.0):
    self.std = torch.as_tensor(std, dtype=torch.float)
    self.mean = torch.as_tensor(mean, dtype=torch.float)
    self.p = p
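
A minimal sketch of the `__call__` behavior described above, assuming the documented semantics rather than the library's exact code: with probability `p`, Gaussian noise is added to the input, and a vector `std`/`mean` broadcasts over the trailing signal axis so each signal gets its own noise level.

```python
import torch

def noise_call(xb, yb, std=0.1, mean=0.0, p=1.0):
    if torch.rand(()) < p:  # apply with probability p
        std = torch.as_tensor(std, dtype=torch.float)
        mean = torch.as_tensor(mean, dtype=torch.float)
        # scalar std/mean apply to all signals; a length-n_signals
        # vector broadcasts over the last (signal) axis
        xb = xb + torch.randn_like(xb) * std + mean
    return xb, yb

xb, yb = torch.zeros(4, 100, 3), torch.zeros(4, 100, 1)
# per-signal std: first signal stays clean, third gets the most noise
xb_n, yb_n = noise_call(xb, yb, std=torch.tensor([0.0, 0.1, 1.0]))
```

Note that only `xb` is perturbed; `yb` passes through unchanged.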

noise_varying

noise_varying(std_std: float = 0.1, p: float = 1.0)

Add noise with a randomly sampled standard deviation per application.

Parameters:

| Name    | Type  | Description                                     | Default |
| ------- | ----- | ----------------------------------------------- | ------- |
| std_std | float | standard deviation of the noise std distribution | 0.1     |
| p       | float | probability of applying the augmentation        | 1.0     |
Source code in tsfast/training/transforms.py
def __init__(self, std_std: float = 0.1, p: float = 1.0):
    self.std_std = torch.as_tensor(std_std, dtype=torch.float)
    self.p = p
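
One way to read the description above, sketched below: on each application a single noise std is resampled, so some batches get nearly clean data and others get strongly perturbed data. The half-normal draw |N(0, std_std)| is an assumption; the library's actual sampling distribution may differ.

```python
import torch

def noise_varying_call(xb, yb, std_std=0.1, p=1.0):
    if torch.rand(()) < p:
        # one freshly sampled std per call (assumed half-normal)
        std = (torch.randn(()) * std_std).abs()
        xb = xb + torch.randn_like(xb) * std
    return xb, yb

xb, yb = torch.zeros(4, 100, 3), torch.zeros(4, 100, 1)
xb_n, yb_n = noise_varying_call(xb, yb, std_std=0.5)
```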

noise_grouped

noise_grouped(std_std, std_idx, p: float = 1.0)

Add noise with per-group randomly sampled standard deviations.

Parameters:

| Name    | Type  | Description                                               | Default  |
| ------- | ----- | ---------------------------------------------------------- | -------- |
| std_std |       | standard deviation of the noise std distribution per group | required |
| std_idx |       | index mapping each signal to its noise group               | required |
| p       | float | probability of applying the augmentation                   | 1.0      |
Source code in tsfast/training/transforms.py
def __init__(self, std_std, std_idx, p: float = 1.0):
    self.std_std = torch.as_tensor(std_std, dtype=torch.float)
    self.std_idx = torch.as_tensor(std_idx, dtype=torch.long)
    self.p = p
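
A hedged sketch of the grouped variant, under the assumed semantics that all signals in a group share one sampled std: `std_std` holds one value per group, a std is drawn per group, and `std_idx` gathers it back out to one std per signal.

```python
import torch

def noise_grouped_call(xb, yb, std_std, std_idx, p=1.0):
    if torch.rand(()) < p:
        std_std = torch.as_tensor(std_std, dtype=torch.float)
        std_idx = torch.as_tensor(std_idx, dtype=torch.long)
        # one sampled std per group, gathered out to one std per signal
        group_std = (torch.randn(std_std.shape) * std_std).abs()
        xb = xb + torch.randn_like(xb) * group_std[std_idx]
    return xb, yb

xb, yb = torch.zeros(2, 50, 4), torch.zeros(2, 50, 1)
# signals 0 and 1 form group 0 (std_std=0.0, i.e. no noise),
# signals 2 and 3 form group 1 (std_std=0.2)
xb_n, _ = noise_grouped_call(xb, yb, std_std=[0.0, 0.2], std_idx=[0, 0, 1, 1])
```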

bias

bias(std: float | Tensor = 0.1, mean: float | Tensor = 0.0, p: float = 1.0)

Add a constant normal-distributed offset per signal per sample to the input.

Parameters:

| Name | Type            | Description                                                   | Default |
| ---- | --------------- | -------------------------------------------------------------- | ------- |
| std  | float \| Tensor | standard deviation of the bias per signal (scalar or vector)  | 0.1     |
| mean | float \| Tensor | mean of the bias per signal (scalar or vector)                | 0.0     |
| p    | float           | probability of applying the augmentation                      | 1.0     |
Source code in tsfast/training/transforms.py
def __init__(self, std: float | Tensor = 0.1, mean: float | Tensor = 0.0, p: float = 1.0):
    self.std = torch.as_tensor(std, dtype=torch.float)
    self.mean = torch.as_tensor(mean, dtype=torch.float)
    self.p = p
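
Unlike noise, the bias is constant along the time axis: one offset is drawn per sample and per signal, then broadcast across all time steps. A minimal sketch of that assumed `__call__` behavior:

```python
import torch

def bias_call(xb, yb, std=0.1, mean=0.0, p=1.0):
    if torch.rand(()) < p:
        std = torch.as_tensor(std, dtype=torch.float)
        mean = torch.as_tensor(mean, dtype=torch.float)
        # shape (batch, 1, signals): constant over the time axis
        offset = torch.randn(xb.shape[0], 1, xb.shape[-1]) * std + mean
        xb = xb + offset
    return xb, yb

xb, yb = torch.zeros(4, 100, 3), torch.zeros(4, 100, 1)
xb_b, _ = bias_call(xb, yb, std=0.5)
```

Because the offset broadcasts over time, every time step of a given sample/signal receives the same shift.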

vary_seq_len

vary_seq_len(min_len: int = 50)

Randomly vary sequence length of every minibatch.

Parameters:

| Name    | Type | Description                     | Default |
| ------- | ---- | ------------------------------- | ------- |
| min_len | int  | minimum sequence length to keep | 50      |
Source code in tsfast/training/transforms.py
def __init__(self, min_len: int = 50):
    self.min_len = min_len
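
One plausible reading of "randomly vary sequence length", sketched below under that assumption: each minibatch is cropped to a random length between `min_len` and the full sequence length, with inputs and targets cropped identically.

```python
import torch

def vary_seq_len_call(xb, yb, min_len=50):
    # random crop length in [min_len, full length], same for xb and yb
    new_len = int(torch.randint(min_len, xb.shape[1] + 1, ()))
    return xb[:, :new_len], yb[:, :new_len]

xb, yb = torch.randn(4, 200, 3), torch.randn(4, 200, 1)
xb_c, yb_c = vary_seq_len_call(xb, yb, min_len=50)
```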

truncate_sequence

truncate_sequence(truncate_length: int = 50, scheduler: Callable = sched_ramp)

Progressively truncate sequence length during training using a scheduler.

Stateful: call setup(trainer) before training to access trainer.pct_train.

Parameters:

| Name            | Type     | Description                                         | Default    |
| --------------- | -------- | ---------------------------------------------------- | ---------- |
| truncate_length | int      | maximum number of time steps to truncate             | 50         |
| scheduler       | Callable | scheduling function controlling truncation over training | sched_ramp |
Source code in tsfast/training/transforms.py
def __init__(self, truncate_length: int = 50, scheduler: Callable = sched_ramp):
    self._truncate_length = truncate_length
    self._scheduler = scheduler
    self._trainer = None
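
A hedged sketch of the scheduled behavior. In the library the training progress comes from `trainer.pct_train` after `setup(trainer)`; here `pct_train` is passed in explicitly for illustration, and `sched_ramp` is a hypothetical stand-in that ramps linearly from 0 to 1 (the real schedule's shape and direction may differ).

```python
import torch

def sched_ramp(pct_train):
    # hypothetical stand-in: linear ramp from 0 to 1 over training
    return min(max(pct_train, 0.0), 1.0)

def truncate_sequence_call(xb, yb, pct_train, truncate_length=50,
                           scheduler=sched_ramp):
    # scheduler maps training progress to a fraction of truncate_length
    cut = int(round(truncate_length * scheduler(pct_train)))
    if cut > 0:
        xb, yb = xb[:, :-cut], yb[:, :-cut]
    return xb, yb

xb, yb = torch.randn(4, 200, 3), torch.randn(4, 200, 1)
x_start, _ = truncate_sequence_call(xb, yb, pct_train=0.0)  # no cut yet
x_end, _ = truncate_sequence_call(xb, yb, pct_train=1.0)    # full cut
```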