Hyperparameter Tuning¶
Hyperparameter optimization with Ray Tune integration.
LearnerTrainable ¶
Bases: Trainable
Ray Tune Trainable wrapper for Learners.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| config |  | Ray Tune config dict containing 'create_lrn' and 'dls' references. | required |
HPOptimizer ¶
High-level interface for hyperparameter optimization with Ray Tune.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| create_lrn | Callable | Factory function that creates a Learner from (dls, config). | required |
| dls |  | DataLoaders to use for training. | required |
Source code in tsfast/tune.py
start_ray ¶
stop_ray ¶
optimize ¶
```python
optimize(
    config: dict,
    optimize_func: Callable = learner_optimize,
    resources_per_trial: dict = {'gpu': 1.0},
    verbose: int = 1,
    **kwargs,
)
```
Run hyperparameter optimization using the function-based API.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| config | dict | Ray Tune search space configuration dict. | required |
| optimize_func | Callable | Training function to optimize. | learner_optimize |
| resources_per_trial | dict | Resource dict per trial (e.g. GPU/CPU counts). | {'gpu': 1.0} |
| verbose | int | Ray Tune verbosity level. | 1 |
Source code in tsfast/tune.py
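`optimize` hands trial execution to Ray Tune's function-based API: a training function receives a sampled config and reports a metric. As a rough, stdlib-only illustration of that pattern (the names `train_fn` and `random_search` and the toy objective are invented for this sketch and are not tsfast's or Ray Tune's implementation):

```python
import math
import random

def train_fn(config):
    """Stand-in training function: receives a sampled config and
    returns a score to minimize (the role learner_optimize plays)."""
    # Toy objective: best when lr is near 1e-2.
    return abs(math.log10(config["lr"]) + 2)

def random_search(train_fn, space, num_samples=20, seed=0):
    """Minimal stand-in for the tuning loop: sample a config from a
    dict of callable samplers for each trial, evaluate it, keep the best."""
    random.seed(seed)
    best = None
    for _ in range(num_samples):
        cfg = {k: sampler() for k, sampler in space.items()}
        score = train_fn(cfg)
        if best is None or score < best[0]:
            best = (score, cfg)
    return best

# Search space as a dict of zero-argument samplers.
space = {"lr": lambda: 10 ** random.uniform(-4, -1)}
best_score, best_cfg = random_search(train_fn, space, num_samples=50)
```

In the real API, Ray Tune schedules many such trials in parallel, each constrained by `resources_per_trial`.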
optimize_pbt ¶
```python
optimize_pbt(
    opt_name: str,
    num_samples: int,
    config: dict,
    mut_conf: dict,
    perturbation_interval: int = 2,
    stop: dict = {'training_iteration': 40},
    resources_per_trial: dict = {'gpu': 1},
    resample_probability: float = 0.25,
    quantile_fraction: float = 0.25,
    **kwargs,
)
```
Run Population Based Training optimization.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| opt_name | str | Experiment name for Ray Tune. | required |
| num_samples | int | Number of parallel trials. | required |
| config | dict | Initial hyperparameter configuration dict. | required |
| mut_conf | dict | Mutable hyperparameter space for PBT mutations. | required |
| perturbation_interval | int | Epochs between PBT perturbations. | 2 |
| stop | dict | Stopping criteria dict. | {'training_iteration': 40} |
| resources_per_trial | dict | Resource dict per trial. | {'gpu': 1} |
| resample_probability | float | Probability of resampling vs. perturbing. | 0.25 |
| quantile_fraction | float | Fraction of trials to exploit/explore. | 0.25 |
Source code in tsfast/tune.py
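The parameters above control a repeated exploit/explore step: every `perturbation_interval` epochs, trials in the bottom quantile copy a top-quantile trial's config, then mutate it. A stdlib-only sketch of one such step (not Ray Tune's implementation; `pbt_exploit_explore` and the 0.8/1.2 perturbation factors are illustrative):

```python
import random

def pbt_exploit_explore(trials, mut_conf, resample_probability=0.25,
                        quantile_fraction=0.25, seed=0):
    """Illustrative single PBT perturbation step. Each trial is a dict
    with a 'score' (higher is better) and a 'config'. Bottom-quantile
    trials copy a top-quantile config (exploit), then each mutable key
    is either resampled from mut_conf or nudged (explore)."""
    rng = random.Random(seed)
    ranked = sorted(trials, key=lambda t: t["score"], reverse=True)
    n = max(1, int(len(ranked) * quantile_fraction))
    top, bottom = ranked[:n], ranked[-n:]
    for trial in bottom:
        source = rng.choice(top)
        trial["config"] = dict(source["config"])      # exploit
        for key, sampler in mut_conf.items():         # explore
            if rng.random() < resample_probability:
                trial["config"][key] = sampler()      # fresh sample
            else:
                trial["config"][key] *= rng.choice([0.8, 1.2])
    return trials
```

This is why `mut_conf` is a dict of samplers: resampling draws a fresh value, while perturbation scales the inherited one.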
best_model ¶
Load and return the best model from the optimization run.
Source code in tsfast/tune.py
log_uniform ¶
Sample uniformly in an exponential (log) range.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| min_bound | float | Lower bound of the sampling range. | required |
| max_bound | float | Upper bound of the sampling range. | required |
| base | float | Logarithm base for the exponential scale. | 10 |
Source code in tsfast/tune.py
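A minimal sketch of what such a sampler can look like (assumed behavior: sampling uniformly in the exponent so values spread evenly across orders of magnitude; the actual tsfast implementation may differ, e.g. in whether it returns a value or a callable):

```python
import math
import random

def log_uniform(min_bound, max_bound, base=10):
    """Return a zero-argument sampler that draws uniformly in the
    exponent, i.e. base ** U(log_base(min), log_base(max))."""
    lo = math.log(min_bound, base)
    hi = math.log(max_bound, base)
    return lambda: base ** random.uniform(lo, hi)

# Learning rates between 1e-4 and 1e-1, each decade equally likely.
sample_lr = log_uniform(1e-4, 1e-1)
```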
stop_shared_memory_managers ¶
Find and stop all SharedMemoryManager instances within an object.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| obj | object | Root object to traverse for SharedMemoryManager instances. | required |
Source code in tsfast/tune.py
learner_optimize ¶
Training function for Ray Tune function-based API.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| config | dict | Ray Tune config dict containing 'create_lrn', 'dls', 'fit_method', and hyperparameters. | required |
Source code in tsfast/tune.py
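The config handed to this function bundles object references alongside hyperparameters. A hypothetical shape (the keys 'create_lrn', 'dls', and 'fit_method' come from the description above; the placeholder values and the 'lr' entry are invented for illustration):

```python
# Hypothetical placeholders; in practice these come from tsfast/fastai.
create_lrn = lambda dls, config: None  # factory: (dls, config) -> Learner
dls = None                             # a DataLoaders instance

config = {
    "create_lrn": create_lrn,
    "dls": dls,
    "fit_method": "fit_flat_cos",  # illustrative fit method name
    "lr": 1e-3,                    # example hyperparameter
}
```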
sample_config ¶
Sample concrete values from a config of callables.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| config | dict | Dict mapping keys to callable samplers. | required |
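A minimal sketch of the sampling behavior described above, assuming values that are zero-argument callables are invoked and everything else passes through unchanged (the actual tsfast implementation may differ):

```python
def sample_config(config):
    """Turn a config of samplers into a config of concrete values:
    call each callable value, keep non-callables as-is."""
    return {k: (v() if callable(v) else v) for k, v in config.items()}

cfg = sample_config({"lr": lambda: 0.01, "n_epochs": 5})
# cfg == {"lr": 0.01, "n_epochs": 5}
```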