Optimisation module#
Single-point optimisation#
- nubo.optimisation.singlepoint.single(func: Callable, method: str, bounds: Tensor, constraints: dict | list | None = None, discrete: dict | None = None, lr: float | None = 0.1, steps: int | None = 100, num_starts: int | None = 10, num_samples: int | None = 100, **kwargs: Any) → Tuple[Tensor, Tensor] [source]#
Single-point optimisation.
Optimises the acquisition function with the L-BFGS-B, SLSQP, or Adam optimiser. Minimises func.
- Parameters:
  - func (Callable): Function to optimise.
  - method (str): One of “L-BFGS-B”, “SLSQP”, or “Adam”.
  - bounds (torch.Tensor): (size 2 x d) Optimisation bounds of input space.
  - constraints (dict or list of dict, optional): Optimisation constraints.
  - discrete (dict, optional): Possible values for all discrete inputs in the shape {dim1: [values1], dim2: [values2], etc.}, e.g. {0: [1., 2., 3.], 3: [-0.1, -0.2, 100.]}.
  - lr (float, optional): Learning rate of the torch.optim.Adam algorithm, default is 0.1.
  - steps (int, optional): Optimisation steps of the torch.optim.Adam algorithm, default is 100.
  - num_starts (int, optional): Number of starts for multi-start optimisation, default is 10.
  - num_samples (int, optional): Number of samples from which to draw the starts, default is 100.
  - **kwargs (Any): Keyword arguments passed to torch.optim.Adam or scipy.optimize.minimize.
- Returns:
  - best_result (torch.Tensor): (size 1 x d) Minimiser inputs.
  - best_func_result (torch.Tensor): (size 1) Minimiser output.
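For illustration, a minimal sketch of a single-point call; the quadratic callable standing in for an acquisition function and the bounds are assumptions for this example:

```python
import torch
from nubo.optimisation.singlepoint import single

# Illustrative stand-in for an acquisition function to be minimised.
def acq(x: torch.Tensor) -> torch.Tensor:
    return torch.sum((x - 0.5) ** 2, dim=-1)

bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]])  # size 2 x d

# Returns the best minimiser inputs (size 1 x d) and output (size 1).
x_new, f_new = single(func=acq, method="L-BFGS-B", bounds=bounds)
```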
Multi-point optimisation#
References
J Wilson, F Hutter, and M Deisenroth, “Maximizing Acquisition Functions for Bayesian Optimization,” Advances in Neural Information Processing Systems, vol. 31, 2018.
- nubo.optimisation.multipoint.multi_joint(func: Callable, method: str, batch_size: int, bounds: Tensor, discrete: dict | None = None, lr: float | None = 0.1, steps: int | None = 100, num_starts: int | None = 10, num_samples: int | None = 100, **kwargs: Any) → Tuple[Tensor, Tensor] [source]#
Joint optimisation loop for Monte Carlo acquisition functions.
Optimises Monte Carlo acquisition functions to return multi-point batches for parallel evaluation. Computes all points of a batch at once. Minimises func.
- Parameters:
  - func (Callable): Function to optimise.
  - method (str): One of “L-BFGS-B” or “Adam”.
  - batch_size (int): Number of points to return.
  - bounds (torch.Tensor): (size 2 x d) Optimisation bounds of input space.
  - discrete (dict, optional): Possible values for all discrete inputs in the shape {dim1: [values1], dim2: [values2], etc.}, e.g. {0: [1., 2., 3.], 3: [-0.1, -0.2, 100.]}.
  - lr (float, optional): Learning rate of the torch.optim.Adam algorithm, default is 0.1.
  - steps (int, optional): Optimisation steps of the torch.optim.Adam algorithm, default is 100.
  - num_starts (int, optional): Number of starts for multi-start optimisation, default is 10.
  - num_samples (int, optional): Number of samples from which to draw the starts, default is 100.
  - **kwargs (Any): Keyword arguments passed to torch.optim.Adam or scipy.optimize.minimize.
- Returns:
  - batch_result (torch.Tensor): (size batch_size x d) Batch inputs.
  - batch_func_result (torch.Tensor): (size batch_size) Batch outputs.
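A sketch of joint batch optimisation; the differentiable callable below is an illustrative stand-in for a Monte Carlo acquisition function:

```python
import torch
from nubo.optimisation.multipoint import multi_joint

# Stand-in acquisition: returns a scalar over the whole candidate batch.
def acq(x: torch.Tensor) -> torch.Tensor:
    return torch.sum((x - 0.5) ** 2)

bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]])

# All 4 points of the batch are optimised at once.
X_batch, f_batch = multi_joint(func=acq, method="Adam", batch_size=4, bounds=bounds)
```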
- nubo.optimisation.multipoint.multi_sequential(func: Callable, method: str, batch_size: int, bounds: Tensor, constraints: dict | list | None = None, discrete: dict | None = None, lr: float | None = 0.1, steps: int | None = 100, num_starts: int | None = 10, num_samples: int | None = 100, **kwargs: Any) → Tuple[Tensor, Tensor] [source]#
Sequential greedy optimisation loop for Monte Carlo acquisition functions.
Optimises Monte Carlo acquisition functions to return multi-point batches for parallel evaluation. Computes the points of a batch one after another, always keeping previous points fixed: point 1 is computed first, point 2 is computed holding point 1 fixed, point 3 is computed holding points 1 and 2 fixed, and so on until the batch is full. Minimises func.
- Parameters:
  - func (Callable): Function to optimise.
  - method (str): One of “L-BFGS-B”, “SLSQP”, or “Adam”.
  - batch_size (int): Number of points to return.
  - bounds (torch.Tensor): (size 2 x d) Optimisation bounds of input space.
  - constraints (dict or list of dict, optional): Optimisation constraints.
  - discrete (dict, optional): Possible values for all discrete inputs in the shape {dim1: [values1], dim2: [values2], etc.}, e.g. {0: [1., 2., 3.], 3: [-0.1, -0.2, 100.]}.
  - lr (float, optional): Learning rate of the torch.optim.Adam algorithm, default is 0.1.
  - steps (int, optional): Optimisation steps of the torch.optim.Adam algorithm, default is 100.
  - num_starts (int, optional): Number of starts for multi-start optimisation, default is 10.
  - num_samples (int, optional): Number of samples from which to draw the starts, default is 100.
  - **kwargs (Any): Keyword arguments passed to torch.optim.Adam or scipy.optimize.minimize.
- Returns:
  - batch_result (torch.Tensor): (size batch_size x d) Batch inputs.
  - batch_func_result (torch.Tensor): (size batch_size) Batch outputs.
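A sequential greedy sketch; again, the callable is an illustrative stand-in:

```python
import torch
from nubo.optimisation.multipoint import multi_sequential

def acq(x: torch.Tensor) -> torch.Tensor:
    return torch.sum((x - 0.5) ** 2)

bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]])

# Points are computed one at a time, holding earlier points fixed.
X_batch, f_batch = multi_sequential(func=acq, method="L-BFGS-B", batch_size=4, bounds=bounds)
```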
Mixed optimisation#
- nubo.optimisation.mixed.mixed(func: Callable, method: str, bounds: Tensor, discrete: dict, constraints: dict | list | None = None, lr: float | None = 0.1, steps: int | None = 200, num_starts: int | None = 10, num_samples: int | None = 100, **kwargs: Any) → Tuple[Tensor, Tensor] [source]#
Mixed optimisation with continuous and discrete inputs.
Optimises the acquisition function over all continuous input dimensions while holding each combination of the discrete inputs fixed, and returns the best result over all possible discrete combinations. Minimises func.
- Parameters:
  - func (Callable): Function to optimise.
  - method (str): One of “L-BFGS-B”, “SLSQP”, or “Adam”.
  - bounds (torch.Tensor): (size 2 x d) Optimisation bounds of input space.
  - discrete (dict): Possible values for all discrete inputs in the shape {dim1: [values1], dim2: [values2], etc.}, e.g. {0: [1., 2., 3.], 3: [-0.1, -0.2, 100.]}.
  - constraints (dict or list of dict, optional): Optimisation constraints.
  - lr (float, optional): Learning rate of the torch.optim.Adam algorithm, default is 0.1.
  - steps (int, optional): Optimisation steps of the torch.optim.Adam algorithm, default is 200.
  - num_starts (int, optional): Number of starts for multi-start optimisation, default is 10.
  - num_samples (int, optional): Number of samples from which to draw the starts, default is 100.
  - **kwargs (Any): Keyword arguments passed to torch.optim.Adam or scipy.optimize.minimize.
- Returns:
  - best_result (torch.Tensor): (size 1 x d) Minimiser inputs.
  - best_func_result (torch.Tensor): (size 1) Minimiser output.
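A sketch with one assumed discrete dimension; the discrete values here are illustrative:

```python
import torch
from nubo.optimisation.mixed import mixed

def acq(x: torch.Tensor) -> torch.Tensor:
    return torch.sum((x - 0.5) ** 2, dim=-1)

bounds = torch.tensor([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])

# Dimension 0 may only take the listed values; dimensions 1 and 2 stay continuous.
discrete = {0: [0.0, 0.5, 1.0]}

x_new, f_new = mixed(func=acq, method="L-BFGS-B", bounds=bounds, discrete=discrete)
```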
Deterministic optimisers#
References
P Virtanen et al., “SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python,” Nature Methods, vol. 17, no. 3, pp. 261-272, 2020.
- nubo.optimisation.lbfgsb.lbfgsb(func: Callable, bounds: Tensor, num_starts: int | None = 10, num_samples: int | None = 100, **kwargs: Any) → Tuple[Tensor, Tensor] [source]#
Multi-start L-BFGS-B optimiser using the scipy.optimize.minimize implementation from SciPy.
Used for optimising analytical acquisition functions or Monte Carlo acquisition functions when base samples are fixed. Picks the best num_starts points from a total of num_samples Latin hypercube samples to initialise the optimiser. Returns the best result. Minimises func.
- Parameters:
  - func (Callable): Function to optimise.
  - bounds (torch.Tensor): (size 2 x d) Optimisation bounds of input space.
  - num_starts (int, optional): Number of starts for multi-start optimisation, default is 10.
  - num_samples (int, optional): Number of samples from which to draw the starts, default is 100.
  - **kwargs (Any): Keyword arguments passed to scipy.optimize.minimize.
- Returns:
  - best_result (torch.Tensor): (size 1 x d) Minimiser inputs.
  - best_func_result (torch.Tensor): (size 1) Minimiser output.
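A direct call to the optimiser, with keyword arguments forwarded to scipy.optimize.minimize; the options value is an illustrative assumption:

```python
import torch
from nubo.optimisation.lbfgsb import lbfgsb

def acq(x: torch.Tensor) -> torch.Tensor:
    return torch.sum((x - 0.5) ** 2, dim=-1)

bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]])

# "options" is a standard scipy.optimize.minimize keyword argument.
x_best, f_best = lbfgsb(acq, bounds, num_starts=5, options={"maxiter": 200})
```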
- nubo.optimisation.slsqp.slsqp(func: Callable, bounds: Tensor, constraints: dict | Tuple[dict] | None = (), num_starts: int | None = 10, num_samples: int | None = 100, **kwargs: Any) → Tuple[Tensor, Tensor] [source]#
Multi-start SLSQP optimiser using the scipy.optimize.minimize implementation from SciPy.
Used for optimising analytical acquisition functions or Monte Carlo acquisition functions when base samples are fixed. Picks the best num_starts points from a total of num_samples Latin hypercube samples to initialise the optimiser. Returns the best result. Minimises func.
- Parameters:
  - func (Callable): Function to optimise.
  - bounds (torch.Tensor): (size 2 x d) Optimisation bounds of input space.
  - constraints (dict or Tuple of dict, optional): Optimisation constraints, default is no constraints.
  - num_starts (int, optional): Number of starts for multi-start optimisation, default is 10.
  - num_samples (int, optional): Number of samples from which to draw the starts, default is 100.
  - **kwargs (Any): Keyword arguments passed to scipy.optimize.minimize.
- Returns:
  - best_result (torch.Tensor): (size 1 x d) Minimiser inputs.
  - best_func_result (torch.Tensor): (size 1) Minimiser output.
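A constrained sketch, assuming constraints follow SciPy's dict format, as the SciPy-backed implementation suggests:

```python
import torch
from nubo.optimisation.slsqp import slsqp

def acq(x: torch.Tensor) -> torch.Tensor:
    return torch.sum((x - 0.5) ** 2, dim=-1)

bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]])

# SciPy inequality constraint: feasible when x[0] + x[1] <= 1.
cons = {"type": "ineq", "fun": lambda x: 1.0 - x[0] - x[1]}

x_best, f_best = slsqp(acq, bounds, constraints=cons)
```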
Stochastic optimisers#
References
DP Kingma and J Ba, “Adam: A Method for Stochastic Optimization,” Proceedings of the 3rd International Conference on Learning Representations, 2015.
A Paszke et al., “PyTorch: An Imperative Style, High-Performance Deep Learning Library,” Advances in Neural Information Processing Systems, vol. 32, 2019.
- nubo.optimisation.adam.adam(func: Callable, bounds: Tensor, lr: float | None = 0.1, steps: int | None = 200, num_starts: int | None = 10, num_samples: int | None = 100, **kwargs: Any) → Tuple[Tensor, Tensor] [source]#
Multi-start Adam optimiser using the torch.optim.Adam implementation from PyTorch.
Used for optimising Monte Carlo acquisition functions when base samples are not fixed. Bounds are enforced by transforming func with the sigmoid function and scaling the results. Picks the best num_starts points from a total of num_samples Latin hypercube samples to initialise the optimiser. Returns the best result. Minimises func.
- Parameters:
  - func (Callable): Function to optimise.
  - bounds (torch.Tensor): (size 2 x d) Optimisation bounds of input space.
  - lr (float, optional): Learning rate of the torch.optim.Adam algorithm, default is 0.1.
  - steps (int, optional): Optimisation steps of the torch.optim.Adam algorithm, default is 200.
  - num_starts (int, optional): Number of starts for multi-start optimisation, default is 10.
  - num_samples (int, optional): Number of samples from which to draw the starts, default is 100.
  - **kwargs (Any): Keyword arguments passed to torch.optim.Adam.
- Returns:
  - best_result (torch.Tensor): (size 1 x d) Minimiser inputs.
  - best_func_result (torch.Tensor): (size 1) Minimiser output.
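A sketch with a differentiable stand-in callable, since Adam relies on gradients from autograd:

```python
import torch
from nubo.optimisation.adam import adam

# Must be differentiable with respect to its torch.Tensor input.
def acq(x: torch.Tensor) -> torch.Tensor:
    return torch.sum((x - 0.5) ** 2)

bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]])

x_best, f_best = adam(acq, bounds, lr=0.05, steps=100)
```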
- nubo.optimisation.adam.adam_mixed(func: Callable, bounds: Tensor, lr: float | None = 0.1, steps: int | None = 200, num_starts: int | None = 10, num_samples: int | None = 100, **kwargs: Any) → Tuple[Tensor, Tensor] [source]#
Multi-start Adam optimiser using the torch.optim.Adam implementation from PyTorch.
Used for optimising Monte Carlo acquisition functions when base samples are not fixed. Bounds are enforced by clamping values that exceed them. Picks the best num_starts points from a total of num_samples Latin hypercube samples to initialise the optimiser. Returns the best result. Minimises func.
- Parameters:
  - func (Callable): Function to optimise.
  - bounds (torch.Tensor): (size 2 x d) Optimisation bounds of input space.
  - lr (float, optional): Learning rate of the torch.optim.Adam algorithm, default is 0.1.
  - steps (int, optional): Optimisation steps of the torch.optim.Adam algorithm, default is 200.
  - num_starts (int, optional): Number of starts for multi-start optimisation, default is 10.
  - num_samples (int, optional): Number of samples from which to draw the starts, default is 100.
  - **kwargs (Any): Keyword arguments passed to torch.optim.Adam.
- Returns:
  - best_result (torch.Tensor): (size 1 x d) Minimiser inputs.
  - best_func_result (torch.Tensor): (size 1) Minimiser output.
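The interface mirrors adam; a minimal sketch with the same illustrative callable:

```python
import torch
from nubo.optimisation.adam import adam_mixed

def acq(x: torch.Tensor) -> torch.Tensor:
    return torch.sum((x - 0.5) ** 2)

bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]])

# Same call pattern as adam, but bounds are enforced by clamping.
x_best, f_best = adam_mixed(acq, bounds, steps=100)
```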
Optimisation utilities#
- nubo.optimisation.utils.gen_candidates(func: Callable, bounds: Tensor, num_candidates: int, num_samples: int, args: Tuple | None = ()) → Tensor [source]#
Generate candidates for multi-start optimisation using a maximin Latin hypercube design (or a uniform distribution for a single candidate point).
- Parameters:
  - func (Callable): Function to optimise.
  - bounds (torch.Tensor): (size 2 x d) Optimisation bounds of input space.
  - num_candidates (int): Number of candidates.
  - num_samples (int): Number of samples from which to draw the starts.
  - args (Tuple, optional): Arguments for func, in order.
- Returns:
  - torch.Tensor: (size num_candidates x d) Candidates.
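A sketch of drawing starting points directly; the callable is again an illustrative stand-in:

```python
import torch
from nubo.optimisation.utils import gen_candidates

def acq(x: torch.Tensor) -> torch.Tensor:
    return torch.sum((x - 0.5) ** 2, dim=-1)

bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]])

# Draw 100 Latin hypercube samples and keep the 10 best as candidates.
starts = gen_candidates(acq, bounds, num_candidates=10, num_samples=100)
print(starts.shape)  # torch.Size([10, 2])
```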