Optimisation module#

Single-point optimisation#

nubo.optimisation.singlepoint.single(func: Callable, method: str, bounds: Tensor, constraints: dict | list | None = None, discrete: dict | None = None, lr: float | None = 0.1, steps: int | None = 100, num_starts: int | None = 10, num_samples: int | None = 100, **kwargs: Any) → Tuple[Tensor, Tensor][source]#

Single-point optimisation.

Optimises the acquisition function with the L-BFGS-B, SLSQP, or Adam optimiser. Minimises func.

Parameters:
func : Callable
    Function to optimise.

method : str
    One of “L-BFGS-B”, “SLSQP”, or “Adam”.

bounds : torch.Tensor
    (size 2 x d) Optimisation bounds of input space.

constraints : dict or list of dict, optional
    Optimisation constraints.

discrete : dict, optional
    Possible values for all discrete inputs in the shape {dim1: [values1], dim2: [values2], etc.}, e.g. {0: [1., 2., 3.], 3: [-0.1, -0.2, 100.]}.

lr : float, optional
    Learning rate of torch.optim.Adam algorithm, default is 0.1.

steps : int, optional
    Optimisation steps of torch.optim.Adam algorithm, default is 100.

num_starts : int, optional
    Number of starts for multi-start optimisation, default is 10.

num_samples : int, optional
    Number of samples from which to draw the starts, default is 100.

**kwargs : Any
    Keyword arguments passed to torch.optim.Adam or scipy.optimize.minimize.

Returns:
best_result : torch.Tensor
    (size 1 x d) Minimiser inputs.

best_func_result : torch.Tensor
    (size 1) Minimiser output.
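A minimal usage sketch following the signature above. The quadratic toy objective stands in for an acquisition function to be minimised, and the bounds and num_starts values are illustrative only:

    import torch
    from nubo.optimisation.singlepoint import single

    # toy objective standing in for an acquisition function (to be minimised)
    def objective(x):
        return torch.sum((x - 0.5) ** 2)

    # bounds of a 2-dimensional input space, shape 2 x d
    bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]])

    # multi-start L-BFGS-B optimisation of the objective
    x_best, f_best = single(objective, "L-BFGS-B", bounds, num_starts=5)
    # x_best has size 1 x d, f_best has size 1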

Multi-point optimisation#

References

  • J Wilson, F Hutter, and M Deisenroth, “Maximizing Acquisition Functions for Bayesian Optimization,” Advances in Neural Information Processing Systems, vol. 31, 2018.

nubo.optimisation.multipoint.multi_joint(func: Callable, method: str, batch_size: int, bounds: Tensor, discrete: dict | None = None, lr: float | None = 0.1, steps: int | None = 100, num_starts: int | None = 10, num_samples: int | None = 100, **kwargs: Any) → Tuple[Tensor, Tensor][source]#

Joint optimisation loop for Monte Carlo acquisition functions.

Optimises Monte Carlo acquisition functions to return multi-point batches for parallel evaluation. Computes all points of a batch at once. Minimises func.

Parameters:
func : Callable
    Function to optimise.

method : str
    One of “L-BFGS-B” or “Adam”.

batch_size : int
    Number of points to return.

bounds : torch.Tensor
    (size 2 x d) Optimisation bounds of input space.

discrete : dict, optional
    Possible values for all discrete inputs in the shape {dim1: [values1], dim2: [values2], etc.}, e.g. {0: [1., 2., 3.], 3: [-0.1, -0.2, 100.]}.

lr : float, optional
    Learning rate of torch.optim.Adam algorithm, default is 0.1.

steps : int, optional
    Optimisation steps of torch.optim.Adam algorithm, default is 100.

num_starts : int, optional
    Number of starts for multi-start optimisation, default is 10.

num_samples : int, optional
    Number of samples from which to draw the starts, default is 100.

**kwargs : Any
    Keyword arguments passed to torch.optim.Adam or scipy.optimize.minimize.

Returns:
batch_result : torch.Tensor
    (size batch_size x d) Batch inputs.

batch_func_result : torch.Tensor
    (size batch_size) Batch outputs.
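A minimal sketch of the joint loop. The differentiable toy function below merely stands in for a Monte Carlo acquisition function scored over a whole batch; the bounds, batch_size, and Adam settings are illustrative assumptions:

    import torch
    from nubo.optimisation.multipoint import multi_joint

    # stand-in for a Monte Carlo acquisition function evaluated on a full batch
    def batch_objective(x):
        return torch.sum((x - 0.5) ** 2)

    bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]])  # 2 x d

    # jointly optimise a batch of 4 points with Adam
    batch_x, batch_f = multi_joint(batch_objective, "Adam", batch_size=4,
                                   bounds=bounds, lr=0.05, steps=100)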

nubo.optimisation.multipoint.multi_sequential(func: Callable, method: str, batch_size: int, bounds: Tensor, constraints: dict | list | None = None, discrete: dict | None = None, lr: float | None = 0.1, steps: int | None = 100, num_starts: int | None = 10, num_samples: int | None = 100, **kwargs: Any) → Tuple[Tensor, Tensor][source]#

Sequential greedy optimisation loop for Monte Carlo acquisition functions.

Optimises Monte Carlo acquisition functions to return multi-point batches for parallel evaluation. Computes the points of a batch one after another, always keeping previously computed points fixed: point 1 is computed first, point 2 is computed while holding point 1 fixed, point 3 while holding points 1 and 2 fixed, and so on until the batch is full. Minimises func.

Parameters:
func : Callable
    Function to optimise.

method : str
    One of “L-BFGS-B”, “SLSQP”, or “Adam”.

batch_size : int
    Number of points to return.

bounds : torch.Tensor
    (size 2 x d) Optimisation bounds of input space.

constraints : dict or list of dict, optional
    Optimisation constraints.

discrete : dict, optional
    Possible values for all discrete inputs in the shape {dim1: [values1], dim2: [values2], etc.}, e.g. {0: [1., 2., 3.], 3: [-0.1, -0.2, 100.]}.

lr : float, optional
    Learning rate of torch.optim.Adam algorithm, default is 0.1.

steps : int, optional
    Optimisation steps of torch.optim.Adam algorithm, default is 100.

num_starts : int, optional
    Number of starts for multi-start optimisation, default is 10.

num_samples : int, optional
    Number of samples from which to draw the starts, default is 100.

**kwargs : Any
    Keyword arguments passed to torch.optim.Adam or scipy.optimize.minimize.

Returns:
batch_result : torch.Tensor
    (size batch_size x d) Batch inputs.

batch_func_result : torch.Tensor
    (size batch_size) Batch outputs.
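A minimal sketch of the sequential greedy loop. Again, the toy objective only stands in for a Monte Carlo acquisition function, and the specific bounds and batch_size are illustrative:

    import torch
    from nubo.optimisation.multipoint import multi_sequential

    # stand-in for a Monte Carlo acquisition function (to be minimised)
    def mc_objective(x):
        return torch.sum((x - 0.5) ** 2)

    bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]])  # 2 x d

    # build a batch of 4 points greedily, one point at a time
    batch_x, batch_f = multi_sequential(mc_objective, "L-BFGS-B", batch_size=4,
                                        bounds=bounds, num_starts=5)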

Mixed optimisation#

nubo.optimisation.mixed.mixed(func: Callable, method: str, bounds: Tensor, discrete: dict, constraints: dict | list | None = None, lr: float | None = 0.1, steps: int | None = 200, num_starts: int | None = 10, num_samples: int | None = 100, **kwargs: Any) → Tuple[Tensor, Tensor][source]#

Mixed optimisation with continuous and discrete inputs.

Optimises the acquisition function over the continuous input dimensions for each combination of the discrete inputs and returns the best result over all possible combinations. Minimises func.

Parameters:
func : Callable
    Function to optimise.

method : str
    One of “L-BFGS-B”, “SLSQP”, or “Adam”.

bounds : torch.Tensor
    (size 2 x d) Optimisation bounds of input space.

discrete : dict
    Possible values for all discrete inputs in the shape {dim1: [values1], dim2: [values2], etc.}, e.g. {0: [1., 2., 3.], 3: [-0.1, -0.2, 100.]}.

constraints : dict or list of dict, optional
    Optimisation constraints.

lr : float, optional
    Learning rate of torch.optim.Adam algorithm, default is 0.1.

steps : int, optional
    Optimisation steps of torch.optim.Adam algorithm, default is 200.

num_starts : int, optional
    Number of starts for multi-start optimisation, default is 10.

num_samples : int, optional
    Number of samples from which to draw the starts, default is 100.

**kwargs : Any
    Keyword arguments passed to torch.optim.Adam or scipy.optimize.minimize.

Returns:
best_result : torch.Tensor
    (size 1 x d) Minimiser inputs.

best_func_result : torch.Tensor
    (size 1) Minimiser output.
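A minimal sketch with one continuous and one discrete dimension. The toy objective, the bounds, and the choice of which dimension is discrete are illustrative assumptions, not part of the library:

    import torch
    from nubo.optimisation.mixed import mixed

    # stand-in for an acquisition function over a mixed input space
    def objective(x):
        return torch.sum((x - 0.5) ** 2)

    bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]])    # 2 x d
    discrete = {1: [0.0, 0.25, 0.5, 0.75, 1.0]}        # dimension 1 is discrete

    # optimises the continuous dimension for every discrete value and keeps the best
    x_best, f_best = mixed(objective, "L-BFGS-B", bounds, discrete)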

Deterministic optimisers#

References

  • P Virtanen et al., “SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python,” Nature Methods, vol. 17, no. 3, pp. 261-272, 2020.

nubo.optimisation.lbfgsb.lbfgsb(func: Callable, bounds: Tensor, num_starts: int | None = 10, num_samples: int | None = 100, **kwargs: Any) → Tuple[Tensor, Tensor][source]#

Multi-start L-BFGS-B optimiser using the scipy.optimize.minimize implementation from SciPy.

Used for optimising analytical acquisition functions or Monte Carlo acquisition functions when base samples are fixed. Picks the best num_starts points from a total of num_samples Latin hypercube samples to initialise the optimiser. Returns the best result. Minimises func.

Parameters:
func : Callable
    Function to optimise.

bounds : torch.Tensor
    (size 2 x d) Optimisation bounds of input space.

num_starts : int, optional
    Number of starts for multi-start optimisation, default is 10.

num_samples : int, optional
    Number of samples from which to draw the starts, default is 100.

**kwargs : Any
    Keyword arguments passed to scipy.optimize.minimize.

Returns:
best_result : torch.Tensor
    (size 1 x d) Minimiser inputs.

best_func_result : torch.Tensor
    (size 1) Minimiser output.
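A minimal usage sketch of the multi-start L-BFGS-B wrapper; the toy objective and the num_starts/num_samples values are illustrative:

    import torch
    from nubo.optimisation.lbfgsb import lbfgsb

    # toy objective standing in for an analytical acquisition function
    def objective(x):
        return torch.sum((x - 0.5) ** 2)

    bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]])  # 2 x d

    x_best, f_best = lbfgsb(objective, bounds, num_starts=5, num_samples=50)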

nubo.optimisation.slsqp.slsqp(func: Callable, bounds: Tensor, constraints: dict | Tuple[dict] | None = (), num_starts: int | None = 10, num_samples: int | None = 100, **kwargs: Any) → Tuple[Tensor, Tensor][source]#

Multi-start SLSQP optimiser using the scipy.optimize.minimize implementation from SciPy.

Used for optimising analytical acquisition functions or Monte Carlo acquisition functions when base samples are fixed. Picks the best num_starts points from a total of num_samples Latin hypercube samples to initialise the optimiser. Returns the best result. Minimises func.

Parameters:
func : Callable
    Function to optimise.

bounds : torch.Tensor
    (size 2 x d) Optimisation bounds of input space.

constraints : dict or Tuple of dict, optional
    Optimisation constraints, default is no constraints.

num_starts : int, optional
    Number of starts for multi-start optimisation, default is 10.

num_samples : int, optional
    Number of samples from which to draw the starts, default is 100.

**kwargs : Any
    Keyword arguments passed to scipy.optimize.minimize.

Returns:
best_result : torch.Tensor
    (size 1 x d) Minimiser inputs.

best_func_result : torch.Tensor
    (size 1) Minimiser output.
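A minimal sketch with a constraint. The toy objective and the particular inequality are illustrative; the constraint dict is written in SciPy's usual {"type", "fun"} convention, which is assumed here to be what this wrapper forwards to scipy.optimize.minimize:

    import torch
    from nubo.optimisation.slsqp import slsqp

    def objective(x):
        return torch.sum((x - 0.5) ** 2)

    bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]])  # 2 x d

    # inequality constraint: fun(x) >= 0 must hold at the solution
    constraint = {"type": "ineq", "fun": lambda x: 1.0 - x[0] - x[1]}

    x_best, f_best = slsqp(objective, bounds, constraints=constraint, num_starts=5)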

Stochastic optimisers#

References

  • DP Kingma and J Ba, “Adam: A Method for Stochastic Optimization,” Proceedings of the 3rd International Conference on Learning Representations, 2015.

  • A Paszke, et al., “PyTorch: An Imperative Style, High-Performance Deep Learning Library,” In Advances in Neural Information Processing Systems, vol. 32, 2019.

nubo.optimisation.adam.adam(func: Callable, bounds: Tensor, lr: float | None = 0.1, steps: int | None = 200, num_starts: int | None = 10, num_samples: int | None = 100, **kwargs: Any) → Tuple[Tensor, Tensor][source]#

Multi-start Adam optimiser using the torch.optim.Adam implementation from PyTorch.

Used for optimising Monte Carlo acquisition functions when base samples are not fixed. Bounds are enforced by transforming func with the sigmoid function and scaling the results. Picks the best num_starts points from a total of num_samples Latin hypercube samples to initialise the optimiser. Returns the best result. Minimises func.

Parameters:
func : Callable
    Function to optimise.

bounds : torch.Tensor
    (size 2 x d) Optimisation bounds of input space.

lr : float, optional
    Learning rate of torch.optim.Adam algorithm, default is 0.1.

steps : int, optional
    Optimisation steps of torch.optim.Adam algorithm, default is 200.

num_starts : int, optional
    Number of starts for multi-start optimisation, default is 10.

num_samples : int, optional
    Number of samples from which to draw the starts, default is 100.

**kwargs : Any
    Keyword arguments passed to torch.optim.Adam.

Returns:
best_result : torch.Tensor
    (size 1 x d) Minimiser inputs.

best_func_result : torch.Tensor
    (size 1) Minimiser output.
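A minimal usage sketch. The differentiable toy objective stands in for a Monte Carlo acquisition function with resampled base samples (the intended use case for Adam); the learning rate and step count shown are illustrative:

    import torch
    from nubo.optimisation.adam import adam

    # differentiable stand-in for a Monte Carlo acquisition function
    def objective(x):
        return torch.sum((x - 0.5) ** 2)

    bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]])  # 2 x d

    x_best, f_best = adam(objective, bounds, lr=0.05, steps=200, num_starts=5)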

nubo.optimisation.adam.adam_mixed(func: Callable, bounds: Tensor, lr: float | None = 0.1, steps: int | None = 200, num_starts: int | None = 10, num_samples: int | None = 100, **kwargs: Any) → Tuple[Tensor, Tensor][source]#

Multi-start Adam optimiser using the torch.optim.Adam implementation from PyTorch.

Used for optimising Monte Carlo acquisition functions when base samples are not fixed. Bounds are enforced by clamping values that exceed them. Picks the best num_starts points from a total of num_samples Latin hypercube samples to initialise the optimiser. Returns the best result. Minimises func.

Parameters:
func : Callable
    Function to optimise.

bounds : torch.Tensor
    (size 2 x d) Optimisation bounds of input space.

lr : float, optional
    Learning rate of torch.optim.Adam algorithm, default is 0.1.

steps : int, optional
    Optimisation steps of torch.optim.Adam algorithm, default is 200.

num_starts : int, optional
    Number of starts for multi-start optimisation, default is 10.

num_samples : int, optional
    Number of samples from which to draw the starts, default is 100.

**kwargs : Any
    Keyword arguments passed to torch.optim.Adam.

Returns:
best_result : torch.Tensor
    (size 1 x d) Minimiser inputs.

best_func_result : torch.Tensor
    (size 1) Minimiser output.
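A minimal sketch, identical in shape to the adam() example above but using the clamping variant; the toy objective and settings are again illustrative:

    import torch
    from nubo.optimisation.adam import adam_mixed

    def objective(x):
        return torch.sum((x - 0.5) ** 2)

    bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]])  # 2 x d

    # same interface as adam(), but bounds are enforced by clamping
    x_best, f_best = adam_mixed(objective, bounds, lr=0.05, steps=200)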

Optimisation utilities#

nubo.optimisation.utils.gen_candidates(func: Callable, bounds: Tensor, num_candidates: int, num_samples: int, args: Tuple | None = ()) → Tensor[source]#

Generates candidates for multi-start optimisation using a maximin Latin hypercube design, or a uniform distribution when only one candidate point is drawn.

Parameters:
func : Callable
    Function to optimise.

bounds : torch.Tensor
    (size 2 x d) Optimisation bounds of input space.

num_candidates : int
    Number of candidates.

num_samples : int
    Number of samples from which to draw the starts.

args : Tuple, optional
    Arguments for the function to optimise, in order.

Returns:
torch.Tensor
    (size num_candidates x d) Candidates.
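A minimal usage sketch; the toy objective is illustrative, and summing over the last dimension is an assumption so that it scores points whether func is evaluated on a single point or on a whole sample matrix:

    import torch
    from nubo.optimisation.utils import gen_candidates

    # toy objective used only to rank the Latin hypercube samples
    def objective(x):
        return torch.sum((x - 0.5) ** 2, dim=-1)

    bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]])  # 2 x d

    # draw 100 Latin hypercube samples and keep the 10 best as starting points
    candidates = gen_candidates(objective, bounds, num_candidates=10, num_samples=100)
    # candidates has size num_candidates x d, as documented above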