Acquisition module

Parent class

class nubo.acquisition.acquisition_function.AcquisitionFunction

Parent class of all acquisition functions.

Methods

__call__(x)

Wrapper to allow x to be a torch.Tensor or a numpy.ndarray to enable optimisation with torch.optim and scipy.optimize.
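
For illustration, below is a minimal sketch of this wrapper in use with the ExpectedImprovement class documented below, assuming NUBO and GPyTorch are installed. The surrogate is a generic GPyTorch model with untrained hyperparameters; the model class and variable names are illustrative, not part of NUBO. Later sketches in this section reuse gp and y_train from this setup.

    import torch
    import gpytorch
    from nubo.acquisition.analytical import ExpectedImprovement

    # Generic GPyTorch surrogate standing in for a fitted model.
    class GPModel(gpytorch.models.ExactGP):
        def __init__(self, x_train, y_train, likelihood):
            super().__init__(x_train, y_train, likelihood)
            self.mean_module = gpytorch.means.ConstantMean()
            self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

        def forward(self, x):
            return gpytorch.distributions.MultivariateNormal(
                self.mean_module(x), self.covar_module(x)
            )

    torch.manual_seed(0)
    x_train = torch.rand(20, 2)               # 20 training points in 2 dimensions
    y_train = torch.sin(x_train.sum(dim=-1))  # toy observations

    likelihood = gpytorch.likelihoods.GaussianLikelihood()
    gp = GPModel(x_train, y_train, likelihood)
    gp.eval()                                 # posterior (prediction) mode
    likelihood.eval()

    acq = ExpectedImprovement(gp=gp, y_best=torch.max(y_train))

    # The same object accepts both input types:
    x_tensor = torch.rand(1, 2)               # torch.Tensor, for torch.optim
    x_array = x_tensor.numpy()                # numpy.ndarray, for scipy.optimize
    print(acq(x_tensor))
    print(acq(x_array))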

Analytical acquisition functions

References

  • DR Jones, M Schonlau, and WJ Welch, “Efficient Global Optimization of Expensive Black-Box Functions,” Journal of Global Optimization, vol. 13, no. 4, pp. 455-492, 1998.

  • N Srinivas, A Krause, SM Kakade, and M Seeger, “Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design,” Proceedings of the 27th International Conference on Machine Learning, pp. 1015-1022, 2010.

class nubo.acquisition.analytical.ExpectedImprovement(gp: GP, y_best: Tensor)

Bases: AcquisitionFunction

Expected improvement acquisition function:

\[\alpha_{EI} (\boldsymbol X_*) = \left(\mu_n(\boldsymbol X_*) - y^{best} \right) \Phi(z) + \sigma_n(\boldsymbol X_*) \phi(z),\]

where \(z = \frac{\mu_n(\boldsymbol X_*) - y^{best}}{\sigma_n(\boldsymbol X_*)}\), \(\mu_n(\cdot)\) and \(\sigma_n(\cdot)\) are the mean and the standard deviation of the posterior distribution of the Gaussian process, \(y^{best}\) is the current best observation, and \(\Phi (\cdot)\) and \(\phi (\cdot)\) are the cumulative distribution function and the probability density function of the standard normal distribution.
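
The formula can be evaluated directly from the posterior mean and standard deviation. A minimal sketch (the function below is illustrative, not NUBO's implementation):

    import torch
    from torch.distributions import Normal

    def expected_improvement(mean, std, y_best):
        # z = (mu - y_best) / sigma
        z = (mean - y_best) / std
        standard_normal = Normal(0.0, 1.0)
        # (mu - y_best) * Phi(z) + sigma * phi(z)
        return (mean - y_best) * standard_normal.cdf(z) + std * standard_normal.log_prob(z).exp()

    # Posterior mean 1.2 and standard deviation 0.5 at a test point,
    # current best observation 1.0.
    print(expected_improvement(torch.tensor(1.2), torch.tensor(0.5), torch.tensor(1.0)))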

Attributes:
gp : gpytorch.models.GP

Gaussian Process model.

y_best : torch.Tensor

(size 1) Best output of training data.

Methods

eval(x)

Computes the (negative) expected improvement for some test point x analytically.

eval(x: Tensor) → Tensor

Computes the (negative) expected improvement for some test point x analytically.

Parameters:
x : torch.Tensor

(size 1 x d) Test point.

Returns:
torch.Tensor

(size 1) (Negative) expected improvement of x.
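
Because eval() returns the negative expected improvement, maximising the acquisition function amounts to minimising the wrapped object. A sketch with scipy.optimize, reusing gp and acq from the sketch above and relying on the parent class wrapper to accept SciPy's flat arrays:

    import numpy as np
    from scipy.optimize import minimize

    bounds = [(0.0, 1.0), (0.0, 1.0)]         # box bounds of the 2d toy problem
    x0 = np.random.uniform(0.0, 1.0, size=2)  # one random start; use several in practice

    result = minimize(acq, x0=x0, method="L-BFGS-B", bounds=bounds)
    x_new = result.x.reshape(1, -1)           # candidate with the largest EI found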

class nubo.acquisition.analytical.UpperConfidenceBound(gp: GP, beta: float | None = 4.0)

Bases: AcquisitionFunction

Upper confidence bound acquisition function:

\[\alpha_{UCB} (\boldsymbol X_*) = \mu_n(\boldsymbol X_*) + \sqrt{\beta} \sigma_n(\boldsymbol X_*),\]

where \(\beta\) is a predefined trade-off parameter, and \(\mu_n(\cdot)\) and \(\sigma_n(\cdot)\) are the mean and the standard deviation of the posterior distribution of the Gaussian process.

Attributes:
gp : gpytorch.models.GP

Gaussian Process model.

beta : float

Trade-off parameter, default is 4.0.

Methods

eval(x)

Computes the (negative) upper confidence bound for some test point x analytically.

eval(x: Tensor) → Tensor

Computes the (negative) upper confidence bound for some test point x analytically.

Parameters:
x : torch.Tensor

(size 1 x d) Test point.

Returns:
torch.Tensor

(size 1) (Negative) upper confidence bound of x.
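
A short sketch of the trade-off, reusing gp from the first sketch: larger values of beta weight the posterior standard deviation more heavily and favour exploration over exploitation.

    import torch
    from nubo.acquisition.analytical import UpperConfidenceBound

    x = torch.rand(1, 2)
    for beta in [1.0, 4.0, 16.0]:
        acq = UpperConfidenceBound(gp=gp, beta=beta)
        print(beta, acq.eval(x))              # negative UCB at the same test point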

Monte Carlo acquisition functions

References

  • J Wilson, F Hutter, and M Deisenroth, “Maximizing Acquisition Functions for Bayesian Optimization,” Advances in Neural Information Processing Systems, vol. 31, 2018.

class nubo.acquisition.monte_carlo.MCExpectedImprovement(gp: GP, y_best: Tensor, x_pending: Tensor | None = None, samples: int | None = 512, fix_base_samples: bool | None = False)

Bases: AcquisitionFunction

Monte Carlo expected improvement acquisition function:

\[\alpha_{EI}^{MC} (\boldsymbol X_*) = \max \left(ReLU(\mu_n(\boldsymbol X_*) + \boldsymbol L \boldsymbol z - y^{best}) \right),\]

where \(\mu_n(\cdot)\) is the mean of the predictive distribution of the Gaussian process, \(\boldsymbol L\) is the lower triangular matrix of the Cholesky decomposition of the covariance matrix of the posterior distribution at the test points, \(\boldsymbol L \boldsymbol L^T = K(\boldsymbol X_*, \boldsymbol X_*)\), \(\boldsymbol z\) are samples from the standard normal distribution \(\mathcal{N} (\boldsymbol 0, \boldsymbol I)\), \(y^{best}\) is the current best observation, and \(ReLU (\cdot)\) is the rectified linear unit function that zeros all values below 0 and leaves the rest unchanged. The maximum is taken over the candidate points, and the final estimate averages this quantity over the Monte Carlo samples \(\boldsymbol z\).
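
This reparameterisation can be sketched directly in PyTorch, independently of the class (the function and inputs below are illustrative):

    import torch

    def mc_expected_improvement(mean, cov, y_best, samples=512):
        L = torch.linalg.cholesky(cov)             # L L^T = posterior covariance
        z = torch.randn(samples, mean.shape[0])    # base samples z ~ N(0, I)
        paths = mean + z @ L.T                     # samples from the posterior
        improvement = torch.relu(paths - y_best)   # ReLU(mu + Lz - y_best)
        # max over candidate points, mean over Monte Carlo samples
        return improvement.max(dim=-1).values.mean()

    # Posterior over two candidate points.
    mean = torch.tensor([1.1, 0.9])
    cov = torch.tensor([[0.25, 0.05], [0.05, 0.16]])
    print(mc_expected_improvement(mean, cov, y_best=torch.tensor(1.0)))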

Attributes:
gp : gpytorch.models.GP

Gaussian Process model.

y_best : torch.Tensor

(size 1) Best output of training data.

x_pending : torch.Tensor

(size n x d) Inputs of currently pending points, that is, points selected for evaluation whose outputs are not yet available.

samples : int

Number of Monte Carlo samples, default is 512.

fix_base_samples : bool

Whether the base samples used to compute the Monte Carlo samples of the acquisition function should be fixed for the optimisation step. If False (default), a stochastic optimiser (Adam) has to be used; if True, a deterministic optimiser (L-BFGS-B, SLSQP) can be used.

base_samples : NoneType or torch.Tensor

Base samples used to compute the Monte Carlo samples, only drawn if fix_base_samples is True.

dims : int

Number of input dimensions.

Methods

eval(x)

Computes the (negative) expected improvement for some test point x by averaging Monte Carlo samples.

eval(x: Tensor) → Tensor

Computes the (negative) expected improvement for some test point x by averaging Monte Carlo samples.

Parameters:
x : torch.Tensor

(size 1 x d) Test point.

Returns:
torch.Tensor

(size 1) (Negative) expected improvement of x.
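
With the default fix_base_samples=False the estimate is stochastic, so a stochastic optimiser is required. A sketch with torch.optim.Adam, reusing gp and y_train from the first sketch (box bounds are omitted for brevity and would need to be enforced in practice):

    import torch
    from nubo.acquisition.monte_carlo import MCExpectedImprovement

    acq = MCExpectedImprovement(gp=gp, y_best=torch.max(y_train), samples=512)

    x = torch.rand(1, 2, requires_grad=True)  # candidate point to optimise
    optimiser = torch.optim.Adam([x], lr=0.1)

    for _ in range(100):
        optimiser.zero_grad()
        loss = acq(x)                         # negative MC expected improvement
        loss.backward()
        optimiser.step()

    x_new = x.detach()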

class nubo.acquisition.monte_carlo.MCUpperConfidenceBound(gp: GP, beta: float | None = 4.0, x_pending: Tensor | None = None, samples: int | None = 512, fix_base_samples: bool | None = False)

Bases: AcquisitionFunction

Monte Carlo upper confidence bound acquisition function:

\[\alpha_{UCB}^{MC} (\boldsymbol X_*) = \max \left(\mu_n(\boldsymbol X_*) + \sqrt{\frac{\beta \pi}{2}} \lvert \boldsymbol L \boldsymbol z \rvert \right),\]

where \(\mu_n(\cdot)\) is the mean of the predictive distribution of the Gaussian process, \(\boldsymbol L\) is the lower triangular matrix of the Cholesky decomposition of the covariance matrix of the posterior distribution at the test points, \(\boldsymbol L \boldsymbol L^T = K(\boldsymbol X_*, \boldsymbol X_*)\), \(\boldsymbol z\) are samples from the standard normal distribution \(\mathcal{N} (\boldsymbol 0, \boldsymbol I)\), and \(\beta\) is the trade-off parameter. The factor \(\sqrt{\pi / 2}\) makes the estimate consistent with the analytical acquisition function, since \(\mathbb{E} \lvert z \rvert = \sqrt{2 / \pi}\) for a standard normal variable.
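
A quick numerical sketch of this consistency (illustrative, not library code):

    import math
    import torch

    torch.manual_seed(0)
    beta, sigma = 4.0, 0.5                    # trade-off parameter and posterior sd
    z = torch.randn(1_000_000)

    # E[sqrt(beta * pi / 2) * |sigma * z|] approaches sqrt(beta) * sigma,
    # because E|z| = sqrt(2 / pi) for a standard normal variable.
    mc_term = (math.sqrt(beta * math.pi / 2) * (sigma * z).abs()).mean()
    print(mc_term, math.sqrt(beta) * sigma)   # both close to 1.0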

Attributes:
gp : gpytorch.models.GP

Gaussian Process model.

beta : float

Trade-off parameter, default is 4.0.

x_pending : torch.Tensor

(size n x d) Inputs of currently pending points, that is, points selected for evaluation whose outputs are not yet available.

samples : int

Number of Monte Carlo samples, default is 512.

fix_base_samples : bool

Whether the base samples used to compute the Monte Carlo samples of the acquisition function should be fixed for the optimisation step. If False (default), a stochastic optimiser (Adam) has to be used; if True, a deterministic optimiser (L-BFGS-B, SLSQP) can be used.

base_samples : NoneType or torch.Tensor

Base samples used to compute the Monte Carlo samples, only drawn if fix_base_samples is True.

dims : int

Number of input dimensions.

Methods

eval(x)

Computes the (negative) upper confidence bound for some test point x by averaging Monte Carlo samples.

eval(x: Tensor) → Tensor

Computes the (negative) upper confidence bound for some test point x by averaging Monte Carlo samples.

Parameters:
x : torch.Tensor

(size 1 x d) Test point.

Returns:
torch.Tensor

(size 1) (Negative) upper confidence bound of x.
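
With fix_base_samples=True the Monte Carlo estimate becomes deterministic, so the acquisition function can be passed to a deterministic optimiser. A sketch reusing gp from the first sketch:

    import numpy as np
    from scipy.optimize import minimize
    from nubo.acquisition.monte_carlo import MCUpperConfidenceBound

    acq = MCUpperConfidenceBound(gp=gp, beta=4.0, samples=512, fix_base_samples=True)

    bounds = [(0.0, 1.0), (0.0, 1.0)]
    x0 = np.random.uniform(0.0, 1.0, size=2)
    result = minimize(acq, x0=x0, method="L-BFGS-B", bounds=bounds)
    x_new = result.x.reshape(1, -1)           # proposed candidate point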