Acquisition module#
Parent class#
Analytical acquisition functions#
References
DR Jones, M Schonlau, and WJ Welch, “Efficient Global Optimization of Expensive Black-Box Functions,” Journal of Global Optimization, vol. 13, no. 4, pp. 455-492, 1998.
N Srinivas, A Krause, SM Kakade, and M Seeger, “Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design,” Proceedings of the 27th International Conference on Machine Learning, pp. 1015-1022, 2010.
- class nubo.acquisition.analytical.ExpectedImprovement(gp: GP, y_best: Tensor)[source]#
Bases: AcquisitionFunction
Expected improvement acquisition function:
\[\alpha_{EI} (\boldsymbol X_*) = \left(\mu_n(\boldsymbol X_*) - y^{best} \right) \Phi(z) + \sigma_n(\boldsymbol X_*) \phi(z),\]
where \(z = \frac{\mu_n(\boldsymbol X_*) - y^{best}}{\sigma_n(\boldsymbol X_*)}\), \(\mu_n(\cdot)\) and \(\sigma_n(\cdot)\) are the mean and the standard deviation of the posterior distribution of the Gaussian process, \(y^{best}\) is the current best observation, and \(\Phi (\cdot)\) and \(\phi (\cdot)\) are the cumulative distribution function and the probability density function of the standard normal distribution.
- Attributes:
  - gp : gpytorch.models.GP
    Gaussian Process model.
  - y_best : torch.Tensor
    (size 1) Best output of training data.
Methods
  - eval(x)
    Computes the (negative) expected improvement for some test point x analytically.
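A minimal usage sketch, assuming a gpytorch GP gp has already been fitted to training data x_train and y_train (these names, the candidate shape, and the surrounding setup are illustrative assumptions; only the constructor and eval method documented above are taken from NUBO):

```python
import torch
from nubo.acquisition.analytical import ExpectedImprovement

# assumed to exist: fitted gpytorch GP `gp`, training data `x_train`, `y_train`
y_best = torch.max(y_train)                      # current best observation
acq = ExpectedImprovement(gp=gp, y_best=y_best)

x_test = torch.rand((1, x_train.shape[1]))       # candidate point (shape assumed)
neg_ei = acq.eval(x_test)                        # negative expected improvement, to be minimised
```

Because eval returns the negative value, maximising the acquisition function corresponds to minimising the output of eval.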
- class nubo.acquisition.analytical.UpperConfidenceBound(gp: GP, beta: float | None = 4.0)[source]#
Bases: AcquisitionFunction
Upper confidence bound acquisition function:
\[\alpha_{UCB} (\boldsymbol X_*) = \mu_n(\boldsymbol X_*) + \sqrt{\beta} \sigma_n(\boldsymbol X_*),\]
where \(\beta\) is a predefined trade-off parameter, and \(\mu_n(\cdot)\) and \(\sigma_n(\cdot)\) are the mean and the standard deviation of the posterior distribution of the Gaussian process.
- Attributes:
  - gp : gpytorch.models.GP
    Gaussian Process model.
  - beta : float
    Trade-off parameter, default is 4.0.
Methods
  - eval(x)
    Computes the (negative) upper confidence bound for some test point x analytically.
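An analogous sketch for the upper confidence bound, under the same assumptions (a fitted gpytorch GP gp over d-dimensional inputs; names and shapes are illustrative):

```python
import torch
from nubo.acquisition.analytical import UpperConfidenceBound

# assumed to exist: fitted gpytorch GP `gp` over d-dimensional inputs
acq = UpperConfidenceBound(gp=gp, beta=4.0)   # larger beta weights exploration more heavily

x_test = torch.rand((1, d))                   # candidate point (shape assumed)
neg_ucb = acq.eval(x_test)                    # negative upper confidence bound, to be minimised
```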
Monte Carlo acquisition functions#
References
J Wilson, F Hutter, and M Deisenroth, “Maximizing Acquisition Functions for Bayesian Optimization,” Advances in Neural Information Processing Systems, vol. 31, 2018.
- class nubo.acquisition.monte_carlo.MCExpectedImprovement(gp: GP, y_best: Tensor, x_pending: Tensor | None = None, samples: int | None = 512, fix_base_samples: bool | None = False)[source]#
Bases: AcquisitionFunction
Monte Carlo expected improvement acquisition function:
\[\alpha_{EI}^{MC} (\boldsymbol X_*) = \max \left(ReLU(\mu_n(\boldsymbol X_*) + \boldsymbol L \boldsymbol z - y^{best}) \right),\]
where \(\mu_n(\cdot)\) is the mean of the predictive distribution of the Gaussian process, \(\boldsymbol L\) is the lower triangular matrix of the Cholesky decomposition of the covariance matrix of the predictive distribution \(\boldsymbol L \boldsymbol L^T = K(\boldsymbol X_*, \boldsymbol X_*)\), \(\boldsymbol z\) are samples from the standard normal distribution \(\mathcal{N} (0, 1)\), \(y^{best}\) is the current best observation, and \(ReLU (\cdot)\) is the rectified linear unit function that zeros all values below 0 and leaves the rest as is.
- Attributes:
  - gp : gpytorch.models.GP
    Gaussian Process model.
  - y_best : torch.Tensor
    (size 1) Best output of training data.
  - x_pending : torch.Tensor
    (size n x d) Training inputs of currently pending points.
  - samples : int
    Number of Monte Carlo samples, default is 512.
  - fix_base_samples : bool
    Whether the base samples used to compute the Monte Carlo samples of the acquisition function should be fixed for the optimisation step. If False (default), a stochastic optimiser (Adam) has to be used; if True, a deterministic optimiser (L-BFGS-B, SLSQP) can be used.
  - base_samples : NoneType or torch.Tensor
    Base samples used to compute Monte Carlo samples, drawn if fix_base_samples is True.
  - dims : int
    Number of input dimensions.
Methods
  - eval(x)
    Computes the (negative) expected improvement for some test point x by averaging Monte Carlo samples.
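A sketch of how the Monte Carlo variant might be set up for an asynchronous or parallel setting; gp, y_train, x_pending, and dims are assumed to exist and the candidate shape is illustrative. Fixing the base samples makes the Monte Carlo approximation deterministic, so a deterministic optimiser (L-BFGS-B, SLSQP) can be used, as noted above.

```python
import torch
from nubo.acquisition.monte_carlo import MCExpectedImprovement

# assumed to exist: fitted gpytorch GP `gp`, training outputs `y_train`,
# and pending inputs `x_pending` (a size n x d tensor, or None)
acq = MCExpectedImprovement(
    gp=gp,
    y_best=torch.max(y_train),
    x_pending=x_pending,
    samples=512,              # number of Monte Carlo samples
    fix_base_samples=True,    # fixed base samples allow a deterministic optimiser
)

x_test = torch.rand((1, dims))   # candidate point (shape assumed); `dims` = number of inputs
neg_mc_ei = acq.eval(x_test)     # negative Monte Carlo expected improvement, to be minimised
```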
- class nubo.acquisition.monte_carlo.MCUpperConfidenceBound(gp: GP, beta: float | None = 4.0, x_pending: Tensor | None = None, samples: int | None = 512, fix_base_samples: bool | None = False)[source]#
Bases: AcquisitionFunction
Monte Carlo upper confidence bound acquisition function:
\[\alpha_{UCB}^{MC} (\boldsymbol X_*) = \max \left(\mu_n(\boldsymbol X_*) + \sqrt{\frac{\beta \pi}{2}} \lvert \boldsymbol L \boldsymbol z \rvert \right),\]
where \(\mu_n(\cdot)\) is the mean of the predictive distribution of the Gaussian process, \(\boldsymbol L\) is the lower triangular matrix of the Cholesky decomposition of the covariance matrix of the predictive distribution \(\boldsymbol L \boldsymbol L^T = K(\boldsymbol X_*, \boldsymbol X_*)\), \(\boldsymbol z\) are samples from the standard normal distribution \(\mathcal{N} (0, 1)\), and \(\beta\) is the trade-off parameter.
- Attributes:
  - gp : gpytorch.models.GP
    Gaussian Process model.
  - beta : float
    Trade-off parameter, default is 4.0.
  - x_pending : torch.Tensor
    (size n x d) Training inputs of currently pending points.
  - samples : int
    Number of Monte Carlo samples, default is 512.
  - fix_base_samples : bool
    Whether the base samples used to compute the Monte Carlo samples of the acquisition function should be fixed for the optimisation step. If False (default), a stochastic optimiser (Adam) has to be used; if True, a deterministic optimiser (L-BFGS-B, SLSQP) can be used.
  - base_samples : NoneType or torch.Tensor
    Base samples used to compute Monte Carlo samples, drawn if fix_base_samples is True.
  - dims : int
    Number of input dimensions.
Methods
  - eval(x)
    Computes the (negative) upper confidence bound for some test point x by averaging Monte Carlo samples.
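A final sketch for the Monte Carlo upper confidence bound; here the base samples are left unfixed, so a stochastic optimiser (Adam) would be needed for the subsequent optimisation step. Names and shapes are again illustrative assumptions.

```python
import torch
from nubo.acquisition.monte_carlo import MCUpperConfidenceBound

# assumed to exist: fitted gpytorch GP `gp` and pending inputs `x_pending` (or None)
acq = MCUpperConfidenceBound(
    gp=gp,
    beta=4.0,                  # trade-off between exploitation and exploration
    x_pending=x_pending,
    samples=512,               # number of Monte Carlo samples
    fix_base_samples=False,    # resampled base samples require a stochastic optimiser (Adam)
)

x_test = torch.rand((1, dims))    # candidate point (shape assumed)
neg_mc_ucb = acq.eval(x_test)     # negative Monte Carlo upper confidence bound, to be minimised
```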