Sequential single-point Bayesian optimisation

In this example, NUBO is used for sequential single-point optimisation. The Hartmann6D synthetic test function acts as a substitute for a black-box objective function, such as an experiment or a simulation. We use the analytical acquisition function UpperConfidenceBound with \(\beta = 1.96^2\), which corresponds to the 95% confidence interval of a Gaussian distribution. We maximise this acquisition function with the multi-start L-BFGS-B algorithm, using 5 starts to reduce the risk of getting stuck in a local maximum. The optimisation loop runs for 40 iterations and finds a solution close to the true optimum of -3.3224.
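For reference, the upper confidence bound of a Gaussian process posterior with predictive mean \(\mu_n(x)\) and predictive standard deviation \(\sigma_n(x)\) is

\[\mathrm{UCB}(x) = \mu_n(x) + \sqrt{\beta}\,\sigma_n(x),\]

so \(\beta = 1.96^2\) places the bound at \(\mu_n(x) + 1.96\,\sigma_n(x)\), the upper end of the 95% confidence interval.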

[1]:
import torch
from nubo.acquisition import ExpectedImprovement, UpperConfidenceBound
from nubo.models import GaussianProcess, fit_gp
from nubo.optimisation import single
from nubo.test_functions import Hartmann6D
from nubo.utils import gen_inputs
from gpytorch.likelihoods import GaussianLikelihood


# test function (minimise=False negates the outputs so the problem becomes a maximisation)
func = Hartmann6D(minimise=False)
dims = 6

# specify bounds
bounds = torch.tensor([[0., 0., 0., 0., 0., 0.], [1., 1., 1., 1., 1., 1.]])

# generate initial training data (5 points per input dimension)
x_train = gen_inputs(num_points=dims*5,
                     num_dims=dims,
                     bounds=bounds)
y_train = func(x_train)

# Bayesian optimisation loop
iters = 40

for _ in range(iters):

    # specify Gaussian process
    likelihood = GaussianLikelihood()
    gp = GaussianProcess(x_train, y_train, likelihood=likelihood)

    # fit Gaussian process
    fit_gp(x_train, y_train, gp=gp, likelihood=likelihood, lr=0.1, steps=200)

    # specify acquisition function
    # acq = ExpectedImprovement(gp=gp, y_best=torch.max(y_train))
    acq = UpperConfidenceBound(gp=gp, beta=1.96**2)

    # optimise acquisition function
    x_new, _ = single(func=acq, method="L-BFGS-B", bounds=bounds, num_starts=5)

    # evaluate new point
    y_new = func(x_new)

    # add to data
    x_train = torch.vstack((x_train, x_new))
    y_train = torch.hstack((y_train, y_new))

    # print new best (negated back to the original minimisation scale)
    if y_new > torch.max(y_train[:-1]):
        print(f"New best at evaluation {len(y_train)}: \t Inputs: {x_new.numpy().reshape(dims).round(4)}, \t Outputs: {-y_new.numpy().round(4)}")

# results (best evaluation, negated back to the minimisation scale)
best_iter = int(torch.argmax(y_train))
print(f"Evaluation: {best_iter+1} \t Solution: {-float(y_train[best_iter]):.4f}")

New best at evaluation 36:       Inputs: [0.3965 1.     1.     0.6047 0.2086 0.0744],    Outputs: [-2.7826]
New best at evaluation 38:       Inputs: [0.4048 1.     1.     0.58   0.1023 0.0234],    Outputs: [-2.8559]
New best at evaluation 42:       Inputs: [0.4123 0.8905 1.     0.5833 0.     0.0325],    Outputs: [-3.1857]
New best at evaluation 48:       Inputs: [0.396  0.8896 1.     0.5694 0.     0.0367],    Outputs: [-3.1883]
New best at evaluation 50:       Inputs: [0.4044 0.8807 1.     0.5761 0.     0.0376],    Outputs: [-3.1942]
New best at evaluation 52:       Inputs: [0.4061 0.8784 1.     0.576  0.1186 0.0378],    Outputs: [-3.1989]
Evaluation: 52   Solution: -3.1989
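
As a quick sanity check (not part of the original example), the gap between the best observed value and the known optimum quoted in the introduction can be computed directly from the variables of the loop above:

# hypothetical post-processing sketch; true_optimum is the known
# Hartmann6D minimum quoted in the introduction
true_optimum = -3.3224
best_found = -float(y_train[best_iter])  # negate back to the minimisation scale
print(f"Gap to true optimum: {abs(best_found - true_optimum):.4f}")  # prints 0.1235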