Advances in Bayesian Optimization with Applications in Aerospace Engineering
Optimization requires the quantities of interest that define objective functions and constraints to be evaluated a large number of times. In aerospace engineering, these quantities of interest can be expensive to compute (e.g., numerically solving a set of partial differential equations), leading to a challenging optimization problem. Bayesian optimization (BO) is a
class of algorithms for the global optimization of expensive-to-evaluate functions. BO leverages all past evaluations available to construct a surrogate model. This surrogate model is then used to select the next design to evaluate. This paper reviews two recent advances in BO that tackle the challenges of optimizing expensive functions and thus can enrich the
optimization toolbox of the aerospace engineer. The first method addresses optimization problems subject to inequality constraints where a finite budget of evaluations is available, a common situation when dealing with expensive models (e.g., a limited time to conduct the optimization study or limited access to a supercomputer). This challenge is addressed via a lookahead BO algorithm that plans the sequence of designs to evaluate in order to maximize the improvement achieved, not only at the next iteration, but once the total budget is consumed. The second method demonstrates how sensitivity information, such as gradients computed with adjoint methods, can be incorporated into a BO algorithm. This algorithm exploits sensitivity information in two ways: first, to enhance the surrogate model, and second, to improve the selection of the next design to evaluate by accounting for future gradient evaluations. The benefits of the two methods are demonstrated on aerospace examples
Non-Myopic Multifidelity Bayesian Optimization
Bayesian optimization is a popular framework for the optimization of black
box functions. Multifidelity methods accelerate Bayesian optimization
by exploiting low-fidelity representations of expensive objective functions.
Popular multifidelity Bayesian strategies rely on sampling policies that
account only for the immediate reward obtained by evaluating the objective
function at a specific input, forgoing the larger information gains that might
be obtained by looking several steps ahead. This paper proposes a non-myopic multifidelity
Bayesian framework to grasp the long-term reward from future steps of the
optimization. Our computational strategy relies on a two-step lookahead
multifidelity acquisition function that maximizes the cumulative reward
obtained by measuring the improvement in the solution over the next two steps. We
demonstrate that the proposed algorithm outperforms a standard multifidelity
Bayesian framework on popular benchmark optimization problems.
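A single-fidelity caricature of the lookahead idea: score each candidate by its immediate expected improvement plus the best improvement reachable one step later, after "fantasizing" the candidate's observation with the surrogate's posterior mean. The GP/EI setup, mean-based fantasy, and toy problem are all illustrative assumptions, not the paper's multifidelity algorithm:

```python
import numpy as np
from scipy.stats import norm

def kern(A, B, length=0.3):
    return np.exp(-0.5 * ((A[:, None] - B[None, :]) / length) ** 2)

def posterior(X, y, Xq):
    # GP posterior mean/std given evaluations (X, y)
    K = kern(X, X) + 1e-6 * np.eye(len(X))
    Ks = kern(Xq, X)
    mu = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def ei(mu, s, best):
    z = (best - mu) / s
    return (best - mu) * norm.cdf(z) + s * norm.pdf(z)

def two_step_score(X, y, x, grid):
    # step 1: immediate expected improvement at the candidate x
    mu, s = posterior(X, y, np.array([x]))
    step1 = ei(mu, s, y.min())[0]
    # step 2: fantasize the observation at x with the posterior mean,
    # then take the best expected improvement available afterwards
    Xf, yf = np.append(X, x), np.append(y, mu[0])
    mu2, s2 = posterior(Xf, yf, grid)
    return step1 + ei(mu2, s2, yf.min()).max()  # cumulative two-step reward

f = lambda x: np.sin(3 * x) + x ** 2
X = np.array([-1.0, 0.0, 1.0]); y = f(X)
grid = np.linspace(-1.0, 1.0, 101)
for _ in range(8):
    scores = [two_step_score(X, y, x, grid) for x in grid]
    x_next = grid[int(np.argmax(scores))]
    X, y = np.append(X, x_next), np.append(y, f(x_next))
print("best value:", y.min())
```

The second term is what distinguishes this score from a myopic acquisition: a candidate can win because the fantasy evaluation unlocks a promising follow-up step, even if its own immediate reward is modest.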
Stochastic Zeroth-order Functional Constrained Optimization: Oracle Complexity and Applications
Functionally constrained stochastic optimization problems, where neither the
objective function nor the constraint functions are analytically available,
arise frequently in machine learning applications. In this work, assuming we
only have access to the noisy evaluations of the objective and constraint
functions, we propose and analyze stochastic zeroth-order algorithms for
solving the above class of stochastic optimization problems. When the domain of
the functions is $\mathbb{R}^d$ and there are $m$ constraint functions, we
establish oracle complexities in the convex and the nonconvex
setting, in terms of $\epsilon$, the accuracy of the solutions required in
appropriately defined metrics. The established oracle complexities are, to our
knowledge, the first such results in the literature for functionally
constrained stochastic zeroth-order optimization problems. We demonstrate the
applicability of our algorithms by illustrating their superior performance on
the problems of hyperparameter tuning for sampling algorithms and neural
network training.

Comment: To appear in INFORMS Journal on Optimization.
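The noisy-evaluations-only setting can be sketched with a two-point Gaussian-smoothing gradient estimator driving a penalized descent step. The toy objective, single constraint, quadratic-penalty weight, and step size below are illustrative assumptions, not the paper's algorithms or their metrics:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy(F, x, sigma=0.01):
    # zeroth-order oracle: only a noisy function value is observable
    return F(x) + sigma * rng.standard_normal()

def zo_grad(F, x, mu=1e-2, samples=20):
    # two-point Gaussian-smoothing gradient estimator:
    # g ≈ E[(F(x + mu*u) - F(x)) / mu * u] for u ~ N(0, I)
    g = np.zeros_like(x)
    for _ in range(samples):
        u = rng.standard_normal(x.shape)
        g += (noisy(F, x + mu * u) - noisy(F, x)) / mu * u
    return g / samples

f = lambda x: np.sum((x - 1.0) ** 2)             # objective (gradient unavailable)
c = lambda x: np.sum(x) - 1.0                    # constraint c(x) <= 0
F = lambda x: f(x) + 10.0 * max(c(x), 0.0) ** 2  # quadratic-penalty surrogate

x = np.zeros(3)
for _ in range(400):
    x -= 0.02 * zo_grad(F, x)                    # stochastic zeroth-order step
print("x:", x, "f(x):", f(x), "c(x):", c(x))
```

Each iteration costs `2 * samples` noisy oracle calls and no derivative information, which is why oracle complexity, rather than iteration count alone, is the natural cost measure in this setting.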