    Constrained Efficient Global Optimization of Expensive Black-box Functions

    We study the problem of constrained efficient global optimization, where both the objective and constraints are expensive black-box functions that can be learned with Gaussian processes. We propose CONFIG (CONstrained efFIcient Global Optimization), a simple and effective algorithm to solve it. Under certain regularity assumptions, we show that our algorithm enjoys the same cumulative regret bound as that in the unconstrained case and similar cumulative constraint violation upper bounds. For commonly used Matérn and Squared Exponential kernels, our bounds are sublinear and allow us to derive a convergence rate to the optimal solution of the original constrained problem. In addition, our method naturally provides a scheme to declare infeasibility when the original black-box optimization problem is infeasible. Numerical experiments on sampled instances from the Gaussian process, artificial numerical problems, and a black-box building controller tuning problem all demonstrate the competitive performance of our algorithm. Compared to the other state-of-the-art methods, our algorithm significantly improves the theoretical guarantees, while achieving competitive empirical performance.
    Comment: Accepted to ICML 202
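    The acquisition step here rests on optimistic confidence bounds for both the objective and the constraints. As a rough illustration only (not the authors' implementation), the sketch below picks the next point by minimising a lower confidence bound of the objective over candidates whose constraint lower confidence bound is still non-positive; the kernel choice, the confidence parameter beta, and the finite candidate grid are assumptions.

```python
# Rough sketch of an optimism-based constrained acquisition step (illustrative;
# the kernel, beta, and the candidate grid are assumptions, not the authors' code).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def lcb(gp, X, beta=2.0):
    """Lower confidence bound mu - sqrt(beta) * sigma of a fitted GP."""
    mu, sigma = gp.predict(X, return_std=True)
    return mu - np.sqrt(beta) * sigma

def next_query(X_obs, f_obs, c_obs, candidates, beta=2.0):
    """Minimise the optimistic objective over optimistically feasible candidates."""
    gp_f = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X_obs, f_obs)
    gp_c = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X_obs, c_obs)
    obj, con = lcb(gp_f, candidates, beta), lcb(gp_c, candidates, beta)
    feasible = con <= 0.0
    if not feasible.any():
        # even the optimistic constraint is violated everywhere:
        # a natural trigger for declaring the problem infeasible
        return None
    return candidates[np.argmin(np.where(feasible, obj, np.inf))]
```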

    Balancing Exploration and Exploitation using Kriging Surrogate Models in Electromagnetic Design Optimization

    The balance between exploration and exploitation is an important issue when attempting to find the global minimum of an objective function. This paper describes how this balance may be carefully controlled when using Kriging surrogate models to approximate the objective function.
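    One standard device for exercising such control with a kriging model is a weighted expected-improvement criterion, where a scalar weight shifts emphasis between improving on the best observed value and sampling where the predictive uncertainty is large. The sketch below is only an illustration of that idea; the weight, and the way the kriging mean and standard deviation are obtained, are assumptions rather than details taken from the paper.

```python
# Illustrative weighted expected improvement for a minimisation problem:
# w -> 1 emphasises exploitation (improving on the best observed value),
# w -> 0 emphasises exploration (regions of large kriging variance).
import numpy as np
from scipy.stats import norm

def weighted_ei(mu, sigma, f_best, w=0.5):
    """mu, sigma: kriging predictor mean and standard deviation at candidate points."""
    sigma = np.maximum(sigma, 1e-12)         # guard against zero predictive variance
    z = (f_best - mu) / sigma
    exploit = (f_best - mu) * norm.cdf(z)    # expected gain from the predicted mean
    explore = sigma * norm.pdf(z)            # reward for predictive uncertainty
    return w * exploit + (1.0 - w) * explore
```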

    Scalarizing cost-effective multiobjective optimization algorithms made possible with kriging

    The use of kriging in cost-effective single-objective optimization is well established, and a wide variety of different criteria now exist for selecting design vectors to evaluate in the search for the global minimum. Additionally, a large number of methods exist for transforming a multi-objective optimization problem into a single-objective problem. With these two facts in mind, this paper discusses the range of kriging-assisted algorithms which are possible (and which remain to be explored) for cost-effective multi-objective optimization.
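    As a concrete example of such a transformation, a scalarisation that is often paired with kriging is the augmented Tchebycheff function, which collapses a vector of (normalised) objectives into a single value that a standard single-objective kriging criterion can then drive. The weight vector and augmentation factor below are assumptions for illustration, not choices made in the paper.

```python
# Illustrative augmented Tchebycheff scalarisation of a multi-objective problem.
# F is an (n_points, n_objectives) array of normalised objective values and
# lam a weight vector on the simplex; rho is a small augmentation factor.
import numpy as np

def augmented_tchebycheff(F, lam, rho=0.05):
    F = np.asarray(F, dtype=float)
    lam = np.asarray(lam, dtype=float)
    weighted = lam * F                              # broadcast weights over objectives
    return weighted.max(axis=1) + rho * weighted.sum(axis=1)
```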

    The consideration of surrogate model accuracy in single-objective electromagnetic design optimization

    The computational cost of evaluating the objective function in electromagnetic optimal design problems necessitates the use of cost-effective techniques. This paper describes how one popular technique, surrogate modelling, has been used in the single-objective optimization of electromagnetic devices. Three different types of surrogate model are considered, namely polynomial approximation, artificial neural networks and kriging. The importance of considering surrogate model accuracy is emphasised, and techniques used to improve accuracy for each type of model are discussed. Developments in this area outside the field of electromagnetic design optimization are also mentioned. It is concluded that surrogate model accuracy is an important factor which should be considered during an optimization search, and that developments have been made elsewhere in this area which are yet to be implemented in electromagnetic design optimization.
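    One simple, model-agnostic way to monitor surrogate accuracy during a search, whether the surrogate is a polynomial, a neural network or a kriging model, is leave-one-out cross-validation on the points already evaluated. The sketch below uses a kriging (Gaussian process) surrogate purely as an example; the kernel and the error metric are assumptions, not the specific models discussed in the paper.

```python
# Illustrative leave-one-out accuracy check for a surrogate model: refit the
# model with each observed point held out and measure how well it predicts
# the held-out response.  A kriging surrogate is used here only as an example.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def loo_rmse(X, y):
    X, y = np.asarray(X, dtype=float), np.asarray(y, dtype=float)
    errors = []
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X[mask], y[mask])
        errors.append(gp.predict(X[i:i + 1])[0] - y[i])
    return float(np.sqrt(np.mean(np.square(errors))))
```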

    Strategies for balancing exploration and exploitation in electromagnetic optimisation

    The paper focuses on the advantages and drawbacks of different strategies which may be used to assist kriging surrogate modelling when selecting multiple design vectors for evaluation at each step of an optimisation routine. The combined criteria include not only the efficiency of finding the global optimum but also the quality of the approximation of the shape of the objective function; the latter may be used to make judgements about the robustness of the optimised design.
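    One widely used heuristic for picking several design vectors per step from a single kriging model is the "constant liar": after each pick, the chosen candidate is assigned a fictitious response and the model is refitted, so subsequent picks spread out rather than clustering. The sketch below illustrates that heuristic only; it is not necessarily one of the strategies compared in the paper, and the kernel, batch size and lie value are assumptions.

```python
# Illustrative "constant liar" multi-point selection from a kriging model.
# After each pick the candidate is given a fictitious response (the current
# best value), so the refitted model discourages clustering of the batch.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(gp, X, f_best):
    mu, sigma = gp.predict(X, return_std=True)
    sigma = np.maximum(sigma, 1e-12)
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def constant_liar_batch(X_obs, y_obs, candidates, batch_size=4):
    X = np.asarray(X_obs, dtype=float).copy()
    y = np.asarray(y_obs, dtype=float).copy()
    batch = []
    for _ in range(batch_size):
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
        pick = candidates[np.argmax(expected_improvement(gp, candidates, y.min()))]
        batch.append(pick)
        X = np.vstack([X, pick])       # pretend the pick has been evaluated...
        y = np.append(y, y.min())      # ...and "lie" that it returned the best value so far
    return np.array(batch)
```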

    Bayesian optimization using sequential Monte Carlo

    We consider the problem of optimizing a real-valued continuous function f using a Bayesian approach, where the evaluations of f are chosen sequentially by combining prior information about f, which is described by a random process model, and past evaluation results. The main difficulty with this approach is being able to compute the posterior distributions of the quantities of interest which are used to choose evaluation points. In this article, we use a Sequential Monte Carlo (SMC) approach.
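    In this setting the quantities of interest (for example, the posterior probability that a given candidate is the global minimiser) have no closed form and must be approximated by sampling. The sketch below only conveys that idea: it draws joint Gaussian process sample paths over a candidate grid instead of running the paper's sequential Monte Carlo sampler, and the kernel and sample counts are assumptions.

```python
# Illustrative Monte Carlo estimate of the posterior probability that each
# candidate point is the minimiser, obtained from joint GP sample paths.
# (The paper approximates such quantities with sequential Monte Carlo; this
# sketch shows only the sampling idea, not the SMC machinery.)
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def prob_of_minimum(X_obs, y_obs, candidates, n_samples=500, seed=0):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X_obs, y_obs)
    paths = gp.sample_y(candidates, n_samples=n_samples, random_state=seed)  # (n_candidates, n_samples)
    winners = np.argmin(paths, axis=0)     # index of the minimiser on each sampled path
    return np.bincount(winners, minlength=len(candidates)) / n_samples
```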

    A warped kernel improving robustness in Bayesian optimization via random embeddings

    This work extends the Random Embedding Bayesian Optimization approach by integrating a warping of the high-dimensional subspace within the covariance kernel. The proposed warping, which relies on elementary geometric considerations, mitigates the drawbacks of the high extrinsic dimensionality while preventing the algorithm from evaluating points that give redundant information. It also alleviates constraints on bound selection for the embedded domain, thus improving robustness, as illustrated with a test case with 25 variables and intrinsic dimension 6.
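    For context, the plain random-embedding map that the proposed warped kernel builds on sends a low-dimensional point through a fixed random matrix into the high-dimensional design box, clipping coordinates that land outside it. The sketch below shows only that baseline map with assumed dimensions and bounds; the geometric warping of the kernel, which is the paper's contribution, is not reproduced here.

```python
# Baseline random-embedding map (REMBO-style) that the warped kernel modifies.
# Dimensions, bounds and the clipping convention are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
D, d = 25, 6                      # extrinsic dimension and assumed intrinsic dimension
A = rng.normal(size=(D, d))       # fixed random embedding matrix

def embed(z, lo=-1.0, hi=1.0):
    """Map a low-dimensional point z (shape (d,)) into the box [lo, hi]^D."""
    x = A @ np.asarray(z, dtype=float)
    return np.clip(x, lo, hi)

# A Bayesian optimiser then models g(z) = f(embed(z)) over a small box in R^d.
```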