
    On strong homogeneity of two global optimization algorithms based on statistical models of multimodal objective functions

    The implementation of global optimization algorithms using the arithmetic of infinity is considered. A relatively simple implementation is proposed for algorithms that possess the introduced property of strong homogeneity. It is shown that the P-algorithm and the one-step Bayesian algorithm are strongly homogeneous. Comment: 11 pages, 1 figure
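    For context, one common statement of the homogeneity property discussed in this abstract is sketched below in LaTeX; the notation (a trial-point map $x_{n+1}(\cdot)$, scaling $\lambda$, shift $c$) is assumed for illustration rather than quoted from the paper.

```latex
% Hedged sketch: a common formulation of strong homogeneity.
% An algorithm is strongly homogeneous if rescaling and shifting the
% objective does not change the sequence of trial points it generates.
\[
  x_{n+1}\bigl(\lambda f + c\bigr) \;=\; x_{n+1}(f),
  \qquad \lambda > 0,\; c \in \mathbb{R},\; n = 1, 2, \dots
\]
```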

    A hybrid of Bayesian-based global search with Hooke–Jeeves local refinement for multi-objective optimization problems

    The proposed multi-objective optimization algorithm hybridizes random global search with a local refinement algorithm. The global search mimics a Bayesian multi-objective optimization algorithm: the site at which the objective functions are next evaluated is selected by randomized simulation of the bi-objective selection performed by the Bayesian-based algorithm. The advantage of the new algorithm is that it avoids the inner complexity of Bayesian algorithms. A version of the Hooke–Jeeves algorithm is adapted for local refinement of the approximation of the Pareto front. The developed hybrid algorithm is tested under conditions previously applied to test other Bayesian algorithms so that performance can be compared. Further experiments assess the efficiency of the proposed algorithm under conditions where previous versions of Bayesian algorithms were not appropriate because of the number of objectives and/or the dimensionality of the decision space.
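    The abstract describes the hybrid only at a high level, so the Python sketch below illustrates one plausible arrangement: a random global phase that retains nondominated points as the Pareto-front approximation, followed by Hooke–Jeeves refinement of randomly weighted scalarizations. The function names, the Dirichlet-weighted scalarization, and all parameter values are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def hooke_jeeves(f, x0, step=0.1, shrink=0.5, tol=1e-4):
    """Basic Hooke-Jeeves pattern search for a scalar objective.

    Used here only as a local-refinement helper; the paper adapts the
    method to the multi-objective setting in a more involved way.
    """
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = x.copy()
                trial[i] += d
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= shrink
    return x, fx

def hybrid_pareto_search(objectives, lower, upper, n_global=200, n_refine=10, seed=0):
    """Illustrative hybrid: random global sampling, then Hooke-Jeeves
    refinement of nondominated points on random weighted scalarizations
    (a stand-in for the randomized Bayesian-style selection in the paper)."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)

    # Global phase: evaluate all objectives at random sites in the box.
    X = rng.uniform(lower, upper, size=(n_global, len(lower)))
    F = np.array([[obj(x) for obj in objectives] for x in X])

    # Keep the nondominated subset as the current Pareto-front approximation.
    nondom = [i for i in range(n_global)
              if not any(np.all(F[j] <= F[i]) and np.any(F[j] < F[i])
                         for j in range(n_global))]

    # Local phase: refine a few nondominated points on random weighted sums.
    refined = []
    for i in rng.choice(nondom, size=min(n_refine, len(nondom)), replace=False):
        w = rng.dirichlet(np.ones(len(objectives)))
        scalarized = lambda x, w=w: float(np.dot(w, [obj(x) for obj in objectives]))
        x_ref, _ = hooke_jeeves(scalarized, X[i])
        refined.append(np.clip(x_ref, lower, upper))
    return X[nondom], np.array(refined)

# Usage on a toy bi-objective problem.
f1 = lambda x: float(np.sum(x**2))
f2 = lambda x: float(np.sum((x - 1.0)**2))
front, refined = hybrid_pareto_search([f1, f2], lower=[-2, -2], upper=[2, 2])
print(front.shape, refined.shape)
```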

    Optimistic Optimization of Gaussian Process Samples

    Bayesian optimization is a popular formalism for global optimization, but its computational costs limit it to expensive-to-evaluate functions. A competing, computationally more efficient global optimization framework is optimistic optimization, which exploits prior knowledge about the geometry of the search space in the form of a dissimilarity function. We investigate to which degree the conceptual advantages of Bayesian optimization can be combined with the computational efficiency of optimistic optimization. By mapping the kernel to a dissimilarity, we obtain an optimistic optimization algorithm for the Bayesian optimization setting with a run-time of up to $\mathcal{O}(N \log N)$. As a high-level take-away we find that, when using stationary kernels on objectives of relatively low evaluation cost, optimistic optimization can be strongly preferable over Bayesian optimization, while for strongly coupled and parametric models, good implementations of Bayesian optimization can perform much better, even at low evaluation cost. We argue that there is a new research domain between geometric and probabilistic search, i.e., methods that run drastically faster than traditional Bayesian optimization while retaining some of its crucial functionality. Comment: 10 pages, 6 figures
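    The key step in this abstract, mapping a kernel to a dissimilarity, can be illustrated with the canonical kernel-induced (pseudo-)metric d(x, x') = sqrt(2(k(0) - k(x - x'))) for a stationary kernel. The Python sketch below uses an RBF kernel and this standard mapping; whether the paper uses exactly this construction is an assumption made for illustration.

```python
import numpy as np

def rbf_kernel(r, lengthscale=1.0, variance=1.0):
    """Stationary RBF (squared-exponential) kernel as a function of the
    displacement r = x - x'."""
    r = np.atleast_1d(r)
    return variance * np.exp(-0.5 * np.sum(r**2) / lengthscale**2)

def kernel_induced_dissimilarity(x, x_prime, kernel=rbf_kernel):
    """Canonical (pseudo-)metric induced by a stationary kernel:
    d(x, x') = sqrt(2 * (k(0) - k(x - x'))).

    This is the standard kernel-induced distance; its use here as the
    optimistic-optimization dissimilarity is an illustrative assumption.
    """
    x, x_prime = np.asarray(x, float), np.asarray(x_prime, float)
    k0 = kernel(np.zeros_like(x))
    return float(np.sqrt(max(2.0 * (k0 - kernel(x - x_prime)), 0.0)))

# Example: the dissimilarity grows with distance and saturates at sqrt(2 * variance).
print(kernel_induced_dissimilarity([0.0], [0.1]))
print(kernel_induced_dissimilarity([0.0], [5.0]))
```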

    Calibrated Uncertainty Estimation Improves Bayesian Optimization

    Bayesian optimization is a sequential procedure for finding the global optimum of black-box functions without a priori knowledge of their true form. Good uncertainty estimates over the shape of the objective function are essential for guiding the optimization process. However, these estimates can be inaccurate if the true objective function violates assumptions made by its model (e.g., Gaussianity). This paper studies which uncertainties are needed in Bayesian optimization models and argues that ideal uncertainties should be calibrated, i.e., an 80% predictive interval should contain the true outcome 80% of the time. We propose a simple algorithm for enforcing this property and show that it enables Bayesian optimization to arrive at the global optimum in fewer steps. We provide theoretical insights into the role of calibrated uncertainties and demonstrate the improved performance of our method on standard benchmark functions and hyperparameter optimization tasks.
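    The calibration criterion stated in the abstract (an 80% predictive interval should contain the true outcome 80% of the time) can be checked and adjusted empirically. The Python sketch below, in the spirit of quantile recalibration, is an illustrative assumption rather than the paper's algorithm; the helper names and the Gaussian predictive form are chosen only for the example.

```python
import numpy as np
from scipy.stats import norm

def empirical_coverage(y_true, mu, sigma, level=0.8):
    """Fraction of observations inside the central `level` Gaussian
    predictive interval [mu - z*sigma, mu + z*sigma]."""
    z = norm.ppf(0.5 + level / 2.0)
    inside = np.abs(y_true - mu) <= z * sigma
    return inside.mean()

def recalibration_map(y_true, mu, sigma):
    """Map each nominal quantile level p to the empirical frequency with
    which the model's p-quantile exceeds the observed value (a simple
    quantile-recalibration scheme; the paper's details are not reproduced)."""
    pit = norm.cdf(y_true, loc=mu, scale=sigma)       # probability integral transform
    grid = np.linspace(0.0, 1.0, 101)
    empirical = np.array([(pit <= p).mean() for p in grid])
    return lambda p: np.interp(p, grid, empirical)    # calibrated quantile level

# Usage: an overconfident model (sigma too small) shows coverage well below 0.8.
rng = np.random.default_rng(0)
y = rng.normal(0.0, 1.0, size=1000)
mu, sigma = np.zeros(1000), 0.5 * np.ones(1000)       # overconfident predictions
print(empirical_coverage(y, mu, sigma, level=0.8))    # well below 0.8
cal = recalibration_map(y, mu, sigma)
print(cal(0.9) - cal(0.1))                            # empirical mass in the nominal 80% band
```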