
    An informational approach to the global optimization of expensive-to-evaluate functions

    In many global optimization problems motivated by engineering applications, the number of function evaluations is severely limited by time or cost. To ensure that each evaluation contributes to the localization of good candidates for the role of global minimizer, a sequential choice of evaluation points is usually carried out. In particular, when Kriging is used to interpolate past evaluations, the uncertainty associated with the lack of information on the function can be expressed and used to compute a number of criteria accounting for the interest of an additional evaluation at any given point. This paper introduces minimizer entropy as a new Kriging-based criterion for the sequential choice of points at which the function should be evaluated. Based on \emph{stepwise uncertainty reduction}, it accounts for the informational gain on the minimizer expected from a new evaluation. The criterion is approximated using conditional simulations of the Gaussian process model behind Kriging, and then inserted into an algorithm similar in spirit to the \emph{Efficient Global Optimization} (EGO) algorithm. An empirical comparison is carried out between our criterion and \emph{expected improvement}, one of the reference criteria in the literature. Experimental results indicate major evaluation savings over EGO. Finally, the method, which we call IAGO (for Informational Approach to Global Optimization), is extended to robust optimization problems, where both the factors to be tuned and the function evaluations are corrupted by noise.
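
    To make the abstract concrete, here is a minimal sketch of the minimizer-entropy idea: conditional simulations of the fitted Gaussian process give an empirical distribution of the argmin over a candidate grid, and the next evaluation point is chosen to reduce its entropy stepwise. The scikit-learn model, the grid discretization, and the crude Monte Carlo averaging over hypothetical outcomes are all assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def minimizer_entropy(gp, grid, n_sims=500, seed=0):
    """Entropy of the empirical argmin distribution over a candidate grid,
    estimated from conditional simulations of the fitted GP."""
    sims = gp.sample_y(grid, n_samples=n_sims, random_state=seed)  # (n, n_sims)
    p = np.bincount(np.argmin(sims, axis=0), minlength=len(grid)) / n_sims
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def iago_step(gp, X, y, grid, candidates, n_outcomes=8, seed=0):
    """One step of stepwise uncertainty reduction: pick the candidate whose
    evaluation is expected to shrink the minimizer entropy the most."""
    rng = np.random.default_rng(seed)
    best_x, best_h = None, np.inf
    for x in candidates:
        mu, sd = gp.predict(x.reshape(1, -1), return_std=True)
        h = 0.0
        for _ in range(n_outcomes):        # average over hypothetical outcomes
            y_hyp = rng.normal(mu[0], sd[0])
            gp_hyp = GaussianProcessRegressor(kernel=gp.kernel_, optimizer=None)
            gp_hyp.fit(np.vstack([X, x]), np.append(y, y_hyp))
            h += minimizer_entropy(gp_hyp, grid, n_sims=200) / n_outcomes
        if h < best_h:
            best_x, best_h = x, h
    return best_x
```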

    Scalarizing cost-effective multiobjective optimization algorithms made possible with kriging

    The use of kriging in cost-effective single-objective optimization is well established, and a wide variety of criteria now exist for selecting design vectors to evaluate in the search for the global minimum. Additionally, a large number of methods exist for transforming a multi-objective optimization problem into a single-objective problem. With these two facts in mind, this paper discusses the range of kriging-assisted algorithms which are possible (and which remain to be explored) for cost-effective multi-objective optimization.
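
    As an illustration of one scalarizing route such a survey covers, the sketch below collapses several objectives into a single target using a random-weight augmented Tchebycheff function, in the spirit of ParEGO (Knowles, 2006); any single-objective kriging criterion can then be applied to the result. The function name, the weight distribution, and the normalization of objectives to [0, 1] are assumptions.

```python
import numpy as np

def tchebycheff(Y, w, rho=0.05):
    """Augmented Tchebycheff scalarization of normalized objectives Y (n, m)."""
    wY = w * Y                    # broadcast the weights across the n designs
    return wY.max(axis=1) + rho * wY.sum(axis=1)

# usage: random weights turn a 2-objective problem into one scalar target per
# design, ready for a kriging model and any single-objective criterion
Y = np.random.rand(20, 2)                    # 20 designs, 2 objectives in [0, 1]
w = np.random.dirichlet(np.ones(2))          # random weights on the simplex
s = tchebycheff(Y, w)
```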

    Active Bayesian Optimization: Minimizing Minimizer Entropy

    The ultimate goal of optimization is to find the minimizer of a target function. However, typical criteria for active optimization often ignore the uncertainty about the minimizer. We propose a novel criterion for global optimization and an associated sequential active learning strategy using Gaussian processes. Our criterion is the reduction of uncertainty in the posterior distribution of the function minimizer. It can also flexibly incorporate multiple global minimizers. We implement a tractable approximation of the criterion and demonstrate that it locates the global minimizer more accurately than conventional Bayesian optimization criteria.
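
    For reference, the conventional baseline such a criterion is compared against fits in a few lines; the sketch below gives classical expected improvement for minimization (a standard textbook formula, with hypothetical gp, grid, and y_best objects), while the minimizer-entropy rule itself is sketched after the first abstract above.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(gp, grid, y_best):
    """Classical EI for minimization: the conventional baseline criterion."""
    mu, sd = gp.predict(grid, return_std=True)
    z = (y_best - mu) / np.maximum(sd, 1e-12)
    return (y_best - mu) * norm.cdf(z) + sd * norm.pdf(z)

# selection rules: EI evaluates grid[np.argmax(expected_improvement(...))],
# whereas the entropy criterion picks the point whose evaluation most reduces
# the entropy of the posterior argmin distribution
```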

    The Informational Approach to Global Optimization in presence of very noisy evaluation results. Application to the optimization of renewable energy integration strategies

    We consider the problem of global optimization of a function f from very noisy evaluations. We adopt a Bayesian sequential approach: evaluation points are chosen so as to reduce the uncertainty about the position of the global optimum of f, as measured by the entropy of the corresponding random variable (Informational Approach to Global Optimization, Villemonteix et al., 2009). When evaluations are very noisy, the error arising from the estimation of the entropy using conditional simulations becomes non-negligible compared to its variations over the input domain. We propose a solution to this problem: evaluation points are chosen as if several evaluations were going to be made at these points. The method is applied to the optimization of a strategy for the integration of renewable energies into an electrical distribution network.
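
    A minimal sketch of the paper's device, under the assumption of Gaussian noise with known variance sigma2: a candidate point is scored as if r evaluations were to be averaged there, so the hypothetical observation carries noise variance sigma2 / r, which sharpens the hypothetical posterior enough for the Monte Carlo entropy estimate to be usable. The scikit-learn model and all names are illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def gp_with_replication(X, y, x_new, y_hyp, sigma2, r):
    """Condition a GP as if the hypothetical value y_hyp at x_new were the
    average of r noisy evaluations, i.e. with noise variance sigma2 / r."""
    alpha = np.append(np.full(len(X), sigma2), sigma2 / r)   # per-point noise
    gp = GaussianProcessRegressor(kernel=RBF(1.0), alpha=alpha, optimizer=None)
    return gp.fit(np.vstack([X, x_new]), np.append(y, y_hyp))

# the entropy-reduction criterion is then evaluated on the returned model,
# exactly as in the single-evaluation sketch after the first abstract above
```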

    Bayesian Subset Simulation: a kriging-based subset simulation algorithm for the estimation of small probabilities of failure

    The estimation of small probabilities of failure from computer simulations is a classical problem in engineering, and the Subset Simulation algorithm proposed by Au & Beck (Prob. Eng. Mech., 2001) has become one of the most popular methods to solve it. Subset Simulation has been shown to provide significant savings in the number of simulations needed to achieve a given estimation accuracy, compared with many other Monte Carlo approaches. The number of simulations remains quite high, however, and the method can be impractical for applications involving an expensive-to-evaluate computer model. We propose a new algorithm, called Bayesian Subset Simulation, that takes the best from the Subset Simulation algorithm and from sequential Bayesian methods based on kriging (also known as Gaussian process modeling). The performance of this new algorithm is illustrated using a test case from the literature, with promising results. In addition, we provide a numerical study of the statistical properties of the estimator.
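
    The decomposition the paper builds on can be sketched compactly: a small failure probability P(f(X) > t) is written as a product of larger conditional probabilities over adaptively chosen intermediate thresholds. The toy below uses the true f and a whole-vector Metropolis walk (the original algorithm uses component-wise MCMC, and the Bayesian variant would substitute a kriging surrogate for f at each level); the prior, step size, and level cap are assumptions.

```python
import numpy as np

def subset_simulation(f, t, dim=2, n=2000, p0=0.1, seed=0):
    """Toy subset simulation for P(f(X) > t) with X ~ N(0, I)."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, dim))
    y = f(X)
    prob = 1.0
    for _ in range(50):                       # safety cap on the number of levels
        t_i = np.quantile(y, 1.0 - p0)        # intermediate threshold
        if t_i >= t:                          # final level reached
            return prob * np.mean(y > t)
        prob *= p0                            # P(F_{i+1} | F_i) ~ p0 by design
        X, y = X[y > t_i], y[y > t_i]         # keep the conditional sample
        while len(X) < n:                     # regrow it with a Metropolis walk
            prop = X + 0.5 * rng.standard_normal(X.shape)
            y_prop = f(prop)
            log_r = 0.5 * (np.sum(X**2, axis=1) - np.sum(prop**2, axis=1))
            move = (np.log(rng.random(len(X))) < log_r) & (y_prop > t_i)
            X = np.where(move[:, None], prop, X)
            y = np.where(move, y_prop, y)
            X, y = np.vstack([X, X]), np.concatenate([y, y])   # duplicate chains
        X, y = X[:n], y[:n]
    return prob * np.mean(y > t)

# usage: a chi-square tail, P(||X||^2 > 30) with X ~ N(0, I_2), about 3.1e-7
# print(subset_simulation(lambda X: np.sum(X**2, axis=1), t=30.0))
```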

    Differentiating the multipoint Expected Improvement for optimal batch design

    This work deals with the parallel optimization of expensive objective functions modeled as sample realizations of Gaussian processes. The study is formalized as a Bayesian optimization problem, or continuous multi-armed bandit problem, where a batch of q > 0 arms is pulled in parallel at each iteration. Several algorithms have been developed for choosing batches by trading off exploitation and exploration. To date, the maximum Expected Improvement (EI) and Upper Confidence Bound (UCB) selection rules appear to be the most prominent approaches to batch selection. Here, we build upon recent work on the multipoint Expected Improvement criterion, for which an analytic expansion relying on Tallis' formula was recently established. Since the computational burden of this selection rule remains an issue in applications, we derive a closed-form expression for the gradient of the multipoint Expected Improvement, which facilitates its maximization using gradient-based ascent algorithms. Substantial computational savings are demonstrated in applications. In addition, our algorithms are tested numerically and compared to state-of-the-art UCB-based batch-sequential algorithms. Combining UCB-based starting designs with gradient-based local optimization of EI thus appears to be a sound option for batch design in distributed Gaussian process optimization.
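
    The paper's closed-form gradient, resting on Tallis' formula, is too long to reproduce here. As a plainly-labeled stand-in, the sketch below estimates the multipoint EI by Monte Carlo with fixed base samples, which makes the estimate a smooth function of the batch and hence amenable to the same kind of gradient-based ascent (here through a generic optimizer with finite differences). The fitted gp, batch size q, dimension d, and starting batch are assumed set up elsewhere.

```python
import numpy as np
from scipy.optimize import minimize

def qei_mc(X, gp, y_best, Z):
    """Monte Carlo multipoint EI of a batch X (q, d), from fixed standard
    normal draws Z (q, m) so the estimate is smooth in X."""
    mu, cov = gp.predict(X, return_cov=True)
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(X)))
    Y = mu[:, None] + L @ Z                   # m joint posterior draws of the batch
    return np.mean(np.maximum(y_best - Y.min(axis=0), 0.0))

# usage, with a fitted gp, batch size q, dimension d and a start x0 (q*d,):
# Z = np.random.standard_normal((q, 512))
# res = minimize(lambda v: -qei_mc(v.reshape(q, d), gp, y_best, Z),
#                x0, method="L-BFGS-B")
```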