
    An Entropy Search Portfolio for Bayesian Optimization

    Bayesian optimization is a sample-efficient method for black-box global optimization. However, the performance of a Bayesian optimization method depends heavily on its exploration strategy, i.e., the choice of acquisition function, and it is not clear a priori which choice will result in superior performance. While portfolio methods provide an effective, principled way of combining a collection of acquisition functions, they are often based on measures of past performance which can be misleading. To address this issue, we introduce the Entropy Search Portfolio (ESP): a novel approach to portfolio construction which is motivated by information-theoretic considerations. We show that ESP outperforms existing portfolio methods on several real and synthetic problems, including geostatistical datasets and simulated control tasks. ESP not only matches the performance of the best, but unknown, acquisition function; surprisingly, it often performs better. Finally, over a wide range of conditions we find that ESP is robust to the inclusion of poor acquisition functions.
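
    The selection step ESP describes, choosing among the candidates nominated by a portfolio of acquisition functions according to which one most reduces uncertainty about the minimizer's location, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the 1-D grid discretization, the EI/LCB portfolio, the one-sample "fantasy" update, and all constants are assumptions of the sketch.

```python
# Entropy-based portfolio selection in the spirit of ESP (illustrative only).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
f = lambda x: np.sin(3 * x) + 0.1 * x**2          # toy black-box objective

X_grid = np.linspace(-3, 3, 200).reshape(-1, 1)   # discretized search space
X = rng.uniform(-3, 3, (5, 1)); y = f(X).ravel()  # initial design

def pmin_entropy(gp, n_samples=256):
    """Entropy of the grid-argmin distribution under the GP posterior."""
    paths = gp.sample_y(X_grid, n_samples=n_samples, random_state=1)
    p = np.bincount(paths.argmin(axis=0), minlength=len(X_grid)) / n_samples
    return -(p[p > 0] * np.log(p[p > 0])).sum()

def ei(mu, sd, best):                             # expected improvement
    z = (best - mu) / sd
    return (best - mu) * norm.cdf(z) + sd * norm.pdf(z)

def lcb(mu, sd, best, kappa=2.0):                 # (negated) lower confidence bound
    return kappa * sd - mu

for it in range(10):
    gp = GaussianProcessRegressor(alpha=1e-6, normalize_y=True).fit(X, y)
    mu, sd = gp.predict(X_grid, return_std=True)
    sd = np.maximum(sd, 1e-9)                     # numerical guard
    # Each acquisition function in the portfolio nominates one candidate.
    nominees = [X_grid[np.argmax(acq(mu, sd, y.min()))] for acq in (ei, lcb)]
    # ESP-style choice: pick the nominee whose fantasized evaluation would
    # most reduce the entropy of the posterior over the minimizer's location.
    scores = []
    for xc in nominees:
        y_fant = gp.predict(xc.reshape(1, -1))    # fantasy value at the posterior mean
        gp_f = GaussianProcessRegressor(alpha=1e-6, normalize_y=True).fit(
            np.vstack([X, xc]), np.append(y, y_fant))
        scores.append(pmin_entropy(gp_f))
    x_next = nominees[int(np.argmin(scores))]
    X = np.vstack([X, x_next]); y = np.append(y, f(x_next))

print("best point found:", X[y.argmin()].item(), "f =", y.min())
```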

    Integrated aerodynamic/dynamic optimization of helicopter rotor blades

    An integrated aerodynamic/dynamic optimization procedure is used to minimize blade weight and 4-per-rev vertical hub shear for a rotor blade in forward flight. The coupling of aerodynamics and dynamics is accomplished through the inclusion of airloads which vary with the design variables during the optimization process. Both single and multiple objective functions are used in the optimization formulation. The Global Criteria Approach is used to formulate the multiple-objective optimization, and the results are compared with those obtained using single-objective formulations. Constraints are imposed on natural frequencies, autorotational inertia, and centrifugal stress. The program CAMRAD is used for the blade aerodynamic and dynamic analyses, and the program CONMIN is used for the optimization. Since the spanwise and azimuthal variations of loading are responsible for most rotor vibration and noise, the vertical airload distributions on the blade are compared before and after optimization. The total power required by the rotor to produce the same amount of thrust for a given area is also calculated before and after optimization. Results indicate that integrated optimization can significantly reduce the blade weight, the hub shear, the amplitude of the vertical airload distributions on the blade, and the total power required by the rotor.
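
    For orientation, the Global Criteria Approach mentioned above collapses the competing objectives into a single normalized criterion. A schematic statement follows; the quadratic exponent and the normalization are the commonly used choices, assumed here rather than taken from the report:

```latex
% f_1: blade weight, f_2: 4-per-rev vertical hub shear; f_i^* is the optimum
% obtained when objective i is minimized alone (single-objective run).
\min_{d}\; \sum_{i=1}^{2} \left( \frac{f_i(d) - f_i^{*}}{f_i^{*}} \right)^{2}
\quad \text{s.t. frequency, autorotational inertia, and centrifugal stress constraints.}
```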

    Robust optimization revisited via robust vector Farkas lemmas

    This paper provides characterizations of the weakly minimal elements of vector optimization problems and the global minima of scalar optimization problems posed on locally convex spaces whose objective functions are deterministic while the uncertain constraints are treated under the robust (or risk-averse) approach, i.e. requiring the feasibility of the decisions to be taken for any possible scenario. To get these optimality conditions we provide Farkas-type results characterizing the inclusion of the robust feasible set into the solution set of some system involving the objective function and possibly uncertain parameters. In the particular case of scalar convex optimization problems, we characterize the optimality conditions in terms of the convexity and closedness of an associated set regarding a suitable point. This research was partially supported by MINECO of Spain and FEDER of EU [grant number MTM2011-29064-C03-02] and by the project [B2015-28-04], “A new approach to some classes of optimization problems”, Vietnam National University – HCMC, Vietnam.
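
    The robust (worst-case) treatment of uncertain constraints described above has a compact schematic form. The scalar statement below illustrates the setting only; it is not the paper's general vector formulation on locally convex spaces:

```latex
% Feasibility is required under every scenario u in the uncertainty set U,
% so only decisions that survive the worst case are admissible.
\min_{x \in X} \; f(x)
\quad \text{s.t.} \quad g_u(x) \le 0 \;\; \forall u \in U,
\qquad F := \{\, x \in X : g_u(x) \le 0 \ \text{for all } u \in U \,\}.
```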

    Finding global minimum using filled function method

    The filled function method is an optimization method for finding global minimizers. It combines local searches, which find local minimizers, with a globalization mechanism: the construction and inclusion of an auxiliary function, called the filled function, in the algorithm. Optimizing the objective function from an initial point yields only a local minimizer. By minimizing the auxiliary function, the search is shifted to a lower basin of the objective function. The shifted point becomes the new starting point for a local search that finds the next local minimizer, where the function value is lower. The process continues until the global minimizer is reached. This research used several test functions to examine the effectiveness of the method in finding global solutions. The results show that the method works successfully, and further research directions are discussed.
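
    The alternation the abstract describes, local search followed by minimization of a filled function to escape the current basin, can be sketched as below. This is a rough illustration under stated assumptions: the particular (Ge-type) filled function, the toy objective, and the parameter values r and rho are choices of this sketch, not the thesis's construction.

```python
# A minimal sketch of the filled-function loop (illustrative only).
import numpy as np
from scipy.optimize import minimize

f = lambda x: 0.1 * x[0]**2 - np.cos(3 * x[0])  # toy multimodal objective

def filled(x, x_star, r=10.0, rho=2.0):
    """One classical (Ge-type) filled function; r and rho are tuning parameters.
    The current minimizer x_star becomes a maximizer of this function, so
    minimizing it drives the search out of the current basin."""
    return np.exp(-np.sum((x - x_star) ** 2) / rho**2) / (r + f(x))

best = minimize(f, np.array([4.0])).x           # local search: first local minimizer
for _ in range(5):
    # Minimize the filled function starting near the current minimizer;
    # its minimizer should land in a lower basin of f.
    x_shift = minimize(lambda x: filled(x, best), best + 0.1).x
    cand = minimize(f, x_shift).x               # local search from the shifted point
    if f(cand) < f(best) - 1e-8:
        best = cand                             # lower basin found; continue
    else:
        break                                   # no improvement: stop
print("approximate global minimizer:", best, "f =", f(best))
```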

    Input Warping for Bayesian Optimization of Non-Stationary Functions

    Bayesian optimization has proven to be a highly effective methodology for the global optimization of unknown, expensive and multimodal functions. The ability to accurately model distributions over functions is critical to the effectiveness of Bayesian optimization. Although Gaussian processes provide a flexible prior over functions, there are various classes of functions that remain difficult to model. One of the most frequently occurring of these is the class of non-stationary functions. The optimization of the hyperparameters of machine learning algorithms is a problem domain in which parameters are often manually transformed a priori, for example by optimizing in “log-space”, to mitigate the effects of spatially-varying length scale. We develop a methodology for automatically learning a wide family of bijective transformations or warpings of the input space using the Beta cumulative distribution function. We further extend the warping framework to multi-task Bayesian optimization so that multiple tasks can be warped into a jointly stationary space. On a set of challenging benchmark optimization tasks, we observe that the inclusion of warping greatly improves on the state-of-the-art, producing better results faster and more reliably.
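
    The warping itself is a one-line transform. A minimal sketch follows; the shape parameters below are illustrative, whereas in the paper's setting they are learned jointly with the Gaussian process hyperparameters:

```python
# Beta-CDF input warping for inputs scaled to [0, 1].
import numpy as np
from scipy.stats import beta

def warp(x, a, b):
    """Bijectively warp x in [0, 1]; a, b > 0 control the shape of the warping."""
    return beta.cdf(x, a, b)

x = np.linspace(0.0, 1.0, 5)
print(warp(x, 0.5, 1.0))  # a < 1, b = 1 stretches the region near 0, mimicking a log transform
```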

    Multimodal estimation of distribution algorithms

    Taking advantage of estimation of distribution algorithms (EDAs) in preserving high diversity, this paper proposes a multimodal EDA. Integrated with clustering strategies for crowding and speciation, two versions of this algorithm are developed, which operate at the niche level. These two algorithms are then equipped with three distinctive techniques: 1) a dynamic cluster sizing strategy; 2) an alternating use of Gaussian and Cauchy distributions to generate offspring; and 3) an adaptive local search. The dynamic cluster sizing affords a potential balance between exploration and exploitation and reduces the sensitivity to the cluster size in the niching methods. Taking advantage of both Gaussian and Cauchy distributions, offspring are generated at the niche level by alternating between the two; this alternation also offers a potential balance between exploration and exploitation. Further, solution accuracy is enhanced through a new local search scheme conducted probabilistically around the seeds of niches, with probabilities determined self-adaptively according to the fitness values of these seeds. Extensive experiments conducted on 20 benchmark multimodal problems confirm that both algorithms achieve competitive performance compared with several state-of-the-art multimodal algorithms, a finding supported by nonparametric tests. In particular, the proposed algorithms are very promising for complex problems with many local optima.
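
    The Gaussian/Cauchy alternation at the niche level can be sketched in a few lines. This is an illustration only: the crowding/speciation clustering, dynamic cluster sizing, and adaptive local search of the paper are omitted, and the toy objective and alternation schedule are assumptions.

```python
# Niche-level sampling that alternates Gaussian and Cauchy distributions.
import numpy as np

rng = np.random.default_rng(0)
f = lambda X: np.sum(X**2, axis=1)              # toy objective, one niche shown

def sample_offspring(niche, use_cauchy):
    """Estimate a niche-level model (mean/std per dimension) and sample from it."""
    mu, sigma = niche.mean(axis=0), niche.std(axis=0) + 1e-12
    if use_cauchy:
        # Cauchy's heavy tails push samples farther out: exploration.
        return mu + sigma * rng.standard_cauchy(niche.shape)
    return rng.normal(mu, sigma, niche.shape)   # Gaussian: exploitation

niche = rng.uniform(-1, 1, (10, 2))             # one cluster from crowding/speciation
for gen in range(10):
    off = sample_offspring(niche, use_cauchy=(gen % 2 == 1))  # alternate distributions
    both = np.vstack([niche, off])
    niche = both[np.argsort(f(both))[:10]]      # keep the 10 best (niche-level selection)
print("best in niche:", f(niche).min())
```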

    Differential evolution with an evolution path: a DEEP evolutionary algorithm

    Utilizing cumulative correlation information already present in an evolutionary process, this paper proposes a predictive approach to the reproduction of new individuals for differential evolution (DE) algorithms. DE uses a distributed model (DM) to generate new individuals, which is relatively explorative, whilst evolution strategy (ES) uses a centralized model (CM) to generate offspring, which through adaptation retains a convergence momentum. This paper adopts a key feature of the CM of a covariance matrix adaptation ES, the cumulatively learned evolution path (EP), to formulate a new evolutionary algorithm (EA) framework, termed DEEP, standing for DE with an EP. Rather than mechanistically combining a CM-based and a DM-based algorithm, the DEEP framework offers the advantages of both models and hence substantially enhances performance. Under this architecture, a self-adaptation mechanism can be built inherently into a DEEP algorithm, easing the task of predetermining algorithm control parameters. Two DEEP variants are developed and illustrated in the paper. Experiments on the CEC'13 test suites and two practical problems demonstrate that the DEEP algorithms offer promising results compared with the original DEs and other relevant state-of-the-art EAs.
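
    One way to picture the idea, accumulating an evolution path from generation-to-generation mean shifts (as CMA-ES does) and folding it into DE's mutation, is sketched below. The weighting of the path term, the DE/rand/1 base strategy, and all constants are assumptions of this sketch, not the paper's exact DEEP update rules.

```python
# DE mutation augmented with a cumulatively learned evolution path.
import numpy as np

rng = np.random.default_rng(1)
f = lambda X: np.sum(X**2, axis=-1)             # toy sphere objective

NP, D, c_path = 20, 5, 0.3                      # population size, dims, path decay
pop = rng.uniform(-5, 5, (NP, D))
path = np.zeros(D)                              # cumulative evolution path
mean_prev = pop.mean(axis=0)

for gen in range(200):
    # DE/rand/1 mutation with an extra pull along the evolution path.
    idx = np.array([rng.choice(NP, 3, replace=False) for _ in range(NP)])
    a, b, c = pop[idx[:, 0]], pop[idx[:, 1]], pop[idx[:, 2]]
    mutant = a + 0.5 * (b - c) + 0.5 * path     # path term adds convergence momentum
    cross = rng.random((NP, D)) < 0.9           # binomial crossover
    trial = np.where(cross, mutant, pop)
    better = f(trial) < f(pop)
    pop[better] = trial[better]                 # greedy one-to-one DE selection
    mean_now = pop.mean(axis=0)
    path = (1 - c_path) * path + c_path * (mean_now - mean_prev)  # CMA-ES-style accumulation
    mean_prev = mean_now

print("best f:", f(pop).min())
```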

    Genetic learning particle swarm optimization

    Social learning in particle swarm optimization (PSO) helps collective efficiency, whereas individual reproduction in the genetic algorithm (GA) facilitates global effectiveness. This observation has recently led to hybridizing PSO with GA for performance enhancement. However, existing work uses a mechanistic parallel superposition, and research has shown that constructing superior exemplars for PSO is more effective. Hence, this paper first develops a new framework for organically hybridizing PSO with another optimization technique for “learning.” This leads to a generalized “learning PSO” paradigm, the *L-PSO. The paradigm is composed of two cascading layers: the first generates exemplars, and the second updates particles as in a normal PSO algorithm. Using genetic evolution to breed promising exemplars for PSO, a specific novel *L-PSO algorithm is proposed, termed genetic learning PSO (GL-PSO). In particular, genetic operators are used to generate the exemplars from which particles learn and, in turn, the historical search information of particles guides the evolution of the exemplars. By performing crossover, mutation, and selection on the historical information of particles, the constructed exemplars are not only well diversified but also of high quality. Under such guidance, both the global search ability and the search efficiency of PSO are enhanced. The proposed GL-PSO is tested on 42 benchmark functions widely adopted in the literature. Experimental results verify the effectiveness, efficiency, robustness, and scalability of GL-PSO.
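
    The exemplar-construction layer, applying crossover, mutation, and selection to particles' historical information, can be sketched as below. This is an illustrative simplification under assumptions: the concrete operators, rates, and PSO coefficients are not taken from the paper.

```python
# GL-PSO-style two layers: genetic exemplar construction on top of a PSO update.
import numpy as np

rng = np.random.default_rng(2)
f = lambda X: np.sum(X**2, axis=-1)               # toy objective

NP, D = 20, 5
pos = rng.uniform(-5, 5, (NP, D)); vel = np.zeros((NP, D))
pbest = pos.copy(); pbest_f = f(pos)

for it in range(100):
    gbest = pbest[pbest_f.argmin()]
    # Layer 1 -- crossover: mix each particle's pbest with gbest dimension-wise.
    r = rng.random((NP, D))
    exemplar = r * pbest + (1 - r) * gbest
    # Mutation: randomly reset a few exemplar dimensions.
    mut = rng.random((NP, D)) < 0.1
    exemplar[mut] = rng.uniform(-5, 5, mut.sum())
    # Selection: keep an exemplar only if it beats the particle's pbest.
    keep = f(exemplar) < pbest_f
    guide = np.where(keep[:, None], exemplar, pbest)
    # Layer 2 -- standard PSO update, learning from the constructed exemplar.
    vel = 0.7 * vel + 1.5 * rng.random((NP, D)) * (guide - pos)
    pos = pos + vel
    fit = f(pos)
    improved = fit < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], fit[improved]

print("best f:", pbest_f.min())
```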

    Orthogonal learning particle swarm optimization

    Particle swarm optimization (PSO) relies on its learning strategy to guide its search direction. Traditionally, each particle utilizes its own historical best experience and its neighborhood's best experience through a linear summation. Such a learning strategy is easy to use but inefficient when searching complex problem spaces. Hence, designing learning strategies that utilize previous search information (experience) more efficiently has become one of the most salient and active PSO research topics. This paper proposes an orthogonal learning (OL) strategy for PSO that discovers more of the useful information lying in the above two experiences via orthogonal experimental design. We name the resulting algorithm orthogonal learning particle swarm optimization (OLPSO). The OL strategy can guide particles to fly in better directions by constructing a more promising and efficient exemplar, and it can be applied to PSO with any topological structure. In this paper it is applied to both the global and local versions of PSO, yielding the OLPSO-G and OLPSO-L algorithms, respectively. The new learning strategy and the new algorithms are tested on a set of 16 benchmark functions and compared with other PSO algorithms and some state-of-the-art evolutionary algorithms. The experimental results illustrate the effectiveness and efficiency of the proposed learning strategy and algorithms. The comparisons show that OLPSO significantly improves the performance of PSO, offering faster global convergence, higher solution quality, and stronger robustness.
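
    Orthogonal experimental design here means testing structured combinations of the two guidance vectors dimension by dimension rather than blending them linearly. A minimal sketch using the standard two-level orthogonal array L8(2^7) follows; the simple best-of-trials selection is a simplification of this sketch, since full OLPSO additionally performs factor analysis over the trial results.

```python
# Orthogonal-experimental-design exemplar construction (illustrative only).
import numpy as np

# The standard two-level orthogonal array L8(2^7): 8 trials, up to 7 factors.
L8 = np.array([
    [0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 1, 1, 1, 1, 0, 0],
    [1, 0, 1, 0, 1, 0, 1],
    [1, 0, 1, 1, 0, 1, 0],
    [1, 1, 0, 0, 1, 1, 0],
    [1, 1, 0, 1, 0, 0, 1],
])

def build_exemplar(pbest_i, gbest, f):
    """Try orthogonal combinations of pbest/gbest dimensions and keep the best.
    Assumes D <= 7 so the L8 array suffices."""
    D = len(pbest_i)
    design = L8[:, :D]                    # 0 -> take pbest dim, 1 -> take gbest dim
    trials = np.where(design == 0, pbest_i, gbest)
    return trials[np.argmin([f(t) for t in trials])]

f = lambda x: np.sum((x - 1.0) ** 2)      # toy objective
pbest_i = np.array([0.0, 2.0, 1.0, -1.0])
gbest = np.array([1.0, 0.5, 0.0, 1.0])
print(build_exemplar(pbest_i, gbest, f))
```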

    KKT reformulation and necessary conditions for optimality in nonsmooth bilevel optimization

    For a long time, the bilevel programming problem has essentially been considered a special case of mathematical programs with equilibrium constraints (MPECs), in particular where the so-called KKT reformulation is in question. Recently, though, this widespread belief was shown to be false in general. In this paper, other aspects of the difference between the two problems are revealed as we consider the KKT approach for the nonsmooth bilevel program. It turns out that the new inclusion (constraint), which appears as a consequence of the partial subdifferential of the lower-level Lagrangian (PSLLL), places the KKT reformulation of the nonsmooth bilevel program in a new class of mathematical programs with both set-valued and complementarity constraints. While highlighting some new features of this problem, we attempt to establish close links with the standard optimistic bilevel program. Moreover, we discuss possible natural extensions of the C-, M-, and S-stationarity concepts. Most of the results rely on a coderivative estimate for the PSLLL, which we also provide in this paper.
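
    For orientation, the KKT reformulation in question replaces the lower-level problem by its optimality conditions. A schematic statement follows in standard notation; the paper's precise nonsmooth setting works with the partial subdifferential of the lower-level Lagrangian in the inclusion below:

```latex
% Bilevel program with lower level  min_y f(x,y) s.t. g(x,y) <= 0,
% replaced by its KKT system; the inclusion is the set-valued constraint
% and the last line carries the complementarity constraint.
\begin{aligned}
\min_{x,\,y,\,\lambda} \quad & F(x,y) \\
\text{s.t.} \quad & 0 \in \partial_y \bigl( f(x,y) + \lambda^{\top} g(x,y) \bigr), \\
& \lambda \ge 0, \quad g(x,y) \le 0, \quad \lambda^{\top} g(x,y) = 0.
\end{aligned}
```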