35 research outputs found

    EDA++: Estimation of Distribution Algorithms with Feasibility Conserving Mechanisms for Constrained Continuous Optimization

    Handling non-linear constraints in continuous optimization is challenging, and even finding a feasible solution is often difficult. Over the past few decades, various techniques have been developed to deal with linear and non-linear constraints, yet reaching feasible solutions remains a hurdle for most of them. In this paper, we adopt the framework of Estimation of Distribution Algorithms (EDAs) and propose a new algorithm (EDA++) equipped with mechanisms for dealing with non-linear constraints. These mechanisms are associated with different stages of the EDA, including seeding, learning and mapping. We show that, besides increasing the quality of the solutions in terms of objective values, the feasibility of the final solutions is guaranteed whenever an initial population of feasible solutions is seeded to the algorithm. The EDA with the proposed mechanisms is applied to two suites of benchmark problems for constrained continuous optimization, and its performance is compared with state-of-the-art algorithms and constraint-handling methods. The experiments confirm the speed, robustness and efficiency of the proposed algorithm in tackling various problems with linear and non-linear constraints.
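    The abstract does not give implementation details, so the following is only a minimal Python sketch of the general idea: a Gaussian EDA whose initial population is seeded with feasible points and whose infeasible samples are mapped back toward a feasible anchor by bisection. The function names, the sphere objective and the single linear constraint are illustrative assumptions, not the authors' code.

```python
import numpy as np

def feasible(x, constraints):
    """True if every inequality constraint g(x) <= 0 holds."""
    return all(g(x) <= 0.0 for g in constraints)

def map_to_feasible(x, anchor, constraints, iters=30):
    """Bisect on the segment [anchor, x] to recover a feasible point
    close to x (the anchor is assumed feasible)."""
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if feasible(anchor + mid * (x - anchor), constraints):
            lo = mid
        else:
            hi = mid
    return anchor + lo * (x - anchor)

def gaussian_eda_feasible(f, constraints, feasible_seed, pop=60, gens=100, top=0.3):
    """Gaussian EDA that keeps every individual feasible across generations."""
    rng = np.random.default_rng(0)
    X = np.array(feasible_seed, dtype=float)          # seeded feasible population
    for _ in range(gens):
        order = np.argsort([f(x) for x in X])
        elite = X[order[: max(2, int(top * len(X)))]]
        mu, cov = elite.mean(axis=0), np.cov(elite.T) + 1e-9 * np.eye(X.shape[1])
        anchor = elite[0]                              # best feasible point so far
        samples = rng.multivariate_normal(mu, cov, size=pop)
        X = np.array([s if feasible(s, constraints)
                      else map_to_feasible(s, anchor, constraints)
                      for s in samples])
    best = min(X, key=f)
    return best, f(best)

# Illustrative use: minimise a shifted sphere subject to a linear constraint.
if __name__ == "__main__":
    f = lambda x: np.sum((x - 1.0) ** 2)
    constraints = [lambda x: x.sum() - 1.0]            # x1 + x2 <= 1
    seed = np.random.default_rng(1).uniform(-1.0, 0.0, size=(60, 2))  # all feasible
    print(gaussian_eda_feasible(f, constraints, seed))
```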

    MATEDA: A suite of EDA programs in Matlab

    This paper describes MATEDA-2.0, a suite of Matlab programs for estimation of distribution algorithms. The package allows the optimization of single- and multi-objective problems with estimation of distribution algorithms (EDAs) based on undirected graphical models and Bayesian networks. The implementation is designed to let the user incorporate different combinations of selection, learning, sampling, and local search procedures. Other included methods allow the analysis of the structures learned by the probabilistic models, the visualization of particular features of these structures, and the use of the probabilistic models as fitness modeling tools.
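    MATEDA itself is Matlab code; as a language-neutral illustration of the plug-in idea it describes (user-chosen selection, learning and sampling procedures combined into one EDA loop), here is a hedged Python sketch. The component names and the univariate Gaussian model are assumptions for illustration, not MATEDA's API.

```python
import numpy as np

# Interchangeable components, mirroring the idea of user-selected
# selection / learning / sampling procedures plugged into one EDA loop.
def truncation_selection(X, fitness, ratio=0.5):
    order = np.argsort(fitness)
    return X[order[: max(2, int(ratio * len(X)))]]

def learn_univariate_gaussian(selected):
    return selected.mean(axis=0), selected.std(axis=0) + 1e-12

def sample_univariate_gaussian(model, n, rng):
    mu, sigma = model
    return rng.normal(mu, sigma, size=(n, len(mu)))

def run_eda(f, dim, components, pop=50, gens=100, seed=0):
    """Generic EDA loop; its behaviour is defined entirely by `components`."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5.0, 5.0, size=(pop, dim))
    for _ in range(gens):
        fit = np.apply_along_axis(f, 1, X)
        selected = components["select"](X, fit)
        model = components["learn"](selected)
        X = components["sample"](model, pop, rng)
    return X[np.argmin(np.apply_along_axis(f, 1, X))]

components = {"select": truncation_selection,
              "learn": learn_univariate_gaussian,
              "sample": sample_univariate_gaussian}
print(run_eda(lambda x: np.sum(x ** 2), dim=10, components=components))
```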

    Adaptive algorithms for history matching and uncertainty quantification

    Numerical reservoir simulation models are the basis for many decisions regarding the prediction, optimisation, and improvement of production performance of oil and gas reservoirs. Because of the uncertainty in model parameters, history matching is required to calibrate models to the dynamic behaviour of the reservoir. Finally, a set of history-matched models is used for reservoir performance prediction and for economic and risk assessment of different development scenarios. Various algorithms are employed to search and sample the parameter space in history matching and uncertainty quantification problems. The choice of algorithm and its implementation, through a number of control parameters, have a significant impact on the effectiveness and efficiency of the search and, thus, on the quality of results and the speed of the process. This thesis is concerned with the investigation, development, and implementation of improved and adaptive algorithms for reservoir history matching and uncertainty quantification problems. A set of evolutionary algorithms is considered and applied to history matching. The shared characteristic of the applied algorithms is adaptation by balancing exploration and exploitation of the search space, which can lead to improved convergence and diversity. This includes the use of estimation of distribution algorithms, which implicitly adapt their search mechanism to the characteristics of the problem. Hybridising them with genetic algorithms, multi-objective sorting algorithms, and real-coded, multi-model and multivariate Gaussian-based models helps these algorithms adapt further and improve their performance. Finally, diversity measures are used to develop an explicit, adaptive algorithm and to control the algorithm's performance based on the structure of the problem. Uncertainty quantification in a Bayesian framework can be carried out by resampling the search space using Markov chain Monte Carlo sampling algorithms. Common criticisms of these samplers are their low efficiency and their need for control-parameter tuning. A Metropolis-Hastings sampling algorithm with an adaptive multivariate Gaussian proposal distribution and a K-nearest-neighbour approximation has been developed and applied.
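    The thesis abstract names an adaptive Metropolis-Hastings sampler with a multivariate Gaussian proposal; below is a minimal sketch of that general idea (proposal covariance re-estimated from the chain history, in the spirit of adaptive Metropolis). The target density, adaptation interval and scaling constant are illustrative assumptions, and the K-nearest-neighbour approximation described in the thesis is omitted.

```python
import numpy as np

def adaptive_metropolis(log_post, x0, n_steps=20000, adapt_every=200, seed=0):
    """Random-walk Metropolis-Hastings whose multivariate Gaussian proposal
    covariance is periodically re-fitted to the chain history."""
    rng = np.random.default_rng(seed)
    d = len(x0)
    cov = np.eye(d) * 0.1                        # initial proposal covariance
    scale = 2.38 ** 2 / d                        # standard adaptive-Metropolis scaling
    x, lp = np.asarray(x0, float), log_post(x0)
    chain = [x.copy()]
    for step in range(1, n_steps + 1):
        prop = rng.multivariate_normal(x, scale * cov)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # symmetric proposal: plain MH ratio
            x, lp = prop, lp_prop
        chain.append(x.copy())
        if step % adapt_every == 0 and step > 2 * d:
            hist = np.array(chain)
            cov = np.cov(hist.T) + 1e-8 * np.eye(d)   # re-fit proposal covariance
    return np.array(chain)

# Illustrative target: a correlated 2-D Gaussian "posterior".
prec = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 1.0]]))
log_post = lambda x: -0.5 * x @ prec @ x
samples = adaptive_metropolis(log_post, x0=np.zeros(2))
print(samples.mean(axis=0), np.cov(samples.T))
```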

    Distributed Estimation of Distribution Algorithms for continuous optimization: how does the exchanged information influence their behavior?

    One of the most promising areas in which probabilistic graphical models have shown incipient activity is heuristic optimization and, in particular, Estimation of Distribution Algorithms. Owing to their inherent parallelism, different research lines have tried to improve Estimation of Distribution Algorithms in terms of execution time and/or accuracy. Among these proposals, we focus on the so-called distributed or island-based models. This approach defines several islands (algorithm instances) running independently and exchanging information with a given frequency. The information sent by the islands can be either a set of individuals or a probabilistic model. This paper presents a comparative study of a distributed univariate Estimation of Distribution Algorithm and a multivariate version, paying special attention to the comparison of two alternative methods for exchanging information, over a wide set of parameters and problems: the standard benchmark developed for the IEEE Workshop on Evolutionary Algorithms and other Metaheuristics for Continuous Optimization Problems of the ISDA 2009 Conference. Several analyses from different points of view have been carried out to examine both the influence of the parameters and the relationships between them, including a characterization of the configurations according to their behavior on the proposed benchmark.
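    A hedged Python sketch of the island idea the paper studies: several independent univariate Gaussian EDAs that periodically exchange either their best individuals or their probabilistic models. The island count, migration interval and mixing rule below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def step_island(X, f, rng, top=0.3):
    """One generation of a univariate Gaussian EDA; returns population, model, elite."""
    fit = np.apply_along_axis(f, 1, X)
    elite = X[np.argsort(fit)[: max(2, int(top * len(X)))]]
    mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-12
    return rng.normal(mu, sigma, size=X.shape), (mu, sigma), elite

def island_eda(f, dim, n_islands=4, pop=40, gens=200, migrate_every=10,
               exchange="individuals", seed=0):
    rng = np.random.default_rng(seed)
    islands = [rng.uniform(-5, 5, size=(pop, dim)) for _ in range(n_islands)]
    for g in range(1, gens + 1):
        stepped = [step_island(X, f, rng) for X in islands]
        islands = [s[0] for s in stepped]
        if g % migrate_every == 0:
            if exchange == "individuals":
                # ring migration: each island receives its neighbour's elite members
                for i in range(n_islands):
                    donor_elite = stepped[(i - 1) % n_islands][2]
                    islands[i][: len(donor_elite)] = donor_elite
            else:  # exchange probabilistic models: blend means/stds with neighbour
                for i in range(n_islands):
                    mu_i, sd_i = stepped[i][1]
                    mu_j, sd_j = stepped[(i - 1) % n_islands][1]
                    mu, sd = 0.5 * (mu_i + mu_j), 0.5 * (sd_i + sd_j)
                    islands[i] = rng.normal(mu, sd, size=(pop, dim))
    best = min((x for X in islands for x in X), key=f)
    return best, f(best)

print(island_eda(lambda x: np.sum(x ** 2), dim=10, exchange="models"))
```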

    Time-Varying Lyapunov Control Laws with Enhanced Estimation of Distribution Algorithm for Low-Thrust Trajectory Design

    Enhancements in evolutionary optimization techniques are rapidly growing in many areas of engineering, specifically in astrodynamics and space trajectory optimization and design. In this chapter, the problem of optimal design of space trajectories is tackled via an enhanced optimization algorithm within the framework of Estimation of Distribution Algorithms (EDAs), incorporated with Lyapunov and Q-law feedback control methods. First, both a simple Lyapunov function and a Q-law are formulated in Classical Orbital Elements (COEs) to provide a closed-loop low-thrust trajectory profile. The weighting coefficients of these controllers are approximated with Hermite interpolation splines of various degrees. Following this model, the unknown time series of weighting coefficients are converted into unknown interpolation points. Considering the interpolation points as the decision variables, a black-box optimization problem is formed with transfer time and fuel mass as the objective functions. An enhanced EDA (EEDA) is proposed and used to find the optimal variation of weighting coefficients for minimum-time and minimum-fuel transfer trajectories. The proposed approach is applied to several trajectory optimization problems for Earth-orbiting satellites. Results show the efficiency and effectiveness of the proposed approach in finding optimal transfer trajectories. A comparison between the Q-law and the simple Lyapunov controller is carried out to show the potential of the EEDA in enabling the simple Lyapunov controller to recover the finer nuances explicitly given within the analytical expressions of the Q-law.
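    As a hedged sketch of the parameterisation described above (interpolation points as decision variables, Hermite-type splines giving the time-varying weighting coefficients, and a black-box objective evaluated on the resulting control profile), the following Python fragment uses SciPy's shape-preserving cubic Hermite interpolator. The node count, the dummy cost and the variable names are assumptions for illustration; the real objective would come from propagating the Lyapunov or Q-law controlled trajectory.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator  # shape-preserving cubic Hermite

N_NODES, N_WEIGHTS, T_END = 6, 3, 1.0            # illustrative sizes
t_nodes = np.linspace(0.0, T_END, N_NODES)

def weights_from_decision_vector(z):
    """Decision vector -> one Hermite-type spline per weighting coefficient."""
    pts = z.reshape(N_WEIGHTS, N_NODES)           # interpolation points per weight
    return [PchipInterpolator(t_nodes, row) for row in pts]

def objective(z, n_eval=200):
    """Dummy black-box cost on the time-varying weight profile (placeholder for
    the propagated transfer-time / fuel-mass objective)."""
    splines = weights_from_decision_vector(z)
    t = np.linspace(0.0, T_END, n_eval)
    W = np.stack([s(t) for s in splines])         # weights evaluated over time
    return np.trapz(np.sum(W ** 2, axis=0), t)    # stand-in for trajectory cost

z0 = np.random.default_rng(0).uniform(0.0, 1.0, size=N_WEIGHTS * N_NODES)
print(objective(z0))   # an objective of this form is what the EEDA would minimise
```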

    Large scale estimation of distribution algorithms for continuous optimisation

    Modern real-world optimisation problems are increasingly becoming large scale. However, searching in high-dimensional search spaces is notoriously difficult: many methods break down as dimensionality increases, and Estimation of Distribution Algorithms (EDAs) are especially prone to the curse of dimensionality. In this thesis, we devise new EDA variants capable of searching large-dimensional continuous domains. In particular, we (i) investigate heavy-tailed search distributions, (ii) clarify a controversy in the literature about the capabilities of Gaussian versus Cauchy search distributions, (iii) construct a new way of projecting a large-dimensional search space onto low-dimensional subspaces that gives control over the size of the covariance of the search distribution, together with adaptation techniques to exploit this, and (iv) propose a random embedding technique for EDAs that takes advantage of the low intrinsic dimensional structure of problems. All these developments provide new techniques for tackling high-dimensional optimisation problems.
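    A hedged Python sketch of the random-embedding idea mentioned in point (iv): the EDA searches a low-dimensional subspace, and a fixed random matrix maps candidate points back into the full search space before evaluation. The dimensions, the clipping rule and the Gaussian EDA used inside the subspace are illustrative assumptions.

```python
import numpy as np

def random_embedding_eda(f_high, D, d, bounds=(-5.0, 5.0),
                         pop=50, gens=150, top=0.3, seed=0):
    """Optimise a D-dimensional function by running a Gaussian EDA in a
    d-dimensional subspace (d << D) linked to the full space by a fixed
    random projection matrix A."""
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(D, d))                    # fixed random embedding

    def f_low(y):
        x = np.clip(A @ y, *bounds)                # map to full space, keep in box
        return f_high(x)

    Y = rng.uniform(-1.0, 1.0, size=(pop, d))      # population lives in the subspace
    for _ in range(gens):
        fit = np.apply_along_axis(f_low, 1, Y)
        elite = Y[np.argsort(fit)[: max(2, int(top * pop))]]
        mu, cov = elite.mean(axis=0), np.cov(elite.T) + 1e-9 * np.eye(d)
        Y = rng.multivariate_normal(mu, cov, size=pop)
    best_y = min(Y, key=f_low)
    best_x = np.clip(A @ best_y, *bounds)
    return best_x, f_high(best_x)

# Illustrative 1000-dimensional problem whose effective dimension is small.
f = lambda x: np.sum(x[:5] ** 2)                   # only the first 5 coordinates matter
print(random_embedding_eda(f, D=1000, d=5)[1])
```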