
    Population-based continuous optimization, probabilistic modelling and mean shift

    Evolutionary algorithms perform optimization using a population of sample solution points. An interesting development has been to view population-based optimization as the process of evolving an explicit, probabilistic model of the search space. This paper investigates a formal basis for continuous, population-based optimization in terms of a stochastic gradient descent on the Kullback-Leibler divergence between the model probability density and the objective function, represented as an unknown density of assumed form. This leads to an update rule that is related to, and compared with, previous theoretical work, a continuous version of the population-based incremental learning algorithm, and the generalized mean shift clustering framework. Experimental results are presented that demonstrate the dynamics of the new algorithm on a set of simple test problems.
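    As a rough illustration of the idea (not the paper's exact derivation), the update can be sketched as a mean-shift-style step: sample a population from a Gaussian model, weight the samples by the objective treated as an unnormalized density, and move the model mean a small step toward the weighted sample mean. All names and parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def kl_population_step(mu, sigma, f, pop_size=50, lr=0.1):
    # One stochastic-gradient-style step: sample from the Gaussian model
    # N(mu, sigma^2), weight samples by the objective f (treated as an
    # unnormalized density), and shift mu toward the weighted sample mean.
    # Hedged sketch of the idea, not the paper's exact update rule.
    x = rng.normal(mu, sigma, size=pop_size)
    w = f(x)
    w = w / w.sum()                        # fitness-proportional weights
    return mu + lr * np.sum(w * (x - mu))  # mean-shift-style displacement

# Illustrative objective: an unnormalized Gaussian "density" centred at 2.0
f = lambda x: np.exp(-(x - 2.0) ** 2)
mu = 0.0
for _ in range(300):
    mu = kl_population_step(mu, 1.0, f)
# the model mean drifts toward the mode of f
```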

    Enhanced Rao Algorithms for Optimization of the Structures Considering the Deterministic and Probabilistic Constraints

    Rao algorithms are population-based, metaphor-free metaheuristic algorithms. They are extremely simple and do not require any problem-dependent parameters. Despite these benefits, they are vulnerable to being trapped in local optima. The present work proposes Enhanced Rao algorithms, denoted ERao, as a means of alleviating this drawback. In the ERao algorithms, a modified version of the statistically regenerated mechanism is added. Additionally, the mechanism that sticks a candidate solution to the border of the search space is modified. The efficiency of the ERao algorithms is tested on three structural design optimization problems with probabilistic and deterministic constraints. The optimization results are compared to those of the original Rao algorithms and some other state-of-the-art optimization methods. The results show that the proposed optimization method can be an effective tool for solving structural design problems with probabilistic and deterministic constraints.
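    For reference, the basic (unenhanced) Rao-1 update that this work builds on is parameter-free: each candidate moves along the direction from the current worst to the current best solution, with out-of-bounds components clipped to the border of the search space. A minimal sketch for minimization, with illustrative variable names:

```python
import numpy as np

def rao1_step(pop, fitness, lb, ub, rng):
    # Basic Rao-1 iteration (minimization): x' = x + r * (x_best - x_worst),
    # with r drawn uniformly from [0, 1) per dimension, candidates clipped
    # ("stuck") to the search-space bounds, and greedy replacement keeping
    # a candidate only if it improves on its parent.
    fit = np.array([fitness(x) for x in pop])
    best = pop[fit.argmin()]
    worst = pop[fit.argmax()]
    r = rng.random(pop.shape)
    cand = np.clip(pop + r * (best - worst), lb, ub)
    cand_fit = np.array([fitness(x) for x in cand])
    keep = cand_fit < fit
    pop[keep] = cand[keep]
    return pop

# Usage: a few iterations on the sphere function in [-5, 5]^3
rng = np.random.default_rng(1)
lb, ub = -5.0, 5.0
pop = rng.uniform(lb, ub, size=(20, 3))
sphere = lambda x: float(np.sum(x * x))
init_best = min(sphere(x) for x in pop)
for _ in range(200):
    pop = rao1_step(pop, sphere, lb, ub, rng)
final_best = min(sphere(x) for x in pop)
```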

    Scalability of Genetic Programming and Probabilistic Incremental Program Evolution

    This paper discusses the scalability of standard genetic programming (GP) and probabilistic incremental program evolution (PIPE). To investigate the need for both effective mixing and linkage learning, two test problems are considered: the ORDER problem, which is rather easy for any recombination-based GP, and TRAP, a deceptive trap problem, which requires the algorithm to learn interactions among subsets of terminals. The scalability results show that both GP and PIPE scale up polynomially with problem size on the simple ORDER problem, but both scale up exponentially on the deceptive problem. This indicates that while standard recombination is sufficient when no interactions need to be considered, linkage learning is necessary for some problems. These results are in agreement with the lessons learned in the domain of binary-string genetic algorithms (GAs). Furthermore, the paper investigates the effects of introducing unnecessary and irrelevant primitives on the performance of GP and PIPE.
    Comment: Submitted to GECCO-200
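    The deceptive trap function mentioned above can be written down directly: each k-bit block rewards the all-ones string but otherwise pays more for additional zeros, so algorithms that treat bits independently are led away from the optimum. A standard formulation of the concatenated trap, assumed here with k = 4:

```python
def trap(bits, k=4):
    # Concatenated deceptive trap: each k-bit block contributes k if all
    # bits are 1 (the block optimum), and otherwise k - 1 - u, where u is
    # the number of ones -- so flipping single bits toward 1 *decreases*
    # fitness until the whole block is complete, deceiving any method
    # that does not learn the linkage within a block.
    total = 0
    for i in range(0, len(bits), k):
        u = sum(bits[i:i + k])
        total += k if u == k else k - 1 - u
    return total
```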

    Bayesian Inference in Estimation of Distribution Algorithms

    Metaheuristics such as Estimation of Distribution Algorithms and the Cross-Entropy method use probabilistic modelling and inference to generate candidate solutions in optimization problems. The model fitting task in this class of algorithms has largely been carried out to date based on maximum likelihood. An alternative approach that is prevalent in statistics and machine learning is Bayesian inference. In this paper, we provide a framework for the application of Bayesian inference techniques in probabilistic model-based optimization. Based on this framework, a simple continuous Bayesian Estimation of Distribution Algorithm is described. We evaluate and compare this algorithm experimentally with its maximum likelihood equivalent, UMDA_c^G.
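    As a toy illustration of the Bayesian alternative to maximum likelihood in this setting, a conjugate normal prior on the model mean yields a closed-form posterior from the selected solutions; an EDA could then draw its next search distribution's mean from this posterior rather than plugging in the ML estimate. This is a sketch under standard conjugacy assumptions, not the paper's exact algorithm.

```python
import numpy as np

def bayes_gaussian_mean(x, mu0=0.0, tau0=1.0, sigma=1.0):
    # Conjugate update for the mean of a Gaussian search distribution:
    # prior N(mu0, tau0^2), known sampling std sigma, data x (the selected
    # solutions). Returns posterior mean and std of the model mean.
    # Illustrative sketch -- the ML alternative would simply use x.mean().
    n = len(x)
    tau_n2 = 1.0 / (1.0 / tau0 ** 2 + n / sigma ** 2)
    mu_n = tau_n2 * (mu0 / tau0 ** 2 + x.sum() / sigma ** 2)
    return mu_n, np.sqrt(tau_n2)

# With many selected solutions the posterior concentrates near the ML mean
mu_n, tau_n = bayes_gaussian_mean(np.full(100, 3.0))
```

    With few samples the prior pulls the estimate toward mu0 and keeps the posterior wide, which naturally preserves exploration early in the run.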