Improved sampling of the pareto-front in multiobjective genetic optimizations by steady-state evolution: a Pareto converging genetic algorithm
Previous work on multiobjective genetic algorithms has been focused on preventing genetic drift and the issue of convergence has been given little attention. In this paper, we present a simple steady-state strategy, Pareto Converging Genetic Algorithm (PCGA), which naturally samples the solution space and ensures population advancement towards the Pareto-front. PCGA eliminates the need for sharing/niching and thus minimizes heuristically chosen parameters and procedures. A systematic approach based on histograms of rank is introduced for assessing convergence to the Pareto-front, which, by definition, is unknown in most real search problems.
We argue that there is always a certain inheritance of genetic material belonging to a population, and that there is unlikely to be any significant gain beyond some point; this suggests a stopping criterion for terminating the computation. To further encourage diversity and competition, a nonmigrating island model may optionally be used; this approach is particularly suited to many difficult (real-world) problems, which have a tendency to get stuck at (unknown) local minima. Results on three benchmark problems are presented and compared with those of earlier approaches. PCGA is found to produce diverse sampling of the Pareto-front without niching and with significantly less computational effort.
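The rank-based convergence measure can be illustrated with a short sketch. This is a generic nondominated-sorting pass with a rank histogram on top; the function names are illustrative and this is not PCGA's exact procedure:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_ranks(pop):
    """Assign rank 1 to the nondominated points, peel them off, repeat."""
    remaining = list(range(len(pop)))
    ranks = [0] * len(pop)
    rank = 1
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(pop[j], pop[i]) for j in remaining if j != i)]
        for i in front:
            ranks[i] = rank
        remaining = [i for i in remaining if i not in front]
        rank += 1
    return ranks

def rank_histogram(ranks):
    """Histogram of ranks; convergence pushes mass toward rank 1."""
    hist = {}
    for r in ranks:
        hist[r] = hist.get(r, 0) + 1
    return hist
```

As the population advances, successive rank histograms concentrate at rank 1, which is the kind of signal the paper's histogram-of-rank criterion monitors without knowing the true Pareto-front.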
Accelerating Asymptotically Exact MCMC for Computationally Intensive Models via Local Approximations
We construct a new framework for accelerating Markov chain Monte Carlo in
posterior sampling problems where standard methods are limited by the
computational cost of the likelihood, or of numerical models embedded therein.
Our approach introduces local approximations of these models into the
Metropolis-Hastings kernel, borrowing ideas from deterministic approximation
theory, optimization, and experimental design. Previous efforts at integrating
approximate models into inference typically sacrifice either the sampler's
exactness or efficiency; our work seeks to address these limitations by
exploiting useful convergence characteristics of local approximations. We prove
the ergodicity of our approximate Markov chain, showing that it samples
asymptotically from the \emph{exact} posterior distribution of interest. We
describe variations of the algorithm that employ either local polynomial
approximations or local Gaussian process regressors. Our theoretical results
reinforce the key observation underlying this paper: when the likelihood has
some \emph{local} regularity, the number of model evaluations per MCMC step can
be greatly reduced without biasing the Monte Carlo average. Numerical
experiments demonstrate multiple order-of-magnitude reductions in the number of
forward model evaluations used in representative ODE and PDE inference
problems, with both synthetic and real data. Comment: A major update of the theory and examples.
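A minimal sketch of the idea, assuming a one-dimensional posterior and a crude nearest-neighbour cache standing in for the paper's local polynomial or Gaussian process approximations. Note that the paper's refinement scheme, which restores asymptotic exactness, is deliberately omitted here; this sketch only shows how reusing nearby evaluations cuts expensive model calls:

```python
import math
import random

def metropolis_local_approx(log_post, x0, n_steps, step=0.5, tol=0.3, seed=0):
    """Metropolis-Hastings sampler that reuses a cached log-posterior value
    when the proposal lies within `tol` of a previously evaluated point;
    otherwise it pays for the expensive model call and caches the result."""
    rng = random.Random(seed)
    cache = {}  # point -> log_post(point)

    def approx(x):
        if cache:
            xc = min(cache, key=lambda c: abs(c - x))
            if abs(xc - x) < tol:
                return cache[xc]      # reuse a nearby cached evaluation
        val = log_post(x)             # expensive forward-model call
        cache[x] = val
        return val

    x, lp = x0, approx(x0)
    samples = []
    for _ in range(n_steps):
        y = x + rng.gauss(0.0, step)
        lpy = approx(y)
        if math.log(rng.random()) < lpy - lp:
            x, lp = y, lpy
        samples.append(x)
    return samples, len(cache)
```

Running this on a cheap test density shows the cache absorbing most proposals, so the number of true model evaluations grows much more slowly than the chain length.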
Making and breaking power laws in evolutionary algorithm population dynamics
Deepening our understanding of the characteristics and behaviors of population-based search algorithms remains an important ongoing challenge in Evolutionary Computation. To date, however, most studies of Evolutionary Algorithms have only been able to take place within tightly restricted experimental conditions. For instance, many analytical methods can only be applied to canonical algorithmic forms or can only evaluate evolution over simple test functions. Analysis of EA behavior under more complex conditions is needed to broaden our understanding of this population-based search process. This paper presents an approach to analyzing EA behavior that can be applied to a diverse range of algorithm designs and environmental conditions. The approach is based on evaluating an individual's impact on population dynamics using metrics derived from genealogical graphs.
From experiments conducted over a broad range of conditions, some important conclusions are drawn in this study. First, it is determined that very few individuals in an EA population have a significant influence on future population dynamics, with the impact sizes fitting a power law distribution. The power law distribution indicates there is a non-negligible probability that single individuals will dominate the entire population, irrespective of population size. Two EA design features, however, are found to cause strong changes to this aspect of EA behavior: i) the population topology and ii) the introduction of completely new individuals. If the EA population topology has a long path length or if new (i.e. historically uncoupled) individuals are continually inserted into the population, then power law deviations are observed for large impact sizes. It is concluded that such EA designs cannot be dominated by a small number of individuals and hence should theoretically be capable of exhibiting higher degrees of parallel search behavior.
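One simple way to realize an impact metric over a genealogical graph is to count each individual's descendants. This descendant-count metric and the `parents` encoding (child id mapped to a list of parent ids) are illustrative assumptions, not necessarily the paper's exact metric:

```python
from collections import defaultdict

def impact_sizes(parents):
    """parents: dict mapping child id -> list of parent ids.
    An individual's impact is taken as the number of distinct
    descendants reachable from it in the genealogy graph."""
    children = defaultdict(list)
    for c, ps in parents.items():
        for p in ps:
            children[p].append(c)

    def descendants(i, seen=None):
        seen = set() if seen is None else seen
        for c in children[i]:
            if c not in seen:
                seen.add(c)
                descendants(c, seen)
        return seen

    everyone = set(parents) | set(children)
    return {i: len(descendants(i)) for i in everyone}
```

Plotting the distribution of such impact sizes over many runs is what reveals the power-law (and its deviations) described above.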
Fitness Uniform Optimization
In evolutionary algorithms, the fitness of a population increases with time
by mutating and recombining individuals and by a biased selection of more fit
individuals. The right selection pressure is critical in ensuring sufficient
optimization progress on the one hand and in preserving genetic diversity to be
able to escape from local optima on the other hand. Motivated by a universal
similarity relation on the individuals, we propose a new selection scheme,
which is uniform in the fitness values. It generates selection pressure toward
sparsely populated fitness regions, not necessarily toward higher fitness, as
is the case for all other selection schemes. We show analytically on a simple
example that the new selection scheme can be much more effective than standard
selection schemes. We also propose a new deletion scheme which achieves a
similar result via deletion and show how such a scheme preserves genetic
diversity more effectively than standard approaches. We compare the performance
of the new schemes to tournament selection and random deletion on an artificial
deceptive problem and a range of NP-hard problems: traveling salesman, set
covering and satisfiability. Comment: 25 double-column pages, 12 figures.
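The fitness-uniform idea can be sketched in a few lines: draw a target fitness uniformly between the current worst and best, then select the individual nearest that target. Function and parameter names here are illustrative:

```python
import random

def fuss_select(population, fitness, rng=None):
    """Fitness-uniform selection sketch: pressure is toward sparsely
    populated fitness levels, not toward higher fitness.  Individuals
    at rare fitness values are likely to be nearest the uniform target."""
    rng = rng or random.Random()
    fits = [fitness(ind) for ind in population]
    target = rng.uniform(min(fits), max(fits))
    i = min(range(len(population)), key=lambda k: abs(fits[k] - target))
    return population[i]
```

With a population crowded at low fitness and a single high-fitness outlier, the outlier's fitness region covers most of the fitness interval, so it is selected far more often than under uniform random selection, which is how the scheme protects rare (and possibly promising) fitness levels.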
Parameter estimation for Boolean models of biological networks
Boolean networks have long been used as models of molecular networks and play
an increasingly important role in systems biology. This paper describes a
software package, Polynome, offered as a web service, that helps users
construct Boolean network models based on experimental data and biological
input. The key feature is a discrete analog of parameter estimation for
continuous models. With only experimental data as input, the software can be
used as a tool for reverse-engineering of Boolean network models from
experimental time course data. Comment: Web interface of the software is available at
http://polymath.vbi.vt.edu/polynome
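A synchronous Boolean network update rule can be simulated in a few lines; this generic simulator and the two-node example network are illustrative, and do not reflect Polynome's actual interface:

```python
def simulate(update_fns, state, steps):
    """Iterate a synchronous Boolean network: each node's next value is a
    function of the full current state; returns the visited trajectory,
    the kind of time-course data a fitting procedure would match."""
    trajectory = [tuple(state)]
    for _ in range(steps):
        state = tuple(f(state) for f in update_fns)
        trajectory.append(state)
    return trajectory
```

Parameter estimation in this discrete setting amounts to choosing the update functions (e.g. from a candidate family of Boolean/polynomial rules over a finite field) so that the simulated trajectory reproduces the measured time course.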
Combinatorial optimization and metaheuristics
Today, combinatorial optimization is one of the youngest and most active areas of discrete mathematics. It is a branch of optimization in applied mathematics and computer science, related to operational research, algorithm theory and computational complexity theory, and it sits at the intersection of several fields, including artificial intelligence, mathematics and software engineering. Interest in it continues to grow because a large number of scientific and industrial problems can be formulated as abstract combinatorial optimization problems, through graphs and/or (integer) linear programs. Some of these problems have polynomial-time (“efficient”) algorithms, while most of them are NP-hard, i.e. they are not known to be solvable in polynomial time. In practice, this means that finding an exact solution cannot be guaranteed in reasonable time, and one has to settle for an approximate solution with known performance guarantees. Indeed, the goal of approximate methods is to find “quickly” (in reasonable run-times), with “high” probability, provably “good” solutions (with low error relative to the true optimum). In the last 20 years, a new class of algorithms commonly called metaheuristics has emerged, which basically try to combine heuristics in high-level frameworks aimed at efficiently and effectively exploring the search space. This report briefly outlines the components, concepts, advantages and disadvantages of different metaheuristic approaches from a conceptual point of view, in order to analyze their similarities and differences. The two very significant forces of intensification and diversification, which mainly determine the behavior of a metaheuristic, will be pointed out. The report concludes by exploring the importance of hybridization and integration methods.
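As a concrete instance of the intensification/diversification trade-off, here is a minimal simulated-annealing loop, one classic metaheuristic. High temperature accepts worse moves (diversification); as the temperature cools, the search concentrates on improving moves (intensification). All names and parameter values are illustrative:

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.995,
                        steps=2000, seed=0):
    """Generic metaheuristic loop: accept any improving move, and accept a
    worsening move with probability exp((c - cy) / t), which shrinks as
    the temperature t decays geometrically."""
    rng = random.Random(seed)
    x, c = x0, cost(x0)
    best, best_c = x, c
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        cy = cost(y)
        if cy < c or rng.random() < math.exp((c - cy) / t):
            x, c = y, cy
            if c < best_c:
                best, best_c = x, c
        t *= cooling
    return best, best_c
```

The same skeleton underlies many metaheuristics: only the neighborhood structure and the acceptance rule change between, say, tabu search and annealing.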
An Inverse Geometry Problem for the Localization of Skin Tumours by Thermal Analysis
In this paper, the Dual Reciprocity Method (DRM) is coupled to a Genetic Algorithm (GA) in an inverse procedure through which the size and location of a skin tumour may be obtained from temperature measurements at the skin surface. The GA is an evolutionary process which does not require the calculation of sensitivities, search directions or the definition of initial guesses. The DRM in this case requires no internal nodes. It is also shown that the DRM approximation function used is not an important factor for the problem considered here. Results are presented for tumours of different sizes and positions in relation to the skin surface.
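The GA-driven inverse procedure can be sketched generically: search parameter vectors (e.g. tumour size and depth) minimizing the mismatch between forward-model output and the surface measurements. Here `forward` stands in for the DRM thermal solver, and the operators (midpoint crossover, Gaussian mutation, truncation selection) are illustrative assumptions, not the paper's exact GA:

```python
import random

def ga_inverse(forward, measured, bounds, pop_size=30, gens=60, seed=0):
    """Real-coded GA sketch for an inverse problem: minimize the squared
    mismatch between forward(p) and the measured surface values.  Needs
    no sensitivities or initial guess, only repeated forward solves."""
    rng = random.Random(seed)

    def rand_ind():
        return [rng.uniform(lo, hi) for lo, hi in bounds]

    def misfit(p):
        return sum((s - m) ** 2 for s, m in zip(forward(p), measured))

    pop = [rand_ind() for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=misfit)
        elite = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 + rng.gauss(0, 0.05 * (hi - lo))
                     for x, y, (lo, hi) in zip(a, b, bounds)]
            children.append(child)
        pop = elite + children
    return min(pop, key=misfit)
```

Each misfit evaluation costs one forward solve (one DRM run in the paper's setting), which is why a derivative-free search like a GA is attractive here.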