Fast Genetic Algorithms
For genetic algorithms using a bit-string representation of length~$n$, the
general recommendation is to take $1/n$ as mutation rate. In this work, we
discuss whether this is really justified for multimodal functions. Taking jump
functions and the $(1+1)$ evolutionary algorithm as the simplest example, we
observe that larger mutation rates give significantly better runtimes. For the
$\jump_{m,n}$ function, any mutation rate between $2/n$ and $m/n$ leads to a
speed-up at least exponential in $m$ compared to the standard choice.
The asymptotically best runtime, obtained from using the mutation rate $m/n$
and leading to a speed-up super-exponential in $m$, is very sensitive to small
changes of the mutation rate. Any deviation by a small $(1 \pm \eps)$ factor
leads to a slow-down exponential in $m$. Consequently, any fixed mutation rate
gives strongly sub-optimal results for most jump functions.
Building on this observation, we propose to use a random mutation rate
$\alpha/n$, where $\alpha$ is chosen from a power-law distribution. We prove
that the $(1+1)$ EA with this heavy-tailed mutation rate optimizes any
$\jump_{m,n}$ function in a time that is only a small polynomial (in~$m$)
factor above the one stemming from the optimal rate for this $m$.
Our heavy-tailed mutation operator yields similar speed-ups (over the best
known performance guarantees) for the vertex cover problem in bipartite graphs
and the matching problem in general graphs.
Following the example of fast simulated annealing, fast evolution strategies,
and fast evolutionary programming, we propose to call genetic algorithms using
a heavy-tailed mutation operator \emph{fast genetic algorithms}.
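A minimal Python sketch of the heavy-tailed mutation operator described above. Only the idea of drawing $\alpha$ from a power law and flipping each bit with probability $\alpha/n$ comes from the abstract; the exponent beta = 1.5 and the sampling range {1, ..., n/2} are illustrative assumptions.

```python
import random

def power_law_alpha(n, beta=1.5):
    """Sample alpha from a discrete power law on {1, ..., n//2}.
    The exponent and range are assumptions for illustration."""
    weights = [k ** (-beta) for k in range(1, n // 2 + 1)]
    r = random.random() * sum(weights)
    for k, w in enumerate(weights, start=1):
        r -= w
        if r <= 0:
            return k
    return n // 2

def heavy_tailed_mutation(bits, beta=1.5):
    """Flip each bit independently with probability alpha/n,
    where alpha is freshly drawn from the power law."""
    n = len(bits)
    p = power_law_alpha(n, beta) / n
    return [b ^ 1 if random.random() < p else b for b in bits]

# Example: one mutation step in a (1+1) EA style loop on a 100-bit string
parent = [random.randint(0, 1) for _ in range(100)]
child = heavy_tailed_mutation(parent)
```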
Memory-based immigrants for genetic algorithms in dynamic environments
Copyright @ 2005 ACM. Investigating and enhancing the performance of genetic algorithms in dynamic environments has attracted growing interest from the genetic algorithms community in recent years. This trend reflects the fact that many real-world problems are actually dynamic, which poses a serious challenge to traditional genetic algorithms. Several approaches have been developed to adapt genetic algorithms to dynamic optimization problems. Among these approaches, random immigrants and memory schemes have been shown to be beneficial on many dynamic problems. This paper proposes a hybrid memory and random immigrants scheme for genetic algorithms in dynamic environments. In the hybrid scheme, the best solution in memory is retrieved and acts as the base for creating random immigrants that replace the worst individuals in the population. In this way, diversity is not only maintained but maintained in a way that more efficiently adapts the genetic algorithm to the changing environment. The experimental results, based on a series of systematically constructed dynamic problems, show that the proposed memory-based immigrants scheme efficiently improves the performance of genetic algorithms in dynamic environments.
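A small sketch of the replacement step described in this abstract. The replacement ratio, the bit-flip probability, and the assumption that fitness is maximized are illustrative choices, not values taken from the paper.

```python
import random

def memory_based_immigrants(population, memory, fitness, ratio=0.2, p_mut=0.01):
    """Replace the worst `ratio` of the population with immigrants
    created by mutating the best solution stored in memory.
    Assumes bit-string individuals and a fitness to be maximized."""
    base = max(memory, key=fitness)                 # best stored solution
    n_imm = int(len(population) * ratio)
    immigrants = [
        [b ^ 1 if random.random() < p_mut else b for b in base]
        for _ in range(n_imm)
    ]
    survivors = sorted(population, key=fitness, reverse=True)[:len(population) - n_imm]
    return survivors + immigrants
```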
Genetic algorithms: a pragmatic, non-parametric approach to exploratory analysis of questionnaires in educational research
Data from a survey to determine student attitudes to their courses are used as an example to show how genetic algorithms can be used in the analysis of questionnaire data. Genetic algorithms provide a means of generating logical rules which predict one variable in a data set by relating it to others. This paper explains the principle underlying genetic algorithms and gives a non-mathematical description of the means by which rules are generated. A commercially available computer program is used to apply genetic algorithms to the survey data. The results are discussed
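To make the idea of rules that "predict one variable by relating it to others" concrete, here is a sketch of how such a rule could be encoded and scored. The (field, value) condition format and the accuracy-based fitness are assumptions for illustration, not the internal format of the commercial program mentioned above.

```python
def rule_accuracy(rule, records, target):
    """Fitness of a logical rule 'IF all conditions hold THEN target == value'.
    A condition is a (field, value) pair; fitness is predictive accuracy
    on the records the rule covers."""
    conditions, predicted = rule
    covered = [r for r in records if all(r[f] == v for f, v in conditions)]
    if not covered:
        return 0.0
    correct = sum(1 for r in covered if r[target] == predicted)
    return correct / len(covered)

# Hypothetical survey records: predict course satisfaction from study mode
records = [
    {"mode": "full-time", "year": 2, "satisfied": "yes"},
    {"mode": "part-time", "year": 1, "satisfied": "no"},
    {"mode": "full-time", "year": 1, "satisfied": "yes"},
]
rule = ([("mode", "full-time")], "yes")
print(rule_accuracy(rule, records, "satisfied"))  # 1.0
```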
Incremental multiple objective genetic algorithms
This paper presents a new genetic algorithm approach to multi-objective optimization problems: Incremental Multiple Objective Genetic Algorithms (IMOGA). Unlike conventional MOGA methods, it takes each objective into consideration incrementally. The whole evolution is divided into as many phases as there are objectives, and one more objective is considered in each phase. Each phase is composed of two stages: first, an independent population is evolved to optimize one specific objective; second, the better-performing individuals from the evolved single-objective population and the multi-objective population evolved in the previous phase are joined together by an integration operation. The resulting population then becomes the initial multi-objective population, to which a multi-objective evolution based on the enlarged objective set is applied. The experimental results show that, on most problems, IMOGA performs better than three other MOGAs: NSGA-II, SPEA and PAES. IMOGA can find more solutions in the same time span, and the quality of its solutions is better.
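A high-level sketch of the phase structure described in this abstract. The four callables (evolve_single, evolve_multi, integrate, and the objective list) are assumed placeholders standing in for the paper's operators; only the one-objective-per-phase loop and the two-stage structure come from the abstract.

```python
def imoga(objectives, evolve_single, evolve_multi, integrate):
    """One phase per objective: evolve a single-objective population,
    integrate it with the multi-objective population from the previous
    phase, then run multi-objective evolution on the enlarged objective set."""
    multi_pop = None
    for k, obj in enumerate(objectives):
        single_pop = evolve_single(obj)                   # stage 1
        if multi_pop is None:
            multi_pop = single_pop                        # first phase
        else:
            multi_pop = integrate(single_pop, multi_pop)  # stage 2: join better individuals
        multi_pop = evolve_multi(multi_pop, objectives[:k + 1])
    return multi_pop
```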
Higher-Order Quantum-Inspired Genetic Algorithms
This paper presents a theory and an empirical evaluation of Higher-Order
Quantum-Inspired Genetic Algorithms. Fundamental notions of the theory are
introduced, and a novel Order-2 Quantum-Inspired Genetic Algorithm (QIGA2)
is presented. Contrary to all QIGA algorithms, which represent quantum
genes as independent qubits, higher-order QIGAs use quantum registers to
represent strings of genes, which allows relations between genes to be
modelled using quantum phenomena. A performance comparison has been conducted
on a benchmark of 20 deceptive combinatorial optimization problems. It is
shown that using higher quantum orders is beneficial for genetic algorithm
efficiency, and that the new QIGA2 algorithm outperforms the old QIGA
algorithm, which was tuned in a highly compute-intensive metaoptimization process.
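A classical-probability sketch of the representational difference the abstract points at: order-1 models keep one independent probability per gene, while an order-2 register stores a joint distribution over a pair of genes and can therefore express correlations. The actual QIGA2 update rules are not described in the abstract; this only illustrates the sampling side.

```python
import random

def sample_order1(p):
    """Order-1 style: each gene has an independent probability of being 1."""
    return [1 if random.random() < pi else 0 for pi in p]

def sample_order2(pair_dists):
    """Order-2 sketch: each gene pair shares a joint distribution over
    {00, 01, 10, 11}, so agreement between the two genes can be modelled."""
    bits = []
    for dist in pair_dists:            # dist = [p00, p01, p10, p11], sums to 1
        r, acc = random.random(), 0.0
        for idx, p in enumerate(dist):
            acc += p
            if r <= acc:
                bits += [(idx >> 1) & 1, idx & 1]
                break
        else:
            bits += [1, 1]
    return bits

# Example with 4 genes: the order-2 model encodes "genes in a pair tend to agree"
p1 = [0.5, 0.5, 0.5, 0.5]
pairs = [[0.45, 0.05, 0.05, 0.45], [0.45, 0.05, 0.05, 0.45]]
x1 = sample_order1(p1)
x2 = sample_order2(pairs)
```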
A Novel Genetic Algorithm using Helper Objectives for the 0-1 Knapsack Problem
The 0-1 knapsack problem is a well-known combinatorial optimisation problem.
Approximation algorithms have been designed for solving it, and they return
provably good solutions within polynomial time. On the other hand, genetic
algorithms are well suited to the knapsack problem and find reasonably good
solutions quickly. A naturally arising question is whether genetic algorithms
are able to find solutions as good as approximation algorithms do. This paper
presents a novel multi-objective optimisation genetic algorithm for solving
the 0-1 knapsack problem. Experimental results show that the new algorithm
outperforms its rivals: the greedy algorithm, the mixed strategy genetic
algorithm, and the greedy algorithm + mixed strategy genetic algorithm.
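One common way to recast the knapsack problem with helper objectives is to score profit and capacity violation separately, so the multi-objective GA can trade them off. This particular decomposition is an assumption for illustration; the paper's exact helper objectives may differ.

```python
def knapsack_objectives(x, profits, weights, capacity):
    """Evaluate a 0/1 candidate solution x as two objectives to maximize:
    total profit, and (negated) capacity violation."""
    profit = sum(p for p, xi in zip(profits, x) if xi)
    weight = sum(w for w, xi in zip(weights, x) if xi)
    violation = max(0, weight - capacity)
    return profit, -violation

# Small example instance
profits = [10, 5, 15, 7]
weights = [2, 3, 5, 7]
capacity = 10
print(knapsack_objectives([1, 0, 1, 0], profits, weights, capacity))  # (25, 0)
```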
Genetic algorithm based DSP multiprocessor scheduling
This paper presents recent work on the application of genetic algorithms to the NP-complete problem of multiprocessor scheduling for audio DSP algorithms. The genetic algorithm is used to schedule algorithms written in the form of data flow graphs onto specified multiprocessor arrays. A unique chromosome representation technique is described and a number of application-specific genetic operators are introduced. Comparisons of the performance of the genetic algorithm technique with heuristic scheduling techniques show that the choice of the most suitable technique varies with the structure and complexity of the scheduling problem. Finally, techniques for combining heuristic and genetic algorithm scheduling are discussed.
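For concreteness, one plausible chromosome layout for this kind of scheduling problem pairs a processor assignment with a priority ordering over the data-flow-graph nodes. This is an assumed encoding for illustration, not the paper's specific representation.

```python
import random

def random_schedule_chromosome(num_tasks, num_procs):
    """Assumed encoding: for each data-flow-graph node, a processor
    assignment plus a priority used to order nodes competing for
    the same processor."""
    assignment = [random.randrange(num_procs) for _ in range(num_tasks)]
    priority = random.sample(range(num_tasks), num_tasks)  # a permutation
    return assignment, priority

# Example: an 8-node graph scheduled onto a 3-processor array
chromosome = random_schedule_chromosome(8, 3)
```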
