Reliability-based optimization for multiple constraints with evolutionary algorithms
In this paper, we combine reliability-based optimization with a multi-objective evolutionary algorithm to handle uncertainty in decision variables and parameters. This work extends a previous study by the second author and his research group to compute a multi-constraint reliability more accurately: the overall reliability of a solution with respect to all constraints is examined, instead of the reliability of only one critical constraint. First, we present a brief introduction to the relevant aspects of so-called 'structural reliability'. Thereafter, we introduce a method for identifying inactive constraints based on the reliability evaluation. With this method, we show that an identical solution can be achieved with fewer constraint evaluations. Furthermore, we apply our approach to a number of problems, including a real-world car side-impact design problem, to illustrate the method.
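The joint-feasibility idea can be illustrated with a small Monte Carlo sketch (this is not the authors' method; the constraints `g1` and `g2` and the Gaussian noise model are invented for illustration): a design's multi-constraint reliability is the probability that a randomly perturbed copy of it satisfies every constraint at once, rather than each constraint separately.

```python
import random

random.seed(0)

def joint_reliability(x, constraints, sigma=0.1, samples=10_000):
    """Monte Carlo estimate of the probability that a design x
    satisfies ALL constraints at once under Gaussian noise on its
    variables (convention: g(x) <= 0 means feasible)."""
    ok = 0
    for _ in range(samples):
        xp = [xi + random.gauss(0.0, sigma) for xi in x]
        if all(g(xp) <= 0.0 for g in constraints):
            ok += 1
    return ok / samples

# Two toy constraints on a two-variable design.
g1 = lambda x: x[0] + x[1] - 2.0   # feasible when x0 + x1 <= 2
g2 = lambda x: x[0] - x[1] - 1.0   # feasible when x0 - x1 <= 1
r = joint_reliability([0.5, 0.5], [g1, g2], sigma=0.2)
print(r)  # close to 1.0: the point sits well inside both constraints
```

A per-constraint computation would estimate each probability separately and could overstate the overall reliability; the joint estimate counts only samples feasible for every constraint simultaneously.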
Finding Optimal Strategies in a Multi-Period Multi-Leader-Follower Stackelberg Game Using an Evolutionary Algorithm
Stackelberg games are a classic example of bilevel optimization problems,
which are often encountered in game theory and economics. These are complex
problems with a hierarchical structure, where one optimization task is nested
within the other. Despite a number of studies on handling bilevel optimization
problems, these problems still remain a challenging territory, and existing
methodologies are able to handle only simple problems with few variables under
assumptions of continuity and differentiability. In this paper, we consider a
special case of a multi-period multi-leader-follower Stackelberg competition
model with non-linear cost and demand functions and discrete production
variables. The model has potential applications, for instance in the aircraft
manufacturing industry, an oligopoly in which a few giant firms enjoy
tremendous commitment power over the other, smaller players. We solve cases with
different numbers of leaders and followers and show how the entrance or exit of
a player affects the profits of the other players. In the presence of various
model complexities, we use a computationally intensive nested evolutionary
strategy to find an optimal solution for the model. The strategy is evaluated
on a test-suite of bilevel problems, and it is shown that the method is
successful in handling difficult bilevel problems.
Comment: To be published in Computers and Operations Research
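A nested evolutionary strategy of the kind described can be sketched as two stacked (1+1)-ES loops: every leader candidate is scored at the follower's approximate best response, found by an inner search. This is a minimal illustration on an invented single-variable quadratic bilevel problem, not the paper's multi-period multi-leader-follower model.

```python
import random

random.seed(0)

def lower_best_response(x, follower_obj, steps=300):
    """Inner (1+1)-ES: the follower's approximate best reply y
    for a fixed leader decision x."""
    y = random.uniform(-5.0, 5.0)
    for _ in range(steps):
        cand = y + random.gauss(0.0, 0.5)
        if follower_obj(x, cand) < follower_obj(x, y):
            y = cand
    return y

def nested_es(leader_obj, follower_obj, steps=300):
    """Outer (1+1)-ES over the leader's variable; each leader
    candidate is evaluated at the follower's optimal reply,
    which is what makes the scheme computationally intensive."""
    x = random.uniform(-5.0, 5.0)
    fx = leader_obj(x, lower_best_response(x, follower_obj))
    for _ in range(steps):
        cand = x + random.gauss(0.0, 0.5)
        fc = leader_obj(cand, lower_best_response(cand, follower_obj))
        if fc < fx:
            x, fx = cand, fc
    return x, fx

# Toy bilevel problem: the leader minimizes (x - 1)^2 + y^2 while the
# follower minimizes (y - x)^2, so the rational reply is y = x and the
# leader's true optimum is near x = 0.5.
leader = lambda x, y: (x - 1.0) ** 2 + y ** 2
follower = lambda x, y: (y - x) ** 2
x, fx = nested_es(leader, follower)
```

Every outer evaluation triggers a full inner optimization, which is why such nested schemes scale poorly with problem size but remain usable when, as here, continuity or differentiability cannot be assumed.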
An investigation of messy genetic algorithms
Genetic algorithms (GAs) are search procedures based on the mechanics of natural selection and natural genetics. They combine the use of string codings, or artificial chromosomes, and populations with the selective and juxtapositional power of reproduction and recombination to form a surprisingly powerful search heuristic for many problems. Despite their empirical success, there has been a long-standing objection to the use of GAs on arbitrarily difficult problems, which motivated a new approach. Results on a 30-bit, order-three deceptive problem were obtained using a new type of genetic algorithm called a messy genetic algorithm (mGA). Messy genetic algorithms combine the use of variable-length strings, a two-phase selection scheme, and messy genetic operators to overcome the fixed-coding problem of standard simple GAs. The results of a study of mGAs on problems with nonuniform subfunction scale and size are presented. The mGA approach is summarized, covering both its operation and the theory behind its use. Experiments on problems of varying scale, varying building-block size, and combined varying scale and size are presented
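The variable-length representation at the heart of a messy GA can be sketched as follows (a minimal illustration of the decoding step only; the names and values are invented): a messy chromosome is an ordered list of (locus, allele) pairs, so a locus may appear several times or not at all. Overspecification is resolved by first-occurrence precedence in a left-to-right scan, and underspecified loci are filled in from a template.

```python
def express(chromosome, template):
    """Decode a messy chromosome -- an ordered list of
    (locus, allele) pairs -- into a full fixed-length string.
    Overspecified loci: the first occurrence wins.
    Underspecified loci: values come from the template."""
    bits = list(template)
    seen = set()
    for locus, allele in chromosome:
        if locus not in seen:       # first-come-first-served precedence
            bits[locus] = allele
            seen.add(locus)
    return bits

# Locus 2 appears twice, so the first pair (2, 1) takes precedence;
# locus 3 never appears, so it is filled from the template.
chrom = [(0, 1), (2, 1), (2, 0), (1, 0)]
print(express(chrom, [0, 0, 0, 0]))  # [1, 0, 1, 0]
```

Because strings need not specify every locus, crossover can be replaced by cut-and-splice operators that grow and mix partial solutions, which is what frees the mGA from a fixed coding.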
A Tutorial on Evolutionary Multi-Objective Optimization (EMO)
Many real-world search and optimization problems are naturally posed
as non-linear programming problems having multiple objectives.
Due to lack of suitable solution techniques, such problems are
artificially converted into a single-objective problem and solved.
The difficulty arises because such problems give rise to a set
of Pareto-optimal solutions, instead of a single optimum solution.
It then becomes important to find not just one Pareto-optimal
solution but as many of them as possible. Classical methods are
not quite efficient in solving these problems because they require
repetitive applications to find multiple Pareto-optimal solutions
and in some occasions repetitive applications do not guarantee
finding distinct Pareto-optimal solutions. The population approach
of evolutionary algorithms (EAs) allows an efficient way to find
multiple Pareto-optimal solutions simultaneously in a single
simulation run.
In this tutorial, we discussed the following aspects related to
EMO:
1. The basic differences in principle between EMO and classical methods.
2. A gentle introduction to evolutionary algorithms with simple
examples. A simple method of handling constraints was also
discussed.
3. The concept of domination and methods of finding non-dominated
solutions in a population of solutions were discussed.
4. A brief history of the development of EMO was highlighted.
5. A number of main EMO methods (NSGA-II, SPEA and PAES) were
discussed.
6. The advantage of EMO methodologies was discussed by presenting
a number of case studies. They clearly showed the advantage of
finding a number of Pareto-optimal solutions simultaneously.
7. Three advantages of using an EMO methodology were stressed:
(i) For better decision-making (in terms of choosing a
compromise solution) in the presence of multiple solutions.
(ii) For finding important relationships among decision variables
(useful in design optimization). Some case studies from engineering
demonstrated the importance of such studies.
(iii) For solving other optimization problems efficiently. For
example, in solving genetic programming problems, the so-called
'bloating' problem of increased program size can be addressed by using
a second objective of minimizing the size of the programs.
8. A number of salient research topics were highlighted. Some of
them are as follows:
(i) Development of scalable test problems
(ii) Development of computationally fast EMO methods
(iii) Performance metrics for evaluating EMO methods
(iv) Interactive EMO methodologies
(v) Robust multi-objective optimization procedures
(vi) Finding knee or other important solutions including partial
Pareto-optimal set
(vii) Multi-objective scheduling and other optimization problems.
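The domination concept in item 3 can be made concrete with a short sketch (minimization is assumed in both objectives; the population values are invented): a solution dominates another if it is no worse in every objective and strictly better in at least one, and the non-dominated members of a population form its current Pareto front.

```python
def dominates(a, b):
    """True if a dominates b (minimization): a is no worse in every
    objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def non_dominated(points):
    """Return the points that no other point dominates: the
    current Pareto front of the population."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# A population of 5 solutions evaluated on two objectives.
pop = [(1, 5), (2, 3), (4, 1), (3, 3), (5, 5)]
print(non_dominated(pop))  # [(1, 5), (2, 3), (4, 1)]
```

This pairwise check is O(n^2) per population; methods such as NSGA-II use a faster non-dominated sorting procedure, but the definition of domination being applied is the same.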
It was clear from the discussions that
evolutionary search methods offer an alternative means of solving
multi-objective optimization problems compared to classical
approaches. This is why multi-objective optimization using EAs has
been attracting growing attention in recent years.
Motivated readers may explore
current research issues and other important studies in various
texts (Coello et al., 2002; Deb, 2001), conference proceedings
(EMO-01 and EMO-03 Proceedings) and numerous research papers
(http://www.lania.mx/~ccoello/EMOO/).
References:
----------
C. A. C. Coello, D. A. Van Veldhuizen, and G. Lamont.
Evolutionary Algorithms for Solving Multi-Objective Problems.
Boston, MA: Kluwer Academic Publishers, 2002.
K. Deb. Multi-Objective Optimization Using Evolutionary Algorithms.
Chichester, UK: Wiley, 2001.
C. Fonseca, P. Fleming, E. Zitzler, K. Deb, and L. Thiele, editors.
Proceedings of the Second Evolutionary Multi-Criterion
Optimization (EMO-03) Conference
(Lecture Notes in Computer Science (LNCS) 2632).
Heidelberg: Springer, 2003.
E. Zitzler, K. Deb, L. Thiele, C. A. C. Coello, and D. Corne,
editors. Proceedings of the First Evolutionary Multi-Criterion
Optimization (EMO-01) Conference
(Lecture Notes in Computer Science (LNCS) 1993).
Heidelberg: Springer, 2001.