11,086 research outputs found
Evidence of coevolution in multi-objective evolutionary algorithms
This paper demonstrates that simple yet important characteristics of coevolution can occur in evolutionary algorithms when only a few conditions are met. We find that interaction-based fitness measurements such as (linear) fitness ranking allow for a form of coevolutionary dynamics that is observed when 1) changes are made in which solutions are able to interact during the ranking process and 2) evolution takes place in a multi-objective environment. This research contributes to the study of simulated evolution in at least two ways. First, it establishes a broader relationship between coevolution and multi-objective optimization than has previously been considered in the literature. Second, it demonstrates that the preconditions for coevolutionary behavior are weaker than previously thought. In particular, our model indicates that direct cooperation or competition between species is not required for coevolution to take place. Moreover, our experiments provide evidence that environmental perturbations can drive coevolutionary processes, a conclusion that mirrors arguments put forth in dual phase evolution theory. In the discussion, we briefly consider how our results may shed light on this and other recent theories of evolution.
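The interaction-based fitness measurement named in the abstract, linear fitness ranking, can be sketched as follows. This is a minimal illustrative implementation of standard linear ranking (selection pressure `sp` is a conventional parameter, not taken from the paper); coevolutionary effects would arise from changing which solutions enter the ranking pool.

```python
def linear_ranking_fitness(scores, sp=1.5):
    """Linear ranking: the best individual receives fitness sp, the worst
    2 - sp, with intermediate ranks interpolated linearly (sp in [1, 2]).
    Fitness depends only on rank within the interacting pool, so changing
    the pool changes every individual's effective fitness."""
    n = len(scores)
    # order[k] is the index of the individual with rank k (0 = worst)
    order = sorted(range(n), key=lambda i: scores[i])
    fitness = [0.0] * n
    for rank, i in enumerate(order):
        fitness[i] = (2 - sp) + 2 * (sp - 1) * rank / (n - 1)
    return fitness
```

Because the assigned values sum to the population size, selection probabilities are obtained by dividing each fitness by `n`.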
Solving the G-problems in less than 500 iterations: Improved efficient constrained optimization by surrogate modeling and adaptive parameter control
Constrained optimization of high-dimensional numerical problems plays an
important role in many scientific and industrial applications. Function
evaluations in many industrial applications are severely limited and no
analytical information about objective function and constraint functions is
available. For such expensive black-box optimization tasks, the constrained
optimization algorithm COBRA was proposed, making use of RBF surrogate modeling
for both the objective and the constraint functions. COBRA has shown remarkable
success in reliably solving complex benchmark problems in less than 500
function evaluations. Unfortunately, COBRA requires careful adjustment of its
parameters in order to do so.
In this work we present a new self-adjusting algorithm, SACOBRA, which is
based on COBRA and capable of achieving high-quality results with very few
function evaluations and no parameter tuning. It is shown with the help of
performance profiles on a set of benchmark problems (G-problems, MOPTA08) that
SACOBRA consistently outperforms any COBRA algorithm with a fixed parameter
setting. We analyze the importance of the several new elements in SACOBRA and
find that each of them plays a role in boosting the overall optimization
performance. We discuss the reasons behind this and thereby gain a better
understanding of high-quality RBF surrogate modeling.
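The RBF surrogate modeling at the core of COBRA/SACOBRA can be sketched as a cubic RBF interpolant with a linear polynomial tail, a standard choice in this literature. This is a generic illustration, not the paper's implementation; the function name `fit_cubic_rbf` is ours.

```python
import numpy as np

def fit_cubic_rbf(X, y):
    """Fit an interpolating cubic RBF surrogate
        s(x) = sum_i w_i * ||x - x_i||^3 + c0 + c^T x
    to samples (X, y). The linear tail makes the augmented system
    nonsingular for cubic kernels (conditionally positive definite of
    order 2), provided the points are not affinely degenerate."""
    n, d = X.shape
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    Phi = dists ** 3                              # kernel matrix
    P = np.hstack([np.ones((n, 1)), X])           # linear tail [1, x]
    A = np.block([[Phi, P], [P.T, np.zeros((d + 1, d + 1))]])
    rhs = np.concatenate([y, np.zeros(d + 1)])
    coef = np.linalg.solve(A, rhs)
    w, c = coef[:n], coef[n:]

    def surrogate(x):
        x = np.asarray(x, dtype=float)
        r = np.linalg.norm(X - x, axis=1)
        return float(r ** 3 @ w + c[0] + x @ c[1:])

    return surrogate
```

In a COBRA-style loop, one such surrogate would be fitted per objective and per constraint, and the next expensive evaluation point chosen by optimizing over the surrogates.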
Improved sampling of the Pareto-front in multiobjective genetic optimizations by steady-state evolution: a Pareto converging genetic algorithm
Previous work on multiobjective genetic algorithms has been focused on preventing genetic drift and the issue of convergence has been given little attention. In this paper, we present a simple steady-state strategy, Pareto Converging Genetic Algorithm (PCGA), which naturally samples the solution space and ensures population advancement towards the Pareto-front. PCGA eliminates the need for sharing/niching and thus minimizes heuristically chosen parameters and procedures. A systematic approach based on histograms of rank is introduced for assessing convergence to the Pareto-front, which, by definition, is unknown in most real search problems.
We argue that there is always a certain inheritance of genetic material belonging to a population, and there is unlikely to be any significant gain beyond some point; a stopping criterion for terminating the computation is suggested accordingly. To further encourage diversity and competition, a nonmigrating island model may optionally be used; this approach is particularly suited to many difficult (real-world) problems, which have a tendency to get stuck at (unknown) local minima. Results on three benchmark problems are presented and compared with those of earlier approaches. PCGA is found to produce diverse sampling of the Pareto-front without niching and with significantly less computational effort.
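The histogram-of-ranks idea used by PCGA for assessing convergence can be sketched generically: assign each individual a Pareto rank by iterative nondominated sorting and tabulate the ranks; convergence is indicated when the histogram concentrates at rank 1 and stabilizes. This is an illustrative reconstruction, not the paper's code.

```python
from collections import Counter

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_ranks(points):
    """Rank 1 = nondominated; rank 2 = nondominated after removing
    rank 1; and so on."""
    remaining = list(range(len(points)))
    ranks = {}
    rank = 1
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        for i in front:
            ranks[i] = rank
        remaining = [i for i in remaining if i not in front]
        rank += 1
    return [ranks[i] for i in range(len(points))]

def rank_histogram(points):
    """Histogram of Pareto ranks across the population."""
    return dict(Counter(pareto_ranks(points)))
```

Comparing this histogram across generations gives a convergence signal that does not require knowing the true Pareto-front.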
Multiobjective genetic algorithm strategies for electricity production from generation IV nuclear technology
Developing a technico-economic optimization strategy for electricity/hydrogen cogeneration systems consists of finding an optimal efficiency of the generating cycle and heat delivery system, maximizing energy production and minimizing production costs. The first part of the paper concerns the development of a multiobjective optimization library (MULTIGEN) to tackle all types of problems arising from cogeneration. After a literature review for identifying the most efficient methods, the MULTIGEN library is described and the innovative points are listed. A new stopping criterion, based on the stagnation of the Pareto front, may lead to a significant decrease in computational time, particularly in the case of problems involving only integer variables. Two practical examples are presented in the last section. The first is devoted to a bicriteria optimization of both exergy destruction and total cost of the plant, for a generating cycle coupled with a Very High Temperature Reactor (VHTR). The second consists of designing the heat exchanger of the generating turbomachine. Three criteria are optimized: the exchange surface, the exergy destruction, and the number of exchange modules.
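A Pareto-front stagnation criterion of the kind mentioned in the abstract can be sketched as: stop when the nondominated front has not changed (within a tolerance) for a fixed number of consecutive generations. The function below is a hedged sketch with hypothetical names (`stagnation_stop`, `patience`), not the MULTIGEN criterion itself.

```python
def stagnation_stop(front_history, patience=10, tol=1e-6):
    """front_history: list of fronts, one per generation; each front is a
    list of objective tuples. Returns True when the last `patience` + 1
    fronts are pairwise identical within tol (every point in one front
    has a point within tol in the other, and sizes match)."""
    if len(front_history) <= patience:
        return False

    def close(f1, f2):
        if len(f1) != len(f2):
            return False
        return all(
            min(max(abs(a - b) for a, b in zip(p, q)) for q in f2) <= tol
            for p in f1)

    recent = front_history[-(patience + 1):]
    return all(close(recent[0], f) for f in recent[1:])
```

For integer-variable problems the front can only take finitely many configurations, which is consistent with the abstract's remark that the criterion is especially effective there.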
LambdaOpt: Learn to Regularize Recommender Models in Finer Levels
Recommendation models mainly deal with categorical variables, such as
user/item ID and attributes. Besides the high-cardinality issue, the
interactions among such categorical variables are usually long-tailed, with the
head made up of highly frequent values and a long tail of rare ones. This
phenomenon results in the data sparsity issue, making it essential to
regularize the models to ensure generalization. The common practice is to
employ grid search to manually tune regularization hyperparameters based on the
validation data. However, it requires non-trivial efforts and large computation
resources to search the whole candidate space; even so, it may not lead to the
optimal choice, for which different parameters should have different
regularization strengths. In this paper, we propose a hyperparameter
optimization method, LambdaOpt, which automatically and adaptively enforces
regularization during training. Specifically, it updates the regularization
coefficients based on the performance of validation data. With LambdaOpt, the
notorious tuning of regularization hyperparameters can be avoided; more
importantly, it allows fine-grained regularization (i.e. each parameter can
have an individualized regularization coefficient), leading to better
generalized models. We show how to employ LambdaOpt on matrix factorization, a
classical model that is representative of a large family of recommender models.
Extensive experiments on two public benchmarks demonstrate the superiority of
our method in boosting the performance of top-K recommendation. Comment: Accepted by KDD 201
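The core idea of validation-driven regularization adaptation can be illustrated with a deliberately simplified heuristic. This is not LambdaOpt's actual update rule (which adjusts coefficients from validation performance during training, per parameter); the multiplicative scheme and factors below are our own illustrative assumptions.

```python
def adapt_lambda(lmbda, val_loss, prev_val_loss, up=1.05, down=0.95):
    """Toy sketch of validation-driven regularization adaptation:
    if validation loss worsened, assume overfitting and strengthen
    the regularization coefficient; otherwise relax it. Applied per
    parameter (or per parameter group), this yields the fine-grained,
    individualized coefficients the paper argues for."""
    if prev_val_loss is None:
        return lmbda  # no signal yet on the first epoch
    return lmbda * (up if val_loss > prev_val_loss else down)
```

In practice such an update would be interleaved with ordinary training steps, so that the coefficients track the model's generalization behavior rather than being fixed up front by grid search.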
A Multiple-Expert Binarization Framework for Multispectral Images
In this work, a multiple-expert binarization framework for multispectral
images is proposed. The framework is based on a constrained subspace selection
limited to the spectral bands combined with state-of-the-art gray-level
binarization methods. The framework uses a binarization wrapper to enhance the
performance of the gray-level binarization. Nonlinear preprocessing of the
individual spectral bands is used to enhance the textual information. An
evolutionary optimizer is considered to obtain the optimal and some suboptimal
3-band subspaces from which an ensemble of experts is then formed. The
framework is applied to a ground truth multispectral dataset with promising
results. In addition, a generalization of the cross-validation approach is
developed that not only evaluates the generalizability of the framework but
also provides a practical instance of the selected experts that can then be
applied to unseen inputs despite the small size of the given ground-truth
dataset. Comment: 12 pages, 8 figures, 6 tables. Presented at ICDAR'1
A Hierarchical Evolutionary Algorithm for Multiobjective Optimization in IMRT
Purpose: Current inverse planning methods for IMRT are limited because they
are not designed to explore the trade-offs between the competing objectives
between the tumor and normal tissues. Our goal was to develop an efficient
multiobjective optimization algorithm that was flexible enough to handle any
form of objective function and that resulted in a set of Pareto optimal plans.
Methods: We developed a hierarchical evolutionary multiobjective algorithm
designed to quickly generate a diverse Pareto optimal set of IMRT plans that
meet all clinical constraints and reflect the trade-offs in the plans. The top
level of the hierarchical algorithm is a multiobjective evolutionary algorithm
(MOEA). The genes of the individuals generated in the MOEA are the parameters
that define the penalty function minimized during an accelerated deterministic
IMRT optimization that represents the bottom level of the hierarchy. The MOEA
incorporates clinical criteria to restrict the search space through protocol
objectives and then uses Pareto optimality among the fitness objectives to
select individuals.
Results: Acceleration techniques implemented on both levels of the
hierarchical algorithm resulted in short, practical runtimes for optimizations.
The MOEA improvements were evaluated for example prostate cases with one target
and two OARs. The modified MOEA dominated 11.3% of the plans produced by a
standard genetic algorithm package. By implementing domination advantage and
protocol objectives, small, diverse populations of clinically acceptable plans
that were dominated by the Pareto front by only 0.2% could be generated in a
fraction of an hour.
Conclusions: Our MOEA produces a diverse Pareto optimal set of plans that
meet all dosimetric protocol criteria in a feasible amount of time. It
optimizes not only beamlet intensities but also objective function parameters
on a patient-specific basis
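The two-level structure described in the Methods can be sketched generically: each top-level (MOEA) individual's genes are penalty-function parameters, the bottom-level solver minimizes the resulting penalty, and the solution is then scored on the true fitness objectives. Everything below is a toy stand-in with hypothetical names; the inner solver is a closed-form minimizer of a weighted quadratic, not an IMRT optimizer.

```python
def evaluate_individual(weights, inner_solver, fitness_objectives):
    """Top level: the genes `weights` parameterize the penalty function
    minimized by the bottom-level deterministic solver; the resulting
    plan is then scored on the fitness objectives used for Pareto
    selection in the MOEA."""
    plan = inner_solver(weights)
    return [f(plan) for f in fitness_objectives]

# Toy bottom level: minimize sum_i w_i * (x - t_i)^2, which has the
# closed-form minimizer x* = (sum w_i t_i) / (sum w_i).
targets = [0.0, 1.0]

def inner_solver(w):
    return sum(wi * ti for wi, ti in zip(w, targets)) / sum(w)

# Two competing fitness objectives (e.g. target coverage vs. OAR sparing
# in the IMRT analogy): distance to each target.
fitness = [lambda x: (x - 0.0) ** 2, lambda x: (x - 1.0) ** 2]
```

Varying the weight vector moves the inner solution along the trade-off curve between the two objectives, which is exactly the mechanism the MOEA exploits to populate the Pareto set.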
Constrained Optimization with Evolutionary Algorithms: A Comprehensive Review
Global optimization is an essential part of many kinds of systems. Various algorithms have been proposed that try to imitate the learning and problem-solving abilities of nature to a certain level. The main idea of all nature-inspired algorithms is to generate an interconnected network of individuals, a population. Although most unconstrained optimization problems can be handled easily with Evolutionary Algorithms (EAs), constrained optimization problems (COPs) are very complex. In this paper, a comprehensive literature review is presented which summarizes constraint handling techniques for COPs.