Differential evolution with an evolution path: a DEEP evolutionary algorithm
Utilizing cumulative correlation information already existing in an evolutionary process, this paper proposes a predictive approach to the reproduction mechanism of new individuals for differential evolution (DE) algorithms. DE uses a distributed model (DM) to generate new individuals, which is relatively explorative, whilst evolution strategy (ES) uses a centralized model (CM) to generate offspring, which through adaptation retains a convergence momentum. This paper adopts a key feature of the CM of a covariance matrix adaptation ES, the cumulatively learned evolution path (EP), to formulate a new evolutionary algorithm (EA) framework, termed DEEP, standing for DE with an EP. Rather than mechanistically combining a CM-based and a DM-based algorithm, the DEEP framework offers the advantages of both a DM and a CM and hence substantially enhances performance. Under this architecture, a self-adaptation mechanism can be built inherently into a DEEP algorithm, easing the task of predetermining algorithm control parameters. Two DEEP variants are developed and illustrated in the paper. Experiments on the CEC'13 test suites and two practical problems demonstrate that the DEEP algorithms offer promising results compared with the original DEs and other relevant state-of-the-art EAs.
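The evolution-path idea can be sketched in code. The following is a minimal illustrative DE loop in which mutation is biased by a cumulatively learned shift of the population mean; it is not the paper's actual DEEP algorithm, and all parameter values (F, CR, the learning rate c, the path weight w) and the test function are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    """Simple test objective: f(x) = sum of squares, minimum at the origin."""
    return np.sum(x ** 2)

def deep_de_sketch(f, dim=5, pop_size=20, gens=300, F=0.5, CR=0.9, c=0.1, w=0.5):
    """DE/rand/1 with an extra evolution-path term: the path cumulates the
    generation-to-generation shift of the population mean, giving mutation
    a convergence momentum in the CM spirit (illustrative sketch only)."""
    pop = rng.uniform(-5, 5, (pop_size, dim))
    fit = np.array([f(x) for x in pop])
    path = np.zeros(dim)              # cumulatively learned evolution path
    mean_old = pop.mean(axis=0)
    for _ in range(gens):
        for i in range(pop_size):
            a, b, r = rng.choice(pop_size, 3, replace=False)
            # classic difference-vector mutation plus the path bias
            mutant = pop[a] + F * (pop[b] - pop[r]) + w * path
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True      # ensure one crossed dim
            trial = np.where(cross, mutant, pop[i])
            tf = f(trial)
            if tf <= fit[i]:                     # greedy DE selection
                pop[i], fit[i] = trial, tf
        mean_new = pop.mean(axis=0)
        path = (1 - c) * path + c * (mean_new - mean_old)  # cumulative update
        mean_old = mean_new
    return fit.min()

print(deep_de_sketch(sphere))
```

Because the path decays as the population settles, the extra term adds momentum early in the run without destabilizing convergence later.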
A self-learning particle swarm optimizer for global optimization problems
Copyright © 2011 IEEE. All Rights Reserved. This article was made available through the Brunel Open Access Publishing Fund. Particle swarm optimization (PSO) has been shown to be an effective tool for solving global optimization problems. So far, most PSO algorithms use a single learning pattern for all particles, which means that all particles in a swarm use the same strategy. This monotonic learning pattern may cause a lack of intelligence for a particular particle, making it unable to deal with different complex situations. This paper presents a novel algorithm, called the self-learning particle swarm optimizer (SLPSO), for global optimization problems. In SLPSO, each particle has a set of four strategies to cope with different situations in the search space. The cooperation of the four strategies is implemented by an adaptive learning framework at the individual level, which enables a particle to choose the optimal strategy according to its own local fitness landscape. An experimental study on a set of 45 test functions and two real-world problems shows that SLPSO has superior performance in comparison with several other peer algorithms. This work was supported by the Engineering and Physical Sciences Research Council of U.K. under Grants EP/E060722/1 and EP/E060722/2.
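The individual-level adaptive selection among strategies can be illustrated with a small helper. This is a hypothetical sketch of one way such an adaptive learning framework could update a particle's strategy-selection probabilities from observed success rates; the function name, learning rate, and probability floor are assumptions, not SLPSO's actual rules.

```python
import numpy as np

def update_strategy_probs(success, counts, probs, lr=0.1, p_min=0.05):
    """Shift a particle's strategy-selection probabilities toward strategies
    with higher observed success rates, keeping a floor p_min so that no
    strategy is ever abandoned entirely (hypothetical sketch)."""
    rates = success / np.maximum(counts, 1)      # per-strategy success rate
    if rates.sum() > 0:
        target = rates / rates.sum()             # normalized success profile
        probs = (1 - lr) * probs + lr * target   # smooth move toward target
    probs = np.maximum(probs, p_min)             # keep exploration alive
    return probs / probs.sum()                   # renormalize to sum to 1

# Example: four strategies, the first one succeeding most often.
probs = update_strategy_probs(np.array([5.0, 1.0, 0.0, 0.0]),
                              np.array([10.0, 10.0, 10.0, 10.0]),
                              np.full(4, 0.25))
print(probs)
```

The floor keeps every strategy selectable, so a particle can switch back when its local fitness landscape changes.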
SQG-Differential Evolution for difficult optimization problems under a tight function evaluation budget
In the context of industrial engineering, it is important to integrate efficient computational optimization methods into the product development process. Some of the most challenging simulation-based engineering design optimization problems are characterized by a large number of design variables, the absence of analytical gradients, highly non-linear objectives, and a limited function evaluation budget. Although a huge variety of optimization algorithms is available, the development and selection of efficient algorithms for problems with these industrially relevant characteristics remains a challenge. In this communication, a hybrid variant of Differential Evolution (DE) is introduced which combines aspects of Stochastic Quasi-Gradient (SQG) methods within the framework of DE, in order to improve optimization efficiency on problems with the previously mentioned characteristics. The performance of the resulting derivative-free algorithm is compared with other state-of-the-art DE variants on 25 commonly used benchmark functions, under a tight function evaluation budget of 1000 evaluations. The experimental results indicate that the new algorithm performs excellently on the 'difficult' (high-dimensional, multi-modal, inseparable) test functions. The operations used in the proposed mutation scheme are computationally inexpensive and can be easily implemented in existing DE variants or other population-based optimization algorithms with a few lines of program code as a non-invasive optional setting. Besides the applicability of the presented algorithm by itself, the described concepts can serve as a useful and interesting addition to the algorithmic operators in the frameworks of heuristics and evolutionary optimization and computing.
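A derivative-free quasi-gradient direction can indeed be built from already-evaluated population members, which is why such an operator adds no extra function evaluations. The following is a generic illustration of that idea, not the paper's actual SQG-DE mutation scheme: pairwise fitness differences weight the corresponding difference vectors, yielding a rough descent direction.

```python
import numpy as np

def sqg_direction(pop, fit):
    """Estimate a descent direction from a population and its fitness values
    only (no gradients, no new evaluations): each pair contributes a vector
    pointing from the worse member toward the better one, weighted by the
    fitness gap. Illustrative sketch of a stochastic quasi-gradient idea."""
    n = len(pop)
    d = np.zeros(pop.shape[1])
    for i in range(n):
        for j in range(i + 1, n):
            # fit[j] > fit[i] (j worse) gives a step from pop[j] toward pop[i]
            d += (fit[j] - fit[i]) * (pop[i] - pop[j])
    norm = np.linalg.norm(d)
    return d / norm if norm > 0 else d

# On a sphere objective the estimate points roughly toward the minimum.
pop = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
fit = np.array([np.sum(x ** 2) for x in pop])
print(sqg_direction(pop, fit))
```

In a hybrid DE, such a direction could be added to the mutant vector as an optional bias, which is the "few lines of code, non-invasive" character the abstract describes.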
Towards a standard jet definition
In a simulated measurement of the -boson mass, evaluation of Fisher's information shows the optimal jet definition to be physically equivalent to the algorithm while being much faster at large multiplicities. Comment: version to appear in Phys. Rev. Lett., 4 pages.
SZ and CMB reconstruction using Generalized Morphological Component Analysis
In the last decade, the study of cosmic microwave background (CMB) data has
become one of the most powerful tools to study and understand the Universe.
More precisely, measuring the CMB power spectrum leads to the estimation of
most cosmological parameters. Nevertheless, accessing such precious physical
information requires extracting several different astrophysical components from
the data. Recovering those astrophysical sources (CMB, Sunyaev-Zel'dovich
clusters, galactic dust) thus amounts to a component separation problem which
has already led to an intense activity in the field of CMB studies. In this
paper, we introduce a new sparsity-based component separation method coined
Generalized Morphological Component Analysis (GMCA). The GMCA approach is
formulated in a Bayesian maximum a posteriori (MAP) framework. Numerical
results show that this new source recovery technique performs well compared to
state-of-the-art component separation methods already applied to CMB data. Comment: 11 pages - Statistical Methodology - Special Issue on Astrostatistics - in press.
Orthogonal learning particle swarm optimization
Particle swarm optimization (PSO) relies on its learning strategy to guide its search direction. Traditionally, each particle utilizes its historical best experience and its neighborhood's best experience through linear summation. Such a learning strategy is easy to use but is inefficient when searching in complex problem spaces. Hence, designing learning strategies that can utilize previous search information (experience) more efficiently has become one of the most salient and active PSO research topics. In this paper, we propose an orthogonal learning (OL) strategy for PSO to discover more useful information that lies in the above two experiences via orthogonal experimental design. We name this PSO orthogonal learning particle swarm optimization (OLPSO). The OL strategy can guide particles to fly in better directions by constructing a more promising and efficient exemplar. The OL strategy can be applied to PSO with any topological structure. In this paper, it is applied to both the global and local versions of PSO, yielding the OLPSO-G and OLPSO-L algorithms, respectively. This new learning strategy and the new algorithms are tested on a set of 16 benchmark functions and are compared with other PSO algorithms and some state-of-the-art evolutionary algorithms. The experimental results illustrate the effectiveness and efficiency of the proposed learning strategy and algorithms. The comparisons show that OLPSO significantly improves the performance of PSO, offering faster global convergence, higher solution quality, and stronger robustness.
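The core idea of combining two experiences dimension-wise via an orthogonal array can be shown on a toy case. The sketch below uses the standard L4(2^3) two-level orthogonal array for a 3-dimensional problem and simply returns the best-scoring combination; the full OL strategy additionally performs factor analysis, so treat this as a simplified illustration, with the function names and test vectors being assumptions.

```python
import numpy as np

# L4(2^3) orthogonal array: 4 trials covering 3 two-level factors such that
# every pair of factor levels appears equally often across trials.
OA_L4 = np.array([[0, 0, 0],
                  [0, 1, 1],
                  [1, 0, 1],
                  [1, 1, 0]])

def build_exemplar(pbest, gbest, f):
    """Build a guidance exemplar for a 3-dimensional problem: each array row
    picks, per dimension, either the personal best (level 0) or the
    neighborhood best (level 1); the best-scoring combination is returned.
    Simplified sketch of orthogonal-design-based exemplar construction."""
    trials = np.where(OA_L4 == 0, pbest, gbest)   # one candidate per row
    scores = np.array([f(t) for t in trials])
    return trials[scores.argmin()]

def sphere(x):
    return np.sum(x ** 2)

# Neither experience is good on its own, but mixing dimensions finds the optimum.
exemplar = build_exemplar(np.array([0.0, 5.0, 0.0]),
                          np.array([5.0, 0.0, 0.0]), sphere)
print(exemplar)
```

Only 4 evaluations probe the 8 possible dimension-wise combinations, which is the efficiency argument behind using an orthogonal design rather than full enumeration.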
Multiagent cooperation for solving global optimization problems: an extendible framework with example cooperation strategies
This paper proposes the use of multiagent cooperation for solving global optimization problems through the introduction of a new multiagent environment, MANGO. The strength of the environment lies in its flexible structure based on communicating software agents that attempt to solve a problem cooperatively. This structure allows the execution of a wide range of global optimization algorithms described as a set of interacting operations. At one extreme, MANGO welcomes an individual non-cooperating agent, which is basically the traditional way of solving a global optimization problem. At the other extreme, autonomous agents existing in the environment cooperate as they see fit during run time. We explain the development and communication tools provided in the environment, as well as examples of agent realizations and cooperation scenarios. We also show how the multiagent structure is more effective than having a single nonlinear optimization algorithm with randomly selected initial points.
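A MANGO-style cooperation scenario can be sketched in a few lines: several independent searchers periodically broadcast their incumbent best to the others. This toy sketch is not the MANGO API; the class, methods, and the random local search each agent runs are all illustrative assumptions.

```python
import random

class Agent:
    """Toy cooperating optimizer agent: performs random local search and
    adopts better incumbents broadcast by its peers (illustrative only)."""
    def __init__(self, f, x0, step=0.5):
        self.f, self.x, self.step = f, list(x0), step
        self.best = f(x0)

    def local_step(self, rng):
        cand = [xi + rng.uniform(-self.step, self.step) for xi in self.x]
        fc = self.f(cand)
        if fc < self.best:                  # greedy acceptance
            self.x, self.best = cand, fc

    def receive(self, x, fx):
        if fx < self.best:                  # adopt a better peer incumbent
            self.x, self.best = list(x), fx

def cooperate(agents, rounds, rng):
    """Alternate independent search with a broadcast of the best incumbent."""
    for _ in range(rounds):
        for a in agents:
            a.local_step(rng)
        leader = min(agents, key=lambda a: a.best)
        for a in agents:
            a.receive(leader.x, leader.best)
    return min(a.best for a in agents)

def sphere(x):
    return sum(xi * xi for xi in x)

rng = random.Random(0)
agents = [Agent(sphere, [rng.uniform(-5, 5), rng.uniform(-5, 5)])
          for _ in range(5)]
print(cooperate(agents, 200, rng))
```

Even this crude broadcast scheme tends to beat the same searchers run in isolation from random starting points, which mirrors the comparison made in the paper.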