Orthogonal learning particle swarm optimization
Particle swarm optimization (PSO) relies on its
learning strategy to guide its search direction. Traditionally,
each particle utilizes its historical best experience and its neighborhood’s
best experience through linear summation. Such a
learning strategy is easy to use, but is inefficient when searching
in complex problem spaces. Hence, designing learning strategies
that can utilize previous search information (experience) more
efficiently has become one of the most salient and active PSO
research topics. In this paper, we proposes an orthogonal learning
(OL) strategy for PSO to discover more useful information that
lies in the above two experiences via orthogonal experimental
design. We name this PSO as orthogonal learning particle swarm
optimization (OLPSO). The OL strategy can guide particles to
fly in better directions by constructing a much promising and
efficient exemplar. The OL strategy can be applied to PSO with
any topological structure. In this paper, it is applied to both global
and local versions of PSO, yielding the OLPSO-G and OLPSOL
algorithms, respectively. This new learning strategy and the
new algorithms are tested on a set of 16 benchmark functions, and
are compared with other PSO algorithms and some state of the
art evolutionary algorithms. The experimental results illustrate
the effectiveness and efficiency of the proposed learning strategy
and algorithms. The comparisons show that OLPSO significantly
improves the performance of PSO, offering faster global convergence,
higher solution quality, and stronger robustness
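The orthogonal-learning idea can be sketched concretely: a two-level orthogonal array decides, dimension by dimension, whether the exemplar copies the particle's personal best or its neighborhood best, and a factor analysis over the tested combinations predicts the best mix. The sketch below is only a minimal illustration of that construction under a minimization assumption; the function names and the Hadamard-based array builder are our own choices, not taken from the paper.

```python
def orthogonal_array(n_factors):
    """Two-level orthogonal array L_N(2^(N-1)) restricted to n_factors columns.

    Standard Hadamard construction: entry (i, j) is the parity of the
    bitwise AND of the row index and the column mask.
    """
    k = 1
    while (1 << k) - 1 < n_factors:  # smallest N = 2^k with N-1 >= n_factors
        k += 1
    N = 1 << k
    return [[bin(i & j).count("1") % 2 for j in range(1, n_factors + 1)]
            for i in range(N)]


def ol_exemplar(pbest, gbest, fitness):
    """Build an orthogonal-learning exemplar from two guidance vectors.

    Level 0 in a column means "take this dimension from pbest",
    level 1 means "take it from gbest" (minimization assumed).
    """
    D = len(pbest)
    oa = orthogonal_array(D)
    candidates = [[pbest[d] if row[d] == 0 else gbest[d] for d in range(D)]
                  for row in oa]
    scores = [fitness(c) for c in candidates]
    # Factor analysis: per dimension, keep the source with the better mean score.
    predicted = []
    for d in range(D):
        f0 = [s for s, row in zip(scores, oa) if row[d] == 0]
        f1 = [s for s, row in zip(scores, oa) if row[d] == 1]
        predicted.append(pbest[d] if sum(f0) / len(f0) <= sum(f1) / len(f1)
                         else gbest[d])
    # Keep whichever is better: the predicted combination or the best tested row.
    best_row = candidates[min(range(len(scores)), key=scores.__getitem__)]
    return predicted if fitness(predicted) <= fitness(best_row) else best_row
```

With a sphere objective and guidance vectors that are each good in different dimensions, the factor analysis recombines the good coordinates of both into a single exemplar.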
A theoretical and empirical study on unbiased boundary-extended crossover for real-valued representation
Copyright © 2012 Elsevier. NOTICE: this is the author's version of a work that was accepted for publication in Information Sciences. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Information Sciences Vol. 183 Issue 1 (2012), DOI: 10.1016/j.ins.2011.07.013
We present a new crossover operator for real-coded genetic algorithms employing a novel methodology to remove the inherent bias of pre-existing crossover operators. This is done by transforming the topology of the hyper-rectangular real space, gluing opposite boundaries together, and designing a boundary extension method that makes the fitness function smooth at the glued boundary. We show the advantages of the proposed crossover by comparing its performance with that of existing operators on test functions commonly used in the literature and on a nonlinear regression over a real-world dataset.
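The gluing construction can be pictured with a toy operator: treat each gene's interval [lo, hi) as a circle by identifying its endpoints, blend the parents along the shorter arc between them, and wrap the result back into the box. This is only an illustrative sketch of the topology transformation, not the paper's operator; the `toroidal_blend` name and the BLX-style sampling are our own assumptions.

```python
import random


def toroidal_blend(p1, p2, lo, hi, alpha=0.5, rng=random):
    """BLX-style blend on the torus obtained by gluing lo and hi (sketch).

    Blending along the shorter arc means offspring near one boundary can
    legitimately land near the opposite boundary, which removes the bias
    that a flat interval imposes against boundary regions.
    """
    period = hi - lo
    child = []
    for a, b in zip(p1, p2):
        # Signed shortest displacement from a to b on the circle.
        d = (b - a) % period
        if d > period / 2:
            d -= period
        # Sample around the short arc between the two parents.
        t = rng.uniform(-alpha, 1 + alpha)
        c = a + t * d
        child.append(lo + (c - lo) % period)  # wrap back into [lo, hi)
    return child
```

For parents at 1.0 and 9.0 in [0, 10), the short arc crosses the glued boundary, so offspring cluster around the seam instead of being pulled toward the interval's interior.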
Leo: Lagrange Elementary Optimization
Global optimization problems are frequently solved with practical and
efficient evolutionary algorithms, but their efficacy and scalability
degrade as the underlying problem becomes more complex. The purpose
of this research is therefore to introduce Lagrange Elementary Optimization (Leo),
a self-adaptive evolutionary method inspired by the remarkable
accuracy of vaccination as measured through the albumin quotient of human blood.
The algorithm develops intelligent agents using their fitness function values after gene
crossover, and these genes direct the search agents during both exploration and
exploitation. The main objective of the Leo algorithm is presented in this
paper along with the inspiration and motivation for the concept. To demonstrate
its precision, the proposed algorithm is validated against a variety of test
functions, including 19 traditional benchmark functions and the CEC-C06 2019
test functions. On the 19 classic benchmark functions, Leo is
evaluated against DA, PSO, and GA separately, and then against two further recent
algorithms, FDO and LPB. In addition, Leo is tested on the ten CEC-C06 2019
functions against the DA, WOA, SSA, FDO, LPB, and FOX algorithms. The cumulative
outcomes demonstrate Leo's capacity to improve the starting population and move
toward the global optimum. Several standard measurements are used to verify the
stability of Leo in both the exploration and exploitation phases, and
statistical analysis supports the findings of the proposed research.
Finally, novel applications in the real world are introduced to demonstrate the
practicality of Leo.
A Line Flow Granular Computing Approach for Economic Dispatch with Line Constraints
© 2017 IEEE. Line flow calculation plays a critical role in guaranteeing the stable operation of a power system in economic dispatch (ED) problems with line constraints. This paper presents a line flow granular computing approach for power flow calculation to support the investigation of ED with line constraints, where a hierarchy method divides the power network into multiple layers to reduce computational complexity. Each layer contains granules for granular computing, and the layer network is reduced by a Ward equivalent that retains the PV nodes and the boundary nodes of tie lines to decrease the data dimension. The Newton-Raphson method is then applied to calculate the power line flows within each layer. The approach is tested on the IEEE 39-bus and 118-bus systems. The testing results show that the granular computing approach solves the line flow problem in 9.2 s for the IEEE 118-bus system, while the conventional AC method needs 44.56 s. The maximum relative error of the granular computing approach in the line flow tests is only 0.43%, which is small and acceptable. The case studies therefore demonstrate that the proposed granular computing approach is correct and effective, and that it ensures the accuracy and efficiency of power line flow calculation.
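The Ward-equivalent step described above is, at its core, a Schur-complement (Kron) reduction of the nodal admittance matrix: external buses are eliminated and their coupling is folded into the retained boundary buses. A minimal sketch, assuming a DC-style real susceptance matrix and numpy; the function name is ours, and this omits the equivalent-injection terms that a full Ward reduction also carries:

```python
import numpy as np


def ward_reduce(Y, boundary):
    """Kron/Ward reduction: eliminate all buses not listed in `boundary`.

    Y        : full (n x n) nodal admittance (here: DC susceptance) matrix
    boundary : indices of the buses to retain
    Returns the reduced matrix seen from the retained buses:
        Y_red = Y_bb - Y_be @ inv(Y_ee) @ Y_eb
    """
    n = Y.shape[0]
    keep = set(boundary)
    b = np.array(sorted(keep))                       # retained buses
    e = np.array([i for i in range(n) if i not in keep])  # eliminated buses
    Y_bb = Y[np.ix_(b, b)]
    Y_be = Y[np.ix_(b, e)]
    Y_eb = Y[np.ix_(e, b)]
    Y_ee = Y[np.ix_(e, e)]
    # Schur complement; solve() avoids forming the explicit inverse.
    return Y_bb - Y_be @ np.linalg.solve(Y_ee, Y_eb)
```

Eliminating the middle bus of a three-bus chain with unit susceptances yields the expected series equivalent of 0.5 p.u. between the two end buses.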
A Discrete Particle Swarm Optimizer for the Design of Cryptographic Boolean Functions
A Particle Swarm Optimizer for the search of balanced Boolean functions with good cryptographic properties is proposed in this paper. The algorithm is a modified version of the permutation PSO by Hu, Eberhart and Shi, which preserves the Hamming weight of the particles' positions, coupled with the Hill Climbing method devised by Millan, Clark and Dawson to improve the nonlinearity and the deviation from correlation immunity of Boolean functions. The parameters of the PSO velocity equation are tuned by means of two meta-optimization techniques, namely Local Unimodal Sampling (LUS) and Continuous Genetic Algorithms (CGA), and CGA is found to produce better results. Using the CGA-evolved parameters, the PSO algorithm is then run on the spaces of Boolean functions from to variables. The results of the experiments are reported, and it is observed that this new PSO algorithm generates Boolean functions featuring similar or better combinations of nonlinearity, correlation immunity, and propagation criterion than those obtained by other optimization methods.
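The nonlinearity objective used in such searches is conventionally computed from the Walsh-Hadamard spectrum of the function's truth table, via NL(f) = 2^(n-1) - max|W_f(a)| / 2. The abstract does not give its implementation, so the helper names below are our own; the fast in-place butterfly is the standard O(N log N) transform.

```python
def walsh_hadamard(tt):
    """In-place fast Walsh-Hadamard transform of a ±1 truth table (length 2^n)."""
    w = list(tt)
    n = len(w)
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                x, y = w[j], w[j + h]
                w[j], w[j + h] = x + y, x - y  # butterfly step
        h *= 2
    return w


def nonlinearity(truth_table):
    """NL(f) = 2^(n-1) - max|W_f| / 2 for an n-variable Boolean function.

    `truth_table` is the 0/1 output vector of f in lexicographic input order.
    """
    signed = [1 - 2 * b for b in truth_table]  # map 0/1 -> +1/-1
    spectrum = walsh_hadamard(signed)
    return len(truth_table) // 2 - max(abs(v) for v in spectrum) // 2
```

On two variables, AND reaches the bent bound NL = 1, while any affine function scores 0, which is exactly the distance-to-affine interpretation of nonlinearity.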