411 research outputs found
Impact of noise on a dynamical system: prediction and uncertainties from a swarm-optimized neural network
In this study, an artificial neural network (ANN) based on particle swarm
optimization (PSO) was developed for time series prediction. The hybrid
ANN+PSO algorithm was applied to the Mackey--Glass chaotic time series in the
short term. The prediction performance was evaluated and compared with
other studies available in the literature. We also characterized the
dynamical system by studying the chaotic behaviour of the predicted time
series. Next, the hybrid ANN+PSO algorithm was complemented with
a Gaussian stochastic procedure (called {\it stochastic} hybrid ANN+PSO) in
order to obtain a new estimator of the predictions, which also allowed us to
compute prediction uncertainties for noisy Mackey--Glass chaotic time
series. Thus, we studied the impact of noise for several cases with a white
noise level () from 0.01 to 0.1.
Comment: 11 pages, 8 figures
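The Mackey--Glass series the abstract refers to comes from a delay differential equation; a minimal sketch (not the authors' code) generating it by Euler integration, with the commonly used parameters beta = 0.2, gamma = 0.1, n = 10, tau = 17 assumed here:

```python
def mackey_glass(length, tau=17, beta=0.2, gamma=0.1, n=10, x0=1.2):
    """Return `length` samples of the Mackey--Glass series x(t),
    Euler-integrated with step dt = 1, so the delay is `tau` samples."""
    history = [x0] * (tau + 1)          # constant initial history x(t <= 0) = x0
    series = []
    for _ in range(length):
        x_t, x_lag = history[-1], history[0]
        # Euler step of dx/dt = beta*x(t-tau)/(1 + x(t-tau)^n) - gamma*x(t)
        x_next = x_t + beta * x_lag / (1.0 + x_lag ** n) - gamma * x_t
        history = history[1:] + [x_next]
        series.append(x_next)
    return series

series = mackey_glass(500)
```

White observation noise of the level studied in the paper could then be added on top of `series` before training a predictor.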
A new particle swarm optimization algorithm for neural network optimization
This paper presents a new particle swarm optimization (PSO) algorithm for tuning the parameters (weights) of neural networks. The new algorithm is called fuzzy logic-based particle swarm optimization with cross-mutated operation (FPSOCM): a fuzzy inference system encoding human knowledge determines the inertia weight of PSO and the control parameter of the proposed cross-mutated operation. By introducing the fuzzy system, the value of the inertia weight becomes variable, and the cross-mutated operation effectively forces the solution to escape local optima. Tuning the parameters (weights) of neural networks with FPSOCM is presented, and a numerical example illustrates that FPSOCM performs well on this task
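The two mechanisms can be sketched roughly as follows. This is a simplified illustration, not the paper's algorithm: the fuzzy inference system is replaced by a linearly decreasing stand-in rule for the inertia weight and mutation rate, and a sphere benchmark stands in for the neural-network training objective.

```python
import random

def sphere(x):                      # stand-in objective; the paper tunes NN weights
    return sum(v * v for v in x)

def fpsocm_sketch(f, dim=5, n=30, iters=100, lo=-5.0, hi=5.0, seed=0):
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]                       # personal bests
    pf = [f(x) for x in X]
    g = P[min(range(n), key=lambda i: pf[i])][:]
    gf = min(pf)
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters               # stand-in for the fuzzy inertia rule
        pm = 0.1 * (1 - t / iters)              # stand-in cross-mutation rate control
        for i in range(n):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + 2.0 * rng.random() * (P[i][d] - X[i][d])
                           + 2.0 * rng.random() * (g[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            # cross-mutated operation: occasionally re-seed one dimension
            # across the whole search range to escape local optima
            if rng.random() < pm:
                X[i][rng.randrange(dim)] = rng.uniform(lo, hi)
            fx = f(X[i])
            if fx < pf[i]:
                pf[i], P[i] = fx, X[i][:]
                if fx < gf:
                    g, gf = X[i][:], fx
    return g, gf

best, val = fpsocm_sketch(sphere)
```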
Adaptive particle swarm optimization
An adaptive particle swarm optimization (APSO) with better search efficiency than classical particle swarm optimization (PSO) is presented. More importantly, it can perform a global search over the entire search space with faster convergence speed. The APSO consists of two main steps. First, by evaluating the population distribution and particle fitness, a real-time evolutionary state estimation procedure is performed to identify, in each generation, one of four defined evolutionary states: exploration, exploitation, convergence, and jumping out. This enables automatic control of the inertia weight, acceleration coefficients, and other algorithmic parameters at run time, improving search efficiency and convergence speed. Second, an elitist learning strategy is performed when the evolutionary state is classified as convergence; the strategy acts on the globally best particle so that it can jump out of likely local optima. The APSO has been comprehensively evaluated on 12 unimodal and multimodal benchmark functions, and the effects of parameter adaptation and elitist learning are studied. Results show that APSO substantially enhances the performance of the PSO paradigm in terms of convergence speed, global optimality, solution accuracy, and algorithm reliability. As APSO introduces only two new parameters to the PSO paradigm, it does not introduce additional design or implementation complexity
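The state estimation builds on an evolutionary factor computed from inter-particle distances; a sketch of that factor and of the sigmoid inertia mapping w(f) = 1/(1 + 1.5 e^{-2.6 f}) reported for APSO (treat the exact constants as an assumption here):

```python
import math

def evolutionary_factor(X, g_idx):
    """Evolutionary factor f in [0, 1]: relative mean distance of the
    globally best particle (index g_idx) to the rest of the swarm."""
    n = len(X)
    def mean_dist(i):
        return sum(math.dist(X[i], X[j]) for j in range(n) if j != i) / (n - 1)
    d = [mean_dist(i) for i in range(n)]
    d_min, d_max = min(d), max(d)
    if d_max == d_min:
        return 0.0
    return (d[g_idx] - d_min) / (d_max - d_min)

def adaptive_inertia(f):
    """w(f) = 1/(1 + 1.5*exp(-2.6 f)): varies smoothly between about
    0.4 (convergence state) and 0.9 (exploration/jumping-out states)."""
    return 1.0 / (1.0 + 1.5 * math.exp(-2.6 * f))

# Tight cluster plus one outlier: if the best particle is the outlier,
# the swarm is far from it, signalling exploration / jumping out.
X = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [5.0, 5.0]]
f_best = evolutionary_factor(X, 3)
w = adaptive_inertia(f_best)
```

Thresholding `f_best` into the four states then drives the acceleration-coefficient adjustments the abstract describes.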
A Comparison of Selected Modifications of the Particle Swarm Optimization Algorithm
We compare 27 modifications of the original particle swarm optimization (PSO) algorithm. The analysis evaluated nine basic PSO types, which differ in how the swarm evolution is controlled by various inertia weights and the constriction factor. Each of the basic PSO modifications was analyzed using three different distributed strategies. In the first strategy, the entire swarm population is treated as one unit (OC-PSO); the second strategy periodically partitions the population into equally large complexes according to the particle's functional value (SCE-PSO); and the final strategy periodically splits the swarm population into complexes using random permutation (SCERand-PSO). All variants are tested on 11 benchmark functions prepared for the special session on real-parameter optimization of CEC 2005. It was found that the best modification of the PSO algorithm is a variant with adaptive inertia weight. The best distribution strategy is SCE-PSO, which gives better results than OC-PSO and SCERand-PSO on seven functions. The sphere function showed no significant difference between SCE-PSO and SCERand-PSO. It follows that a shuffling mechanism improves the optimization process
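The two complex-forming strategies can be sketched as follows, under the assumption (standard for SCE-style shuffling) that ranked particles are dealt round-robin so each complex receives a spread of good and bad solutions:

```python
import random

def sce_partition(swarm, fitness, k):
    """SCE-PSO style split: rank particles by fitness (lower is better),
    then deal them round-robin into k complexes."""
    ranked = sorted(range(len(swarm)), key=lambda i: fitness[i])
    return [[swarm[i] for i in ranked[c::k]] for c in range(k)]

def random_partition(swarm, k, seed=0):
    """SCERand-PSO style split: random permutation into k complexes."""
    idx = list(range(len(swarm)))
    random.Random(seed).shuffle(idx)
    return [[swarm[i] for i in idx[c::k]] for c in range(k)]

swarm = list(range(8))
fitness = [7, 6, 5, 4, 3, 2, 1, 0]          # particle 7 is the fittest
complexes = sce_partition(swarm, fitness, 2)
```

After each partition, every complex evolves independently for some generations and the population is then shuffled and re-partitioned.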
Improved particle swarm optimization algorithms for economic load dispatch considering electric market
The economic load dispatch problem under a competitive electric market (ELDCEM) has attracted considerable interest from researchers, and many methods have been proposed to deal with it. In this paper, three versions of the PSO method, namely conventional particle swarm optimization (PSO), PSO with inertia weight (IWPSO), and PSO with constriction factor (CFPSO), are applied to the ELDCEM problem. The core task of the PSO methods is to determine the optimal power output of the generators so as to maximize the total profit of generation companies without violating constraints. The methods are tested on three- and ten-unit systems considering a payment model for power delivered and different constraints. Results obtained from the PSO methods are compared with each other to evaluate their effectiveness and robustness; IWPSO proves superior to the other methods. Moreover, comparison with other reported methods leads to the conclusion that IWPSO is a very strong tool for solving the ELDCEM problem, as it achieves the highest profit with fast convergence and short simulation time
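The constriction factor used by the CFPSO variant follows the Clerc--Kennedy formula chi = 2/|2 - phi - sqrt(phi^2 - 4*phi)| for phi = c1 + c2 > 4; a short sketch (the paper's exact parameter values are not given in the abstract, so c1 = c2 = 2.05 is an assumption):

```python
import math

def constriction(c1=2.05, c2=2.05):
    """Clerc--Kennedy constriction coefficient chi; requires phi = c1 + c2 > 4.
    The velocity update is then v <- chi*(v + c1*r1*(p - x) + c2*r2*(g - x))."""
    phi = c1 + c2
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

chi = constriction()
```

For phi = 4.1 this gives chi of about 0.7298, which damps velocities and guarantees convergence without an explicit velocity clamp.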
Particle Swarm Optimization: Basic Concepts, Variants and Applications in Power Systems
Many areas in power systems require solving one or more nonlinear optimization problems. While analytical methods may suffer from slow convergence and the curse of dimensionality, heuristics-based swarm intelligence can be an efficient alternative. Particle swarm optimization (PSO), part of the swarm intelligence family, is known to effectively solve large-scale nonlinear optimization problems. This paper presents a detailed overview of the basic concepts of PSO and its variants. It also provides a comprehensive survey of the power system applications that have benefited from the power of PSO as an optimization technique. For each application, the technical details required for applying PSO, such as its type, particle formulation (solution representation), and the most efficient fitness functions, are also discussed
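The basic PSO update the survey builds on moves each particle by a velocity combining inertia, a cognitive pull toward its personal best, and a social pull toward the global best. A minimal sketch of one update step, with commonly used constricted parameter values (an assumption, not values from the paper):

```python
import random

W, C1, C2 = 0.7298, 1.4962, 1.4962   # common constricted parameter choice

def pso_step(x, v, pbest, gbest, rng):
    """One PSO step: v <- W*v + C1*r1*(pbest - x) + C2*r2*(gbest - x);
    x <- x + v. Fresh random r1, r2 are drawn per dimension."""
    v_new = [W * vd
             + C1 * rng.random() * (pd - xd)
             + C2 * rng.random() * (gd - xd)
             for vd, xd, pd, gd in zip(v, x, pbest, gbest)]
    return [xd + vd for xd, vd in zip(x, v_new)], v_new

# A particle at (1, 1) with both bests at the origin is pulled toward it.
rng = random.Random(1)
x, v = pso_step([1.0, 1.0], [0.0, 0.0], [0.0, 0.0], [0.0, 0.0], rng)
```

In a power-system application, `x` would encode the solution representation the survey discusses, e.g. generator set-points.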
Nature-inspired algorithms for solving some hard numerical problems
Optimisation is a branch of mathematics developed to find the optimal solutions,
among all possible ones, for a given problem. Optimisation techniques are
currently employed in engineering, computing, and industrial problems. Optimisation is therefore a very active research area, leading to the publication of a large number of
methods for solving specific problems to optimality.
This dissertation focuses on the adaptation of two nature-inspired algorithms that, based
on optimisation techniques, are able to compute approximations to zeros of polynomials
and roots of non-linear equations and systems of non-linear equations.
Although many iterative methods for finding all the roots of a given function already
exist, they usually require: (a) repeated deflations, which can lead to very inaccurate results
due to the accumulation of rounding errors; (b) good initial approximations to the
roots for the algorithm to converge; or (c) the computation of first- or second-order derivatives,
which, besides being computationally intensive, is not always possible.
These drawbacks motivated the use of Particle Swarm
Optimisation (PSO) and Artificial Neural Networks (ANNs) for root-finding, since they are
known, respectively, for their ability to explore high-dimensional spaces (not requiring good
initial approximations) and for their capability to model complex problems. Moreover,
neither method needs repeated deflations or derivative information.
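The root-finding idea described above, treating a root of p as a minimiser of |p(z)| and so needing neither deflation nor derivatives, can be sketched with a plain constricted PSO over the complex plane. This is an illustrative sketch (polynomial z^2 + 1, roots at +/- i), not the dissertation's exact algorithm:

```python
import random

def p(z):
    return z * z + 1.0                  # illustrative polynomial, roots +/- i

def pso_root(f, n=40, iters=200, span=3.0, seed=2):
    """Minimise |f(z)| over the complex plane with a basic constricted PSO.
    For brevity, one r1, r2 pair is shared by both coordinates of a particle."""
    rng = random.Random(seed)
    X = [complex(rng.uniform(-span, span), rng.uniform(-span, span))
         for _ in range(n)]
    V = [0j] * n
    P = X[:]                            # personal bests
    pf = [abs(f(z)) for z in X]
    g = X[min(range(n), key=lambda i: pf[i])]
    gf = min(pf)
    for _ in range(iters):
        for i in range(n):
            r1, r2 = rng.random(), rng.random()
            V[i] = (0.7298 * V[i]
                    + 1.4962 * r1 * (P[i] - X[i])
                    + 1.4962 * r2 * (g - X[i]))
            X[i] = X[i] + V[i]
            fx = abs(f(X[i]))
            if fx < pf[i]:
                P[i], pf[i] = X[i], fx
                if fx < gf:
                    g, gf = X[i], fx
    return g

root = pso_root(p)
```

No initial approximation, deflation, or derivative of p is used at any point, which is exactly the property motivating the dissertation's approach.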
The algorithms are described throughout this document and tested on a suite of
hard numerical problems in science and engineering. The results were compared with
several results available in the literature and with the well-known Durand–Kerner method,
showing that both algorithms are effective at solving the numerical problems considered.
- âŠ