Fuzzy Adaptive Tuning of a Particle Swarm Optimization Algorithm for Variable-Strength Combinatorial Test Suite Generation
Combinatorial interaction testing is an important software testing technique
that has seen lots of recent interest. It can reduce the number of test cases
needed by considering interactions between combinations of input parameters.
Empirical evidence shows that it effectively detects faults, in particular, for
highly configurable software systems. In real-world software testing, the input
variables may vary in how strongly they interact; variable-strength
combinatorial interaction testing (VS-CIT) can exploit this for higher
effectiveness. The generation of variable-strength test suites is a
non-deterministic polynomial-time (NP) hard computational problem
\cite{BestounKamalFuzzy2017}. Research has shown that stochastic
population-based algorithms such as particle swarm optimization (PSO) can be
efficient compared to alternatives for VS-CIT problems. Nevertheless, they
require detailed control for the exploitation and exploration trade-off to
avoid premature convergence (i.e. being trapped in local optima) as well as to
enhance the solution diversity. Here, we present a new variant of PSO based on
a Mamdani fuzzy inference system
\cite{Camastra2015,TSAKIRIDIS2017257,KHOSRAVANIAN2016280}, to permit adaptive
selection of its global and local search operations. We detail the design of
this combined algorithm and evaluate it through experiments on multiple
synthetic and benchmark problems. We conclude that fuzzy adaptive selection of
global and local search operations is, at least, feasible as it performs only
second-best to a discrete variant of PSO, called DPSO. In terms of the best
mean test suite size, the fuzzy adaptation even occasionally outperforms DPSO.
We discuss the reasons behind this performance and outline relevant areas of
future work.
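The paper's Mamdani controller is not reproduced here, but the general idea — fuzzify a progress measure, fire a few rules, and defuzzify the result into a PSO control parameter — can be sketched as follows. The membership functions, rule centroids, and PSO constants below are illustrative assumptions, not values from the paper, and the defuzzification is a simplified weighted average of rule centroids rather than a full Mamdani centroid computation:

```python
import random

def sphere(x):
    # simple convex benchmark: global minimum 0 at the origin
    return sum(v * v for v in x)

def tri(x, a, b, c):
    # triangular membership function with corners (a, b, c)
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_inertia(progress):
    """Map normalized search progress in [0, 1] to an inertia weight.
    Rules (illustrative, NOT from the paper):
      progress LOW    -> explore  (w near 0.9)
      progress MEDIUM -> balance  (w near 0.7)
      progress HIGH   -> exploit  (w near 0.4)
    Defuzzified as a weighted average of rule centroids, a common
    simplification of Mamdani centroid defuzzification."""
    mu = (tri(progress, -0.5, 0.0, 0.5),   # LOW
          tri(progress, 0.0, 0.5, 1.0),    # MEDIUM
          tri(progress, 0.5, 1.0, 1.5))    # HIGH
    centroids = (0.9, 0.7, 0.4)
    s = sum(mu)
    return sum(m * c for m, c in zip(mu, centroids)) / s if s else 0.7

def pso(f, dim=5, n=20, iters=200, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pval = [f(p) for p in pos]
    g = min(range(n), key=pval.__getitem__)
    gbest, gval = pbest[g][:], pval[g]
    for t in range(iters):
        w = fuzzy_inertia(t / iters)   # adapt inertia as the run progresses
        for i in range(n):
            for d in range(dim):
                v = (w * vel[i][d]
                     + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                     + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                vel[i][d] = max(-5.0, min(5.0, v))   # clamp velocity
                pos[i][d] += vel[i][d]
            fv = f(pos[i])
            if fv < pval[i]:
                pbest[i], pval[i] = pos[i][:], fv
                if fv < gval:
                    gbest, gval = pos[i][:], fv
    return gbest, gval
```

Early in the run the controller emits a large inertia weight (global search); late in the run a small one (local search), which is the exploitation/exploration trade-off the abstract describes.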
A self-learning particle swarm optimizer for global optimization problems
Copyright © 2011 IEEE. All rights reserved. This article was made available through the Brunel Open Access Publishing Fund. Particle swarm optimization (PSO) has been shown to be an effective tool for solving global optimization problems. So far, most PSO algorithms use a single learning pattern for all particles, which means that all particles in a swarm use the same strategy. This monotonic learning pattern may cause a lack of intelligence for a particular particle, making it unable to deal with different complex situations. This paper presents a novel algorithm, called the self-learning particle swarm optimizer (SLPSO), for global optimization problems. In SLPSO, each particle has a set of four strategies to cope with different situations in the search space. The cooperation of the four strategies is implemented by an adaptive learning framework at the individual level, which enables a particle to choose the optimal strategy according to its own local fitness landscape. An experimental study on a set of 45 test functions and two real-world problems shows that SLPSO has superior performance in comparison with several other peer algorithms. This work was supported by the Engineering and Physical Sciences Research Council of the U.K. under Grants EP/E060722/1 and EP/E060722/2.
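An "adaptive learning framework at the individual level" amounts to a per-particle choice among strategies whose selection probabilities track recent success. The sketch below shows one minimal way to do this; the learning rate, probability floor, and update rule are hypothetical placeholders, not SLPSO's actual update equations:

```python
import random

def choose_strategy(probs, rng):
    # roulette-wheel selection over the strategy probabilities
    r, acc = rng.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

def update_probs(probs, rewards, rate=0.1, floor=0.05):
    # shift probability mass toward strategies that recently improved
    # the particle's fitness; the floor keeps every strategy selectable
    total = sum(rewards) or 1.0
    raw = [max(p + rate * (r / total - p), floor)
           for p, r in zip(probs, rewards)]
    s = sum(raw)
    return [p / s for p in raw]
```

Each particle would keep its own probability vector, so particles sitting in different parts of the fitness landscape drift toward different strategy mixes — the behaviour the abstract attributes to SLPSO.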
Adaptive hybrid optimization strategy for calibration and parameter estimation of physical models
A new adaptive hybrid optimization strategy, entitled squads, is proposed for
complex inverse analysis of computationally intensive physical models. The new
strategy is designed to be computationally efficient and robust in
identification of the global optimum (e.g. maximum or minimum value of an
objective function). It integrates a global Adaptive Particle Swarm
Optimization (APSO) strategy with a local Levenberg-Marquardt (LM) optimization
strategy using adaptive rules based on runtime performance. The global strategy
optimizes the location of a set of solutions (particles) in the parameter
space. The LM strategy is applied only to a subset of the particles at
different stages of the optimization based on the adaptive rules. After the LM
adjustment of the subset of particle positions, the updated particles are
returned to the APSO strategy. The advantages of coupling APSO and LM in the
manner implemented in squads are demonstrated by comparisons of squads
performance against Levenberg-Marquardt (LM), Particle Swarm Optimization
(PSO), Adaptive Particle Swarm Optimization (APSO; the TRIBES strategy), and an
existing hybrid optimization strategy (hPSO). All the strategies are tested on
2D, 5D and 10D Rosenbrock and Griewank polynomial test functions and a
synthetic hydrogeologic application to identify the source of a contaminant
plume in an aquifer. Tests are performed using a series of runs with random
initial guesses for the estimated (function/model) parameters. When both
robustness and efficiency are taken into consideration, squads is observed to
perform better than the other strategies for all test functions and the
hydrogeologic application.
Nonlinear system identification and control using state transition algorithm
By transforming identification and control for nonlinear system into
optimization problems, a novel optimization method named state transition
algorithm (STA) is introduced to solve the problems. In the proposed STA, a
solution to an optimization problem is considered as a state, and the updating
of a solution equates to a state transition, which makes it easy to understand
and convenient to implement. First, the STA is applied to identify the optimal
parameters of the estimated system with previously known structure. With the
accurate estimated model, an off-line PID controller is then designed optimally
by using the STA as well. Experimental results have demonstrated the validity
of the methodology, and comparisons of STA with other optimization algorithms
have shown that STA is a promising alternative method for system
identification and control due to its stronger search ability, faster
convergence rate, and more stable performance.
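STA's "state transitions" are concrete sampling operators applied to the current best solution. The sketch below implements simplified rotation (bounded local step), expansion (Gaussian global step), and axesion (single-axis step) operators with a greedy acceptance loop; the operator constants, the alpha decay schedule, and the omission of the translation operator are all simplifications relative to the published algorithm:

```python
import random

def sphere(x):
    # demo objective: global minimum 0 at the origin
    return sum(v * v for v in x)

def rotation(x, alpha, rng):
    # bounded step whose magnitude shrinks with alpha (local exploitation)
    n = len(x)
    norm = sum(v * v for v in x) ** 0.5 or 1e-12
    return [x[i] + alpha / (n * norm)
            * sum(rng.uniform(-1, 1) * x[j] for j in range(n))
            for i in range(n)]

def expansion(x, gamma, rng):
    # Gaussian multiplicative step on every coordinate (global exploration)
    return [v + gamma * rng.gauss(0, 1) * v for v in x]

def axesion(x, delta, rng):
    # perturb a single randomly chosen axis (helps on separable problems)
    y = list(x)
    d = rng.randrange(len(y))
    y[d] += delta * rng.gauss(0, 1) * y[d]
    return y

def sta_minimize(f, x0, iters=200, se=15, seed=7):
    """Greedy state-transition loop: sample `se` candidate states per
    operator each iteration and keep the best state found so far."""
    rng = random.Random(seed)
    best, fbest = list(x0), f(x0)
    alpha = 1.0
    for _ in range(iters):
        for op, p in ((rotation, alpha), (expansion, 1.0), (axesion, 1.0)):
            for _ in range(se):
                cand = op(best, p, rng)
                fc = f(cand)
                if fc < fbest:
                    best, fbest = cand, fc
        alpha = max(alpha * 0.95, 1e-4)   # shrink the rotation radius over time
    return best, fbest
```

The "easy to understand" claim in the abstract is the point of this structure: each candidate solution is a state, and each operator is just one way of producing the next state from the current one.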
Free Search and Particle Swarm Optimisation applied to Non-constrained Test
This article presents an evaluation of Particle Swarm Optimisation (PSO) with variable inertia weight and Free Search (FS) with variable neighbour space, applied to non-constrained numerical tests. The objectives are to assess how high convergence speed reflects on adaptation to various test problems, and to identify a possible balance between convergence speed and adaptation that allows the algorithms to successfully complete the search process on heterogeneous tasks with limited computational resources, within a reasonable finite time, and with precision acceptable for engineering purposes. Modification strategies of both algorithms are compared in terms of their ability for search-space exploration. Five numerical tests are explored. The achieved experimental results are presented and analysed.
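A "variable inertia weight" in PSO is most commonly a linear decrease from a large to a small value over the run. A one-line sketch; the endpoint values 0.9 and 0.4 are conventional defaults in the PSO literature, not values taken from this article:

```python
def inertia_weight(t, t_max, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight: a large w early favours global
    exploration, a small w late favours local exploitation."""
    return w_max - (w_max - w_min) * t / t_max
```

This schedule is one concrete realisation of the convergence-speed/adaptation balance the abstract investigates: a faster decrease speeds convergence but reduces the time spent exploring.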