
    Parameter selection and performance comparison of particle swarm optimization in sensor networks localization

    Localization is a key technology in wireless sensor networks. Faced with the challenges of the sensors' limited memory, computational power, and energy, particle swarm optimization has been widely applied to localization in wireless sensor networks, demonstrating better performance than other optimization methods. In particle swarm optimization-based localization algorithms, the variants and parameters must be chosen carefully to achieve the best performance. However, there is little guidance on how to choose these variants and parameters, and no comprehensive performance comparison among particle swarm optimization algorithms exists. The contribution of this paper is three-fold. First, it surveys the popular particle swarm optimization variants and particle swarm optimization-based localization algorithms for wireless sensor networks. Second, it presents parameter selections for nine particle swarm optimization variants and six types of swarm topologies, derived from extensive simulations. Third, it comprehensively compares the performance of these algorithms. The results show that particle swarm optimization with a constriction coefficient using a ring topology outperforms the other variants and swarm topologies, and that it also performs better than the second-order cone programming algorithm.
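The best-performing configuration reported here, constriction-coefficient PSO on a ring (lbest) topology, is compact enough to sketch. The constants chi ≈ 0.7298 and c1 = c2 = 2.05 are the standard Clerc and Kennedy values; the sphere function below stands in for the paper's localization objective (an illustrative assumption, since the actual objective is a range-error function over anchor nodes), and the function name is hypothetical.

```python
import random

def pso_constriction_ring(f, dim, n_particles=20, iters=200, bounds=(-10.0, 10.0), seed=0):
    """Minimize f with constriction-coefficient PSO on a ring topology.

    chi = 0.7298 with c1 = c2 = 2.05 is the standard Clerc-Kennedy setting;
    each particle's neighbourhood best comes from its two ring neighbours
    (lbest), not from the whole swarm (gbest).
    """
    rng = random.Random(seed)
    lo, hi = bounds
    chi, c1, c2 = 0.7298, 2.05, 2.05
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    for _ in range(iters):
        for i in range(n_particles):
            # ring topology: neighbourhood = {i-1, i, i+1} (wrapping around)
            nbrs = [(i - 1) % n_particles, i, (i + 1) % n_particles]
            lb = min(nbrs, key=lambda j: pbest_val[j])
            for d in range(dim):
                vel[i][d] = chi * (vel[i][d]
                                   + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                                   + c2 * rng.random() * (pbest[lb][d] - pos[i][d]))
                # keep particles inside the search bounds
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest_val[i], pbest[i] = val, pos[i][:]
    best = min(range(n_particles), key=lambda i: pbest_val[i])
    return pbest[best], pbest_val[best]
```

A real localization objective would replace `f` with the sum of squared differences between measured and estimated distances to the anchor nodes.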

    An elastic net orthogonal forward regression algorithm

    In this paper we propose an efficient two-level model identification method for a large class of linear-in-the-parameters models from observational data. A new elastic net orthogonal forward regression (ENOFR) algorithm is employed at the lower level to carry out simultaneous model selection and elastic net parameter estimation. The two regularization parameters in the elastic net are optimized at the upper level by a particle swarm optimization (PSO) algorithm that minimizes the leave-one-out (LOO) mean square error (LOOMSE). Illustrative examples are included to demonstrate the effectiveness of the new approach.
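The two-level scheme, an inner elastic-net fit scored by leave-one-out error and an outer search over the two regularization parameters, can be sketched as follows. This is a simplified stand-in, not the paper's method: plain coordinate descent replaces the orthogonal forward regression, and a coarse grid search replaces the PSO upper level; all function names are illustrative.

```python
import random

def elastic_net(X, y, lam1, lam2, iters=200):
    """Coordinate descent for (1/2n)||y - Xb||^2 + lam1*||b||_1 + (lam2/2)*||b||^2."""
    n, p = len(X), len(X[0])
    b = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # correlation of feature j with the partial residual (b_j excluded)
            rho = sum(X[i][j] * (y[i] - sum(X[i][k] * b[k] for k in range(p) if k != j))
                      for i in range(n)) / n
            z = sum(X[i][j] ** 2 for i in range(n)) / n
            # soft-thresholding update
            if rho > lam1:
                b[j] = (rho - lam1) / (z + lam2)
            elif rho < -lam1:
                b[j] = (rho + lam1) / (z + lam2)
            else:
                b[j] = 0.0
    return b

def loo_mse(X, y, lam1, lam2):
    """Leave-one-out mean squared error for a given regularization pair."""
    err = 0.0
    for i in range(len(X)):
        b = elastic_net(X[:i] + X[i + 1:], y[:i] + y[i + 1:], lam1, lam2)
        pred = sum(X[i][j] * b[j] for j in range(len(b)))
        err += (y[i] - pred) ** 2
    return err / len(X)

def select_regularization(X, y, grid=(0.0, 0.01, 0.1, 1.0)):
    """Upper level: pick (lam1, lam2) minimizing LOOMSE (grid search standing in for PSO)."""
    return min(((l1, l2) for l1 in grid for l2 in grid),
               key=lambda pair: loo_mse(X, y, *pair))
```

In the paper the upper-level search is a PSO over the continuous (lam1, lam2) plane rather than a fixed grid; the LOOMSE objective is the same.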

    Particle Swarm Optimization—An Adaptation for the Control of Robotic Swarms

    Particle Swarm Optimization (PSO) is a numerical optimization technique based on the motion of virtual particles within a multidimensional space. The particles explore the space in an attempt to find minima or maxima of the optimization problem. The motion of the particles is linked, and the overall behavior of the particle swarm is controlled by several parameters. PSO has been proposed as a control strategy for physical swarms of robots that are localizing a source; the robots are analogous to the virtual particles. However, previous attempts to achieve this have shown that there are inherent problems. This paper addresses these problems by introducing a modified version of PSO together with new guidelines for parameter selection. The proposed algorithm links the parameters to the velocity and acceleration of each robot, and incorporates obstacle avoidance. Simulation results from both MATLAB and Gazebo show close agreement and demonstrate that the proposed algorithm is capable of effective control of a robotic swarm with obstacle avoidance.

    Full model selection in the space of data mining operators

    We propose a framework and a novel algorithm for the full model selection (FMS) problem. The proposed algorithm, combining genetic algorithms (GA) and particle swarm optimization (PSO), is named GPS (which stands for GA-PSO-FMS); a GA is used to search for the optimal structure of a data mining solution, and PSO is used to search for the optimal parameter set for a particular structure instance. Given a classification or regression problem, GPS outputs an FMS solution as a directed acyclic graph consisting of diverse data mining operators that are applicable to the problem, including data cleansing, data sampling, feature transformation/selection, and algorithm operators. The solution can also be represented graphically in a human-readable form. Experimental results demonstrate the benefit of the algorithm.

    Rock-burst occurrence prediction based on optimized naïve bayes models

    Rock-burst is a common failure in hard-rock civil and mining construction projects, so proper classification and prediction of this phenomenon is of interest. This research presents the development of optimized naïve Bayes models for predicting rock-burst failures in underground projects. The naïve Bayes models were optimized using four weight optimization techniques: forward, backward, particle swarm optimization, and evolutionary. An evolutionary random forest model was developed to identify the most significant input parameters. The maximum tangential stress, elastic energy index, and uniaxial tensile stress were then selected by this feature selection technique (i.e., the evolutionary random forest) to develop the optimized naïve Bayes models. The performance of the models was assessed using various criteria as well as a simple ranking system. The results showed that particle swarm optimization was the most effective technique at improving the accuracy of the naïve Bayes model for rock-burst prediction (cumulative ranking = 21), while the backward technique was the worst weight optimization technique (cumulative ranking = 11). All the optimized naïve Bayes models identified the maximum tangential stress as the most significant parameter in predicting rock-burst failures. These results demonstrate that the particle swarm optimization technique may improve the accuracy of naïve Bayes algorithms in predicting rock-burst occurrence. © 2013 IEEE
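The weight-optimization idea, scaling each feature's contribution to the naïve Bayes likelihood and tuning the weights with PSO, can be sketched as below. This is a hypothetical minimal version, not the paper's exact setup: a Gaussian naïve Bayes whose per-feature log-likelihoods are multiplied by weights in [0, 1], with a small global-best PSO (inertia 0.7, c1 = c2 = 1.5, chosen for illustration) maximizing training accuracy.

```python
import math
import random

def fit_gnb(X, y):
    """Per-class priors and per-feature Gaussian statistics."""
    stats = {}
    for c in set(y):
        Xc = [x for x, t in zip(X, y) if t == c]
        cols = list(zip(*Xc))
        mu = [sum(col) / len(col) for col in cols]
        var = [max(sum((v - m) ** 2 for v in col) / len(col), 1e-9)
               for col, m in zip(cols, mu)]
        stats[c] = (len(Xc) / len(X), mu, var)
    return stats

def predict(stats, x, w):
    """Classify x; each feature's log-likelihood is scaled by its weight w[j]."""
    def score(c):
        prior, mu, var = stats[c]
        return math.log(prior) + sum(
            wj * (-0.5 * math.log(2 * math.pi * v) - (xj - m) ** 2 / (2 * v))
            for wj, xj, m, v in zip(w, x, mu, var))
    return max(stats, key=score)

def pso_weights(stats, X, y, n_particles=10, iters=30, seed=0):
    """Small global-best PSO over feature weights, maximizing training accuracy."""
    rng = random.Random(seed)
    dim = len(X[0])
    acc = lambda w: sum(predict(stats, x, w) == t for x, t in zip(X, y)) / len(X)
    pos = [[rng.random() for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest, pval = [p[:] for p in pos], [acc(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(1.0, max(0.0, pos[i][d] + vel[i][d]))
            a = acc(pos[i])
            if a > pval[i]:
                pval[i], pbest[i] = a, pos[i][:]
                if a > gval:
                    gval, gbest = a, pos[i][:]
    return gbest, gval
```

In the paper's setting the features would be the rock-burst indicators (maximum tangential stress, elastic energy index, and so on), and a held-out set rather than training accuracy would be the safer fitness function.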

    Researches on Hierarchical Bare Bones Particle Swarm Optimization for Single-Objective Optimization Problems

    In experiments and applications, optimization problems aim at finding the best solution among all possible solutions. According to the number of objective functions, optimization problems can be divided into single-objective and multi-objective problems. This thesis focuses on single-objective optimization problems; its purpose is to clarify a means of realizing high search accuracy without parameter adjustment.

    To achieve high-accuracy results for single-objective optimization problems, there are four major points to address: local search ability on unimodal problems, global search ability on multimodal problems, diverse search patterns for different problems, and control of the convergence speed. Population-based methods such as particle swarm optimization (PSO) are often used to solve single-objective optimization problems. However, PSO needs parameter adjustment to perform well, and this adjustment becomes an overhead in engineering applications. The bare bones particle swarm optimization (BBPSO) algorithm is parameter-free, but it cannot change its search pattern to suit different problems, and its convergence is too fast to achieve high-accuracy results. To overcome the shortcomings of existing methods, seven hierarchical strategies are combined with the BBPSO in this thesis. Four of the proposed algorithms are designed with swarm division and converge quickly to the global optimum; the other three are designed with swarm reconstruction, which slows down convergence and handles shifted or rotated problems. Moreover, no parameter adjustment is needed to control the convergence speed.

    First, four algorithms with swarm division are proposed. In the pair-wise bare bones particle swarm optimization (PBBPSO) algorithm, the swarm splits into several search units, with two particles placed in each unit to enhance the local search ability of the swarm. To increase the global search ability, the dynamic allocation bare bones particle swarm optimization (DABBPSO) algorithm is proposed. Particles in DABBPSO are divided into two groups before evaluation according to their personal best positions: the core group (CG) and the edge group (EG). The CG focuses on exploiting the current local optimum, while the EG explores the search area and gives the whole swarm more chances to escape from local optima; the two groups work together to find the global optimum. To solve shifted or rotated problems, traditional methods usually need to increase the population size, which in turn increases computing time. To avoid this, a multilayer structure is used in the triple bare bones particle swarm optimization (TBBPSO) algorithm, which presents high-accuracy results on shifted and rotated problems without increasing the population size. Finally, because the original BBPSO cannot change its search pattern for different problems, a bare bones particle swarm optimization algorithm with dynamic local search (DLS-BBPSO) is proposed; its dynamic local search strategy provides different search patterns for different problems.

    In engineering applications, optimization results can be improved by controlling the convergence speed. Traditional methods need parameter adjustment to do so, and it is difficult to adjust the parameters for every single problem. To solve this, three reorganization strategies are combined with the BBPSO. In the bare bones particle swarm optimization algorithm with co-evaluation (BBPSO-C), a shadow swarm is used to increase the diversity of the original swarm. A dynamic grouping method disperses both the shadow swarm and the original swarm; after dispersion, an exchange process is held between the two swarms, making the original swarm more concentrated and the shadow swarm more scattered. As particles move between the two swarms, BBPSO-C gains the ability to slow down convergence without parameter adjustment. As improving hardware makes long calculations feasible, the dynamic reconstruction bare bones particle swarm optimization (DRBBPSO) algorithm uses a dynamic elite selection strategy to improve swarm diversity; after elite selection, the swarm is reconstructed from the elite particles. Experimental results show that DRBBPSO provides high-accuracy results after a long calculation. Finally, to adapt to different types of optimization problems, a fission-fusion hybrid bare bones particle swarm optimization (FHBBPSO) is proposed. The FHBBPSO combines a fission strategy and a fusion strategy to sample new particle positions: the fission strategy splits the search space, assigning particles to local groups that sample the corresponding regions, while the fusion strategy narrows the search space, gradually merging marginal groups into the central groups until only one group remains. The two strategies work together toward the theoretically best solution, and the FHBBPSO shows excellent results in experiments with multiple optimization functions.

    To conclude, the proposed hierarchical strategies give each of the BBPSO-based variants different search characteristics, enabling them to realize high search accuracy without parameter adjustment.

    Doctoral dissertation (Doctor of Science), Hosei University
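The bare bones PSO underlying all of these variants is itself very short: each particle resamples every coordinate from a Gaussian centred on the midpoint of its personal best and the global best, with standard deviation equal to their distance. A minimal sketch, assuming the standard Kennedy (2003) formulation on a simple test function:

```python
import random

def bbpso(f, dim, n_particles=20, iters=300, bounds=(-10.0, 10.0), seed=0):
    """Bare bones PSO: no velocity term and no tunable coefficients.

    New position per coordinate: Gaussian with mean (pbest + gbest) / 2
    and standard deviation |pbest - gbest|.
    """
    rng = random.Random(seed)
    lo, hi = bounds
    pbest = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    pval = [f(p) for p in pbest]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i in range(n_particles):
            # resample every coordinate; clamp to the search bounds
            x = [min(hi, max(lo, rng.gauss((pbest[i][d] + gbest[d]) / 2.0,
                                           abs(pbest[i][d] - gbest[d]))))
                 for d in range(dim)]
            v = f(x)
            if v < pval[i]:
                pval[i], pbest[i] = v, x
                if v < gval:
                    gval, gbest = v, x[:]
    return gbest, gval
```

The fast convergence criticized in the thesis is visible here: as a particle's personal best approaches the global best, its sampling variance collapses toward zero, which is exactly what the swarm-division and swarm-reconstruction strategies above are designed to counteract.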