
    Researches on Hierarchical Bare Bones Particle Swarm Optimization for Single-Objective Optimization Problems

    In experiments and applications, optimization problems aim at finding the best solution among all feasible solutions. According to the number of objective functions, optimization problems can be divided into single-objective and multi-objective problems. This thesis focuses on single-objective optimization problems; its purpose is to clarify a means of realizing high search accuracy without parameter adjustment. To achieve high-accuracy results for single-objective optimization problems, four major points must be addressed: local search ability in unimodal problems, global search ability in multimodal problems, diverse search patterns for different problems, and control of the convergence speed. Population-based methods such as particle swarm optimization (PSO) algorithms are often used to solve single-objective optimization problems. However, PSO requires parameter adjustment to perform well, and this adjustment becomes an overhead in engineering applications. The bare bones particle swarm optimization (BBPSO) algorithm, by contrast, is parameter-free, but it cannot change its search pattern to suit different problems, and its convergence is too fast to achieve high-accuracy results. To overcome the shortcomings of existing methods and obtain high-accuracy results for single-objective optimization problems, seven different hierarchical strategies are combined with the BBPSO in this thesis. Four of the proposed algorithms are designed with swarm division and are able to converge to the global optimum quickly. The other three are designed with swarm reconstruction, which slows down convergence and helps solve shifted or rotated problems. Moreover, no parameter adjustment is needed to control the convergence speed. First of all, four algorithms with swarm division are proposed.
In the pair-wise bare bones particle swarm optimization (PBBPSO) algorithm, the swarm splits into several search units. Two particles are placed in each unit to enhance the local search ability of the swarm. To increase the global search ability, the dynamic allocation bare bones particle swarm optimization (DABBPSO) algorithm is proposed. Particles in DABBPSO are divided into two groups before evaluation according to their personal best positions. One group is named the core group (CG) and the other the edge group (EG). The CG focuses on exploitation, trying to find the optimal point within the current local optimum. Conversely, the EG explores the search area and gives the whole swarm more chances to escape from local optima. The two groups work together to find the global optimum in the search area. To solve shifted or rotated problems, traditional methods usually need to increase the population size. However, a larger population may increase the computing time. To overcome this shortcoming, a multilayer structure is used in the triple bare bones particle swarm optimization (TBBPSO) algorithm. The TBBPSO achieves high-accuracy results on shifted and rotated problems without increasing the population size. In real-world applications, optimization methods are required to solve different types of optimization problems. However, the original BBPSO cannot change its search pattern to suit different problems. To solve this problem, a bare bones particle swarm optimization algorithm with dynamic local search (DLS-BBPSO) is proposed. The dynamic local search strategy provides different search patterns for different problems. In engineering applications, optimization results can be improved by controlling the convergence speed. Traditional methods normally need parameter adjustment to control the convergence speed, and it is difficult to adjust the parameters for every single problem.
To solve this problem, three different reorganization strategies are combined with the BBPSO. In the bare bones particle swarm optimization algorithm with co-evaluation (BBPSO-C), a shadow swarm is used to increase the diversity of the original swarm. A dynamic grouping method disperses both the shadow swarm and the original swarm, after which an exchange process is performed between the two swarms: the original swarm becomes more concentrated and the shadow swarm more scattered. By moving particles between the two swarms, the BBPSO-C gains the ability to slow down convergence without parameter adjustment. With the improvement of computing technologies, it is possible to obtain high-accuracy results through longer calculations. In the dynamic reconstruction bare bones particle swarm optimization (DRBBPSO) algorithm, a dynamic elite selection strategy is used to improve the diversity of the swarm. After elite selection, the swarm is reconstructed from the elite particles. According to experimental results, the DRBBPSO provides high-accuracy results after a long calculation. To adapt to different types of optimization problems, a fission-fusion hybrid bare bones particle swarm optimization (FHBBPSO) algorithm is proposed. The FHBBPSO combines a fission strategy and a fusion strategy to sample new particle positions. The fission strategy aims at splitting the search space: particles are assigned to different local groups to sample the corresponding regions. The fusion strategy, on the other hand, aims at narrowing the search space: marginal groups are gradually merged into the central groups until only one group is left. The two strategies work together toward the theoretically best solution.
The FHBBPSO shows excellent results in experiments with multiple optimization functions. To conclude, the proposed hierarchical strategies provide each of the BBPSO-based variants with different search characteristics, enabling them to realize high search accuracy without parameter adjustment. Doctoral dissertation (Doctor of Science), Hosei University.
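All of the variants above build on the same parameter-free sampling rule of the bare bones PSO, in which each particle draws its next position from a Gaussian centered midway between its personal best and the global best, with a spread equal to their per-dimension gap. The following is a minimal Python sketch of that baseline (the standard Kennedy-style formulation, not any of the thesis's hierarchical variants; the search range and sphere objective in the test are illustrative assumptions):

```python
import numpy as np

def bbpso(objective, dim, n_particles=30, iters=200, seed=0):
    """Minimal bare bones PSO: each particle samples its next position
    from a Gaussian built from its personal best and the global best."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5.0, 5.0, (n_particles, dim))   # initial positions
    pbest = pos.copy()                                  # personal bests
    pbest_val = np.apply_along_axis(objective, 1, pbest)
    gbest = pbest[pbest_val.argmin()].copy()            # global best
    for _ in range(iters):
        mu = (pbest + gbest) / 2.0      # Gaussian mean: midpoint
        sigma = np.abs(pbest - gbest)   # Gaussian std: per-dimension gap
        pos = rng.normal(mu, sigma)     # parameter-free position update
        val = np.apply_along_axis(objective, 1, pos)
        improved = val < pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = val[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, float(pbest_val.min())
```

Note that the sampling spread collapses as the personal bests agree with the global best, which is exactly the fast convergence the reconstruction-based variants above are designed to slow down.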

    A Comprehensive Survey on Particle Swarm Optimization Algorithm and Its Applications

    Particle swarm optimization (PSO) is a heuristic global optimization method, originally proposed by Kennedy and Eberhart in 1995. It is now one of the most commonly used optimization techniques. This survey presents a comprehensive investigation of PSO. On one hand, we review advances in PSO, including its modifications (quantum-behaved PSO, bare-bones PSO, chaotic PSO, and fuzzy PSO), population topologies (fully connected, von Neumann, ring, star, random, etc.), hybridizations (with genetic algorithms, simulated annealing, Tabu search, artificial immune systems, ant colony algorithms, artificial bee colony, differential evolution, harmonic search, and biogeography-based optimization), extensions (to multiobjective, constrained, discrete, and binary optimization), theoretical analysis (parameter selection and tuning, and convergence analysis), and parallel implementations (in multicore, multiprocessor, GPU, and cloud computing forms). On the other hand, we survey applications of PSO in nine fields: electrical and electronic engineering, automation control systems, communication theory, operations research, mechanical engineering, fuel and energy, medicine, chemistry, and biology. It is hoped that this survey will be beneficial for researchers studying PSO algorithms.
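For reference, the canonical inertia-weight PSO update that the surveyed modifications build on can be sketched as follows. This is a minimal illustration with Clerc-style constriction coefficients; the objective, bounds, and parameter values are assumptions for the sketch, not prescriptions from the survey:

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200,
        w=0.729, c1=1.49445, c2=1.49445, seed=0):
    """Canonical global-best PSO: velocity blends inertia, a cognitive
    pull toward each particle's pbest, and a social pull toward gbest."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5.0, 5.0, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_val = np.apply_along_axis(objective, 1, pbest)
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))  # fresh random factors
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        val = np.apply_along_axis(objective, 1, pos)
        improved = val < pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = val[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, float(pbest_val.min())
```

Most of the surveyed variants change one of these three ingredients: the coefficients (adaptive or fuzzy schemes), the neighborhood that supplies gbest (topologies), or the update rule itself (quantum-behaved and bare-bones forms).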

    Genetic learning particle swarm optimization

    Social learning in particle swarm optimization (PSO) helps collective efficiency, whereas individual reproduction in genetic algorithms (GA) facilitates global effectiveness. This observation has recently led to hybridizing PSO with GA for performance enhancement. However, existing work uses a mechanistic parallel superposition, and research has shown that constructing superior exemplars in PSO is more effective. Hence, this paper first develops a new framework to organically hybridize PSO with another optimization technique for “learning.” This leads to a generalized “learning PSO” paradigm, the *L-PSO. The paradigm is composed of two cascading layers: the first for exemplar generation and the second for particle updates, as in a normal PSO algorithm. Using genetic evolution to breed promising exemplars for PSO, a specific novel *L-PSO algorithm is proposed in the paper, termed genetic learning PSO (GL-PSO). In particular, genetic operators are used to generate the exemplars from which particles learn and, in turn, the historical search information of the particles guides the evolution of the exemplars. By performing crossover, mutation, and selection on the historical information of the particles, the constructed exemplars are not only well diversified but also of high quality. Under such guidance, both the global search ability and the search efficiency of PSO are enhanced. The proposed GL-PSO is tested on 42 benchmark functions widely adopted in the literature. Experimental results verify the effectiveness, efficiency, robustness, and scalability of the GL-PSO.
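The exemplar-breeding layer can be illustrated with a small sketch. The crossover, mutation, and selection operators below are simplified stand-ins chosen for clarity, not the exact GL-PSO operators from the paper; the bounds and mutation rate are assumptions:

```python
import numpy as np

def breed_exemplar(pbest_i, gbest, old_ex, old_val, objective,
                   pm=0.1, bounds=(-5.0, 5.0), rng=None):
    """One exemplar update in the GL-PSO spirit.

    Crossover: per-dimension arithmetic mix of this particle's pbest
    with gbest. Mutation: with probability pm, resample a dimension
    uniformly within bounds. Selection: keep the offspring only if it
    beats the particle's current exemplar."""
    if rng is None:
        rng = np.random.default_rng()
    r = rng.random(pbest_i.shape)
    child = r * pbest_i + (1.0 - r) * gbest          # arithmetic crossover
    mut = rng.random(pbest_i.shape) < pm             # mutation mask
    child[mut] = rng.uniform(*bounds, mut.sum())     # uniform mutation
    child_val = objective(child)
    if child_val < old_val:                          # greedy selection
        return child, child_val
    return old_ex, old_val
```

In the full algorithm each particle then learns from its own exemplar in place of the usual pbest/gbest pair, so exemplar quality directly shapes the swarm's search direction.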

    AN ADAPTIVE LOCALIZATION SYSTEM USING PARTICLE SWARM OPTIMIZATION IN A CIRCULAR DISTRIBUTION FORM

    Tracking the user location in indoor environments has become a substantial issue in recent research. High accuracy and fast convergence are very important for a good localization system. One technique used in localization systems is particle swarm optimization (PSO), a stochastic optimization method based on the movement and velocity of particles. In this paper, we introduce a PSO-based algorithm for indoor localization. The proposed algorithm uses PSO to generate several particles in a circular distribution around one access point (AP), such that the distance from each particle to the AP equals the distance from the AP to the target. The particle that achieves the correct distances (the distances from each AP to the target) is selected as the target. Four PSO variants, namely standard PSO (SPSO), linearly decreasing inertia weight PSO (LDIW-PSO), self-organizing hierarchical PSO with time-varying acceleration coefficients (HPSO-TVAC), and constriction factor PSO (CFPSO), are used to find the minimum distance error. The simulation results show that the proposed method with the HPSO-TVAC variant achieves a very low distance error of 0.19 meters.
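The core of such a localization scheme, picking the position whose distances to all APs best match the measured distances, can be sketched with a plain inertia-weight PSO. The AP layout, search bounds, and coefficients below are illustrative assumptions, not the paper's setup or its HPSO-TVAC variant:

```python
import numpy as np

def locate_target(aps, dists, n_particles=50, iters=150, seed=0):
    """Estimate a 2-D position from measured AP distances by minimizing
    the total mismatch between candidate and measured distances."""
    def err(p):  # sum of absolute distance errors over all APs
        return float(np.sum(np.abs(np.linalg.norm(aps - p, axis=1) - dists)))
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0.0, 10.0, (n_particles, 2))   # assumed 10 m x 10 m room
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([err(p) for p in pbest])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 2))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        val = np.array([err(p) for p in pos])
        improved = val < pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = val[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest
```

With three or more non-collinear APs the error surface has a single sharp minimum at the true position, which is why even this basic variant localizes well; the paper's comparison concerns how quickly and accurately different PSO variants reach it.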

    Optimization of Bi-Directional V2G Behavior With Active Battery Anti-Aging Scheduling


    Improving Robustness in Social Fabric-based Cultural Algorithms

    In this thesis, we propose two new approaches that aim at improving robustness in social fabric-based cultural algorithms (CAs). Robustness is one of the most significant issues when designing evolutionary algorithms, which should be capable of adapting themselves to various search landscapes. In the first approach, we utilize the dynamics of social interactions to solve complex and multi-modal problems. In the literature on cultural algorithms, the social fabric has been suggested as a way to use social phenomena to improve the search process of CAs. In this research, we introduce Irregular Neighborhood Restructuring as a new adaptive method that allows individuals to rearrange their neighborhoods to avoid local optima or stagnation during the search process. In the second approach, we apply the concept of confidence intervals from inferential statistics to improve the performance of knowledge sources in the belief space. This approach aims at improving the robustness and accuracy of the normative knowledge source, making it more stable against sudden changes in the values of incoming solutions. The IEEE CEC2015 benchmark optimization functions, a set of 15 multi-modal and hybrid functions used as a standard benchmark for optimization algorithms, are used to evaluate our proposed methods against the standard versions of CA and Social Fabric. We observed that both proposed approaches produce promising results on the majority of the benchmark functions. Finally, our proposed strategies enhance the robustness of social fabric-based CAs against challenges such as multi-modality, copious local optima, and diverse landscapes.
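The confidence-interval idea for the normative knowledge source can be sketched as follows: instead of bounding the promising range by the raw minimum and maximum of recent good solutions (which a single outlier can blow up), bound it by a confidence interval around their mean. This is an illustrative normal-approximation sketch under assumed inputs, not the thesis's exact formulation:

```python
import numpy as np

def normative_range(values, z=1.96):
    """Confidence-interval bounds for one normative-knowledge variable.

    values: recent good solutions' values for this variable.
    z: normal-approximation critical value (1.96 for a 95% interval).
    Returns (lower, upper) bounds for the promising range."""
    v = np.asarray(values, dtype=float)
    mean = v.mean()
    sem = v.std(ddof=1) / np.sqrt(v.size)   # standard error of the mean
    return mean - z * sem, mean + z * sem
```

Because the interval scales with the standard error rather than the extremes, a sudden outlier among incoming solutions widens it only modestly, which is the stability property the second approach targets.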