14 research outputs found

    A hybrid bacterial foraging and modified particle swarm optimization for model order reduction

    This paper studies model reduction procedures used to reduce large-scale dynamic models, described by differential and algebraic equations, into smaller ones. A confirmed correspondence between the two models exists, and the reduced model exhibits the same characteristics under study. These reduction procedures are generally used to mitigate computational complexity, facilitate system analysis, and thereby reduce time and cost. The paper presents a study of the impact of combining Bacterial Foraging (BF) and Modified Particle Swarm Optimization (MPSO) for obtaining the reduced-order model (ROM). The proposed hybrid algorithm (BF-MPSO) is comprehensively compared with the BF and MPSO algorithms, as well as with selected existing techniques.
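    The abstract does not spell out the search mechanics, so the following is a minimal, hypothetical sketch of how a PSO-style optimizer (standing in for the hybrid BF-MPSO search) could fit the coefficients of a second-order reduced model by minimizing the integral-squared error between full- and reduced-order step responses; the example full-order system, the objective, and the parameter bounds are illustrative assumptions, not the paper's setup.

```python
# Hypothetical sketch: fitting a second-order reduced transfer function with a plain
# PSO loop (a stand-in for the hybrid BF-MPSO search). The example full-order system,
# the ISE objective, and the parameter bounds are illustrative assumptions.
import numpy as np
from scipy import signal

# Example full-order model: poles at -1, -2, -3, -4 (chosen only for illustration)
full = signal.TransferFunction([10, 50], [1, 10, 35, 50, 24])
t = np.linspace(0, 10, 500)
_, y_full = signal.step(full, T=t)

def ise(params):
    """Integral-squared error between full- and reduced-order step responses."""
    b0, a1, a0 = params
    if a1 <= 0 or a0 <= 0:                      # keep the reduced model stable
        return np.inf
    _, y_red = signal.step(signal.TransferFunction([b0], [1, a1, a0]), T=t)
    return float(np.sum((y_full - y_red) ** 2) * (t[1] - t[0]))

# Plain global-best PSO over the three reduced-model coefficients
rng = np.random.default_rng(0)
n, dim, iters = 30, 3, 200
x = rng.uniform(0.1, 10.0, (n, dim))
v = np.zeros((n, dim))
pbest, pbest_f = x.copy(), np.array([ise(p) for p in x])
gbest = pbest[pbest_f.argmin()].copy()
for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = x + v
    f = np.array([ise(p) for p in x])
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()
print("reduced model: b0, a1, a0 =", gbest, " ISE =", pbest_f.min())
```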

    Dual sub-swarm interaction QPSO algorithm based on different correlation coefficients

    A novel quantum-behaved particle swarm optimization (QPSO) algorithm, the dual sub-swarm interaction QPSO algorithm based on different correlation coefficients (DCC-QPSO), is proposed by constructing master-slave sub-swarms with different potential well centres. In the novel algorithm, the master sub-swarm and the slave sub-swarm play different roles during the evolutionary process through separate information-processing strategies. The master sub-swarm is conducive to maintaining population diversity and enhancing the global search ability of the particles. The slave sub-swarm accelerates the convergence rate and strengthens the particles' local search ability. By exploiting the critical information contained in the search space and the results of the basic QPSO algorithm, the new algorithm avoids the rapid loss of swarm diversity and enhances search ability through collaboration between the sub-swarms. Experimental results on six test functions show that DCC-QPSO outperforms the traditional QPSO algorithm in optimizing multimodal functions, with improvements in both convergence speed and precision.
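    As an illustration of the underlying QPSO mechanics referred to above, the sketch below runs a standard QPSO position update with the swarm split into two halves; the paper's correlation-coefficient construction of the master and slave potential well centres is not reproduced, and the smaller contraction coefficient used for the slave half is only an assumed stand-in for its faster local convergence.

```python
# Illustrative QPSO update with the swarm split into two halves (not the DCC-QPSO
# construction itself): the slave half is given a smaller contraction coefficient
# as an assumed stand-in for its faster local convergence.
import numpy as np

def sphere(x):                                  # toy objective
    return np.sum(x ** 2, axis=-1)

rng = np.random.default_rng(1)
n, dim, iters, beta = 40, 10, 300, 0.75
half = n // 2                                   # first half: master, second half: slave
x = rng.uniform(-5, 5, (n, dim))
pbest, pbest_f = x.copy(), sphere(x)
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    mbest = pbest.mean(axis=0)                  # mean-best position of the whole swarm
    phi = rng.random((n, dim))
    well = phi * pbest + (1 - phi) * gbest      # local attractor (potential well centre)
    b = np.full((n, 1), beta)
    b[half:] = 0.5 * beta                       # slave sub-swarm contracts faster (assumed)
    u = rng.random((n, dim))
    sign = np.where(rng.random((n, dim)) < 0.5, -1.0, 1.0)
    x = well + sign * b * np.abs(mbest - x) * np.log(1.0 / u)   # QPSO position update
    f = sphere(x)
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()
print("best value:", pbest_f.min())
```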

    TPPSO: A Novel Two-Phase Particle Swarm Optimization

    Particle swarm optimization (PSO) is a robust and fast search algorithm that has been used in various applications. Nevertheless, its major drawback is the stagnation problem that arises in the later stages of the search process. To solve this problem, a proper balance between exploration and exploitation should be maintained throughout the search. This article proposes a new PSO variant named two-phase PSO (TPPSO). The idea of TPPSO is to split the search process into two phases. The first phase performs the original PSO operations with a linearly decreasing inertia weight, and its objective is to focus on exploration. The second phase focuses on exploitation by generating, in each iteration, two random positions close to the global best position. The two generated positions are compared with the global best position sequentially; if a generated position performs better than the global best position, it replaces it. To prove the effectiveness of the proposed algorithm, sixteen popular unimodal, multimodal, shifted, and rotated benchmark functions are used to compare its performance with existing well-known PSO variants and non-PSO algorithms. Simulation results show that TPPSO outperforms the other modified and hybrid PSO variants in solution quality, convergence speed, and robustness. The convergence speed of TPPSO is extremely fast, making it a suitable optimizer for real-world optimization problems.
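    A minimal sketch of the two-phase idea described above, assuming a 50/50 split between the phases and a Gaussian perturbation of scale 0.1 around the global best in the second phase; neither value comes from the paper.

```python
# Sketch of a two-phase PSO: phase 1 is standard PSO with linearly decreasing inertia
# weight (exploration); phase 2 sequentially tests two random positions near the global
# best each iteration (exploitation). The phase split and sigma = 0.1 are assumptions.
import numpy as np

def rastrigin(x):
    return 10 * x.shape[-1] + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x), axis=-1)

rng = np.random.default_rng(2)
n, dim, iters = 30, 10, 1000
split = iters // 2                               # assumed boundary between the two phases
x = rng.uniform(-5.12, 5.12, (n, dim))
v = np.zeros((n, dim))
pbest, pbest_f = x.copy(), rastrigin(x)
g_idx = pbest_f.argmin()
gbest, gbest_f = pbest[g_idx].copy(), pbest_f[g_idx]

for it in range(iters):
    if it < split:
        # Phase 1: standard PSO update with linearly decreasing inertia weight
        w = 0.9 - (0.9 - 0.4) * it / split
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + 2.0 * r1 * (pbest - x) + 2.0 * r2 * (gbest - x)
        x = np.clip(x + v, -5.12, 5.12)
        f = rastrigin(x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        if pbest_f.min() < gbest_f:
            gbest, gbest_f = pbest[pbest_f.argmin()].copy(), pbest_f.min()
    else:
        # Phase 2: two random candidates near gbest, compared sequentially
        for _ in range(2):
            cand = gbest + rng.normal(0.0, 0.1, dim)    # sigma = 0.1 is an assumption
            cf = rastrigin(cand)
            if cf < gbest_f:
                gbest, gbest_f = cand, cf
print("best value:", gbest_f)
```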

    Array Pattern Synthesis Using a Digital Position Shift Method

    Considering all possible steering directions for beam scanning, a digital position shift method (DPSM) is presented to minimize the peak sidelobe level (PSL) by searching for the best position solution for every sensor and calculating the pattern with a position offset factor. To reach the true minimum PSL, digital position shift with optimal amplitude (DPSOA) is considered simultaneously for beam scanning. To search for the best solutions of the two methods, constrained conditions on the position shift range and the amplitude range are described. A feedback particle swarm optimization (FPSO) method is presented to obtain a large search space together with fast convergence to a refined solution in the local space. Numerical examples show that the results optimized by DPSM and DPSOA in all steering directions can be used in beam scanning thanks to their digital realization. Compared with other techniques published in the literature, especially for steering directions close to the endfire direction, this method achieves a lower PSL while the main beam width is maintained.
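    As a rough illustration of the pattern-with-position-offset idea, the sketch below evaluates the peak sidelobe level of a linear array whose element positions carry per-element shifts; the half-wavelength baseline spacing, the shift range, and the main-lobe exclusion width are assumptions, and the FPSO optimizer itself is not reproduced.

```python
# Hedged sketch: PSL of a linear array with per-element position shifts. Baseline
# spacing, shift range, and main-lobe mask width are illustrative assumptions.
import numpy as np

def psl_db(shifts, amps, n_elem=16, theta0_deg=0.0):
    """Peak sidelobe level (dB) of a linear array with per-element position offsets."""
    lam = 1.0
    d = 0.5 * lam * np.arange(n_elem) + shifts           # shifted element positions
    theta = np.deg2rad(np.linspace(-90, 90, 1441))
    u = np.sin(theta) - np.sin(np.deg2rad(theta0_deg))   # scan towards theta0
    af = np.abs(amps @ np.exp(1j * 2 * np.pi / lam * np.outer(d, u)))
    af_db = 20 * np.log10(af / af.max() + 1e-12)
    main = np.abs(np.rad2deg(theta) - theta0_deg) < 10.0  # crude main-lobe mask (assumed)
    return af_db[~main].max()

# Example: random position shifts within +/- 0.1 wavelength, uniform amplitudes
rng = np.random.default_rng(3)
shifts = rng.uniform(-0.1, 0.1, 16)
print("PSL with random shifts:", psl_db(shifts, np.ones(16)), "dB")
```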

    Niching particle swarm optimization based euclidean distance and hierarchical clustering for multimodal optimization

    Multimodal optimization is still one of the most challenging tasks in the evolutionary computation field, requiring multiple global and local optima to be located effectively and efficiently. In this paper, a niching Particle Swarm Optimization (PSO) based on Euclidean Distance and Hierarchical Clustering (EDHC) for multimodal optimization is proposed. This technique first uses the Euclidean-distance-based PSO algorithm to perform a preliminary search, during which the particles rapidly cluster around peaks. Secondly, hierarchical clustering is applied to identify the particles distributed around each peak and concentrate them so that each group performs a fine search as a whole. Finally, a small-world network topology is adopted in each niche to improve the exploitation ability of the algorithm. At the end of the paper, the proposed EDHC-PSO algorithm is applied to the Traveling Salesman Problem (TSP) after discretization. The experiments demonstrate that the proposed method outperforms existing niching techniques on benchmark problems and is effective for TSP.
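    The niche-identification step can be illustrated as follows: after the preliminary Euclidean-distance-based PSO phase, hierarchical clustering groups particles that have gathered around the same peak. The SciPy linkage method and the distance cut threshold used here are assumptions, not the paper's choices.

```python
# Illustrative niche identification: hierarchical clustering of particle positions
# after a preliminary search. Linkage method and cut threshold are assumptions.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def peaks(x):
    """Toy multimodal fitness with maxima near x = -2 and x = 2."""
    return np.exp(-(x + 2) ** 2) + np.exp(-(x - 2) ** 2)

rng = np.random.default_rng(4)
# Pretend these are particle positions after the preliminary PSO search:
particles = np.concatenate([rng.normal(-2, 0.1, (15, 1)), rng.normal(2, 0.1, (15, 1))])

Z = linkage(particles, method="average")            # agglomerative clustering
labels = fcluster(Z, t=1.0, criterion="distance")   # t = 1.0 is an assumed cut threshold
for k in np.unique(labels):
    niche = particles[labels == k].ravel()
    best = niche[np.argmax(peaks(niche))]
    print(f"niche {k}: {niche.size} particles, best point x = {best:.2f}")
```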

    Artificial Bee Colony Algorithm Combined with Grenade Explosion Method and Cauchy Operator for Global Optimization

    The artificial bee colony (ABC) algorithm is a popular swarm intelligence technique inspired by the intelligent foraging behavior of honey bees. However, ABC is good at exploration but poor at exploitation, and its convergence speed is also an issue in some cases. To improve the performance of ABC, a novel ABC combined with the grenade explosion method (GEM) and a Cauchy operator, namely ABCGC, is proposed. GEM is embedded in the onlooker bees' phase to enhance the exploitation ability and accelerate the convergence of ABCGC; meanwhile, the Cauchy operator is introduced into the scout bees' phase to help ABCGC escape from local optima and further enhance its exploration ability. Two sets of well-known benchmark functions are used to validate the performance of ABCGC. The experiments confirm that ABCGC is significantly superior to ABC and other competitors; in particular, it converges to the global optimum faster in most cases. These results suggest that ABCGC usually achieves a good balance between exploitation and exploration and can effectively serve as an alternative for global optimization.
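    A hedged sketch of the Cauchy operator in the scout-bee phase: instead of re-initializing an abandoned food source uniformly at random as in standard ABC, the new source is drawn by a heavy-tailed Cauchy jump; the scale parameter and the choice of the best source as the jump centre are assumptions.

```python
# Hedged sketch of a Cauchy-operator scout step: an abandoned food source is replaced
# by a heavy-tailed Cauchy jump around the best source instead of a uniform re-draw.
# The scale parameter and the jump centre are assumptions, not the paper's settings.
import numpy as np

rng = np.random.default_rng(5)

def cauchy_scout(best_source, lower, upper, scale=0.5):
    """Replace an abandoned food source using a Cauchy jump around the best source."""
    jump = scale * rng.standard_cauchy(best_source.shape)   # heavy-tailed perturbation
    return np.clip(best_source + jump, lower, upper)

best = np.array([0.3, -1.2, 4.0])
print(cauchy_scout(best, lower=-5.0, upper=5.0))
```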

    Evolving interval-based representation for multiple classifier fusion.

    Designing an ensemble of classifiers is a popular research topic in machine learning since an ensemble can give better results than any constituent member. Furthermore, the performance of an ensemble can be improved using selection or adaptation. In the former, the optimal set of base classifiers, meta-classifier, original features, or meta-data is selected to obtain a better ensemble than one using all classifiers and features. In the latter, the base classifiers or the combining algorithms working on the outputs of the base classifiers are made to adapt to a particular problem; adaptation here means that the parameters of these algorithms are trained to be optimal for each problem. In this study, we propose a novel evolving combining algorithm using the adaptation approach for ensemble systems. Instead of using a numerical value when computing the representation for each class, we propose an interval-based representation for the class. The optimal values of the representation are found through Particle Swarm Optimization. During classification, a test instance is assigned to the class whose interval-based representation is closest to the base classifiers' prediction. Experiments conducted on a number of popular datasets confirm that the proposed method is better than well-known ensemble systems using Decision Template and Sum Rule as combiners, the L2-loss Linear Support Vector Machine, Multiple Layer Neural Networks, and the ensemble selection methods based on GA-Meta-data, META-DES, and ACO.
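    The classification rule described above can be sketched as follows: each class keeps lower and upper interval bounds over the base classifiers' outputs (in the paper these bounds are tuned by PSO), and a test instance is assigned to the class whose intervals are closest to its meta-data vector. The particular distance used here, zero inside the interval and the gap to the nearest bound otherwise, is an assumption.

```python
# Sketch of interval-based class representations for classifier fusion. The distance
# (zero inside the interval, gap to the nearest bound otherwise) is an assumption;
# in the paper the interval bounds themselves would be tuned by PSO.
import numpy as np

def interval_distance(pred, lower, upper):
    """Distance of a prediction vector to one class's interval representation."""
    below = np.maximum(lower - pred, 0.0)    # amount by which pred falls below the interval
    above = np.maximum(pred - upper, 0.0)    # amount by which pred exceeds the interval
    return float(np.sum(below + above))

def classify(pred, class_lowers, class_uppers):
    dists = [interval_distance(pred, lo, up) for lo, up in zip(class_lowers, class_uppers)]
    return int(np.argmin(dists))

# Toy example: 3 base classifiers, 2 classes, hand-picked intervals for illustration
lowers = [np.array([0.6, 0.5, 0.7]), np.array([0.0, 0.1, 0.0])]
uppers = [np.array([1.0, 1.0, 1.0]), np.array([0.4, 0.5, 0.3])]
print(classify(np.array([0.8, 0.7, 0.9]), lowers, uppers))   # -> 0
```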