
    Comparison of Evolutionary Optimization Algorithms for FM-TV Broadcasting Antenna Array Null Filling

    Null filling in broadcasting antenna arrays is a challenging antenna design optimization problem. This paper compares five optimization algorithms (Differential Evolution, Particle Swarm Optimization, the Taguchi method, Invasive Weed Optimization, and Adaptive Invasive Weed Optimization) as solutions to the antenna array null filling problem. The compared algorithms are evolutionary algorithms that use mechanisms inspired by biological evolution, such as reproduction, mutation, recombination, and selection. The comparison focuses on the algorithm with the best results; however, the algorithm that produces the best fitness (Invasive Weed Optimization) requires substantial computational resources because of its random search nature.
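As an illustration of the evolutionary mechanisms mentioned in the abstract (reproduction, mutation, recombination, selection), the following is a minimal sketch of a Differential Evolution loop minimizing a generic fitness function. The fitness function, bounds, and parameter values are placeholders for illustration only, not the paper's actual null-filling objective.

```python
import random

# Minimal Differential Evolution sketch (DE/rand/1/bin).
# The fitness function and parameters below are illustrative placeholders,
# not the null-filling objective used in the paper.
def fitness(x):
    return sum(v * v for v in x)  # placeholder: sphere function

def differential_evolution(dim=8, pop_size=30, F=0.5, CR=0.9,
                           bounds=(-1.0, 1.0), generations=200):
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    scores = [fitness(ind) for ind in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # Mutation: combine three distinct random individuals.
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            mutant = [pop[a][k] + F * (pop[b][k] - pop[c][k]) for k in range(dim)]
            # Recombination: binomial crossover with the current individual.
            trial = [mutant[k] if random.random() < CR else pop[i][k]
                     for k in range(dim)]
            trial = [min(max(v, lo), hi) for v in trial]  # keep within bounds
            # Selection: keep the better of trial and parent.
            s = fitness(trial)
            if s < scores[i]:
                pop[i], scores[i] = trial, s
    best = min(range(pop_size), key=lambda j: scores[j])
    return pop[best], scores[best]
```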

    Pseudo derivative evolutionary algorithm and convergence analysis


    Comparison of Particle Swarm Optimization and Self-Adaptive Dynamic Differential Evolution for the Imaging of a Periodic Conductor

    The application of two techniques to reconstruct the shape of a two-dimensional periodic perfect conductor from simulated measurement data is presented. A periodic conducting cylinder of unknown period and shape scatters the incident wave in half-space, and the scattered field is recorded outside. After an integral formulation, the microwave imaging problem is recast as a nonlinear optimization problem: a cost functional is defined as the norm of the difference between the measured scattered electric fields and the scattered fields calculated for an estimated conductor shape. The shape of the conductor can thus be obtained by minimizing this cost functional. To solve this inverse scattering problem, transverse magnetic (TM) waves are incident upon the objects and two techniques are employed: the first is based on particle swarm optimization (PSO) and the second on self-adaptive dynamic differential evolution (SADDE). Both techniques have been tested on simulated measurement data contaminated by additive white Gaussian noise. Numerical results indicate that SADDE outperforms PSO in reconstruction accuracy and convergence speed.
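The cost functional described in the abstract can be written, in a hedged form consistent with the text (the symbols p, E^meas, and E^calc are notational assumptions, not taken from the paper), as:

```latex
% Cost functional minimized by PSO / SADDE (notation assumed, not from the paper):
%   p            : parameters describing the estimated conductor shape and period
%   E^meas_m     : measured scattered electric field at observation point m
%   E^calc_m(p)  : scattered field computed for the estimated shape p
J(p) = \sum_{m=1}^{M} \left| E^{\mathrm{meas}}_{m} - E^{\mathrm{calc}}_{m}(p) \right|^{2}
```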

    Hybrid Intelligent Optimization Methods for Engineering Problems

    The purpose of optimization is to obtain the best solution under given conditions. There are numerous optimization methods because different problems require different solution methodologies, so it is difficult to establish general patterns. Moreover, mathematical modeling of natural phenomena is mostly based on differential equations, which are constructed from relative increments among the factors affecting the yield; the gradients of these increments are therefore essential for searching the yield space. However, the landscape of the yield is rarely simple and is usually multi-modal. Another issue is differentiability: engineering design problems are usually nonlinear and sometimes exhibit discontinuous derivatives in the objective and constraint functions. Because of these difficulties, non-gradient-based algorithms have become more popular in recent decades. Genetic algorithms (GA) and particle swarm optimization (PSO) are popular non-gradient-based algorithms. Both are population-based search methods initialized from multiple points. A significant difference from gradient-based methods lies in the nature of the search: randomness is essential in GA and PSO, which is why they are also called stochastic optimization methods. These algorithms are simple, robust, and of high fidelity, but they suffer from common defects such as premature convergence, limited accuracy, and long computation times. Premature convergence is sometimes unavoidable because of a lack of diversity: as the generations of particles or individuals evolve, they may lose their diversity and become similar to each other. To overcome this issue, we studied the diversity concept in GA and PSO. Diversity is essential for a healthy search, and mutation is the basic operator for providing the necessary variety within a population. After close scrutiny of the diversity concept, based on qualitative and quantitative studies, we developed new mutation strategies and operators that supply beneficial diversity within the population. We call this new approach multi-frequency vibrational GA or PSO. It was applied to several aeronautical engineering problems to study its efficiency: selected benchmark test functions, inverse design of a two-dimensional (2D) airfoil in subsonic flow, optimization of a 2D airfoil in transonic flow, path planning of an autonomous unmanned aerial vehicle (UAV) over a 3D terrain, radar cross section minimization for a 3D air vehicle, and active flow control over a 2D airfoil. In these test cases, the new algorithms outperformed the current popular algorithms. The principal role of the multi-frequency approach is to determine which individuals or particles should be mutated, when they should be mutated, and which ones should be merged back into the population. The new mutation operators, when combined with a mutation strategy and an artificial intelligence method such as neural networks or fuzzy logic, provide both local and global diversity during the reproduction phases of the generations. The new approach also introduces random as well as controlled diversity. Because they remain population-based techniques, these methods are as robust as the plain GA or PSO algorithms. Based on the results obtained, it was concluded that the variants of the present multi-frequency vibrational GA and PSO are efficient algorithms, since they successfully avoided all local optima within relatively short optimization cycles.
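The following is a minimal sketch of the general idea of injecting periodic ("vibrational") mutation into a standard PSO loop to restore diversity. The mutation period, amplitude, objective, and the rule for choosing which particles to perturb are illustrative assumptions, not the thesis's actual operators or its neural-network/fuzzy-logic selection mechanism.

```python
import random

# Illustrative sketch only: standard PSO with a periodic "vibrational" mutation
# that perturbs part of the swarm to restore diversity. The mutation period,
# amplitude, and objective below are assumptions, not the thesis's operators.
def objective(x):
    return sum(v * v for v in x)  # placeholder objective

def pso_with_periodic_mutation(dim=10, swarm=30, iters=300,
                               w=0.7, c1=1.5, c2=1.5,
                               mut_period=20, mut_fraction=0.3, mut_sigma=0.2):
    pos = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(swarm), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    for t in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
        # Periodic mutation: every mut_period iterations, perturb a random
        # fraction of the swarm to reintroduce diversity.
        if (t + 1) % mut_period == 0:
            for i in random.sample(range(swarm), int(mut_fraction * swarm)):
                pos[i] = [x + random.gauss(0.0, mut_sigma) for x in pos[i]]
    return gbest, gbest_val
```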

    Spatio-Temporal Patterns act as Computational Mechanisms governing Emergent behavior in Robotic Swarms

    Our goal is to control a robotic swarm without removing its swarm-like nature; in other words, we aim to intrinsically control a robotic swarm's emergent behavior. Past attempts at governing robotic swarms or their self-coordinating emergent behavior have proven ineffective, largely because of the swarm's inherent randomness (which makes it difficult to predict) and utter simplicity (the robots lack a leader, any kind of centralized control, long-range communication, global knowledge, and complex internal models, and operate only on a few basic, reactive rules). The main problem is that emergent phenomena themselves are not fully understood, despite being at the forefront of current research. Research into 1D and 2D cellular automata has uncovered a hidden computational layer which bridges the micro-macro gap (i.e., how individual behaviors at the micro level influence global behaviors at the macro level). We hypothesize that embedded computational mechanisms also lie at the heart of a robotic swarm's emergent behavior. To test this hypothesis, we simulated robotic swarms (represented as both particles and dynamic networks) and designed local rules to induce various types of intelligent emergent behavior (as well as genetic algorithms to evolve robotic swarms with emergent behaviors). Finally, we analyzed these robotic swarms and confirmed our hypothesis: analyzing their development and interactions over time revealed various forms of embedded spatio-temporal patterns which store, propagate, and parallel-process information across the swarm according to an internal, collision-based logic, explaining how simple robots are able to self-coordinate and allow global behaviors to emerge across the swarm.
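As a toy illustration of the kind of simulation described above, the following sketch drives a swarm purely through a local, reactive rule (align with nearby neighbours plus noise), from which a global pattern emerges. The specific rule and parameters are assumptions for illustration, not the paper's particle or dynamic-network models.

```python
import math
import random

# Minimal sketch of a swarm driven purely by local, reactive rules
# (alignment with nearby neighbours plus a little noise). The rule and
# parameters are illustrative assumptions, not the paper's models.
N, RADIUS, SPEED, NOISE, STEPS = 50, 0.2, 0.01, 0.1, 500

def step(pos, heading):
    new_heading = []
    for i in range(N):
        # Each robot looks only at neighbours within a fixed radius...
        nbrs = [j for j in range(N) if math.dist(pos[i], pos[j]) < RADIUS]
        # ...and aligns its heading with their average direction (plus noise).
        avg = math.atan2(sum(math.sin(heading[j]) for j in nbrs),
                         sum(math.cos(heading[j]) for j in nbrs))
        new_heading.append(avg + random.uniform(-NOISE, NOISE))
    for i in range(N):
        x, y = pos[i]
        pos[i] = ((x + SPEED * math.cos(new_heading[i])) % 1.0,
                  (y + SPEED * math.sin(new_heading[i])) % 1.0)
    return pos, new_heading

pos = [(random.random(), random.random()) for _ in range(N)]
heading = [random.uniform(-math.pi, math.pi) for _ in range(N)]
for _ in range(STEPS):
    pos, heading = step(pos, heading)  # global alignment emerges from local rules
```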

    The implementation frameworks of meta-heuristics hybridization with dynamic parameterization

    The hybridization of meta-heuristic algorithms has achieved remarkable improvement through the adoption of dynamic parameterization. This paper proposes a variety of implementation frameworks for the hybridization of Particle Swarm Optimization (PSO) and Genetic Algorithm (GA) with dynamic parameterization. A taxonomy of PSO-GA hybridization with dynamic parameterization is presented to provide common terminology and classification mechanisms. Based on the taxonomy, thirty possible implementation frameworks can be adopted. Furthermore, different algorithms that use the implementation frameworks with a sequential scheme and dynamic parameterization approaches are tested on a facility layout problem. The results show the effectiveness of each tested algorithm in comparison to single PSO and constant parameterization.
    Keywords: hybridization; PSO; GA; implementation frameworks; dynamic parameterization
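For illustration, the following is a minimal sketch of one sequential hybridization scheme: PSO runs first with a linearly decreasing inertia weight (a simple form of dynamic parameterization), and its final swarm then seeds a GA stage. The scheme, operators, objective, and parameter values are assumptions for illustration, not one of the paper's thirty frameworks specifically.

```python
import random

# Illustrative sketch of a sequential PSO -> GA hybrid with a simple dynamic
# parameter (linearly decreasing inertia weight). Objective, operators, and
# parameter values are assumptions, not the paper's specific frameworks.
def objective(x):
    return sum(v * v for v in x)  # placeholder

def pso_stage(pop, iters=100, w_start=0.9, w_end=0.4, c1=1.5, c2=1.5):
    vel = [[0.0] * len(p) for p in pop]
    pbest = [p[:] for p in pop]
    gbest = min(pbest, key=objective)
    for t in range(iters):
        w = w_start - (w_start - w_end) * t / iters   # dynamic parameterization
        for i, p in enumerate(pop):
            for d in range(len(p)):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - p[d])
                             + c2 * random.random() * (gbest[d] - p[d]))
                p[d] += vel[i][d]
            if objective(p) < objective(pbest[i]):
                pbest[i] = p[:]
                if objective(p) < objective(gbest):
                    gbest = p[:]
    return pop

def ga_stage(pop, gens=100, mut_rate=0.1):
    for _ in range(gens):
        pop.sort(key=objective)
        parents = pop[: len(pop) // 2]          # selection
        children = []
        while len(children) < len(pop) - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(a))   # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mut_rate:      # mutation
                child[random.randrange(len(child))] += random.gauss(0.0, 0.1)
            children.append(child)
        pop = parents + children
    return min(pop, key=objective)

pop = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(20)]
best = ga_stage(pso_stage(pop))  # sequential scheme: PSO seeds the GA
```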

    Efficiency Analysis of Swarm Intelligence and Randomization Techniques

    Swarm intelligence has become a powerful technique for solving design and scheduling tasks. Metaheuristic algorithms are an integral part of this paradigm, and particle swarm optimization is often viewed as an important landmark. The outstanding performance and efficiency of swarm-based algorithms have inspired many new developments, though the mathematical understanding of metaheuristics remains partly a mystery. In contrast to classic deterministic algorithms, metaheuristics such as PSO always use some form of randomness, and this randomization now employs various techniques. This paper reviews and analyzes some of the convergence and efficiency results associated with metaheuristics and randomization techniques such as the firefly algorithm, random walks, and Lévy flights. We discuss how these techniques are used and their implications for further research.
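Lévy flights, one of the randomization techniques mentioned above, are commonly implemented via Mantegna's algorithm, which builds heavy-tailed step lengths from two Gaussian variates. The following is a minimal sketch; the exponent beta and the step scale are typical textbook choices, not values prescribed by this paper.

```python
import math
import random

# Minimal sketch of Levy-flight step generation via Mantegna's algorithm.
# beta (the Levy exponent) and the step scale are typical textbook choices,
# not values prescribed by the paper.
def levy_step(beta=1.5):
    # sigma_u chosen so that u / |v|^(1/beta) approximates a Levy-stable step
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
               ) ** (1 / beta)
    u = random.gauss(0.0, sigma_u)
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

# Example: a 1-D random walk with Levy-distributed steps; the occasional very
# long jump is what helps a metaheuristic escape local optima.
x = 0.0
for _ in range(1000):
    x += 0.01 * levy_step()
```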