
    Efficiency Analysis of Swarm Intelligence and Randomization Techniques

    Swarm intelligence has become a powerful technique for solving design and scheduling tasks. Metaheuristic algorithms are an integral part of this paradigm, and particle swarm optimization is often viewed as an important landmark. The outstanding performance and efficiency of swarm-based algorithms have inspired many new developments, though the mathematical understanding of metaheuristics remains partly a mystery. In contrast to classic deterministic algorithms, metaheuristics such as PSO always use some form of randomness, and such randomization now employs various techniques. This paper reviews and analyzes some of the convergence and efficiency results associated with metaheuristics such as the firefly algorithm, and with randomization techniques such as random walks and Lévy flights. We discuss how these techniques are used and their implications for further research. Comment: 10 pages. arXiv admin note: substantial text overlap with arXiv:1212.0220, arXiv:1208.0527, arXiv:1003.146
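
    As a rough illustration of the randomization techniques surveyed above, the sketch below contrasts a plain Gaussian random walk with Lévy-flight steps drawn via Mantegna's algorithm. The exponent beta = 1.5 and the step counts are illustrative choices, not values taken from the paper.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(beta=1.5, size=1, rng=None):
    """Draw Levy-flight step lengths using Mantegna's algorithm."""
    rng = np.random.default_rng() if rng is None else rng
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
               (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, size)   # heavy-tailed numerator
    v = rng.normal(0.0, 1.0, size)       # Gaussian denominator
    return u / np.abs(v) ** (1 / beta)

# Compare a Gaussian random walk with a Levy flight in one dimension.
rng = np.random.default_rng(0)
gaussian_walk = np.cumsum(rng.normal(0.0, 1.0, 1000))
levy_flight = np.cumsum(levy_step(beta=1.5, size=1000, rng=rng))
print(gaussian_walk[-1], levy_flight[-1])  # the Levy path makes occasional long jumps
```

    The heavy-tailed Lévy steps produce occasional long jumps, which is the property that algorithms such as the firefly algorithm exploit for global exploration, while ordinary random walks handle local refinement.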

    A Review on the Application of Natural Computing in Environmental Informatics

    Natural computing offers new opportunities to understand, model and analyze the complexity of the physical and human-created environment. This paper examines the application of natural computing in environmental informatics by investigating related work in this research field. Various nature-inspired techniques that have been employed to solve relevant problems are presented. The advantages and disadvantages of these techniques are discussed, together with an analysis of how natural computing is generally used in environmental research. Comment: Proc. of EnviroInfo 201

    The design and applications of the African Buffalo algorithm for general optimization problems

    Optimization is, in essence, the economics of science: it is concerned with maximizing profit and minimizing the cost, in terms of time and resources, needed to execute a given project in any field of human endeavor. Over the past several decades there have been many scientific investigations into effective and efficient optimization algorithms, leading to deterministic algorithms that provide exact solutions to optimization problems. In the past five decades, however, the attention of scientists has shifted from deterministic algorithms to stochastic ones, since the latter have proven to be more robust and efficient even though they do not guarantee exact solutions. Successfully designed stochastic algorithms include Simulated Annealing, the Genetic Algorithm, Ant Colony Optimization, Particle Swarm Optimization, Bee Colony Optimization, Artificial Bee Colony Optimization, Firefly Optimization, etc. A critical look at these ‘efficient’ stochastic algorithms reveals the need for improvements regarding effectiveness, the number of parameters used, premature convergence, the ability to search diverse landscapes, and complex implementation strategies. The African Buffalo Optimization (ABO), inspired by the herd management, communication and successful grazing cultures of African buffalos, is designed to address these observed shortcomings of existing stochastic optimization algorithms. Through several experimental procedures, the ABO was used to solve benchmark optimization problems over mono-modal and multimodal, constrained and unconstrained, separable and non-separable search landscapes with competitive outcomes. Moreover, the ABO algorithm was applied to solve over 100 of the 118 benchmark symmetric travelling salesman problems available in TSPLIB95, as well as all of the asymmetric instances. Based on this successful experimentation with the novel algorithm, it is safe to conclude that the ABO is a worthy contribution to the scientific literature.
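
    The abstract does not reproduce the ABO update rules, but a minimal sketch of the commonly described two-equation scheme (a democratic exploitation update followed by an exploration update) gives a sense of how the herd metaphor maps onto code. The learning parameters lp1 and lp2, the lambda value, and the use of the m-vector as the evaluated position are assumptions for illustration, not details confirmed by the thesis.

```python
import numpy as np

def abo_minimize(f, dim, bounds, n_buffalos=30, iters=200,
                 lp1=0.6, lp2=0.5, lam=1.0, seed=0):
    """Hedged sketch of an African Buffalo Optimization style loop.

    m: exploitation moves, used here as the evaluated positions (assumption).
    w: exploration moves.
    lp1, lp2, lam: learning parameters with illustrative values.
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    m = rng.uniform(lo, hi, (n_buffalos, dim))   # exploitation moves / positions
    w = rng.uniform(lo, hi, (n_buffalos, dim))   # exploration moves
    bp = m.copy()                                # each buffalo's best location
    bp_fit = np.apply_along_axis(f, 1, m)
    bg = bp[np.argmin(bp_fit)].copy()            # herd's best location

    for _ in range(iters):
        # Democratic update: move towards the herd best and the personal best.
        m = m + lp1 * (bg - w) + lp2 * (bp - w)
        # Exploration update: blend the exploration move with the exploitation move.
        w = (w + m) / lam
        m = np.clip(m, lo, hi)
        fit = np.apply_along_axis(f, 1, m)
        improved = fit < bp_fit
        bp[improved], bp_fit[improved] = m[improved], fit[improved]
        bg = bp[np.argmin(bp_fit)].copy()
    return bg, bp_fit.min()

# Example: minimize the sphere benchmark in 10 dimensions.
best_x, best_f = abo_minimize(lambda x: np.sum(x**2), dim=10, bounds=(-5.0, 5.0))
print(best_f)
```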

    Genetic learning particle swarm optimization

    Social learning in particle swarm optimization (PSO) helps collective efficiency, whereas individual reproduction in the genetic algorithm (GA) facilitates global effectiveness. This observation has recently led to hybridizing PSO with GA for performance enhancement. However, existing work uses a mechanistic parallel superposition of the two, and research has shown that constructing superior exemplars in PSO is more effective. Hence, this paper first develops a new framework to organically hybridize PSO with another optimization technique for “learning.” This leads to a generalized “learning PSO” paradigm, the *L-PSO. The paradigm is composed of two cascading layers, the first for exemplar generation and the second for particle updates as per a normal PSO algorithm. Using genetic evolution to breed promising exemplars for PSO, a specific novel *L-PSO algorithm, termed genetic learning PSO (GL-PSO), is proposed in the paper. In particular, genetic operators are used to generate the exemplars from which particles learn, and, in turn, the historical search information of the particles guides the evolution of the exemplars. By performing crossover, mutation, and selection on the historical information of the particles, the constructed exemplars are not only well diversified but also of high quality. Under such guidance, both the global search ability and the search efficiency of PSO are enhanced. The proposed GL-PSO is tested on 42 benchmark functions widely adopted in the literature. Experimental results verify the effectiveness, efficiency, robustness, and scalability of the GL-PSO.
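
    The two-layer idea described above can be sketched compactly: crossover, mutation, and selection act on personal-best and global-best information to breed one exemplar per particle, and a standard velocity update then learns from that exemplar. The operator details and parameter values below are paraphrased assumptions rather than the exact scheme of the paper.

```python
import numpy as np

def gl_pso_sketch(f, dim, bounds, n=40, iters=300, w=0.7, c=1.5, pm=0.1, seed=0):
    """Condensed sketch of a genetic-learning PSO loop (details are assumptions)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros((n, dim))
    pbest, pbest_fit = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[np.argmin(pbest_fit)].copy()
    exemplar, exemplar_fit = pbest.copy(), pbest_fit.copy()

    for _ in range(iters):
        for i in range(n):
            # Crossover: blend this particle's pbest with gbest, or borrow from a
            # fitter random peer, to breed a candidate exemplar.
            k = rng.integers(n)
            r = rng.random(dim)
            if pbest_fit[i] < pbest_fit[k]:
                cand = r * pbest[i] + (1 - r) * gbest
            else:
                cand = pbest[k].copy()
            # Mutation: occasionally reset a dimension to a random point.
            mut = rng.random(dim) < pm
            cand[mut] = rng.uniform(lo, hi, mut.sum())
            # Selection: keep the candidate only if it improves the exemplar.
            cand_fit = f(cand)
            if cand_fit < exemplar_fit[i]:
                exemplar[i], exemplar_fit[i] = cand, cand_fit
        # Particle layer: learn from the bred exemplar as in a normal PSO update.
        v = w * v + c * rng.random((n, dim)) * (exemplar - x)
        x = np.clip(x + v, lo, hi)
        fit = np.apply_along_axis(f, 1, x)
        better = fit < pbest_fit
        pbest[better], pbest_fit[better] = x[better], fit[better]
        gbest = pbest[np.argmin(pbest_fit)].copy()
    return gbest, pbest_fit.min()

print(gl_pso_sketch(lambda z: np.sum(z**2), dim=10, bounds=(-10.0, 10.0))[1])
```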

    Lattice dynamical wavelet neural networks implemented using particle swarm optimization for spatio-temporal system identification

    In this brief, by combining an efficient wavelet representation with a coupled map lattice model, a new family of adaptive wavelet neural networks, called lattice dynamical wavelet neural networks (LDWNNs), is introduced for spatio-temporal system identification. A new orthogonal projection pursuit (OPP) method, coupled with a particle swarm optimization (PSO) algorithm, is proposed for augmenting the network. A novel two-stage hybrid training scheme is developed for constructing a parsimonious network model. In the first stage, by applying the OPP algorithm, significant wavelet neurons are adaptively and successively recruited into the network, and the adjustable parameters of the associated wavelet neurons are optimized using a particle swarm optimizer. The resulting network model obtained in the first stage, however, may be redundant. In the second stage, an orthogonal least squares algorithm is applied to refine and improve the initially trained network by removing redundant wavelet neurons. An example of a real spatio-temporal system identification problem is presented to demonstrate the performance of the proposed modeling framework.
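
    As a toy illustration of the first-stage step in which the adjustable parameters of a wavelet neuron are optimized by a particle swarm optimizer, the sketch below uses a basic PSO to tune the dilation and translation of a single Mexican-hat wavelet neuron against a residual signal. The OPP recruitment, the coupled map lattice model, and the second-stage orthogonal least squares pruning are not reproduced, and all names and parameter values are illustrative.

```python
import numpy as np

def mexican_hat(u):
    """Mexican-hat (Ricker) mother wavelet."""
    return (1 - u**2) * np.exp(-u**2 / 2)

def fit_wavelet_neuron_pso(x, residual, n_particles=30, iters=200, seed=0):
    """Tune one wavelet neuron's dilation/translation with a basic PSO; the
    output weight is solved by least squares given the candidate wavelet."""
    rng = np.random.default_rng(seed)

    def cost(params):
        a, b = params                              # dilation a > 0, translation b
        phi = mexican_hat((x - b) / max(abs(a), 1e-6))
        denom = phi @ phi
        wgt = (phi @ residual) / denom if denom > 1e-12 else 0.0
        return np.sum((residual - wgt * phi) ** 2)

    pos = rng.uniform([0.1, x.min()], [5.0, x.max()], (n_particles, 2))
    vel = np.zeros_like(pos)
    pbest, pbest_fit = pos.copy(), np.array([cost(p) for p in pos])
    gbest = pbest[np.argmin(pbest_fit)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        fit = np.array([cost(p) for p in pos])
        better = fit < pbest_fit
        pbest[better], pbest_fit[better] = pos[better], fit[better]
        gbest = pbest[np.argmin(pbest_fit)].copy()
    return gbest, pbest_fit.min()

# Usage: recover the parameters of a noisy wavelet bump.
x = np.linspace(-5, 5, 200)
target = mexican_hat((x - 1.0) / 0.8) + 0.05 * np.random.default_rng(1).normal(size=x.size)
print(fit_wavelet_neuron_pso(x, target))
```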