
    A population-based optimization method using Newton fractal

    Department of Mathematical Sciences
    A metaheuristic is a general procedure for reaching an agreement in a group based on the decision making of each individual, beyond a simple heuristic. Over the last decade there have been many attempts to develop metaheuristic methods based on swarm intelligence for global optimization, such as the particle swarm optimizer, the ant colony optimizer, and the firefly optimizer. These methods are mostly stochastic and independent of specific problems. Since swarm-intelligence metaheuristics require no central coordination (or minimal, if any), they are especially well suited to problems with distributed or parallel structures. Each individual follows a few simple rules, which keeps the search cost at a reasonable level. Despite this simplicity, the methods often yield fast approximations of good precision compared to conventional methods. Exploration and exploitation are two important features to consider when seeking a global optimum in a high-dimensional domain, especially when no prior information is given. Exploration investigates unknown regions of the space, without using information from the search history, to find undiscovered optima. Exploitation traces the neighborhood of the current best solution to improve it, using information from the search history. Because these two concepts lie at opposite ends of a spectrum, their tradeoff significantly affects performance under a limited search budget.
    In this work, we develop a chaos-based metaheuristic method, "Newton Particle Optimization" (NPO), to solve global optimization problems. The method is based on the Newton method, a well-established mathematical root-finding procedure, and actively exploits the chaotic nature of the Newton method to strike a proper balance between exploration and exploitation. While most current population-based methods adopt stochastic effects to maximize exploration, they often suffer from weak exploitation; in addition, stochastic methods generally show poor reproducibility and premature convergence. It has been argued that an alternative approach using chaos may mitigate these disadvantages. The unpredictability of chaos plays the role of the randomness in stochastic methods, yet chaos-based methods are deterministic, so their results are easy to reproduce with less memory. It has also been shown that chaos escapes local optima better than stochastic methods and alleviates the premature-convergence issue. The Newton method is deterministic but exhibits chaotic movements near the roots, and it is this complexity that enables the particles to search the space for global optimization.
    We initialize the particle positions randomly and choose "leading particles" that attract the other particles. From the leading particles we build a polynomial whose roots are exactly those particles, called "a guiding function", and we then update the positions of all particles by applying the Newton method to this guiding function. Since roots are fixed points of the Newton update, the leading particles survive the update. For diverse particle movements we use a modified Newton method with a coefficient m that varies the movement of each particle; the efficiency of the local search is closely related to the value of m, which determines the convergence rate of the Newton method. The balance between exploration and exploitation can be controlled through the choice of leading particles, and, interestingly, selecting only the best particles as leaders does not always produce the best result. Including mediocre particles among the roots of the guiding function maintains the positional diversity of the particles; though this diversity may seem inefficient at first, those particles ultimately contribute to the exploration needed for the global search. We study the conditions for the convergence of NPO, which benefits from the well-established analysis of the Newton method. This contrasts with other "nature-inspired" algorithms, which have often been criticized for lacking a rigorous mathematical foundation. We compare NPO with two popular metaheuristic methods, the particle swarm optimizer (PSO) and the firefly optimizer (FO). Although the no-free-lunch theorem shows that no single algorithm is superior on all problems, this is precisely why researchers look for global optimizers that can be adapted to specific problems. NPO shows good performance on the CEC 2013 competition test problems compared to PSO and FO.
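    To make the update described above concrete, here is a minimal one-dimensional sketch: the guiding polynomial p(x) = prod_j (x - leader_j) is built from the leading particles and every particle takes a modified Newton step x <- x - m*p(x)/p'(x). The function names (npo_step, npo_minimize) and the simple choice of the current best particles as leaders are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def npo_step(positions, leaders, m=1.0, eps=1e-12):
    """One NPO-style update (illustrative sketch, 1-D): build a guiding
    polynomial whose roots are the leading particles and move every particle
    by a modified Newton step x <- x - m * p(x)/p'(x)."""
    coeffs = np.poly(leaders)        # coefficients of p(x) = prod_j (x - leader_j)
    dcoeffs = np.polyder(coeffs)     # coefficients of p'(x)
    p = np.polyval(coeffs, positions)
    dp = np.polyval(dcoeffs, positions)
    # Leading particles are roots of p, so they are fixed points of this update
    return positions - m * p / (dp + eps)

def npo_minimize(f, n_particles=30, n_leaders=3, iters=200, m=1.2,
                 bounds=(-5.0, 5.0), seed=0):
    """Hypothetical driver: leaders are simply the current best particles,
    although the paper notes that mixing in mediocre particles aids exploration."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(*bounds, size=n_particles)
    for _ in range(iters):
        leaders = x[np.argsort(f(x))[:n_leaders]]
        x = np.clip(npo_step(x, leaders, m=m), *bounds)
    return x[np.argmin(f(x))]

# Example: a simple multimodal 1-D objective
best = npo_minimize(lambda x: np.sin(3 * x) + 0.1 * x ** 2)
```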

    Stochastic Fractal Based Multiobjective Fruit Fly Optimization

    The fruit fly optimization algorithm (FOA) is a global optimization algorithm inspired by the foraging behavior of a fruit fly swarm. In this study, a novel stochastic fractal model based fruit fly optimization algorithm is proposed for multiobjective optimization. A food source generating method based on a stochastic fractal, with an adaptive parameter updating strategy, is introduced to improve the convergence performance of the fruit fly optimization algorithm. To deal with multiobjective optimization problems, the Pareto domination concept is integrated into the selection process of fruit fly optimization, and a novel multiobjective fruit fly optimization algorithm is then developed. As in most other multiobjective evolutionary algorithms (MOEAs), an external elitist archive is utilized to preserve the nondominated solutions found so far during the evolution, and a normalized nearest-neighbor distance based density estimation strategy is adopted to maintain the diversity of the external elitist archive. Eighteen benchmarks are used to test the performance of the stochastic fractal based multiobjective fruit fly optimization algorithm (SFMOFOA). Numerical results show that SFMOFOA converges well to the Pareto fronts of the test benchmarks with well-distributed solutions. Compared with four state-of-the-art methods, namely the non-dominated sorting genetic algorithm (NSGA-II), the strength Pareto evolutionary algorithm (SPEA2), multi-objective particle swarm optimization (MOPSO), and multiobjective self-adaptive differential evolution (MOSADE), the proposed SFMOFOA has better or competitive multiobjective optimization performance.
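    As a rough illustration of the archive mechanism described above (Pareto domination plus a normalized nearest-neighbor density estimate), the following sketch shows one possible update of an external elitist archive; the helper names and the exact pruning rule are assumptions, not the paper's procedure.

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a <= b) and np.any(a < b))

def update_archive(archive, candidate, max_size=100):
    """Illustrative external elitist archive update: keep only nondominated
    solutions and, on overflow, drop the member in the most crowded region
    (smallest normalized nearest-neighbor distance)."""
    archive = [s for s in archive if not dominates(candidate, s)]
    if not any(dominates(s, candidate) for s in archive):
        archive.append(np.asarray(candidate, dtype=float))
    if len(archive) > max_size:
        pts = np.array(archive)
        span = pts.max(axis=0) - pts.min(axis=0) + 1e-12
        norm = (pts - pts.min(axis=0)) / span                  # normalize objectives
        dist = np.linalg.norm(norm[:, None, :] - norm[None, :, :], axis=-1)
        np.fill_diagonal(dist, np.inf)
        archive.pop(int(np.argmin(dist.min(axis=1))))          # most crowded member
    return archive
```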

    Multi-objective Synthesis of Antennas from Special and Conventional Materials

    In this paper, we provide a comprehensive look at the multi-objective design of radiating, guiding, and reflecting structures fabricated both from special materials (semiconductors, high-impedance surfaces) and from conventional ones (microwave substrates, fully metallic antennas). Discussions are devoted to the proper selection of the numerical solver used for evaluating partial objectives, to the selection of the domain of analysis, to the proper formulation of the multi-objective function, and to the way of computing the Pareto front of optimal solutions (here, we exploit swarm-intelligence algorithms, evolutionary methods, and self-organizing migrating algorithms). The described approaches are applied to the design of selected types of microwave antennas, transmission lines, and reflectors. Considering the obtained results, the paper concludes with generalizing remarks.

    Distribution network reconfiguration considering DGs using a hybrid CS-GWO algorithm for power loss minimization and voltage profile enhancement

    This paper presents an implementation of the hybrid Cuckoo Search and Grey Wolf (CS-GWO) optimization algorithm for simultaneously solving the problems of distribution network reconfiguration (DNR) and optimal location and sizing of distributed generations (DGs) in radial distribution systems (RDSs). The algorithm is used to minimize the system power loss and the voltage deviation at load buses, and to improve the voltage profile. When solving high-dimensional optimization problems, the GWO algorithm easily falls into a local optimum. To strengthen the search capability of the GWO algorithm, the CS algorithm is integrated to update the best three candidate solutions. The resulting hybrid CS-GWO algorithm has a stronger search capability for finding optimal candidate solutions to the combined problem. Furthermore, to validate its effectiveness and performance, the proposed hybrid CS-GWO algorithm is tested and evaluated on the standard IEEE 33-bus and 69-bus RDSs under different scenarios.
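    A minimal sketch of the hybridization described above is given below: a standard GWO position update in which the three leaders (alpha, beta, delta) are first refreshed by Cuckoo-Search Lévy flights. The function names, step sizes, and parameter schedule are illustrative assumptions rather than the authors' implementation, and the power-system objective is replaced by a generic test function.

```python
import math
import numpy as np

def levy_flight(dim, rng, beta=1.5):
    """Levy-distributed step (Mantegna's algorithm), as used in Cuckoo Search."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def hybrid_cs_gwo(f, dim, n_wolves=30, iters=200, bounds=(-1.0, 1.0), seed=0):
    """Illustrative hybrid: standard GWO position update, with the three
    leaders (alpha, beta, delta) refreshed by Cuckoo-Search Levy flights."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (n_wolves, dim))
    for t in range(iters):
        fit = np.apply_along_axis(f, 1, X)
        leaders = X[np.argsort(fit)[:3]].copy()
        # CS step: perturb each leader with a Levy flight, keep it if it improves
        for leader in leaders:
            trial = np.clip(leader + 0.01 * levy_flight(dim, rng), lo, hi)
            if f(trial) < f(leader):
                leader[:] = trial
        a = 2.0 * (1 - t / iters)                  # linearly decreasing GWO parameter
        moves = []
        for leader in leaders:
            A = a * (2 * rng.random((n_wolves, dim)) - 1)
            C = 2 * rng.random((n_wolves, dim))
            moves.append(leader - A * np.abs(C * leader - X))
        X = np.clip(sum(moves) / 3.0, lo, hi)      # average of leader-guided moves
    fit = np.apply_along_axis(f, 1, X)
    return X[np.argmin(fit)]

# Example: minimize the sphere function in 10 dimensions
best = hybrid_cs_gwo(lambda x: float(np.sum(x ** 2)), dim=10, bounds=(-5.0, 5.0))
```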

    One more look on visualization of operation of a root-finding algorithm

    Many algorithms that iteratively find the solution of an equation require tuning. Due to the complex interdependence of an algorithm's elements, it is difficult to know their impact on its behavior. The article presents a simple root-finding algorithm with self-adaptation that requires tuning, similar to evolutionary algorithms. Moreover, the use of various iteration processes instead of the standard Picard iteration is presented. Visualizations of the dynamics are used in the algorithm's analysis. The conducted experiments and the discussion of their results allow one to understand the influence of tuning on the proposed algorithm, and an understanding of these tuning mechanisms can also be helpful when using other evolutionary algorithms. Moreover, the presented visualizations show intriguing patterns with potential artistic applications.
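    The idea of swapping the iteration process can be sketched as follows, assuming the Newton step is treated as a fixed-point map and the Mann iteration stands in for one of the alternative processes; the names and parameter values are illustrative, and the paper's self-adaptive algorithm is more elaborate than this sketch.

```python
def newton_map(f, df):
    """Newton's root-finding step viewed as a fixed-point map T(z) = z - f(z)/f'(z)."""
    return lambda z: z - f(z) / df(z)

def picard(T, z0, iters=50):
    """Standard Picard iteration: z_{n+1} = T(z_n)."""
    z = z0
    for _ in range(iters):
        z = T(z)
    return z

def mann(T, z0, alpha=0.7, iters=50):
    """Mann iteration: z_{n+1} = (1 - alpha) * z_n + alpha * T(z_n); the parameter
    alpha acts as one of the tuning knobs that reshape the dynamics."""
    z = z0
    for _ in range(iters):
        z = (1 - alpha) * z + alpha * T(z)
    return z

# Example: roots of z^3 - 1 in the complex plane. Coloring each starting point
# by the root it converges to (for a given alpha) produces the kind of
# basin-of-attraction images used in the visualizations.
T = newton_map(lambda z: z ** 3 - 1, lambda z: 3 * z ** 2)
print(picard(T, 0.4 + 0.9j), mann(T, 0.4 + 0.9j, alpha=0.6))
```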