
    An Improved Differential Evolution Method Based on the Dynamic Search Strategy to Solve Dynamic Economic Dispatch Problem with Valve-Point Effects

    An improved differential evolution (DE) method based on a dynamic search strategy (IDEBDSS) is proposed in this paper to solve the dynamic economic dispatch problem with valve-point effects. The proposed method combines the DE algorithm with a dynamic search strategy, which improves the performance of the algorithm. DE is the main optimizer in the proposed method; chaotic sequences are used to obtain dynamic parameter settings for DE, while the dynamic search strategy, consisting of a global search step and a local search step, improves algorithm efficiency. To accelerate convergence, a new infeasible-solution handling method is adopted in the local search step; meanwhile, an orthogonal crossover (OX) operator is added to the global search step to enhance the optimization search ability. Finally, the feasibility and effectiveness of the proposed method are demonstrated on three test systems, and the simulation results reveal that the IDEBDSS method obtains better solutions with higher efficiency than standard DE and other methods reported in the recent literature.
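    A minimal sketch of a DE core loop with chaotic parameter control is given below, assuming a logistic map for the chaotic sequences; the two-step dynamic search strategy, the orthogonal crossover operator, and the infeasible-solution handling of IDEBDSS are not reproduced, and all constants, ranges, and function names are illustrative.

```python
import numpy as np

def chaotic_logistic(x):
    """One iteration of the logistic map, a common choice for chaotic parameter control."""
    return 4.0 * x * (1.0 - x)

def de_chaotic(objective, bounds, pop_size=30, generations=200, seed=0):
    """DE/rand/1/bin with F and CR varied by chaotic sequences (illustrative only)."""
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fitness = np.array([objective(x) for x in pop])
    f_chaos, cr_chaos = 0.37, 0.61                      # arbitrary seeds in (0, 1)

    for _ in range(generations):
        f_chaos, cr_chaos = chaotic_logistic(f_chaos), chaotic_logistic(cr_chaos)
        F, CR = 0.4 + 0.5 * f_chaos, 0.1 + 0.8 * cr_chaos   # map chaos into usable ranges
        for i in range(pop_size):
            r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            mutant = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), lo, hi)  # mutation
            cross = rng.random(dim) < CR                                  # binomial crossover
            cross[rng.integers(dim)] = True              # guarantee at least one mutant gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = objective(trial)
            if f_trial <= fitness[i]:                    # greedy selection
                pop[i], fitness[i] = trial, f_trial
    best = int(np.argmin(fitness))
    return pop[best], fitness[best]
```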

    Experimental Comparisons of Derivative Free Optimization Algorithms

    In this paper, the performance of the quasi-Newton BFGS algorithm, the NEWUOA derivative-free optimizer, the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), the Differential Evolution (DE) algorithm, and Particle Swarm Optimizers (PSO) is compared experimentally on benchmark functions reflecting important challenges encountered in real-world optimization problems. The dependence of performance on the conditioning of the problem and on the rotational invariance of the algorithms is investigated in particular. Comment: 8th International Symposium on Experimental Algorithms, Dortmund, Germany (2009).
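    The sketch below illustrates the kind of comparison the paper describes rather than its actual benchmark suite: it runs SciPy's BFGS and differential evolution on an ill-conditioned ellipsoid, with and without a random rotation, to expose sensitivity to conditioning and to the coordinate system. The dimension, bounds, budgets, and condition number are arbitrary choices.

```python
import numpy as np
from scipy.optimize import minimize, differential_evolution
from scipy.stats import ortho_group

def make_ellipsoid(dim, condition=1e6, rotate=True, seed=0):
    """Ill-conditioned ellipsoid; an optional random rotation removes separability."""
    scales = condition ** (np.arange(dim) / (dim - 1))
    R = ortho_group.rvs(dim, random_state=seed) if rotate else np.eye(dim)
    return lambda x: float(np.sum(scales * (R @ x) ** 2))

dim = 10
x0 = np.full(dim, 3.0)
for rotate in (False, True):
    f = make_ellipsoid(dim, rotate=rotate)
    res_bfgs = minimize(f, x0, method="BFGS")
    res_de = differential_evolution(f, [(-5.0, 5.0)] * dim, seed=1, maxiter=300, tol=1e-10)
    print(f"rotated={rotate}: BFGS f={res_bfgs.fun:.2e} ({res_bfgs.nfev} evals), "
          f"DE f={res_de.fun:.2e} ({res_de.nfev} evals)")
```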

    Genetic learning particle swarm optimization

    Social learning in particle swarm optimization (PSO) helps collective efficiency, whereas individual reproduction in the genetic algorithm (GA) facilitates global effectiveness. This observation has recently led to hybridizing PSO with GA for performance enhancement. However, existing work relies on a mechanistic parallel superposition, and research has shown that constructing superior exemplars in PSO is more effective. Hence, this paper first develops a new framework for organically hybridizing PSO with another optimization technique for “learning.” This leads to a generalized “learning PSO” paradigm, the *L-PSO. The paradigm is composed of two cascading layers, the first for exemplar generation and the second for particle updates as in a normal PSO algorithm. Using genetic evolution to breed promising exemplars for PSO, a specific novel *L-PSO algorithm, termed genetic learning PSO (GL-PSO), is proposed in the paper. In particular, genetic operators are used to generate exemplars from which particles learn and, in turn, the historical search information of the particles guides the evolution of the exemplars. By performing crossover, mutation, and selection on the historical information of particles, the constructed exemplars are not only well diversified but also of high quality. Under such guidance, both the global search ability and the search efficiency of PSO are enhanced. The proposed GL-PSO is tested on 42 benchmark functions widely adopted in the literature. Experimental results verify the effectiveness, efficiency, robustness, and scalability of the GL-PSO.
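    A simplified sketch of the exemplar-breeding idea follows, assuming minimization; the crossover, mutation, and selection rules, as well as the constants, are illustrative stand-ins rather than the exact GL-PSO operators.

```python
import numpy as np

def gl_pso_like(objective, bounds, swarm=40, iters=300, w=0.7298, c=1.49618, pm=0.01, seed=0):
    """Exemplar-learning PSO sketch: breed exemplars from personal/global bests,
    keep them only if they improve (selection), then let particles learn from them."""
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (swarm, dim))
    v = np.zeros((swarm, dim))
    pbest, pfit = x.copy(), np.array([objective(p) for p in x])
    g = int(np.argmin(pfit))
    exemplar, exfit = pbest.copy(), pfit.copy()

    for _ in range(iters):
        for i in range(swarm):
            # crossover: mix particle i's personal best with the global best, gene by gene
            mask = rng.random(dim) < 0.5
            child = np.where(mask, pbest[i], pbest[g])
            # mutation: occasionally reinitialise a gene inside the bounds
            mut = rng.random(dim) < pm
            child[mut] = rng.uniform(lo, hi, dim)[mut]
            cfit = objective(child)
            if cfit < exfit[i]:                              # selection on the exemplar
                exemplar[i], exfit[i] = child, cfit
            # standard PSO update, learning only from the exemplar
            v[i] = w * v[i] + c * rng.random(dim) * (exemplar[i] - x[i])
            x[i] = np.clip(x[i] + v[i], lo, hi)
            f = objective(x[i])
            if f < pfit[i]:
                pbest[i], pfit[i] = x[i].copy(), f
        g = int(np.argmin(pfit))
    return pbest[g], pfit[g]
```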

    Observation of rotation in star forming regions: clouds, cores, disks, and jets

    Angular momentum plays a crucial role in the formation of stars and planets. It has long been noticed that parcels of gas in molecular clouds need to reduce their specific angular momentum by 6 to 7 orders of magnitude to participate in the building of a typical star like the Sun. Several physical processes on different scales and at different stages of evolution can contribute to this loss of angular momentum. In order to set constraints on these processes and better understand this transfer of angular momentum, a detailed observational census and characterization of rotation at all stages of evolution and over all scales of star forming regions is necessary. This review presents the main results obtained in low-mass star forming regions over the past four decades in this field of research. It addresses the search for and characterization of rotation in molecular clouds, prestellar and protostellar cores, circumstellar disks, and jets. Perspectives offered by ALMA are briefly discussed. Comment: 43 pages, 8 figures. To appear in the Proceedings of the Evry Schatzman School 2012 of PNPS and CNRS/INSU on the "Role and mechanisms of angular momentum transport during the formation and early evolution of stars", Eds. P. Hennebelle and C. Charbonnel.

    An Introduction to Conformal Ricci Flow

    We introduce a variation of the classical Ricci flow equation that modifies the unit volume constraint of that equation to a scalar curvature constraint. The resulting equations are named the Conformal Ricci Flow Equations because of the role that conformal geometry plays in constraining the scalar curvature. These equations are analogous to the incompressible Navier-Stokes equations of fluid mechanics inasmuch as a conformal pressure arises as a Lagrange multiplier to conformally deform the metric flow so as to maintain the scalar curvature constraint. The equilibrium points are Einstein metrics with a negative Einstein constant, and the conformal pressure is shown to be zero at an equilibrium point and strictly positive otherwise. The geometry of the conformal Ricci flow is discussed, as well as the remarkable analytic fact that the constraint force does not lose derivatives; thus, analytically, the conformal Ricci equation is a bounded perturbation of the classical unnormalized Ricci equation. That the constraint force does not lose derivatives is exactly analogous to the fact that the real physical pressure force that occurs in the Navier-Stokes equations is a bounded function of the velocity. Using a nonlinear Trotter product formula, existence and uniqueness of solutions to the conformal Ricci flow equations are proven. Lastly, we discuss potential applications to Perelman's proposed implementation of Hamilton's program to prove Thurston's 3-manifold geometrization conjectures. Comment: 52 pages, 1 figure.
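    As a rough reminder of the construction described above, the classical Ricci flow and the conformal Ricci flow can be contrasted as follows; the normalization below reflects our reading of Fischer's three-dimensional formulation and should be checked against the paper itself.

```latex
% Classical (unnormalized) Ricci flow on a closed manifold M:
%   \partial g / \partial t = -2\,\mathrm{Ric}(g).
% Conformal Ricci flow (dimension 3): a conformal pressure p \ge 0 enters as a
% Lagrange multiplier enforcing the scalar-curvature constraint R(g) = -1.
\begin{aligned}
  \frac{\partial g}{\partial t} + 2\left(\mathrm{Ric}(g) + \tfrac{1}{3}\,g\right) &= -\,p\,g,\\
  R(g) &= -1.
\end{aligned}
```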

    Orthogonal learning particle swarm optimization

    Particle swarm optimization (PSO) relies on its learning strategy to guide its search direction. Traditionally, each particle utilizes its historical best experience and its neighborhood's best experience through linear summation. Such a learning strategy is easy to use but is inefficient when searching in complex problem spaces. Hence, designing learning strategies that can utilize previous search information (experience) more efficiently has become one of the most salient and active PSO research topics. In this paper, we propose an orthogonal learning (OL) strategy for PSO to discover more useful information that lies in the above two experiences via orthogonal experimental design. We name this PSO variant orthogonal learning particle swarm optimization (OLPSO). The OL strategy can guide particles to fly in better directions by constructing a more promising and efficient exemplar. The OL strategy can be applied to PSO with any topological structure. In this paper, it is applied to both the global and local versions of PSO, yielding the OLPSO-G and OLPSO-L algorithms, respectively. This new learning strategy and the new algorithms are tested on a set of 16 benchmark functions, and are compared with other PSO algorithms and some state-of-the-art evolutionary algorithms. The experimental results illustrate the effectiveness and efficiency of the proposed learning strategy and algorithms. The comparisons show that OLPSO significantly improves the performance of PSO, offering faster global convergence, higher solution quality, and stronger robustness.
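    Below is a minimal sketch of orthogonal-experimental-design exemplar construction, assuming minimization: rows of a two-level orthogonal array select, per dimension, between a particle's personal best and its neighborhood's best, and the best-tested combination serves as the exemplar used in the velocity update. OLPSO's additional factor-analysis step, which predicts a possibly better untested combination, is omitted here, and all names are illustrative.

```python
import numpy as np

def orthogonal_array(factors):
    """Two-level orthogonal array with 2**ceil(log2(factors + 1)) rows.
    Entry (i, j) is the parity of popcount(i & (j + 1)), the standard linear construction."""
    k = int(np.ceil(np.log2(factors + 1)))
    rows = 2 ** k
    oa = np.zeros((rows, factors), dtype=int)
    for i in range(rows):
        for j in range(factors):
            oa[i, j] = bin(i & (j + 1)).count("1") % 2
    return oa

def orthogonal_exemplar(objective, pbest_i, nbest):
    """Combine two guidance vectors dimension-wise via orthogonal experimental design:
    level 0 takes the personal best, level 1 the neighborhood best; keep the best row."""
    oa = orthogonal_array(len(pbest_i))
    candidates = np.where(oa == 0, pbest_i, nbest)       # one candidate per OA row
    fits = np.array([objective(c) for c in candidates])
    return candidates[int(np.argmin(fits))]
```

    In OLPSO itself, the resulting exemplar replaces the separate personal-best and neighborhood-best terms in the velocity update, so each particle is guided by a single, better-informed direction.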