
    Enhanced Differential Evolution Based on Adaptive Mutation and Wrapper Local Search Strategies for Global Optimization Problems

    Differential evolution (DE) is a simple yet powerful optimization algorithm that has been widely used in many areas. However, choosing the best mutation and search strategies for a specific problem is difficult. To alleviate these drawbacks and enhance the performance of DE, this paper proposes a hybrid framework based on adaptive mutation and Wrapper Local Search (WLS) schemes, which improves the search ability and efficiently guides the evolution of the population toward the global optimum. Furthermore, the effective particle encoding representation named Particle Segment Operation-Machine Assignment (PSOMA), which we previously published, is applied so that feasible candidate solutions are always produced when solving the Flexible Job-shop Scheduling Problem (FJSP). Experiments were conducted on a comprehensive set of complex benchmarks, including unimodal, multimodal, and hybrid composition functions, to validate the performance of the proposed method and to compare it with other state-of-the-art DE variants such as jDE, JADE, and MDE_pBX. Meanwhile, the hybrid DE model incorporating PSOMA is used to solve different representative instances based on practical data for multi-objective FJSP verification. Simulation results indicate that the proposed method performs better on the majority of the single-objective scalable benchmark functions in terms of solution accuracy and convergence rate. In addition, it provides a wide range of Pareto-optimal solutions and more Gantt-chart decision alternatives for multi-objective FJSP combinatorial optimization.
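
    A minimal Python/NumPy sketch of the general idea of combining DE with an adaptive choice of mutation strategy and a small local search around the best solution. The function names, parameter values, and the simple success-based adaptation rule below are illustrative assumptions, not the paper's WLS or PSOMA schemes.

```python
import numpy as np

def sphere(x):
    # Simple unimodal test function; stands in for the benchmark suite.
    return float(np.sum(x ** 2))

def de_adaptive_wls(fobj, dim=10, pop_size=30, max_gen=200,
                    bounds=(-5.0, 5.0), seed=0):
    """Illustrative DE loop with a crude adaptive choice between two
    mutation strategies and a small local search around the best point.
    A simplified sketch, not the authors' exact method."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([fobj(x) for x in pop])
    p_best = 0.5  # probability of using the exploitative "best/1" mutation
    for _ in range(max_gen):
        best = pop[np.argmin(fit)]
        success_best, success_rand = 1e-9, 1e-9
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            use_best = rng.random() < p_best
            base = best if use_best else a
            mutant = np.clip(base + 0.5 * (b - c), lo, hi)
            # Binomial crossover with CR = 0.9.
            cross = rng.random(dim) < 0.9
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            f_trial = fobj(trial)
            if f_trial < fit[i]:
                pop[i], fit[i] = trial, f_trial
                if use_best:
                    success_best += 1
                else:
                    success_rand += 1
        # Adapt the strategy probability toward whichever mutation succeeded more.
        p_best = 0.9 * p_best + 0.1 * success_best / (success_best + success_rand)
        # Local-search step: small perturbations around the current best.
        best_idx = int(np.argmin(fit))
        for _ in range(5):
            cand = np.clip(pop[best_idx] + rng.normal(0, 0.1, dim), lo, hi)
            f_cand = fobj(cand)
            if f_cand < fit[best_idx]:
                pop[best_idx], fit[best_idx] = cand, f_cand
    return pop[np.argmin(fit)], float(fit.min())

if __name__ == "__main__":
    x_best, f_best = de_adaptive_wls(sphere)
    print("best fitness:", f_best)
```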

    Biogeography-based learning particle swarm optimization


    Genetic learning particle swarm optimization

    Social learning in particle swarm optimization (PSO) helps collective efficiency, whereas individual reproduction in genetic algorithms (GA) facilitates global effectiveness. This observation has recently led to hybridizing PSO with GA for performance enhancement. However, existing work uses a mechanistic parallel superposition, and research has shown that constructing superior exemplars for PSO is more effective. Hence, this paper first develops a new framework that organically hybridizes PSO with another optimization technique for “learning.” This leads to a generalized “learning PSO” paradigm, the *L-PSO. The paradigm is composed of two cascading layers: the first generates exemplars, and the second updates particles as in a standard PSO algorithm. Using genetic evolution to breed promising exemplars for PSO, a specific novel *L-PSO algorithm, termed genetic learning PSO (GL-PSO), is proposed in the paper. In particular, genetic operators are used to generate exemplars from which particles learn, and, in turn, the historical search information of particles guides the evolution of the exemplars. By performing crossover, mutation, and selection on the historical information of particles, the constructed exemplars are not only well diversified but also of high quality. Under such guidance, both the global search ability and the search efficiency of PSO are enhanced. The proposed GL-PSO is tested on 42 benchmark functions widely adopted in the literature, and experimental results verify its effectiveness, efficiency, robustness, and scalability.
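
    A minimal Python sketch of the two-layer idea described above: genetic crossover, mutation, and selection build an exemplar for each particle from historical bests, and a standard PSO update then pulls the particle toward its exemplar. All parameter values and helper names are illustrative assumptions, not the authors' exact GL-PSO implementation.

```python
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

def gl_pso_sketch(fobj, dim=10, swarm=30, iters=300, bounds=(-5.0, 5.0),
                  w=0.7, c=1.5, pm=0.1, seed=0):
    """Two-layer 'learning PSO' sketch: an exemplar-generation layer using
    genetic operators, followed by a PSO update toward the exemplar."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (swarm, dim))
    v = np.zeros((swarm, dim))
    pbest = x.copy()
    pbest_f = np.array([fobj(p) for p in pbest])
    exemplar = pbest.copy()
    exemplar_f = pbest_f.copy()
    for _ in range(iters):
        gbest = pbest[np.argmin(pbest_f)]
        for i in range(swarm):
            # --- Exemplar-generation layer (genetic operators on pbests) ---
            mask = rng.random(dim) < 0.5               # crossover: mix pbest with gbest
            child = np.where(mask, pbest[i], gbest)
            mut = rng.random(dim) < pm                 # mutation: random reset
            child = np.where(mut, rng.uniform(lo, hi, dim), child)
            f_child = fobj(child)
            if f_child < exemplar_f[i]:                # selection: keep if better
                exemplar[i], exemplar_f[i] = child, f_child
            # --- Particle-update layer (standard PSO toward the exemplar) ---
            r = rng.random(dim)
            v[i] = w * v[i] + c * r * (exemplar[i] - x[i])
            x[i] = np.clip(x[i] + v[i], lo, hi)
            f_x = fobj(x[i])
            if f_x < pbest_f[i]:
                pbest[i], pbest_f[i] = x[i].copy(), f_x
    return pbest[np.argmin(pbest_f)], float(pbest_f.min())

if __name__ == "__main__":
    _, best = gl_pso_sketch(sphere)
    print("best fitness:", best)
```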

    Artificial Bee Colony Algorithm with Improved Explorations for Numerical Function Optimization

    A major problem with the Artificial Bee Colony (ABC) algorithm is its premature convergence to local optima, which originates from a lack of explorative search capability. This paper introduces ABC with Improved Explorations (ABC-IX), a novel algorithm that modifies both the selection and perturbation operations of the basic ABC algorithm in an explorative way. Unlike the basic ABC algorithm, ABC-IX employs a probabilistic, explorative selection scheme based on simulated annealing that can accept both better and worse candidate solutions. ABC-IX also maintains a self-adaptive perturbation rate, separately for each candidate solution, to promote more exploration. ABC-IX is tested on a number of benchmark problems for numerical optimization and compared with several recent ABC variants. Results show that ABC-IX often outperforms the other ABC variants on most of the problems.
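
    A rough Python sketch of the two modifications the abstract describes: a simulated-annealing-style acceptance rule that can take worse candidates, and a per-solution self-adaptive perturbation rate. The cooling schedule, adaptation factors, and test function below are assumptions for illustration only, not the published ABC-IX.

```python
import math
import numpy as np

def rastrigin(x):
    # Multimodal test function used here only for illustration.
    return float(10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))

def abc_ix_sketch(fobj, dim=10, colony=20, iters=500, bounds=(-5.12, 5.12),
                  t0=1.0, cooling=0.995, seed=0):
    """Simplified sketch: annealed probabilistic acceptance plus a
    self-adaptive perturbation rate per food source."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    foods = rng.uniform(lo, hi, (colony, dim))
    fits = np.array([fobj(f) for f in foods])
    mr = np.full(colony, 0.3)   # one perturbation rate per food source
    temp = t0
    for _ in range(iters):
        for i in range(colony):
            k = (i + 1 + rng.integers(colony - 1)) % colony  # a neighbour, k != i
            phi = rng.uniform(-1, 1, dim)
            # Perturb only a self-adaptive fraction of the dimensions.
            touch = rng.random(dim) < mr[i]
            cand = np.where(touch, foods[i] + phi * (foods[i] - foods[k]), foods[i])
            cand = np.clip(cand, lo, hi)
            f_cand = fobj(cand)
            delta = f_cand - fits[i]
            # Explorative acceptance: always take improvements, sometimes accept
            # worse candidates with a Boltzmann probability.
            if delta < 0 or rng.random() < math.exp(-delta / max(temp, 1e-12)):
                foods[i], fits[i] = cand, f_cand
                mr[i] = min(1.0, mr[i] * 1.05)   # success: perturb a bit more
            else:
                mr[i] = max(0.05, mr[i] * 0.95)  # failure: perturb a bit less
        temp *= cooling
    best = int(np.argmin(fits))
    return foods[best], float(fits[best])

if __name__ == "__main__":
    _, best_f = abc_ix_sketch(rastrigin)
    print("best fitness:", best_f)
```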

    Coarsening dynamics in one dimension: The phase diffusion equation and its numerical implementation

    Many nonlinear partial differential equations (PDEs) display coarsening dynamics, i.e., an emerging pattern whose typical length scale $L$ increases with time. The so-called coarsening exponent $n$ characterizes the time dependence of the scale of the pattern, $L(t)\approx t^n$, and coarsening dynamics can be described by a diffusion equation for the phase of the pattern. By means of a multiscale analysis we are able to find the analytical expression of such diffusion equations. Here, we propose a recipe to implement numerically the determination of $D(\lambda)$, the phase diffusion coefficient, as a function of the wavelength $\lambda$ of the base steady state $u_0(x)$. $D$ carries all the information about coarsening dynamics and, through the relation $|D(L)| \simeq L^2/t$, it allows us to determine the coarsening exponent. The main conceptual message is that the coarsening exponent is determined without solving a time-dependent equation, but only by inspecting the periodic steady-state solutions. This provides a much faster strategy than a forward time-dependent calculation. We discuss our method for several different PDEs, both conserved and non-conserved.
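
    As a brief worked illustration of the quoted relation, assuming (purely for illustration) a power-law scaling $|D(L)| \sim L^{\alpha}$ at large $L$:

\[
  t \;\simeq\; \frac{L^2}{|D(L)|} \;\sim\; L^{\,2-\alpha}
  \quad\Longrightarrow\quad
  L(t) \;\sim\; t^{\,1/(2-\alpha)},
  \qquad n = \frac{1}{2-\alpha}.
\]

    In this illustrative case the coarsening exponent follows directly from how the phase diffusion coefficient scales with the wavelength of the steady state, with no time-dependent simulation required.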

    High-dimensional Black-box Optimization via Divide and Approximate Conquer

    Divide and Conquer (DC) is conceptually well suited to high-dimensional optimization, as it decomposes a problem into multiple small-scale sub-problems. However, appealing performance can seldom be observed when the sub-problems are interdependent. This paper suggests that the major difficulty of tackling interdependent sub-problems lies in the precise evaluation of a partial solution (to a sub-problem), which can be overwhelmingly costly and thus makes sub-problems non-trivial to conquer. We therefore propose an approximation approach, named Divide and Approximate Conquer (DAC), which reduces the cost of partial solution evaluation from exponential time to polynomial time, while convergence to the global optimum (of the original problem) is still guaranteed. The effectiveness of DAC is demonstrated empirically on two sets of non-separable high-dimensional problems.
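
    A toy Python sketch of the cheap partial-evaluation idea: each block of variables is optimized in turn, and a partial solution is scored by a single objective call with the remaining variables frozen at a shared context vector, rather than by searching over all completions. This illustrates only the general divide-and-conquer setting; DAC's specific approximation and its convergence guarantee are not reproduced here.

```python
import numpy as np

def ellipsoid(x):
    # Weighted sphere, used only as a stand-in objective.
    w = np.arange(1, x.size + 1)
    return float(np.sum(w * x ** 2))

def dac_style_sketch(fobj, dim=20, groups=4, iters=200, bounds=(-5.0, 5.0), seed=0):
    """Generic divide-and-conquer loop with approximate partial evaluation
    against a shared context vector."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    context = rng.uniform(lo, hi, dim)               # current full solution
    blocks = np.array_split(np.arange(dim), groups)  # fixed variable grouping
    f_context = fobj(context)
    for _ in range(iters):
        for idx in blocks:
            # Propose a new partial solution for this block only.
            trial = context.copy()
            trial[idx] = np.clip(context[idx] + rng.normal(0, 0.2, idx.size), lo, hi)
            # Approximate partial evaluation: one full-objective call with the
            # other variables frozen at the context (polynomial cost), instead
            # of searching over all their completions (exponential cost).
            f_trial = fobj(trial)
            if f_trial < f_context:
                context, f_context = trial, f_trial
    return context, f_context

if __name__ == "__main__":
    _, best = dac_style_sketch(ellipsoid)
    print("best value:", best)
```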

    Adaptive particle swarm optimization

    An adaptive particle swarm optimization (APSO) that features better search efficiency than classical particle swarm optimization (PSO) is presented. More importantly, it can perform a global search over the entire search space with faster convergence speed. The APSO consists of two main steps. First, by evaluating the population distribution and particle fitness, a real-time evolutionary state estimation procedure is performed to identify, in each generation, one of four defined evolutionary states: exploration, exploitation, convergence, and jumping out. This enables the automatic control of the inertia weight, acceleration coefficients, and other algorithmic parameters at run time to improve search efficiency and convergence speed. Second, an elitist learning strategy is performed when the evolutionary state is classified as convergence; the strategy acts on the globally best particle to help it jump out of likely local optima. The APSO has been comprehensively evaluated on 12 unimodal and multimodal benchmark functions, and the effects of parameter adaptation and elitist learning are studied. Results show that APSO substantially enhances the performance of the PSO paradigm in terms of convergence speed, global optimality, solution accuracy, and algorithm reliability. As APSO introduces only two new parameters to the PSO paradigm, it does not add design or implementation complexity.
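
    A simplified Python sketch of the two steps described above: an evolutionary-factor estimate computed from the population spread adapts the inertia weight, and an elitist Gaussian perturbation of the global best fires when the swarm looks converged. The state threshold and the inertia mapping used below are illustrative assumptions rather than the published APSO rules.

```python
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

def apso_sketch(fobj, dim=10, swarm=30, iters=300, bounds=(-5.0, 5.0), seed=0):
    """APSO-style loop: evolutionary state estimation drives the inertia
    weight; elitist learning perturbs the global best when converged."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (swarm, dim))
    v = np.zeros((swarm, dim))
    pbest, pbest_f = x.copy(), np.array([fobj(p) for p in x])
    g = int(np.argmin(pbest_f))
    gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    c1 = c2 = 2.0
    for _ in range(iters):
        # Evolutionary state estimation: compare the best particle's mean
        # distance to the others with the swarm-wide range of mean distances.
        d = np.array([np.mean(np.linalg.norm(x - x[i], axis=1)) for i in range(swarm)])
        g_idx = int(np.argmin(pbest_f))
        f_evo = (d[g_idx] - d.min()) / (d.max() - d.min() + 1e-12)
        w = 1.0 / (1.0 + 1.5 * np.exp(-2.6 * f_evo))  # inertia adapts with f_evo
        converged = f_evo < 0.25                       # crude 'convergence' state
        for i in range(swarm):
            r1, r2 = rng.random(dim), rng.random(dim)
            v[i] = w * v[i] + c1 * r1 * (pbest[i] - x[i]) + c2 * r2 * (gbest - x[i])
            x[i] = np.clip(x[i] + v[i], lo, hi)
            fx = fobj(x[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = x[i].copy(), fx
                if fx < gbest_f:
                    gbest, gbest_f = x[i].copy(), fx
        if converged:
            # Elitist learning: perturb one dimension of the global best.
            trial = gbest.copy()
            j = rng.integers(dim)
            trial[j] = np.clip(trial[j] + rng.normal(0, 0.1 * (hi - lo)), lo, hi)
            f_trial = fobj(trial)
            if f_trial < gbest_f:
                gbest, gbest_f = trial, f_trial
    return gbest, gbest_f

if __name__ == "__main__":
    _, best = apso_sketch(sphere)
    print("best fitness:", best)
```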