
    An evolutionary algorithm with double-level archives for multiobjective optimization

    Existing multiobjective evolutionary algorithms (MOEAs) tackle a multiobjective problem either as a whole or as several decomposed single-objective sub-problems. Though the decomposition approach generally converges faster by optimizing all the sub-problems simultaneously, two issues remain insufficiently addressed: the distribution of solutions often depends on the a priori problem decomposition, and population diversity among sub-problems can be lacking. In this paper, a MOEA with double-level archives is developed. The algorithm takes advantage of both the multiobjective-problem-level and the sub-problem-level approaches by introducing two types of archives: the global archive and the sub-archive. In each generation, self-reproduction within the global archive and cross-reproduction between the global archive and sub-archives both breed new individuals. The global archive and sub-archives communicate through cross-reproduction and are updated using the reproduced individuals. Such a framework thus retains fast convergence and, at the same time, handles solution distribution along the Pareto front (PF) with scalability. To test the performance of the proposed algorithm, experiments are conducted on both widely used benchmarks and a set of truly disconnected problems. The results verify that, compared with state-of-the-art MOEAs, the proposed algorithm offers competitive advantages in distance to the PF, solution coverage, and search speed.
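
    As a rough illustration of the generation loop sketched in the abstract, the following Python fragment shows how a global archive and a set of sub-archives might interact through self- and cross-reproduction. The function names and update rules are placeholders, not the authors' implementation.

        import random

        def evolve_one_generation(global_archive, sub_archives,
                                  self_reproduce, cross_reproduce,
                                  update_global, update_sub):
            """One generation of a double-archive scheme: breed offspring via self-
            and cross-reproduction, then update both archive levels with them."""
            offspring = []

            # Self-reproduction: recombine members of the global archive.
            for parent_a, parent_b in zip(global_archive, reversed(global_archive)):
                offspring.append(self_reproduce(parent_a, parent_b))

            # Cross-reproduction: pair the global archive with each sub-archive,
            # which is how the two archive levels communicate.
            for sub in sub_archives:
                offspring.append(cross_reproduce(random.choice(global_archive),
                                                 random.choice(sub)))

            # Both archive levels are updated with the newly bred individuals.
            new_global = update_global(global_archive, offspring)
            new_subs = [update_sub(sub, offspring) for sub in sub_archives]
            return new_global, new_subs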

    Metaheuristic design of feedforward neural networks: a review of two decades of research

    Over the past two decades, feedforward neural network (FNN) optimization has been a key interest among researchers and practitioners across multiple disciplines. FNN optimization is often viewed from various perspectives: optimization of weights, network architecture, activation nodes, learning parameters, learning environment, etc. Researchers adopted such different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs, and their success is evident from the FNN's application to numerous real-world problems. However, due to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms, swarm intelligence, etc., are still being widely explored by researchers aiming to obtain a well-generalized FNN for a given problem. This article attempts to summarize a broad spectrum of FNN optimization methodologies, including conventional and metaheuristic approaches. It also tries to connect various research directions that emerged out of FNN optimization practices, such as evolving neural networks (NNs), cooperative coevolution NNs, complex-valued NNs, deep learning, extreme learning machines, quantum NNs, etc. Additionally, it outlines interesting challenges for future research to cope with the present information-processing era.
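
    To make the metaheuristic viewpoint concrete, the sketch below optimizes the weights of a tiny FNN with a simple (1+λ) evolution strategy instead of backpropagation. The layer sizes, mutation scale, and toy data are arbitrary assumptions, not a method taken from the review.

        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.normal(size=(64, 4))                    # toy inputs
        y = (X.sum(axis=1, keepdims=True) > 0) * 1.0    # toy targets

        def unpack(w, n_in=4, n_hid=8, n_out=1):
            """Split a flat weight vector into the two layer matrices of a 4-8-1 FNN."""
            w1 = w[:n_in * n_hid].reshape(n_in, n_hid)
            w2 = w[n_in * n_hid:].reshape(n_hid, n_out)
            return w1, w2

        def loss(w):
            """Mean squared error of the FNN's output on the toy data."""
            w1, w2 = unpack(w)
            hidden = np.tanh(X @ w1)
            out = 1.0 / (1.0 + np.exp(-(hidden @ w2)))
            return float(np.mean((out - y) ** 2))

        dim = 4 * 8 + 8 * 1
        best = rng.normal(scale=0.1, size=dim)
        for generation in range(200):
            # Mutate the current best weights and keep the fittest offspring.
            population = best + rng.normal(scale=0.05, size=(20, dim))
            losses = [loss(w) for w in population]
            candidate = population[int(np.argmin(losses))]
            if loss(candidate) < loss(best):
                best = candidate

        print("final MSE:", loss(best))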

    A convergence acceleration operator for multiobjective optimisation

    A novel multiobjective optimisation accelerator is introduced that uses direct manipulation in objective space together with neural network mappings from objective space to decision space. This operator is a portable component that can be hybridised with any multiobjective optimisation algorithm. The purpose of this Convergence Acceleration Operator (CAO) is to enhance the search capability and the speed of convergence of the host algorithm. The operator acts directly in objective space to suggest improvements to solutions obtained by a multiobjective evolutionary algorithm (MOEA). These suggested improved objective vectors are then mapped into decision variable space and tested. The CAO is incorporated into two leading MOEAs, the Non-Dominated Sorting Genetic Algorithm (NSGA-II) and the Strength Pareto Evolutionary Algorithm (SPEA2), and tested. Results show that the hybridised algorithms consistently improve the speed of convergence of the original algorithms whilst maintaining the desired distribution of solutions.
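
    The following Python sketch illustrates the style of operator described above: suggest improved points directly in objective space, map them back to decision space, and test them. It substitutes a least-squares map for the paper's neural network mapping and uses an assumed "push toward the ideal point" rule, so it is an illustration of the idea rather than the authors' CAO.

        import numpy as np

        def cao_style_step(objective_vectors, decision_vectors, evaluate, improve_factor=0.95):
            """Suggest improved objective vectors, map them to decision space, and test them.

            objective_vectors : (n, m) array of objective values of current solutions
            decision_vectors  : (n, d) array of their decision variables
            evaluate          : callable mapping a decision vector to its true objectives
            """
            # Fit a simple linear map from objective space to decision space.
            # (The paper uses a neural network; least squares keeps this sketch
            # dependency-free.)
            A, *_ = np.linalg.lstsq(objective_vectors, decision_vectors, rcond=None)

            # Direct manipulation in objective space: push each point slightly
            # toward the ideal point (assuming minimisation of all objectives).
            suggested_objectives = objective_vectors * improve_factor

            # Map the suggested objective vectors into decision space and test them.
            suggested_decisions = suggested_objectives @ A
            tested = np.array([evaluate(x) for x in suggested_decisions])
            return suggested_decisions, tested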

    Discovering Evolutionary Stepping Stones through Behavior Domination

    Behavior domination is proposed as a tool for understanding and harnessing the power of evolutionary systems to discover and exploit useful stepping stones. Novelty search has shown promise in overcoming deception by collecting diverse stepping stones, and several algorithms have been proposed that combine novelty with a more traditional fitness measure to refocus search and help novelty search scale to more complex domains. However, combinations of novelty and fitness do not necessarily preserve the stepping stone discovery that novelty search affords. In several existing methods, competition between solutions can lead to an unintended loss of diversity. Behavior domination defines a class of algorithms that avoid this problem while inheriting theoretical guarantees from multiobjective optimization. Several existing algorithms are shown to be in this class, and a new algorithm is introduced based on fast non-dominated sorting. Experimental results show that this algorithm outperforms existing approaches in domains that contain useful stepping stones, and its advantage is sustained with scale. The conclusion is that behavior domination can help illuminate the complex dynamics of behavior-driven search, and can thus lead to the design of more scalable and robust algorithms. Comment: To appear in Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2017).
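
    The abstract names fast non-dominated sorting as the building block of the new algorithm. The sketch below shows a plain (unoptimised) non-dominated sort over (fitness, novelty) pairs that produces the same fronts; treating fitness and novelty as two maximised objectives is an illustrative assumption, not the paper's behavior-domination criterion.

        def dominates(a, b):
            """a dominates b if it is no worse in every objective and better in at least one."""
            return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

        def non_dominated_fronts(points):
            """Return indices of points grouped into successive non-dominated fronts."""
            remaining = list(range(len(points)))
            fronts = []
            while remaining:
                front = [i for i in remaining
                         if not any(dominates(points[j], points[i]) for j in remaining if j != i)]
                fronts.append(front)
                remaining = [i for i in remaining if i not in front]
            return fronts

        # Example: each individual scored by (fitness, novelty).
        scores = [(0.9, 0.1), (0.5, 0.5), (0.2, 0.8), (0.4, 0.4)]
        print(non_dominated_fronts(scores))   # first front: indices 0, 1, 2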

    A Hierarchical Evolutionary Algorithm for Multiobjective Optimization in IMRT

    Purpose: Current inverse planning methods for IMRT are limited because they are not designed to explore the trade-offs among the competing objectives of the tumor and normal tissues. Our goal was to develop an efficient multiobjective optimization algorithm that was flexible enough to handle any form of objective function and that resulted in a set of Pareto optimal plans. Methods: We developed a hierarchical evolutionary multiobjective algorithm designed to quickly generate a diverse Pareto optimal set of IMRT plans that meet all clinical constraints and reflect the trade-offs in the plans. The top level of the hierarchical algorithm is a multiobjective evolutionary algorithm (MOEA). The genes of the individuals generated in the MOEA are the parameters that define the penalty function minimized during an accelerated deterministic IMRT optimization, which constitutes the bottom level of the hierarchy. The MOEA incorporates clinical criteria to restrict the search space through protocol objectives and then uses Pareto optimality among the fitness objectives to select individuals. Results: Acceleration techniques implemented on both levels of the hierarchical algorithm resulted in short, practical runtimes for optimizations. The MOEA improvements were evaluated for example prostate cases with one target and two OARs. The modified MOEA dominated 11.3% of plans generated with a standard genetic algorithm package. By implementing domination advantage and protocol objectives, small diverse populations of clinically acceptable plans, dominated only 0.2% by the Pareto front, could be generated in a fraction of an hour. Conclusions: Our MOEA produces a diverse Pareto optimal set of plans that meet all dosimetric protocol criteria in a feasible amount of time. It optimizes not only beamlet intensities but also objective function parameters on a patient-specific basis.
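
    A schematic sketch of the two-level structure described above: an outer evolutionary loop evolves penalty-function parameters, and an inner deterministic optimization (stubbed out here) turns each parameter set into a plan with clinical objective values. The parameter encoding, objectives, and selection rule are all illustrative assumptions, not the authors' algorithm.

        import random

        def inner_plan_optimization(penalty_params):
            """Stand-in for the accelerated deterministic IMRT optimization:
            returns a plan (here just the parameters) and its objective values."""
            tumor_coverage = 1.0 - abs(penalty_params["target_weight"] - 0.7)
            oar_sparing = 1.0 - abs(penalty_params["oar_weight"] - 0.3)
            return penalty_params, (tumor_coverage, oar_sparing)

        def mutate(params, scale=0.05):
            """Perturb the penalty-function parameters (the genes of the outer MOEA)."""
            return {k: min(1.0, max(0.0, v + random.gauss(0.0, scale)))
                    for k, v in params.items()}

        def outer_loop(generations=30, population_size=8):
            population = [{"target_weight": random.random(), "oar_weight": random.random()}
                          for _ in range(population_size)]
            for _ in range(generations):
                evaluated = [inner_plan_optimization(p) for p in population]
                # Keep the half with the best summed objectives: a crude stand-in
                # for Pareto selection with protocol constraints.
                evaluated.sort(key=lambda pair: sum(pair[1]), reverse=True)
                survivors = [plan for plan, _ in evaluated[:population_size // 2]]
                population = survivors + [mutate(p) for p in survivors]
            return evaluated

        if __name__ == "__main__":
            for plan, objectives in outer_loop()[:3]:
                print(plan, objectives)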