
    Tournament versus Fitness Uniform Selection

    In evolutionary algorithms, a critical parameter that must be tuned is the selection pressure. If it is set too low, the rate of convergence towards the optimum is likely to be slow; if it is set too high, the system is likely to become stuck in a local optimum due to a loss of diversity in the population. The recent Fitness Uniform Selection Scheme (FUSS) is a conceptually simple but somewhat radical approach to addressing this problem: rather than biasing selection towards higher fitness, FUSS biases selection towards sparsely populated fitness levels. In this paper we compare the relative performance of FUSS with the well-known tournament selection scheme on a range of problems.
    Comment: 10 pages, 8 figures
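    The two schemes compared in this abstract differ only in how a parent is chosen from the population. The sketch below illustrates that difference in Python; the function names, the representation of the population as parallel lists, and the tournament size are illustrative assumptions, not details taken from the paper.

        import random

        def tournament_select(population, fitness, k=2):
            # Standard tournament selection: sample k individuals uniformly
            # at random and return the fittest of them.
            contenders = random.sample(range(len(population)), k)
            best = max(contenders, key=lambda i: fitness[i])
            return population[best]

        def fuss_select(population, fitness):
            # Fitness Uniform Selection (FUSS) as described in the abstract:
            # draw a target fitness uniformly between the current minimum and
            # maximum fitness, then return the individual whose fitness is
            # closest to that target, biasing selection towards sparsely
            # populated fitness levels rather than towards higher fitness.
            f_min, f_max = min(fitness), max(fitness)
            target = random.uniform(f_min, f_max)
            nearest = min(range(len(population)), key=lambda i: abs(fitness[i] - target))
            return population[nearest]

    Increasing k raises the selection pressure of tournament selection, whereas FUSS has no comparable pressure parameter to tune.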

    Fitness Uniform Optimization

    In evolutionary algorithms, the fitness of a population increases with time by mutating and recombining individuals and by a biased selection of more fit individuals. The right selection pressure is critical in ensuring sufficient optimization progress on the one hand and in preserving genetic diversity to be able to escape from local optima on the other hand. Motivated by a universal similarity relation on the individuals, we propose a new selection scheme, which is uniform in the fitness values. It generates selection pressure toward sparsely populated fitness regions, not necessarily toward higher fitness, as is the case for all other selection schemes. We show analytically on a simple example that the new selection scheme can be much more effective than standard selection schemes. We also propose a new deletion scheme which achieves a similar result via deletion and show how such a scheme preserves genetic diversity more effectively than standard approaches. We compare the performance of the new schemes to tournament selection and random deletion on an artificial deceptive problem and a range of NP-hard problems: traveling salesman, set covering and satisfiability.
    Comment: 25 double-column pages, 12 figures
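    The deletion counterpart mentioned in this abstract removes individuals so that no fitness level becomes overcrowded. A minimal sketch of such a fitness-uniform deletion step is given below, assuming the population is held as parallel lists and fitness values are grouped into equal-width bins; the bin count and representation are assumptions for illustration, not the paper's implementation.

        import random
        from collections import defaultdict

        def fitness_uniform_delete(population, fitness, n_bins=20):
            # Group individuals into equal-width fitness bins and delete one
            # individual, chosen at random, from the most crowded bin, so
            # that sparsely populated fitness levels are preserved.
            f_min, f_max = min(fitness), max(fitness)
            width = (f_max - f_min) / n_bins or 1.0
            bins = defaultdict(list)
            for i, f in enumerate(fitness):
                bins[min(int((f - f_min) / width), n_bins - 1)].append(i)
            victim = random.choice(max(bins.values(), key=len))
            return ([x for i, x in enumerate(population) if i != victim],
                    [f for i, f in enumerate(fitness) if i != victim])

    Random deletion, the baseline used in the comparison, would simply pick the victim uniformly from the whole population.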

    A lexicographic multi-objective genetic algorithm for multi-label correlation-based feature selection

    This paper proposes a new Lexicographic multi-objective Genetic Algorithm for Multi-Label Correlation-based Feature Selection (LexGA-ML-CFS), which is an extension of the previous single-objective Genetic Algorithm for Multi-label Correlation-based Feature Selection (GA-ML-CFS). This extension uses a LexGA as a global search method for generating candidate feature subsets. In our experiments, we compare the results obtained by LexGA-ML-CFS with the results obtained by the original hill climbing-based ML-CFS, the single-objective GA-ML-CFS and a baseline Binary Relevance method, using ML-kNN as the multi-label classifier. The results show that LexGA-ML-CFS improved predictive accuracy over the other methods in some cases, but in general there was no statistically significant difference between the results of LexGA-ML-CFS and those of the other methods.
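    The lexicographic approach ranks the objectives by priority instead of aggregating them into one score. A minimal sketch of such a comparison between two candidate feature subsets is shown below; the objective ordering, tolerance values and example numbers are illustrative assumptions rather than settings from the paper.

        def lex_better(a_objs, b_objs, tolerances):
            # Lexicographic comparison of two candidates on maximized
            # objectives ordered by priority: a lower-priority objective only
            # decides when all higher-priority objectives are within their
            # tolerance of each other.
            for a, b, tol in zip(a_objs, b_objs, tolerances):
                if abs(a - b) > tol:
                    return a > b
            return False  # effectively tied on every objective

        # e.g. objectives = (feature-subset merit, -number of selected features)
        print(lex_better((0.72, -15), (0.70, -10), tolerances=(0.05, 0)))
        # -> False: the merits tie within tolerance, so the smaller subset wins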

    Clustering analysis of railway driving missions with niching

    A wide range of applications requires classifying or grouping data into a set of categories or clusters. The most popular clustering techniques for this purpose are K-means clustering and hierarchical clustering; however, both methods require the number of clusters to be set a priori. In this paper, a clustering method based on a niching genetic algorithm is presented, with the aim of finding the best compromise between maximizing the inter-cluster distance and minimizing the intra-cluster distance. This method is applied to three clustering benchmarks and to the classification of driving missions for railway applications.
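    The compromise described above can be expressed as a single fitness value that a genetic algorithm maximizes over candidate sets of cluster centroids. The sketch below uses the ratio of the smallest centroid separation to the mean point-to-nearest-centroid distance; that particular ratio, and the NumPy-array representation, are illustrative assumptions, not the paper's exact criterion.

        import numpy as np

        def clustering_fitness(centroids, data):
            # Reward large inter-cluster distances and small intra-cluster
            # distances for a candidate set of centroids.
            centroids = np.asarray(centroids, dtype=float)
            data = np.asarray(data, dtype=float)
            # intra: mean distance from each point to its nearest centroid
            d = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
            intra = d.min(axis=1).mean()
            # inter: smallest pairwise distance between centroids
            pair = np.linalg.norm(centroids[:, None, :] - centroids[None, :, :], axis=2)
            inter = pair[np.triu_indices(len(centroids), k=1)].min()
            return inter / (intra + 1e-12)  # higher is better

    A niching scheme would then maintain several such centroid sets in parallel, allowing candidate solutions with different cluster counts to coexist in the population.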

    Temporal difference learning with interpolated table value functions

    This paper introduces a novel function approximation architecture especially well suited to temporal difference learning. The architecture is based on using sets of interpolated table look-up functions. These offer rapid and stable learning, and are efficient when the number of inputs is small. An empirical investigation is conducted to test their performance on a supervised learning task, and on the mountain car problem, a standard reinforcement learning benchmark. In each case, the interpolated table functions offer competitive performance. ©2009 IEEE
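    An interpolated table differs from an ordinary look-up table in that a query between two entries returns a weighted blend of them, and a temporal difference update spreads the error back over the same entries. A minimal one-dimensional sketch is given below, assuming inputs scaled to [0, 1]; the class name, table size and learning rate are illustrative assumptions, not the architecture from the paper.

        import numpy as np

        class InterpolatedTable:
            def __init__(self, n_entries=11):
                self.values = np.zeros(n_entries)

            def _locate(self, x):
                # Map x in [0, 1] to the index of the lower table entry and
                # the interpolation weight of the upper entry.
                pos = np.clip(x, 0.0, 1.0) * (len(self.values) - 1)
                i = min(int(pos), len(self.values) - 2)
                return i, pos - i

            def predict(self, x):
                i, w = self._locate(x)
                return (1 - w) * self.values[i] + w * self.values[i + 1]

            def update(self, x, target, alpha=0.1):
                # Distribute the TD error over the two entries that produced
                # the prediction, in proportion to their interpolation weights.
                i, w = self._locate(x)
                error = target - self.predict(x)
                self.values[i] += alpha * (1 - w) * error
                self.values[i + 1] += alpha * w * error

    Problems with more than one input, such as mountain car, would use a multi-dimensional table and multilinear interpolation over the surrounding entries.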

    A Hierarchical Evolutionary Algorithm for Multiobjective Optimization in IMRT

    Purpose: Current inverse planning methods for IMRT are limited because they are not designed to explore the trade-offs between the competing objectives of the tumor and the normal tissues. Our goal was to develop an efficient multiobjective optimization algorithm that was flexible enough to handle any form of objective function and that resulted in a set of Pareto optimal plans. Methods: We developed a hierarchical evolutionary multiobjective algorithm designed to quickly generate a diverse Pareto optimal set of IMRT plans that meet all clinical constraints and reflect the trade-offs in the plans. The top level of the hierarchical algorithm is a multiobjective evolutionary algorithm (MOEA). The genes of the individuals generated in the MOEA are the parameters that define the penalty function minimized during an accelerated deterministic IMRT optimization that represents the bottom level of the hierarchy. The MOEA incorporates clinical criteria to restrict the search space through protocol objectives and then uses Pareto optimality among the fitness objectives to select individuals. Results: Acceleration techniques implemented on both levels of the hierarchical algorithm resulted in short, practical runtimes for optimizations. The MOEA improvements were evaluated for example prostate cases with one target and two OARs. The modified MOEA dominated 11.3% of plans generated using a standard genetic algorithm package. By implementing domination advantage and protocol objectives, small, diverse populations of clinically acceptable plans that were only dominated 0.2% by the Pareto front could be generated in a fraction of an hour. Conclusions: Our MOEA produces a diverse Pareto optimal set of plans that meet all dosimetric protocol criteria in a feasible amount of time. It optimizes not only beamlet intensities but also objective function parameters on a patient-specific basis.
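    Two mechanisms in the top-level MOEA are a hard filter on the clinical protocol objectives and Pareto-dominance-based selection among the remaining plans. The sketch below illustrates those two steps; the function names, the treatment of an individual as a vector of penalty-function parameters, and the minimization convention are illustrative assumptions, not the authors' implementation.

        def dominates(a, b):
            # Standard Pareto dominance for minimized objective vectors: a
            # dominates b if it is no worse in every objective and strictly
            # better in at least one.
            return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

        def select_front(individuals, objectives, meets_protocol):
            # Discard individuals whose plans violate the protocol objectives,
            # then keep only the non-dominated (Pareto optimal) individuals.
            feasible = [ind for ind in individuals if meets_protocol(ind)]
            return [ind for ind in feasible
                    if not any(dominates(objectives(other), objectives(ind))
                               for other in feasible if other is not ind)]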

    Investigating learning rates for evolution and temporal difference learning

    Evidently, any learning algorithm can only learn on the basis of the information given to it. This paper presents a first attempt to place an upper bound on the information rates attainable with standard coevolution and with temporal difference learning (TDL). The upper bound for TDL is shown to be much higher than for coevolution. Under commonly used settings for learning to play Othello, for example, TDL may have an upper bound that is hundreds or even thousands of times higher than that of coevolution. To test how well these bounds correlate with actual learning rates, a simple two-player game called Treasure Hunt is developed. While the upper bounds cannot be used to predict the number of games required to learn the optimal policy, they do correctly predict the rank order of the number of games required by each algorithm. © 2008 IEEE
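    The upper bounds discussed here come from counting how much feedback each algorithm can observe per game: coevolution sees only the final outcome, whereas TDL receives a training signal after every move. The toy calculation below illustrates that style of reasoning; the move count and the number of distinguishable feedback values per move are illustrative assumptions, not figures from the paper.

        import math

        def coev_bits_per_game():
            # Coevolution observes only the game outcome: at most log2(3)
            # bits per game for win/draw/loss.
            return math.log2(3)

        def tdl_bits_per_game(moves_per_game=60, signals_per_move=3):
            # If TDL receives one of `signals_per_move` distinguishable
            # training signals after each move, the information per game is
            # bounded by moves_per_game * log2(signals_per_move) bits.
            return moves_per_game * math.log2(signals_per_move)

        print(tdl_bits_per_game() / coev_bits_per_game())  # ratio of the bounds, 60.0 here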
