
    Enhancing the diversity of genetic algorithm for improved feature selection

    Genetic algorithm (GA) is one of the most widely used population-based evolutionary search algorithms. One of the challenging optimization problems to which GA has been extensively applied is feature selection, which aims to find an optimal, small subset of features from the original large feature set. The main limitation of traditional GA-based feature selection is that it tends to get trapped in local minima, a problem known as premature convergence. A number of remedies have been proposed in the literature, based on fitness scaling, modified genetic operators, boosted genetic population diversity, and similar techniques. This paper presents a new modified genetic algorithm based on enhanced population diversity, parent selection, and improved genetic operators. Experimental results indicate the significance of the proposed GA variant in comparison to many other algorithms from the literature on different datasets.
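    To make the diversity idea concrete, here is a minimal, self-contained Python sketch of GA-based feature selection with one common diversity booster, the random-immigrants scheme (reseed the worst half of the population when mean pairwise Hamming distance collapses). It illustrates the general technique, not the paper's operators; the synthetic fitness, the RELEVANT set, and every threshold are invented for the example.

    # Hypothetical sketch: GA feature selection with a random-immigrant
    # diversity mechanism (not the paper's exact algorithm).
    import random

    random.seed(0)

    N_FEATURES = 30
    RELEVANT = set(range(5))          # synthetic ground truth: 5 useful features
    POP_SIZE, GENERATIONS = 40, 60
    DIVERSITY_FLOOR = 0.15            # min. mean Hamming distance before reseeding

    def fitness(mask):
        # Reward selecting relevant features, penalise subset size.
        hits = sum(1 for i, bit in enumerate(mask) if bit and i in RELEVANT)
        return hits - 0.1 * sum(mask)

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b)) / len(a)

    def mean_diversity(pop):
        pairs = [(a, b) for i, a in enumerate(pop) for b in pop[i + 1:]]
        return sum(hamming(a, b) for a, b in pairs) / len(pairs)

    def random_mask():
        return [random.randint(0, 1) for _ in range(N_FEATURES)]

    def crossover(a, b):
        cut = random.randrange(1, len(a))     # one-point crossover
        return a[:cut] + b[cut:]

    def mutate(mask, rate=0.02):
        return [bit ^ (random.random() < rate) for bit in mask]

    pop = [random_mask() for _ in range(POP_SIZE)]
    for gen in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        # Random immigrants: if the population collapses, replace the worst half.
        if mean_diversity(pop) < DIVERSITY_FLOOR:
            pop[POP_SIZE // 2:] = [random_mask() for _ in range(POP_SIZE // 2)]
        parents = pop[:POP_SIZE // 2]         # truncation selection
        children = [mutate(crossover(*random.sample(parents, 2)))
                    for _ in range(POP_SIZE - len(parents))]
        pop = parents + children

    best = max(pop, key=fitness)
    print("selected features:", [i for i, bit in enumerate(best) if bit])

    The same skeleton accommodates the other remedies the abstract mentions: fitness scaling would rescale scores before selection, and operator modification would swap in different crossover or mutation routines.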

    Ensemble Learning for Free with Evolutionary Algorithms?

    Evolutionary Learning proceeds by evolving a population of classifiers, from which it generally returns (with some notable exceptions) the single best-of-run classifier as the final result. Meanwhile, Ensemble Learning, one of the most effective approaches in supervised Machine Learning of the last decade, proceeds by building a population of diverse classifiers. Ensemble Learning with Evolutionary Computation is thus receiving increasing attention. The Evolutionary Ensemble Learning (EEL) approach presented in this paper features two contributions. First, a new fitness function, inspired by co-evolution and enforcing classifier diversity, is presented. Second, a new selection criterion based on the classification margin is proposed. This criterion is used to extract the classifier ensemble either from the final population only (Off-line) or incrementally along evolution (On-line). Experiments on a set of benchmark problems show that Off-line outperforms single-hypothesis evolutionary learning and state-of-the-art Boosting, and generates smaller classifier ensembles.
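    A hedged sketch of the two EEL ingredients described above, on toy one-dimensional data: (1) a fitness that adds a disagreement bonus to accuracy, standing in for the co-evolution-inspired diversity term, and (2) Off-line extraction that greedily keeps final-population members improving the aggregate vote margin. The classifier representation (a single threshold), the 0.3 diversity weight, and the greedy margin rule are assumptions for illustration, not the paper's exact formulation.

    # Illustrative sketch of diversity-enforcing fitness + margin-based
    # Off-line ensemble extraction (assumed details, toy data).
    import random

    random.seed(1)
    X = [random.uniform(0, 1) for _ in range(120)]
    y = [int(x > 0.5) for x in X]                       # toy 1-D concept

    def predict(theta, x):                              # threshold classifier
        return int(x > theta)

    def accuracy(theta):
        return sum(predict(theta, x) == t for x, t in zip(X, y)) / len(X)

    def diversity(theta, pop):
        # Mean disagreement with the rest of the population's predictions.
        others = [p for p in pop if p is not theta]
        return sum(predict(theta, x) != predict(p, x)
                   for p in others for x in X) / (len(others) * len(X))

    pop = [random.uniform(0, 1) for _ in range(24)]
    for _ in range(30):                                 # simple elitist loop
        scored = sorted(pop, reverse=True,
                        key=lambda th: accuracy(th) + 0.3 * diversity(th, pop))
        elite = scored[:12]
        pop = elite + [min(max(th + random.gauss(0, 0.05), 0.0), 1.0)
                       for th in elite]                 # mutated copies

    def margin(ensemble, x, t):
        # Signed vote margin of the ensemble on example (x, t).
        votes = [predict(th, x) for th in ensemble]
        return (votes.count(t) - votes.count(1 - t)) / len(votes)

    def total_margin(ensemble):
        return sum(margin(ensemble, x, t) for x, t in zip(X, y))

    # Off-line extraction: greedily keep members that improve the vote margin.
    ensemble = []
    for th in sorted(pop, key=accuracy, reverse=True):
        if not ensemble or total_margin(ensemble + [th]) > total_margin(ensemble):
            ensemble.append(th)
    print("ensemble size:", len(ensemble), "vote accuracy:",
          sum(margin(ensemble, x, t) > 0 for x, t in zip(X, y)) / len(X))

    The On-line variant would apply the same margin test during evolution rather than once at the end.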

    Semantic variation operators for multidimensional genetic programming

    Multidimensional genetic programming represents candidate solutions as sets of programs, and thereby provides an interesting framework for exploiting building block identification. Towards this goal, we investigate the use of machine learning as a way to bias which components of programs are promoted, and propose two semantic operators to choose where useful building blocks are placed during crossover. A forward stagewise crossover operator we propose leads to significant improvements on a set of regression problems, and produces state-of-the-art results in a large benchmark study. We discuss this architecture and others in terms of their propensity for allowing heuristic search to utilize information during the evolutionary process. Finally, we look at the collinearity and complexity of the data representations that result from these architectures, with a view towards disentangling factors of variation in application.
    Comment: 9 pages, 8 figures, GECCO 201
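    One plausible reading of the forward stagewise crossover, sketched below under stated assumptions: each individual is a set of small feature programs, and crossover imports the donor program whose output correlates most strongly with the parent's current regression residual, the same greedy criterion forward stagewise regression applies to features. The program library, the univariate-coefficient residual, and all data are invented for the example; the paper's actual operator may differ.

    # Assumed sketch of a forward-stagewise-flavoured semantic crossover
    # for individuals that are sets of feature programs.
    import math, random

    random.seed(2)
    X = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(100)]
    y = [x[0] * x[1] + math.sin(x[2]) for x in X]       # toy regression target

    PROGRAMS = {                                         # tiny "program" library
        "x0*x1":   lambda x: x[0] * x[1],
        "sin(x2)": lambda x: math.sin(x[2]),
        "x0+x2":   lambda x: x[0] + x[2],
        "x1**2":   lambda x: x[1] ** 2,
    }

    def outputs(name):
        return [PROGRAMS[name](x) for x in X]

    def corr(a, b):
        ma, mb = sum(a) / len(a), sum(b) / len(b)
        cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
        sa = math.sqrt(sum((u - ma) ** 2 for u in a))
        sb = math.sqrt(sum((v - mb) ** 2 for v in b))
        return cov / (sa * sb + 1e-12)

    def residual(individual):
        # Crude stand-in for a joint linear fit: one univariate OLS
        # coefficient per program, applied additively.
        preds = [0.0] * len(X)
        for name in individual:
            out = outputs(name)
            mo, mt = sum(out) / len(out), sum(y) / len(y)
            b = (sum((u - mo) * (v - mt) for u, v in zip(out, y))
                 / (sum((u - mo) ** 2 for u in out) + 1e-12))
            preds = [p + b * o for p, o in zip(preds, out)]
        return [t - p for t, p in zip(y, preds)]

    def semantic_crossover(parent, donor):
        # Import the donor program whose output correlates best with the
        # parent's current residual (forward-stagewise flavour).
        res = residual(parent)
        best = max(donor - parent, default=None,
                   key=lambda n: abs(corr(outputs(n), res)))
        return parent | {best} if best is not None else set(parent)

    child = semantic_crossover({"x0+x2"}, {"x0*x1", "sin(x2)", "x1**2"})
    print("child programs:", sorted(child))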