426 research outputs found

    Static and dynamic overproduction and selection of classifier ensembles with genetic algorithms

    The overproduce-and-choose strategy is a static classifier ensemble selection approach, which is divided into overproduction and selection phases. This thesis focuses on the selection phase, which is the challenge in the overproduce-and-choose strategy. When this phase is implemented as an optimization process, the search criterion and the search algorithm are the two major topics involved. In this thesis, we concentrate on optimization processes conducted using genetic algorithms guided by both single- and multi-objective functions. We first focus on finding the best search criterion. Various search criteria are investigated, such as diversity, the error rate and ensemble size. Error rate and diversity measures are directly compared in the single-objective optimization approach. Diversity measures are combined with the error rate and with ensemble size, in pairs of objective functions, to guide the multi-objective optimization approach. Experimental results are presented and discussed. Thereafter, we show that besides focusing on the characteristics of the decision profiles of ensemble members, the control of overfitting at the selection phase of the overproduce-and-choose strategy must also be taken into account. We show how overfitting can be detected at the selection phase and present three strategies to control overfitting. These strategies are tailored for the classifier ensemble selection problem and compared. This comparison allows us to show that a global validation strategy should be applied to control overfitting in optimization processes involving a classifier ensemble selection task. Furthermore, this study has helped us establish that this global validation strategy can be used as a tool to measure the relationship between diversity and classification performance when diversity measures are employed as single-objective functions. Finally, the main contribution of this thesis is a proposed dynamic overproduce-and-choose strategy.
While the static overproduce-and-choose selection strategy has traditionally focused on finding the most accurate subset of classifiers during the selection phase, and using it to predict the class of all the test samples, our dynamic overproduce-and-choose strategy allows the selection of the most confident subset of classifiers to label each test sample individually. Our method combines optimization and dynamic selection in a two-level selection phase. The optimization level is intended to generate a population of highly accurate classifier ensembles, while the dynamic selection level applies measures of confidence in order to select the ensemble with the highest degree of confidence in the current decision. Three different confidence measures are presented and compared. Our method outperforms classical static and dynamic selection strategies.
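As a concrete illustration of the optimization level described above, the following is a minimal sketch of single-objective GA ensemble selection, where the fitness of a candidate subset is its majority-vote accuracy on a validation set. The toy prediction matrix, GA parameters, and truncation-selection scheme are illustrative assumptions, not details taken from the thesis.

```python
# Hedged sketch: single-objective GA selection of a classifier ensemble.
# Base classifiers are represented only by their 0/1 predictions on a
# validation set; fitness is the majority-vote accuracy of the subset
# encoded by a bitstring mask (one bit per base classifier).
import random

random.seed(0)

# PREDICTIONS[i][j]: prediction of classifier i on validation sample j
PREDICTIONS = [
    [1, 0, 1, 1, 0, 1],  # classifier 0
    [1, 1, 1, 0, 0, 1],  # classifier 1
    [0, 0, 1, 1, 1, 1],  # classifier 2
    [1, 0, 0, 1, 0, 0],  # classifier 3
    [1, 0, 1, 1, 0, 1],  # classifier 4
]
LABELS = [1, 0, 1, 1, 0, 1]  # true validation labels

def majority_vote_accuracy(mask):
    """Accuracy of the majority vote of classifiers where mask[i] == 1."""
    chosen = [p for p, m in zip(PREDICTIONS, mask) if m]
    if not chosen:
        return 0.0
    correct = 0
    for j, y in enumerate(LABELS):
        votes = sum(c[j] for c in chosen)
        pred = 1 if votes * 2 > len(chosen) else 0  # strict majority for 1
        if pred == y:
            correct += 1
    return correct / len(LABELS)

def ga_select(pop_size=20, generations=30, p_mut=0.1):
    """Elitist GA over bitstring masks; returns the best mask found."""
    n = len(PREDICTIONS)
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=majority_vote_accuracy, reverse=True)
        elite = scored[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, n)         # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < p_mut else g for g in child]
            children.append(child)
        pop = elite + children
    best = max(pop, key=majority_vote_accuracy)
    return best, majority_vote_accuracy(best)

best_mask, best_acc = ga_select()
print(best_mask, best_acc)
```

Here the search space is tiny (2^5 masks), so the GA offers no real advantage over exhaustive search; the sketch only shows the encoding and fitness evaluation that a full-scale implementation would apply to much larger classifier pools.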

    An Enhanced Random Linear Oracle Ensemble Method using Feature Selection Approach based on Naïve Bayes Classifier

    The Random Linear Oracle (RLO) ensemble replaces each classifier with a mini-ensemble of two classifiers, allowing the base classifiers to be trained on different data sets and improving the variety of the trained classifiers. The Naïve Bayes (NB) classifier was chosen as the base classifier for this research due to its simplicity and low computational cost. Different feature selection algorithms are applied to the RLO ensemble to investigate the effect of differently sized data on its performance. Experiments were carried out using 30 data sets from the UCI repository and 6 learning algorithms, namely the NB classifier, the RLO ensemble, the RLO ensemble trained with Genetic Algorithm (GA) feature selection using the accuracy of the NB classifier as the fitness function, the RLO ensemble trained with GA feature selection using the accuracy of the RLO ensemble as the fitness function, the RLO ensemble trained with t-test feature selection, and the RLO ensemble trained with Kruskal-Wallis test feature selection. The results showed that the RLO ensemble could significantly improve the diversity of the NB classifier in dealing with distinctively selected feature sets through its fusion-selection paradigm. Consequently, feature selection algorithms could greatly benefit the RLO ensemble: with a properly selected number of features from the filter approach, or GA natural selection from the wrapper approach, it achieved substantial improvement in classification accuracy as well as growth in diversity.
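The fusion-selection mechanism described above can be sketched as follows: each RLO member pairs a random hyperplane (the oracle) with two sub-classifiers, one per side, and a test point is routed to the sub-classifier trained on its side of the hyperplane. In this sketch a nearest-centroid stub stands in for the Naïve Bayes base classifier, and the data and names are illustrative assumptions, not from the paper.

```python
# Hedged sketch of the Random Linear Oracle routing idea: each ensemble
# member is a pair of sub-classifiers plus a random hyperplane; a test
# point is routed to the sub-classifier trained on its side of the plane.
import random

random.seed(1)

def centroid_classifier(points, labels):
    """Return a predict(x) closure based on per-class mean (centroid)."""
    by_class = {}
    for p, y in zip(points, labels):
        by_class.setdefault(y, []).append(p)
    cents = {y: [sum(c) / len(ps) for c in zip(*ps)]
             for y, ps in by_class.items()}
    def predict(x):
        return min(cents, key=lambda y: sum((a - b) ** 2
                                            for a, b in zip(x, cents[y])))
    return predict

def train_rlo_member(points, labels, dim=2):
    """One RLO member: random hyperplane + one sub-classifier per side."""
    w = [random.uniform(-1, 1) for _ in range(dim)]
    b = random.uniform(-1, 1)
    side = lambda x: sum(wi * xi for wi, xi in zip(w, x)) + b > 0
    left = [(p, y) for p, y in zip(points, labels) if not side(p)]
    right = [(p, y) for p, y in zip(points, labels) if side(p)]
    subs = {}
    for key, part in (("L", left), ("R", right)):
        part = part or list(zip(points, labels))  # fall back if side empty
        subs[key] = centroid_classifier([p for p, _ in part],
                                        [y for _, y in part])
    return lambda x: subs["R"](x) if side(x) else subs["L"](x)

# Toy 2-D data: class 0 near (0, 0), class 1 near (3, 3).
points = [(0, 0), (0.2, 0.1), (0.1, 0.3), (3, 3), (2.9, 3.1), (3.2, 2.8)]
labels = [0, 0, 0, 1, 1, 1]

members = [train_rlo_member(points, labels) for _ in range(9)]

def ensemble_predict(x):
    votes = [m(x) for m in members]
    return max(set(votes), key=votes.count)  # majority vote (fusion step)

print(ensemble_predict((0.1, 0.2)), ensemble_predict((3.0, 3.0)))
```

Because each member splits the training data with its own random hyperplane, the sub-classifiers see different data subsets, which is the source of the extra diversity the paper attributes to the RLO scheme.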