16 research outputs found

    A Multiple Cascade-Classifier System for a Robust and Partially Unsupervised Updating of Land-Cover Maps

    A system for the regular updating of land-cover maps is proposed, based on the use of multitemporal remote-sensing images. The system addresses the updating problem under the realistic but critical constraint that no ground-truth information is available for the image to be classified (i.e., the most recent of the considered multitemporal data set). The system is composed of an ensemble of partially unsupervised classifiers integrated in a multiple-classifier architecture. Each classifier of the ensemble exhibits the following novel peculiarities: i) it is developed in the framework of the cascade-classification approach, to exploit the temporal correlation between images acquired at different times over the considered area; ii) it is based on a partially unsupervised methodology capable of accomplishing the classification process under the aforementioned constraint. Both a parametric maximum-likelihood classification approach and a non-parametric radial basis function (RBF) neural-network classification approach are used as basic methods for developing partially unsupervised cascade classifiers. In addition, to generate an effective ensemble of classification algorithms, hybrid maximum-likelihood and RBF neural-network cascade classifiers are defined by exploiting the peculiarities of the cascade-classification methodology. The results yielded by the different classifiers are combined using standard unsupervised combination strategies. This allows the definition of a robust and accurate partially unsupervised classification system capable of analyzing a wide typology of remote-sensing data (e.g., images acquired by passive sensors, SAR images, multisensor and multisource data). Experimental results obtained on a real multitemporal and multisource data set confirm the effectiveness of the proposed system.
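    The "standard unsupervised combination strategies" mentioned in the abstract include plurality (majority) voting over the per-pixel labels produced by the ensemble members. A minimal sketch of that voting step, using made-up label maps that are not connected to the paper's actual data:

```python
import numpy as np

def majority_vote(label_maps):
    """Combine per-pixel class labels from several classifiers by majority vote.

    label_maps: list of equally shaped integer arrays, one per ensemble member.
    Returns the per-pixel label that most members agree on.
    """
    stacked = np.stack(label_maps)  # shape: (n_classifiers, ...)
    n_classes = stacked.max() + 1
    # Count votes per class along the classifier axis, then take the argmax.
    votes = np.stack([(stacked == c).sum(axis=0) for c in range(n_classes)])
    return votes.argmax(axis=0)

# Three hypothetical classifiers labelling a 2x2 land-cover map.
maps = [np.array([[0, 1], [1, 2]]),
        np.array([[0, 1], [2, 2]]),
        np.array([[0, 0], [1, 2]])]
combined = majority_vote(maps)  # each pixel gets the most-voted label
```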

    A new artificial neural network ensemble based on feature selection and class recoding

    Many studies of supervised learning have focused on the resolution of multiclass problems. A standard technique for resolving such problems is to decompose the original multiclass problem into multiple binary problems. In this paper, we propose a new learning model applicable to multiclass domains in which the examples are described by a large number of features. The proposed model is an artificial neural network ensemble in which each base learner is formed by the union of a binary classifier and a multiclass classifier. To analyze the viability and quality of this system, it is validated in two real domains: traffic-sign recognition and handwritten-digit recognition. Experimental results show that our model is at least as accurate as other methods reported in the literature, while having a considerable advantage with respect to size, computational complexity, and running time.
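    The base learners described above pair a binary classifier with a multiclass classifier. One way such a pairing can operate, assuming the classes are recoded into two groups (the gate and per-group classifiers below are hypothetical stand-ins, not the paper's actual networks):

```python
def two_stage_predict(x, binary_clf, group_clfs, groups):
    """One base learner: a binary classifier routes the example to one of two
    recoded class groups, then a per-group multiclass classifier picks the
    final class within that group. The classifier functions are stand-ins.
    """
    g = binary_clf(x)         # 0 or 1: which recoded group the example belongs to
    local = group_clfs[g](x)  # index of the predicted class within that group
    return groups[g][local]

# Toy setup: 4 original classes recoded into groups {0, 1} and {2, 3}.
groups = [[0, 1], [2, 3]]
binary_clf = lambda x: 0 if x < 0.5 else 1      # hypothetical binary gate
group_clfs = [lambda x: 0 if x < 0.25 else 1,   # disambiguates within group 0
              lambda x: 0 if x < 0.75 else 1]   # disambiguates within group 1
```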

    Ensemble diversity measures and their application to thinning

    Diversity creation methods: a survey and categorisation

    Adaptive decision making systems

    Given a population of classifiers, we consider the problem of designing highly compact and error-adaptive decision-making systems. A selection approach based on misclassification diversity and potential cooperation among classifiers is proposed. The compactness constraint allows an efficient implementation of fuzzy-integral combination rules, both in terms of the interpretability of the fuzzy measures and the low complexity of the fuzzy-integral operator. Experimental results show the feasibility of our approach. Presented at the VI Workshop de Agentes y Sistemas Inteligentes (WASI), Red de Universidades con Carreras en Informática (RedUNCI).
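    One common fuzzy-integral combination rule is the Choquet integral, which stays cheap to evaluate over a compact ensemble. A sketch assuming a user-supplied fuzzy measure `mu` over coalitions of classifiers (the measure and support values below are illustrative, not from the paper):

```python
def choquet_integral(supports, mu):
    """Choquet fuzzy-integral combination of classifier supports for one class.

    supports: dict source -> support value in [0, 1].
    mu: function mapping a frozenset of sources to its fuzzy measure in [0, 1].
    """
    # Sort sources by descending support; each "layer" between consecutive
    # support levels is weighted by the measure of the coalition above it.
    order = sorted(supports, key=supports.get, reverse=True)
    vals = [supports[s] for s in order] + [0.0]
    total = 0.0
    for i in range(len(order)):
        coalition = frozenset(order[:i + 1])
        total += (vals[i] - vals[i + 1]) * mu(coalition)
    return total

# With an additive measure the Choquet integral reduces to a weighted average.
weights = {"a": 0.5, "b": 0.3, "c": 0.2}
mu = lambda S: sum(weights[s] for s in S)
score = choquet_integral({"a": 0.9, "b": 0.6, "c": 0.3}, mu)
```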

    Incremental construction of classifier and discriminant ensembles

    We discuss approaches to incrementally constructing an ensemble. The first constructs an ensemble of classifiers by choosing a subset from a larger set, and the second constructs an ensemble of discriminants, where a classifier is used for some classes only. We investigate criteria including accuracy, significant improvement, diversity, correlation, and the role of search direction. For discriminant ensembles, we test subset selection and trees. Fusion is by voting or by a linear model. Using 14 classifiers on 38 data sets, incremental search finds small, accurate ensembles in polynomial time. The discriminant ensemble uses a subset of discriminants and is simpler, interpretable, and accurate. We see that an incremental ensemble has higher accuracy than bagging and the random subspace method, and accuracy comparable to AdaBoost but with fewer classifiers. We would like to thank the three anonymous referees and the editor for their constructive comments, pointers to related literature, and pertinent questions, which allowed us to better situate our work as well as organize the manuscript and improve the presentation. This work has been supported by the Turkish Academy of Sciences in the framework of the Young Scientist Award Program (EA-TUBA-GEBIP/2001-1-1), Bogazici University Scientific Research Project 05HA101, and Turkish Scientific Technical Research Council TUBITAK EEEAG 104EO79.
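    The incremental (forward-selection) construction described above can be sketched as a greedy loop that keeps adding the member with the best validation gain and stops when nothing improves. The `evaluate` function and coverage sets below are illustrative stand-ins for voting-based evaluation on a validation set:

```python
def forward_select_ensemble(classifiers, evaluate, max_size=5):
    """Greedy incremental ensemble construction: starting from an empty
    ensemble, repeatedly add the classifier that most improves `evaluate`
    (held-out accuracy); stop when no candidate improves it.
    """
    ensemble, best = [], 0.0
    while len(ensemble) < max_size:
        gains = [(evaluate(ensemble + [c]), c)
                 for c in classifiers if c not in ensemble]
        if not gains:
            break
        acc, pick = max(gains, key=lambda t: t[0])
        if acc <= best:  # stop at the first non-improving step
            break
        ensemble.append(pick)
        best = acc
    return ensemble, best

# Stand-in evaluation: ensemble "accuracy" is the fraction of 6 validation
# examples that at least one member classifies correctly (made-up coverage).
coverage = {"a": {0, 1, 2, 3}, "b": {2, 3, 4}, "c": {4, 5}, "d": {0, 1}}
evaluate = lambda subset: len(set().union(*(coverage[c] for c in subset))) / 6
ensemble, acc = forward_select_ensemble(list(coverage), evaluate)
```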

    Competitive Learning Neural Network Ensemble Weighted by Predicted Performance

    Ensemble approaches have been shown to enhance classification by combining the outputs from a set of voting classifiers. Diversity in error patterns among base classifiers promotes ensemble performance. Multi-task learning is an important characteristic of neural-network classifiers. Introducing a secondary output unit that receives different training signals for each base network in an ensemble can effectively promote diversity and improve ensemble performance. Here a Competitive Learning Neural Network Ensemble is proposed, in which a secondary output unit predicts the classification performance of the primary output unit in each base network. The networks compete with each other on the basis of classification performance and partition the stimulus space. The secondary units adaptively receive different training signals depending on the competition. As a result, each base network develops a "preference" over different regions of the stimulus space, as indicated by its secondary unit outputs. To form an ensemble decision, all base networks' primary unit outputs are combined and weighted according to the secondary unit outputs. The effectiveness of the proposed approach is demonstrated in experiments on one real-world and four artificial classification problems.
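    The final combination step, weighting each base network's primary outputs by its secondary unit's predicted performance, can be sketched as follows. Softmax normalisation of the secondary scores is an assumption of this sketch, and all values are illustrative:

```python
import math

def weighted_ensemble_output(primary_outputs, secondary_scores):
    """Combine base networks' primary (class-probability) outputs, weighting
    each network by its secondary unit's predicted performance on the current
    input. Weights are normalised with a softmax (an assumption here).
    """
    exps = [math.exp(s) for s in secondary_scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    n_classes = len(primary_outputs[0])
    # Weighted sum of each network's class distribution.
    return [sum(w * out[c] for w, out in zip(weights, primary_outputs))
            for c in range(n_classes)]

# Two base networks: the second predicts it performs better on this input
# (score 2 vs 0), so its class distribution dominates the combined decision.
combined = weighted_ensemble_output([[0.7, 0.3], [0.2, 0.8]], [0.0, 2.0])
```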
