
    Toward a General-Purpose Heterogeneous Ensemble for Pattern Classification

    We perform an extensive study of the performance of different classification approaches on twenty-five datasets (fourteen image datasets and eleven UCI data mining datasets). The aim is to find General-Purpose (GP) heterogeneous ensembles (requiring little to no parameter tuning) that perform competitively across multiple datasets. The state-of-the-art classifiers examined in this study include the support vector machine, Gaussian process classifiers, random subspace of AdaBoost, random subspace of rotation boosting, and deep learning classifiers. We demonstrate that a heterogeneous ensemble based on the simple fusion by sum rule of different classifiers performs consistently well across all twenty-five datasets. The most important result of our investigation is demonstrating that some very recent approaches, including the heterogeneous ensemble we propose in this paper, are capable of outperforming an SVM classifier (implemented with LibSVM), even when both kernel selection and SVM parameters are carefully tuned for each dataset.
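    To make the fusion concrete, here is a minimal sketch of the sum rule in Python, assuming already-trained base classifiers that expose a scikit-learn-style predict_proba and share the same class ordering; it illustrates the combination principle rather than the paper's exact implementation.

        import numpy as np

        def sum_rule_fusion(classifiers, X):
            # Sum the class-posterior estimates of every base classifier;
            # assumes each is trained and returns an (n_samples, n_classes)
            # array from predict_proba, with a shared class ordering.
            score_sum = sum(clf.predict_proba(X) for clf in classifiers)
            # The fused prediction is the class with the largest summed score.
            return np.argmax(score_sum, axis=1)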

    Ensemble Data Mining Methods

    Ensemble Data Mining Methods, also known as Committee Methods or Model Combiners, are machine learning methods that leverage the power of multiple models to achieve better prediction accuracy than any of the individual models could on their own. The basic goal when designing an ensemble is the same as when establishing a committee of people: each member of the committee should be as competent as possible, but the members should be complementary to one another. If the members are not complementary, i.e., if they always agree, then the committee is unnecessary: any one member is sufficient. If the members are complementary, then when one or a few members make an error, the probability is high that the remaining members can correct this error. Research in ensemble methods has largely revolved around designing ensembles consisting of competent yet complementary models.
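    The error-correcting effect of complementary members can be made concrete with a simple calculation: under the idealised assumption (not made in the text above) of independent members with equal individual accuracy, the probability that a majority vote is correct is a binomial tail sum.

        from math import comb

        def majority_correct(n_members, p):
            # Probability that a majority of n independent members, each
            # correct with probability p, gives the right answer
            # (two classes, odd n so ties cannot occur).
            k_min = n_members // 2 + 1  # smallest winning majority
            return sum(comb(n_members, k) * p**k * (1 - p)**(n_members - k)
                       for k in range(k_min, n_members + 1))

        # Eleven members at 70% individual accuracy already reach about 92%.
        print(majority_correct(11, 0.7))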

    Local learning for multi-layer, multi-component predictive system

    This study introduces a new multi-layer, multi-component ensemble. The components of this ensemble are trained locally on subsets of features for disjoint sets of data. The data instances are assigned to local regions using the pairwise squared correlations of their features as a similarity measure. Many ensemble methods encourage diversity among their base predictors by training them on different subsets of data or different subsets of features. In the proposed architecture the local regions contain disjoint sets of data, and for this data only the most similar features are selected. The pairwise squared correlations of the features are also used to weight the predictions of the ensemble's models. The proposed architecture has been tested on a number of datasets and its performance compared to five benchmark algorithms. The results showed that the testing accuracy of the developed architecture is comparable to that of rotation forest and better than the other benchmark algorithms.
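    A rough sketch of the similarity measure involved, assuming plain Pearson correlation between feature columns; the published region-assignment procedure is more involved than this.

        import numpy as np

        def squared_feature_correlations(X):
            # Pairwise squared Pearson correlations between the columns
            # (features) of X; squaring keeps only the strength of the
            # relationship and discards its sign.
            r = np.corrcoef(X, rowvar=False)
            return r ** 2

        # Example: for each feature, find its most similar companion.
        X = np.random.rand(200, 6)
        sim = squared_feature_correlations(X)
        np.fill_diagonal(sim, 0.0)      # ignore trivial self-similarity
        most_similar = sim.argmax(axis=1)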

    Double Committee AdaBoost

    In this paper we make an extensive study of different combinations of ensemble techniques for improving the performance of AdaBoost, considering the following strategies: reducing the correlation problem among the features, reducing the effect of outliers in AdaBoost training, and proposing an efficient way of selecting/weighting the weak learners. First, we show that random subspace works well coupled with several AdaBoost techniques. Second, we show that an ensemble based on training perturbation using editing methods (to reduce the importance of the outliers) further improves performance. We examine the robustness of the new approach by applying it to a number of benchmark datasets representing a range of different problems. We find that, compared with other state-of-the-art classifiers, our proposed method performs consistently well across all the tested datasets. One useful finding is that this approach obtains a performance similar to the support vector machine (SVM), using the well-known LibSVM implementation, even when both kernel selection and various parameters of SVM are carefully tuned for each dataset. The main drawback of the proposed approach is the computation time, which is high as a result of combining the different ensemble techniques. We have also tested the fusion between our selected committee of AdaBoost and SVM (again using the widely tested LibSVM tool), where the parameters of SVM are tuned for each dataset. We find that the fusion between SVM and a committee of AdaBoost (i.e., a heterogeneous ensemble) statistically outperforms this widely used SVM tool with parameters tuned for each dataset. The MATLAB code of our best approach is available at bias.csr.unibo.it/nanni/ADA.rar.
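    The linked MATLAB code is the authors' reference implementation; as an illustration only, the random-subspace-of-AdaBoost idea can be sketched with scikit-learn, where a BaggingClassifier restricted to feature sampling plays the role of the random subspace wrapper.

        from sklearn.datasets import make_classification
        from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier

        # Each AdaBoost ensemble is trained on a random half of the
        # features; the members' votes are then combined.
        X, y = make_classification(n_samples=500, n_features=40,
                                   random_state=0)
        model = BaggingClassifier(
            estimator=AdaBoostClassifier(n_estimators=50),  # "base_estimator" in scikit-learn < 1.2
            n_estimators=10,
            max_features=0.5,   # random feature subspace per member
            bootstrap=False,    # keep all samples: subspaces only, no bagging
            random_state=0,
        )
        model.fit(X, y)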

    Supervised and Semi-supervised Learning Using Informative Feature Subspaces

    Thesis (PhD) -- İstanbul Technical University, Institute of Science and Technology, 2010. In many different fields, such as web mining, bioinformatics, and speech recognition, there is an abundance of unlabeled data and different feature views. Semi-supervised learning algorithms such as Co-training aim to make use of unlabeled data. Random (feature) subspace (RAS) methods aim to use different feature subspaces to train different classifiers and combine them in an ensemble. In this thesis, we obtain informative and diverse feature subspaces for classifier ensembles by randomly drawing relevant feature subspaces. We then use these ensembles for supervised and semi-supervised learning. Our first algorithm produces relevant random subspaces using mutual-information-based relevance values. This method is used in the Rel-RAS (supervised) and Rel-RASCO (semi-supervised) algorithms. The second algorithm modifies the mRMR (Minimum Redundancy Maximum Relevance) feature selection algorithm to produce random feature subsets that are both relevant and non-redundant. This method is used in the mRMR-RAS (supervised) and mRMR-RASCO (semi-supervised) algorithms. We perform experimental analysis of our methods on a number of datasets and compare them to existing methods. We also perform theoretical analysis of the classifier ensembles produced by our methods using Kohavi-Wolpert (KW) variance, information-theoretic low order diversity (LOD), and information-theoretic scores (ITS). We find that LOD behaves similarly to KW variance and that the ensemble accuracy of the algorithms can be explained using ITS.
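    A minimal sketch of the Rel-RAS idea, assuming features are drawn with probability proportional to their mutual information with the labels; the published algorithms include further details not reproduced here.

        import numpy as np
        from sklearn.feature_selection import mutual_info_classif

        def relevant_random_subspaces(X, y, n_subspaces, subspace_size,
                                      seed=None):
            # Estimate each feature's relevance as its mutual information
            # with the labels, then draw feature subspaces with probability
            # proportional to relevance (without replacement within each).
            rng = np.random.default_rng(seed)
            relevance = mutual_info_classif(X, y)
            probs = relevance / relevance.sum()
            return [rng.choice(X.shape[1], size=subspace_size,
                               replace=False, p=probs)
                    for _ in range(n_subspaces)]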

    Designing multiple classifier combinations: a survey

    Classification accuracy can be improved through a multiple classifier approach. It has been proven that multiple classifier combinations can achieve better classification accuracy than a single classifier. There are two main problems in designing a multiple classifier combination: determining the classifier ensemble and constructing the combiner. This paper reviews approaches to constructing both the classifier ensemble and the combiner. For each approach, methods are reviewed and their advantages and disadvantages highlighted. A random strategy and majority voting are the most commonly used to construct the ensemble and the combiner, respectively. The results presented in this review are expected to serve as a road map for designing multiple classifier combinations.
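    Of the two design problems, the combiner is the simpler to illustrate; a minimal majority-voting combiner over hard label predictions might look as follows (a generic sketch, not tied to any particular method surveyed).

        import numpy as np

        def majority_vote(predictions):
            # predictions: (n_classifiers, n_samples) array of hard labels;
            # each sample receives the label predicted most often.
            predictions = np.asarray(predictions)
            combined = np.empty(predictions.shape[1], dtype=predictions.dtype)
            for i in range(predictions.shape[1]):
                labels, counts = np.unique(predictions[:, i],
                                           return_counts=True)
                combined[i] = labels[counts.argmax()]
            return combined

        # Three classifiers voting on four samples.
        votes = [[0, 1, 1, 2],
                 [0, 1, 0, 2],
                 [1, 1, 0, 0]]
        print(majority_vote(votes))   # -> [0 1 0 2]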

    Data Mining at NASA: From Theory to Applications

    This slide presentation demonstrates the data mining/machine learning capabilities of NASA Ames and the Intelligent Data Understanding (IDU) group, encompassing recent work by various group members. The IDU group develops novel algorithms to detect, classify, and predict events in large data streams for scientific and engineering systems. The presentation was prepared for Knowledge Discovery and Data Mining 2009.

    Building well-performing classifier ensembles: model and decision level combination

    There is a continuing drive for better, more robust generalisation performance from classification systems, and prediction systems in general. Ensemble methods, or the combining of multiple classifiers, have become an accepted and successful tool for achieving this, though the reasons for their success are not always entirely understood. In this thesis, we review the multiple classifier literature and consider the properties an ensemble of classifiers, or collection of subsets, should have in order to be combined successfully. We find that the framework of Stochastic Discrimination (SD) provides a well-defined account of these properties, which are shown to be strongly encouraged, via differing algorithmic devices, in a number of the most popular and successful methods in the literature. This uncovers some interesting and basic links between these methods, and aids understanding of their success and operation in terms of a kernel induced on the training data, with a form particularly well suited to classification. One property that is desirable both in the SD framework and, through the ambiguity decomposition of the error, in a regression context is the de-correlation of the individual models. This motivates the introduction of the Negative Correlation Learning (NCL) method, in which neural networks are trained in parallel in a way designed to encourage de-correlation of the individual networks. The training is controlled by a parameter λ governing the extent to which correlations are penalised. Theoretical analysis of the training dynamics yields an exact expression for the interval in which λ can be chosen while ensuring stability of the training, and a value λ∗ for which the training has some interesting optimality properties; these values depend only on the size N of the ensemble. Decision-level combination methods often result in models that are difficult to interpret, and NCL is no exception; yet some applications demand understandable decisions and interpretable models. In response, we depart from the standard decision-level combination paradigm and introduce a number of model-level combination methods. As decision trees are among the most interpretable model structures used in classification, we combine structure from multiple individual trees to build a single combined model. We show that extremely compact, well-performing models can be built in this way; in particular, a generalisation of bottom-up pruning to a multiple-tree context produces good results. Finally, we develop a classification system for a real-world churn prediction problem, illustrating some of the concepts introduced in the thesis, together with a number of more practical considerations that are important when developing a prediction system for a specific problem.
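    To give a flavour of the NCL penalty, here is a sketch of the per-member gradient signal under one common form of the method (conventions differ by constant factors, and the thesis's exact expressions for the stability interval and λ∗ are not reproduced here).

        import numpy as np

        def ncl_gradients(preds, target, lam):
            # For member i with output f_i, NCL penalises the error with
            # p_i = -(f_i - f_bar)**2, giving (with f_bar held constant,
            # the standard simplification) a gradient of the form
            #     (f_i - y) - lam * (f_i - f_bar).
            preds = np.asarray(preds, dtype=float)
            f_bar = preds.mean()
            return (preds - target) - lam * (preds - f_bar)

        # lam = 0 recovers independent training; larger lam pushes members
        # apart around the ensemble mean, encouraging de-correlation.
        print(ncl_gradients([0.8, 1.1, 1.4], target=1.0, lam=0.5))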