    Multi-Label Super Learner: Multi-Label Classification and Improving Its Performance Using Heterogenous Ensemble Methods

    Classification is the task of predicting the label(s) of future instances by learning and inferring from the patterns of instances with known labels. Traditional classification methods focus on single-label classification; however, many real-life problems require multi-label classification, which assigns each instance to multiple categories. For example, in sentiment analysis, a person may feel multiple emotions at the same time; in bioinformatics, a gene or protein may have a number of functional expressions; in text categorization, an email, medical record, or social media posting can be identified by various tags simultaneously. As a result of such a wide range of applications, multi-label classification has become an emerging research area in recent years. There are two general approaches to multi-label classification: problem transformation and algorithm adaptation. The problem transformation methodology, at its core, converts a multi-label dataset into several single-label datasets, thereby allowing the transformed datasets to be modeled using existing binary or multi-class classification methods. The algorithm adaptation methodology, on the other hand, modifies single-label classification algorithms so that they can be applied directly to the original multi-label datasets. This thesis proposes a new method, called Multi-Label Super Learner (MLSL), which is a stacking-based heterogeneous ensemble method. An improved multi-label classification algorithm following the problem transformation approach, MLSL combines the prediction power of several multi-label classification methods through an ensemble algorithm, the super learner. The performance of this new method is compared to existing problem transformation algorithms, and our numerical results show that MLSL outperforms existing algorithms on almost all of the performance metrics.
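The problem transformation approach described above can be illustrated with its simplest variant, binary relevance: split the multi-label target matrix into one binary target vector per label, so that each resulting single-label dataset can be handed to any off-the-shelf binary classifier. This is a minimal sketch of the transformation step only, not the MLSL ensemble itself; the function and data names are illustrative.

```python
import numpy as np

def binary_relevance_split(Y):
    """Transform a multi-label target matrix (n_samples x n_labels)
    into one binary target vector per label -- the 'binary relevance'
    flavour of the problem transformation approach."""
    return [Y[:, j] for j in range(Y.shape[1])]

# Toy multi-label dataset: 4 instances, 3 labels (1 = label applies).
Y = np.array([[1, 0, 1],
              [0, 1, 0],
              [1, 1, 0],
              [0, 0, 1]])

# Each element is now a single-label (binary) target that an existing
# binary classifier can be trained on independently.
per_label_targets = binary_relevance_split(Y)
```

Stacking-based methods such as MLSL then learn how to weight the predictions of several such base models, rather than relying on any single transformation.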

    KFHE-HOMER: A multi-label ensemble classification algorithm exploiting sensor fusion properties of the Kalman filter

    Multi-label classification allows a datapoint to be labelled with more than one class at the same time. In spite of their success in multi-class classification problems, ensemble methods based on approaches other than bagging have not been widely explored for multi-label classification problems. The Kalman Filter-based Heuristic Ensemble (KFHE) is a recent ensemble method that exploits the sensor fusion properties of the Kalman filter to combine several classifier models, and it has been shown to be very effective. This article proposes KFHE-HOMER, an extension of the KFHE ensemble approach to the multi-label domain. KFHE-HOMER sequentially trains multiple HOMER multi-label classifiers and aggregates their outputs using the sensor fusion properties of the Kalman filter. Experiments described in this article show that KFHE-HOMER performs consistently better than existing multi-label methods, including existing ensemble-based approaches.
    Comment: The paper is under consideration at Pattern Recognition Letters, Elsevier.
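The "sensor fusion" idea the abstract refers to is the Kalman measurement update: each classifier's output is treated as a noisy sensor reading, and readings are combined with weights proportional to their inverse variances. The sketch below shows only that generic update for a vector of label probabilities; it is not the KFHE-HOMER training procedure, and the variance values are made up for illustration.

```python
import numpy as np

def kalman_fuse(estimate, est_var, measurement, meas_var):
    """One scalar-variance Kalman measurement update: fuse the current
    estimate with a new 'sensor reading' (here, a classifier's vector of
    per-label probabilities), weighting each by inverse variance."""
    gain = est_var / (est_var + meas_var)        # trust the less noisy source more
    fused = estimate + gain * (measurement - estimate)
    fused_var = (1.0 - gain) * est_var           # uncertainty shrinks after fusion
    return fused, fused_var

# Fuse the outputs of two hypothetical classifiers over 3 labels.
p, v = np.array([0.9, 0.2, 0.6]), 0.04           # first classifier, noisier
p, v = kalman_fuse(p, v, np.array([0.7, 0.1, 0.8]), 0.01)  # second, more reliable
```

Because the gain here is est_var / (est_var + meas_var) = 0.8, the fused probabilities move most of the way toward the more reliable second classifier, which is the behaviour KFHE exploits when aggregating its sequentially trained models.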

    Multilabel Consensus Classification

    In the era of big data, a large amount of noisy and incomplete data can be collected from multiple sources for prediction tasks. Combining multiple models or data sources helps to counteract the effects of low data quality and the bias of any single model or data source, and thus can improve the robustness and the performance of predictive models. Out of privacy, storage, and bandwidth considerations, in certain circumstances one has to combine the predictions from multiple models or data sources to obtain the final predictions without accessing the raw data. Consensus-based prediction combination algorithms are effective in such situations. However, current research on prediction combination focuses on the single-label setting, where an instance has one and only one label. Data nowadays are usually multilabeled, such that more than one label has to be predicted at the same time. Direct application of existing prediction combination methods to multilabel settings can lead to degraded performance. In this paper, we address the challenges of combining predictions from multiple multilabel classifiers and propose two novel algorithms, MLCM-r (MultiLabel Consensus Maximization for ranking) and MLCM-a (MLCM for microAUC). These algorithms can capture label correlations that are common in multilabel classification, and optimize the corresponding performance metrics. Experimental results on popular multilabel classification tasks verify the theoretical analysis and the effectiveness of the proposed methods.
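The "direct application" baseline the abstract warns about can be made concrete: the naive consensus simply averages each label's scores across classifiers without accessing the raw data, treating every label independently and therefore ignoring the label correlations that MLCM-r and MLCM-a are designed to capture. The sketch below shows only that baseline; the prediction matrices are invented for illustration.

```python
import numpy as np

def consensus_average(pred_list):
    """Naive per-label consensus: average the label-score matrices
    (n_instances x n_labels) produced by several multilabel classifiers.
    Each label is combined independently, so label correlations are ignored."""
    return np.mean(np.stack(pred_list), axis=0)

# Scores from two hypothetical multilabel classifiers: 2 instances, 2 labels.
preds = [np.array([[0.9, 0.1], [0.2, 0.8]]),
         np.array([[0.7, 0.3], [0.4, 0.6]])]

combined = consensus_average(preds)
labels = (combined >= 0.5).astype(int)   # threshold scores into label assignments
```

A consensus-maximization method would instead couple the per-label decisions, so that, for example, labels that frequently co-occur in training data reinforce each other at combination time.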