Multi-Label Super Learner: Multi-Label Classification and Improving Its Performance Using Heterogenous Ensemble Methods
Classification is the task of predicting the label(s) of future instances by learning and inferring from the patterns of instances with known labels. Traditional classification methods focus on single-label classification; however, many real-life problems require multi-label classification, which assigns each instance to multiple categories. For example, in sentiment analysis, a person may feel multiple emotions at the same time; in bioinformatics, a gene or protein may have a number of functional expressions; in text categorization, an email, medical record, or social media posting can be identified by various tags simultaneously. As a result of such a wide range of applications, multi-label classification has become an emerging research area in recent years.
There are two general approaches to realize multi-label classification: problem transformation and algorithm adaption. The problem transformation methodology, at its core, converts a multi-label dataset into several single-label datasets, thereby allowing the transformed datasets to be modeled using existing binary or multi-class classification methods. On the other hand, the algorithm adaption methodology transforms single-label classification algorithms in order to be applied to original multi-label datasets.
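As an illustration of the problem transformation approach (not drawn from the thesis itself), the simplest such transformation, binary relevance, splits a multi-label dataset into one binary dataset per label; the function name and data layout below are hypothetical:

```python
# Hypothetical sketch of binary relevance, the simplest problem transformation:
# a multi-label dataset becomes one binary dataset per label, each of which can
# then be fitted with any existing single-label classifier.

def binary_relevance_split(X, Y, n_labels):
    """Transform a multi-label dataset (X, Y) into n_labels binary datasets.

    X: list of feature vectors; Y: list of label sets (e.g. {0, 2}).
    Returns a list of (X, y_j) pairs, one per label index j.
    """
    datasets = []
    for j in range(n_labels):
        # 1 if instance carries label j, else 0
        y_j = [1 if j in labels else 0 for labels in Y]
        datasets.append((X, y_j))
    return datasets

X = [[1.0, 2.0], [0.5, 0.1], [3.0, 1.0]]
Y = [{0, 2}, {1}, {0}]
splits = binary_relevance_split(X, Y, 3)
# splits[0][1] == [1, 0, 1]  — binary targets for label 0
```

Each of the three resulting binary problems can then be modeled independently; the cost is that correlations between labels are ignored, which is one motivation for the ensemble methods discussed in these papers.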
This thesis proposes a new method, called Multi-Label Super Learner (MLSL), which is a stacking-based heterogeneous ensemble method. An improved multi-label classification algorithm following the problem transformation approach, MLSL combines the prediction power of several multi-label classification methods through an ensemble algorithm, the super learner. The performance of this new method is compared to existing problem transformation algorithms, and our numerical results show that MLSL outperforms existing algorithms on almost all of the performance metrics.
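The stacking idea underlying a super learner can be sketched briefly: base learners' held-out predictions become inputs to a second-level combiner. In the simplified sketch below, a grid search over a single convex weight minimizing squared loss stands in for the super learner's cross-validated weighting; all names and data are hypothetical:

```python
# Hypothetical stacking sketch: combine two base models' held-out probability
# predictions with the convex weight that minimizes squared loss.  This is a
# simplified stand-in for a super learner's cross-validated weighting, not the
# MLSL algorithm itself.

def best_convex_weight(p1, p2, y, steps=100):
    """Search w in [0, 1] minimizing sum((w*p1 + (1-w)*p2 - y)^2)."""
    best_w, best_loss = 0.0, float("inf")
    for i in range(steps + 1):
        w = i / steps
        loss = sum((w * a + (1 - w) * b - t) ** 2
                   for a, b, t in zip(p1, p2, y))
        if loss < best_loss:
            best_w, best_loss = w, loss
    return best_w

# Held-out predictions of two base classifiers for one label.
p1 = [0.9, 0.2, 0.8, 0.1]   # model 1 is accurate
p2 = [0.5, 0.5, 0.5, 0.5]   # model 2 is uninformative
w = best_convex_weight(p1, p2, [1, 0, 1, 0])
# w == 1.0: the meta-level learns to trust model 1 entirely
```

A real super learner fits the combination weights on cross-validated predictions over many base learners, but the principle, letting out-of-fold performance decide how much each base model contributes, is the same.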
KFHE-HOMER: A multi-label ensemble classification algorithm exploiting sensor fusion properties of the Kalman filter
Multi-label classification allows a datapoint to be labelled with more than one class at the same time. In spite of their success in multi-class classification problems, ensemble methods based on approaches other than bagging have not been widely explored for multi-label classification problems. The Kalman Filter-based Heuristic Ensemble (KFHE) is a recent ensemble method that exploits the sensor fusion properties of the Kalman filter to combine several classifier models, and that has been shown to be very effective. This article proposes KFHE-HOMER, an extension of the KFHE ensemble approach to the multi-label domain. KFHE-HOMER sequentially trains multiple HOMER multi-label classifiers and aggregates their outputs using the sensor fusion properties of the Kalman filter. Experiments described in this article show that KFHE-HOMER performs consistently better than existing multi-label methods, including existing approaches based on ensembles.
Comment: The paper is under consideration at Pattern Recognition Letters, Elsevier
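The sensor fusion property the abstract refers to can be illustrated with the textbook scalar Kalman update: two noisy estimates are fused with inverse-variance weighting, so the more certain one dominates. This is only the generic scalar form, not the KFHE-HOMER algorithm:

```python
# Generic Kalman-style sensor fusion of two scalar estimates (z1, r1) and
# (z2, r2), where r is each estimate's variance.  KFHE-style ensembles apply
# this idea to classifier outputs; this is the textbook scalar form only.

def kalman_fuse(z1, r1, z2, r2):
    """Fuse two noisy estimates; the Kalman gain weights the more certain one."""
    k = r1 / (r1 + r2)           # Kalman gain
    z = z1 + k * (z2 - z1)       # fused estimate (inverse-variance weighting)
    r = (1 - k) * r1             # fused variance, never larger than min(r1, r2)
    return z, r

z, r = kalman_fuse(0.8, 0.04, 0.6, 0.16)
# z == 0.76, r == 0.032: the result sits nearer the low-variance estimate 0.8
```

The fused variance is always smaller than either input variance, which is why iterating such updates over a sequence of classifiers can produce a combined prediction more reliable than any single member.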
Multilabel Consensus Classification
In the era of big data, a large amount of noisy and incomplete data can be collected from multiple sources for prediction tasks. Combining multiple models or data sources helps to counteract the effects of low data quality and the bias of any single model or data source, and thus can improve the robustness and the performance of predictive models. Out of privacy, storage and bandwidth considerations, in certain circumstances one has to combine the predictions from multiple models or data sources to obtain the final predictions without accessing the raw data. Consensus-based prediction combination algorithms are effective for such situations. However, current research on prediction combination focuses on the single-label setting, where an instance can have one and only one label. Nonetheless, data nowadays are usually multilabeled, such that more than one label have to be predicted at the same time. Direct application of existing prediction combination methods to multilabel settings can lead to degenerated performance. In this paper, we address the challenges of combining predictions from multiple multilabel classifiers and propose two novel algorithms, MLCM-r (MultiLabel Consensus Maximization for ranking) and MLCM-a (MLCM for microAUC). These algorithms can capture label correlations that are common in multilabel classifications, and optimize corresponding performance metrics. Experimental results on popular multilabel classification tasks verify the theoretical analysis and effectiveness of the proposed methods.
Improving Patch-Based Convolutional Neural Networks for MRI Brain Tumor Segmentation by Leveraging Location Information.
The manual brain tumor annotation process is time-consuming and resource-intensive; therefore, an automated and accurate brain tumor segmentation tool is in great demand. In this paper, we introduce a novel method to integrate location information with state-of-the-art patch-based neural networks for brain tumor segmentation. This is motivated by the observation that lesions are not uniformly distributed across different brain parcellation regions and that a locality-sensitive segmentation is likely to obtain better segmentation accuracy. Toward this, we use an existing brain parcellation atlas in the Montreal Neurological Institute (MNI) space and map this atlas to the individual subject data. This mapped atlas in the subject data space is integrated with structural Magnetic Resonance (MR) imaging data, and patch-based neural networks, including 3D U-Net and DeepMedic, are trained to classify the different brain lesions. Multiple state-of-the-art neural networks are trained and integrated with XGBoost fusion in the proposed two-level ensemble method. The first level reduces the uncertainty of the same type of models with different seed initializations, and the second level leverages the advantages of different types of neural network models. The proposed location information fusion method improves the segmentation performance of state-of-the-art networks including 3D U-Net and DeepMedic. Our proposed ensemble also achieves better segmentation performance compared to the state-of-the-art networks in BraTS 2017 and rivals state-of-the-art networks in BraTS 2018. Detailed results are provided on the public multimodal brain tumor segmentation (BraTS) benchmarks.
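The two-level ensemble described above can be sketched schematically: level one averages probability outputs across seed variants of the same architecture, and level two combines the different architectures. In this hypothetical sketch, a fixed weighted average stands in for the paper's XGBoost fusion, and the toy two-voxel predictions are invented for illustration:

```python
# Hypothetical sketch of a two-level segmentation ensemble.  Level 1 averages
# per-voxel probabilities across seed runs of one architecture (reducing
# initialization variance); level 2 combines different architectures.  A fixed
# weighted average replaces the XGBoost fusion used in the actual paper.

def level1_average(seed_predictions):
    """Average per-voxel probabilities across seed runs of one architecture."""
    n = len(seed_predictions)
    return [sum(p[i] for p in seed_predictions) / n
            for i in range(len(seed_predictions[0]))]

def level2_fuse(arch_predictions, weights):
    """Weighted combination of the level-1 outputs of each architecture."""
    total = sum(weights)
    return [sum(w * p[i] for w, p in zip(weights, arch_predictions)) / total
            for i in range(len(arch_predictions[0]))]

# Toy per-voxel tumor probabilities (two voxels, two seeds per architecture).
unet_seeds = [[0.9, 0.1], [0.7, 0.3]]        # two 3D U-Net seed runs
deepmedic_seeds = [[0.6, 0.2], [0.8, 0.4]]   # two DeepMedic seed runs
unet = level1_average(unet_seeds)            # level 1: same-type averaging
deepmedic = level1_average(deepmedic_seeds)
fused = level2_fuse([unet, deepmedic], weights=[0.5, 0.5])  # level 2
```

Replacing the equal weights with a learned combiner such as XGBoost, as the paper does, lets the second level exploit situations where one architecture is systematically more reliable for certain lesion types or regions.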