Multilabel Classification with R Package mlr
We implemented several multilabel classification algorithms in the machine
learning package mlr. The implemented methods are binary relevance, classifier
chains, nested stacking, dependent binary relevance and stacking, which can be
used with any base learner that is accessible in mlr. Moreover, the multilabel
classification versions of randomForestSRC and rFerns are accessible. All of
these methods can be compared using the multilabel performance measures and
resampling methods implemented in the standardized mlr framework.
In a benchmark experiment with several multilabel datasets, the performance of
the different methods is evaluated.
Comment: 18 pages, 2 figures, to be published in R Journal; reference corrected
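The two simplest methods the abstract names can be illustrated concretely. This is a hedged sketch only: the paper's implementations live in the R package mlr, while the snippet below uses scikit-learn's Python analogues (`MultiOutputClassifier` for binary relevance, `ClassifierChain` for classifier chains) on synthetic data, not the mlr API.

```python
# Sketch of binary relevance vs. classifier chains using scikit-learn,
# as a Python stand-in for the R/mlr methods described in the abstract.
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier, ClassifierChain
from sklearn.model_selection import train_test_split
from sklearn.metrics import hamming_loss

# Synthetic multilabel data: each row has a subset of 4 labels.
X, Y = make_multilabel_classification(n_samples=200, n_features=10,
                                      n_classes=4, random_state=0)
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)

# Binary relevance: one independent binary classifier per label.
br = MultiOutputClassifier(LogisticRegression(max_iter=1000)).fit(X_tr, Y_tr)

# Classifier chain: each label's model also receives the predictions
# for the labels earlier in the chain, capturing label dependence.
cc = ClassifierChain(LogisticRegression(max_iter=1000),
                     random_state=0).fit(X_tr, Y_tr)

print("BR Hamming loss:", hamming_loss(Y_te, br.predict(X_te)))
print("CC Hamming loss:", hamming_loss(Y_te, cc.predict(X_te)))
```

Both wrappers accept any base learner, mirroring how mlr lets these methods wrap any of its base learners.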
Multi-Target Prediction: A Unifying View on Problems and Methods
Multi-target prediction (MTP) is concerned with the simultaneous prediction
of multiple target variables of diverse type. Due to its enormous application
potential, it has developed into an active and rapidly expanding research field
that combines several subfields of machine learning, including multivariate
regression, multi-label classification, multi-task learning, dyadic prediction,
zero-shot learning, network inference, and matrix completion. In this paper, we
present a unifying view on MTP problems and methods. First, we formally discuss
commonalities and differences between existing MTP problems. To this end, we
introduce a general framework that covers the above subfields as special cases.
As a second contribution, we provide a structured overview of MTP methods. This
is accomplished by identifying a number of key properties, which distinguish
such methods and determine their suitability for different types of problems.
Finally, we also discuss a few challenges for future research.
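The common shape of an MTP problem can be made concrete with its simplest special case, multivariate regression: one feature matrix, several target columns predicted simultaneously. This is a minimal illustration under that framing (synthetic data, scikit-learn's `MultiOutputRegressor` with one Ridge model per target), not a method from the paper.

```python
# Minimal multi-target prediction instance: multivariate regression,
# fitting one Ridge regressor per real-valued target column.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                 # n = 100 instances, d = 5 features
W = rng.normal(size=(5, 3))
Y = X @ W + 0.1 * rng.normal(size=(100, 3))   # m = 3 targets per instance

model = MultiOutputRegressor(Ridge()).fit(X, Y)
pred = model.predict(X)                       # one prediction per target per instance
```

Multi-label classification, multi-task learning, and the other subfields listed above fit the same template by changing the type of the targets and which (instance, target) pairs are observed.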
Collaboration based Multi-Label Learning
It is well-known that exploiting label correlations is crucially important to
multi-label learning. Most of the existing approaches take label correlations
as prior knowledge, which may not correctly characterize the real relationships
among labels. Besides, label correlations are normally used to regularize the
hypothesis space, while the final predictions are not explicitly correlated. In
this paper, we suggest that for each individual label, the final prediction
involves the collaboration between its own prediction and the predictions of
other labels. Based on this assumption, we first propose a novel method to
learn the label correlations via sparse reconstruction in the label space.
Then, by seamlessly integrating the learned label correlations into model
training, we propose a novel multi-label learning approach that aims to
explicitly account for the correlated predictions of labels while training the
desired model simultaneously. Extensive experimental results show that our
approach outperforms state-of-the-art counterparts.
Comment: Accepted by AAAI-1
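The sparse-reconstruction step can be sketched roughly as follows. This is an illustration of the idea only, not the authors' implementation: each column of the training label matrix is regressed on the other columns with a Lasso, and the sparse coefficients act as learned label correlations; the `alpha` value and the blending weights at the end are arbitrary assumptions.

```python
# Sketch: learn label correlations by sparsely reconstructing each label
# from the others, then blend each label's own score with the others'.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, q = 200, 5
Y = (rng.random((n, q)) < 0.4).astype(float)  # synthetic binary label matrix
Y[:, 1] = Y[:, 0]                             # inject a strong 0<->1 correlation

# S[j, k]: learned contribution of label k to reconstructing label j
# (diagonal stays zero: a label never reconstructs itself).
S = np.zeros((q, q))
for j in range(q):
    others = np.delete(np.arange(q), j)
    lasso = Lasso(alpha=0.01).fit(Y[:, others], Y[:, j])
    S[j, others] = lasso.coef_

# A collaborative prediction then combines each label's own score with the
# S-weighted scores of the other labels (0.7/0.3 split is a made-up choice).
own_scores = rng.random((10, q))
blended = 0.7 * own_scores + 0.3 * own_scores @ S.T
```

In the paper this learned correlation structure is integrated into model training rather than applied as a post-hoc blend; the snippet only shows the reconstruction-in-label-space idea.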