A study of hierarchical and flat classification of proteins
Automatic classification of proteins using machine learning is an important problem that has received significant attention in the literature. One feature of this problem is that expert-defined hierarchies of protein classes exist and can potentially be exploited to improve classification performance. In this article we investigate empirically whether this is the case for two such hierarchies. We compare multi-class classification techniques that exploit the information in those class hierarchies and those that do not, using logistic regression, decision trees, bagged decision trees, and support vector machines as the underlying base learners. In particular, we compare hierarchical and flat variants of ensembles of nested dichotomies. The latter have been shown to deliver strong classification performance in multi-class settings. We present experimental results for synthetic, fold recognition, enzyme classification, and remote homology detection data. Our results show that exploiting the class hierarchy improves performance on the synthetic data, but not in the case of the protein classification problems. Based on this, we recommend that strong flat multi-class methods be used as a baseline to establish the benefit of exploiting class hierarchies in this area.
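The abstract above hinges on nested dichotomies: a class set is recursively split into two subsets, a binary classifier is trained at each internal node, and class probabilities are obtained by multiplying branch probabilities down the tree. The sketch below is purely illustrative, not the paper's setup: all names are invented, and a toy nearest-centroid classifier stands in for the logistic regression, decision tree, and SVM base learners the paper actually uses.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class CentroidBinary:
    """Tiny stand-in for a base learner: nearest-centroid classification
    with a sigmoid of the distance margin as P(class 1). Illustrative only."""
    def fit(self, X, y):
        self.c0 = X[y == 0].mean(axis=0)
        self.c1 = X[y == 1].mean(axis=0)
        return self

    def prob1(self, x):
        d0 = np.linalg.norm(x - self.c0)
        d1 = np.linalg.norm(x - self.c1)
        return sigmoid(d0 - d1)  # closer to c1 => higher P(class 1)

def build_nd(X, y, classes):
    """Recursively build one random (flat) nested dichotomy over `classes`."""
    if len(classes) == 1:
        return classes[0]  # leaf: a single class label
    perm = list(rng.permutation(classes))
    left = perm[: len(perm) // 2]
    is_left = np.isin(y, left)
    clf = CentroidBinary().fit(X, is_left.astype(int))
    return (clf,
            build_nd(X[is_left], y[is_left], left),
            build_nd(X[~is_left], y[~is_left], perm[len(perm) // 2:]))

def nd_probs(node, x, p=1.0, out=None):
    """Multiply branch probabilities down the tree to get class probabilities."""
    out = {} if out is None else out
    if not isinstance(node, tuple):
        out[node] = out.get(node, 0.0) + p
        return out
    clf, left, right = node
    pl = clf.prob1(x)
    nd_probs(left, x, p * pl, out)
    nd_probs(right, x, p * (1 - pl), out)
    return out

# Three well-separated Gaussian classes as toy data:
X = np.vstack([rng.normal(m, 0.3, size=(30, 2)) for m in ([0, 0], [5, 0], [0, 5])])
y = np.repeat([0, 1, 2], 30)
tree = build_nd(X, y, [0, 1, 2])
probs = nd_probs(tree, np.array([5.0, 0.0]))
print(max(probs, key=probs.get))  # 1: the class whose cluster contains the point
```

An ensemble of nested dichotomies, as studied in the paper, would build several such trees with different random splits and average their class probabilities; the hierarchical variants constrain the splits to follow the expert-defined class hierarchy instead of choosing them at random.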
A unifying view for performance measures in multi-class prediction
In the last few years, many different performance measures have been introduced to overcome the weakness of the most natural metric, accuracy. Among them, the Matthews Correlation Coefficient has recently gained popularity among researchers not only in machine learning but also in several application fields such as bioinformatics. Nonetheless, further novel functions are being proposed in the literature. We show that Confusion Entropy, a recently introduced classifier performance measure for multi-class problems, has a strong (monotone) relation with the multi-class generalization of a classical metric, the Matthews Correlation Coefficient. Computational evidence in support of this claim is provided, together with an outline of the theoretical explanation.
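The multi-class generalization of MCC referred to above is Gorodkin's R_K statistic, computed directly from a confusion matrix. A minimal sketch (the function name is an invention for illustration):

```python
import numpy as np

def multiclass_mcc(C):
    """Multi-class Matthews Correlation Coefficient (Gorodkin's R_K)
    from a K x K confusion matrix C, where C[i, j] counts samples of
    true class i predicted as class j."""
    C = np.asarray(C, dtype=float)
    t = C.sum(axis=1)   # per-class true counts (row sums)
    p = C.sum(axis=0)   # per-class predicted counts (column sums)
    c = np.trace(C)     # correctly classified samples
    s = C.sum()         # total samples
    num = c * s - t @ p
    den = np.sqrt((s**2 - p @ p) * (s**2 - t @ t))
    return num / den if den else 0.0  # convention: 0 when undefined

print(multiclass_mcc([[5, 0], [0, 7]]))  # 1.0 for a perfect classifier
print(multiclass_mcc([[0, 5], [7, 0]]))  # -1.0 for a fully inverted one
```

For K = 2 this reduces to the classical binary MCC, which is what makes it a natural axis against which to compare measures such as Confusion Entropy.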
Open-Category Classification by Adversarial Sample Generation
In real-world classification tasks, it is difficult to collect training samples from all possible categories of the environment. Therefore, when an instance of an unseen class appears at the prediction stage, a robust classifier should be able to tell that it is from an unseen class, instead of classifying it as any known category. In this paper, adopting the idea of adversarial learning, we propose the ASG framework for open-category classification. ASG generates positive and negative samples of seen categories in an unsupervised manner via an adversarial learning strategy. With the generated samples, ASG then learns to tell seen from unseen in a supervised manner. Experiments performed on several datasets show the effectiveness of ASG.
Comment: Published in IJCAI 201
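The abstract describes ASG only at a high level, so the sketch below does not implement its adversarial sample generation. It illustrates the open-category setting itself with a much simpler, commonly used reference point: reject an input as "unseen" when it falls too far from every seen class. All names and data are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: two "seen" Gaussian classes and one far-away "unseen" class.
seen = np.vstack([rng.normal([0, 0], 0.3, (50, 2)),
                  rng.normal([4, 0], 0.3, (50, 2))])
labels = np.repeat([0, 1], 50)
unseen = rng.normal([2, 6], 0.3, (20, 2))

# One centroid per seen class.
centroids = np.stack([seen[labels == k].mean(axis=0) for k in (0, 1)])

def classify_open(x, tau=1.5):
    """Return the nearest seen class, or -1 ('unseen') when x lies farther
    than tau from every seen-class centroid."""
    d = np.linalg.norm(centroids - x, axis=1)
    return int(d.argmin()) if d.min() < tau else -1

print(classify_open(np.array([4.1, 0.0])))  # 1  (accepted as a seen class)
print(classify_open(np.array([2.0, 6.0])))  # -1 (rejected as unseen)
```

ASG's contribution is to learn this seen-versus-unseen boundary from adversarially generated samples rather than from a hand-set distance threshold.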