A computationally and cognitively plausible model of supervised and unsupervised learning
Author version made available in accordance with the publisher's policy. "The final publication is available at link.springer.com."
The issue of chance correction has been discussed for many decades in the context of statistics, psychology, and machine learning, with multiple measures shown to have desirable properties, including various definitions of Kappa or Correlation and the psychologically validated ΔP measures. In this paper, we discuss the relationships between these measures, showing that they form part of a single family of measures, and that using an appropriate measure can positively impact learning.
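As a rough illustration of the chance-corrected measures named in the abstract (not the paper's own code), the following sketch computes Cohen's kappa and the two directional ΔP quantities from a 2×2 confusion matrix, using the standard textbook formulas; the function name and example counts are hypothetical.

```python
# Illustrative sketch: Cohen's kappa and the two directional deltaP
# measures for a 2x2 confusion matrix [[tp, fn], [fp, tn]].

def kappa_and_deltap(tp, fn, fp, tn):
    n = tp + fn + fp + tn
    po = (tp + tn) / n                                              # observed agreement
    pe = ((tp + fn) * (tp + fp) + (fp + tn) * (fn + tn)) / (n * n)  # chance agreement
    kappa = (po - pe) / (1 - pe)
    # deltaP in one direction: P(pred+ | real+) - P(pred+ | real-)
    dp = tp / (tp + fn) - fp / (fp + tn)
    # deltaP in the other direction: P(real+ | pred+) - P(real+ | pred-)
    dp_rev = tp / (tp + fp) - fn / (fn + tn)
    return kappa, dp, dp_rev

k, dp, dp_rev = kappa_and_deltap(tp=40, fn=10, fp=5, tn=45)
```

For this example the chance-corrected agreement (kappa = 0.7) coincides with one of the directional ΔP values, hinting at the family relationship the paper investigates.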
Unbiased taxonomic annotation of metagenomic samples
The classification of reads from a metagenomic sample using a reference taxonomy is usually based on first mapping the reads to the reference sequences and then classifying each read at a node under the lowest common ancestor of the candidate sequences in the reference taxonomy with the least classification error. However, this taxonomic annotation can be biased by an imbalanced taxonomy and also by the presence of multiple nodes in the taxonomy with the least classification error for a given read. In this article, we show that the Rand index is a better indicator of classification error than the often-used area under the receiver operating characteristic (ROC) curve and F-measure for both balanced and imbalanced reference taxonomies. We also address the second source of bias by reducing the taxonomic annotation problem for a whole metagenomic sample to a set cover problem, for which a logarithmic approximation can be obtained in linear time and an exact solution can be obtained by integer linear programming. Experimental results with a proof-of-concept implementation of the set cover approach to taxonomic annotation in a next release of the TANGO software show that the set cover approach further reduces ambiguity in the taxonomic annotation obtained with TANGO without distorting the relative abundance profile of the metagenomic sample.
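The logarithmic approximation the abstract mentions is the guarantee of the classic greedy set cover algorithm. The sketch below is a simple version of that greedy scheme (not the TANGO implementation, and not the linear-time bookkeeping variant); node and read names are hypothetical. Each candidate taxonomy node "covers" the set of reads it could annotate.

```python
# Illustrative sketch: greedy set cover, which achieves the
# logarithmic-factor approximation referred to in the abstract.

def greedy_set_cover(universe, candidates):
    """candidates: dict mapping a taxonomy node to the set of reads it covers."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Pick the node covering the most still-uncovered reads.
        best = max(candidates, key=lambda node: len(candidates[node] & uncovered))
        gained = candidates[best] & uncovered
        if not gained:
            break  # remaining reads cannot be covered by any candidate
        chosen.append(best)
        uncovered -= gained
    return chosen

reads = {"r1", "r2", "r3", "r4", "r5"}
nodes = {
    "taxonA": {"r1", "r2", "r3"},
    "taxonB": {"r3", "r4"},
    "taxonC": {"r4", "r5"},
}
cover = greedy_set_cover(reads, nodes)  # e.g. ["taxonA", "taxonC"]
```

Choosing the node with the largest marginal gain at each step yields an H(n)-approximation; an exact minimum cover, as the abstract notes, can instead be obtained by integer linear programming.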
Rough diamonds in natural language learning
Machine Learning of Natural Language provides a rich environment for exploring supervised and unsupervised learning techniques, including soft clustering and rough sets. This keynote presentation will trace the course of our Natural Language Learning research as well as some quite intriguing spin-off applications. The focus of the paper will be learning, by both human and computer, reinterpreting our work of the last 30 years [1-12,20-24] in terms of recent developments in Rough Sets.