Validation of Soft Classification Models using Partial Class Memberships: An Extended Concept of Sensitivity & Co. applied to the Grading of Astrocytoma Tissues
We use partial class memberships in soft classification to model uncertain
labelling and mixtures of classes. Partial class memberships are not restricted
to predictions, but may also occur in reference labels (ground truth, gold
standard diagnosis) for training and validation data.
Classifier performance is usually expressed as fractions of the confusion
matrix, such as sensitivity, specificity, negative and positive predictive
values. We extend this concept to soft classification and discuss the bias and
variance properties of the extended performance measures. Ambiguity in
reference labels translates to differences between best-case, expected and
worst-case performance. We present a second set of measures comparing expected
and ideal performance, namely the root mean squared error (RMSE) and the mean
absolute error (MAE), which are closely related to regression performance.
All calculations apply to classical crisp classification as well as to soft
classification (partial class memberships and/or one-class classifiers). The
proposed performance measures make it possible to test classifiers on actual
borderline cases. In addition, hardening of, e.g., posterior probabilities into
class labels is not necessary, which avoids the corresponding information loss
and increase in variance.
We implement the proposed performance measures in the R package
"softclassval", which is available from CRAN and at
http://softclassval.r-forge.r-project.org.
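As a rough illustration of the idea (not softclassval's actual API), the
following minimal Python sketch computes a soft sensitivity for one class,
assuming the product of reference and predicted memberships as the agreement
operator, together with RMSE and MAE between memberships; the function names,
operator choice, and example data are assumptions for illustration only.

import numpy as np

def soft_sensitivity(ref, pred):
    """Soft analogue of sensitivity for one class: the fraction of the
    reference membership recovered by the prediction, here using the
    product of memberships as the agreement (AND) operator."""
    ref, pred = np.asarray(ref, float), np.asarray(pred, float)
    return np.sum(ref * pred) / np.sum(ref)

def membership_errors(ref, pred):
    """RMSE and MAE between predicted and reference class memberships."""
    diff = np.asarray(pred, float) - np.asarray(ref, float)
    return np.sqrt(np.mean(diff ** 2)), np.mean(np.abs(diff))

# Crisp labels are the special case where every membership is 0 or 1;
# the third sample below is a borderline case with membership 0.5.
ref  = [1.0, 1.0, 0.5, 0.0]
pred = [0.9, 0.8, 0.6, 0.1]
print(soft_sensitivity(ref, pred))   # 0.8
print(membership_errors(ref, pred))  # (RMSE, MAE)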
Our reasoning, as well as the importance of partial memberships for
chemometric classification, is illustrated by a real-world application:
astrocytoma brain tumor tissue grading (80 patients, 37,000 spectra) for
finding surgical excision borders. As borderline cases are the actual target
of the analytical technique, samples diagnosed as borderline cases must be
included in the validation.
Comment: The manuscript is accepted for publication in Chemometrics and
Intelligent Laboratory Systems. Supplementary figures and tables are at the
end of the PDF.
On Machine-Learned Classification of Variable Stars with Sparse and Noisy Time-Series Data
With the coming data deluge from synoptic surveys, there is a growing need
for frameworks that can quickly and automatically produce calibrated
classification probabilities for newly observed variables based on a small
number of time-series measurements. In this paper, we introduce a methodology
for variable-star classification, drawing from modern machine-learning
techniques. We describe how to homogenize the information gleaned from light
curves by selection and computation of real-numbered metrics ("features"),
detail methods to robustly estimate periodic light-curve features, introduce
tree-ensemble methods for accurate variable star classification, and show how
to rigorously evaluate the classification results using cross validation. On a
25-class data set of 1542 well-studied variable stars, we achieve a 22.8%
overall classification error using the random forest classifier; this
represents a 24% improvement over the best previous classifier on these data.
This methodology is effective for identifying samples of specific science
classes: for pulsational variables used in Milky Way tomography we obtain a
discovery efficiency of 98.2% and for eclipsing systems we find an efficiency
of 99.1%, both at 95% purity. We show that the random forest (RF) classifier is
superior to other machine-learned methods in terms of accuracy, speed, and
relative immunity to features with no useful class information; the RF
classifier can also be used to estimate the importance of each feature in
classification. Additionally, we present the first astronomical use of
hierarchical classification methods to incorporate a known class taxonomy in
the classifier, which further reduces the catastrophic error rate to 7.8%.
Excluding low-amplitude sources, our overall error rate improves to 14%, with a
catastrophic error rate of 3.5%.
Comment: 23 pages, 9 figures.
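As an illustrative sketch only (not the paper's pipeline, features, or data),
random-forest classification with cross-validated error estimation could be
set up as follows in Python with scikit-learn; the feature matrix, class
labels, and parameter values below are placeholders.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Placeholder feature matrix: one row per star, columns standing in for
# period, amplitude, colour indices, etc.; y holds known variability classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = rng.integers(0, 5, size=500)

# Random forests tolerate many weakly informative features and provide
# per-feature importances as a by-product.
clf = RandomForestClassifier(n_estimators=500, random_state=0)
scores = cross_val_score(clf, X, y, cv=10)   # 10-fold cross validation
print("CV accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))

clf.fit(X, y)
print("feature importances:", clf.feature_importances_)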
A Winnow-Based Approach to Context-Sensitive Spelling Correction
A large class of machine-learning problems in natural language requires the
characterization of linguistic context. Two characteristic properties of such
problems are that their feature space is of very high dimensionality, and their
target concepts refer to only a small subset of the features in the space.
Under such conditions, multiplicative weight-update algorithms such as Winnow
have been shown to have exceptionally good theoretical properties. We present
an algorithm combining variants of Winnow and weighted-majority voting, and
apply it to a problem in the aforementioned class: context-sensitive spelling
correction. This is the task of fixing spelling errors that happen to result in
valid words, such as substituting "to" for "too", "casual" for "causal", etc.
We evaluate our algorithm, WinSpell, by comparing it against BaySpell, a
statistics-based method representing the state of the art for this task. We
find: (1) When run with a full (unpruned) set of features, WinSpell achieves
accuracies significantly higher than BaySpell was able to achieve in either the
pruned or unpruned condition; (2) When compared with other systems in the
literature, WinSpell exhibits the highest performance; (3) The primary reason
that WinSpell outperforms BaySpell is that WinSpell learns a better linear
separator; (4) When run on a test set drawn from a different corpus than the
training set was drawn from, WinSpell is better able than BaySpell to adapt,
using a strategy we will present that combines supervised learning on the
training set with unsupervised learning on the (noisy) test set.
Comment: To appear in Machine Learning, Special Issue on Natural Language
Learning, 1999. 25 pages.
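As a minimal sketch of the multiplicative weight-update idea behind Winnow
(not the WinSpell system itself; the feature representation, parameters, and
toy task below are assumptions), a basic Winnow learner over binary features
might look like this:

import numpy as np

class Winnow:
    """Basic Winnow: multiplicative weight updates over binary features.
    Predicts positive when the weighted sum of active features reaches a
    threshold; mistakes trigger promotion or demotion of active weights."""

    def __init__(self, n_features, alpha=2.0):
        self.w = np.ones(n_features)       # all weights start at 1
        self.alpha = alpha                 # promotion/demotion factor
        self.theta = n_features / 2.0      # decision threshold

    def predict(self, x):                  # x: binary feature vector
        return int(np.dot(self.w, x) >= self.theta)

    def update(self, x, y):
        yhat = self.predict(x)
        if yhat == y:
            return
        # Promote active weights on a missed positive, demote on a false positive.
        factor = self.alpha if y == 1 else 1.0 / self.alpha
        self.w[x == 1] *= factor

# Toy usage: learn a disjunction of the first two of twenty binary features.
rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(200, 20))
y = ((X[:, 0] | X[:, 1]) > 0).astype(int)
learner = Winnow(n_features=20)
for xi, yi in zip(X, y):
    learner.update(xi, yi)
print("training accuracy:",
      np.mean([learner.predict(xi) == yi for xi, yi in zip(X, y)]))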