Stabilized Nearest Neighbor Classifier and Its Statistical Properties
The stability of statistical analysis is an important indicator of
reproducibility, a central principle of the scientific method. It entails
that similar statistical conclusions can be reached based on independent
samples from the same underlying population. In this paper, we introduce a
general measure of classification instability (CIS) to quantify the sampling
variability of the prediction made by a classification method. Interestingly,
the asymptotic CIS of any weighted nearest neighbor classifier turns out to be
proportional to the Euclidean norm of its weight vector. Based on this concise
form, we propose a stabilized nearest neighbor (SNN) classifier, which
distinguishes itself from other nearest neighbor classifiers by taking
stability into consideration. In theory, we prove that SNN attains the minimax
optimal convergence rate in risk, and a sharp convergence rate in CIS. The
latter rate result is established for general plug-in classifiers under a
low-noise condition. Extensive simulated and real examples demonstrate that SNN
achieves a considerable improvement in CIS over existing nearest neighbor
classifiers, with comparable classification accuracy. We implement the
algorithm in a publicly available R package snn.
Comment: 48 pages, 11 figures. To appear in JASA--T&
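The link between a weighted nearest neighbor rule and its sampling variability can be pictured with a small sketch. The helper names (`weighted_knn_predict`, `estimate_cis`) are hypothetical, and the disagreement rate below is only a naive empirical stand-in for the paper's CIS measure; it assumes binary labels in {0, 1}.

```python
import numpy as np

def weighted_knn_predict(X_train, y_train, X_test, weights):
    # weights[i] is the vote weight of the i-th nearest neighbor; the
    # abstract's result ties the asymptotic CIS to the Euclidean norm
    # of this weight vector.
    k = len(weights)
    preds = []
    for x in X_test:
        dists = np.linalg.norm(X_train - x, axis=1)
        nn = np.argsort(dists)[:k]
        # weighted vote between classes 0 and 1
        score = np.sum(weights * (y_train[nn] == 1))
        preds.append(int(score > 0.5 * weights.sum()))
    return np.array(preds)

def estimate_cis(X1, y1, X2, y2, X_eval, weights):
    # Empirical instability: disagreement rate between classifiers
    # trained on two independent samples from the same population.
    p1 = weighted_knn_predict(X1, y1, X_eval, weights)
    p2 = weighted_knn_predict(X2, y2, X_eval, weights)
    return np.mean(p1 != p2)
```

Shrinking the norm of `weights` (e.g., spreading weight over more neighbors) is the intuition behind why SNN can reduce instability without hurting accuracy.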
On Reject and Refine Options in Multicategory Classification
In many real applications of statistical learning, a decision made from
misclassification can be too costly to afford; in this case, a reject option,
which defers the decision until further investigation is conducted, is often
preferred. In recent years, there has been much development for binary
classification with a reject option. Yet, little progress has been made for the
multicategory case. In this article, we propose margin-based multicategory
classification methods with a reject option. In addition, and more importantly,
we introduce a new and unique refine option for the multicategory problem,
where the class of an observation is predicted to be from a set of class
labels, whose cardinality is not necessarily one. The main advantage of both
options lies in their capacity to identify error-prone observations.
Moreover, the refine option can provide more constructive information for
classification by effectively ruling out implausible classes. Efficient
implementations have been developed for the proposed methods. On the
theoretical side, we offer a novel statistical learning theory and show a fast
convergence rate of the excess risk of our methods, with emphasis on
diverging dimensionality and number of classes. The results can be further
improved under a low noise assumption. A set of comprehensive simulation and
real data studies has shown the usefulness of the new learning tools compared
to regular multicategory classifiers. Detailed proofs of theorems and extended
numerical results are included in the supplemental materials available online.
Comment: A revised version of this paper was accepted for publication in the
Journal of the American Statistical Association Theory and Methods Section.
52 pages, 6 figures
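A simple way to picture a set-valued output with reject and refine behavior is a score-threshold rule: report every class whose score is close to the best one. This is only an illustrative sketch, not the paper's margin-based formulation; the name `refine_predict` and the `threshold` parameter are assumptions.

```python
import numpy as np

def refine_predict(scores, threshold):
    # scores: per-class scores (e.g., margins) for one observation.
    # Return the set of labels whose score is within `threshold` of the
    # best. A singleton is an ordinary prediction, a larger set is a
    # "refine" output ruling out implausible classes, and the full label
    # set amounts to a reject (defer the decision entirely).
    best = scores.max()
    return {j for j, s in enumerate(scores) if s >= best - threshold}
```

The threshold controls the trade-off the abstract describes: a looser threshold defers more often on error-prone observations, while a tight one reduces to a regular multicategory classifier.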
Significance Analysis for Pairwise Variable Selection in Classification
The goal of this article is to select important variables that can
distinguish one class of data from another. A marginal variable selection
method ranks the marginal effects for classification of individual variables,
and is a useful and efficient approach for variable selection. Our focus here
is to consider the bivariate effect, in addition to the marginal effect. In
particular, we are interested in those pairs of variables that can lead to
accurate classification predictions when they are viewed jointly. To accomplish
this, we propose a permutation test called Significance test of Joint Effect
(SigJEff). In the absence of joint effects in the data, SigJEff is similar or
equivalent to many marginal methods. However, when joint effects exist, our
method can significantly boost the performance of variable selection. Such
joint effects can help to provide additional, and sometimes dominating,
advantage for classification. We illustrate and validate our approach using
both simulated example and a real glioblastoma multiforme data set, which
provide promising results.Comment: 28 pages, 7 figure
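A permutation test for the joint effect of a variable pair can be sketched as follows. The nearest-centroid statistic and the function names below are stand-ins chosen for brevity, since the abstract does not fix SigJEff's exact test statistic; the sketch assumes binary labels.

```python
import numpy as np

def pairwise_joint_score(x_pair, y):
    # Joint-effect statistic: accuracy of a nearest-centroid rule that
    # uses the two variables together (a stand-in for the classifier
    # underlying SigJEff).
    mu0 = x_pair[y == 0].mean(axis=0)
    mu1 = x_pair[y == 1].mean(axis=0)
    d0 = np.linalg.norm(x_pair - mu0, axis=1)
    d1 = np.linalg.norm(x_pair - mu1, axis=1)
    return np.mean((d1 < d0) == (y == 1))

def sigjeff_pvalue(x_pair, y, n_perm=200, seed=0):
    # Permutation p-value: how often a label-permuted dataset scores at
    # least as well as the observed pair. Small p-values flag pairs with
    # a significant joint effect.
    rng = np.random.default_rng(seed)
    obs = pairwise_joint_score(x_pair, y)
    hits = sum(
        pairwise_joint_score(x_pair, rng.permutation(y)) >= obs
        for _ in range(n_perm)
    )
    return (1 + hits) / (1 + n_perm)
```

Ranking variable pairs by this p-value, rather than ranking single variables by marginal effect, is what lets a joint-effect test surface pairs that classify well only when viewed together.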
Efficient Online Set-valued Classification with Bandit Feedback
Conformal prediction is a distribution-free method that wraps a given machine
learning model and returns a set of plausible labels that contain the true
label with a prescribed coverage rate. In practice, the empirical coverage
achieved relies heavily on fully observed label information from data both in
the training phase for model fitting and the calibration phase for quantile
estimation. This dependency poses a challenge in the context of online learning
with bandit feedback, where a learner has access only to the correctness of its
actions (i.e., the pulled arms) but not to the true label itself.
In particular, when the pulled arm is incorrect, the learner only knows that
the pulled one is not the true class label, but does not know which label is
true. Additionally, bandit feedback further results in a smaller labeled
dataset for calibration, limited to instances with correct actions, thereby
affecting the accuracy of quantile estimation. To address these limitations, we
propose Bandit Class-specific Conformal Prediction (BCCP), offering coverage
guarantees at a class-specific granularity. Using an unbiased estimate of an
estimand involving the true label, BCCP trains the model and makes set-valued
inferences through stochastic gradient descent. Our approach overcomes the
challenges of sparsely labeled data in each iteration and generalizes the
reliability and applicability of conformal prediction to online decision-making
environments.
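Class-specific calibration can be sketched with split conformal prediction, assuming nonconformity scores are available and that, as under bandit feedback, only calibration rounds where the pulled arm was correct reveal the true label. The helpers below are illustrative, not the BCCP algorithm itself, which additionally trains the model online via stochastic gradient descent.

```python
import numpy as np

def class_quantile(scores, alpha):
    # Conformal quantile with the usual finite-sample correction: the
    # ceil((n + 1) * (1 - alpha))-th smallest calibration score.
    n = len(scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))
    return np.sort(scores)[min(k, n) - 1]

def class_specific_sets(cal_scores, cal_labels, test_scores, alpha):
    # cal_scores[i] = nonconformity score of calibration point i at its
    # known true label; under bandit feedback, only rounds with a correct
    # pulled arm contribute here, which is why the usable calibration
    # set shrinks. Quantiles are computed separately per class.
    classes = np.unique(cal_labels)
    q = {c: class_quantile(cal_scores[cal_labels == c], alpha) for c in classes}
    # test_scores[i, c] = nonconformity of test point i at candidate class c;
    # the prediction set keeps every class passing its own quantile.
    return [
        {int(c) for c in classes if test_scores[i, c] <= q[c]}
        for i in range(test_scores.shape[0])
    ]
```

Per-class quantiles are what give coverage at a class-specific granularity: each class's threshold is calibrated only on examples known to belong to that class.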