Optimal classification in sparse Gaussian graphic model
Consider a two-class classification problem where the number of features is
much larger than the sample size. The features are masked by Gaussian noise
with mean zero and covariance matrix Σ, where the precision matrix
Ω = Σ⁻¹ is unknown but is presumably sparse. The useful features,
also unknown, are sparse and each contributes weakly (i.e., rare and weak) to
the classification decision. By obtaining a reasonably good estimate of
Ω, we formulate the setting as a linear regression model. We propose a
two-stage classification method where we first select features by the method of
Innovated Thresholding (IT), and then use the retained features and Fisher's
LDA for classification. In this approach, a crucial problem is how to set the
threshold of IT. We approach this problem by adapting the recent innovation of
Higher Criticism Thresholding (HCT). We find that when useful features are rare
and weak, the limiting behavior of HCT is essentially just as good as the
limiting behavior of ideal threshold, the threshold one would choose if the
underlying distribution of the signals is known (if only). Somewhat
surprisingly, when Ω is sufficiently sparse, its off-diagonal
coordinates usually do not have a major influence over the classification
decision. Compared to recent work in the case where Ω is the identity
matrix [Proc. Natl. Acad. Sci. USA 105 (2008) 14790-14795; Philos. Trans. R.
Soc. Lond. Ser. A Math. Phys. Eng. Sci. 367 (2009) 4449-4470], the current
setting is much more general, which needs a new approach and much more
sophisticated analysis. One key component of the analysis is the intimate
relationship between HCT and Fisher's separation. Another key component is the
tight large-deviation bounds for empirical processes for data with
unconventional correlation structures, where graph theory on vertex coloring
plays an important role.
Comment: Published at http://dx.doi.org/10.1214/13-AOS1163 in the Annals of
Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical
Statistics (http://www.imstat.org).
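The HCT step of the two-stage method can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function and variable names are ours, the p-values are two-sided tail probabilities of z-scores assumed to come from the innovated transform, and `alpha0` (the fraction of smallest p-values scanned) is an illustrative choice.

```python
from math import erfc, sqrt
import numpy as np

def hc_threshold(pvals, alpha0=0.10):
    """Higher Criticism threshold (sketch): return the p-value cutoff
    maximizing the standardized HC objective over the smallest
    alpha0 fraction of the sorted p-values."""
    p = np.sort(np.asarray(pvals, dtype=float))
    n = p.size
    i = np.arange(1, n + 1)
    # standardized excess of empirical over uniform CDF at each p_(i)
    hc = sqrt(n) * (i / n - p) / np.sqrt(p * (1.0 - p) + 1e-12)
    k = max(1, int(np.ceil(alpha0 * n)))
    return p[int(np.argmax(hc[:k]))]

# illustrative use: a rare/weak mixture of 20 signals among 1000 features
rng = np.random.default_rng(0)
z = np.concatenate([rng.normal(3.0, 1.0, 20),    # rare, weak signals
                    rng.normal(0.0, 1.0, 980)])  # nulls
pvals = np.array([erfc(abs(v) / sqrt(2.0)) for v in z])  # two-sided
thr = hc_threshold(pvals)
selected = np.flatnonzero(pvals <= thr)  # features retained for Fisher's LDA
```

The data-driven cutoff replaces the ideal (oracle) threshold; the abstract's point is that in the rare/weak regime the two have essentially the same limiting behavior.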
Fixed Boundary Flows
We consider the fixed boundary flow, a curve with canonical interpretability
as principal components extended to non-linear Riemannian manifolds. We aim to
find a flow with fixed starting and ending points for multivariate datasets
lying on an embedded non-linear Riemannian manifold; this differs from the
principal flow, which starts from the center of the data cloud. Both endpoints
are given in advance, and distances are measured by the intrinsic metric on
the manifold. From the
perspective of geometry, the fixed boundary flow is defined as an optimal curve
that moves in the data cloud: at any point on the flow, it maximizes the inner
product between a locally calculated vector field and the tangent vector of
the flow. We call this new curve the fixed boundary flow. The rigorous
definition is given by means of an Euler-Lagrange problem, and its solution is
reduced to that of a Differential Algebraic Equation (DAE). A high-level
algorithm is created to numerically compute the fixed boundary flow. We show
that the fixed boundary flow yields a concatenation of three segments, one of
which coincides with the usual principal flow when the manifold reduces to
Euclidean space. We illustrate how the fixed boundary flow can be used and
interpreted, and demonstrate its application to real data.
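The two ingredients of the criterion, a locally computed vector field and its inner product with the curve's tangent, can be sketched in the Euclidean special case. Everything here is an illustrative assumption: the Gaussian kernel, the bandwidth `h`, the polyline discretization, and all function names are ours; on a general manifold the computation would take place in tangent spaces under the intrinsic metric.

```python
import numpy as np

def local_principal_direction(x, data, h=1.0):
    """Leading eigenvector of a kernel-weighted local covariance at x
    (Euclidean sketch of a locally calculated vector field)."""
    d = data - x
    w = np.exp(-np.sum(d**2, axis=1) / (2.0 * h**2))  # Gaussian weights
    cov = (d * w[:, None]).T @ d / w.sum()
    _, vecs = np.linalg.eigh(cov)
    return vecs[:, -1]  # unit vector for the largest eigenvalue

def flow_objective(curve, data, h=1.0):
    """Sum over segments of |<unit tangent, local field>| along a polyline;
    the fixed boundary flow maximizes such a criterion with both
    endpoints held fixed."""
    score = 0.0
    for a, b in zip(curve[:-1], curve[1:]):
        t = b - a
        t = t / (np.linalg.norm(t) + 1e-12)
        v = local_principal_direction((a + b) / 2.0, data, h)
        score += abs(t @ v)
    return score

# illustrative use: an elongated 2-D cloud and a straight candidate curve
rng = np.random.default_rng(1)
data = rng.normal(size=(200, 2))
data[:, 1] *= 0.2                                   # elongated along axis 0
curve = np.linspace([-1.5, 0.0], [1.5, 0.0], 10)    # fixed endpoints
score = flow_objective(curve, data, h=0.5)
```

A candidate curve aligned with the cloud's local principal directions scores close to the number of segments; the actual flow is obtained by solving the Euler-Lagrange/DAE problem rather than by scoring candidates.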