"Influence Sketching": Finding Influential Samples In Large-Scale Regressions
There is an especially strong need in modern large-scale data analysis to
prioritize samples for manual inspection. For example, the inspection could
target important mislabeled samples or key vulnerabilities exploitable by an
adversarial attack. In order to solve the "needle in the haystack" problem of
which samples to inspect, we develop a new scalable version of Cook's distance,
a classical statistical technique for identifying samples which unusually
strongly impact the fit of a regression model (and its downstream predictions).
In order to scale this technique up to very large and high-dimensional
datasets, we introduce a new algorithm which we call "influence sketching."
Influence sketching embeds random projections within the influence computation;
in particular, the influence score is calculated using the randomly projected
pseudo-dataset from the post-convergence Generalized Linear Model (GLM). We
validate that influence sketching can reliably and successfully discover
influential samples by applying the technique to a malware detection dataset of
over 2 million executable files, each represented with almost 100,000 features.
For example, we find that randomly deleting approximately 10% of training
samples reduces predictive accuracy only slightly from 99.47% to 99.45%,
whereas deleting the same number of samples with high influence sketch scores
reduces predictive accuracy all the way down to 90.24%. Moreover, we find that
influential samples are especially likely to be mislabeled. In the case study,
we manually inspect the most influential samples, and find that influence
sketching pointed us to new, previously unidentified pieces of malware.
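A minimal NumPy sketch of the idea described above: the weighted design matrix of the converged GLM is randomly projected to a lower dimension, and Cook's-distance-style scores are formed from approximate leverages and residuals in the projected space. The function name, the dense Gaussian projection, the ridge term, and the exact residual definition are illustrative assumptions, not the paper's implementation; the highest-scoring rows would be the ones queued for manual inspection.

```python
import numpy as np

def influence_sketch_scores(X, y, mu, w, k=1024, seed=0):
    """Approximate Cook's-distance-style influence scores for a fitted GLM.

    X  : (n, p) design matrix
    y  : (n,) responses
    mu : (n,) fitted means from the converged GLM
    w  : (n,) IRLS working weights at convergence
    k  : sketch dimension, k << p (assumed, not the paper's setting)
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape

    # Pseudo-dataset from the post-convergence GLM: the weighted design.
    Xw = np.sqrt(w)[:, None] * X

    # Gaussian random projection (p x k), scaled to preserve norms on average.
    R = rng.standard_normal((p, k)) / np.sqrt(k)
    Z = Xw @ R                                    # (n, k) sketched pseudo-data

    # Approximate leverages h_i = z_i (Z^T Z)^{-1} z_i^T.
    G = Z.T @ Z + 1e-8 * np.eye(k)                # small ridge for stability
    h = np.einsum("ij,ij->i", Z, np.linalg.solve(G, Z.T).T)

    # Cook's-distance-like score from Pearson-type residuals and leverages.
    r2 = w * (y - mu) ** 2
    return r2 * h / np.maximum((1.0 - h) ** 2, 1e-12)
```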
Random projections for Bayesian regression
This article deals with random projections applied as a data reduction
technique for Bayesian regression analysis. We show sufficient conditions under
which the entire $d$-dimensional distribution is approximately preserved under
random projections by reducing the number of data points from $n$ to
$k \in O(\operatorname{poly}(d/\varepsilon))$ in the case $n \gg d$. Under mild
assumptions, we prove that evaluating a Gaussian likelihood function based on
the projected data instead of the original data yields a
$(1+O(\varepsilon))$-approximation in terms of the $\ell_2$ Wasserstein
distance. Our main result shows that the posterior distribution of Bayesian
linear regression is approximated up to a small error depending on only an
$\varepsilon$-fraction of its defining parameters. This holds when using
arbitrary Gaussian priors or the degenerate case of uniform distributions over
$\mathbb{R}^d$ for $\beta$. Our empirical evaluations involve different
simulated settings of Bayesian linear regression. Our experiments underline
that the proposed method is able to recover the regression model up to small
error while considerably reducing the total running time.
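As a rough illustration of the data-reduction step (not the paper's specific embedding or its guarantees), one can sketch $(X, y)$ down to $k$ rows and run a conjugate Gaussian-prior regression on the compressed data. The dense Gaussian sketch and all names and hyperparameters below are assumptions for the example.

```python
import numpy as np

def sketch_rows(X, y, k, seed=0):
    """Reduce (X, y) from n rows to k sketched rows via a Gaussian projection.

    The paper analyzes structured embeddings with provable guarantees; a dense
    Gaussian sketch is used here only to keep the example short.
    """
    rng = np.random.default_rng(seed)
    S = rng.standard_normal((k, X.shape[0])) / np.sqrt(k)
    return S @ X, S @ y

def gaussian_posterior(Xs, ys, prior_var=10.0, noise_var=1.0):
    """Conjugate Gaussian posterior N(mean, cov) computed on the sketched data."""
    d = Xs.shape[1]
    precision = Xs.T @ Xs / noise_var + np.eye(d) / prior_var
    cov = np.linalg.inv(precision)
    mean = cov @ (Xs.T @ ys) / noise_var
    return mean, cov

# Usage: a posterior over d coefficients computed from k << n sketched rows.
# Xs, ys = sketch_rows(X, y, k=500); mean, cov = gaussian_posterior(Xs, ys)
```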
Randomized Riemannian Preconditioning for Orthogonality Constrained Problems
Optimization problems with (generalized) orthogonality constraints are
prevalent across science and engineering. For example, in computational science
they arise in the symmetric (generalized) eigenvalue problem, in nonlinear
eigenvalue problems, and in electronic structure computations, to name a few
problems. In statistics and machine learning, they arise, for example, in
canonical correlation analysis and in linear discriminant analysis. In this
article, we consider using randomized preconditioning in the context of
optimization problems with generalized orthogonality constraints. Our proposed
algorithms are based on Riemannian optimization on the generalized Stiefel
manifold equipped with a non-standard preconditioned geometry, which
requires developing the geometric components needed for algorithms based on
this approach. Furthermore, we perform an asymptotic convergence analysis of
the preconditioned algorithms, which helps characterize the quality of a given
preconditioner using second-order
information. Finally, for the problems of canonical correlation analysis and
linear discriminant analysis, we develop randomized preconditioners along with
corresponding bounds on the relevant condition number.
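A toy illustration of the randomized-preconditioner idea, not the paper's construction: a constraint Gram matrix of the form $B = X^\top X/n + \lambda I$ (modeled loosely on the CCA setting) is approximated from a row sketch of the data, and the quality of the resulting preconditioner can be gauged through the condition number of the preconditioned matrix. The Gaussian sketch, sketch size, regularization, and function names are assumptions; structured sketches are what make this competitive at scale, and the sketch needs enough rows relative to $d$ to be useful.

```python
import numpy as np

def randomized_preconditioner(X, sketch_size=2000, lam=1e-3, seed=0):
    """Cholesky factor of M ~ B = X^T X / n + lam*I, built from a row sketch of X."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    S = rng.standard_normal((sketch_size, n)) / np.sqrt(sketch_size)
    Xs = S @ X                                   # sketched data, sketch_size x d
    M = Xs.T @ Xs / n + lam * np.eye(d)          # E[Xs^T Xs] = X^T X, so M ~ B
    return np.linalg.cholesky(M)                 # M = L L^T

def preconditioned_cond(X, L, lam=1e-3):
    """Condition number of L^{-1} B L^{-T}; values near 1 indicate a good preconditioner."""
    n, d = X.shape
    B = X.T @ X / n + lam * np.eye(d)
    Linv = np.linalg.inv(L)
    return np.linalg.cond(Linv @ B @ Linv.T)
```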
Optimal Principal Component Analysis in Distributed and Streaming Models
We study the Principal Component Analysis (PCA) problem in the distributed
and streaming models of computation. Given a matrix $A \in \mathbb{R}^{m \times n}$, a
rank parameter $k < \operatorname{rank}(A)$, and an accuracy parameter $0 < \epsilon < 1$, we
want to output an $m \times k$ orthonormal matrix $U$ for which
$\|A - U U^{\top} A\|_F^2 \le (1+\epsilon)\,\|A - A_k\|_F^2$, where $A_k$ is the best rank-$k$ approximation to $A$.
This paper provides improved algorithms for distributed PCA and streaming
PCA.
Comment: STOC 2016 full version.
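The abstract fixes the objective rather than describing an algorithm, so the snippet below is not one of the paper's methods; it only evaluates the guarantee the problem targets. A candidate orthonormal $U$ (here produced by a basic randomized range finder, an assumption for the example) is compared against the best rank-$k$ approximation.

```python
import numpy as np

def randomized_range_u(A, k, oversample=10, seed=0):
    """Candidate m x k orthonormal U from a simple randomized range finder (illustrative only)."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((A.shape[1], k + oversample))
    Q, _ = np.linalg.qr(A @ Omega)         # orthonormal basis for the sampled range
    # Rotate the basis so its leading k directions align with the top singular vectors of Q^T A.
    Ub, _, _ = np.linalg.svd(Q.T @ A, full_matrices=False)
    return Q @ Ub[:, :k]

def pca_error_ratio(A, U, k):
    """Return ||A - U U^T A||_F^2 / ||A - A_k||_F^2; at most 1 + eps for a good U."""
    err = np.linalg.norm(A - U @ (U.T @ A), "fro") ** 2
    s = np.linalg.svd(A, compute_uv=False)
    best = np.sum(s[k:] ** 2)              # ||A - A_k||_F^2 via trailing singular values
    return err / best
```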
Dimensionality Reduction for k-Means Clustering and Low Rank Approximation
We show how to approximate a data matrix $\mathbf{A}$ with a much smaller
sketch $\mathbf{\tilde{A}}$ that can be used to solve a general class of
constrained $k$-rank approximation problems to within $(1+\epsilon)$ error.
Importantly, this class of problems includes $k$-means clustering and
unconstrained low rank approximation (i.e. principal component analysis). By
reducing data points to just $O(k)$ dimensions, our methods generically
accelerate any exact, approximate, or heuristic algorithm for these ubiquitous
problems.
For $k$-means dimensionality reduction, we provide $(1+\epsilon)$ relative
error results for many common sketching techniques, including random row
projection, column selection, and approximate SVD. For approximate principal
component analysis, we give a simple alternative to known algorithms that has
applications in the streaming setting. Additionally, we extend recent work on
column-based matrix reconstruction, giving column subsets that not only `cover'
a good subspace for $\mathbf{A}$, but can be used directly to compute this
subspace.
Finally, for $k$-means clustering, we show how to achieve a $(9+\epsilon)$
approximation by Johnson-Lindenstrauss projecting data points to just
$O(\log k/\epsilon^2)$ dimensions. This gives the first result that leverages
the specific structure of $k$-means to achieve dimension independent of input
size and sublinear in $k$.
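A minimal sketch of the last claim: Johnson-Lindenstrauss projecting the points to a target dimension on the order of $\log k/\epsilon^2$ and clustering in the projected space. The constant in the target dimension, the Gaussian projection, and the use of scikit-learn's KMeans (a Lloyd's-style heuristic, whereas the stated $(9+\epsilon)$ bound concerns the cost of the clustering found in the projected space under the paper's analysis) are assumptions for the example, not the paper's construction.

```python
import numpy as np
from sklearn.cluster import KMeans

def jl_kmeans(A, k, eps=0.5, seed=0):
    """k-means on the rows of A after a Johnson-Lindenstrauss projection.

    The target dimension scales like log(k)/eps^2 (constant 4 chosen
    arbitrarily here) and depends neither on n nor on the original dimension d.
    """
    rng = np.random.default_rng(seed)
    d = A.shape[1]
    m = max(1, int(np.ceil(4 * np.log(max(k, 2)) / eps ** 2)))  # projected dimension
    R = rng.standard_normal((d, m)) / np.sqrt(m)                # JL projection matrix
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(A @ R)
```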