Randomized Dimension Reduction on Massive Data
Scalability of statistical estimators is of increasing importance in modern
applications and dimension reduction is often used to extract relevant
information from data. A variety of popular dimension reduction approaches can
be framed as symmetric generalized eigendecomposition problems. In this paper
we outline how taking into account the low rank structure assumption implicit
in these dimension reduction approaches provides both computational and
statistical advantages. We adapt recent randomized low-rank approximation
algorithms to provide efficient solutions to three dimension reduction methods:
Principal Component Analysis (PCA), Sliced Inverse Regression (SIR), and
Localized Sliced Inverse Regression (LSIR). A key observation in this paper is
that randomization serves a dual role, improving both computational and
statistical performance. This point is highlighted in our experiments on real
and simulated data.

Comment: 31 pages, 6 figures. Key words: dimension reduction, generalized
eigendecomposition, low-rank, supervised, inverse regression, random
projections, randomized algorithms, Krylov subspace method
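The randomized low-rank route the abstract describes can be sketched as follows: project the data through a random matrix to capture its dominant subspace, then solve a small exact problem there. This is a minimal illustration of a Halko-style randomized range finder applied to PCA, not the paper's exact algorithm; the rank `k` and `oversample` parameter are illustrative choices.

```python
import numpy as np

def randomized_pca(X, k, oversample=10, seed=0):
    """Approximate the top-k principal components of X (n x d)
    via a randomized range finder followed by a small exact SVD."""
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)                  # center the data
    n, d = Xc.shape
    # A random projection captures the dominant column space of Xc.
    Omega = rng.standard_normal((d, k + oversample))
    Y = Xc @ Omega                           # n x (k + oversample) sketch
    Q, _ = np.linalg.qr(Y)                   # orthonormal basis for range(Y)
    # Restrict to the small subspace and take an exact SVD there.
    B = Q.T @ Xc                             # (k + oversample) x d
    _, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Vt[:k], (s[:k] ** 2) / (n - 1)    # components, explained variances

# Usage: data that is exactly low rank recovers its principal
# directions from a single pass plus a tiny SVD.
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 3)) @ rng.standard_normal((3, 50))
V, var = randomized_pca(X, k=3)
```

The cost is dominated by one pass over the data (the product `Xc @ Omega`) rather than a full eigendecomposition, which is the computational advantage the abstract refers to.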
Self-adaptive node-based PCA encodings
In this paper we propose an algorithm, Simple Hebbian PCA, and prove that it
is able to calculate the principal component analysis (PCA) in a distributed
fashion across nodes. It simplifies existing network structures by removing
intralayer weights, essentially cutting the number of weights that need to be
trained in half.
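The idea of a node computing PCA from local Hebbian updates can be illustrated with Oja's classical single-neuron rule, a standard precursor of such algorithms. This is not the paper's Simple Hebbian PCA; the learning rate and epoch count are illustrative assumptions.

```python
import numpy as np

def oja_first_component(X, lr=0.01, epochs=50, seed=0):
    """Estimate the first principal component with Oja's Hebbian rule
    w <- w + lr * y * (x - y * w), where y = w.x is the neuron output.
    The subtraction of y*w keeps ||w|| near 1 without explicit
    normalization or intralayer weights."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)
    Xc = X - X.mean(axis=0)                  # center the data
    for _ in range(epochs):
        for x in Xc:
            y = w @ x                        # Hebbian activation
            w += lr * y * (x - y * w)        # Oja's normalized update
    return w / np.linalg.norm(w)

# Usage: data with a dominant variance direction along the first axis.
rng = np.random.default_rng(2)
X = rng.standard_normal((400, 4))
X[:, 0] *= 5.0
w = oja_first_component(X)
```

Each update uses only the node's own input and output, which is what makes rules of this family attractive for distributed, per-node computation.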
Asynchronous Gossip for Averaging and Spectral Ranking
We consider two variants of the classical gossip algorithm. The first variant
is a version of asynchronous stochastic approximation. We highlight a
fundamental difficulty associated with the classical asynchronous gossip
scheme, viz., that it may not converge to a desired average, and suggest an
alternative scheme based on reinforcement learning that has guaranteed
convergence to the desired average. We then discuss a potential application to
a wireless network setting with simultaneous link activation constraints. The
second variant is a gossip algorithm for distributed computation of the
Perron-Frobenius eigenvector of a nonnegative matrix. While the first variant
draws upon a reinforcement learning algorithm for an average cost controlled
Markov decision problem, the second variant draws upon a reinforcement learning
algorithm for risk-sensitive control. We then discuss potential applications of
the second variant to ranking schemes, reputation networks, and principal
component analysis.

Comment: 14 pages, 7 figures. Minor revision.
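The pairwise averaging primitive underlying gossip algorithms can be sketched on a complete graph. Note this is the classical scheme, which preserves the sum by construction and so does converge to the average; the difficulty the abstract highlights arises in its asynchronous stochastic-approximation variant. The step count and topology here are toy assumptions.

```python
import numpy as np

def pairwise_gossip(x0, steps=20000, seed=0):
    """Classical pairwise gossip on a complete graph: at each tick a
    random pair of nodes meets and averages their values. Every
    exchange preserves the total, so all nodes converge to the mean
    of the initial values."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    n = len(x)
    for _ in range(steps):
        i, j = rng.choice(n, size=2, replace=False)
        m = 0.5 * (x[i] + x[j])     # pairwise average preserves the sum
        x[i] = x[j] = m
    return x

# Usage: four nodes starting at 1..4 all converge to the mean 2.5.
x = pairwise_gossip([1.0, 2.0, 3.0, 4.0])
```

The sum-preservation property is exactly what a naive asynchronous update schedule can break, motivating the reinforcement-learning correction the paper proposes.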