Optimal approximate matrix product in terms of stable rank
We prove, using the subspace embedding guarantee in a black box way, that one
can achieve the spectral norm guarantee for approximate matrix multiplication
with a dimensionality-reducing map having $m = O(\tilde r/\epsilon^2)$
rows. Here $\tilde r$ is the maximum stable rank, i.e. the squared ratio of
Frobenius and operator norms, of the two matrices being multiplied. This is a
quantitative improvement over previous work of [MZ11, KVZ14], and is also
optimal for any oblivious dimensionality-reducing map. Furthermore, due to the
black box reliance on the subspace embedding property in our proofs, our
theorem can be applied to a much more general class of sketching matrices than
what was known before, in addition to achieving better bounds. For example, one
can apply our theorem to efficient subspace embeddings such as the Subsampled
Randomized Hadamard Transform or sparse subspace embeddings, or even with
subspace embedding constructions that may be developed in the future.
Our main theorem, via connections with spectral error matrix multiplication
shown in prior work, implies quantitative improvements for approximate least
squares regression and low rank approximation. Our main result has also already
been applied to improve dimensionality reduction guarantees for $k$-means
clustering [CEMMP14], and implies new results for nonparametric regression
[YPW15].
We also separately point out that the proof of the "BSS" deterministic
row-sampling result of [BSS12] can be modified to show that for any matrices
$A, B$ of stable rank at most $\tilde r$, one can achieve the spectral norm
guarantee for approximate matrix multiplication of $A^T B$ by deterministically
sampling $O(\tilde r/\epsilon^2)$ rows that can be found in polynomial
time. The original result of [BSS12] was for rank instead of stable rank. Our
observation leads to a stronger version of a main theorem of [KMST10].
Comment: v3: minor edits; v2: fixed one step in proof of Theorem 9 which was wrong by a constant factor (see the new Lemma 5 and its use; final theorem unaffected)
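
A minimal numerical sketch may help make the guarantee concrete. The snippet below (illustrative only, not the authors' code; the leading constant on $\tilde r/\epsilon^2$ and the test matrices are arbitrary assumptions) multiplies two matrices of low stable rank through a CountSketch-style sparse subspace embedding, one of the sketch families the theorem covers, and reports the spectral-norm error on the $\|A\|\,\|B\|$ scale:

```python
import numpy as np

# A minimal sketch (not the paper's code): approximate A^T B with (SA)^T (SB)
# using a CountSketch-style sparse subspace embedding. The theorem discussed
# above says m = O(r~ / eps^2) rows suffice for spectral-norm error
# eps * ||A|| * ||B||, where r~ bounds the stable ranks of A and B.

def countsketch(m, n, rng):
    """Return a sparse embedding S (m x n): one random +/-1 per column."""
    S = np.zeros((m, n))
    rows = rng.integers(0, m, size=n)
    signs = rng.choice([-1.0, 1.0], size=n)
    S[rows, np.arange(n)] = signs
    return S

def stable_rank(M):
    return np.linalg.norm(M, 'fro') ** 2 / np.linalg.norm(M, 2) ** 2

rng = np.random.default_rng(0)
n, d = 2000, 50
decay = np.diag(1.0 / np.arange(1, d + 1))          # decaying spectra keep stable rank small
A = rng.standard_normal((n, d)) @ decay
B = rng.standard_normal((n, d)) @ decay

r = max(stable_rank(A), stable_rank(B))
eps = 0.25
m = int(4 * r / eps**2)                             # m ~ r~/eps^2; constant is illustrative
S = countsketch(m, n, rng)

exact = A.T @ B
approx = (S @ A).T @ (S @ B)
err = np.linalg.norm(exact - approx, 2)
scale = np.linalg.norm(A, 2) * np.linalg.norm(B, 2)
print(f"stable rank ~ {r:.1f}, sketch rows m = {m}, "
      f"spectral error / (||A|| ||B||) = {err / scale:.3f}")
```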
Efficient SDP Inference for Fully-connected CRFs Based on Low-rank Decomposition
Conditional Random Fields (CRF) have been widely used in a variety of
computer vision tasks. Conventional CRFs typically define edges on neighboring
image pixels, resulting in a sparse graph such that efficient inference can be
performed. However, these CRFs fail to model long-range contextual
relationships. Fully-connected CRFs have thus been proposed. While there are
efficient approximate inference methods for such CRFs, usually they are
sensitive to initialization and make strong assumptions. In this work, we
develop an efficient, yet general algorithm for inference on fully-connected
CRFs. The algorithm is based on a scalable SDP algorithm and the low-rank
approximation of the similarity/kernel matrix. The core of the proposed
algorithm is a tailored quasi-Newton method that takes advantage of the
low-rank matrix approximation when solving the specialized SDP dual problem.
Experiments demonstrate that our method can be applied on fully-connected CRFs
that cannot be solved previously, such as pixel-level image co-segmentation.
Comment: 15 pages. A conference version of this work appears in Proc. IEEE Conference on Computer Vision and Pattern Recognition, 2015
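
The low-rank factorization of the similarity matrix is the ingredient that makes the SDP scalable. The sketch below (a hedged illustration, not the authors' implementation; the Gaussian kernel, feature data, and rank are assumptions) shows one standard way to obtain such a factorization, a Nystrom approximation $K \approx \Phi\Phi^T$:

```python
import numpy as np

# Hedged illustration (not the paper's code): Nystrom low-rank factorization
# K ~= Phi @ Phi.T of a Gaussian similarity matrix, the kind of decomposition
# the SDP inference above relies on to stay scalable.

def gaussian_kernel(X, Y, gamma):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom(X, rank, gamma, rng):
    n = X.shape[0]
    idx = rng.choice(n, size=rank, replace=False)   # landmark points
    C = gaussian_kernel(X, X[idx], gamma)           # n x rank block of K
    W = C[idx]                                      # rank x rank landmark block
    # K ~= C W^{-1} C^T = Phi Phi^T with Phi = C W^{-1/2}
    evals, evecs = np.linalg.eigh(W)
    evals = np.clip(evals, 1e-12, None)             # guard against tiny eigenvalues
    return C @ evecs @ np.diag(evals ** -0.5) @ evecs.T

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 5))                   # stand-in for per-pixel features
Phi = nystrom(X, rank=40, gamma=0.5, rng=rng)
K = gaussian_kernel(X, X, 0.5)
rel = np.linalg.norm(K - Phi @ Phi.T, 'fro') / np.linalg.norm(K, 'fro')
print(f"relative Frobenius error of rank-40 Nystrom factorization: {rel:.3f}")
```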
A Fast Algorithm for Sparse Controller Design
We consider the task of designing sparse control laws for large-scale systems
by directly minimizing an infinite horizon quadratic cost with an $\ell_1$
penalty on the feedback controller gains. Our focus is on an improved algorithm
that allows us to scale to large systems (i.e. those where sparsity is most
useful) with convergence times that are several orders of magnitude faster than
existing algorithms. In particular, we develop an efficient proximal Newton
method which minimizes per-iteration cost with a coordinate descent active set
approach and fast numerical solutions to the Lyapunov equations. Experimentally
we demonstrate the appeal of this approach on synthetic examples and real power
networks significantly larger than those previously considered in the
literature.
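
A simplified sketch of the underlying optimization may be useful. The snippet below is not the paper's proximal Newton method: it runs plain proximal *gradient* descent on the LQR cost plus an $\ell_1$ penalty, using the classical Lyapunov-equation gradient formula; the system matrices, penalty weight, and step size are illustrative assumptions, and the stability-preserving line search is omitted:

```python
import numpy as np
from scipy.linalg import solve_continuous_are, solve_continuous_lyapunov

# Simplified sketch (proximal gradient, not the paper's proximal Newton):
# minimize J(K) + lam * ||K||_1 for the infinite-horizon LQR cost J.

def cost_and_grad(K, A, B, Q, R, W):
    Acl = A - B @ K
    # controllability Gramian:  Acl L + L Acl^T + W = 0
    L = solve_continuous_lyapunov(Acl, -W)
    # adjoint variable:         Acl^T P + P Acl + Q + K^T R K = 0
    P = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
    J = np.trace((Q + K.T @ R @ K) @ L)
    G = 2.0 * (R @ K - B.T @ P) @ L      # classical gradient formula
    return J, G

def soft_threshold(X, t):
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

rng = np.random.default_rng(0)
n, m = 10, 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
Q, R, W = np.eye(n), np.eye(m), np.eye(n)
lam, step = 0.5, 1e-3

# initialize at the (dense) LQR-optimal gain so the closed loop starts stable
K = np.linalg.solve(R, B.T @ solve_continuous_are(A, B, Q, R))

for it in range(200):                    # no line search: a real solver should
    J, G = cost_and_grad(K, A, B, Q, R, W)   # safeguard closed-loop stability
    K = soft_threshold(K - step * G, step * lam)

print(f"final LQR cost {J:.2f}, nonzeros in K: {int((np.abs(K) > 1e-6).sum())}/{K.size}")
```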
Dimensionality Reduction for k-Means Clustering and Low Rank Approximation
We show how to approximate a data matrix $\mathbf{A}$ with a much smaller
sketch $\mathbf{\tilde A}$ that can be used to solve a general class of
constrained $k$-rank approximation problems to within $(1+\epsilon)$ error.
Importantly, this class of problems includes $k$-means clustering and
unconstrained low rank approximation (i.e. principal component analysis). By
reducing data points to just $O(k)$ dimensions, our methods generically
accelerate any exact, approximate, or heuristic algorithm for these ubiquitous
problems.
For $k$-means dimensionality reduction, we provide $(1+\epsilon)$ relative
error results for many common sketching techniques, including random row
projection, column selection, and approximate SVD. For approximate principal
component analysis, we give a simple alternative to known algorithms that has
applications in the streaming setting. Additionally, we extend recent work on
column-based matrix reconstruction, giving column subsets that not only `cover'
a good subspace for $\mathbf{A}$, but can be used directly to compute this
subspace.
Finally, for $k$-means clustering, we show how to achieve a $(9+\epsilon)$
approximation by Johnson-Lindenstrauss projecting data points to just
$O(\log k/\epsilon^2)$ dimensions. This gives the first result that leverages the
specific structure of $k$-means to achieve dimension independent of input size
and sublinear in $k$.
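
A toy experiment makes the last result tangible. The snippet below (a hedged illustration, not the paper's experiments; the data, target dimension constant, and use of scikit-learn's KMeans are assumptions) projects with a dense Johnson-Lindenstrauss map to roughly $\log k/\epsilon^2$ dimensions, clusters there, and compares the induced cost to clustering the full data:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hedged illustration of JL-based dimensionality reduction for k-means:
# cluster in the sketch space, then evaluate the cost on the original data.

def kmeans_cost(X, labels):
    return sum(((X[labels == c] - X[labels == c].mean(0)) ** 2).sum()
               for c in np.unique(labels))

rng = np.random.default_rng(0)
n, d, k, eps = 2000, 300, 5, 0.5
centers = 5.0 * rng.standard_normal((k, d))
X = centers[rng.integers(0, k, n)] + rng.standard_normal((n, d))

m = 4 * int(np.ceil(np.log(k) / eps**2))       # target dim ~ log k / eps^2
Pi = rng.standard_normal((d, m)) / np.sqrt(m)  # dense JL map
Y = X @ Pi

labels_sketch = KMeans(n_clusters=k, n_init=10, random_state=0).fit(Y).labels_
labels_full = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).labels_

print(f"reduced dim m = {m}")
print(f"cost of sketch-derived clustering / cost of full clustering: "
      f"{kmeans_cost(X, labels_sketch) / kmeans_cost(X, labels_full):.3f}")
```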
Numerical Hermitian Yang-Mills Connections and Vector Bundle Stability in Heterotic Theories
A numerical algorithm is presented for explicitly computing the gauge
connection on slope-stable holomorphic vector bundles on Calabi-Yau manifolds.
To illustrate this algorithm, we calculate the connections on stable monad
bundles defined on the K3 twofold and Quintic threefold. An error measure is
introduced to determine how closely our algorithmic connection approximates a
solution to the Hermitian Yang-Mills equations. We then extend our results by
investigating the behavior of non-slope-stable bundles. In a variety of
examples, it is shown that the failure of these bundles to satisfy the
Hermitian Yang-Mills equations, including field-strength singularities, can be
accurately reproduced numerically. These results make it possible to
numerically determine whether or not a vector bundle is slope-stable, thus
providing an important new tool in the exploration of heterotic vacua.
Comment: 52 pages, 15 figures. LaTeX formatting of figures corrected in version 2
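
To give a feel for the kind of error measure described, here is a heavily hedged sketch: given samples of the contracted field strength $g^{a\bar b}F_{a\bar b}$ at points on the manifold (with integration weights), the Hermitian Yang-Mills equations require it to equal $\mu\cdot\mathbf{1}$, so one plausible measure is the weighted deviation from that condition. The normalization below is an illustrative choice, not necessarily the paper's:

```python
import numpy as np

# Hedged sketch of an HYM error measure: H_samples[i] is a Hermitian k x k
# sample of g^{a b-bar} F_{a b-bar} at a point, weights are integration
# weights. The HYM equations require H = mu * Identity everywhere.

def hym_error(H_samples, weights):
    H = np.asarray(H_samples)                        # shape (N, k, k)
    w = np.asarray(weights) / np.sum(weights)
    k = H.shape[-1]
    # slope parameter mu estimated from the weighted average trace
    mu = np.einsum('i,iaa->', w, H).real / k
    dev = H - mu * np.eye(k)                         # pointwise deviation from mu * 1
    num = np.einsum('i,iab,iab->', w, dev, dev.conj()).real
    return np.sqrt(num / (abs(mu) ** 2 * k))         # illustrative normalization

# toy usage: near-HYM samples should give a small error
rng = np.random.default_rng(0)
N, k = 100, 3
noise = 0.01 * rng.standard_normal((N, k, k))
H = 2.0 * np.eye(k) + (noise + noise.transpose(0, 2, 1)) / 2
print(f"error measure on near-HYM samples: {hym_error(H, np.ones(N)):.4f}")
```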
The achievable performance of convex demixing
Demixing is the problem of identifying multiple structured signals from a
superimposed, undersampled, and noisy observation. This work analyzes a general
framework, based on convex optimization, for solving demixing problems. When
the constituent signals follow a generic incoherence model, this analysis leads
to precise recovery guarantees. These results admit an attractive
interpretation: each signal possesses an intrinsic degrees-of-freedom
parameter, and demixing can succeed if and only if the dimension of the
observation exceeds the total degrees of freedom present in the observation
- …
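
A small worked instance of the demixing framework may be useful. The sketch below (not the authors' code; the sum-of-$\ell_1$ formulation, problem sizes, and random orthobasis are illustrative assumptions consistent with the generic incoherence model) recovers a sparse signal superimposed with a signal sparse in a random basis by solving a convex program with cvxpy:

```python
import numpy as np
import cvxpy as cp

# Hedged sketch of convex demixing: observe z = x0 + U y0, where x0 is sparse
# and y0 is sparse in the (random, incoherent) basis U, then separate the two
# constituents by l1 minimization.

rng = np.random.default_rng(0)
n, s = 100, 5
U = np.linalg.qr(rng.standard_normal((n, n)))[0]   # random orthobasis

x0 = np.zeros(n); x0[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y0 = np.zeros(n); y0[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
z = x0 + U @ y0                                    # superimposed observation

x, y = cp.Variable(n), cp.Variable(n)
prob = cp.Problem(cp.Minimize(cp.norm1(x) + cp.norm1(y)),
                  [x + U @ y == z])
prob.solve()

print(f"relative error in x: {np.linalg.norm(x.value - x0) / np.linalg.norm(x0):.2e}")
print(f"relative error in y: {np.linalg.norm(y.value - y0) / np.linalg.norm(y0):.2e}")
```

With $2n$ unknowns and only $n$ observations the system is undersampled, matching the setting above: recovery succeeds here because the total degrees of freedom of the two sparse constituents stay below the dimension of the observation.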