Practical sketching algorithms for low-rank matrix approximation
This paper describes a suite of algorithms for constructing low-rank
approximations of an input matrix from a random linear image of the matrix,
called a sketch. These methods can preserve structural properties of the input
matrix, such as positive-semidefiniteness, and they can produce approximations
with a user-specified rank. The algorithms are simple, accurate, numerically
stable, and provably correct. Moreover, each method is accompanied by an
informative error bound that allows users to select parameters a priori to
achieve a given approximation quality. These claims are supported by numerical
experiments with real and synthetic data.
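As a concrete illustration of the sketch-and-reconstruct idea, the following minimal NumPy sketch draws Gaussian test matrices, forms a range sketch and a co-range sketch, and recovers a low-rank factorization with a small least-squares solve. The Gaussian test matrices and the sketch sizes k and l are illustrative assumptions, not the paper's recommended parameter choices.

```python
import numpy as np

def sketch_and_reconstruct(A, k, l, rng):
    """Low-rank approximation of A recovered from a random linear sketch.

    A : (m, n) dense array
    k : range-sketch size (at least the target rank)
    l : co-range-sketch size (at least k)
    """
    m, n = A.shape
    Omega = rng.standard_normal((n, k))    # test matrix for the range of A
    Psi = rng.standard_normal((l, m))      # test matrix for the co-range of A

    Y = A @ Omega                          # range sketch, m x k
    W = Psi @ A                            # co-range sketch, l x n

    Q, _ = np.linalg.qr(Y)                 # orthonormal basis for the sketched range
    X, *_ = np.linalg.lstsq(Psi @ Q, W, rcond=None)  # small least-squares fit of the core
    return Q, X                            # A is approximated by Q @ X, rank at most k

# toy check on a synthetic low-rank matrix
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 10)) @ rng.standard_normal((10, 150))
Q, X = sketch_and_reconstruct(A, k=20, l=40, rng=rng)
print(np.linalg.norm(A - Q @ X) / np.linalg.norm(A))
```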
Fixed-Rank Approximation of a Positive-Semidefinite Matrix from Streaming Data
Several important applications, such as streaming PCA and semidefinite
programming, involve a large-scale positive-semidefinite (psd) matrix that is
presented as a sequence of linear updates. Because of storage limitations, it
may only be possible to retain a sketch of the psd matrix. This paper develops
a new algorithm for fixed-rank psd approximation from a sketch. The approach
combines the Nyström approximation with a novel mechanism for rank truncation.
Theoretical analysis establishes that the proposed method can achieve any
prescribed relative error in the Schatten 1-norm and that it exploits the
spectral decay of the input matrix. Computer experiments show that the proposed
method dominates alternative techniques for fixed-rank psd matrix approximation
across a wide range of examples.
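The sketch below illustrates the reconstruction step under simplifying assumptions: it uses the plain Nyström formula Y (ΩᵀY)⁺ Yᵀ followed by a best rank-r truncation of a thin factor, and it omits the numerical-stability shift used in the paper's own routine.

```python
import numpy as np

def nystrom_fixed_rank(Y, Omega, r):
    """Rank-r psd approximation recovered from the sketch Y = A @ Omega.

    Simplified illustration: form the Nystrom approximation Y (Omega^T Y)^+ Y^T
    implicitly through a thin factor, then truncate it to rank r.
    """
    core = np.linalg.pinv(Omega.T @ Y)           # pseudo-inverse of the k x k core
    w, V = np.linalg.eigh((core + core.T) / 2)   # symmetrize before taking a square root
    F = Y @ (V * np.sqrt(np.clip(w, 0, None)))   # thin factor with F @ F.T = Nystrom approx.
    U, s, _ = np.linalg.svd(F, full_matrices=False)
    return U[:, :r], s[:r] ** 2                  # A is approximated by U diag(s^2) U^T

# toy check on a rank-5 psd matrix
rng = np.random.default_rng(0)
G = rng.standard_normal((500, 5))
A = G @ G.T
Omega = rng.standard_normal((500, 10))
U, lam = nystrom_fixed_rank(A @ Omega, Omega, r=5)
print(np.linalg.norm(A - (U * lam) @ U.T) / np.linalg.norm(A))
```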
Stochastic Frank-Wolfe for Composite Convex Minimization
A broad class of convex optimization problems can be formulated as a
semidefinite program (SDP): minimization of a convex function over the
positive-semidefinite cone subject to some affine constraints. The majority of
classical SDP solvers are designed for the deterministic setting where problem
data is readily available. In this setting, generalized conditional gradient
methods (aka Frank-Wolfe-type methods) provide scalable solutions by leveraging
the so-called linear minimization oracle instead of the projection onto the
semidefinite cone. Most problems in machine learning and modern engineering
applications, however, contain some degree of stochasticity. In this work, we
propose the first conditional-gradient-type method for solving stochastic
optimization problems under affine constraints. Our method guarantees
convergence in expectation for both the objective residual and the
feasibility gap.
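To make the linear-minimization-oracle mechanism concrete, here is a plain stochastic Frank-Wolfe loop for a least-squares objective over an ℓ1 ball; the data X, y, the radius tau, and the step-size schedule are illustrative assumptions, and this is the textbook conditional-gradient update with minibatch gradients rather than the composite method proposed in the paper.

```python
import numpy as np

def lmo_l1_ball(grad, tau):
    """Linear minimization oracle over the l1 ball of radius tau:
    the minimizer of <grad, s> is a signed, scaled coordinate vector."""
    i = np.argmax(np.abs(grad))
    s = np.zeros_like(grad)
    s[i] = -tau * np.sign(grad[i])
    return s

def stochastic_frank_wolfe(X, y, tau, iters=500, batch=32, seed=0):
    """Minimize 0.5 * ||X w - y||^2 over the l1 ball with minibatch gradients.
    A plain stochastic conditional-gradient loop for illustration only."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for k in range(iters):
        idx = rng.integers(0, n, size=batch)            # sample a minibatch
        g = X[idx].T @ (X[idx] @ w - y[idx]) / batch    # stochastic gradient
        s = lmo_l1_ball(g, tau)                         # call the LMO instead of projecting
        gamma = 2.0 / (k + 2.0)                         # standard step-size schedule
        w = (1 - gamma) * w + gamma * s                 # convex combination keeps feasibility
    return w
```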
Randomized Single-View Algorithms for Low-Rank Matrix Approximation
This paper develops a suite of algorithms for constructing low-rank approximations of an input matrix from a random linear image of the matrix, called a sketch. These methods can preserve structural properties of the input matrix, such as positive-semidefiniteness, and they can produce approximations with a user-specified rank. The algorithms are simple, accurate, numerically stable, and provably correct. Moreover, each method is accompanied by an informative error bound that allows users to select parameters a priori to achieve a given approximation quality. These claims are supported by computer experiments.
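One way to realize the user-specified rank mentioned above is to truncate the small core factor of the reconstruction: because Q has orthonormal columns, an SVD of X yields the best rank-r approximation of Q @ X directly. The snippet reuses the Q, X factors from the sketch-and-reconstruct example earlier and is an assumption-level illustration, not the paper's exact fixed-rank procedure.

```python
import numpy as np

def truncate_to_rank(Q, X, r):
    """Best rank-r truncation of the reconstruction A ~ Q @ X.
    Since Q has orthonormal columns, truncating the SVD of the small
    core X truncates Q @ X without touching the large factor."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return Q @ U[:, :r], s[:r], Vt[:r]      # A ~ (Q U_r) diag(s_r) V_r^T
```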
Scalable Semidefinite Programming
Semidefinite programming (SDP) is a powerful framework from convex optimization that has striking potential for data science applications. This paper develops a provably correct algorithm for solving large SDP problems by economizing on both the storage and the arithmetic costs. Numerical evidence shows that the method is effective for a range of applications, including relaxations of MaxCut, abstract phase retrieval, and quadratic assignment. Running on a laptop, the algorithm can handle SDP instances where the matrix variable has over 10¹³ entries.
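The following sketch illustrates two ingredients that such storage savings can rest on, stated here as assumptions about the general approach rather than the paper's exact routine: a linear minimization oracle over a trace-constrained psd cone reduces to one extreme eigenpair, and each rank-one update can be folded into a small sketch of the matrix variable instead of storing the full matrix.

```python
import numpy as np
from scipy.sparse.linalg import eigsh

def lmo_psd_trace_ball(G, alpha):
    """Linear minimization oracle over {X psd : tr(X) <= alpha}:
    the minimizer of <G, X> is alpha * v v^T, where v is an eigenvector
    for the smallest eigenvalue of G (or the zero matrix if G is psd)."""
    lam, v = eigsh(G, k=1, which='SA')      # smallest algebraic eigenpair
    if lam[0] >= 0:
        return 0.0, np.zeros(G.shape[0])    # the zero matrix is optimal
    return alpha, v[:, 0]                   # update direction is alpha * v v^T

def sketch_rank_one_update(S, Omega, gamma, weight, v):
    """Fold the rank-one step weight * v v^T into the sketch S = X @ Omega
    without ever materializing the n x n matrix variable X.
    Illustrative pairing of a conditional-gradient step with a sketched
    variable (an assumption, not the paper's exact update)."""
    return (1 - gamma) * S + gamma * weight * np.outer(v, v @ Omega)
```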