Low Rank Approximation of Binary Matrices: Column Subset Selection and Generalizations
Low rank matrix approximation is an important tool in machine learning. Given
a data matrix, low rank approximation helps to find factors and patterns, and
provides concise representations for the data. Research on low rank
approximation usually focuses on real matrices. However, in many applications
data are binary (categorical) rather than continuous. This leads to the problem
of low rank approximation of binary matrices. Here we are given a
binary matrix $A$ and a small integer $k$. The goal is to find two binary
matrices $U$ and $V$ of sizes $m \times k$ and $k \times n$ respectively, so
that the Frobenius norm of $A - UV$ is minimized. There are two models of this
problem, depending on the definition of the dot product of binary vectors: the
$GF(2)$ model and the Boolean semiring model. Unlike low rank
approximation of real matrices, which can be efficiently solved by Singular Value
Decomposition, approximation of binary matrices is NP-hard even for $k = 1$.
In this paper, we consider the problem of Column Subset Selection (CSS), in
which one low rank matrix must be formed by $k$ columns of the data matrix. We
characterize the approximation ratio of CSS for binary matrices. For the $GF(2)$
model, we show the approximation ratio of CSS is bounded by
$\frac{k}{2} + 1 + \frac{k}{2(2^k - 1)}$ and this bound is asymptotically tight. For the
Boolean model, it turns out that CSS is no longer sufficient to obtain a bound.
We then develop a Generalized CSS (GCSS) procedure in which the columns of one
low rank matrix are generated from Boolean formulas operating bitwise on $k$
columns of the data matrix. We show the approximation ratio of GCSS is bounded
by $2^{k-1} + 1$, and the exponential dependency on $k$ is inherent. Comment: 38 pages
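The distinction between the two dot-product models above can be made concrete. The following sketch (a hypothetical small random instance, not the paper's algorithm) contrasts the $GF(2)$ product with the Boolean semiring product for rank $k = 2$:

```python
import numpy as np

# Hypothetical small binary instance: data matrix A and rank-2
# binary factors U (m x k) and V (k x n).
rng = np.random.default_rng(0)
A = rng.integers(0, 2, size=(6, 5))
U = rng.integers(0, 2, size=(6, 2))
V = rng.integers(0, 2, size=(2, 5))

P = U @ V                       # integer product; entries lie in {0, 1, 2}

approx_gf2 = P % 2              # GF(2) model: dot products taken modulo 2
approx_bool = np.minimum(P, 1)  # Boolean semiring model: 1 + 1 = 1 (OR of ANDs)

# For 0/1 matrices, squared Frobenius error counts mismatched entries.
err_gf2 = int(((A - approx_gf2) ** 2).sum())
err_bool = int(((A - approx_bool) ** 2).sum())
```

The two products coincide for rank 1 but diverge whenever a dot product exceeds 1, which is one reason the two models require separate analyses.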
Approximate Completely Positive Semidefinite Rank
In this paper we provide an approximation for completely positive
semidefinite (cpsd) matrices with cpsd-rank bounded above (almost)
independently of the cpsd-rank of the initial matrix. This is particularly
relevant since the cpsd-rank of a matrix cannot, in general, be upper bounded
by a function depending only on its size. For this purpose, we make use of the
Approximate Carathéodory Theorem in order to construct an approximating matrix
with a low-rank Gram representation. We then employ the Johnson-Lindenstrauss
Lemma to improve this to a logarithmic dependence of the cpsd-rank on the size. Comment: v2: clarified and corrected some citations
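The Johnson-Lindenstrauss step can be sketched generically. The toy example below (a plain Gaussian random projection; the dimensions and the constant 8 are illustrative assumptions, not the paper's construction) compresses vectors to $O(\log m / \epsilon^2)$ dimensions while roughly preserving pairwise distances:

```python
import numpy as np

rng = np.random.default_rng(1)
d, m, eps = 2000, 40, 0.5        # original dimension, number of vectors, distortion
X = rng.normal(size=(m, d))      # rows play the role of Gram vectors

k = int(8 * np.log(m) / eps**2)  # target dimension ~ O(log m / eps^2)
G = rng.normal(size=(d, k)) / np.sqrt(k)  # scaled Gaussian projection
Y = X @ G                        # projected vectors, dimension k << d

def sq_dists(Z):
    """All pairwise squared Euclidean distances between rows of Z."""
    n = Z.shape[0]
    return np.array([np.sum((Z[i] - Z[j]) ** 2)
                     for i in range(n) for j in range(i + 1, n)])

# Ratios near 1 indicate distances survive the projection.
ratio = sq_dists(Y) / sq_dists(X)
```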
Simple Heuristics Yield Provable Algorithms for Masked Low-Rank Approximation
In masked low-rank approximation, one is given $A \in \mathbb{R}^{n \times n}$ and a binary mask matrix $W \in \{0,1\}^{n \times n}$. The goal is to
find a rank-$k$ matrix $L$ for which:
$$cost(L) = \sum_{i,j} W_{i,j} (A_{i,j} - L_{i,j})^2 \leq OPT + \epsilon \|A\|_F^2,$$
where $OPT = \min_{rank\text{-}k\ \hat{L}} cost(\hat{L})$ and $\epsilon$ is a given
error parameter. Depending on the choice of $W$, this problem captures factor
analysis, low-rank plus diagonal decomposition, robust PCA, low-rank matrix
completion, low-rank plus block matrix approximation, and many other problems. Many
of these problems are NP-hard, and while some algorithms with provable
guarantees are known, they either 1) run in time $n^{\Omega(k^2/\epsilon)}$ or
2) make strong assumptions, e.g., that $A$ is incoherent or that $W$ is random.
In this work, we show that a common polynomial time heuristic, which simply
sets $A$ to $0$ where $W$ is $0$, and then finds a standard low-rank
approximation, yields bicriteria approximation guarantees for this problem. In
particular, for rank $k' > k$ depending on the $public\ coin\ partition\ number$ of $W$, the heuristic outputs a rank-$k'$ matrix $L$ with $cost(L) \leq OPT +
\epsilon \|A\|_F^2$. This partition number is in turn bounded by the $randomized\ communication\ complexity$ of $W$, and in many important cases $k' = k \cdot poly(\log n/\epsilon)$.
Further, we show that different models of communication yield algorithms for
natural variants of masked low-rank approximation. For example, multi-player
number-in-hand communication complexity connects to masked tensor decomposition
and non-deterministic communication complexity to masked Boolean low-rank
factorization. Comment: ITCS 2021
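The heuristic analyzed above — zero out the unobserved entries, then take an ordinary truncated SVD — is simple to state in code. A minimal sketch on a hypothetical random instance and mask:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 30, 3
A = rng.normal(size=(n, n))
W = (rng.random((n, n)) < 0.8).astype(float)  # binary mask (1 = observed)

def masked_lowrank(A, W, k):
    """Set A to 0 where W is 0, then return the best rank-k approximation."""
    U, s, Vt = np.linalg.svd(W * A, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]        # truncated SVD of the masked matrix

def cost(L):
    """Masked squared Frobenius cost: error measured on observed entries only."""
    return np.sum(W * (A - L) ** 2)

L = masked_lowrank(A, W, k)
```

Note that the heuristic carries no guarantee at rank exactly $k$; the abstract's point is that it becomes provably good once the rank is relaxed to $k'$.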
Scalable and distributed constrained low rank approximations
Low rank approximation is the problem of finding two low rank factors W and H such that rank(WH) << rank(A) and A ≈ WH. These low rank factors W and H can be constrained for meaningful physical interpretation, and the problem is then referred to as Constrained Low Rank Approximation (CLRA). Like most constrained optimization problems, performing CLRA can be more computationally expensive than its unconstrained counterpart. A widely used CLRA is Non-negative Matrix Factorization (NMF), which enforces non-negativity constraints on each of its low rank factors W and H. In this thesis, I focus on scalable/distributed CLRA algorithms for constraints such as boundedness and non-negativity, for large real-world matrices that include text, High Definition (HD) video, social networks and recommender systems. First, I begin with the Bounded Matrix Low Rank Approximation (BMA), which imposes a lower and an upper bound on every element of the lower rank matrix. BMA is more challenging than NMF as it imposes bounds on the product WH rather than on each of the low rank factors W and H. For very large input matrices, we extend our BMA algorithm to Block BMA, which can scale to a large number of processors. In applications such as HD video, where the input matrix to be factored is extremely large, distributed computation is inevitable and network communication becomes a major performance bottleneck. Towards this end, we propose a novel distributed Communication Avoiding NMF (CANMF) algorithm that communicates only the right low rank factor to its neighboring machine. Finally, I propose a general distributed HPC-NMF framework that uses HPC techniques in communication-intensive NMF operations and is suitable for a broader class of NMF algorithms. Ph.D. thesis
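As a point of reference for the NMF subproblem discussed above, here is a minimal single-machine sketch using the classical Lee-Seung multiplicative updates (not the distributed algorithms developed in the thesis); the matrix sizes and iteration count are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, k = 20, 15, 4
A = rng.random((m, n))           # nonnegative data matrix

W = rng.random((m, k)) + 0.1     # nonnegative factor initializations
H = rng.random((k, n)) + 0.1
eps = 1e-9                       # guard against division by zero

err0 = np.linalg.norm(A - W @ H)
for _ in range(200):
    # Multiplicative updates: elementwise ratios keep W and H nonnegative
    # and monotonically decrease ||A - WH||_F.
    H *= (W.T @ A) / (W.T @ W @ H + eps)
    W *= (A @ H.T) / (W @ H @ H.T + eps)
err = np.linalg.norm(A - W @ H)
```

Because the updates only multiply by nonnegative ratios, the non-negativity constraints on W and H hold automatically at every iteration, with no projection step.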