
    Low Rank Approximation of Binary Matrices: Column Subset Selection and Generalizations

    Low rank matrix approximation is an important tool in machine learning. Given a data matrix, low rank approximation helps to find factors and patterns, and provides concise representations for the data. Research on low rank approximation usually focuses on real matrices. However, in many applications data are binary (categorical) rather than continuous, which leads to the problem of low rank approximation of binary matrices. Here we are given a $d \times n$ binary matrix $A$ and a small integer $k$. The goal is to find two binary matrices $U$ and $V$, of sizes $d \times k$ and $k \times n$ respectively, so that the Frobenius norm of $A - UV$ is minimized. There are two models of this problem, depending on the definition of the dot product of binary vectors: the $\mathrm{GF}(2)$ model and the Boolean semiring model. Unlike low rank approximation of real matrices, which can be solved efficiently by the Singular Value Decomposition, low rank approximation of binary matrices is NP-hard even for $k=1$. In this paper, we consider the problem of Column Subset Selection (CSS), in which one of the low rank factors must be formed by $k$ columns of the data matrix. We characterize the approximation ratio of CSS for binary matrices. For the $\mathrm{GF}(2)$ model, we show that the approximation ratio of CSS is bounded by $\frac{k}{2}+1+\frac{k}{2(2^k-1)}$ and that this bound is asymptotically tight. For the Boolean model, it turns out that CSS is no longer sufficient to obtain a bound. We then develop a Generalized CSS (GCSS) procedure in which the columns of one low rank factor are generated from Boolean formulas operating bitwise on columns of the data matrix. We show that the approximation ratio of GCSS is bounded by $2^{k-1}+1$, and that the exponential dependency on $k$ is inherent.
    Comment: 38 pages
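    As a concrete (if brute-force) illustration of the $\mathrm{GF}(2)$ model, the sketch below tries every $k$-subset of columns as $U$ and, for each column of $A$, picks the closest of the $2^k$ XOR combinations of the chosen columns. The function names are ours and the exhaustive search is only feasible for small $n$ and $k$; the paper's contribution is the approximation-ratio analysis, not this search.

```python
import itertools
import numpy as np

def gf2_css_cost(A, cols, k):
    """Cost of approximating binary A using U = the selected k columns of A.

    Over GF(2), each column of UV is an XOR combination of U's columns, so the
    best fit for column j of A is the combination at minimum Hamming distance.
    The total cost equals the squared Frobenius norm of the residual.
    """
    U = A[:, list(cols)]                                   # d x k, entries in {0, 1}
    coeffs = np.array(list(itertools.product([0, 1], repeat=k)))  # all 2^k binary vectors
    candidates = (coeffs @ U.T) % 2                        # 2^k x d XOR combinations
    cost = 0
    for j in range(A.shape[1]):
        cost += np.min(np.sum(candidates != A[:, j], axis=1))
    return cost

def css_gf2(A, k):
    """Exhaustive Column Subset Selection: try every k-subset of columns."""
    n = A.shape[1]
    best = min(itertools.combinations(range(n), k),
               key=lambda cols: gf2_css_cost(A, cols, k))
    return best, gf2_css_cost(A, best, k)

rng = np.random.default_rng(0)
A = rng.integers(0, 2, size=(8, 12))
cols, cost = css_gf2(A, k=2)
print(cols, cost)
```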

    Approximate Completely Positive Semidefinite Rank

    In this paper we provide an approximation for completely positive semidefinite (cpsd) matrices with cpsd-rank bounded above (almost) independently of the cpsd-rank of the initial matrix. This is particularly relevant since the cpsd-rank of a matrix cannot, in general, be upper bounded by a function depending only on its size. For this purpose, we make use of the Approximate Carathéodory Theorem to construct an approximating matrix with a low-rank Gram representation. We then employ the Johnson-Lindenstrauss Lemma to improve this to a logarithmic dependence of the cpsd-rank on the size.
    Comment: v2: clarified and corrected some citations
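    To illustrate the dimension-reduction step alone, the sketch below applies a standard Gaussian Johnson-Lindenstrauss projection to unit Gram vectors and checks that pairwise inner products survive up to a small additive error. This is a generic JL demonstration under our own choice of constants, not the paper's Approximate Carathéodory construction.

```python
import numpy as np

def jl_project(V, eps, rng):
    """Project the rows of V (one Gram vector per row) to dimension
    O(log n / eps^2) with a scaled Gaussian map; by the JL Lemma this
    preserves pairwise inner products up to additive error on the order of eps."""
    n, d = V.shape
    t = int(np.ceil(8 * np.log(n) / eps**2))      # target dimension (constant 8 is a common choice)
    G = rng.normal(size=(d, t)) / np.sqrt(t)
    return V @ G

rng = np.random.default_rng(1)
n, d, eps = 50, 200, 0.5
V = rng.normal(size=(n, d))
V = V / np.linalg.norm(V, axis=1, keepdims=True)  # unit-norm Gram vectors
W = jl_project(V, eps, rng)
err = np.abs(V @ V.T - W @ W.T).max()
print(W.shape, err)  # the Gram matrix is reproduced up to a small additive error
```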

    Simple Heuristics Yield Provable Algorithms for Masked Low-Rank Approximation

    In masked low-rank approximation, one is given $A \in \mathbb{R}^{n \times n}$ and a binary mask matrix $W \in \{0,1\}^{n \times n}$. The goal is to find a rank-$k$ matrix $L$ for which $$cost(L) = \sum_{i=1}^{n} \sum_{j=1}^{n} W_{i,j} \cdot (A_{i,j} - L_{i,j})^2 \leq OPT + \epsilon \|A\|_F^2,$$ where $OPT = \min_{\text{rank-}k\ \hat{L}} cost(\hat{L})$ and $\epsilon$ is a given error parameter. Depending on the choice of $W$, this problem captures factor analysis, low-rank plus diagonal decomposition, robust PCA, low-rank matrix completion, low-rank plus block matrix approximation, and many other problems. Many of these problems are NP-hard, and while some algorithms with provable guarantees are known, they either 1) run in time $n^{\Omega(k^2/\epsilon)}$ or 2) make strong assumptions, e.g., that $A$ is incoherent or that $W$ is random. In this work, we show that a common polynomial time heuristic, which simply sets $A$ to $0$ where $W$ is $0$ and then finds a standard low-rank approximation, yields bicriteria approximation guarantees for this problem. In particular, for a rank $k' > k$ depending on the public coin partition number of $W$, the heuristic outputs a rank-$k'$ matrix $L$ with $cost(L) \leq OPT + \epsilon \|A\|_F^2$. This partition number is in turn bounded by the randomized communication complexity of $W$, when interpreted as a two-player communication matrix. For many important examples of masked low-rank approximation, including all those listed above, this result yields bicriteria approximation guarantees with $k' = k \cdot \mathrm{poly}(\log n/\epsilon)$. Further, we show that different models of communication yield algorithms for natural variants of masked low-rank approximation. For example, multi-player number-in-hand communication complexity connects to masked tensor decomposition, and non-deterministic communication complexity to masked Boolean low-rank factorization.
    Comment: ITCS 2021
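    The heuristic analyzed here is simple enough to state directly: zero out the masked entries and run an ordinary SVD-based low-rank approximation. A minimal sketch follows; the rank $k'$ and the random mask are illustrative choices of ours, not values from the paper.

```python
import numpy as np

def masked_lowrank_heuristic(A, W, k_prime):
    """The simple heuristic: set A to 0 wherever the mask W is 0, then
    return a standard rank-k' approximation via truncated SVD."""
    Az = A * W
    U, s, Vt = np.linalg.svd(Az, full_matrices=False)
    return (U[:, :k_prime] * s[:k_prime]) @ Vt[:k_prime]

def masked_cost(A, W, L):
    """cost(L) = sum_ij W_ij * (A_ij - L_ij)^2, counting only unmasked entries."""
    return float(np.sum(W * (A - L) ** 2))

rng = np.random.default_rng(2)
n, k_prime = 30, 5
A = rng.normal(size=(n, n))
W = (rng.random((n, n)) < 0.7).astype(float)  # random binary mask (illustrative)
L = masked_lowrank_heuristic(A, W, k_prime)
print(masked_cost(A, W, L))
```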

    Scalable and distributed constrained low rank approximations

    Low rank approximation is the problem of finding two low rank factors W and H such that rank(WH) ≪ rank(A) and A ≈ WH. These low rank factors W and H can be constrained for meaningful physical interpretation, in which case the problem is referred to as Constrained Low Rank Approximation (CLRA). Like most constrained optimization problems, CLRA can be more computationally expensive than its unconstrained counterpart. A widely used CLRA is Non-negative Matrix Factorization (NMF), which enforces non-negativity constraints on each of the low rank factors W and H. In this thesis, I focus on scalable/distributed CLRA algorithms for constraints such as boundedness and non-negativity, on large real world matrices arising from text, High Definition (HD) video, social networks and recommender systems. First, I begin with the Bounded Matrix Low Rank Approximation (BMA), which imposes a lower and an upper bound on every element of the lower rank matrix. BMA is more challenging than NMF as it imposes bounds on the product WH rather than on each of the low rank factors W and H. For very large input matrices, we extend our BMA algorithm to Block BMA, which can scale to a large number of processors. In applications such as HD video, where the input matrix to be factored is extremely large, distributed computation is inevitable and network communication becomes a major performance bottleneck. Towards this end, we propose a novel distributed Communication Avoiding NMF (CANMF) algorithm that communicates only the right low rank factor to its neighboring machine. Finally, we present a general distributed HPC-NMF framework that uses HPC techniques in communication-intensive NMF operations and is suitable for a broader class of NMF algorithms.
    Ph.D. thesis
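    For context, a minimal single-node NMF baseline using the classic Lee-Seung multiplicative updates is sketched below. The thesis's Block BMA, CANMF and HPC-NMF algorithms are distributed extensions of this kind of iteration; the sketch is our own illustration, not the thesis code.

```python
import numpy as np

def nmf_multiplicative(A, k, iters=200, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates for NMF: find non-negative
    W (m x k) and H (k x n) such that A ≈ WH in Frobenius norm."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    W = rng.random((m, k))
    H = rng.random((k, n))
    for _ in range(iters):
        H *= (W.T @ A) / (W.T @ W @ H + eps)  # update H; entries stay non-negative
        W *= (A @ H.T) / (W @ H @ H.T + eps)  # update W; entries stay non-negative
    return W, H

A = np.abs(np.random.default_rng(3).normal(size=(40, 25)))  # non-negative input
W, H = nmf_multiplicative(A, k=4)
print(np.linalg.norm(A - W @ H) / np.linalg.norm(A))  # relative residual
```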