
    On the Power of Adaptivity in Matrix Completion and Approximation

    We consider the related tasks of matrix completion and matrix approximation from missing data and propose adaptive sampling procedures for both problems. We show that adaptive sampling allows one to eliminate standard incoherence assumptions on the matrix row space that are necessary for passive sampling procedures. For exact recovery of a low-rank matrix, our algorithm judiciously selects a few columns to observe in full and, with few additional measurements, projects the remaining columns onto their span. This algorithm exactly recovers an $n \times n$ rank-$r$ matrix using $O(nr\mu_0 \log^2(r))$ observations, where $\mu_0$ is a coherence parameter on the column space of the matrix. In addition to completely eliminating any row space assumptions that have pervaded the literature, this algorithm enjoys a better sample complexity than any existing matrix completion algorithm. To certify that this improvement is due to adaptive sampling, we establish that row space coherence is necessary for passive sampling algorithms to achieve non-trivial sample complexity bounds. For constructing a low-rank approximation to a high-rank input matrix, we propose a simple algorithm that thresholds the singular values of a zero-filled version of the input matrix. The algorithm computes an approximation that is nearly as good as the best rank-$r$ approximation using $O(nr\mu \log^2(n))$ samples, where $\mu$ is a slightly different coherence parameter on the matrix columns. Again we eliminate assumptions on the row space.
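
    The following is a minimal numpy sketch of the two ideas described in this abstract, not the authors' exact algorithms: an adaptive column-subspace completion loop and the zero-filled singular-value approximation. The oracle names observe_entry and observe_column, the per-column probe budget m, and the tolerance tol are assumptions made for illustration.

```python
import numpy as np

def adaptive_complete(observe_entry, observe_column, n1, n2, m, tol=1e-8, seed=0):
    # Sketch of adaptive completion: probe m entries of each column, test whether
    # the column lies in the span of the columns recovered so far, and observe a
    # column in full only when it does not. observe_entry(i, j) and
    # observe_column(j) are hypothetical sampling oracles.
    rng = np.random.default_rng(seed)
    U = np.zeros((n1, 0))              # orthonormal basis for the learned column space
    M_hat = np.zeros((n1, n2))
    for j in range(n2):
        rows = rng.choice(n1, size=m, replace=False)
        c_obs = np.array([observe_entry(i, j) for i in rows])
        if U.shape[1] == 0:
            coef, resid = np.zeros(0), c_obs
        else:
            coef, *_ = np.linalg.lstsq(U[rows, :], c_obs, rcond=None)
            resid = c_obs - U[rows, :] @ coef
        if np.linalg.norm(resid) > tol * max(1.0, np.linalg.norm(c_obs)):
            c = observe_column(j)       # new direction: pay for the full column
            M_hat[:, j] = c
            U, _ = np.linalg.qr(np.column_stack([U, c]))
        else:
            M_hat[:, j] = U @ coef      # reconstruct from the m probed entries
    return M_hat

def zero_filled_approx(M_obs, mask, r):
    # Sketch of the low-rank approximation step: rescale observed entries by the
    # inverse sampling rate, zero-fill the rest, and keep the top-r singular
    # directions (one way to realise a hard singular-value threshold).
    X = np.where(mask, M_obs, 0.0) / mask.mean()
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]
```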

    Convergence of Alternating Least Squares Optimisation for Rank-One Approximation to High Order Tensors

    The approximation of tensors has important applications in various disciplines, but it remains an extremely challenging task. It is well known that higher-order tensors can fail to have a best low-rank approximation, with the important exception that a best rank-one approximation always exists. The most popular approach to low-rank approximation is the alternating least squares (ALS) method. The convergence of the ALS algorithm for the rank-one approximation problem is analysed in this paper, focusing on global convergence and the rate of convergence. It is shown that the ALS method can converge sublinearly, Q-linearly, and even Q-superlinearly. Our theoretical results are illustrated on explicit examples.
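
    Below is a minimal numpy sketch of ALS for the rank-one case of a third-order tensor, for orientation only; the factor names u, v, w, the random initialisation, and the stopping rule are illustrative assumptions, not the scheme analysed in the paper.

```python
import numpy as np

def als_rank_one(T, iters=200, tol=1e-10, seed=0):
    # ALS for a rank-one approximation  lam * (u outer v outer w)  of a
    # third-order tensor T: each step solves the least-squares problem in one
    # factor exactly (a tensor contraction followed by normalisation).
    rng = np.random.default_rng(seed)
    n1, n2, n3 = T.shape
    u = rng.standard_normal(n1); u /= np.linalg.norm(u)
    v = rng.standard_normal(n2); v /= np.linalg.norm(v)
    w = rng.standard_normal(n3); w /= np.linalg.norm(w)
    lam = 0.0
    for _ in range(iters):
        u = np.einsum('ijk,j,k->i', T, v, w); u /= np.linalg.norm(u)
        v = np.einsum('ijk,i,k->j', T, u, w); v /= np.linalg.norm(v)
        w = np.einsum('ijk,i,j->k', T, u, v)
        lam_new = np.linalg.norm(w); w /= lam_new
        if abs(lam_new - lam) < tol * max(1.0, lam_new):   # simple stopping rule
            break
        lam = lam_new
    return lam, u, v, w

# Usage on a random tensor: lam * u (x) v (x) w is the rank-one approximation.
T = np.random.default_rng(1).standard_normal((4, 5, 6))
lam, u, v, w = als_rank_one(T)
approx = lam * np.einsum('i,j,k->ijk', u, v, w)
```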