
    CUR Algorithm with Incomplete Matrix Observation

    CUR matrix decomposition is a randomized algorithm that can efficiently compute a low-rank approximation of a given rectangular matrix. One limitation of existing CUR algorithms is that they require access to the full matrix A for computing U. In this work, we aim to alleviate this limitation. In particular, we assume that besides having access to d randomly sampled rows and d randomly sampled columns of A, we only observe a subset of randomly sampled entries of A. Our goal is to develop a low-rank approximation algorithm, similar to CUR, based on (i) randomly sampled rows and columns of A, and (ii) randomly sampled entries of A. The proposed algorithm is able to perfectly recover the target matrix A with only O(rn log n) observed entries. In addition, instead of having to solve an optimization problem involving trace norm regularization, the proposed algorithm only needs to solve a standard regression problem. Finally, unlike most matrix completion theories, which hold only when the target matrix is of low rank, we show a strong guarantee for the proposed algorithm even when the target matrix is not low rank.
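
    The abstract does not spell out how U is estimated from the observed entries, but its description (sampled rows and columns plus a standard regression problem) suggests a sketch along the following lines; the sizes, the Bernoulli observation model, and the plain least-squares estimator below are illustrative assumptions, not the paper's exact procedure.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic rank-r target (illustrative sizes).
    m, n, r, d = 200, 150, 5, 20
    A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))

    rows = rng.choice(m, d, replace=False)      # d sampled row indices
    cols = rng.choice(n, d, replace=False)      # d sampled column indices
    C, R = A[:, cols], A[rows, :]               # observed columns and rows

    # Additionally observe a sparse random subset of entries (Bernoulli model).
    mask = rng.random((m, n)) < 0.1
    obs_i, obs_j = np.nonzero(mask)

    # Estimate U by ordinary least squares: choose U to minimize
    # sum over observed (i, j) of (C[i, :] @ U @ R[:, j] - A[i, j])**2.
    # Each observed entry contributes one linear equation in the d*d entries of U.
    design = np.einsum('ka,kb->kab', C[obs_i, :], R[:, obs_j].T).reshape(len(obs_i), -1)
    u_vec, *_ = np.linalg.lstsq(design, A[obs_i, obs_j], rcond=None)
    U = u_vec.reshape(d, d)

    A_hat = C @ U @ R
    print("relative error:", np.linalg.norm(A_hat - A) / np.linalg.norm(A))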

    Scaled Nuclear Norm Minimization for Low-Rank Tensor Completion

    Minimizing the nuclear norm of a matrix has been shown to be very efficient in reconstructing a low-rank sampled matrix. Furthermore, minimizing the sum of nuclear norms of the matricizations of a tensor has been shown to be very efficient in recovering a low-Tucker-rank sampled tensor. In this paper, we propose to recover a low-TT-rank sampled tensor by minimizing a weighted sum of nuclear norms of the unfoldings of the tensor. We provide numerical results showing that our proposed method requires significantly fewer samples to recover the original tensor than simply minimizing the sum of nuclear norms, since the structure of the unfoldings in the TT tensor model is fundamentally different from that of the matricizations in the Tucker tensor model.
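
    A minimal sketch of the objective being minimized, assuming the sequential (TT) unfoldings that group the first k modes against the rest; the uniform weights are placeholders, since the paper's specific scaling is not given in the abstract.

    import numpy as np

    def tt_unfolding_nuclear_norms(T, weights=None):
        # Weighted sum of nuclear norms of the sequential (TT) unfoldings:
        # the k-th unfolding groups the first k modes as rows, the rest as columns.
        dims = T.shape
        if weights is None:
            weights = np.ones(len(dims) - 1)
        total = 0.0
        for k in range(1, len(dims)):
            unfold = T.reshape(int(np.prod(dims[:k])), int(np.prod(dims[k:])))
            total += weights[k - 1] * np.linalg.norm(unfold, ord='nuc')
        return total

    # Example on a random 4-way tensor.
    rng = np.random.default_rng(0)
    T = rng.standard_normal((5, 6, 7, 8))
    print(tt_unfolding_nuclear_norms(T))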

    Subspace Learning from Extremely Compressed Measurements

    We consider learning the principal subspace of a large set of vectors from an extremely small number of compressive measurements of each vector. Our theoretical results show that even a constant number of measurements per column suffices to approximate the principal subspace to arbitrary precision, provided that the number of vectors is large. This result is achieved by a simple algorithm that computes the eigenvectors of an estimate of the covariance matrix. The main insight is to exploit an averaging effect that arises from applying a different random projection to each vector. We provide a number of simulations confirming our theoretical results.
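
    A rough numpy illustration of the averaging idea described above, assuming Gaussian projections and omitting any debiasing constants the paper may use; the dimensions below are arbitrary.

    import numpy as np

    rng = np.random.default_rng(0)

    d, N, m, k = 100, 5000, 5, 3                        # ambient dim, #vectors, measurements/vector, subspace dim
    U = np.linalg.qr(rng.standard_normal((d, k)))[0]    # ground-truth principal subspace
    X = U @ rng.standard_normal((k, N))                 # data lying in that subspace

    # Average the back-projected outer products: a different random projection is
    # applied to each column, and the projection noise averages out across columns.
    C_hat = np.zeros((d, d))
    for i in range(N):
        P = rng.standard_normal((m, d)) / np.sqrt(m)    # fresh projection per vector
        y = P @ X[:, i]                                 # m compressive measurements
        C_hat += P.T @ np.outer(y, y) @ P
    C_hat /= N

    # Principal subspace estimate = top-k eigenvectors of the covariance estimate.
    eigvecs = np.linalg.eigh(C_hat)[1]
    U_hat = eigvecs[:, -k:]

    # Cosines of the principal angles between the true and estimated subspaces.
    print(np.linalg.svd(U.T @ U_hat, compute_uv=False))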

    Matrix Completion with Sparse Noisy Rows

    Exact matrix completion and low-rank matrix estimation problems have been studied under a variety of underlying conditions. In this work we study exact low-rank completion under a non-degenerate noise model. The non-degenerate random noise model has previously been studied by many researchers under the condition that the noise is sparse and confined to a subset of the columns. In this paper, we instead assume that each row, rather than each column, can receive random noise, and we propose an interactive algorithm that is robust to this noise. We use a parametrization technique to give a condition under which the underlying matrix is recoverable and suggest an algorithm that recovers the underlying matrix.

    Tensor Matched Kronecker-Structured Subspace Detection for Missing Information

    We consider the problem of detecting whether a tensor signal with many missing entries lies within a given low-dimensional Kronecker-structured (KS) subspace. This is a matched subspace detection problem. The tensor matched subspace detection problem is more challenging because of the intertwined signal dimensions. We solve this problem by projecting the signal onto the Kronecker-structured subspace, which is a Kronecker product of subspaces corresponding to each signal dimension. Under this framework, we define the KS subspaces and the orthogonal projection of the signal onto the KS subspace. By bounding the residual energy of the sampled signal with high probability, we prove that reliable detection is possible as long as the cardinality of the missing signal is greater than the dimension of the KS subspace.
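
    A small sketch of the projection-and-residual computation on which such a detector would be built, assuming three factor subspaces, a vectorized signal, and uniformly random observed entries; the detection threshold and the probability bounds from the paper are not reproduced here.

    import numpy as np

    rng = np.random.default_rng(0)

    # One factor subspace per tensor mode (illustrative sizes and ranks).
    dims, ranks = (10, 12, 8), (2, 3, 2)
    U = [np.linalg.qr(rng.standard_normal((d, r)))[0] for d, r in zip(dims, ranks)]

    # Kronecker-structured basis for the vectorized tensor.
    B = np.kron(np.kron(U[0], U[1]), U[2])              # shape (prod(dims), prod(ranks))

    # One signal inside the KS subspace and one generic signal.
    x_in = B @ rng.standard_normal(B.shape[1])
    x_out = rng.standard_normal(B.shape[0])

    # Observe a random subset of entries; the residual energy of the least-squares
    # projection onto the restricted basis is what a detector would threshold.
    omega = rng.choice(B.shape[0], size=200, replace=False)

    def residual_energy(x, omega, B):
        B_o, x_o = B[omega, :], x[omega]
        coeff, *_ = np.linalg.lstsq(B_o, x_o, rcond=None)
        return np.linalg.norm(x_o - B_o @ coeff) ** 2

    print("in-subspace residual :", residual_energy(x_in, omega, B))
    print("off-subspace residual:", residual_energy(x_out, omega, B))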

    Matrix Completion from Non-Uniformly Sampled Entries

    In this paper, we consider matrix completion from non-uniformly sampled entries, including fully observed and partially observed columns. Specifically, we assume that a small number of columns are randomly selected and fully observed, and each remaining column is partially observed with uniform sampling. To recover the unknown matrix, we first recover its column space from the fully observed columns. Then, for each partially observed column, we recover it by finding a vector that lies in the recovered column space and is consistent with the observed entries. When the unknown $m\times n$ matrix is low-rank, we show that our algorithm can exactly recover it from merely $\Omega(rn\ln n)$ entries, where $r$ is the rank of the matrix. Furthermore, for a noisy low-rank matrix, our algorithm computes a low-rank approximation of the unknown matrix and enjoys an additive error bound measured in the Frobenius norm. Experimental results on synthetic datasets verify our theoretical claims and demonstrate the effectiveness of our proposed algorithm.
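
    The two-step procedure described above translates almost directly into code; the following numpy sketch assumes the rank r is known and uses illustrative matrix sizes and sampling rates.

    import numpy as np

    rng = np.random.default_rng(0)

    m, n, r = 100, 80, 4
    M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))   # low-rank target

    full_cols = rng.choice(n, size=10, replace=False)   # fully observed columns
    p = 0.3                                             # per-entry sampling rate elsewhere

    # Step 1: recover the column space from the fully observed columns.
    U_r = np.linalg.svd(M[:, full_cols], full_matrices=False)[0][:, :r]

    # Step 2: for each partially observed column, find the vector in span(U_r)
    # that agrees with its observed entries (a small least-squares problem).
    M_hat = np.zeros_like(M)
    M_hat[:, full_cols] = M[:, full_cols]
    for j in range(n):
        if j in full_cols:
            continue
        obs = np.nonzero(rng.random(m) < p)[0]
        coeff, *_ = np.linalg.lstsq(U_r[obs, :], M[obs, j], rcond=None)
        M_hat[:, j] = U_r @ coeff

    print("relative recovery error:", np.linalg.norm(M_hat - M) / np.linalg.norm(M))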

    An algorithm for online tensor prediction

    We present a new method for online prediction and learning of tensors ($N$-way arrays, $N>2$) from sequential measurements. We focus on the specific case of 3-D tensors and exploit a recently developed framework of structured tensor decompositions proposed in [1]. In this framework it is possible to treat 3-D tensors as linear operators and to generalize notions of rank and positive definiteness to tensors in a natural way. Using these notions we propose a generalization of the matrix exponentiated gradient descent algorithm [2] to a tensor exponentiated gradient descent algorithm, using an extension of the notion of von Neumann divergence to tensors. Then, following a construction similar to that in [3], we exploit this algorithm to propose an online algorithm for learning and prediction of tensors with provable regret guarantees. Simulation results are presented on semi-synthetic data sets of ratings evolving in time under local influence over a social network. The results indicate superior performance compared to other (online) convex tensor completion methods.
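
    The tensor algorithm itself relies on the t-product machinery of [1], but the matrix exponentiated gradient update that it generalizes can be sketched compactly; the loss, step size, and toy target below are illustrative, and this is not the paper's tensor version.

    import numpy as np

    def sym_fun(A, fn):
        # Apply a scalar function to a symmetric matrix via its eigendecomposition.
        w, V = np.linalg.eigh(A)
        return (V * fn(w)) @ V.T

    def matrix_eg_step(W, grad, eta):
        # One matrix exponentiated gradient update on a unit-trace SPD matrix W.
        M = sym_fun(sym_fun(W, np.log) - eta * grad, np.exp)
        return M / np.trace(M)

    # Toy usage: track a rank-1 unit-trace target under squared Frobenius loss.
    rng = np.random.default_rng(0)
    d = 5
    v = rng.standard_normal(d); v /= np.linalg.norm(v)
    target = np.outer(v, v)
    W = np.eye(d) / d
    for t in range(300):
        W = matrix_eg_step(W, 2 * (W - target), eta=0.2)
    print("distance to target:", np.linalg.norm(W - target))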

    Tensor Matched Subspace Detection

    The problem of testing whether a signal lies within a given subspace, also known as matched subspace detection, has been well studied when the signal is represented as a vector. However, matched subspace detection methods based on vectors cannot be applied when signals are naturally represented as multi-dimensional data arrays or tensors. Tensor subspaces and orthogonal projections onto these subspaces are well defined in the recently proposed transform-based tensor model, which motivates us to investigate the problem of matched subspace detection in the high-dimensional case. In this paper, we propose an approach for tensor matched subspace detection based on the transform-based tensor model with tubal-sampling and elementwise-sampling, respectively. First, we construct estimators based on tubal-sampling and elementwise-sampling to estimate the energy of a signal outside a given subspace of a third-order tensor, and we give probability bounds for our estimators, which show that they work effectively when the sample size is greater than a constant. Secondly, detectors for both noiseless and noisy data are given, and the corresponding detection performance analyses are provided. Finally, based on the discrete Fourier transform (DFT) and the discrete cosine transform (DCT), the performance of our estimators and detectors is evaluated in several simulations, and the simulation results verify the effectiveness of our approach.
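
    A minimal sketch of the transform-based projection underlying such estimators, using the DFT along the third mode and a fully observed signal; the tubal- and elementwise-sampling estimators and their probability bounds are not reproduced here.

    import numpy as np

    rng = np.random.default_rng(0)

    n1, n3, r = 20, 8, 3            # third-order signal of size n1 x 1 x n3

    # Real tensor basis U (n1 x r x n3) spanning a tensor subspace under the
    # transform-based model, with the DFT taken along the third mode.
    U = rng.standard_normal((n1, r, n3))
    U_hat = np.fft.fft(U, axis=2)

    def t_product(A, B):
        # t-product of third-order tensors, computed slice-wise in the DFT domain.
        A_hat, B_hat = np.fft.fft(A, axis=2), np.fft.fft(B, axis=2)
        return np.fft.ifft(np.einsum('irk,rjk->ijk', A_hat, B_hat), axis=2).real

    def residual_energy(x, U_hat):
        # Energy of x outside the tensor subspace: project each DFT slice of x
        # onto the column space of the corresponding slice of U_hat.
        x_hat = np.fft.fft(x, axis=2)
        res_hat = np.empty_like(x_hat)
        for k in range(x.shape[2]):
            coeff, *_ = np.linalg.lstsq(U_hat[:, :, k], x_hat[:, :, k], rcond=None)
            res_hat[:, :, k] = x_hat[:, :, k] - U_hat[:, :, k] @ coeff
        return np.sum(np.abs(res_hat) ** 2) / x.shape[2]        # Parseval scaling

    x_in = t_product(U, rng.standard_normal((r, 1, n3)))        # lies in the subspace
    x_out = rng.standard_normal((n1, 1, n3))                    # generic signal
    print("in-subspace residual:", residual_energy(x_in, U_hat))
    print("generic residual    :", residual_energy(x_out, U_hat))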

    Compact Factorization of Matrices Using Generalized Round-Rank

    Matrix factorization is a well-studied task in machine learning for compactly representing large, noisy data. In our approach, instead of using the traditional concept of matrix rank, we define a new notion of link-rank based on a non-linear link function used within the factorization. In particular, by applying the round function to a factorization to obtain ordinal-valued matrices, we introduce the generalized round-rank (GRR). We show not only that there are many full-rank matrices with low GRR, but further that these matrices cannot be approximated well by low-rank linear factorizations. We provide uniqueness conditions for this formulation and gradient descent-based algorithms. Finally, we present experiments on real-world datasets to demonstrate that GRR-based factorization is significantly more accurate than linear factorization, while converging faster and using lower-rank representations.
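
    A small example of the phenomenon claimed above: a full-rank ordinal matrix that is reproduced exactly by rounding a rank-2 factorization, so its GRR is at most 2. The particular construction is ours, for illustration only.

    import numpy as np

    n = 8
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')

    # An upper-triangular matrix of ones: ordinal-valued and full rank.
    M = (j >= i).astype(float)
    print("usual rank:", np.linalg.matrix_rank(M))               # n

    # A rank-2 matrix whose entrywise rounding reproduces M exactly,
    # so the generalized round-rank of M is at most 2.
    a = 0.5 - (np.arange(n) - 0.25) / (2 * n)
    b = np.arange(n) / (2 * n)
    X = a[:, None] + b[None, :]                                  # rank <= 2
    print("rank of X:", np.linalg.matrix_rank(X))                # 2
    print("round(X) == M:", np.array_equal(np.round(X), M))      # True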

    Relaxed Leverage Sampling for Low-rank Matrix Completion

    We consider the problem of exact recovery of any $m\times n$ matrix of rank $\varrho$ from a small number of observed entries via the standard nuclear norm minimization framework. Such low-rank matrices have $(m+n)\varrho - \varrho^2$ degrees of freedom. We show that any arbitrary low-rank matrix can be recovered exactly from $\Theta\left(((m+n)\varrho - \varrho^2)\log^2(m+n)\right)$ randomly sampled entries, thus matching the lower bound on the required number of entries (in terms of degrees of freedom) up to an additional factor of $O(\log^2(m+n))$. To achieve this bound on the sample size, we observe each entry with probability proportional to the sum of the corresponding row and column leverage scores, minus their product. We show that this relaxation in the sampling probabilities (as opposed to the sum of leverage scores in Chen et al., 2014) gives an $O(\varrho^2\log^2(m+n))$ additive improvement over the best known sample size obtained by Chen et al., 2014, for nuclear norm minimization. Experiments on real data corroborate the theoretical improvement in sample size. Further, exact recovery of (a) incoherent matrices (with restricted leverage scores), and (b) matrices with only one of the row or column spaces incoherent, can be performed using our relaxed leverage score sampling, via nuclear norm minimization, without knowing the leverage scores a priori. In these settings we also achieve an improvement in sample size.
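
    A short numpy sketch of the sampling scheme described above: entry (i, j) is kept with probability driven by the sum of its row and column leverage scores minus their product, times a logarithmic oversampling factor. The constant c0 and the matrix sizes are illustrative, and the downstream nuclear norm solver is omitted.

    import numpy as np

    rng = np.random.default_rng(0)

    m, n, r = 500, 400, 5
    M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))    # rank-r target

    # Row and column leverage scores (each score lies in [0, 1]).
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    mu = np.sum(U[:, :r] ** 2, axis=1)          # row leverage scores, sum to r
    nu = np.sum(Vt[:r, :] ** 2, axis=0)         # column leverage scores, sum to r

    # Relaxed leverage sampling: keep entry (i, j) with probability driven by
    # mu_i + nu_j - mu_i * nu_j, times a log^2(m + n) oversampling factor.
    c0 = 0.25                                   # illustrative constant
    prob = np.clip(c0 * np.log(m + n) ** 2 *
                   (mu[:, None] + nu[None, :] - mu[:, None] * nu[None, :]), 0.0, 1.0)
    mask = rng.random((m, n)) < prob

    print("expected samples:", int(prob.sum()), "observed:", int(mask.sum()), "of", m * n)
    # The observed entries M[mask] would then be handed to a nuclear norm
    # minimization solver, which is not part of this sketch.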