
    A Singular Value Thresholding Algorithm for Matrix Completion

    This paper introduces a novel algorithm to approximate the matrix with minimum nuclear norm among all matrices obeying a set of convex constraints. This problem may be understood as the convex relaxation of a rank minimization problem and arises in many important applications, such as the task of recovering a large matrix from a small subset of its entries (the famous Netflix problem). Off-the-shelf algorithms such as interior point methods are not directly amenable to large problems of this kind with over a million unknown entries. This paper develops a simple, first-order, easy-to-implement algorithm that is extremely efficient at addressing problems in which the optimal solution has low rank. The algorithm is iterative, produces a sequence of matrices {X^k, Y^k}, and at each step mainly performs a soft-thresholding operation on the singular values of the matrix Y^k. Two remarkable features make this algorithm attractive for low-rank matrix completion problems. The first is that the soft-thresholding operation is applied to a sparse matrix; the second is that the rank of the iterates {X^k} is empirically nondecreasing. Both facts allow the algorithm to use very minimal storage and keep the computational cost of each iteration low. On the theoretical side, we provide a convergence analysis showing that the sequence of iterates converges. On the practical side, we provide numerical examples in which 1,000 × 1,000 matrices are recovered in less than a minute on a modest desktop computer. We also demonstrate that our approach is amenable to very large-scale problems by recovering matrices of rank about 10 with nearly a billion unknowns from just about 0.4% of their sampled entries. Our methods are connected with the recent literature on linearized Bregman iterations for ℓ_1 minimization, and we develop a framework in which one can understand these algorithms in terms of well-known Lagrange multiplier algorithms.
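
    As a concrete illustration of the {X^k, Y^k} recursion described above, the following NumPy sketch performs the two steps of each iteration: soft-thresholding the singular values of Y^k and updating Y^k on the observed entries. It uses a dense SVD for readability, whereas the efficiency claimed in the paper relies on partial SVDs of the sparse matrix Y^k; the defaults chosen here for tau and delta are common heuristics, not prescriptions from the paper.

        import numpy as np

        def svt_complete(M_obs, mask, tau=None, delta=None, max_iter=500, tol=1e-4):
            # M_obs: observed matrix with zeros at unobserved entries
            # mask:  boolean array, True where an entry is observed
            n1, n2 = M_obs.shape
            if tau is None:
                tau = 5 * np.sqrt(n1 * n2)            # threshold heuristic
            if delta is None:
                delta = 1.2 * mask.size / mask.sum()  # step size ~ 1.2 / sampling ratio
            Y = np.zeros((n1, n2))
            X = np.zeros((n1, n2))
            for _ in range(max_iter):
                # X^k: soft-threshold the singular values of Y^{k-1}
                U, s, Vt = np.linalg.svd(Y, full_matrices=False)
                X = (U * np.maximum(s - tau, 0.0)) @ Vt
                # Y^k: gradient-type update on the observed entries only
                residual = mask * (M_obs - X)
                Y = Y + delta * residual
                if np.linalg.norm(residual) <= tol * np.linalg.norm(mask * M_obs):
                    break
            return X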

    Calibrated Elastic Regularization in Matrix Completion

    This paper concerns the problem of matrix completion, which is to estimate a matrix from observations in a small subset of indices. We propose a calibrated spectrum elastic net method with a sum of the nuclear and Frobenius penalties and develop an iterative algorithm to solve the convex minimization problem. The iterative algorithm alternates between imputing the missing entries of the incomplete matrix with the current guess and estimating the matrix by a scaled soft-thresholding singular value decomposition of the imputed matrix, until the resulting matrix converges. A calibration step then corrects the bias caused by the Frobenius penalty. Under proper coherence conditions and for suitable penalty levels, we prove that the proposed estimator achieves an error bound of nearly optimal order, proportional to the noise level. This provides a unified analysis of the noisy and noiseless matrix completion problems. Simulation results are presented to compare our proposal with previous ones.
    Comment: 9 pages; Advances in Neural Information Processing Systems, NIPS 201
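
    The impute-then-shrink loop described above can be sketched as follows. This is a minimal reading of the procedure rather than the paper's exact estimator: the scaled soft-thresholding divides the thresholded singular values by (1 + lam_frob), and the final calibration step shown here, which simply undoes that uniform shrinkage, is an assumption about how the bias of the Frobenius penalty is corrected.

        import numpy as np

        def calibrated_elastic_complete(M_obs, mask, lam_nuc, lam_frob,
                                        max_iter=200, tol=1e-5):
            # M_obs: observed matrix (zeros elsewhere); mask: True at observed entries
            Theta = np.zeros_like(M_obs, dtype=float)
            for _ in range(max_iter):
                # impute: keep observed entries, fill missing ones with the current guess
                Z = np.where(mask, M_obs, Theta)
                # scaled soft-thresholding SVD of the imputed matrix
                U, s, Vt = np.linalg.svd(Z, full_matrices=False)
                s_new = np.maximum(s - lam_nuc, 0.0) / (1.0 + lam_frob)
                Theta_new = (U * s_new) @ Vt
                done = np.linalg.norm(Theta_new - Theta) <= tol * max(1.0, np.linalg.norm(Theta))
                Theta = Theta_new
                if done:
                    break
            # calibration (assumed form): rescale the fitted singular values to offset
            # the uniform shrinkage introduced by the Frobenius penalty
            U, s, Vt = np.linalg.svd(Theta, full_matrices=False)
            return (U * (s * (1.0 + lam_frob))) @ Vt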

    Low rank tensor recovery via iterative hard thresholding

    We study extensions of compressive sensing and low-rank matrix recovery (matrix completion) to the recovery of low-rank tensors of higher order from a small number of linear measurements. While the theoretical understanding of low-rank matrix recovery is already well developed, only a few contributions on the low-rank tensor recovery problem are available so far. In this paper, we introduce versions of the iterative hard thresholding algorithm for several tensor decompositions, namely the higher-order singular value decomposition (HOSVD), the tensor train format (TT), and the general hierarchical Tucker decomposition (HT). We provide a partial convergence result for these algorithms, based on a variant of the restricted isometry property of the measurement operator that is adapted to the tensor decomposition at hand and induces a corresponding notion of tensor rank. We show that subgaussian measurement ensembles satisfy this tensor restricted isometry property with high probability under an almost optimal bound on the number of measurements, which depends on the corresponding tensor format. These bounds extend to partial Fourier maps combined with random sign flips of the tensor entries. Finally, we illustrate the performance of iterative hard thresholding methods for tensor recovery via numerical experiments in which we consider recovery from Gaussian random measurements, tensor completion (recovery of missing entries), and Fourier measurements for third-order tensors.
    Comment: 34 pages
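
    To make the iteration concrete for one of the three formats, the sketch below implements tensor iterative hard thresholding with HOSVD truncation for a third-order tensor. The unit step size assumes a suitably normalized measurement operator, and the measurement map A and its adjoint A_adj are placeholders supplied by the caller; the TT and HT variants would replace hosvd_truncate with the corresponding low-rank truncation.

        import numpy as np

        def unfold(T, mode):
            # mode-k unfolding of a third-order tensor
            return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

        def hosvd_truncate(T, ranks):
            # project T onto multilinear rank `ranks` via truncated HOSVD
            U = []
            for mode, r in enumerate(ranks):
                Um, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
                U.append(Um[:, :r])
            # core G = T x_1 U1^T x_2 U2^T x_3 U3^T, then expand back
            G = np.einsum('ijk,ia,jb,kc->abc', T, U[0], U[1], U[2])
            return np.einsum('abc,ia,jb,kc->ijk', G, U[0], U[1], U[2])

        def tensor_iht(A, A_adj, b, shape, ranks, step=1.0, max_iter=200, tol=1e-6):
            # A: tensor -> measurement vector, A_adj: its adjoint, b: measurements
            X = np.zeros(shape)
            for _ in range(max_iter):
                residual = b - A(X)
                if np.linalg.norm(residual) <= tol * np.linalg.norm(b):
                    break
                # gradient step followed by hard thresholding to low multilinear rank
                X = hosvd_truncate(X + step * A_adj(residual), ranks)
            return X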

    Resolving single molecule structures with Nitrogen-vacancy centers in diamond

    We present theoretical proposals for two-dimensional nuclear magnetic resonance spectroscopy protocols based on nitrogen-vacancy (NV) centers in diamond that are strongly coupled to the target nuclei. Continuous microwave and radio-frequency driving fields, together with magnetic field gradients, achieve Hartmann-Hahn resonances between the NV spin sensor and selected nuclei, enabling control of the nuclear spins and subsequent measurement of their polarization dynamics. The strong coupling between the NV sensor and the nuclei facilitates coherent control of the nuclear spins and relaxes the requirement of nuclear spin polarization for achieving strong signals, thereby reducing measurement times. Additionally, we employ a singular value thresholding matrix completion algorithm that permits the identification of key spectral features from strongly sub-sampled data, further reducing the amount of data that must be acquired. We illustrate the potential of this combined approach by applying the protocol to a shallowly implanted NV center addressing a small amino acid, alanine, to target specific hydrogen nuclei and identify the corresponding peaks in their spectra.
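
    As a toy illustration of that last step, the snippet below sub-samples a synthetic low-rank 2D spectrum (a few separable Lorentzian-like peaks chosen arbitrarily here) and recovers it with the svt_complete sketch given under the first abstract above; the grid size, peak positions, and 25% sampling fraction are assumptions made purely for demonstration.

        import numpy as np

        # synthetic separable-peak spectrum (rank 3), a stand-in for a 2D NMR data matrix
        rng = np.random.default_rng(0)
        f = np.linspace(-1.0, 1.0, 128)
        peaks = [(-0.4, 0.3), (0.2, -0.5), (0.6, 0.6)]   # assumed peak positions
        lorentz = lambda x, c: 1.0 / (1.0 + ((x - c) / 0.05) ** 2)
        spectrum = sum(np.outer(lorentz(f, a), lorentz(f, b)) for a, b in peaks)

        mask = rng.random(spectrum.shape) < 0.25         # keep roughly 25% of the entries
        recovered = svt_complete(mask * spectrum, mask)  # sketch defined earlier
        rel_err = np.linalg.norm(recovered - spectrum) / np.linalg.norm(spectrum)
        print(f"relative recovery error: {rel_err:.3f}")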