
    Matrix Completion via Max-Norm Constrained Optimization

    Matrix completion has been well studied under the uniform sampling model, and trace-norm regularized methods perform well both theoretically and numerically in that setting. However, the uniform sampling model is unrealistic for a range of applications, and the standard trace-norm relaxation can behave very poorly when the underlying sampling scheme is non-uniform. In this paper we propose and analyze a max-norm constrained empirical risk minimization method for noisy matrix completion under a general sampling model. The optimal rate of convergence is established under the Frobenius norm loss in the context of approximately low-rank matrix reconstruction. It is shown that the max-norm constrained method is minimax rate-optimal and yields a unified and robust approximate recovery guarantee across sampling distributions. The computational effectiveness of this method is also discussed, based on first-order algorithms for solving convex optimization problems involving max-norm regularization.
    Comment: 33 pages
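
As a minimal illustrative sketch (not the authors' algorithm), such a max-norm constrained problem can be attacked with a first-order method via the standard factorization characterization: $\|M\|_{\max} \le R$ whenever $M = UV^\top$ and every row of $U$ and $V$ has squared Euclidean norm at most $R$. The rank cap `k`, step size `lr`, and bound `R` below are hypothetical choices.

```python
import numpy as np

def max_norm_completion(M_obs, mask, R=1.0, k=20, lr=0.1, n_iter=500, seed=0):
    """Factorized projected-gradient sketch for max-norm constrained completion.

    Uses ||M||_max <= R whenever M = U V^T and every row of U and V has
    squared norm at most R; the constraint is enforced by rescaling
    over-long rows after each gradient step (a nonconvex surrogate of
    the convex program).
    """
    rng = np.random.default_rng(seed)
    n1, n2 = M_obs.shape
    U = rng.normal(scale=0.1, size=(n1, k))
    V = rng.normal(scale=0.1, size=(n2, k))
    bound = np.sqrt(R)
    for _ in range(n_iter):
        resid = mask * (U @ V.T - M_obs)        # loss only on observed entries
        gU, gV = resid @ V, resid.T @ U         # gradients of 0.5*||resid||_F^2
        U -= lr * gU
        V -= lr * gV
        for W in (U, V):                        # rescale rows back into l2 ball
            norms = np.linalg.norm(W, axis=1, keepdims=True)
            W /= np.clip(norms / bound, 1.0, None)
    return U @ V.T
```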

    Area law for the maximally mixed ground state in degenerate 1D gapped systems

    We show an area law with a logarithmic correction for the maximally mixed state $\Omega$ in the (degenerate) ground space of a 1D gapped local Hamiltonian $H$, independent of the underlying ground-space degeneracy. Formally, for $\varepsilon > 0$ and a bipartition $L \cup L^c$ of the 1D lattice, we show that
    $$\mathrm{I}^{\varepsilon}_{\max}(L : L^c)_{\Omega} \leq O\bigl(\log(|L|) + \log(1/\varepsilon)\bigr),$$
    where $|L|$ is the number of qudits in $L$ and $\mathrm{I}^{\varepsilon}_{\max}(L : L^c)_{\Omega}$ is the $\varepsilon$-smoothed maximum mutual information with respect to the $L : L^c$ partition in $\Omega$. As a corollary, we get an area law for the mutual information of the form $\mathrm{I}(L : L^c)_{\Omega} \leq O(\log |L|)$. In addition, we show that $\Omega$ can be approximated up to $\varepsilon$ in trace norm by a state of Schmidt rank at most $\mathrm{poly}(|L|/\varepsilon)$.
    Comment: 23 pages
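
As a purely illustrative aside (not part of the paper's argument), the corollary's quantity $\mathrm{I}(L : L^c)_{\Omega} = S(\Omega_L) + S(\Omega_{L^c}) - S(\Omega)$ can be evaluated numerically for small chains. The following numpy sketch does so; `partial_trace`, `mutual_information`, and the region indexing are our own hypothetical helpers.

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr[rho log rho], computed from the eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]               # drop numerical zeros
    return float(-(evals * np.log(evals)).sum())

def partial_trace(rho, dims, keep):
    """Trace out every subsystem not listed in `keep`."""
    n = len(dims)
    rho = rho.reshape(*dims, *dims)            # one row and one column axis per site
    traced = [i for i in range(n) if i not in keep]
    for count, i in enumerate(sorted(traced)):
        axis = i - count                       # axes shift as earlier ones are traced
        rho = np.trace(rho, axis1=axis, axis2=axis + (n - count))
    d = int(np.prod([dims[i] for i in keep]))
    return rho.reshape(d, d)

def mutual_information(rho, dims, region_L):
    """I(L : L^c)_rho = S(rho_L) + S(rho_{L^c}) - S(rho)."""
    Lc = [i for i in range(len(dims)) if i not in region_L]
    return (von_neumann_entropy(partial_trace(rho, dims, region_L))
            + von_neumann_entropy(partial_trace(rho, dims, Lc))
            - von_neumann_entropy(rho))

# Example: the maximally mixed two-qubit state is a product state, so I = 0.
rho = np.eye(4) / 4.0
print(mutual_information(rho, [2, 2], [0]))    # ~0.0
```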

    A Max-Norm Constrained Minimization Approach to 1-Bit Matrix Completion

    In this paper we consider the problem of noisy 1-bit matrix completion under a general non-uniform sampling distribution, using the max-norm as a convex relaxation for the rank. A max-norm constrained maximum likelihood estimate is introduced and studied, and its rate of convergence is obtained. Information-theoretic methods are used to establish a minimax lower bound under the general sampling model. The minimax upper and lower bounds together yield the optimal rate of convergence for the Frobenius norm loss. Computational algorithms and numerical performance are also discussed.
    Comment: 33 pages, 3 figures
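
A hedged sketch of what a max-norm constrained maximum likelihood estimator can look like under a logistic link, reusing the factorized row-rescaling projection from the sketch above. The link function and all parameters are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def one_bit_completion(Y, mask, R=1.0, k=20, lr=0.1, n_iter=500, seed=0):
    """Max-norm constrained maximum likelihood sketch for 1-bit completion.

    Y holds +/-1 observations on entries where mask == 1, assumed to come
    from a logistic link P(Y_ij = +1) = sigmoid(M_ij). We minimize the
    negative log-likelihood over a factorized max-norm ball.
    """
    rng = np.random.default_rng(seed)
    n1, n2 = Y.shape
    U = rng.normal(scale=0.1, size=(n1, k))
    V = rng.normal(scale=0.1, size=(n2, k))
    bound = np.sqrt(R)
    for _ in range(n_iter):
        M = U @ V.T
        # d/dM_ij of -log sigmoid(Y_ij * M_ij) = -Y_ij * sigmoid(-Y_ij * M_ij)
        G = mask * (-Y * sigmoid(-Y * M))
        gU, gV = G @ V, G.T @ U
        U -= lr * gU
        V -= lr * gV
        for W in (U, V):
            norms = np.linalg.norm(W, axis=1, keepdims=True)
            W /= np.clip(norms / bound, 1.0, None)
    return sigmoid(U @ V.T)                    # estimated P(Y_ij = +1)
```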

    Low-Rank Inducing Norms with Optimality Interpretations

    Optimization problems with rank constraints appear in many diverse fields such as control, machine learning, and image analysis. Since the rank constraint is non-convex, these problems are often approximately solved via convex relaxations, and nuclear norm regularization is the prevailing convexifying technique. This paper introduces a family of low-rank inducing norms and regularizers which includes the nuclear norm as a special case. A posteriori guarantees on solving an underlying rank-constrained optimization problem with these convex relaxations are provided. We evaluate the performance of the low-rank inducing norms on three matrix completion problems. In all examples, the nuclear norm heuristic is outperformed by convex relaxations based on other low-rank inducing norms; for two of the problems there exist low-rank inducing norms that succeed in recovering the partially unknown matrix while the nuclear norm fails. These low-rank inducing norms are shown to be representable as semi-definite programs. Moreover, they have cheaply computable proximal mappings, which makes it possible to solve problems of large size with first-order methods.
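
The family of norms and their proximal mappings are developed in the paper itself; as a minimal sketch of why cheap proximal mappings matter, here is the proximal map of the nuclear norm (the special case named above), namely singular value soft-thresholding, together with a proximal gradient loop for completion. The function names and parameters are our own.

```python
import numpy as np

def prox_nuclear_norm(X, lam):
    """Proximal map of lam * ||.||_* : singular value soft-thresholding.

    prox(X) = U diag(max(sigma - lam, 0)) V^T, where X = U diag(sigma) V^T.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - lam, 0.0)) @ Vt

def pg_completion(M_obs, mask, lam=0.5, lr=1.0, n_iter=200):
    """Proximal gradient for min 0.5*||mask*(X - M_obs)||_F^2 + lam*||X||_*.

    The smooth part has gradient mask*(X - M_obs), with Lipschitz constant 1,
    so a unit step size is admissible.
    """
    X = np.zeros_like(M_obs)
    for _ in range(n_iter):
        X = prox_nuclear_norm(X - lr * mask * (X - M_obs), lr * lam)
    return X
```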

    Learning with the Weighted Trace-norm under Arbitrary Sampling Distributions

    We provide rigorous guarantees on learning with the weighted trace-norm under arbitrary sampling distributions. We show that the standard weighted trace-norm might fail when the sampling distribution is not a product distribution (i.e., when row and column indices are not selected independently), present a corrected variant for which we establish strong learning guarantees, and demonstrate that it works better in practice. We provide guarantees when weighting by either the true or the empirical sampling distribution, and suggest that even if the true distribution is known (or is uniform), weighting by the empirical distribution may be beneficial.
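
A small sketch of the objects involved, under the standard definition of the weighted trace-norm as the nuclear norm of the row/column-reweighted matrix. The smoothing step, mixing the empirical marginals with the uniform distribution, is one known way to guard against non-product sampling; `alpha` is an illustrative parameter, not the paper's prescription.

```python
import numpy as np

def weighted_trace_norm(M, p_row, p_col):
    """||diag(sqrt(p_row)) M diag(sqrt(p_col))||_* (weighted trace-norm)."""
    W = np.sqrt(p_row)[:, None] * M * np.sqrt(p_col)[None, :]
    return np.linalg.svd(W, compute_uv=False).sum()

def smoothed_empirical_marginals(mask, alpha=0.5):
    """Empirical row/column sampling marginals, smoothed toward uniform."""
    n1, n2 = mask.shape
    m = mask.sum()
    p_row = (1 - alpha) * mask.sum(axis=1) / m + alpha / n1
    p_col = (1 - alpha) * mask.sum(axis=0) / m + alpha / n2
    return p_row, p_col
```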