
    Progress in the mathematical theory of quantum disordered systems

    We review recent progress in the mathematical theory of quantum disordered systems: the Anderson transition (joint work with Domingos Marchetti), the (quantum and classical) Edwards-Anderson (EA) spin glass model, and return to equilibrium for a class of spin glass models, which includes the EA model initially in a very large transverse magnetic field.
    Comment: 25 pages, 1 figure, based on lectures at Bressanone and Göttingen

    Dictionary Learning and Tensor Decomposition via the Sum-of-Squares Method

    We give a new approach to the dictionary learning (also known as "sparse coding") problem of recovering an unknown $n \times m$ matrix $A$ (for $m \geq n$) from examples of the form $y = Ax + e$, where $x$ is a random vector in $\mathbb{R}^m$ with at most $\tau m$ nonzero coordinates, and $e$ is a random noise vector in $\mathbb{R}^n$ with bounded magnitude. For the case $m = O(n)$, our algorithm recovers every column of $A$ within arbitrarily good constant accuracy in time $m^{O(\log m / \log(\tau^{-1}))}$, in particular achieving polynomial time if $\tau = m^{-\delta}$ for any $\delta > 0$, and time $m^{O(\log m)}$ if $\tau$ is (a sufficiently small) constant. Prior algorithms with comparable assumptions on the distribution required the vector $x$ to be much sparser (at most $\sqrt{n}$ nonzero coordinates), and there were intrinsic barriers preventing these algorithms from applying for denser $x$. We achieve this by designing an algorithm for noisy tensor decomposition that can recover, under quite general conditions, an approximate rank-one decomposition of a tensor $T$, given access to a tensor $T'$ that is $\tau$-close to $T$ in the spectral norm (when considered as a matrix). To our knowledge, this is the first algorithm for tensor decomposition that works in the constant spectral-norm noise regime, where there is no guarantee that the local optima of $T$ and $T'$ have similar structures. Our algorithm is based on a novel approach to using and analyzing the Sum of Squares semidefinite programming hierarchy (Parrilo 2000, Lasserre 2001), and it can be viewed as an indication of the utility of this very general and powerful tool for unsupervised learning problems.
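
    The central primitive here is recovering an approximate rank-one decomposition of $T \approx \sum_i a_i^{\otimes 3}$ from a spectrally close tensor $T'$. As a point of reference only, the sketch below sets up that task and runs plain tensor power iteration with random restarts; this is not the paper's sum-of-squares algorithm, and unlike it, power iteration is only expected to succeed at noise levels far below the constant spectral-norm regime. All sizes and parameters are illustrative.

```python
import numpy as np

# Minimal sketch of the noisy tensor decomposition task (NOT the paper's
# sum-of-squares algorithm): build T = sum_i a_i^{(x)3}, perturb it so the
# flattened noise has spectral norm tau * ||T||, and try to recover one
# component with plain tensor power iteration.

rng = np.random.default_rng(0)
n, m = 30, 30                       # illustrative; the paper handles m = O(n)

A = rng.normal(size=(n, m))
A /= np.linalg.norm(A, axis=0)      # unit-norm components a_1, ..., a_m
T = np.einsum('ia,ja,ka->ijk', A, A, A)

tau = 0.01                          # far below the constant-noise regime
E = rng.normal(size=T.shape)
scale = tau * np.linalg.norm(T.reshape(n, -1), 2) / np.linalg.norm(E.reshape(n, -1), 2)
T_noisy = T + scale * E             # T' is tau-close to T in spectral norm

def tensor_power(T, iters=100, restarts=20):
    """Plain tensor power iteration with random restarts; returns the
    candidate direction u with the largest value <T, u (x) u (x) u>."""
    best_u, best_val = None, -np.inf
    for _ in range(restarts):
        u = rng.normal(size=T.shape[0])
        u /= np.linalg.norm(u)
        for _ in range(iters):
            u = np.einsum('ijk,j,k->i', T, u, u)
            u /= np.linalg.norm(u)
        val = np.einsum('ijk,i,j,k->', T, u, u, u)
        if val > best_val:
            best_u, best_val = u, val
    return best_u

u = tensor_power(T_noisy)
print('best overlap with a true component:', np.abs(A.T @ u).max())
```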

    Efficient Algorithms for Sparse Moment Problems without Separation

    We consider the sparse moment problem of learning a $k$-spike mixture in high-dimensional space from its noisy moment information in any dimension. We measure the accuracy of the learned mixtures using transportation distance. Previous algorithms either require certain separation assumptions, use more recovery moments, or run in (super) exponential time. Our algorithm for the one-dimensional problem (also called the sparse Hausdorff moment problem) is a robust version of the classical Prony method, and our contribution mainly lies in the analysis. We adopt a global and much tighter analysis than previous work (which analyzes the perturbation of the intermediate results of Prony's method). A useful technical ingredient is a connection between the linear system defined by the Vandermonde matrix and the Schur polynomial, which allows us to provide a tight perturbation bound independent of the separation and may be useful in other contexts. To tackle the high-dimensional problem, we first solve the two-dimensional problem by extending the one-dimensional algorithm and analysis to complex numbers. Our algorithm for the high-dimensional case determines the coordinates of each spike by aligning a 1-d projection of the mixture to a random vector and a set of 2-d projections of the mixture. Our results have applications to learning topic models and Gaussian mixtures, implying improved sample complexity or running time over prior work.
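
    As background for the robust version described above, here is a minimal sketch of the classical (noiseless) Prony pipeline in one dimension: the moments of a $k$-spike mixture satisfy a linear recurrence, the recurrence coefficients define the Prony polynomial whose roots are the spike locations, and a Vandermonde solve then recovers the weights. The paper's actual contribution, the separation-independent perturbation analysis via Schur polynomials, is not reflected in this sketch; the spikes and weights in the usage example are made up.

```python
import numpy as np

def prony(moments, k):
    """Recover spike locations and weights of sum_i w_i * delta_{x_i}
    from its first 2k moments m_j = sum_i w_i * x_i**j, j = 0..2k-1."""
    m = np.asarray(moments, dtype=float)
    # Hankel system encoding the linear recurrence the moments satisfy:
    # m_{j+k} + c_{k-1} m_{j+k-1} + ... + c_0 m_j = 0 for j = 0..k-1.
    H = np.array([[m[j + l] for l in range(k)] for j in range(k)])
    c = np.linalg.solve(H, -m[k:2 * k])
    # Roots of the Prony polynomial z^k + c_{k-1} z^{k-1} + ... + c_0
    # are the spike locations.
    x = np.real(np.roots(np.concatenate(([1.0], c[::-1]))))
    # Vandermonde system V[j, i] = x_i**j recovers the weights from the
    # first k moments.
    V = np.vander(x, k, increasing=True).T
    w = np.linalg.solve(V, m[:k])
    return x, w

# Usage with a made-up 3-spike mixture on [0, 1].
x_true = np.array([0.15, 0.5, 0.8])
w_true = np.array([0.3, 0.45, 0.25])
moments = [np.sum(w_true * x_true**j) for j in range(6)]
x_hat, w_hat = prony(moments, k=3)
order = np.argsort(x_hat)
print('locations:', x_hat[order], 'weights:', w_hat[order])
```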

    Polynomial-time Tensor Decompositions with Sum-of-Squares

    We give new algorithms based on the sum-of-squares method for tensor decomposition. Our results improve the best known running times from quasi-polynomial to polynomial for several problems, including decomposing random overcomplete 3-tensors and learning overcomplete dictionaries with constant relative sparsity. We also give the first robust analysis for decomposing overcomplete 4-tensors in the smoothed analysis model. A key ingredient of our analysis is to establish small spectral gaps in moment matrices derived from solutions to sum-of-squares relaxations. To enable this analysis, we augment sum-of-squares relaxations with spectral analogs of maximum-entropy constraints.
    Comment: to appear in FOCS 2016
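
    As a toy illustration of why overcomplete decomposition calls for higher-degree moment matrices (this is motivation only, not the paper's algorithm): the naive $n \times n^2$ flattening of $T = \sum_i a_i^{\otimes 3}$ has rank at most $n$ and so cannot separate $m > n$ components, whereas the Gram matrix of the lifted vectors $a_i \otimes a_i$ generically has rank $m$ for $m$ up to about $n(n+1)/2$. The short check below verifies both ranks numerically on random components.

```python
import numpy as np

# Toy rank computation motivating higher-degree liftings for overcomplete
# tensors (illustrative only; the paper works with sum-of-squares
# relaxations and their moment matrices, not this bare construction).

rng = np.random.default_rng(1)
n, m = 8, 20                          # overcomplete: n < m <= n(n+1)/2

A = rng.normal(size=(n, m))
A /= np.linalg.norm(A, axis=0)
T = np.einsum('ia,ja,ka->ijk', A, A, A)

# Naive n x n^2 flattening: rank is capped at n, so the m > n rank-one
# components cannot be separated at this level.
print('rank of flattening:', np.linalg.matrix_rank(T.reshape(n, n * n)))

# Degree-4 object: Gram matrix of the lifted vectors a_i (x) a_i in
# R^{n^2}; generically these are linearly independent, so the rank is m.
B = np.einsum('ia,ja->ija', A, A).reshape(n * n, m)
print('rank of degree-4 lift:', np.linalg.matrix_rank(B @ B.T))
```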