
    Smoothed Analysis of the Condition Number Under Low-Rank Perturbations

    Let $M$ be an arbitrary $n \times n$ matrix of rank $n-k$. We study the condition number of $M$ plus a \emph{low-rank} perturbation $UV^T$, where $U, V$ are $n \times k$ random Gaussian matrices. Under some necessary assumptions, it is shown that $M+UV^T$ is unlikely to have a large condition number. The main advantages of this kind of perturbation over the well-studied dense Gaussian perturbation, where every entry is independently perturbed, are the $O(nk)$ cost to store $U, V$ and the $O(nk)$ increase in time complexity for performing the matrix-vector multiplication $(M+UV^T)x$. This improves on the $\Omega(n^2)$ space and time complexity increase required by a dense perturbation, which is especially burdensome if $M$ is originally sparse. Our results also extend to the case where $U$ and $V$ have rank larger than $k$, and to symmetric and complex settings. We also give an application to linear systems solving and perform some numerical experiments. Lastly, barriers in applying low-rank noise to other problems studied in the smoothed analysis framework are discussed.
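
    As a quick illustration of the storage and matvec savings described above, here is a minimal numerical sketch in Python/NumPy. The dimensions, rank, and noise scale sigma are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 500, 5

# Rank-deficient base matrix M of rank n - k (a worst case for conditioning).
A = rng.standard_normal((n, n - k))
B = rng.standard_normal((n - k, n))
M = A @ B

# Low-rank Gaussian perturbation: only O(nk) extra storage for U and V.
sigma = 1e-3  # illustrative noise scale
U = sigma * rng.standard_normal((n, k))
V = rng.standard_normal((n, k))

def matvec(x):
    # (M + U V^T) x via one matvec with M plus O(nk) extra work;
    # the perturbed matrix is never formed explicitly.
    return M @ x + U @ (V.T @ x)

# M is singular, while M + U V^T is well conditioned with high probability.
print("cond(M)        =", np.linalg.cond(M))
print("cond(M + UV^T) =", np.linalg.cond(M + U @ V.T))
```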

    Smoothed Analysis in Unsupervised Learning via Decoupling

    Smoothed analysis is a powerful paradigm for overcoming worst-case intractability in unsupervised learning and high-dimensional data analysis. While polynomial-time smoothed analysis guarantees have been obtained for worst-case intractable problems like tensor decompositions and learning mixtures of Gaussians, such guarantees have been hard to obtain for several other important problems in unsupervised learning. A core technical challenge in analyzing algorithms is obtaining lower bounds on the least singular value of random matrix ensembles with dependent entries that are given by low-degree polynomials of a few underlying base random variables. In this work, we address this challenge by obtaining high-confidence lower bounds on the least singular value of new classes of structured random matrix ensembles of the above kind. We then use these bounds to design algorithms with polynomial-time smoothed analysis guarantees for the following three important problems in unsupervised learning:
    1. Robust subspace recovery, when the fraction $\alpha$ of inliers in the $d$-dimensional subspace $T \subset \mathbb{R}^n$ is at least $\alpha > (d/n)^\ell$ for any constant integer $\ell > 0$. This contrasts with the known worst-case intractability when $\alpha < d/n$, and with the previous smoothed analysis result, which needed $\alpha > d/n$ (Hardt and Moitra, 2013).
    2. Learning overcomplete hidden Markov models, where the size of the state space is any polynomial in the dimension of the observations. This gives the first polynomial-time guarantees for learning overcomplete HMMs in a smoothed analysis model.
    3. Higher-order tensor decompositions, where we generalize the so-called FOOBI algorithm of Cardoso to find order-$\ell$ rank-one tensors in a subspace. This allows us to obtain polynomially robust decomposition algorithms for $2\ell$-th order tensors with rank $O(n^\ell)$.
    Comment: 44 pages.
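
    The phenomenon underlying such least-singular-value bounds can be checked empirically. The sketch below uses the simplest smoothed model (a dense Gaussian perturbation of a worst-case matrix) rather than the paper's structured, dependent-entry ensembles; the dimensions and noise scale rho are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials, rho = 100, 200, 1e-2

# Adversarial (worst-case) instance: a singular rank-one matrix.
M = np.outer(np.ones(n), np.ones(n))

smin = []
for _ in range(trials):
    # Smoothed instance: entrywise Gaussian perturbation of scale rho.
    E = rho * rng.standard_normal((n, n))
    smin.append(np.linalg.svd(M + E, compute_uv=False)[-1])

smin = np.array(smin)
# With high probability sigma_min is not much smaller than rho / sqrt(n).
print("rho/sqrt(n)       =", rho / np.sqrt(n))
print("min sigma_min     =", smin.min())
print("5th pct sigma_min =", np.percentile(smin, 5))
```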

    Smoothed Analysis of Tensor Decompositions

    Low-rank tensor decompositions are a powerful tool for learning generative models, and uniqueness results give them a significant advantage over matrix decomposition methods. However, tensors pose significant algorithmic challenges, and tensor analogs of much of the matrix algebra toolkit are unlikely to exist because of hardness results. Efficient decomposition in the overcomplete case (where rank exceeds dimension) is particularly challenging. We introduce a smoothed analysis model for studying these questions and develop an efficient algorithm for tensor decomposition in the highly overcomplete case (rank polynomial in the dimension). In this setting, we show that our algorithm is robust to inverse polynomial error -- a crucial property for applications in learning, since we are only allowed a polynomial number of samples. While algorithms are known for exact tensor decomposition in some overcomplete settings, our main contribution is in analyzing their stability in the framework of smoothed analysis. Our main technical contribution is to show that tensor products of perturbed vectors are linearly independent in a robust sense (i.e., the associated matrix has singular values that are at least an inverse polynomial). This key result paves the way for applying tensor methods to learning problems in the smoothed setting. In particular, we use it to obtain results for learning multi-view models and mixtures of axis-aligned Gaussians where there are many more "components" than dimensions. The assumption here is that the model is not adversarially chosen, formalized by a perturbation of the model parameters. We believe this is an appealing way to analyze realistic instances of learning problems, since this framework allows us to overcome many of the usual limitations of using tensor methods. Comment: 32 pages (including appendix).
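
    The stated key result is easy to probe numerically: stack the flattened tensor products $x_i \otimes x_i$ of perturbed vectors as columns and check that the least singular value of the resulting matrix stays bounded away from zero, even in the overcomplete regime. A minimal sketch, with random stand-ins for the arbitrary base vectors of the smoothed model:

```python
import numpy as np

rng = np.random.default_rng(2)
n, R, rho = 20, 150, 1e-2   # overcomplete: R > n (up to ~n^2 columns fit)

# Arbitrary base vectors (random stand-ins here), then a rho-perturbation
# of each, as in the smoothed analysis model.
base = rng.standard_normal((n, R))
X = base + rho * rng.standard_normal((n, R))

# Columns are the flattened tensor products x_i (tensor) x_i: an n^2 x R matrix.
K = np.column_stack([np.outer(X[:, i], X[:, i]).ravel() for i in range(R)])

s = np.linalg.svd(K, compute_uv=False)
# Robust linear independence: the smallest singular value stays
# inverse-polynomially far from zero even though R far exceeds n.
print("sigma_min =", s[-1])
```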

    Semi-blind Eigen-analyses of Recombination Histories Using CMB Data

    Cosmological parameter measurements from CMB experiments such as Planck, ACTpol, SPTpol and other high-resolution follow-ons fundamentally rely on the accuracy of the assumed recombination model, or on one with well-prescribed uncertainties. Deviations from the standard recombination history might suggest new particle physics or modified atomic physics. Here we treat possible perturbative fluctuations in the free electron fraction, $X_{\rm e}(z)$, by a semi-blind expansion in densely packed modes in redshift. From these we construct parameter eigenmodes, which we rank-order so that the lowest modes provide the most power to probe $X_{\rm e}(z)$ with CMB measurements. Since the eigenmodes are effectively weighted by the fiducial $X_{\rm e}$ history, they are localized around the differential visibility peak, allowing for an excellent probe of hydrogen recombination, but a weaker probe of the higher-redshift helium recombination and the lower-redshift highly neutral freeze-out tail. We use an information-based criterion to truncate the mode hierarchy, and show that with even a few modes the method goes a long way towards morphing a fiducial older Recfast $X_{\rm e,i}(z)$ into the new and improved CosmoRec and HyRec $X_{\rm e,f}(z)$ in the hydrogen recombination regime, though not well in the helium regime. Without such a correction, the derived cosmic parameters are biased. We discuss an iterative approach for updating the eigenmodes to further home in on $X_{\rm e,f}(z)$ if large deviations are indeed found. We also introduce control parameters that downweight the attention on the visibility-peak structure, e.g., focusing the eigenmode probes more strongly on the $X_{\rm e}(z)$ freeze-out tail, as would be appropriate when looking for the $X_{\rm e}$ signature of annihilating or decaying elementary particles. Comment: 28 pages, 26 figures.
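
    The eigenmode construction itself is standard linear algebra: build a Fisher matrix over the amplitudes of the densely packed $X_{\rm e}(z)$ modes, diagonalize it, and rank the eigenvectors by eigenvalue. The sketch below uses a mock response matrix and a simple information-fraction cutoff; in practice the derivatives come from a recombination/CMB code (e.g., CosmoRec plus a Boltzmann solver), and the paper uses its own truncation criterion:

```python
import numpy as np

rng = np.random.default_rng(3)
n_modes, n_data = 40, 300

# Mock response matrix: d(data_a)/d(amplitude_i) for densely packed
# X_e(z) perturbation modes. Random-walk columns stand in for the smooth
# derivatives a real recombination + CMB pipeline would supply.
resp = np.cumsum(rng.standard_normal((n_data, n_modes)), axis=0)
noise = 1.0  # homogeneous data noise, for simplicity

# Fisher matrix over mode amplitudes: F = R^T N^{-1} R.
F = resp.T @ resp / noise**2

# Parameter eigenmodes: eigenvectors of F, rank-ordered by eigenvalue
# (i.e., by information content).
evals, evecs = np.linalg.eigh(F)
order = np.argsort(evals)[::-1]
evals, evecs = evals[order], evecs[:, order]

# Truncate the hierarchy: keep modes carrying, say, 99% of the total
# information (one possible criterion; illustrative only).
kept = np.searchsorted(np.cumsum(evals) / evals.sum(), 0.99) + 1
print("modes kept:", kept, "of", n_modes)
```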

    Polynomial-time Tensor Decompositions with Sum-of-Squares

    We give new algorithms based on the sum-of-squares method for tensor decomposition. Our results improve the best known running times from quasi-polynomial to polynomial for several problems, including decomposing random overcomplete 3-tensors and learning overcomplete dictionaries with constant relative sparsity. We also give the first robust analysis for decomposing overcomplete 4-tensors in the smoothed analysis model. A key ingredient of our analysis is to establish small spectral gaps in moment matrices derived from solutions to sum-of-squares relaxations. To enable this analysis, we augment sum-of-squares relaxations with spectral analogs of maximum-entropy constraints. Comment: to appear in FOCS 2016.
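
    The spectral structure that such moment-matrix arguments exploit can be seen in a toy setting: for overcomplete unit vectors $a_1, \dots, a_r$, the exact fourth-moment matrix $\sum_i (a_i \otimes a_i)(a_i \otimes a_i)^T$ has exactly $r$ large eigenvalues followed by a sharp gap. This is only a stand-in for intuition; the paper's moment matrices come from solutions to sum-of-squares relaxations, not from the exact moments below:

```python
import numpy as np

rng = np.random.default_rng(4)
n, r = 15, 60   # overcomplete: r > n components in R^n

A = rng.standard_normal((n, r))
A /= np.linalg.norm(A, axis=0)  # unit-norm components a_1, ..., a_r

# Fourth-moment matrix  M = sum_i vec(a_i a_i^T) vec(a_i a_i^T)^T,
# an n^2 x n^2 matrix of rank r (generically, since r < n^2).
W = np.column_stack([np.outer(A[:, i], A[:, i]).ravel() for i in range(r)])
M = W @ W.T

evals = np.linalg.eigvalsh(M)[::-1]
# Spectral gap between the r "component" eigenvalues and the rest.
print("eigenvalue r   :", evals[r - 1])
print("eigenvalue r+1 :", evals[r])
```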