
    Learning Generalized Depth Three Arithmetic Circuits in the Non-Degenerate Case


    Gaussian Mixture Identifiability from degree 6 Moments

    We resolve most cases of identifiability from sixth-order moments for Gaussian mixtures on spaces of large dimensions. Our results imply that the parameters of a generic mixture of $m \leq \mathcal{O}(n^4)$ Gaussians on $\mathbb{R}^n$ can be uniquely recovered from the mixture moments of degree 6. The constant hidden in the $\mathcal{O}$-notation is optimal and equals the one in the upper bound from counting parameters. We give an argument that degree-4 moments never suffice in any nontrivial case, and we conduct some numerical experiments indicating that degree 5 is minimal for identifiability. Comment: 22 pages
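To make the moment setup concrete, here is a minimal numerical sketch, assuming identity covariances and small illustrative sizes (this is only the input model, not the paper's identifiability argument): sample from a Gaussian mixture on $\mathbb{R}^n$ and form the empirical degree-6 moment tensor from which the parameters are, per the abstract, generically recoverable.

```python
import numpy as np

# Illustrative sketch of the moment model: a mixture of m Gaussians on R^n.
# Sizes, weights, and the identity-covariance choice are assumptions made
# only to keep the example short.
rng = np.random.default_rng(0)
n, m = 3, 2                      # ambient dimension, number of components
weights = np.array([0.4, 0.6])   # mixing weights (sum to 1)
means = rng.normal(size=(m, n))

# Draw N samples: pick a component, then add standard Gaussian noise.
N = 20_000
comps = rng.choice(m, size=N, p=weights)
samples = means[comps] + rng.normal(size=(N, n))

# Empirical degree-6 moment tensor M6[i,j,k,l,p,q] ~ E[x_i x_j x_k x_l x_p x_q].
M6 = np.einsum('si,sj,sk,sl,sp,sq->ijklpq',
               samples, samples, samples, samples, samples, samples) / N
print(M6.shape)  # (3, 3, 3, 3, 3, 3)
```

The moment tensor is symmetric in all six indices, which is the structure the identifiability question is about: how large $m$ can be while these $\binom{n+5}{6}$ distinct entries still pin down the mixture parameters.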

    Low-Depth Arithmetic Circuit Lower Bounds: Bypassing Set-Multilinearization


    Dictionary Learning and Tensor Decomposition via the Sum-of-Squares Method

    We give a new approach to the dictionary learning (also known as "sparse coding") problem of recovering an unknown $n \times m$ matrix $A$ (for $m \geq n$) from examples of the form $y = Ax + e$, where $x$ is a random vector in $\mathbb{R}^m$ with at most $\tau m$ nonzero coordinates, and $e$ is a random noise vector in $\mathbb{R}^n$ with bounded magnitude. For the case $m = O(n)$, our algorithm recovers every column of $A$ within arbitrarily good constant accuracy in time $m^{O(\log m/\log(\tau^{-1}))}$, in particular achieving polynomial time if $\tau = m^{-\delta}$ for any $\delta > 0$, and time $m^{O(\log m)}$ if $\tau$ is (a sufficiently small) constant. Prior algorithms with comparable assumptions on the distribution required the vector $x$ to be much sparser (at most $\sqrt{n}$ nonzero coordinates), and there were intrinsic barriers preventing these algorithms from applying to denser $x$. We achieve this by designing an algorithm for noisy tensor decomposition that can recover, under quite general conditions, an approximate rank-one decomposition of a tensor $T$, given access to a tensor $T'$ that is $\tau$-close to $T$ in the spectral norm (when considered as a matrix). To our knowledge, this is the first algorithm for tensor decomposition that works in the constant spectral-norm noise regime, where there is no guarantee that the local optima of $T$ and $T'$ have similar structures. Our algorithm is based on a novel approach to using and analyzing the Sum of Squares semidefinite programming hierarchy (Parrilo 2000, Lasserre 2001), and it can be viewed as an indication of the utility of this very general and powerful tool for unsupervised learning problems.
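The generative model $y = Ax + e$ from the abstract can be sketched as follows, with illustrative sizes and a $\pm 1$ pattern for the nonzero coordinates (these specific choices are assumptions for the example, not the paper's distributional assumptions, and this is the input model rather than the SoS recovery algorithm):

```python
import numpy as np

# Sketch of one example from the dictionary-learning model y = Ax + e:
# A is n x m with m >= n, x is tau*m-sparse, e has small bounded magnitude.
# Sizes and the +/-1 nonzero values are illustrative assumptions.
rng = np.random.default_rng(1)
n, m, tau = 20, 40, 0.1
A = rng.normal(size=(n, m))
A /= np.linalg.norm(A, axis=0)        # unit-norm dictionary columns

k = int(tau * m)                      # at most tau*m nonzero coordinates
x = np.zeros(m)
support = rng.choice(m, size=k, replace=False)
x[support] = rng.choice([-1.0, 1.0], size=k)

e = 0.01 * rng.normal(size=n)         # noise of bounded magnitude
y = A @ x + e                         # one observed example
print(np.count_nonzero(x), y.shape)   # 4 (20,)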

    A computer algebra user interface manifesto

    Many computer algebra systems have more than 1000 built-in functions, making expertise difficult. Using mock dialog boxes, this article describes a proposed interactive general-purpose wizard for organizing optional transformations and allowing easy fine-grained control over the form of the result, even by amateurs. This wizard integrates ideas including:
    * flexible subexpression selection;
    * complete control over the ordering of variables and commutative operands, with well-chosen defaults;
    * interleaving the choice of successively less main variables with applicable function choices, to provide detailed control without incurring a combinatorial number of applicable alternatives at any one level;
    * quick applicability tests to reduce the listing of inapplicable transformations;
    * using an organizing principle to order the alternatives in a helpful manner;
    * labeling quickly-computed alternatives in dialog boxes with a preview of their results;
    * using ellipsis elisions if necessary or helpful;
    * allowing the user to retreat from a sequence of choices to explore other branches of the tree of alternatives, or to return quickly to branches already visited;
    * allowing the user to accumulate more than one of the alternative forms;
    * integrating direct manipulation into the wizard; and
    * supporting not only the usual input-result pair mode, but also the useful alternative derivational and in situ replacement modes in a unified window.
    Comment: 38 pages, 12 figures, to be published in Communications in Computer Algebra

    Decomposability of Tensors

    Tensor decomposition is a relevant topic for both theoretical and applied mathematics, due to its interdisciplinary nature, which ranges from multilinear algebra and algebraic geometry to numerical analysis, algebraic statistics, quantum physics, signal processing, artificial intelligence, etc. The study of a decomposition starts from the idea that knowledge of the elementary components of a tensor is fundamental to implementing procedures that can understand and efficiently handle the information that the tensor encodes. Recent advances were obtained through a systematic application of geometric methods: secant varieties, symmetries of special decompositions, and an analysis of the geometry of finite sets. Thanks to new applications of theoretical results, criteria for understanding when a given decomposition is minimal or unique have been introduced or significantly improved. New types of decompositions, whose elementary blocks can be chosen from a range of different possible models (e.g., Chow decompositions or mixed decompositions), are now systematically studied and produce deeper insights into this topic. The aim of this Special Issue is to collect papers that illustrate some directions in which recent research is moving, as well as to provide a wide overview of several new approaches to the problem of tensor decomposition.
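To make "recovering the elementary components of a tensor" concrete, here is a minimal sketch, assuming an orthogonally decomposable symmetric 3-tensor with illustrative sizes and weights: one component is extracted by standard tensor power iteration (a textbook method, not tied to any particular paper in the issue).

```python
import numpy as np

# Build a symmetric tensor T = sum_i lam_i a_i (x) a_i (x) a_i with
# orthonormal components a_i, then recover one component by tensor power
# iteration. All names, sizes, and weights are illustrative assumptions.
rng = np.random.default_rng(2)
d, r = 5, 3
A, _ = np.linalg.qr(rng.normal(size=(d, r)))   # orthonormal components a_1..a_r
lam = np.array([3.0, 2.0, 1.0])                # positive weights
T = np.einsum('i,ai,bi,ci->abc', lam, A, A, A)

def tensor_power_iteration(T, iters=100, seed=0):
    """Iterate v <- T(I, v, v) / ||T(I, v, v)||; for an orthogonally
    decomposable tensor this converges to one of the components a_i."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=T.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        v = np.einsum('abc,b,c->a', T, v, v)
        v /= np.linalg.norm(v)
    return v

v = tensor_power_iteration(T)
# v aligns with one column of A (up to sign and numerical error).
print(np.abs(A.T @ v).max().round(3))  # 1.0
```

Uniqueness and minimality criteria of the kind the abstract mentions ask when such a list of components $(\lambda_i, a_i)$ is the only decomposition of $T$ of its size.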