
    Dual-to-kernel learning with ideals

    In this paper, we propose a theory which unifies kernel learning and symbolic algebraic methods. We show that both worlds are inherently dual to each other, and we use this duality to combine the structure-awareness of algebraic methods with the efficiency and generality of kernels. The main idea lies in relating polynomial rings to feature space, and ideals to manifolds, then exploiting this generative-discriminative duality on kernel matrices. We illustrate this by proposing two algorithms, IPCA and AVICA, for simultaneous manifold and feature learning, and test their accuracy on synthetic and real-world data.
    Comment: 15 pages, 1 figure
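
    The abstract stays at a high level, so here is a small, hedged sketch of the feature-space/vanishing-ideal duality it describes, not the paper's IPCA or AVICA algorithms: monomial features evaluated on data points give a kernel (Gram) matrix, and near-null directions of the feature matrix correspond to polynomials that approximately vanish on the data. All names and parameters below are illustrative.

```python
# Toy sketch (assumptions: NumPy only; not the paper's IPCA/AVICA algorithms).
# Monomial features of degree <= 2 evaluated on noisy circle samples; the kernel
# matrix is their Gram matrix, and the smallest singular direction of the feature
# matrix gives a polynomial that approximately vanishes on the data.
from itertools import combinations_with_replacement

import numpy as np

def monomial_features(X, degree):
    """Evaluate all monomials of total degree <= `degree` at the rows of X."""
    n_samples, n_vars = X.shape
    cols = [np.ones(n_samples)]                    # the constant monomial 1
    for d in range(1, degree + 1):
        for idx in combinations_with_replacement(range(n_vars), d):
            cols.append(np.prod(X[:, list(idx)], axis=1))
    return np.column_stack(cols)                   # for 2 vars, degree 2: 1, x, y, x^2, xy, y^2

rng = np.random.default_rng(0)
t = rng.uniform(0.0, 2.0 * np.pi, 200)
X = np.column_stack([np.cos(t), np.sin(t)]) + 0.01 * rng.standard_normal((200, 2))

Phi = monomial_features(X, degree=2)               # explicit feature-space embedding
K = Phi @ Phi.T                                    # kernel matrix on the samples
_, s, Vt = np.linalg.svd(Phi, full_matrices=False)
print(K.shape)                                     # (200, 200)
print(s[-1])                                       # near zero: an approximate vanishing polynomial exists
print(Vt[-1])                                      # ~ a multiple of (-1, 0, 0, 1, 0, 1), i.e. x^2 + y^2 - 1
```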

    Polynomial and rational solutions of holonomic systems

    The aim of this paper is to give two new algorithms, which are elimination-free, to find polynomial and rational solutions for a given holonomic system associated to a set of linear differential operators in the Weyl algebra D = k⟨x_1, ..., x_n, ∂_1, ..., ∂_n⟩, where k is a subfield of the complex numbers.
    Comment: 20 pages
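
    As a toy illustration only (not the authors' elimination-free algorithms for general holonomic systems), the sketch below finds the polynomial solutions of a single linear differential operator by a degree-bounded ansatz and linear algebra; the operator and the degree bound are illustrative choices.

```python
# Toy sketch (assumptions: SymPy; a single operator, not a holonomic system).
# Find polynomial solutions of x*f'' + (1 - x)*f' + 3*f = 0 (the Laguerre
# operator with n = 3) by plugging in a degree-bounded ansatz and solving the
# resulting linear system for its coefficients.
import sympy as sp

x = sp.symbols('x')

def apply_operator(p):
    """Apply L = x*D^2 + (1 - x)*D + 3 to a polynomial p(x)."""
    return sp.expand(x * sp.diff(p, x, 2) + (1 - x) * sp.diff(p, x) + 3 * p)

degree_bound = 5
coeffs = sp.symbols(f'c0:{degree_bound + 1}')
ansatz = sum(c * x**k for k, c in enumerate(coeffs))

# Setting every coefficient of L(ansatz) to zero gives linear equations in c0..c5.
equations = sp.Poly(apply_operator(ansatz), x).coeffs()
solution = sp.solve(equations, coeffs, dict=True)[0]
print(sp.expand(ansatz.subs(solution)))   # c0*(1 - 3*x + 3*x**2/2 - x**3/6): multiples of Laguerre L_3
```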

    Algebraic Signal Processing Theory: Cooley-Tukey Type Algorithms for DCTs and DSTs

    This paper presents a systematic methodology based on the algebraic theory of signal processing to classify and derive fast algorithms for linear transforms. Instead of manipulating the entries of transform matrices, our approach derives the algorithms by stepwise decomposition of the associated signal models, or polynomial algebras. This decomposition is based on two generic methods or algebraic principles that generalize the well-known Cooley-Tukey FFT and make the algorithms' derivations concise and transparent. Application to the 16 discrete cosine and sine transforms yields a large class of fast algorithms, many of which have not been found before.
    Comment: 31 pages, more information at http://www.ece.cmu.edu/~smar
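
    For orientation, the sketch below is just the classical radix-2 Cooley-Tukey FFT that the abstract cites as the well-known special case; the paper's contribution, deriving analogous stepwise decompositions of the polynomial algebras behind the 16 DCTs and DSTs, is not reproduced here.

```python
# Classical radix-2 decimation-in-time Cooley-Tukey FFT (the special case the
# paper generalizes to DCTs/DSTs); pure-Python sketch, length must be a power of 2.
import cmath

def fft(x):
    """Return the DFT of x by recursively splitting into even- and odd-indexed parts."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])                      # DFT of the even-indexed subsignal
    odd = fft(x[1::2])                       # DFT of the odd-indexed subsignal
    twiddled = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return ([even[k] + twiddled[k] for k in range(n // 2)] +
            [even[k] - twiddled[k] for k in range(n // 2)])

print(fft([1.0, 2.0, 3.0, 4.0, 0.0, 0.0, 0.0, 0.0]))
```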

    Convex and Network Flow Optimization for Structured Sparsity

    We consider a class of learning problems regularized by a structured sparsity-inducing norm defined as the sum of l_2- or l_infinity-norms over groups of variables. Whereas much effort has been put into developing fast optimization techniques when the groups are disjoint or embedded in a hierarchy, we address here the case of general overlapping groups. To this end, we present two different strategies: On the one hand, we show that the proximal operator associated with a sum of l_infinity-norms can be computed exactly in polynomial time by solving a quadratic min-cost flow problem, allowing the use of accelerated proximal gradient methods. On the other hand, we use proximal splitting techniques, and address an equivalent formulation with non-overlapping groups, but in higher dimension and with additional constraints. We propose efficient and scalable algorithms exploiting these two strategies, which are significantly faster than alternative approaches. We illustrate these methods with several problems such as CUR matrix factorization, multi-task learning of tree-structured dictionaries, background subtraction in video sequences, image denoising with wavelets, and topographic dictionary learning of natural image patches.
    Comment: to appear in the Journal of Machine Learning Research (JMLR)
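
    As a rough sketch of the kind of solver the abstract describes, restricted to the easy case of disjoint groups (the paper's actual contribution is the overlapping-group case, handled via quadratic min-cost flow or proximal splitting), the code below runs proximal gradient descent with block soft-thresholding; all data and parameters are made up for illustration.

```python
# Minimal sketch: proximal gradient (ISTA) with a group-sparsity penalty in the
# easy, disjoint-group case (sum of l_2 norms over groups). The overlapping-group
# case studied in the paper is not implemented here.
import numpy as np

def prox_group_l2(v, groups, threshold):
    """Block soft-thresholding: proximal operator of threshold * sum_g ||v_g||_2."""
    out = v.copy()
    for g in groups:
        norm = np.linalg.norm(v[g])
        out[g] = 0.0 if norm <= threshold else (1.0 - threshold / norm) * v[g]
    return out

def proximal_gradient(A, b, groups, lam, step, n_iter=500):
    """Minimize 0.5*||A w - b||^2 + lam * sum_g ||w_g||_2 by ISTA iterations."""
    w = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ w - b)
        w = prox_group_l2(w - step * grad, groups, step * lam)
    return w

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 6))
b = A @ np.array([1.0, -2.0, 0.5, 0.0, 0.0, 0.0]) + 0.01 * rng.standard_normal(40)
groups = [np.arange(0, 3), np.arange(3, 6)]            # two disjoint groups of variables
step = 1.0 / np.linalg.norm(A, 2) ** 2                 # 1 / Lipschitz constant of the gradient
print(proximal_gradient(A, b, groups, lam=0.5, step=step))   # second group driven to (near) zero
```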

    Computation with Polynomial Equations and Inequalities arising in Combinatorial Optimization

    The purpose of this note is to survey a methodology to solve systems of polynomial equations and inequalities. The techniques we discuss use the algebra of multivariate polynomials with coefficients over a field to create large-scale linear algebra or semidefinite programming relaxations of many kinds of feasibility or optimization questions. We are particularly interested in problems arising in combinatorial optimization.
    Comment: 28 pages, survey paper
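
    A standard instance of the encodings this kind of survey covers is graph 3-coloring; the sketch below checks feasibility with a Groebner basis (the survey's emphasis is on linear-algebra and semidefinite relaxations rather than Groebner bases, so this is illustrative only).

```python
# 3-coloring as a polynomial system (a standard encoding; checked here with
# SymPy's Groebner bases rather than the survey's linear-algebra/SDP relaxations).
#   x_i^3 - 1 = 0                 for every vertex i     (colors = cube roots of unity)
#   x_i^2 + x_i*x_j + x_j^2 = 0   for every edge {i, j}  (adjacent colors differ)
# By the Nullstellensatz, the graph is not 3-colorable iff the Groebner basis is [1].
import sympy as sp

def three_coloring_system(n_vertices, edges):
    xs = sp.symbols(f'x0:{n_vertices}')
    eqs = [v**3 - 1 for v in xs]
    eqs += [xs[i]**2 + xs[i] * xs[j] + xs[j]**2 for i, j in edges]
    return xs, eqs

# K_4 is not 3-colorable: the basis collapses to [1].
xs, eqs = three_coloring_system(4, [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)])
print(sp.groebner(eqs, *xs, order='grevlex').exprs)

# A 4-cycle is 3-colorable: 1 does not appear in the basis.
xs, eqs = three_coloring_system(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
print(1 in sp.groebner(eqs, *xs, order='grevlex').exprs)
```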

    Change of basis for m-primary ideals in one and two variables

    Following recent work by van der Hoeven and Lecerf (ISSAC 2017), we discuss the complexity of linear mappings, called untangling and tangling by those authors, that arise in the context of computations with univariate polynomials. We give a slightly faster tangling algorithm and discuss new applications of these techniques. We show how to extend these ideas to bivariate settings, and use them to give bounds on the arithmetic complexity of certain algebras.
    Comment: In Proceedings ISSAC'19, ACM, New York, USA. See proceedings version for final formatting.
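
    The sketch below is only a one-variable toy for intuition, not the untangling/tangling maps of van der Hoeven and Lecerf or the bivariate algorithms of this paper: it changes basis modulo the m-primary ideal (x - a)^m, from the monomial basis to powers of (x - a) via a Taylor shift, and back.

```python
# Toy change of basis modulo the m-primary ideal (x - a)^m in one variable
# (illustrative only; not the algorithms discussed in the paper). A residue class
# written in the monomial basis 1, x, ..., x^(m-1) is rewritten in the shifted
# basis 1, (x - a), ..., (x - a)^(m-1) via the Taylor shift p(x) -> p(x + a).
import sympy as sp

x = sp.symbols('x')

def to_shifted_basis(coeffs, a):
    """Low-to-high coefficients of p(x) = sum c_k x^k, rewritten in powers of (x - a)."""
    p = sum(c * x**k for k, c in enumerate(coeffs))
    return sp.Poly(p.subs(x, x + a), x).all_coeffs()[::-1]

def from_shifted_basis(coeffs, a):
    """Inverse map: coefficients in powers of (x - a) back to the monomial basis."""
    p = sum(c * (x - a)**k for k, c in enumerate(coeffs))
    return sp.Poly(sp.expand(p), x).all_coeffs()[::-1]

a = 2
mono = [5, 0, -3, 1]                            # p(x) = x^3 - 3x^2 + 5, already reduced mod (x - 2)^4
shifted = to_shifted_basis(mono, a)
print(shifted)                                  # [1, 0, 3, 1]: p = 1 + 3*(x - 2)^2 + (x - 2)^3
print(from_shifted_basis(shifted, a) == mono)   # True: the two maps are mutually inverse
```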