
    Dual-to-kernel learning with ideals

    In this paper, we propose a theory which unifies kernel learning and symbolic algebraic methods. We show that both worlds are inherently dual to each other, and we use this duality to combine the structure-awareness of algebraic methods with the efficiency and generality of kernels. The main idea lies in relating polynomial rings to feature spaces and ideals to manifolds, and then exploiting this generative-discriminative duality on kernel matrices. We illustrate this by proposing two algorithms, IPCA and AVICA, for simultaneous manifold and feature learning, and test their accuracy on synthetic and real-world data. Comment: 15 pages, 1 figure.
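    The abstract leaves the algorithms at a high level, but the ring-to-feature-space duality it builds on can be made concrete: polynomials that approximately vanish on data samples show up as near-null right singular vectors of a monomial evaluation matrix. The snippet below is a minimal sketch of that idea only, not the paper's IPCA or AVICA; the degree bound, tolerance, and function names are my own illustrative choices.

    ```python
    import numpy as np
    from itertools import combinations_with_replacement

    def monomial_features(X, degree):
        """Evaluate every monomial of total degree <= `degree` at each row of X."""
        n, d = X.shape
        cols = [np.ones(n)]  # the constant monomial
        for deg in range(1, degree + 1):
            for idx in combinations_with_replacement(range(d), deg):
                col = np.ones(n)
                for j in idx:
                    col = col * X[:, j]
                cols.append(col)
        return np.column_stack(cols)

    # Sample points from the unit circle, the zero set of x^2 + y^2 - 1.
    t = np.linspace(0, 2 * np.pi, 50, endpoint=False)
    X = np.column_stack([np.cos(t), np.sin(t)])

    # Right singular vectors with near-zero singular values are coefficient
    # vectors of polynomials that approximately vanish on all samples, i.e.
    # low-degree generators of an approximate vanishing ideal of the data.
    Phi = monomial_features(X, degree=2)
    _, s, Vt = np.linalg.svd(Phi, full_matrices=False)
    vanishing = [v for sv, v in zip(s, Vt) if sv < 1e-8]
    print(len(vanishing), "approximately vanishing polynomial(s) found")
    # Monomial order is [1, x, y, x^2, xy, y^2]; the recovered vector is
    # proportional to [-1, 0, 0, 1, 0, 1], i.e. x^2 + y^2 - 1.
    print(np.round(vanishing[0] / vanishing[0][3], 3))
    ```

    On the circle data this recovers the single degree-2 generator of the ideal; the "kernel" side of the duality enters once the evaluation matrix is replaced by a kernel matrix over the same samples.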

    Learning with Algebraic Invariances, and the Invariant Kernel Trick

    When solving data analysis problems, it is important to integrate prior knowledge and/or structural invariances. This paper contributes a novel framework for incorporating algebraic invariance structure into kernels. In particular, we show that algebraic properties such as sign symmetries in data, phase independence, scaling, etc. can be included easily by essentially performing the kernel trick twice. We demonstrate the usefulness of our theory in simulations on selected applications such as sign-invariant spectral clustering and underdetermined ICA.
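    The abstract does not spell out the construction, but a standard way to obtain such invariant kernels is to average a base kernel over the group that generates the invariance; for a sign symmetry x ↦ -x the average runs over the signed copies of one argument. The sketch below assumes a Gaussian base kernel, and the function names are illustrative, not the paper's API.

    ```python
    import numpy as np

    def rbf(x, y, gamma=1.0):
        """Base Gaussian RBF kernel."""
        return np.exp(-gamma * np.sum((x - y) ** 2))

    def sign_invariant_kernel(x, y, base=rbf):
        """Symmetrize `base` over the sign group {+1, -1}.

        The result satisfies k(x, y) = k(-x, y) = k(x, -y): averaging over one
        argument's orbit suffices because the base kernel is symmetric, and the
        average stays positive semidefinite as a group symmetrization."""
        return 0.5 * (base(x, y) + base(x, -y))

    x = np.array([1.0, -2.0])
    y = np.array([0.5, 1.5])
    assert np.isclose(sign_invariant_kernel(x, y), sign_invariant_kernel(-x, y))
    assert np.isclose(sign_invariant_kernel(x, y), sign_invariant_kernel(x, -y))
    print(sign_invariant_kernel(x, y))
    ```

    Plugging such a kernel into an off-the-shelf kernel method (e.g. spectral clustering) makes the method blind to sign flips without changing the downstream algorithm.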

    Revisiting Complex Moments For 2D Shape Representation and Image Normalization

    When comparing 2D shapes, a key issue is their normalization. Translation and scale are easily taken care of by removing the mean and normalizing the energy. However, defining and computing the orientation of a 2D shape is not so simple. In fact, although for elongated shapes the principal axis can be used to define one of two possible orientations, there is no such tool for general shapes. As we show in the paper, previous approaches fail to compute the orientation of even noiseless observations of simple shapes. We address this problem. In the paper, we show how to uniquely define the orientation of an arbitrary 2D shape in terms of what we call its Principal Moments. We show that a small subset of these moments suffices to represent the underlying 2D shape and propose a new method to efficiently compute the shape orientation: Principal Moment Analysis. Finally, we discuss how this method can further be applied to normalize grey-level images. Besides the theoretical proof of correctness, we describe experiments demonstrating robustness to noise and illustrating the method with real images. Comment: 69 pages, 20 figures.
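    As a concrete companion to the abstract, the sketch below performs the stated translation/scale normalization (remove the mean, normalize the energy) and recovers the principal-axis orientation from the second-order complex moment c_20 = Σ (x + iy)^2, whose argument equals twice the axis angle. This is a standard complex-moment computation, not the paper's Principal Moment Analysis, and all names are my own; it also exhibits the failure mode (c_20 ≈ 0 for non-elongated shapes) that motivates the paper.

    ```python
    import numpy as np

    def normalize_shape(pts):
        """Translation/scale normalization: remove the mean, normalize energy."""
        pts = pts - pts.mean(axis=0)
        return pts / np.sqrt((pts ** 2).sum())

    def complex_moment(pts, p, q):
        """Complex moment c_pq = sum_k z_k^p * conj(z_k)^q for z = x + iy."""
        z = pts[:, 0] + 1j * pts[:, 1]
        return np.sum(z ** p * np.conj(z) ** q)

    def principal_axis_angle(pts):
        """Orientation of an elongated shape: theta = 0.5 * arg(c_20).

        Rotating the shape by phi multiplies z by e^{i phi} and hence c_20 by
        e^{2 i phi}. Undefined when c_20 ~ 0 (e.g. squares, circles) -- the
        case general moment-based orientations are designed to resolve."""
        return 0.5 * np.angle(complex_moment(pts, 2, 0))

    # An ellipse rotated by 30 degrees should yield an angle of ~30 degrees.
    t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
    ellipse = np.column_stack([3 * np.cos(t), np.sin(t)])
    rot = np.deg2rad(30.0)
    R = np.array([[np.cos(rot), -np.sin(rot)], [np.sin(rot), np.cos(rot)]])
    pts = normalize_shape(ellipse @ R.T)
    print(np.rad2deg(principal_axis_angle(pts)))  # ~30.0
    ```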