    2D Shape Recognition Using Information Theoretic Kernels

    In this paper, a novel approach for contour-based 2D shape recognition is proposed, using a recently introduced class of information theoretic kernels. These kernels, based on a non-extensive generalization of classical Shannon information theory, are defined on probability measures. In the proposed approach, chain code representations are first extracted from the contours; n-gram statistics are then computed and used as input to the information theoretic kernels. We tested different versions of such kernels, using support vector machine and nearest neighbor classifiers. An experimental evaluation on the chicken pieces dataset shows that the proposed approach outperforms the current state-of-the-art methods.
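The pipeline described above can be sketched in a few lines: extract n-gram statistics from a chain-code string, normalize them into a probability distribution, and compare two distributions with an information theoretic kernel. As a minimal illustration, the sketch below uses the Jensen-Shannon kernel; the paper's kernels are non-extensive (Jensen-Tsallis style) generalizations of this construction, so the exact formula here is an assumption for demonstration only.

```python
import math
from collections import Counter

def ngram_distribution(chain_code, n=2):
    """Normalized n-gram histogram of a chain-code string (e.g. '00112233')."""
    grams = Counter(chain_code[i:i + n] for i in range(len(chain_code) - n + 1))
    total = sum(grams.values())
    return {g: c / total for g, c in grams.items()}

def shannon_entropy(p):
    """Shannon entropy (in nats) of a distribution given as a dict."""
    return -sum(v * math.log(v) for v in p.values() if v > 0)

def js_kernel(p, q):
    """Jensen-Shannon kernel k(p, q) = ln 2 - JS(p, q).

    JS(p, q) = H(m) - (H(p) + H(q)) / 2 with m the midpoint distribution;
    it lies in [0, ln 2], so the kernel is nonnegative and maximal at p = q.
    """
    keys = set(p) | set(q)
    m = {k: 0.5 * (p.get(k, 0.0) + q.get(k, 0.0)) for k in keys}
    js_div = shannon_entropy(m) - 0.5 * (shannon_entropy(p) + shannon_entropy(q))
    return math.log(2) - js_div
```

The resulting kernel values can be fed directly into a kernel classifier such as an SVM, or used as a similarity for a nearest neighbor rule, matching the two classifiers evaluated in the paper.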

    Signature features with the visibility transformation

    The signature in rough path theory provides a graduated summary of a path through an examination of the effects of its increments. Inspired by recent developments of signature features in the context of machine learning, we explore a transformation that is able to embed the effect of the absolute position of the data stream into signature features. This unified feature is particularly effective for its simplifying role in allowing the signature feature set to accommodate nonlinear functions of absolute and relative values.
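To make the idea concrete: the signature of a path depends only on its increments, so it is blind to translation; lifting the path with an extra "visibility" coordinate reintroduces the starting position. The sketch below is an illustrative construction, not necessarily the paper's exact definition: the lifted path travels from the origin to the initial point with the visibility coordinate at 0, switches it to 1, then follows the original path. The level-1 and level-2 signature terms are assembled segment by segment with Chen's identity.

```python
import numpy as np

def visibility_transform(path):
    """Lift a d-dimensional path into R^(d+1) so its signature sees the
    absolute start point (sketch; the paper's definition may differ).

    Lifted points: (0, ..., 0, 0) -> (x_0, 0) -> (x_0, 1) -> (x_1, 1) -> ...
    """
    path = np.asarray(path, dtype=float)
    n, d = path.shape
    lifted = np.column_stack([path, np.ones(n)])          # visibility = 1
    pre = np.vstack([np.zeros(d + 1),                     # origin, visibility 0
                     np.append(path[0], 0.0)])            # x_0, visibility 0
    return np.vstack([pre, lifted])

def signature_level2(path):
    """Signature truncated at level 2 for a piecewise-linear path.

    For each linear segment with increment `inc`, Chen's identity gives
    S1 -> S1 + inc and S2 -> S2 + S1 (outer) inc + inc (outer) inc / 2.
    """
    path = np.asarray(path, dtype=float)
    d = path.shape[1]
    s1 = np.zeros(d)
    s2 = np.zeros((d, d))
    for a, b in zip(path[:-1], path[1:]):
        inc = b - a
        s2 += np.outer(s1, inc) + np.outer(inc, inc) / 2.0
        s1 += inc
    return s1, s2
```

Without the lift, translating the path leaves the signature unchanged; after the lift, the cross terms between the original coordinates and the visibility coordinate record the starting position, which is the effect the abstract describes.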

    A new generative feature set based on entropy distance for discriminative classification

    Score functions induced by generative models extract fixed-dimension feature vectors from variable-length data observations by subsuming the process of data generation, projecting them into highly informative spaces called score spaces. In this way, standard discriminative classifiers such as support vector machines or logistic regressors have been shown to achieve higher performance than a purely generative or discriminative approach. In this paper, we present a novel score space that captures the generative process by encoding it in an entropic feature vector. In this way, both the uncertainty in the generative model learning step and the "local" compliance of data observations with respect to the generative process can be represented. The proposed score space is presented for hidden Markov models and mixtures of Gaussians and is experimentally validated on standard benchmark datasets; moreover, it can be applied to any generative model. Results show that it achieves compelling classification accuracies.
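The general recipe, fit a generative model and then map each observation to a fixed-dimension score vector for a discriminative classifier, can be sketched as follows. This is an illustrative score space built from a one-dimensional Gaussian mixture, pairing the log-likelihood with the entropy of the component posteriors; it is in the spirit of the entropic features described above but is not the paper's exact entropy-distance construction.

```python
import numpy as np

def gmm_posteriors(x, means, variances, weights):
    """Component responsibilities and per-point log-likelihood
    under a 1-D Gaussian mixture with the given parameters."""
    x = np.asarray(x, dtype=float)[:, None]
    log_dens = (-0.5 * (x - means) ** 2 / variances
                - 0.5 * np.log(2.0 * np.pi * variances)
                + np.log(weights))
    log_norm = np.logaddexp.reduce(log_dens, axis=1)      # log p(x)
    resp = np.exp(log_dens - log_norm[:, None])           # posteriors
    return resp, log_norm

def entropy_score_features(x, means, variances, weights):
    """Map each observation to the score vector
    [log-likelihood, posterior entropy].

    The entropy term measures how ambiguously the mixture 'explains'
    the point (its local compliance with the generative process); a
    discriminative classifier such as an SVM is then trained on these
    fixed-dimension vectors.  Illustrative construction only.
    """
    resp, loglik = gmm_posteriors(x, means, variances, weights)
    with np.errstate(divide="ignore", invalid="ignore"):
        ent = -np.sum(np.where(resp > 0, resp * np.log(resp), 0.0), axis=1)
    return np.column_stack([loglik, ent])
```

A point lying midway between two equally weighted components gets the maximal posterior entropy ln 2, while a point near one component's mean gets entropy close to zero, so the feature separates confidently explained observations from ambiguous ones.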