
    Kernel Multivariate Analysis Framework for Supervised Subspace Learning: A Tutorial on Linear and Kernel Multivariate Methods

    Feature extraction and dimensionality reduction are important tasks in many fields of science dealing with signal processing and analysis. The relevance of these techniques is increasing as current sensory devices are developed with ever higher resolution, and problems involving multimodal data sources become more common. A plethora of feature extraction methods are available in the literature, collectively grouped under the field of Multivariate Analysis (MVA). This paper provides a uniform treatment of several methods: Principal Component Analysis (PCA), Partial Least Squares (PLS), Canonical Correlation Analysis (CCA) and Orthonormalized PLS (OPLS), as well as their non-linear extensions derived by means of the theory of reproducing kernel Hilbert spaces. We also review their connections to other methods for classification and statistical dependence estimation, and introduce some recent developments to deal with the extreme cases of large-scale and low-sized problems. To illustrate the wide applicability of these methods in both classification and regression problems, we analyze their performance in a benchmark of publicly available data sets, and pay special attention to specific real applications involving audio processing for music genre prediction and hyperspectral satellite images for Earth and climate monitoring.
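    As a quick, hedged illustration of the linear and kernel MVA families surveyed above (not the paper's own code; the toy data, RBF kernel, and hyperparameters are arbitrary assumptions, and OPLS is omitted because scikit-learn does not ship it), a minimal Python sketch:

    # Minimal sketch (assumed toy data and hyperparameters): linear and kernel
    # feature extraction with scikit-learn, in the spirit of the MVA methods above.
    import numpy as np
    from sklearn.decomposition import PCA, KernelPCA
    from sklearn.cross_decomposition import PLSRegression, CCA

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 10))                        # input features
    Y = X[:, :2] ** 2 + 0.1 * rng.standard_normal((200, 2))   # nonlinear targets

    # Unsupervised projections: PCA and its RKHS extension (kernel PCA, RBF kernel).
    Z_pca = PCA(n_components=3).fit_transform(X)
    Z_kpca = KernelPCA(n_components=3, kernel="rbf", gamma=0.5).fit_transform(X)

    # Supervised projections: PLS and CCA use the targets Y to guide the subspace.
    Z_pls = PLSRegression(n_components=3).fit(X, Y).transform(X)
    Z_cca = CCA(n_components=2).fit(X, Y).transform(X)

    print(Z_pca.shape, Z_kpca.shape, Z_pls.shape, Z_cca.shape)

    The kernel variants differ from their linear counterparts only in replacing inner products with kernel evaluations, which is the RKHS construction the tutorial develops.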

    A fractional Brownian field indexed by $L^2$ and a varying Hurst parameter

    Using structures of Abstract Wiener Spaces, we define a fractional Brownian field indexed by a product space $(0,1/2] \times L^2(T,m)$, with $(T,m)$ a separable measure space, where the first coordinate corresponds to the Hurst parameter of fractional Brownian motion. This field encompasses a large class of existing fractional Brownian processes, such as Lévy fractional Brownian motions and multiparameter fractional Brownian motions, and provides a setup for new ones. We prove that it has satisfactory incremental variance in both coordinates and derive certain continuity and Hölder regularity properties in relation with metric entropy. Also, a sharp estimate of the small ball probabilities is provided, generalizing a result on Lévy fractional Brownian motion. Then, we apply these general results to multiparameter and set-indexed processes, proving the existence of processes with prescribed local Hölder regularity on general indexing collections.
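    For orientation, a standard background fact (not a formula from this paper): the real-indexed fractional Brownian motion $B_H$ with Hurst parameter $H$ has covariance

    \[
      \mathbb{E}\bigl[B_H(s)\,B_H(t)\bigr] = \tfrac{1}{2}\bigl(|s|^{2H} + |t|^{2H} - |s-t|^{2H}\bigr),
      \qquad s,t \in \mathbb{R},\quad H \in (0,1).
    \]

    Roughly speaking, the field constructed above replaces the real time arguments by elements of $L^2(T,m)$ and lets the Hurst coordinate vary over $(0,1/2]$, so that the classical processes listed in the abstract are recovered within a single field.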

    When Kernel Methods meet Feature Learning: Log-Covariance Network for Action Recognition from Skeletal Data

    Human action recognition from skeletal data is a hot research topic, important in many open-domain applications of computer vision thanks to recently introduced 3D sensors. In the literature, naive methods simply transfer off-the-shelf techniques from video to the skeletal representation. However, the current state of the art is contested between two different paradigms: kernel-based methods and feature learning with (recurrent) neural networks. Both approaches show strong performance, yet they exhibit heavy, but complementary, drawbacks. Motivated by this fact, our work aims at combining the best of the two paradigms by proposing an approach where a shallow network is fed with a covariance representation. Our intuition is that, as long as the dynamics are effectively modeled, the classification network need be neither deep nor recurrent in order to score favorably. We validate this hypothesis in a broad experimental analysis over 6 publicly available datasets. Comment: 2017 IEEE Computer Vision and Pattern Recognition (CVPR) Workshop.
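    To make the covariance representation concrete, here is a minimal, hedged sketch (joint layout, regularizer, and toy shapes are assumptions; this is not the paper's network) of a log-covariance descriptor computed from a skeleton sequence and vectorized for a shallow classifier:

    # Minimal sketch: log-covariance descriptor of a skeleton sequence.
    import numpy as np

    def log_covariance_descriptor(seq, eps=1e-6):
        """seq: (T, D) array of per-frame joint coordinates (D = 3 * n_joints)."""
        C = np.cov(seq, rowvar=False) + eps * np.eye(seq.shape[1])  # SPD by construction
        w, V = np.linalg.eigh(C)                  # symmetric eigendecomposition
        logC = (V * np.log(w)) @ V.T              # matrix logarithm via eigenvalues
        iu = np.triu_indices_from(logC)
        return logC[iu]                           # vectorize the upper triangle

    # Toy usage: 50 frames, 20 joints in 3D -> one fixed-length descriptor per clip,
    # which can then be fed to any shallow classifier.
    rng = np.random.default_rng(0)
    clip = rng.standard_normal((50, 60))
    x = log_covariance_descriptor(clip)
    print(x.shape)   # (60 * 61 / 2,) = (1830,)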

    Surface energy and boundary layers for a chain of atoms at low temperature

    We analyze the surface energy and boundary layers for a chain of atoms at low temperature for an interaction potential of Lennard-Jones type. The pressure (stress) is assumed small but positive and bounded away from zero, while the temperature $\beta^{-1}$ goes to zero. Our main results are: (1) As $\beta \to \infty$ at fixed positive pressure $p > 0$, the Gibbs measures $\mu_\beta$ and $\nu_\beta$ for infinite chains and semi-infinite chains satisfy path large deviations principles. The rate functions are bulk and surface energy functionals $\overline{\mathcal{E}}_{\mathrm{bulk}}$ and $\overline{\mathcal{E}}_{\mathrm{surf}}$. The minimizer of the surface functional corresponds to zero-temperature boundary layers. (2) The surface correction to the Gibbs free energy converges to the zero-temperature surface energy, characterized with the help of the minimum of $\overline{\mathcal{E}}_{\mathrm{surf}}$. (3) The bulk Gibbs measure and Gibbs free energy can be approximated by their Gaussian counterparts. (4) Bounds on the decay of correlations are provided, some of them uniform in $\beta$.
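    For reference, and only as standard background (the paper states its assumptions through qualitative conditions on the potential rather than a specific formula), the classical Lennard-Jones pair potential is one representative of the class considered, and the Gibbs weight for a finite chain of interatomic spacings $r_1, \ldots, r_N$ at inverse temperature $\beta$ and pressure $p$ can be written schematically as

    \[
      V(r) = 4\varepsilon\Bigl[\bigl(\tfrac{\sigma}{r}\bigr)^{12} - \bigl(\tfrac{\sigma}{r}\bigr)^{6}\Bigr],
      \qquad
      \mu_\beta(\mathrm{d}r) \propto \exp\Bigl(-\beta \sum_{i=1}^{N}\bigl[V(r_i) + p\,r_i\bigr]\Bigr)\,\mathrm{d}r_1 \cdots \mathrm{d}r_N .
    \]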

    The Role of Riemannian Manifolds in Computer Vision: From Coding to Deep Metric Learning

    A diverse range of tasks in computer vision and machine learning benefit from representations of data that are compact yet discriminative, informative, and robust to critical measurement conditions. Two notable representations are offered by Region Covariance Descriptors (RCovDs) and linear subspaces, which are naturally analyzed through the manifold of Symmetric Positive Definite (SPD) matrices and the Grassmann manifold, respectively, two widely used types of Riemannian manifolds in computer vision. As our first objective, we examine image and video-based recognition applications where the local descriptors have the aforementioned Riemannian structures, namely the SPD or linear subspace structure. Initially, we provide a solution to compute a Riemannian version of the conventional Vector of Locally Aggregated Descriptors (VLAD), using the geodesic distance of the underlying manifold as the nearness measure. Next, by taking a closer look at the resulting codes, we formulate a new concept which we name Local Difference Vectors (LDVs). LDVs enable us to elegantly extend our Riemannian coding techniques to any arbitrary metric, as well as to provide intrinsic solutions to Riemannian sparse coding and its variants when local structured descriptors are considered. We then turn our attention to two special types of covariance descriptors, namely infinite-dimensional RCovDs and rank-deficient covariance matrices, for which the underlying Riemannian structure, i.e. the manifold of SPD matrices, is to a great extent out of reach. Generally speaking, infinite-dimensional RCovDs offer better discriminatory power than their low-dimensional counterparts. To overcome this difficulty, we propose to approximate infinite-dimensional RCovDs by making use of two feature mappings, namely random Fourier features and the Nyström method. As for rank-deficient covariance matrices, unlike most existing approaches that employ inference tools with predefined regularizers, we derive positive definite kernels that can be decomposed into kernels on the cone of SPD matrices and kernels on the Grassmann manifold, and show their effectiveness for the image set classification task. Furthermore, inspired by the attractive properties of Riemannian optimization techniques, we extend the recently introduced Keep It Simple and Straightforward MEtric learning (KISSME) method to scenarios where the input data is non-linearly distributed. To this end, we make use of infinite-dimensional covariance matrices and propose techniques for projecting onto the positive cone in a Reproducing Kernel Hilbert Space (RKHS). We also address the sensitivity of KISSME to the input dimensionality. The KISSME algorithm depends heavily on Principal Component Analysis (PCA) as a preprocessing step, which can lead to difficulties, especially when the dimensionality is not meticulously set. To address this issue, building on the KISSME algorithm, we develop a Riemannian framework that jointly learns a mapping performing dimensionality reduction and a metric in the induced space. Lastly, in line with the recent trend in metric learning, we devise end-to-end learning of a generic deep network for metric learning using our derivation.
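    As a hedged illustration of one ingredient mentioned above, a minimal sketch of the random-Fourier-feature mapping used to approximate infinite-dimensional RCovDs (the RBF kernel, toy data, and feature count are assumptions, not the thesis's configuration):

    # Minimal sketch: random Fourier features approximating the RBF kernel,
    # so that z(x).T @ z(y) is approximately k(x, y) = exp(-gamma * ||x - y||^2).
    import numpy as np

    def random_fourier_features(X, n_features=500, gamma=0.5, seed=0):
        rng = np.random.default_rng(seed)
        d = X.shape[1]
        W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))  # spectral samples
        b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)                # random phases
        return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

    rng = np.random.default_rng(1)
    X = rng.standard_normal((100, 8))
    Z = random_fourier_features(X)
    K_exact = np.exp(-0.5 * np.sum((X[:, None] - X[None]) ** 2, axis=-1))
    K_approx = Z @ Z.T
    print(np.abs(K_exact - K_approx).max())   # error shrinks as n_features grows

    The Nyström mapping mentioned alongside it plays the same role but builds the explicit feature map from a subsample of the data rather than from random frequencies.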