
    Kernel density estimation on spaces of Gaussian distributions and symmetric positive definite matrices

    This paper analyses kernel density estimation on spaces of Gaussian distributions endowed with different metrics. Explicit expressions of kernels are provided for the case of the 2-Wasserstein metric on multivariate Gaussian distributions and for the Fisher metric on multivariate centred distributions. Under the Fisher metric, the space of multivariate centred Gaussian distributions is isometric to the space of symmetric positive definite matrices under the affine-invariant metric, and the space of univariate Gaussian distributions is isometric to the hyperbolic space. Thus the kernels are also valid on these spaces. The density estimation is successfully applied to a classification problem of electro-encephalographic signals.
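    As a concrete illustration of the isometry mentioned above, the following is a minimal Python sketch (assuming NumPy) of a kernel density estimate on symmetric positive definite matrices under the affine-invariant metric. The Gaussian-shaped kernel, the bandwidth h, and the function names are illustrative choices; the exact kernel normalizations derived in the paper are omitted.

```python
# Minimal sketch: KDE on SPD matrices with the affine-invariant distance.
# The exact normalizing constants derived in the paper are omitted; this
# only illustrates the distance and the shape of the estimator.
import numpy as np

def affine_invariant_dist(A, B):
    """d(A, B) = ||log(A^{-1/2} B A^{-1/2})||_F on SPD matrices."""
    w, U = np.linalg.eigh(A)
    A_inv_sqrt = U @ np.diag(w ** -0.5) @ U.T
    M = A_inv_sqrt @ B @ A_inv_sqrt          # SPD, so use its eigenvalues
    lam = np.linalg.eigvalsh(M)
    return np.sqrt(np.sum(np.log(lam) ** 2))

def kde_spd(X, samples, h=0.5):
    """Unnormalized Gaussian-shaped KDE at an SPD matrix X (bandwidth h)."""
    d2 = np.array([affine_invariant_dist(X, S) ** 2 for S in samples])
    return np.mean(np.exp(-d2 / (2.0 * h ** 2)))
```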

    Optimal Recovery of Local Truth

    Probability mass curves the data space with horizons. Let f be a multivariate probability density function with continuous second-order partial derivatives. Consider the problem of estimating the true value of f(z) > 0 at a single point z from n independent observations. It is shown that the fastest possible estimators (such as the k-nearest-neighbor and kernel estimators) have minimum asymptotic mean square errors when the space of observations is thought of as conformally curved. The optimal metric is shown to be generated by the Hessian of f in the regions where the Hessian is definite. Thus the peaks and valleys of f are surrounded by singular horizons when the Hessian changes signature from Riemannian to pseudo-Riemannian. Adaptive estimators based on the optimal variable metric show considerable theoretical and practical improvements over traditional methods. The formulas simplify dramatically when the dimension of the data space is 4. The similarities with General Relativity are striking but possibly illusory at this point. However, these results suggest that nonparametric density estimation may have something new to say about current physical theory.
    Comment: To appear in Proceedings of Maximum Entropy and Bayesian Methods 1999. Check also: http://omega.albany.edu:8008
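    To make the variable-metric idea concrete, here is a minimal sketch of a k-nearest-neighbor density estimate under a supplied metric matrix H. The matrix H is a hypothetical stand-in for the Hessian-generated metric of the abstract; in practice it would be estimated from a pilot fit of f.

```python
# Sketch: k-nearest-neighbor density estimation under a variable metric.
# H stands in for the Hessian-generated metric; a real implementation
# would estimate it from a pilot fit of f.
import numpy as np
from scipy.special import gamma

def knn_density(z, data, k=10, H=None):
    n, d = data.shape
    H = np.eye(d) if H is None else H                 # Euclidean fallback
    diffs = data - z
    dists = np.sqrt(np.einsum('ij,jk,ik->i', diffs, H, diffs))
    r_k = np.sort(dists)[k - 1]                       # distance to k-th neighbor
    unit_ball = np.pi ** (d / 2) / gamma(d / 2 + 1)   # Euclidean unit-ball volume
    # The metric ball {x : d_H(x, z) <= r} is an ellipsoid whose Lebesgue
    # volume is unit_ball * r^d / sqrt(det H).
    return k * np.sqrt(np.linalg.det(H)) / (n * unit_ball * r_k ** d)
```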

    On the Projective Geometry of Kalman Filter

    Convergence of the Kalman filter is best analyzed by studying the contraction of the Riccati map in the space of positive definite (covariance) matrices. In this paper, we explore how this contraction property relates to a more fundamental non-expansiveness property of filtering maps in the space of probability distributions endowed with the Hilbert metric. This is viewed as a preliminary step towards improving the convergence analysis of filtering algorithms over general graphical models.
    Comment: 6 pages
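    A minimal sketch of the objects involved: iterating the discrete-time Riccati map of the Kalman filter and tracking successive covariance iterates in the Hilbert projective metric on the positive definite cone. The system matrices A, C, Q, R below are toy values chosen for illustration, not taken from the paper.

```python
# Sketch: iterate the Kalman-filter Riccati map and watch the distance
# between successive iterates in the Hilbert projective metric shrink.
import numpy as np
from scipy.linalg import eigvalsh, solve

def riccati_map(P, A, C, Q, R):
    """One covariance update: P -> A P A' + Q - A P C' (C P C' + R)^{-1} C P A'."""
    S = C @ P @ C.T + R
    K = solve(S, C @ P @ A.T).T          # gain, K = A P C' S^{-1}
    return A @ P @ A.T - K @ C @ P @ A.T + Q

def hilbert_metric(P1, P2):
    """d_H(P1, P2) = log(lam_max / lam_min) for the pencil P1 v = lam P2 v."""
    lam = eigvalsh(P1, P2)               # ascending generalized eigenvalues
    return np.log(lam[-1] / lam[0])

A = np.array([[1.0, 0.1], [0.0, 0.9]])   # toy system matrices
C = np.array([[1.0, 0.0]])
Q, R = 0.1 * np.eye(2), np.array([[0.5]])

P = np.eye(2)
for _ in range(30):
    P_next = riccati_map(P, A, C, Q, R)
    print(hilbert_metric(P_next, P))      # shrinks toward 0 as P converges
    P = P_next
```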

    Density estimation and modeling on symmetric spaces

    In many applications, data and/or parameters are supported on non-Euclidean manifolds. It is important to take into account the geometric structure of manifolds in statistical analysis to avoid misleading results. Although there has been a considerable focus on simple and specific manifolds, there is a lack of general and easy-to-implement statistical methods for density estimation and modeling on manifolds. In this article, we consider a very broad class of manifolds: non-compact Riemannian symmetric spaces. For this class, we provide a very general mathematical result for easily calculating volume changes of the exponential and logarithm map between the tangent space and the manifold. This allows one to define statistical models on the tangent space, push these models forward onto the manifold, and easily calculate induced distributions via Jacobians. To illustrate the statistical utility of this theoretical result, we provide a general method to construct distributions on symmetric spaces. In particular, we define the log-Gaussian distribution as an analogue of the multivariate Gaussian distribution in Euclidean space. With these new kernels on symmetric spaces, we also consider the problem of density estimation. Our proposed approach can use any existing density estimation approach designed for Euclidean spaces and push it forward to the manifold with an easy-to-calculate adjustment. We provide theorems showing that the induced density estimators on the manifold inherit the statistical optimality properties of the parent Euclidean density estimator; this holds for both frequentist and Bayesian nonparametric methods. We illustrate the theory and practical utility of the proposed approach on the space of positive definite matrices.
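    As a sketch of the pushforward construction on the space of positive definite matrices, the code below draws Gaussian samples in the tangent space (symmetric matrices) and maps them to the manifold through the exponential map. Evaluating the induced density would additionally require the Jacobian (volume-change) term that the paper shows how to compute; it is omitted here, and the function name is illustrative.

```python
# Sketch: push a Gaussian tangent-space model forward onto SPD(d), in the
# spirit of the log-Gaussian construction. Sampling needs no Jacobian;
# density evaluation would, and that term is omitted here.
import numpy as np
from scipy.linalg import expm

def sample_log_gaussian_spd(M, sigma, n_samples, rng):
    """Draw SPD samples by mapping Gaussian symmetric matrices through exp."""
    d = M.shape[0]
    w, U = np.linalg.eigh(M)
    M_sqrt = U @ np.diag(np.sqrt(w)) @ U.T
    out = []
    for _ in range(n_samples):
        G = rng.normal(scale=sigma, size=(d, d))
        V = (G + G.T) / 2                  # symmetric tangent vector
        # Exponential map at M in the affine-invariant geometry:
        out.append(M_sqrt @ expm(V) @ M_sqrt)
    return out

rng = np.random.default_rng(0)
samples = sample_log_gaussian_spd(np.eye(3), sigma=0.3, n_samples=5, rng=rng)
```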

    Positive Definite Kernels in Machine Learning

    This survey is an introduction to positive definite kernels and the set of methods they have inspired in the machine learning literature, namely kernel methods. We first discuss some properties of positive definite kernels as well as reproducing kernel Hilbert spaces, the natural extension of the set of functions $\{k(x,\cdot),\, x\in\mathcal{X}\}$ associated with a kernel $k$ defined on a space $\mathcal{X}$. We discuss at length the construction of kernel functions that take advantage of well-known statistical models. We provide an overview of numerous data-analysis methods which take advantage of reproducing kernel Hilbert spaces, and discuss the idea of combining several kernels to improve performance on certain tasks. We also provide a short cookbook of different kernels which are particularly useful for certain data types such as images, graphs or speech segments.
    Comment: draft; corrected a typo in a figure
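    As a minimal illustration of the pattern behind many of the surveyed methods, the following sketch builds a Gaussian (RBF) positive definite kernel matrix and fits kernel ridge regression; the regularization lam and bandwidth gamma are arbitrary illustrative values.

```python
# Sketch: a Gaussian (RBF) positive definite kernel and kernel ridge
# regression, the basic RKHS pattern behind many kernel methods.
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """K[i, j] = exp(-gamma * ||x_i - y_j||^2), a positive definite kernel."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def kernel_ridge_fit(X, y, lam=1e-2, gamma=1.0):
    """Solve (K + lam n I) alpha = y; predictor f(x) = sum_i alpha_i k(x_i, x)."""
    n = len(X)
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * n * np.eye(n), y)

X = np.random.default_rng(1).normal(size=(50, 2))
y = np.sin(X[:, 0]) + 0.1 * np.random.default_rng(2).normal(size=50)
alpha = kernel_ridge_fit(X, y)
y_hat = rbf_kernel(X, X) @ alpha          # in-sample predictions
```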