
    Geodesics on the manifold of multivariate generalized Gaussian distributions with an application to multicomponent texture discrimination

    We consider the Rao geodesic distance (GD) based on the Fisher information as a similarity measure on the manifold of zero-mean multivariate generalized Gaussian distributions (MGGD). The MGGD is shown to be an adequate model for the heavy-tailed wavelet statistics in multicomponent images, such as color or multispectral images. We discuss the estimation of MGGD parameters using various methods. We apply the GD between MGGDs to color texture discrimination in several classification experiments, taking into account the correlation structure between the spectral bands in the wavelet domain. We compare the performance, both in terms of texture discrimination capability and computational load, of the GD and the Kullback-Leibler divergence (KLD). Likewise, both uni- and multivariate generalized Gaussian models are evaluated, characterized by a fixed or a variable shape parameter. The modeling of the interband correlation significantly improves classification efficiency, while the GD is shown to consistently outperform the KLD as a similarity measure
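
    The abstract above compares the geodesic distance with the Kullback-Leibler divergence between MGGD models. As a minimal, hedged illustration of the KLD baseline only (not the Rao geodesic distance itself), the sketch below evaluates the closed-form KLD between two zero-mean multivariate Gaussians, the Gaussian special case of the MGGD, and uses the symmetrised divergence for nearest-neighbour texture matching. The function names and the toy covariance data are assumptions for illustration, not taken from the paper.

        import numpy as np

        def kld_zero_mean_gaussians(S1, S2):
            """Closed-form KL divergence KL(N(0, S1) || N(0, S2))."""
            d = S1.shape[0]
            trace_term = np.trace(np.linalg.inv(S2) @ S1)
            _, logdet1 = np.linalg.slogdet(S1)
            _, logdet2 = np.linalg.slogdet(S2)
            return 0.5 * (trace_term - d + logdet2 - logdet1)

        def nearest_texture(query_cov, candidate_covs):
            """Index of the candidate model closest to the query under the symmetrised KLD."""
            def sym_kld(A, B):
                return kld_zero_mean_gaussians(A, B) + kld_zero_mean_gaussians(B, A)
            return min(range(len(candidate_covs)),
                       key=lambda i: sym_kld(query_cov, candidate_covs[i]))

        # toy usage: covariance matrices of wavelet coefficients across 3 colour bands (illustrative data)
        rng = np.random.default_rng(0)
        covs = [np.cov(rng.standard_normal((3, 500)) * (i + 1)) for i in range(4)]
        print(nearest_texture(covs[2], covs))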

    Decision-Making with Heterogeneous Sensors - A Copula Based Approach

    Statistical decision making has wide-ranging applications, from communications and signal processing to econometrics and finance. In contrast to the classical one source-one receiver paradigm, several applications have been identified in the recent past that require acquiring data from multiple sources or sensors. Information from the multiple sensors is transmitted to a remotely located receiver, known as the fusion center, which makes a global decision. Past work has largely focused on fusion of information from homogeneous sensors. This dissertation extends the formulation to the case when the local sensors may possess disparate sensing modalities. Both the theoretical and practical aspects of multimodal signal processing are considered. The first and foremost challenge is to 'adequately' model the joint statistics of such heterogeneous sensors. We propose the use of copula theory for this purpose. Copula models are general descriptors of dependence. They provide a way to characterize the nonlinear functional relationships between the multiple modalities, which are otherwise difficult to formalize. The important problem of selecting the 'best' copula function from a given set of valid copula densities is addressed, especially in the context of binary hypothesis testing problems. Both the training-testing paradigm, where a training set is assumed to be available for learning the copula models prior to system deployment, and a generalized likelihood ratio test (GLRT) based fusion rule for the online selection and estimation of copula parameters are considered. The developed theory is corroborated with extensive computer simulations as well as results on real-world data. Sensor observations (or features extracted thereof) are most often quantized before their transmission to the fusion center for bandwidth and power conservation. A detection scheme is proposed for this problem assuming uniform scalar quantizers at each sensor. The designed rule is applicable for both binary and multibit local sensor decisions. An alternative suboptimal but computationally efficient fusion rule is also designed, which involves injecting a deliberate disturbance into the local sensor decisions before fusion. The rule is based on Widrow's statistical theory of quantization. Addition of controlled noise helps to 'linearize' the highly nonlinear quantization process, thus resulting in computational savings. It is shown that although the introduction of external noise does cause a reduction in the received signal-to-noise ratio, the proposed approach can be highly accurate when the input signals have bandlimited characteristic functions and the number of quantization levels is large. The problem of quantifying neural synchrony using copula functions is also investigated. It has been widely accepted that multiple simultaneously recorded electroencephalographic signals exhibit nonlinear and non-Gaussian statistics. While existing and popular measures such as the correlation coefficient, corr-entropy coefficient, coh-entropy and mutual information are limited to being bivariate and hence applicable only to pairs of channels, measures such as Granger causality, even though multivariate, fail to account for any nonlinear inter-channel dependence. The application of copula theory helps alleviate both these limitations. The problem of distinguishing patients with mild cognitive impairment from age-matched control subjects is also considered. Results show that the copula-derived synchrony measures, when used in conjunction with other synchrony measures, improve the detection of Alzheimer's disease onset.
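
    A minimal sketch of the copula-based fusion idea, under simplifying assumptions: the code below builds a bivariate Gaussian copula joint density from two disparate sensor margins and compares log-likelihoods under two hypothesised models, as a toy stand-in for a likelihood-ratio fusion rule. The sensor names, the chosen margins (Gaussian and Rayleigh), the correlation values and the zero threshold are illustrative assumptions; the dissertation's copula selection and GLRT machinery are not reproduced here.

        import numpy as np
        from scipy.stats import norm, multivariate_normal, rayleigh

        def gaussian_copula_logpdf(u, rho):
            """Log-density of a bivariate Gaussian copula at rows of u = (u1, u2)."""
            z = norm.ppf(np.clip(u, 1e-12, 1 - 1e-12))        # probit transform of the margins
            joint = multivariate_normal(mean=[0, 0], cov=[[1, rho], [rho, 1]])
            return joint.logpdf(z) - norm.logpdf(z).sum(axis=-1)

        def copula_loglik(x_radar, x_acoustic, margin_r, margin_a, rho):
            """Joint log-likelihood of heterogeneous sensor pairs under a copula model."""
            u = np.column_stack([margin_r.cdf(x_radar), margin_a.cdf(x_acoustic)])
            return (margin_r.logpdf(x_radar) + margin_a.logpdf(x_acoustic)
                    + gaussian_copula_logpdf(u, rho)).sum()

        # toy fusion rule: compare the log-likelihood ratio with a threshold (all values assumed)
        rng = np.random.default_rng(1)
        x_r = rng.standard_normal(100) + 0.5
        x_a = rayleigh(scale=1.2).rvs(100, random_state=2)
        llr = (copula_loglik(x_r, x_a, norm(0.5, 1), rayleigh(scale=1.2), rho=0.4)   # H1 model
               - copula_loglik(x_r, x_a, norm(0, 1), rayleigh(scale=1.0), rho=0.0))  # H0 model
        print("decide H1" if llr > 0 else "decide H0")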

    Uncertainty-based image segmentation with unsupervised mixture models

    In this thesis, a contribution to explainable artificial intelligence is made. More specifically, the aspect of artificial intelligence which focuses on recreating the human perception is tackled from a previously neglected direction. A variant of human perception is building a mental model of the extents of semantic objects which appear in the field of view. If this task is performed by an algorithm, it is termed image segmentation. Recent methods in this area are mostly trained in a supervised fashion by exploiting a data set of ground-truth segmentations that is as extensive as possible. Further, semantic segmentation is almost exclusively tackled by Deep Neural Networks (DNNs). Both trends pose several issues. First, the annotations have to be acquired somehow. This is especially inconvenient if, for instance, a new sensor becomes available, new domains are explored, or different quantities become of interest. In each case, the cumbersome and potentially costly labelling of the raw data has to be redone. While annotating keywords to an image can be achieved in a reasonable amount of time, annotating every pixel of an image with its respective ground-truth class is orders of magnitude more time-consuming. Unfortunately, the quality of the labels is an issue as well, because fine-grained structures like hair, grass, or the boundaries of biological cells have to be outlined exactly in image segmentation in order to derive meaningful conclusions. Second, DNNs are discriminative models. They simply learn to separate the features of the respective classes. While this works exceptionally well if enough data is provided, quantifying the uncertainty with which a prediction is made is then not directly possible. In order to allow this, the models have to be designed differently. This is achieved by generatively modelling the distribution of the features instead of learning the boundaries between classes. Hence, image segmentation is tackled from a generative perspective in this thesis. By utilizing mixture models, which belong to the set of generative models, the quantification of uncertainty is an implicit property. Additionally, the need for annotations can be reduced because mixture models are conveniently estimated in the unsupervised setting. Starting with the computation of the upper bounds of commonly used probability distributions, this knowledge is used to build a novel probability distribution. It is based on flexible marginal distributions and a copula which models the dependence structure of multiple features. This modular approach allows great flexibility and shows excellent performance at image segmentation. After deriving the upper bounds, different ways to reach them in an unsupervised fashion are presented. Including the probable locations of edges in the unsupervised model estimation greatly increases the performance. The proposed models surpass state-of-the-art accuracies in the generative and unsupervised setting and are on par with many discriminative models. The analyses are conducted following the Bayesian paradigm, which allows computing uncertainty estimates of the model parameters. Finally, a novel approach combining a discriminative DNN and a local appearance model in a weakly supervised setting is presented. This combination yields a generative semantic segmentation model with minimal annotation effort.
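
    A minimal sketch of the uncertainty-aware unsupervised idea, under simplifying assumptions: instead of the copula-based mixture with flexible margins developed in the thesis, a plain Gaussian mixture (scikit-learn) is fitted to per-pixel colour features without labels, the responsibilities give an unsupervised label map, and their entropy serves as a per-pixel uncertainty map. All names and the synthetic image are illustrative.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def unsupervised_segmentation(image, n_classes=3, seed=0):
            """Cluster per-pixel features with a GMM; return a label map and a
            per-pixel uncertainty map (entropy of the class responsibilities)."""
            h, w, c = image.shape
            feats = image.reshape(-1, c).astype(float)
            gmm = GaussianMixture(n_components=n_classes, covariance_type="full",
                                  random_state=seed).fit(feats)
            resp = gmm.predict_proba(feats)                 # responsibilities per pixel
            labels = resp.argmax(axis=1).reshape(h, w)
            entropy = -(resp * np.log(resp + 1e-12)).sum(axis=1).reshape(h, w)
            return labels, entropy

        # toy usage on a synthetic two-region RGB image (assumed data)
        rng = np.random.default_rng(0)
        img = np.concatenate([rng.normal(0.2, 0.05, (32, 64, 3)),
                              rng.normal(0.8, 0.05, (32, 64, 3))], axis=0)
        labels, uncertainty = unsupervised_segmentation(img, n_classes=2)
        print(labels.shape, float(uncertainty.max()))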

    Information Geometry

    This Special Issue of the journal Entropy, titled “Information Geometry I”, contains a collection of 17 papers concerning the foundations and applications of information geometry. Based on a geometrical interpretation of probability, information geometry has become a rich mathematical field employing the methods of differential geometry. It has numerous applications to data science, physics, and neuroscience. Presenting original research, yet written in an accessible, tutorial style, this collection of papers will be useful for scientists who are new to the field, while providing an excellent reference for the more experienced researcher. Several papers are written by authorities in the field, and topics cover the foundations of information geometry, as well as applications to statistics, Bayesian inference, machine learning, complex systems, physics, and neuroscience

    Statistical distances and probability metrics for multivariate data, ensembles and probability distributions

    The use of distance measures in Statistics is of fundamental importance in solving practical problems, such as hypothesis testing, independence testing, goodness-of-fit tests, classification tasks, outlier detection and density estimation, to name just a few. The Mahalanobis distance was originally developed to compute the distance from a point to the center of a distribution taking into account the distribution of the data, in this case the normal distribution. This is the only distance measure in the statistical literature that takes into account the probabilistic information of the data. In this thesis we address the study of different distance measures that share a fundamental characteristic: all the proposed distances incorporate probabilistic information. The thesis is organized as follows: In Chapter 1 we motivate the problems addressed in this thesis. In Chapter 2 we present the usual definitions and properties of the different distance measures for multivariate data and for probability distributions treated in the statistical literature. In Chapter 3 we propose a distance that generalizes the Mahalanobis distance to the case where the distribution of the data is not Gaussian. To this end, we introduce a Mercer kernel based on the distribution of the data at hand. The Mercer kernel induces distances from a point to the center of a distribution. In this chapter we also present a plug-in estimator of the distance that allows us to solve classification and outlier detection problems in an efficient way. In Chapter 4 of this thesis, we present two new distance measures for multivariate data that incorporate the probabilistic information contained in the sample. In this chapter we also introduce two estimation methods for the proposed distances and we study their convergence empirically. In the experimental section of Chapter 4 we solve classification problems and obtain better results than several standard classification methods in the literature of discriminant analysis. In Chapter 5 we propose a new family of probability metrics and we study its theoretical properties. We introduce an estimation method to compute the proposed distances that is based on the estimation of level sets, avoiding in this way the difficult task of density estimation. In this chapter we show that the proposed distance is able to solve hypothesis tests and classification problems in general contexts, obtaining better results than other standard methods in statistics. In Chapter 6 we introduce a new distance for sets of points. To this end, we define a dissimilarity measure for points by using a Mercer kernel that is later extended to a Mercer kernel for sets of points. In this way, we are able to induce a dissimilarity index for sets of points that is used as an input for an adaptive k-means clustering algorithm. The proposed clustering algorithm considers an alignment of the sets of points by taking into account a wide range of possible warping functions. This chapter presents an application to clustering neuronal spike trains, a relevant problem in neural coding. Finally, in Chapter 7, we present the general conclusions of this thesis and future research lines.
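
    As a generic, hedged illustration of how a Mercer kernel can induce a distance from a point to the 'center' of a distribution (Chapter 3 builds a specific kernel from the data distribution, which is not reproduced here), the sketch below uses a Gaussian kernel and computes the RKHS distance from a point to the kernel mean embedding of a sample, which can serve as an outlier score. All names and parameters are assumptions for illustration.

        import numpy as np

        def rbf_kernel(A, B, gamma=0.5):
            """Gaussian (Mercer) kernel matrix between the rows of A and B."""
            sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * sq)

        def kernel_distance_to_center(x, sample, gamma=0.5):
            """Distance from point x to the kernel mean embedding of `sample`."""
            x = np.atleast_2d(x)
            k_xx = rbf_kernel(x, x, gamma)[0, 0]
            k_xs = rbf_kernel(x, sample, gamma).mean()
            k_ss = rbf_kernel(sample, sample, gamma).mean()
            return float(np.sqrt(max(k_xx - 2 * k_xs + k_ss, 0.0)))

        # toy outlier scoring: a point far from the data cloud gets a larger distance
        rng = np.random.default_rng(0)
        data = rng.standard_normal((200, 2))
        print(kernel_distance_to_center(np.array([0.0, 0.0]), data),
              kernel_distance_to_center(np.array([5.0, 5.0]), data))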

    Determining the Points of Change in Time Series of Polarimetric SAR Data


    Learning and inference with Wasserstein metrics

    Ph.D. thesis by Charles Frogner, Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, 2018. This thesis develops new approaches for three problems in machine learning, using tools from the study of optimal transport (or Wasserstein) distances between probability distributions. Optimal transport distances capture an intuitive notion of similarity between distributions by incorporating the underlying geometry of the domain of the distributions. Despite their intuitive appeal, optimal transport distances are often difficult to apply in practice, as computing them requires solving a costly optimization problem. In each setting studied here, we describe a numerical method that overcomes this computational bottleneck and enables scaling to real data. In the first part, we consider the problem of multi-output learning in the presence of a metric on the output domain. We develop a loss function that measures the Wasserstein distance between the prediction and the ground truth, and describe an efficient learning algorithm based on entropic regularization of the optimal transport problem. We additionally propose a novel extension of the Wasserstein distance from probability measures to unnormalized measures, which is applicable in settings where the ground truth is not naturally expressed as a probability distribution. We show statistical learning bounds for both the Wasserstein loss and its unnormalized counterpart. The Wasserstein loss can encourage smoothness of the predictions with respect to a chosen metric on the output space. We demonstrate this property on a real-data image tagging problem, outperforming a baseline that doesn't use the metric. In the second part, we consider the probabilistic inference problem for diffusion processes. Such processes model a variety of stochastic phenomena and appear often in continuous-time state space models. Exact inference for diffusion processes is generally intractable. In this work, we describe a novel approximate inference method, which is based on a characterization of the diffusion as following a gradient flow in a space of probability densities endowed with a Wasserstein metric. Existing methods for computing this Wasserstein gradient flow rely on discretizing the underlying domain of the diffusion, prohibiting their application to problems in more than several dimensions. In the current work, we propose a novel algorithm for computing a Wasserstein gradient flow that operates directly in a space of continuous functions, free of any underlying mesh. We apply our approximate gradient flow to the problem of filtering a diffusion, showing superior performance where standard filters struggle. Finally, we study the ecological inference problem, which is that of reasoning from aggregate measurements of a population to inferences about the individual behaviors of its members. This problem arises often when dealing with data from economics and political sciences, such as when attempting to infer the demographic breakdown of votes for each political party, given only the aggregate demographic and vote counts separately. Ecological inference is generally ill-posed and requires prior information to distinguish a unique solution. We propose a novel, general framework for ecological inference that allows for a variety of priors and enables efficient computation of the most probable solution. Unlike previous methods, which rely on Monte Carlo estimates of the posterior, our inference procedure uses an efficient fixed-point iteration that is linearly convergent. Given suitable prior information, our method can achieve more accurate inferences than existing methods. We additionally explore a sampling algorithm for estimating credible regions.
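
    The first part of the thesis relies on entropic regularization of the optimal transport problem to make the Wasserstein loss computable. A minimal sketch of that computational core, assuming discrete histograms on a fixed grid, is the Sinkhorn iteration below; the regularization strength, grid and function names are illustrative, and the thesis's learning algorithm and unnormalized extension are not reproduced.

        import numpy as np

        def sinkhorn(a, b, cost, eps=0.05, n_iter=500):
            """Entropic-regularised optimal transport between histograms a and b.
            Returns the transport plan and its transport cost <plan, cost>."""
            K = np.exp(-cost / eps)                 # Gibbs kernel
            u = np.ones_like(a)
            for _ in range(n_iter):                 # alternating marginal scaling
                v = b / (K.T @ u)
                u = a / (K @ v)
            plan = u[:, None] * K * v[None, :]
            return plan, float((plan * cost).sum())

        # toy usage: two histograms on a 1-D grid with squared-distance ground cost
        x = np.linspace(0, 1, 50)
        a = np.exp(-((x - 0.25) ** 2) / 0.01); a /= a.sum()
        b = np.exp(-((x - 0.75) ** 2) / 0.01); b /= b.sum()
        cost = (x[:, None] - x[None, :]) ** 2
        plan, w_cost = sinkhorn(a, b, cost)
        print(round(w_cost, 4))   # close to (0.75 - 0.25)^2 = 0.25 for small eps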

    Probabilistic Graphical Models for Medical Image Segmentation

    Image segmentation constitutes one of the elementary tasks in computer vision. Several variants exist, one of them being the segmentation of layers that obey a natural ordering constraint; an instance of this problem class is the segmentation of the cell layers in the human retina. In this thesis we study a segmentation approach for this problem class that applies the machinery of probabilistic graphical models. Linked to probabilistic graphical models is the task of inference, that is, given an input scan of the retina, how to obtain an individual prediction or, if possible, a distribution over potential segmentations of that scan. In general, exact inference is infeasible, which is why we study an approximate approach based on variational inference that allows us to efficiently approximate the full posterior distribution. A distinguishing feature of our approach is the incorporation of a prior shape model, which is not restricted to local information. We evaluate our approach on different data sets, including pathological scans, and demonstrate how global shape information yields state-of-the-art segmentation results. Moreover, since we approximately infer the full posterior distribution, we are able to assess the quality of our prediction as well as rate the scan in terms of its abnormality. Motivated by our problem, we also investigate non-parametric density estimation with a log-concavity constraint. This class of density functions is restricted to the convex hull of the empirical data, which naturally leads to shape distributions that comply with the ordering constraint of retina layers by not assigning any probability mass to invalid shape configurations. We investigate a prominent approach from the literature, show its extension from 2-D to N-D, and apply it to retina boundary data.
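
    The ordering constraint on retina layers can be made concrete with a small sketch: given per-row scores for K boundaries in a single image column, a dynamic programme picks one row per boundary subject to b_1 <= b_2 <= ... <= b_K. This is only an illustration of the constraint, assuming independent columns and given scores; the thesis's graphical model, variational inference and global shape prior are not reproduced.

        import numpy as np

        def ordered_boundaries(scores):
            """Pick one row per boundary, maximising the summed scores subject to
            the ordering constraint b_1 <= b_2 <= ... <= b_K (dynamic programme).
            `scores` has shape (K, n_rows): score of placing boundary k at row r."""
            K, R = scores.shape
            best = np.zeros((K, R))
            arg = np.zeros((K, R), dtype=int)
            best[0] = scores[0]
            for k in range(1, K):
                run_best, run_arg = -np.inf, 0      # running max over rows r' <= r
                for r in range(R):
                    if best[k - 1, r] > run_best:
                        run_best, run_arg = best[k - 1, r], r
                    best[k, r] = scores[k, r] + run_best
                    arg[k, r] = run_arg
            rows = [int(np.argmax(best[-1]))]       # backtrack from the last boundary
            for k in range(K - 1, 0, -1):
                rows.append(int(arg[k, rows[-1]]))
            return rows[::-1]

        # toy usage: three ordered boundaries in a 10-row column (assumed scores)
        rng = np.random.default_rng(0)
        print(ordered_boundaries(rng.random((3, 10))))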

    Information theoretic refinement criteria for image synthesis

    This work is framed within the context of computer graphics, starting out from the intersection of three fields: rendering, information theory, and complexity. Initially, the concept of scene complexity is analysed considering three perspectives from a geometric visibility point of view: complexity at an interior point, complexity of an animation, and complexity of a region. The main focus of this dissertation is the exploration and development of new refinement criteria for the global illumination problem. Information-theoretic measures based on Shannon entropy and the Havrda-Charvát-Tsallis generalised entropy, together with f-divergences, are analysed as kernels of refinement. We show how they give us a rich variety of efficient and highly discriminative measures which are applicable to rendering in its pixel-driven (ray-tracing) and object-space (hierarchical radiosity) approaches. Firstly, based on Shannon entropy, a set of pixel quality and pixel contrast measures are defined. They are applied to supersampling in ray-tracing as refinement criteria, obtaining a new entropy-based adaptive sampling algorithm with a high quality-versus-cost ratio. Secondly, based on the Havrda-Charvát-Tsallis generalised entropy and generalised mutual information, three new refinement criteria are defined for hierarchical radiosity. In correspondence with three classic approaches, oracles based on transported information, information smoothness, and mutual information are presented, with very significant results for the latter. Finally, three members of the family of Csiszár f-divergences (the Kullback-Leibler, chi-square, and Hellinger divergences) are analysed as refinement criteria, showing good results for both ray-tracing and hierarchical radiosity.
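
    As a hedged illustration of f-divergences used as refinement criteria, the sketch below evaluates the Kullback-Leibler, chi-square, and Hellinger divergences between two normalised luminance histograms and triggers refinement when the divergence exceeds a threshold. The histograms, threshold and function names are assumptions for illustration rather than the dissertation's actual oracles.

        import numpy as np

        def f_divergences(p, q, eps=1e-12):
            """Kullback-Leibler, chi-square and Hellinger divergences between two
            discrete distributions (eps added and renormalised for stability)."""
            p = np.asarray(p, float) + eps; p /= p.sum()
            q = np.asarray(q, float) + eps; q /= q.sum()
            kl = float((p * np.log(p / q)).sum())
            chi2 = float(((p - q) ** 2 / q).sum())
            hellinger = float(np.sqrt(0.5 * ((np.sqrt(p) - np.sqrt(q)) ** 2).sum()))
            return kl, chi2, hellinger

        def needs_refinement(sample_hist, parent_hist, threshold=0.05):
            """Refine a pixel or patch when its sample distribution diverges from
            the coarser parent estimate by more than `threshold` (KL used here)."""
            kl, _, _ = f_divergences(sample_hist, parent_hist)
            return kl > threshold

        # toy usage: two 8-bin luminance histograms (assumed data)
        print(needs_refinement([5, 9, 14, 20, 20, 14, 9, 5],
                               [12, 12, 13, 13, 13, 13, 12, 12]))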