
    Geometrically Enriched Latent Spaces

    A common assumption in generative models is that the generator immerses the latent space into a Euclidean ambient space. Instead, we consider the ambient space to be a Riemannian manifold, which allows for encoding domain knowledge through the associated Riemannian metric. Shortest paths can then be defined accordingly in the latent space to both follow the learned manifold and respect the ambient geometry. Through careful design of the ambient metric we can ensure that shortest paths are well-behaved even for deterministic generators that otherwise would exhibit a misleading bias. Experimentally, we show that our approach improves the interpretability of learned representations using both stochastic and deterministic generators.
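    A minimal sketch of the core computation, under toy assumptions (the generator g and ambient metric G_ambient below are placeholders, not the paper's models): a discretized latent curve is measured by the metric pulled back through the generator's Jacobian, M = J^T G J, and its energy is minimized to approximate a shortest path.

        # Sketch only: `g` and `G_ambient` are toy stand-ins for the paper's
        # generator and designed ambient metric.
        import jax
        import jax.numpy as jnp

        def g(z):                               # toy generator: latent R^2 -> ambient R^3
            return jnp.tanh(z @ jnp.ones((2, 3)))

        def G_ambient(x):                       # toy ambient Riemannian metric
            return jnp.eye(3) * (1.0 + jnp.sum(x ** 2))

        def curve_energy(zs):                   # discrete energy of latent curve z_0..z_T
            def seg(z0, z1):
                J = jax.jacobian(g)(z0)         # generator Jacobian at z0
                M = J.T @ G_ambient(g(z0)) @ J  # pullback metric J^T G J
                d = z1 - z0
                return d @ M @ d
            return sum(seg(zs[i], zs[i + 1]) for i in range(len(zs) - 1))

        zs = jnp.linspace(jnp.zeros(2), jnp.ones(2), 10)    # straight-line init
        grad = jax.grad(curve_energy)
        for _ in range(200):                    # gradient descent on interior points
            gz = grad(zs).at[0].set(0.0).at[-1].set(0.0)    # endpoints stay fixed
            zs = zs - 0.01 * gz

    Descending the discrete curve energy recovers an approximate geodesic under the pulled-back metric; the paper's contribution lies in designing the ambient metric so that such paths remain well-behaved even for deterministic generators.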

    Natural-gradient learning for spiking neurons.

    In many normative theories of synaptic plasticity, weight updates implicitly depend on the chosen parametrization of the weights. This problem relates, for example, to neuronal morphology: synapses which are functionally equivalent in terms of their impact on somatic firing can differ substantially in spine size due to their different positions along the dendritic tree. Classical theories based on Euclidean-gradient descent can easily lead to inconsistencies due to such parametrization dependence. These issues are resolved in the framework of Riemannian geometry, in which we propose that plasticity instead follows natural-gradient descent. Under this hypothesis, we derive a synaptic learning rule for spiking neurons that couples functional efficiency with the explanation of several well-documented biological phenomena such as dendritic democracy, multiplicative scaling, and heterosynaptic plasticity. We therefore suggest that in its search for functional synaptic plasticity, evolution might have come up with its own version of natural-gradient descent.
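    As a concrete contrast with Euclidean-gradient descent, here is a minimal sketch (not the paper's derivation) of a natural-gradient step for a toy sigmoidal rate neuron with Poisson spiking; the model, the empirical Fisher estimate, and the step size are illustrative assumptions.

        # Sketch only: toy neuron model and empirical Fisher, not the paper's rule.
        import numpy as np

        def rate(w, x):                         # sigmoidal firing rate
            return 1.0 / (1.0 + np.exp(-w @ x))

        def fisher(w, X):                       # empirical Fisher for Poisson spiking
            F = np.zeros((len(w), len(w)))
            for x in X:
                r = rate(w, x)
                dr = r * (1.0 - r) * x          # gradient of the rate w.r.t. w
                F += np.outer(dr, dr) / max(r, 1e-9)   # E[(dr)(dr)^T / r]
            return F / len(X)

        def natural_step(w, grad, X, eta=0.1, damping=1e-6):
            # A Euclidean step would be `w - eta * grad`; the natural step
            # preconditions by the inverse Fisher matrix, which removes the
            # dependence on how the weights are parametrized.
            F = fisher(w, X) + damping * np.eye(len(w))
            return w - eta * np.linalg.solve(F, grad)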

    Information geometry

    This Special Issue of the journal Entropy, titled “Information Geometry I”, contains a collection of 17 papers concerning the foundations and applications of information geometry. Based on a geometrical interpretation of probability, information geometry has become a rich mathematical field employing the methods of differential geometry. It has numerous applications to data science, physics, and neuroscience. Presenting original research, yet written in an accessible, tutorial style, this collection of papers will be useful for scientists who are new to the field, while providing an excellent reference for the more experienced researcher. Several papers are written by authorities in the field, and topics cover the foundations of information geometry, as well as applications to statistics, Bayesian inference, machine learning, complex systems, physics, and neuroscience.
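    For orientation, the central object underlying most of these papers is the Fisher information metric, which turns a parametric family of distributions p(x; \theta) into a Riemannian manifold:

        g_{ij}(\theta) \;=\; \mathbb{E}_{x \sim p(x;\theta)}\!\left[
            \frac{\partial \log p(x;\theta)}{\partial \theta^i}\,
            \frac{\partial \log p(x;\theta)}{\partial \theta^j} \right]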

    New Directions for Contact Integrators

    Contact integrators are a family of geometric numerical schemes which guarantee the conservation of the contact structure. In this work we review the construction of both the variational and Hamiltonian versions of these methods. We illustrate some of the advantages of geometric integration in the dissipative setting by focusing on models inspired by recent studies in celestial mechanics and cosmology. (To appear as Chapter 24 in GSI 2021, Springer LNCS 1282.)
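    As an illustration of the dissipative setting, here is a minimal sketch of a contact Hamiltonian system H(q, p, s) = p^2/2 + V(q) + alpha*s, whose contact flow reproduces a linearly damped oscillator; the explicit Euler step below is only a stand-in for the variational and Hamiltonian contact schemes the paper constructs.

        # Sketch only: explicit Euler on the contact Hamilton equations,
        #   qdot = dH/dp,  pdot = -dH/dq - p dH/ds,  sdot = p dH/dp - H,
        # not one of the structure-preserving schemes from the paper.
        import numpy as np

        alpha = 0.1                             # dissipation coefficient
        V  = lambda q: 0.5 * q ** 2             # harmonic potential
        dV = lambda q: q

        def contact_euler_step(q, p, s, dt):
            H = 0.5 * p ** 2 + V(q) + alpha * s
            q_new = q + dt * p                  # dH/dp = p
            p_new = p + dt * (-dV(q) - alpha * p)
            s_new = s + dt * (p ** 2 - H)       # p*dH/dp - H
            return q_new, p_new, s_new

        q, p, s = 1.0, 0.0, 0.0
        for _ in range(1000):
            q, p, s = contact_euler_step(q, p, s, dt=0.01)

    A genuine contact integrator replaces this Euler step with one that preserves the contact structure exactly, which is the property the paper's schemes guarantee.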

    Learning Dynamics from Data Using Optimal Transport Techniques and Applications

    Optimal transport (OT) has been studied widely in recent years; the concept of Wasserstein distance has found numerous applications in computational mathematics, machine learning, engineering, and even finance. Meanwhile, as both the amount of available data and the need to exploit it grow rapidly, data-driven models show great potential in real-world applications. In this thesis, we apply the theory of OT and design data-driven algorithms to formulate and compute various OT problems. We also build a framework for learning the inverse OT problem. Furthermore, we develop OT- and deep-learning-based models to solve problems related to stochastic differential equations, optimal control, and mean field games, all in data-driven settings. Chapter 2 provides the mathematical concepts and results that form the basis of this thesis, with brief surveys of optimal transport, stochastic differential equations, Fokker-Planck equations, deep learning, optimal control, and mean field games. Chapters 3 to 5 present several scalable algorithms for optimal transport problems in different settings. Specifically, Chapter 3 introduces a new saddle-point scheme and learning strategy for computing the Wasserstein geodesic, as well as the Wasserstein distance and the OT map, between two probability distributions in high dimensions. We parametrize the map and the Lagrange multipliers as neural networks, and demonstrate the performance of our algorithms through a series of experiments with both synthetic and realistic data. Chapter 4 presents a scalable algorithm for computing the Monge map between two probability distributions, a problem that remains challenging despite the rapid development of numerical methods for OT. Similarly, we formulate the problem as a mini-max problem and solve it via deep learning, again demonstrating performance on both synthetic and realistic data. Chapter 5 studies OT from an inverse viewpoint, the inverse OT (IOT) problem: learning the cost function of OT from an observed transport plan or its samples. We derive an unconstrained convex optimization formulation of the inverse OT problem and provide a comprehensive characterization of its properties, including uniqueness of solutions. We also develop two numerical algorithms: a fast matrix-scaling method based on the Sinkhorn-Knopp algorithm for discrete OT, and a learning-based algorithm that parametrizes the cost function as a deep neural network for continuous OT. Our numerical results demonstrate promising efficiency and accuracy advantages of the proposed algorithms over existing state-of-the-art methods. Chapter 6 proposes a novel method that uses the weak form of the Fokker-Planck equation (FPE), a partial differential equation, to describe the density evolution of data in sampled form, combined with a Wasserstein generative adversarial network (WGAN) in the training process. In this sample-based framework we are able to learn nonlinear dynamics from aggregate data without explicitly solving the FPE, and we demonstrate the approach on a series of synthetic and real-world data sets. Chapter 7 introduces the application of OT and neural networks to optimal density control. In particular, we parametrize the control strategy via neural networks and provide an algorithm to learn a strategy that drives samples from one distribution to new locations following a target distribution; we demonstrate the method in both synthetic and realistic experiments, including settings with perturbation fields. Finally, Chapter 8 presents applications of mean field games to generative modeling and finance. In more detail, we build a GAN framework upon mean field games to generate a desired distribution starting from white noise and investigate its connection to OT; we also apply mean field game theory to study the equilibrium trading price in stock markets, and validate the theoretical results with experiments on real trading data.
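    As one concrete primitive from this toolbox, here is a minimal sketch of the Sinkhorn-Knopp iteration for entropy-regularized discrete OT, the matrix-scaling routine on which the discrete IOT algorithm of Chapter 5 builds; the histograms, cost matrix, and regularization strength below are toy assumptions.

        # Sketch only: classic Sinkhorn-Knopp scaling, not the thesis code.
        import numpy as np

        def sinkhorn(mu, nu, C, eps=0.05, iters=500):
            K = np.exp(-C / eps)                # Gibbs kernel of the cost matrix
            u = np.ones_like(mu)
            for _ in range(iters):
                v = nu / (K.T @ u)              # alternate scalings to match
                u = mu / (K @ v)                # the two marginals
            P = u[:, None] * K * v[None, :]     # approximate optimal transport plan
            return P, np.sum(P * C)             # plan and regularized transport cost

        # Example: two histograms on [0, 1] with a squared-distance cost.
        x = np.linspace(0.0, 1.0, 50)
        C = (x[:, None] - x[None, :]) ** 2
        mu = np.ones(50) / 50
        nu = np.exp(-(x - 0.7) ** 2 / 0.01)
        nu /= nu.sum()
        P, cost = sinkhorn(mu, nu, C)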

    Image analysis and statistical modeling for applications in cytometry and bioprocess control

    Today, signal processing has a central role in many of the advancements in systems biology. Modern signal processing is required to provide efficient computational solutions to unravel complex problems that are either arduous or impossible to solve using conventional approaches. For example, imaging-based high-throughput experiments enable cells to be examined at even the subcellular level, yielding huge amounts of image data. Cytometry is an integral part of such experiments and involves the measurement of different cell parameters, which requires the extraction of quantitative experimental values from cell microscopy images. To do that for such a large number of images, fast and accurate automated image analysis methods are required. In another example, the modeling of bioprocesses and their scale-up is a challenging task where different scales have different parameters and there are often more variables than available observations, thus requiring special methodology. In many biomedical cell microscopy studies, it is necessary to analyze the images at the single-cell or even subcellular level since, owing to the heterogeneity of cell populations, population-averaged measurements are often inconclusive. Moreover, the emergence of imaging-based high-content screening experiments, especially for drug design, has put single-cell analysis at the forefront, since studying the dynamics of single-cell gene expression is required for tracking and quantifying cell phenotypic variations. The ability to perform single-cell analysis depends on the accuracy of image segmentation in detecting individual cells in images. However, clumping of cells at both the nucleus and cytoplasm level hinders accurate cell image segmentation. Part of this thesis work concentrates on developing accurate automated methods for segmentation of bright-field as well as multichannel fluorescence microscopy images of cells, with an emphasis on clump splitting so that cells are separated from each other as well as from the background. The complexity of bioprocess development and control calls for computational modeling and data analysis approaches for process optimization and scale-up, a need underscored by the fact that obtaining the a priori knowledge required for traditional scale-up criteria may at times be difficult. Moreover, efficient process modeling may provide the added advantage of automatically identifying influential control parameters. Determining the values of the identified parameters and being able to predict them at different scales help in process control and in achieving scale-up. Bioprocess modeling and control can also benefit from single-cell analysis, which could add a new dimension once imaging-based in-line sensors allow key process variables to be monitored. In this thesis, we exploited signal processing techniques for statistical modeling of a bioprocess and its scale-up, as well as for the development of fully automated methods for biomedical cell microscopy image segmentation, from image pre-processing and initial segmentation to clump splitting and image post-processing, with the goal of facilitating high-throughput analysis. To highlight the contribution of this work, we present three application case studies in which we applied the developed methods to solve problems of cell image segmentation and bioprocess modeling and scale-up.
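    For context, a generic baseline for the clump-splitting step, a distance-transform-plus-watershed pipeline, can be sketched as follows (this is a standard approach, not the methods developed in the thesis, and the parameters are illustrative):

        # Sketch only: standard watershed-based clump splitting on a binary mask.
        import numpy as np
        from scipy import ndimage as ndi
        from skimage.feature import peak_local_max
        from skimage.segmentation import watershed

        def split_clumps(mask):
            """Split touching cells in a binary mask into labeled regions."""
            dist = ndi.distance_transform_edt(mask)         # distance to background
            coords = peak_local_max(dist, min_distance=5,   # roughly one peak per cell
                                    labels=mask.astype(int))
            markers = np.zeros(mask.shape, dtype=int)
            markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
            # Watershed on the inverted distance map cuts clumps at the
            # narrow "necks" between touching cells.
            return watershed(-dist, markers, mask=mask)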

    SIS 2017. Statistics and Data Science: new challenges, new generations

    The 2017 SIS Conference aims to highlight the crucial role of statistics in data science. In this new domain of ‘meaning’ extracted from data, the increasing amount of data produced and available in databases has brought new challenges involving different fields: statistics, machine learning, information and computer science, optimization, and pattern recognition. Together, these make a considerable contribution to the analysis of ‘Big data’, open data, and relational and complex data, both structured and unstructured. The aim is to collect contributions from the different domains of statistics on high-dimensional data quality validation, sample extraction, dimension reduction, pattern selection, data modelling, hypothesis testing, and confirming conclusions drawn from the data.

    Untangling hotel industry’s inefficiency: An SFA approach applied to a renowned Portuguese hotel chain

    The present paper explores the technical efficiency of four hotels from the Teixeira Duarte Group, a renowned Portuguese hotel chain. An efficiency ranking of these four hotel units located in Portugal is established using Stochastic Frontier Analysis. This methodology makes it possible to discriminate between measurement error and systematic inefficiency in the estimation process, enabling investigation of the main causes of inefficiency. Several suggestions for efficiency improvement are offered for each hotel studied.
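    For readers unfamiliar with SFA, the standard composed-error specification (in the spirit of Aigner, Lovell, and Schmidt; the exact functional form estimated in the paper may differ) separates the two error components as follows:

        \ln y_i = \beta_0 + \sum_k \beta_k \ln x_{ik} + v_i - u_i,
        \qquad v_i \sim \mathcal{N}(0, \sigma_v^2), \quad
        u_i \sim \mathcal{N}^{+}(0, \sigma_u^2)

    Here v_i captures symmetric measurement error and u_i >= 0 is the one-sided systematic inefficiency term, with technical efficiency TE_i = exp(-u_i); it is this decomposition that allows noise to be distinguished from inefficiency.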