
    Boosting Bayesian Parameter Inference of Nonlinear Stochastic Differential Equation Models by Hamiltonian Scale Separation

    Parameter inference is a fundamental problem in data-driven modeling. Given observed data that is believed to be a realization of some parameterized model, the aim is to find parameter values that can explain the observed data. In many situations, the dominant sources of uncertainty must be included in the model to make reliable predictions. This naturally leads to stochastic models. Stochastic models render parameter inference much harder, as the aim then is to find a distribution of likely parameter values. In Bayesian statistics, which is a consistent framework for data-driven learning, this so-called posterior distribution can be used to make probabilistic predictions. We propose a novel, exact and very efficient approach for generating posterior parameter distributions for stochastic differential equation models calibrated to measured time series. The algorithm is inspired by re-interpreting the posterior distribution as a statistical-mechanics partition function of an object akin to a polymer, where the measurements are mapped onto heavier beads compared to those of the simulated data. To arrive at distribution samples, we employ a Hamiltonian Monte Carlo approach combined with multiple time-scale integration. A separation of time scales naturally arises if either the number of measurement points or the number of simulation points becomes large. Furthermore, at least for 1D problems, we can decouple the harmonic modes between measurement points and solve the fastest part of their dynamics analytically. Our approach is applicable to a wide range of inference problems and is highly parallelizable. Comment: 15 pages, 8 figures
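The core sampler named above, Hamiltonian Monte Carlo with leapfrog integration, can be illustrated with a minimal sketch. This is not the paper's polymer-bead formulation (which adds multiple time scales and analytic fast modes); it is a generic single-time-scale HMC on a toy one-dimensional standard-normal "posterior", with all settings (`eps`, `n_leap`) chosen for illustration.

```python
import numpy as np

def hmc_sample(logp_grad, logp, x0, n_samples=2000, eps=0.1, n_leap=20, seed=0):
    """Minimal Hamiltonian Monte Carlo with leapfrog integration."""
    rng = np.random.default_rng(seed)
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    samples = []
    for _ in range(n_samples):
        p = rng.standard_normal(x.shape)           # resample momentum
        x_new, p_new = x.copy(), p.copy()
        # leapfrog: half momentum step, alternating full steps, half momentum step
        p_new += 0.5 * eps * logp_grad(x_new)
        for _ in range(n_leap - 1):
            x_new += eps * p_new
            p_new += eps * logp_grad(x_new)
        x_new += eps * p_new
        p_new += 0.5 * eps * logp_grad(x_new)
        # Metropolis accept/reject keeps the sampler exact despite integration error
        h_old = -logp(x) + 0.5 * p @ p
        h_new = -logp(x_new) + 0.5 * p_new @ p_new
        if np.log(rng.uniform()) < h_old - h_new:
            x = x_new
        samples.append(x.copy())
    return np.array(samples)

# Toy target: standard normal "posterior"
logp = lambda x: -0.5 * x @ x
grad = lambda x: -x
draws = hmc_sample(grad, logp, x0=[3.0])
```

The Metropolis correction is what makes the approach "exact" in the sense used above: the leapfrog integrator only approximates the Hamiltonian flow, but the accept/reject step removes the resulting bias.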

    Deterministic submanifolds and analytic solution of the stochastic differential master equation describing a qubit

    This paper studies the stochastic differential equation (SDE) associated to a two-level quantum system (qubit) subject to Hamiltonian evolution as well as unmonitored and monitored decoherence channels. The latter imply a stochastic evolution of the quantum state (density operator), whose associated probability distribution we characterize. We first show that for two sets of typical experimental settings, corresponding either to weak quantum non-demolition measurements or to weak fluorescence measurements, the three Bloch coordinates of the qubit remain confined to a deterministically evolving surface or curve inside the Bloch sphere. We explicitly solve the deterministic evolution, and we provide a closed-form expression for the probability distribution on this surface or curve. Then we relate the existence in general of such deterministically evolving submanifolds to an accessibility question of control theory, which can be answered with an explicit algebraic criterion on the SDE. This allows us to show that, for a qubit, the above two sets of weak measurements are essentially the only ones featuring deterministic surfaces or curves.
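SDEs of the kind studied above are usually simulated with the Euler–Maruyama scheme. The following is a generic sketch on a scalar Ornstein–Uhlenbeck process, dX = −θX dt + σ dW, as a stand-in for the qubit master equation (which acts on density operators); the parameters are illustrative, and the only claim tested is the known stationary law N(0, σ²/(2θ)).

```python
import numpy as np

def euler_maruyama(theta, sigma, x0, dt, n_steps, n_paths, seed=0):
    """Euler-Maruyama integration of dX = -theta*X dt + sigma dW over many paths."""
    rng = np.random.default_rng(seed)
    x = np.full(n_paths, x0, dtype=float)
    for _ in range(n_steps):
        dw = rng.standard_normal(n_paths) * np.sqrt(dt)   # Wiener increment
        x += -theta * x * dt + sigma * dw
    return x

# Long-time ensemble statistics should match the stationary law N(0, sigma^2/(2*theta)).
final = euler_maruyama(theta=1.0, sigma=0.5, x0=1.0, dt=0.01, n_steps=1000, n_paths=5000)
```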

    A weak turbulence theory for incompressible magnetohydrodynamics

    We derive a weak turbulence formalism for incompressible magnetohydrodynamics. Three-wave interactions lead to a system of kinetic equations for the spectral densities of energy and helicity. The kinetic equations conserve energy in all wavevector planes normal to the applied magnetic field B₀ê∥. Numerically and analytically, we find energy spectra E± ~ k⊥^{n±}, such that n+ + n− = −4, where E± are the spectra of the Elsässer variables z± = v ± b in the two-dimensional case (k∥ = 0). The constants of the spectra are computed exactly and found to depend on the amount of correlation between the velocity and the magnetic field. Comparison with several numerical simulations and models is also made.
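The exponent relation n+ + n− = −4 can be checked on data by fitting log-log slopes of the two spectra. A minimal sketch with synthetic power-law spectra (the exponents n+ = −3, n− = −1 and prefactors here are purely illustrative, chosen only to satisfy the relation):

```python
import numpy as np

def fit_exponent(k, e):
    """Least-squares slope of log E versus log k, i.e. the spectral exponent."""
    slope, _intercept = np.polyfit(np.log(k), np.log(e), 1)
    return slope

k = np.logspace(0, 2, 50)          # perpendicular wavenumbers (illustrative range)
e_plus = 2.0 * k**-3.0             # assumed E+ ~ k_perp^{n+}, with n+ = -3
e_minus = 0.5 * k**-1.0            # assumed E- ~ k_perp^{n-}, with n- = -1
n_sum = fit_exponent(k, e_plus) + fit_exponent(k, e_minus)   # should be -4
```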

    Manifold Interpolating Optimal-Transport Flows for Trajectory Inference

    We present a method called Manifold Interpolating Optimal-Transport Flow (MIOFlow) that learns stochastic, continuous population dynamics from static snapshot samples taken at sporadic timepoints. MIOFlow combines dynamic models, manifold learning, and optimal transport by training neural ordinary differential equations (Neural ODE) to interpolate between static population snapshots as penalized by optimal transport with manifold ground distance. Further, we ensure that the flow follows the geometry by operating in the latent space of an autoencoder that we call a geodesic autoencoder (GAE). In the GAE, the latent-space distance between points is regularized to match a novel multiscale geodesic distance on the data manifold that we define. We show that this method is superior to normalizing flows, Schrödinger bridges and other generative models that are designed to flow from noise to data in terms of interpolating between populations. Theoretically, we link these trajectories with dynamic optimal transport. We evaluate our method on simulated data with bifurcations and merges, as well as scRNA-seq data from embryoid body differentiation, and acute myeloid leukemia treatment. Comment: Presented at NeurIPS 2022; 24 pages, 7 tables, 14 figures
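The optimal-transport penalty above requires a transport distance between snapshot populations. In one dimension (with equal-size samples) the exact Wasserstein-1 distance reduces to matching sorted samples, which gives a dependency-free sketch; the snapshot distributions below are hypothetical stand-ins for population data, not MIOFlow's actual pipeline.

```python
import numpy as np

def wasserstein_1d(x, y):
    """Exact 1-D W1 distance between equal-size samples: optimal coupling
    matches the sorted order (monotone rearrangement)."""
    return np.mean(np.abs(np.sort(x) - np.sort(y)))

rng = np.random.default_rng(0)
snap_t0 = rng.normal(0.0, 1.0, 1000)     # snapshot at an early timepoint
snap_t1 = rng.normal(3.0, 1.0, 1000)     # snapshot at a later timepoint

# Displacement interpolation: average the sorted samples to get a midpoint
# population, which in W1 sits exactly halfway between the endpoints.
midpoint = 0.5 * (np.sort(snap_t0) + np.sort(snap_t1))
d_half = wasserstein_1d(snap_t0, midpoint)
d_full = wasserstein_1d(snap_t0, snap_t1)
```

This halfway property is the 1-D case of the dynamic (Benamou–Brenier) view of optimal transport that the abstract links the learned trajectories to.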

    Liouville Decoherence in a Model of Flavour Oscillations in the presence of Dark Energy

    We study in some detail the master equation, and its solution in a simplified case modelling flavour oscillations of a two-level system, stemming from the Liouville-string approach to quantum space-time foam. In this framework we discuss the appearance of diffusion terms and decoherence due to the interaction of low-energy string matter with space-time defects, such as D-particles in the specific model of "D-particle foam", as well as dark energy contributions. We pay particular attention to contrasting the decoherent role of a cosmological constant in inducing exponential quantum damping in the evolution of low-energy observables, such as the probability of flavour oscillations, with the situation where the dark energy relaxes to zero for asymptotically large times, in which case such a damping is absent. Our findings may be of interest to (astrophysical) tests of quantum space-time foam models in the not-so-distant future. Comment: 27 pages, LaTeX
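The exponential quantum damping contrasted above can be illustrated with the standard two-level parametrization of a decoherence-damped transition probability, P(t) = sin²(2θ) · ½(1 − e^{−γt} cos(Δt)). This is a generic textbook form, not the specific D-particle-foam master equation of the paper; θ, Δ and γ are illustrative.

```python
import numpy as np

def oscillation_probability(t, delta, gamma, theta=np.pi / 4):
    """Two-level flavour transition probability with exponential decoherence
    damping: P(t) = sin^2(2*theta) * 0.5 * (1 - exp(-gamma*t) * cos(delta*t))."""
    return np.sin(2 * theta) ** 2 * 0.5 * (1.0 - np.exp(-gamma * t) * np.cos(delta * t))

t = np.linspace(0, 50, 500)
p_coherent = oscillation_probability(t, delta=1.0, gamma=0.0)  # undamped oscillation
p_damped = oscillation_probability(t, delta=1.0, gamma=0.5)    # damping washes it out
```

With γ > 0 the oscillation relaxes to the averaged value 1/2 at large t, which is the exponential damping signature; with γ = 0 (dark energy relaxed away) the full oscillation survives.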

    Blending generative models with deep learning for multidimensional phenotypic prediction from brain connectivity data

    Network science as a discipline has provided us with foundational machinery to study complex relational entities such as social networks, genomics, and econometrics. The human brain is a complex network that has recently garnered immense interest within the data science community. Connectomics, or the study of the underlying connectivity patterns in the brain, has become an important field of study for the characterization of various neurological disorders such as autism and schizophrenia. Such connectomic studies have provided several fundamental insights into the brain's intrinsic organisation and its implications for our behavior and health. This thesis proposes a collection of mathematical models that are capable of fusing information from functional and structural connectivity with phenotypic information. Here, functional connectivity is measured by resting-state functional MRI (rs-fMRI), while anatomical connectivity is captured using Diffusion Tensor Imaging (DTI). The phenotypic information of interest could refer to continuous measures of behavior or cognition, or may capture levels of impairment in the case of neuropsychiatric disorders. We first develop a joint network optimization framework to predict clinical severity from rs-fMRI connectivity matrices. This model couples two key terms into a unified optimization framework: a generative matrix factorization and a discriminative linear regression model. We demonstrate that the proposed joint inference strategy is successful in generalizing to prediction of impairments in Autism Spectrum Disorder (ASD) when compared with several machine learning, graph theoretic and statistical baselines. At the same time, the model is capable of extracting functional brain biomarkers that are informative of individual measures of clinical severity. We then present two modeling extensions to non-parametric and neural network regression models that are coupled with the same generative framework.
Building on these general principles, we extend our framework to incorporate multimodal information from Diffusion Tensor Imaging (DTI) and dynamic functional connectivity. At a high level, our generative matrix factorization now estimates a time-varying functional decomposition. At the same time, it is guided by anatomical connectivity priors in a graph-based regularization setup. This connectivity model is coupled with a deep network that predicts multidimensional clinical characterizations and models the temporal dynamics of the functional scan. This framework allows us to simultaneously explain multiple impairments, isolate stable multi-modal connectivity signatures, and study the evolution of various brain states at rest. Lastly, we shift our focus to end-to-end geometric frameworks. These are designed to characterize the complementarity between functional and structural connectivity data spaces, while using clinical information as a secondary guide. As an alternative to the previous generative framework for functional connectivity, our representation learning scheme of choice is a matrix autoencoder that is crafted to reflect the underlying data geometry. This is coupled with a manifold alignment model that maps from function to structure and a deep network that maps to phenotypic information. We demonstrate that the model reliably recovers structural connectivity patterns across individuals, while robustly extracting predictive yet interpretable brain biomarkers. Finally, we also present a preliminary analytical and experimental exposition on the theoretical aspects of the matrix autoencoder representation.
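The coupling of a generative matrix factorization with a discriminative regression, as described in the abstract above, can be sketched as a toy alternating-least-squares problem: minimize ||X − WH||² + λ||y − Wβ||² over a shared factor matrix W. The dimensions, λ, and synthetic "connectivity" data below are all hypothetical; the thesis's actual models add graph regularization, temporal dynamics, and deep networks.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, r = 80, 40, 5                                     # subjects, features, rank
W_true = rng.normal(size=(n, r))
H_true = rng.normal(size=(r, d))
beta_true = rng.normal(size=r)
X = W_true @ H_true + 0.01 * rng.normal(size=(n, d))    # connectivity-like features
y = W_true @ beta_true                                  # severity-like score

lam = 1.0                                               # coupling weight (illustrative)
W = rng.normal(size=(n, r))
H = rng.normal(size=(r, d))
beta = rng.normal(size=r)

def loss(W, H, beta):
    """Joint objective: generative fit plus discriminative fit."""
    return np.sum((X - W @ H) ** 2) + lam * np.sum((y - W @ beta) ** 2)

losses = [loss(W, H, beta)]
for _ in range(30):
    # W-step: each row of W solves a least-squares problem against the
    # stacked targets [X | sqrt(lam)*y] with stacked designs [H | sqrt(lam)*beta].
    A = np.hstack([H, np.sqrt(lam) * beta[:, None]])    # r x (d+1)
    T = np.hstack([X, np.sqrt(lam) * y[:, None]])       # n x (d+1)
    W = T @ A.T @ np.linalg.inv(A @ A.T)
    # H-step and beta-step: ordinary least squares given the shared W
    H = np.linalg.lstsq(W, X, rcond=None)[0]
    beta = np.linalg.lstsq(W, y, rcond=None)[0]
    losses.append(loss(W, H, beta))
```

Because each block update exactly minimizes the joint objective over its own variables, the loss is monotonically non-increasing, which is the usual argument for convergence of such coupled factorization-regression schemes.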