
    Simulation of stochastic network dynamics via entropic matching

    The simulation of complex stochastic network dynamics arising, for instance, from models of coupled biomolecular processes remains computationally challenging. Often, the need to scan a model's dynamics over a large parameter space renders full-fledged stochastic simulations impractical, motivating approximation schemes. Here we propose an approximation scheme which improves upon the standard linear noise approximation while retaining similar computational complexity. The underlying idea is to minimize, at each time step, the Kullback-Leibler divergence between the true time-evolved probability distribution and a Gaussian approximation (entropic matching). This condition leads to ordinary differential equations for the mean and the covariance matrix of the Gaussian. For cases of weak nonlinearity, the method is more accurate than the linear noise approximation when both are compared to stochastic simulations.
    Comment: 23 pages, 6 figures; significantly revised version
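The structure of such Gaussian moment ODEs can be sketched on a toy system. The following example uses a hypothetical 1D birth-death process (birth rate `k1`, nonlinear death rate `k2*x^2`) with a standard Gaussian moment closure and Euler integration; it illustrates the shape of the mean/covariance equations, not the paper's exact entropic-matching scheme:

```python
# Toy 1D birth-death process: birth rate k1, nonlinear death rate k2*x^2.
# Gaussian moment-closure ODEs for the mean m and variance s (illustrative
# sketch only; the entropic-matching equations differ in detail).
k1, k2 = 10.0, 0.1
m, s, dt = 0.0, 0.0, 1e-3

for _ in range(5000):               # Euler integration up to t = 5
    J = -2.0 * k2 * m               # Jacobian of the drift at the mean
    D = k1 + k2 * m**2              # jump-noise intensity (sum of rates)
    m += (k1 - k2 * m**2) * dt      # ODE for the mean
    s += (2.0 * J * s + D) * dt     # ODE for the variance

print(f"mean = {m:.2f}, variance = {s:.2f}")
```

At stationarity the mean settles where birth balances death (m = sqrt(k1/k2) = 10 here) and the variance where the linearized contraction balances the noise input.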

    Artificial Sequences and Complexity Measures

    In this paper we exploit concepts of information theory to address the fundamental problem of identifying and defining the most suitable tools to extract, in an automatic and agnostic way, information from a generic string of characters. In particular, we introduce a class of methods which make crucial use of data compression techniques in order to define a measure of remoteness and distance between pairs of character sequences (e.g. texts) based on their relative information content. We also discuss in detail how specific features of data compression techniques can be used to introduce the notions of the dictionary of a given sequence and of an Artificial Text, and we show how these new tools can be used for information extraction purposes. We point out the versatility and generality of our method, which applies to any kind of corpus of character strings independently of the type of coding behind it. As a case study we consider linguistically motivated problems and present results for automatic language recognition, authorship attribution and self-consistent classification.
    Comment: Revised version, with major changes, of the previous "Data Compression approach to Information Extraction and Classification" by A. Baronchelli and V. Loreto. 15 pages; 5 figures
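The idea of a compression-based distance between character sequences can be illustrated with the normalized compression distance (NCD), a standard measure in this family; the paper's dictionary-based construction differs in detail, and the example strings below are purely illustrative:

```python
import zlib

def clen(s: bytes) -> int:
    """Length of s after zlib compression at maximum level."""
    return len(zlib.compress(s, 9))

def ncd(a: str, b: str) -> float:
    """Normalized compression distance: small when one string's
    information helps compress the other (similar texts), large when
    their contents are unrelated."""
    x, y = a.encode(), b.encode()
    cx, cy, cxy = clen(x), clen(y), clen(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

english = "the quick brown fox jumps over the lazy dog " * 20
similar = "the quick brown fox leaps over the lazy cat " * 20
other   = "lorem ipsum dolor sit amet consectetur adipiscing " * 20
print(ncd(english, similar), ncd(english, other))
```

Because compressing the concatenation of two similar texts reuses shared substrings, the first distance comes out smaller than the second, which is the basis for language recognition and authorship attribution by compression.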

    Entropic Distance for Nonlinear Master Equation

    More and more works deal with statistical systems far from equilibrium, dominated by unidirectional stochastic processes augmented by rare resets. We analyze the construction of the entropic distance measure appropriate for such dynamics. We demonstrate that a power-like nonlinearity in the state probability in the master equation naturally leads to the Tsallis (Havrda-Charvát, Aczél-Daróczy) q-entropy formula in the context of seeking the maximal-entropy state at stationarity. A few possible applications of a certain simple and linear master equation to phenomena studied in statistical physics are listed at the end.
    Comment: Talk given by T.S. Biró at BGL 2017, Gyöngyös, Hungary
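The q-entropy formula mentioned above, S_q = (1 - Σ_i p_i^q) / (q - 1), reduces to the Boltzmann-Gibbs-Shannon entropy in the q → 1 limit. A minimal numerical check (with an arbitrary example distribution):

```python
import math

def tsallis_entropy(p, q):
    """Tsallis (Havrda-Charvat, Aczel-Daroczy) q-entropy:
    S_q = (1 - sum_i p_i^q) / (q - 1)."""
    if abs(q - 1.0) < 1e-12:
        # q -> 1 limit recovers the Shannon entropy
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    return (1.0 - sum(pi**q for pi in p)) / (q - 1.0)

p = [0.5, 0.25, 0.125, 0.125]   # example distribution
for q in (0.5, 0.999, 1.0, 2.0):
    print(q, tsallis_entropy(p, q))
```

For q slightly below 1 the value approaches the Shannon entropy of p, confirming the limit numerically.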

    Relative information entropy and Weyl curvature of the inhomogeneous Universe

    Penrose conjectured a connection between entropy and Weyl curvature of the Universe. This is plausible, as the almost homogeneous and isotropic Universe at the onset of structure formation has negligible Weyl curvature, which then grows (relative to the Ricci curvature) due to the formation of large-scale structure and thus reminds us of the second law of thermodynamics. We study two scalar measures to quantify the deviations from a homogeneous and isotropic space-time, the relative information entropy and a Weyl tensor invariant, and show their relation to the averaging problem. We calculate these two quantities up to second order in standard cosmological perturbation theory and find that they are correlated and can be linked via the kinematic backreaction of a spatially averaged universe model.
    Comment: 8 pages, matches the published version in Physical Review
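The relative information entropy used above is a Kullback-Leibler-type functional of the density field relative to its spatial average, vanishing exactly for a homogeneous field. A toy numerical sketch (the sample values are illustrative, not cosmological data):

```python
import math

# Relative information entropy density of a toy 1D density field rho
# with respect to its spatial average <rho>:
#   S / V = < rho * ln(rho / <rho>) >
# Non-negative by convexity; zero iff the field is homogeneous.
rho = [1.2, 0.8, 1.5, 0.5, 1.0, 1.3, 0.7, 1.0]   # toy density samples
avg = sum(rho) / len(rho)
s = sum(r * math.log(r / avg) for r in rho) / len(rho)
print(f"relative information entropy density: {s:.4f}")
```

Larger density contrasts (stronger inhomogeneity) drive this measure up, which is the sense in which it tracks structure formation.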

    Optimal experimental design for mathematical models of haematopoiesis.

    The haematopoietic system has a highly regulated and complex structure in which cells are organized to successfully create and maintain new blood cells. It is known that feedback regulation is crucial to tightly control this system, but the specific mechanisms by which control is exerted are not completely understood. In this work, we aim to uncover the underlying mechanisms in haematopoiesis by conducting perturbation experiments, where animal subjects are exposed to an external agent in order to observe the system response and evolution. We have developed a novel Bayesian hierarchical framework for optimal design of perturbation experiments and proper analysis of the data collected. We use a deterministic model that accounts for feedback and feedforward regulation on cell division rates and self-renewal probabilities. A significant obstacle is that the experimental data are not longitudinal; rather, each data point corresponds to a different animal. We overcome this difficulty by modelling the unobserved cellular levels as latent variables. We then use principles of Bayesian experimental design to optimally distribute time points at which the haematopoietic cells are quantified. We evaluate our approach using synthetic and real experimental data and show that an optimal design can lead to better estimates of model parameters.
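The core of optimally distributing measurement time points can be sketched with a classical D-optimality criterion on a toy model, as a stand-in for the paper's Bayesian utility. Everything here is an assumption for illustration: a hypothetical exponential growth model x(t) = x0·exp(r·t) with Gaussian noise, nominal parameter values, and a small candidate grid of times:

```python
import itertools
import math

# Nominal parameter values for the toy model (assumed, not from the paper).
x0, r = 1.0, 0.5

def fisher_det(times):
    """Determinant of the 2x2 Fisher information matrix for (x0, r)
    under i.i.d. Gaussian observation noise; larger is better
    (D-optimality)."""
    f11 = f12 = f22 = 0.0
    for t in times:
        s0 = math.exp(r * t)            # sensitivity dx/dx0
        sr = x0 * t * math.exp(r * t)   # sensitivity dx/dr
        f11 += s0 * s0
        f12 += s0 * sr
        f22 += sr * sr
    return f11 * f22 - f12 * f12

# Exhaustively score all 3-point designs on a candidate time grid.
candidates = [0.5 * k for k in range(1, 11)]     # times 0.5, 1.0, ..., 5.0
best = max(itertools.combinations(candidates, 3), key=fisher_det)
print("best 3-point design:", best)
```

The same search-over-designs structure carries over when the scalar utility is replaced by a Bayesian expected-information criterion with latent cellular levels.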

    Entropy-based parametric estimation of spike train statistics

    We consider the evolution of a network of neurons, focusing on the asymptotic behavior of spike dynamics rather than membrane potential dynamics. In this context, the spike response is not sought as a deterministic response but as a conditional probability: "reading out the code" consists of inferring such a probability. This probability is computed from empirical raster plots using the framework of thermodynamic formalism in ergodic theory. This gives us a parametric statistical model where the probability has the form of a Gibbs distribution. In this respect, the approach generalizes the seminal and profound work of Schneidman and collaborators. A minimal presentation of the formalism is reviewed here, and a general algorithmic estimation method is proposed, yielding fast convergent implementations. It is also made explicit how several spike observables (entropy, rate, synchronizations, correlations) are given in closed form from the parametric estimation. This paradigm not only allows us to estimate the spike statistics, given a design choice, but also to compare different models, thus answering comparative questions about the neural code such as: "are correlations (or time synchrony, or a given set of spike patterns, ...) significant with respect to rate coding only?" A numerical validation of the method is proposed and the perspectives regarding spike-train code analysis are also discussed.
    Comment: 37 pages, 8 figures, submitted
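The special case this framework generalizes, the pairwise maximum-entropy (Ising-type) Gibbs model of Schneidman and collaborators, can be sketched directly for a few neurons. The potentials `h` and couplings `J` below are illustrative values, not fitted parameters:

```python
import itertools
import math

# Gibbs (maximum-entropy) distribution over binary spike words of N
# neurons, with per-neuron rate potentials h_i and pairwise couplings
# J_ij. Illustrative parameters only.
N = 3
h = [0.2, -0.1, 0.4]
J = {(0, 1): 0.5, (1, 2): -0.3, (0, 2): 0.0}

def potential(word):
    """Gibbs potential of a spike word (tuple of 0/1)."""
    e = sum(h[i] * word[i] for i in range(N))
    e += sum(Jij * word[i] * word[j] for (i, j), Jij in J.items())
    return e

words = list(itertools.product((0, 1), repeat=N))
Z = sum(math.exp(potential(w)) for w in words)        # partition function
prob = {w: math.exp(potential(w)) / Z for w in words}

# Spike observables in closed form from the parametric model:
rate = [sum(p for w, p in prob.items() if w[i]) for i in range(N)]
entropy = -sum(p * math.log(p) for p in prob.values())
print("rates:", rate, "entropy:", entropy)
```

Comparing such a pairwise model against a rate-only model (all J_ij = 0) on the same data is the kind of model comparison the abstract's question about correlations versus rate coding refers to.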