15,609 research outputs found

    Computational statistics using the Bayesian Inference Engine

    This paper introduces the Bayesian Inference Engine (BIE), a general, parallel, optimised software package for parameter inference and model selection. The package is motivated by the analysis needs of modern astronomical surveys and the need to organise and reuse expensive derived data. The BIE is the first platform for computational statistics designed explicitly to enable Bayesian update and model comparison for astronomical problems. Bayesian update is based on the representation of high-dimensional posterior distributions using metric-ball-tree based kernel density estimation. Among its algorithmic offerings, the BIE emphasises hybrid tempered MCMC schemes that robustly sample multimodal posterior distributions in high-dimensional parameter spaces. Moreover, the BIE implements a full persistence or serialisation system that stores the byte-level image of the running inference and previously characterised posterior distributions for later use. Two new algorithms to compute the marginal likelihood from the posterior distribution, developed for and implemented in the BIE, enable model comparison for complex models and data sets. Finally, the BIE was designed to be a collaborative platform for applying Bayesian methodology to astronomy. It includes an extensible, object-oriented framework that implements every aspect of the Bayesian inference process. Because it provides a variety of statistical algorithms for all phases of the inference problem, a scientist may explore multiple approaches with a single model and data implementation. Additional technical details and download instructions are available from http://www.astro.umass.edu/bie. The BIE is distributed under the GNU GPL.
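    The tempered MCMC idea this abstract emphasises can be illustrated with a minimal parallel-tempering sketch. This is not the BIE's actual hybrid scheme: the bimodal target, the temperature ladder `betas`, and the step sizes below are illustrative assumptions. Hotter chains (smaller beta) cross between modes easily, and state swaps carry that mobility down to the cold chain.

```python
import math
import random

random.seed(0)

def log_target(x):
    # Illustrative bimodal density: mixture of two unit Gaussians at -3 and +3.
    return math.log(math.exp(-0.5 * (x - 3.0) ** 2) +
                    math.exp(-0.5 * (x + 3.0) ** 2))

def parallel_tempering(n_steps=20000, betas=(1.0, 0.5, 0.2), step=1.0):
    # One Metropolis chain per inverse temperature beta; each chain
    # targets the tempered density proportional to exp(beta * log_target).
    x = [0.0] * len(betas)
    samples = []
    for _ in range(n_steps):
        # Within-temperature Metropolis update (wider proposals when hot).
        for i, beta in enumerate(betas):
            prop = x[i] + random.gauss(0.0, step / beta)
            if math.log(random.random()) < beta * (log_target(prop) - log_target(x[i])):
                x[i] = prop
        # Propose swapping states between a random adjacent temperature pair.
        i = random.randrange(len(betas) - 1)
        dlog = (betas[i] - betas[i + 1]) * (log_target(x[i + 1]) - log_target(x[i]))
        if math.log(random.random()) < dlog:
            x[i], x[i + 1] = x[i + 1], x[i]
        samples.append(x[0])  # record only the cold (beta = 1) chain
    return samples
```

    A single Metropolis chain with these step sizes would tend to get stuck in one mode; the tempered ladder lets the cold chain's samples cover both.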

    Unbiased Monte Carlo estimate of stochastic differential equations expectations

    We develop a pure Monte Carlo method to compute E(g(X_T)), where g is a bounded and Lipschitz function and X_t an Ito process. This approach extends a previously proposed method to the general multidimensional case of an SDE with varying coefficients. A variance reduction method relying on interacting particle systems is also developed.

    Comment: 32 pages, 14 figures
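    As a point of comparison for the paper's unbiased scheme, the standard discretised baseline can be sketched: an Euler-Maruyama Monte Carlo estimator of E(g(X_T)). The drift, diffusion, and step counts below are illustrative assumptions, and the time-discretisation bias of this baseline is precisely what a "pure" (unbiased) Monte Carlo method avoids.

```python
import math
import random

random.seed(1)

def euler_mc(mu, sigma, g, x0, T, n_paths=20000, n_steps=100):
    # Euler--Maruyama Monte Carlo estimate of E[g(X_T)] for the SDE
    # dX_t = mu(X_t) dt + sigma(X_t) dW_t.  The fixed time grid makes the
    # estimator biased, unlike the unbiased scheme the paper develops.
    dt = T / n_steps
    total = 0.0
    for _ in range(n_paths):
        x = x0
        for _ in range(n_steps):
            x += mu(x) * dt + sigma(x) * math.sqrt(dt) * random.gauss(0.0, 1.0)
        total += g(x)
    return total / n_paths

# Sanity check on geometric Brownian motion, where E[X_T] = x0 * exp(mu*T).
est = euler_mc(lambda x: 0.05 * x, lambda x: 0.2 * x, lambda x: x, x0=1.0, T=1.0)
```

    For this test case the estimate should sit close to exp(0.05), with the residual gap split between Monte Carlo noise and the discretisation bias.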

    Opinion influence and evolution in social networks: a Markovian agents model

    In this paper, the effect on collective opinions of filtering algorithms managed by social network platforms is modeled and investigated. A stochastic multi-agent model for opinion dynamics is proposed that accounts for a centralized tuning of the strength of interaction between individuals. The evolution of each individual opinion is described by a Markov chain, whose transition rates are affected by the opinions of the neighbors through influence parameters. The properties of this model are studied in a general setting as well as in interesting special cases. A general result is that the overall model of the social network behaves like a high-dimensional Markov chain, which is amenable to Monte Carlo simulation. Under the assumption of identical agents and unbiased influence, it is shown that the influence intensity affects the variance, but not the expectation, of the number of individuals sharing a certain opinion. Moreover, a detailed analysis is carried out for the so-called Peer Assembly, which describes the evolution of binary opinions in a completely connected graph of identical agents. It is shown that the Peer Assembly can be lumped into a birth-death chain that can be given a complete analytical characterization. Both analytical results and simulation experiments are used to highlight the emergence of particular collective behaviours, e.g. consensus and herding, depending on the centralized tuning of the influence parameters.

    Comment: Revised version (May 2018)
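    The lumped birth-death behaviour described for the Peer Assembly can be illustrated with a sketch, not the paper's exact model: a symmetric voter-style dynamic on a complete graph, with a hypothetical `strength` parameter standing in for the centralized influence tuning. Because the up and down rates are equal, the mean opinion count is preserved while its variance grows with the influence intensity.

```python
import random

random.seed(2)

def simulate_peer_assembly(n=50, k0=25, steps=200, strength=1.0, runs=2000):
    # Monte Carlo over a lumped birth-death chain: the state k is the
    # number of agents (out of n) holding opinion 1.  At each step, with
    # probability `strength`, a random agent copies a random other agent,
    # so k moves up or down by 1, each with probability
    # strength * k*(n-k) / (n*(n-1)) -- symmetric rates.
    finals = []
    for _ in range(runs):
        k = k0
        for _ in range(steps):
            p = strength * k * (n - k) / (n * (n - 1))
            u = random.random()
            if u < p:
                k += 1
            elif u < 2 * p:
                k -= 1
        finals.append(k)
    mean = sum(finals) / runs
    var = sum((f - mean) ** 2 for f in finals) / runs
    return mean, var
```

    Running this at two influence strengths shows the claimed effect: both settings keep the mean near k0, but the stronger influence produces a visibly larger spread of outcomes.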

    Differential equation approximations for Markov chains

    We formulate some simple conditions under which a Markov chain may be approximated by the solution to a differential equation, with quantifiable error probabilities. The role of the choice of coordinate functions for the Markov chain is emphasised. The general theory is illustrated in three examples: the classical stochastic epidemic, a population process model with fast and slow variables, and core-finding algorithms for large random hypergraphs.

    Comment: Published at http://dx.doi.org/10.1214/07-PS121 in Probability Surveys (http://www.i-journals.org/ps/) by the Institute of Mathematical Statistics (http://www.imstat.org)
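    The kind of approximation described can be illustrated on an epidemic example: a density-dependent Markov chain, simulated exactly with Gillespie's algorithm, tracks the solution of its fluid-limit ODE when the population is large. The SIS-type rates and parameter values below are illustrative assumptions, not the paper's worked values.

```python
import random

random.seed(3)

def sis_chain(n=2000, i0=100, beta=1.0, gamma=0.3, T=5.0):
    # Gillespie simulation of a density-dependent SIS epidemic chain:
    # infections occur at rate beta*I*(n-I)/n, recoveries at rate gamma*I.
    # Returns the scaled state I/n at time T.
    t, i = 0.0, i0
    while t < T and 0 < i:
        up = beta * i * (n - i) / n
        down = gamma * i
        total = up + down
        t += random.expovariate(total)
        if t >= T:
            break
        if random.random() < up / total:
            i += 1
        else:
            i -= 1
    return i / n

def sis_ode(x0, beta=1.0, gamma=0.3, T=5.0, dt=1e-3):
    # Euler integration of the fluid limit dx/dt = beta*x*(1-x) - gamma*x,
    # the differential equation the scaled chain I/n approximates.
    x = x0
    for _ in range(int(T / dt)):
        x += (beta * x * (1 - x) - gamma * x) * dt
    return x
```

    With n = 2000 individuals, one run of the chain lands within a few percent of the ODE trajectory started from the same scaled initial condition x0 = i0/n; shrinking n makes the gap, and its run-to-run variability, visibly larger.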