
    Data-driven modelling of biological multi-scale processes

    Biological processes involve a variety of spatial and temporal scales. A holistic understanding of many biological processes therefore requires multi-scale models which capture the relevant properties on all these scales. In this manuscript we review mathematical modelling approaches used to describe the individual spatial scales and how they are integrated into holistic models. We discuss the relation between spatial and temporal scales and its implications for multi-scale modelling. Based upon this overview of state-of-the-art modelling approaches, we formulate key challenges in mathematical and computational modelling of biological multi-scale and multi-physics processes. In particular, we consider the availability of analysis tools for multi-scale models and model-based multi-scale data integration. We provide a compact review of methods for model-based data integration and model-based hypothesis testing. Furthermore, novel approaches and recent trends are discussed, including computation time reduction using reduced-order and surrogate models, which contribute to the solution of inference problems. We conclude the manuscript by providing a few ideas for the development of tailored multi-scale inference methods.
    Comment: This manuscript will appear in the Journal of Coupled Systems and Multiscale Dynamics (American Scientific Publishers).
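    The surrogate-model idea mentioned in this abstract, replacing an expensive simulator with a cheap approximation inside an inference loop, can be sketched in a few lines. This is a generic illustration, not the paper's method; the stand-in simulator, the design points, and the polynomial degree are all illustrative assumptions.

```python
import numpy as np

def expensive_model(theta, t):
    """Stand-in for a costly multi-scale simulator (hypothetical)."""
    return np.exp(-theta * t)

# Time grid and a small design of parameter values at which the
# "expensive" simulator is evaluated once, up front.
t = np.linspace(0.0, 5.0, 50)
thetas = np.linspace(0.1, 2.0, 20)
outputs = np.array([expensive_model(th, t) for th in thetas])  # shape (20, 50)

# Surrogate: a cheap polynomial in theta, fit independently at each
# time point, then evaluated in place of the simulator during inference.
coeffs = np.polynomial.polynomial.polyfit(thetas, outputs, 8)

def surrogate(theta):
    """Cheap approximation of expensive_model(theta, t) on the fixed grid."""
    return np.polynomial.polynomial.polyval(theta, coeffs)
```

    Inside a likelihood evaluation or MCMC loop, `surrogate(theta)` then replaces `expensive_model(theta, t)`, trading a small approximation error for a large reduction in computation time.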

    Computation of Gaussian orthant probabilities in high dimension

    We study the computation of Gaussian orthant probabilities, i.e. the probability that a Gaussian vector falls inside an orthant. The Geweke-Hajivassiliou-Keane (GHK) algorithm [Genz, 1992; Geweke, 1991; Hajivassiliou et al., 1996; Keane, 1993] is currently used for integrals of dimension greater than 10. In this paper we show that, for Markovian covariances, GHK can be interpreted as the estimator of the normalizing constant of a state-space model using sequential importance sampling (SIS). We show that for an AR(1) model the variance of the GHK estimator, properly normalized, diverges exponentially fast with the dimension. As an improvement we propose using a particle filter (PF). We then generalize this idea to arbitrary covariance matrices using sequential Monte Carlo (SMC) with properly tailored MCMC moves. We show empirically that this can lead to drastic improvements over currently used algorithms. We also extend the framework to orthants of mixtures of Gaussians (Student's t, Cauchy, etc.), and to the simulation of truncated Gaussians.
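    The GHK construction that the abstract reinterprets as SIS can be sketched compactly: factor the covariance as Sigma = L Lᵀ with L lower triangular, draw each latent coordinate from a truncated standard normal given the previous draws, and accumulate the product of truncation probabilities as the importance weight. A minimal sketch, with a hypothetical function name and no claim to match the paper's implementation:

```python
import numpy as np
from scipy.stats import norm

def ghk_orthant(L, a, b, n_samples=10_000, rng=None):
    """GHK / sequential importance sampling estimate of P(a <= X <= b)
    for X ~ N(0, Sigma) with Sigma = L @ L.T, L lower triangular.
    Illustrative sketch, not the paper's code."""
    rng = np.random.default_rng() if rng is None else rng
    d = len(a)
    weights = np.ones(n_samples)
    eta = np.zeros((n_samples, d))
    for i in range(d):
        # Bounds on eta_i implied by a_i <= (L @ eta)_i <= b_i,
        # given the coordinates already sampled.
        partial = eta[:, :i] @ L[i, :i]
        lo = norm.cdf((a[i] - partial) / L[i, i])
        hi = norm.cdf((b[i] - partial) / L[i, i])
        u = rng.uniform(lo, hi)        # truncated normal via inverse CDF
        eta[:, i] = norm.ppf(u)
        weights *= (hi - lo)           # accumulate the importance weight
    return weights.mean()
```

    In the independent case the weights are exact and the estimator has zero variance; the paper's point is that for correlated Markovian covariances this SIS weight degenerates with dimension, which is what the particle-filter resampling step repairs.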

    On Similarities between Inference in Game Theory and Machine Learning

    In this paper, we elucidate the equivalence between inference in game theory and machine learning. Our aim in so doing is to establish an equivalent vocabulary between the two domains so as to facilitate developments at the intersection of both fields, and as proof of the usefulness of this approach, we use recent developments in each field to make useful improvements to the other. More specifically, we consider the analogies between smooth best responses in fictitious play and Bayesian inference methods. Initially, we use these insights to develop and demonstrate an improved algorithm for learning in games based on probabilistic moderation. That is, by integrating over the distribution of opponent strategies (a Bayesian approach within machine learning) rather than taking a simple empirical average (the approach used in standard fictitious play), we derive a novel moderated fictitious play algorithm and show that it is more likely than standard fictitious play to converge to a payoff-dominant but risk-dominated Nash equilibrium in a simple coordination game. Furthermore, we consider the converse case, and show how insights from game theory can be used to derive two improved mean field variational learning algorithms. First, we show that the standard update rule of mean field variational learning is analogous to a Cournot adjustment within game theory. By analogy with fictitious play, we then suggest an improved update rule, and show that this results in fictitious variational play, an improved mean field variational learning algorithm that exhibits better convergence in highly or strongly connected graphical models. Second, we use a recent advance in fictitious play, namely dynamic fictitious play, to derive a derivative-action variational learning algorithm that exhibits superior convergence properties on a canonical machine learning problem (clustering a mixture distribution).
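    The smooth best response that links fictitious play to softmax-style Bayesian inference can be illustrated in a small coordination game. Below is a minimal sketch of *standard* smoothed fictitious play, the baseline against which the paper's moderated variant is compared; the payoff matrix, temperature, and function names are illustrative assumptions.

```python
import numpy as np

# Row player's payoffs in a symmetric 2x2 coordination game (hypothetical
# numbers): action 0 is payoff-dominant, action 1 is risk-dominant.
A = np.array([[9.0, 0.0],
              [8.0, 7.0]])

def smooth_best_response(belief, temperature=0.5):
    """Logit (smoothed) best response to a belief over the opponent's
    actions; structurally a softmax, the ML analogue noted in the paper."""
    z = (A @ belief) / temperature
    z -= z.max()                       # numerical stability
    p = np.exp(z)
    return p / p.sum()

def fictitious_play(steps=200, rng=None):
    """Standard fictitious play: each player smooth-best-responds to the
    empirical average of the opponent's past plays."""
    rng = np.random.default_rng(0) if rng is None else rng
    counts = np.ones((2, 2))           # counts[player, action], uniform prior
    for _ in range(steps):
        for p in range(2):
            belief = counts[1 - p] / counts[1 - p].sum()
            probs = smooth_best_response(belief)
            counts[p, rng.choice(2, p=probs)] += 1
    return counts / counts.sum(axis=1, keepdims=True)
```

    The moderated variant described in the abstract would replace the point belief (the empirical average) with an integral over a distribution of opponent strategies before best-responding.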

    On Measuring the Welfare Cost of Business Cycles

    Lucas (1987) argues that the gain from eliminating aggregate fluctuations is trivial. Following Lucas, a number of researchers have altered assumptions on preferences and found that the gains from eliminating business cycles are potentially very large. However, in these exercises little discipline is placed on preference parameters. This paper estimates the welfare cost of business cycles, allowing for potential time-non-separabilities in preferences, where discipline is placed on the choice of preference parameters by requiring that the preferences be consistent with observed fluctuations in a model of business cycles. That is, a theoretical real business cycle world is constructed and the representative agent is then placed in this world. The agent responds optimally to exogenous shocks, given the frictions in the economy. The agent's preference parameters, along with other structural parameters, are estimated using a Bayesian procedure involving Markov chain Monte Carlo methods. Two main results emerge from the paper. First, the form for the time-non-separability estimated in this paper is very different from the forms suggested and used elsewhere in the literature. Second, the welfare cost of business cycles is close to Lucas's estimate.
    Keywords: Business cycles, nonseparable preferences, welfare cost, Markov chain Monte Carlo
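    The Bayesian estimation step mentioned in the abstract is, at its core, a standard MCMC routine. A generic random-walk Metropolis sampler of the kind used for such structural-parameter posteriors might look like the following toy sketch on a one-dimensional Gaussian target; this is a generic illustration, not the paper's actual estimator.

```python
import numpy as np

def random_walk_metropolis(log_post, x0, n_iter=5000, step=0.5, rng=None):
    """Generic random-walk Metropolis sampler for a log-posterior known
    up to a constant. Toy sketch for illustration only."""
    rng = np.random.default_rng(1) if rng is None else rng
    x, lp = x0, log_post(x0)
    samples = np.empty(n_iter)
    for t in range(n_iter):
        prop = x + step * rng.standard_normal()   # symmetric proposal
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept/reject
            x, lp = prop, lp_prop
        samples[t] = x
    return samples

# Toy target: N(2, 1) log-density up to a constant.
draws = random_walk_metropolis(lambda x: -0.5 * (x - 2.0) ** 2,
                               x0=0.0, n_iter=20_000)
```

    In the paper's setting the scalar parameter would be replaced by the vector of preference and structural parameters, and the log-posterior by the model's likelihood times the prior.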
