
    Geometrically stopped Markovian random growth processes and Pareto tails

    Many empirical studies document power law behavior in size distributions of economic interest such as cities, firms, income, and wealth. One mechanism for generating such behavior combines independent and identically distributed Gaussian additive shocks to log-size with a geometric age distribution. We generalize this mechanism by allowing the shocks to be non-Gaussian (but light-tailed) and dependent upon a Markov state variable. Our main results provide sharp bounds on tail probabilities, a simple equation determining Pareto exponents, and comparative statics. We present two applications: we show that (i) the tails of the wealth distribution in a heterogeneous-agent dynamic general equilibrium model with idiosyncratic investment risk are Paretian, and (ii) a random growth model for the population dynamics of Japanese municipalities is consistent with the observed Pareto exponent, but only after allowing for Markovian dynamics.
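    A minimal simulation of the Gaussian special case that this abstract generalizes makes the mechanism concrete. This is our own sketch, not the paper's code; the parameter values (p, sigma) are arbitrary illustrations. With i.i.d. N(0, sigma^2) shocks to log-size and a geometric age distribution with stopping probability p, size S = exp(X) has a Pareto right tail whose exponent zeta solves (1 - p) * E[exp(zeta * shock)] = 1, i.e. zeta = sqrt(2 * log(1 / (1 - p))) / sigma.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    p, sigma, n = 0.1, 1.0, 200_000   # illustrative values, not from the paper

    # Each unit lives a geometric number of periods and accumulates Gaussian
    # shocks to log-size; the cumulative-sum trick sums every unit's shocks
    # without a Python loop.
    ages = rng.geometric(p, size=n)
    shocks = rng.normal(0.0, sigma, size=ages.sum())
    csum = np.concatenate(([0.0], np.cumsum(shocks)))
    ends = np.cumsum(ages)
    X = csum[ends] - csum[ends - ages]   # stopped log-size per unit
    S = np.exp(X)                        # size is log-normal per age, Pareto-tailed overall

    # Theoretical Pareto exponent for the Gaussian special case
    zeta_theory = np.sqrt(2 * np.log(1 / (1 - p))) / sigma

    # Hill estimator of the tail exponent from the top k order statistics
    k = 2_000
    tail = np.sort(S)[-k:]
    zeta_hill = k / np.sum(np.log(tail / tail[0]))

    print(zeta_theory, zeta_hill)   # the two should roughly agree
    ```

    With these numbers zeta_theory is about 0.46, and the Hill estimate from the simulated tail lands nearby, illustrating the "simple equation determining Pareto exponents" in its classical Gaussian form.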

    Evidence accumulation in a Laplace domain decision space

    Evidence accumulation models of simple decision-making have long assumed that the brain estimates a scalar decision variable corresponding to the log-likelihood ratio of the two alternatives. Typical neural implementations of this algorithmic cognitive model assume that large numbers of neurons are each noisy exemplars of the scalar decision variable. Here we propose a neural implementation of the diffusion model in which many neurons construct and maintain the Laplace transform of the distance to each of the decision bounds. As in classic findings from brain regions including LIP, the firing rate of neurons coding for the Laplace transform of net accumulated evidence grows to a bound during random dot motion tasks. However, rather than being noisy exemplars of a single mean value, this approach makes the novel prediction that, although firing rates grow to the bound exponentially, across neurons there should be a distribution of different growth rates. A second set of neurons records an approximate inversion of the Laplace transform; these neurons directly estimate net accumulated evidence. In analogy to time cells and place cells observed in the hippocampus and other brain regions, the neurons in this second set have receptive fields along a "decision axis," which is consistent with recent findings from rodent recordings. This theoretical approach places simple evidence accumulation models in the same mathematical language as recent proposals for representing time and space in cognitive models of memory.
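    The distributed-rates prediction can be illustrated with a toy simulation (our sketch, not the authors' model code; the drift, noise, and rate values are arbitrary). A drift-diffusion process x(t) accumulates noisy evidence toward a bound b, and a population of model units, one per Laplace-domain rate s, fires at exp(-s * (b - x(t))), the Laplace transform of the distance to the bound. All units reach their maximum together when the bound is hit, but each climbs at a different exponential rate.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    drift, noise, dt, b = 0.5, 1.0, 0.01, 2.0   # illustrative values only

    # Simulate one diffusion-to-bound trajectory (capped for safety)
    x, path = 0.0, [0.0]
    for _ in range(1_000_000):
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        path.append(min(x, b))
        if x >= b:
            break
    path = np.array(path)

    # One model "neuron" per Laplace rate s; firing rate encodes the
    # transform of the distance to the bound, exp(-s * (b - x))
    s_grid = np.array([0.5, 1.0, 2.0, 4.0])
    rates = np.exp(-np.outer(s_grid, b - path))   # shape (n_neurons, n_steps)

    print(rates[:, -1])   # every unit peaks together at 1 when x hits the bound
    ```

    At trial onset the units start at exp(-s * b), so higher-s units start lower and climb more steeply: a distribution of exponential growth rates across neurons rather than noisy copies of one scalar.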

    Structure Learning in Coupled Dynamical Systems and Dynamic Causal Modelling

    Identifying a coupled dynamical system out of many plausible candidates, each of which could serve as the underlying generator of some observed measurements, is a profoundly ill-posed problem that commonly arises when modelling real-world phenomena. In this review, we detail a set of statistical procedures for inferring the structure of nonlinear coupled dynamical systems (structure learning), which has proved useful in neuroscience research. A key focus here is the comparison of competing models of (i.e., hypotheses about) network architectures and implicit coupling functions in terms of their Bayesian model evidence. These methods are collectively referred to as dynamic causal modelling (DCM). We focus on a relatively new approach that is proving remarkably useful; namely, Bayesian model reduction (BMR), which enables rapid evaluation and comparison of models that differ in their network architecture. We illustrate the usefulness of these techniques through modelling neurovascular coupling (cellular pathways linking neuronal and vascular systems), whose function is an active focus of research in neurobiology and the imaging of coupled neuronal systems.
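    The core operation, scoring competing model structures by Bayesian model evidence, can be illustrated in a linear-Gaussian setting where the evidence has a closed form. This is a toy analogue of our own devising, not DCM or BMR themselves: two candidate "architectures" (one regressor vs. two) are compared on the same data, and the evidence's built-in Occam factor penalizes the superfluous coupling.

    ```python
    import numpy as np

    def log_evidence(y, X, prior_var=1.0, noise_var=0.1):
        """log p(y | model) with the weights w ~ N(0, prior_var * I) marginalized out."""
        C = prior_var * X @ X.T + noise_var * np.eye(len(y))
        _, logdet = np.linalg.slogdet(C)
        return -0.5 * (len(y) * np.log(2 * np.pi) + logdet + y @ np.linalg.solve(C, y))

    n = 100
    x1 = np.ones(n)
    x2 = np.tile([1.0, -1.0], n // 2)   # orthogonal to x1
    y = 0.7 * x1                        # data generated by x1 alone

    ev_reduced = log_evidence(y, np.column_stack([x1]))        # hypothesis: x1 only
    ev_full = log_evidence(y, np.column_stack([x1, x2]))       # hypothesis: x1 and x2

    print(ev_reduced > ev_full)   # True: the unused coupling is penalized
    ```

    Both models fit the data equally well, but the full model spreads prior mass over predictions the data never required, so its evidence is strictly lower; BMR exploits the same logic to score many reduced architectures from one fitted full model.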