
    How to Couple from the Past Using a Read-Once Source of Randomness

    We give a new method for generating perfectly random samples from the stationary distribution of a Markov chain. The method is related to coupling from the past (CFTP), but only runs the Markov chain forwards in time, and never restarts it at previous times in the past. The method is also related to an idea known as PASTA (Poisson arrivals see time averages) in the operations research literature. Because the new algorithm can be run using a read-once stream of randomness, we call it read-once CFTP. The memory and time requirements of read-once CFTP are on par with the requirements of the usual form of CFTP, and for a variety of applications the requirements may be noticeably less. Some perfect sampling algorithms for point processes are based on an extension of CFTP known as coupling into and from the past; for completeness, we give a read-once version of coupling into and from the past, but it remains impractical. For these point process applications, we give an alternative coupling method with which read-once CFTP may be efficiently used.
    Comment: 28 pages, 2 figures
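    To make the baseline concrete, here is a minimal sketch of classic CFTP, the method the paper improves upon. It uses a hypothetical monotone random walk on the states {0, ..., 4} (a toy chain, not from the paper); the read-once variant avoids re-running from ever-earlier start times, but the coalescence idea is the same.

```python
# Classic (monotone) coupling from the past on a toy clamped random walk.
import random

N_STATES = 5  # states 0..4 (hypothetical toy chain for illustration)

def step(x, u):
    """Monotone update rule: move up if u > 0.5, down otherwise, clamped."""
    return min(x + 1, N_STATES - 1) if u > 0.5 else max(x - 1, 0)

def cftp(seed=0):
    rng = random.Random(seed)
    randomness = []              # randomness[k] is the u used at time -(k+1)
    t = 1
    while True:
        # Extend stored randomness back to time -t; reuse old u's, never redraw.
        while len(randomness) < t:
            randomness.append(rng.random())
        lo, hi = 0, N_STATES - 1                 # extremal trajectories from time -t
        for u in reversed(randomness[:t]):       # apply u at times -t, ..., -1
            lo, hi = step(lo, u), step(hi, u)
        if lo == hi:                             # coalescence: a perfect sample
            return lo
        t *= 2                                   # go further into the past

print([cftp(seed=s) for s in range(10)])
```

    The restart-and-reuse of past randomness (the inner `while` loop) is exactly the memory cost that read-once CFTP eliminates by running only forwards.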

    Particle Metropolis-Hastings using gradient and Hessian information

    Particle Metropolis-Hastings (PMH) allows for Bayesian parameter inference in nonlinear state space models by combining Markov chain Monte Carlo (MCMC) and particle filtering. The latter is used to estimate the intractable likelihood. In its original formulation, PMH makes use of a marginal MCMC proposal for the parameters, typically a Gaussian random walk. However, this can lead to poor exploration of the parameter space and inefficient use of the generated particles. We propose a number of alternative versions of PMH that incorporate gradient and Hessian information about the posterior into the proposal. This information is essentially obtained as a byproduct of the likelihood estimation. Indeed, we show how to estimate the required information using a fixed-lag particle smoother, with a computational cost growing linearly in the number of particles. We conclude that the proposed methods can: (i) decrease the length of the burn-in phase, (ii) increase the mixing of the Markov chain at the stationary phase, and (iii) make the proposal distribution scale-invariant, which simplifies tuning.
    Comment: 27 pages, 5 figures, 2 tables. The final publication is available at Springer via: http://dx.doi.org/10.1007/s11222-014-9510-
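    A minimal sketch of the first-order (Langevin-style) proposal idea, assuming a stub `estimate(theta)` that returns a noisy log-posterior and gradient for a Gaussian toy target; in the paper these estimates come from a particle filter and a fixed-lag particle smoother, and the noise in the log-posterior makes the acceptance step only approximately exact here.

```python
import numpy as np

rng = np.random.default_rng(1)

def estimate(theta):
    """Stand-in for a particle filter: noisy log-posterior and gradient
    of a toy N(2, 1) posterior (an assumption, not the paper's model)."""
    logp = -0.5 * (theta - 2.0) ** 2 + rng.normal(0.0, 0.1)
    grad = -(theta - 2.0) + rng.normal(0.0, 0.1)
    return logp, grad

def pmh1(n_iters=5000, eps=0.5):
    theta, (logp, grad) = 0.0, estimate(0.0)
    chain = []
    for _ in range(n_iters):
        # Langevin proposal: drift along the estimated gradient.
        prop = theta + 0.5 * eps**2 * grad + eps * rng.normal()
        logp_p, grad_p = estimate(prop)
        # The proposal is asymmetric, so include forward/reverse densities.
        fwd = -((prop - theta - 0.5 * eps**2 * grad) ** 2) / (2 * eps**2)
        rev = -((theta - prop - 0.5 * eps**2 * grad_p) ** 2) / (2 * eps**2)
        if np.log(rng.random()) < logp_p - logp + rev - fwd:
            theta, logp, grad = prop, logp_p, grad_p
        chain.append(theta)
    return np.array(chain)

print(pmh1().mean())  # should land near the toy posterior mean of 2.0
```

    Because the drift term rescales with the local gradient, a single step size `eps` works across parameters of different scales, which is the tuning simplification claimed in (iii).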

    Weighted random-geometric and random-rectangular graphs: Spectral and eigenfunction properties of the adjacency matrix

    Within a random-matrix-theory approach, we use the nearest-neighbor energy-level spacing distribution $P(s)$ and the entropic eigenfunction localization length $\ell$ to study spectral and eigenfunction properties (of adjacency matrices) of weighted random-geometric and random-rectangular graphs. A random-geometric graph (RGG) considers a set of vertices uniformly and independently distributed on the unit square, while for a random-rectangular graph (RRG) the embedding geometry is a rectangle. The RRG model depends on three parameters: the rectangle side lengths $a$ and $1/a$, the connection radius $r$, and the number of vertices $N$. We then study in detail the case $a=1$, which corresponds to weighted RGGs, and explore weighted RRGs characterized by $a\sim 1$, i.e. two-dimensional geometries, but also approach the limit of quasi-one-dimensional wires when $a\gg 1$. In general we look for the scaling properties of $P(s)$ and $\ell$ as a function of $a$, $r$, and $N$. We find that the ratio $r/N^\gamma$, with $\gamma(a)\approx -1/2$, fixes the properties of both RGGs and RRGs. Moreover, when $a\ge 10$ we show that spectral and eigenfunction properties of weighted RRGs are universal for the fixed ratio $r/\mathcal{C}N^\gamma$, with $\mathcal{C}\approx a$.
    Comment: 8 pages, 6 figures
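    A minimal sketch of the kind of numerical experiment described: build a weighted RGG on the unit square, diagonalize its adjacency matrix, and collect nearest-neighbor level spacings. Spectral unfolding is skipped (spacings are simply rescaled by their mean), which is a simplification relative to a proper $P(s)$ analysis; the weight choice is also an assumption, not necessarily the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def weighted_rgg_spacings(n=500, r=0.1):
    pts = rng.random((n, 2))                       # vertices on the unit square
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    adj = (d < r) & ~np.eye(n, dtype=bool)         # RGG connection rule
    w = rng.normal(size=(n, n))
    w = (w + w.T) / 2                              # symmetric random edge weights
    eigvals = np.linalg.eigvalsh(np.where(adj, w, 0.0))
    s = np.diff(np.sort(eigvals))
    return s / s.mean()                            # spacings normalized to mean 1

s = weighted_rgg_spacings()
print(f"mean spacing {s.mean():.2f}, variance {s.var():.2f}")
```

    Sweeping `r` at fixed `n` (or vice versa) and histogramming `s` is how one would probe the $r/N^\gamma$ scaling claimed in the abstract.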

    Earthquake Size Distribution: Power-Law with Exponent Beta = 1/2?

    We propose that the widely observed and universal Gutenberg-Richter relation is a mathematical consequence of the critical branching nature of the earthquake process in a brittle fracture environment. These arguments, though preliminary, are confirmed by recent investigations of the seismic moment distribution in global earthquake catalogs and by results on the distribution of dislocation avalanche sizes in crystals. We consider possible systematic and random errors in determining earthquake size, especially its seismic moment. These effects increase the estimate of the parameter beta of the power-law distribution of earthquake sizes. In particular, we find that estimated beta-values may be inflated by 1-3% because relative moment uncertainties decrease with increasing earthquake size. Moreover, earthquake clustering greatly influences the beta-parameter: if clusters (aftershock sequences) are taken as the entity to be studied, the exponent value for their size distribution would decrease by 5-10%. The complexity of any earthquake source also inflates the estimated beta-value by at least 3-7%. The centroid depth distribution should also influence the beta-value; an approximate calculation suggests that the exponent value may be increased by 2-6%. Taking all these effects into account, we propose that the recently obtained beta-value of 0.63 could be reduced to about 0.52-0.56, near the universal constant value (1/2) predicted by theoretical arguments. We also consider possible consequences of the universal beta-value and its relevance for theoretical and practical understanding of earthquake occurrence in various tectonic and Earth-structure environments. Using comparative crystal deformation results may help us understand the generation of seismic tremors and slow earthquakes and illuminate the transition from brittle fracture to plastic flow.
    Comment: 53 pages, 2 tables, 12 figures
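    The beta-value under discussion is the exponent of a power-law (Pareto) tail for seismic moments, commonly fit by the maximum-likelihood (Hill-type) estimator. A minimal sketch on synthetic data, assuming a Pareto catalog with true beta = 1/2 and an arbitrary completeness threshold; real catalogs are subject to the measurement, clustering, and depth effects the abstract itemizes.

```python
import numpy as np

rng = np.random.default_rng(0)

def beta_mle(moments, m_min):
    """ML (Hill) estimator for a tail P(M > m) ~ (m / m_min)^(-beta)."""
    m = moments[moments >= m_min]
    return len(m) / np.log(m / m_min).sum()

beta_true, m_min, n = 0.5, 1e17, 10_000       # hypothetical catalog parameters
# numpy's pareto() is Lomax-distributed; shift by 1 to get Pareto I above m_min.
moments = m_min * (rng.pareto(beta_true, size=n) + 1.0)
print(f"estimated beta = {beta_mle(moments, m_min):.3f}")  # close to 0.5
```

    The systematic biases described in the abstract would enter here as perturbations of `moments` (noise growing for small events, declustering choices, source complexity), each nudging the estimate away from the theoretical 1/2.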

    Opinion fluctuations and disagreement in social networks

    We study a tractable opinion dynamics model that generates long-run disagreements and persistent opinion fluctuations. Our model involves an inhomogeneous stochastic gossip process of continuous opinion dynamics in a society consisting of two types of agents: regular agents, who update their beliefs according to information that they receive from their social neighbors, and stubborn agents, who never update their opinions. When the society contains stubborn agents with different opinions, the belief dynamics never lead to a consensus (among the regular agents). Instead, beliefs in the society fail to converge almost surely: the belief profile keeps fluctuating in an ergodic fashion and converges in law to a non-degenerate random vector. The structure of the network and the location of the stubborn agents within it shape the opinion dynamics. The expected belief vector evolves according to an ordinary differential equation coinciding with the Kolmogorov backward equation of a continuous-time Markov chain with absorbing states corresponding to the stubborn agents. It converges to a harmonic vector, with every regular agent's value being the weighted average of its neighbors' values and with boundary conditions given by the stubborn agents' opinions. Expected cross-products of the agents' beliefs allow for a similar characterization in terms of coupled Markov chains on the network. We prove that in large-scale societies that are highly fluid, meaning that the product of the mixing time of the Markov chain on the social network graph and the relative size of the linkages to stubborn agents vanishes as the population size grows large, a condition of homogeneous influence emerges, whereby the marginal distributions of most regular agents' stationary beliefs have approximately equal first and second moments.
    Comment: 33 pages, accepted for publication in Mathematics of Operations Research
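    A minimal simulation sketch of the gossip dynamics described above, assuming a small ring network (the paper treats general graphs): two stubborn agents hold opinions 0 and 1, regular agents repeatedly average with a random neighbor, and beliefs keep fluctuating around a harmonic profile rather than reaching consensus.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 20
stubborn = {0: 0.0, n // 2: 1.0}              # stubborn agents and their opinions
x = rng.random(n)                             # random initial beliefs
for i, v in stubborn.items():
    x[i] = v

for _ in range(100_000):
    i = rng.integers(n)
    if i in stubborn:
        continue                              # stubborn agents never update
    j = (i + rng.choice([-1, 1])) % n         # pick a random neighbor on the ring
    x[i] = (x[i] + x[j]) / 2                  # pairwise averaging (gossip) step

print(np.round(x, 2))  # roughly interpolates between 0 and 1, never settling
```

    Re-running with different seeds shows the persistent fluctuations: the snapshot changes every time, while averaging many snapshots recovers the harmonic expected-belief vector described in the abstract.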