
    Spike train statistics and Gibbs distributions

    This paper is based on a lecture given at the LACONEU summer school, Valparaiso, January 2012. We introduce Gibbs distributions in a general setting, including non-stationary dynamics, and then present three examples of such Gibbs distributions in the context of neural network spike train statistics: (i) maximum entropy models with spatio-temporal constraints; (ii) generalized linear models; (iii) conductance-based integrate-and-fire models with chemical synapses and gap junctions. Comment: 23 pages, submitted
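    The first of these examples can be made concrete in a few lines. The sketch below is not from the paper: the neuron count, biases, and couplings are invented. It builds a maximum-entropy, Ising-like Gibbs distribution over binary spike patterns by exponentiating and normalizing an energy with single-neuron and pairwise terms.

```python
import itertools
import numpy as np

# Toy maximum-entropy (Ising-like) Gibbs distribution over spike patterns.
# The biases h and couplings J are invented for illustration.
N = 3
h = np.array([0.2, -0.1, 0.4])
J = np.array([[0.0, 0.5, -0.3],
              [0.5, 0.0, 0.2],
              [-0.3, 0.2, 0.0]])

def energy(sigma):
    """H(sigma) = -sum_i h_i sigma_i - sum_{i<j} J_ij sigma_i sigma_j."""
    return -h @ sigma - 0.5 * sigma @ J @ sigma

# Enumerate all 2^N spike patterns and normalize exp(-H) into a Gibbs measure.
patterns = np.array(list(itertools.product([0, 1], repeat=N)))
weights = np.exp([-energy(s) for s in patterns])
probs = weights / weights.sum()

for s, p in zip(patterns, probs):
    print(s, round(p, 4))
```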

    Parameter estimation for integer-valued Gibbs distributions

    We consider Gibbs distributions, which are families of probability distributions over a discrete space $\Omega$ with probability mass function given by $\mu^\Omega_\beta(x) = \frac{e^{\beta H(x)}}{Z(\beta)}$. Here $H:\Omega\rightarrow\{0,\ldots,n\}$ is a fixed function (called a Hamiltonian), $\beta$ is the parameter of the distribution, and the normalization factor $Z(\beta)=\sum_{x\in\Omega}e^{\beta H(x)}=\sum_{k=0}^n c_k e^{\beta k}$ is called the partition function. We study how the function $Z$ can be estimated using an oracle that produces samples $x\sim\mu^\Omega_\beta(\cdot)$ for a value $\beta$ in a given interval $[\beta_{\min},\beta_{\max}]$. We consider the problem of estimating the normalized coefficients $c_k$ for indices $k\in\mathcal{K}$ satisfying $\max_\beta \mu^\Omega_\beta(\{x \mid H(x)=k\})\ge\mu_*$, where $\mu_*\in(0,1)$ is a given parameter and $\mathcal{K}$ is a given subset of $\mathcal{H}$. We solve this using $\tilde O\left(\frac{\min\{q,n^2\}+\frac{\min\{\sqrt q,|\mathcal{K}|\}}{\mu_*}}{\epsilon^2}\right)$ samples, where $q=\log\frac{Z(\beta_{\max})}{Z(\beta_{\min})}$, and we show this is optimal up to logarithmic factors. We also improve the sample complexity to roughly $\tilde O\left(\frac{1/\mu_*+\min\{q+n,n^2\}}{\epsilon^2}\right)$ for applications where the coefficients are log-concave (e.g. counting connected subgraphs of a given graph). As a key subroutine, we show how to estimate $q$ using $\tilde O\left(\frac{\min\{q,n^2\}}{\epsilon^2}\right)$ samples. This improves over a prior algorithm of Kolmogorov (2018) that uses $\tilde O\left(\frac{q}{\epsilon^2}\right)$ samples. We also show a "batched" version of this algorithm which simultaneously estimates $\frac{Z(\beta)}{Z(\beta_{\min})}$ for many values of $\beta$, at essentially the same cost as for estimating just $\frac{Z(\beta_{\max})}{Z(\beta_{\min})}$ alone. We show matching lower bounds, demonstrating that this complexity is optimal as a function of $n$ and $q$ up to logarithmic terms. Comment: Superseded by arXiv:2007.1082
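    A minimal sketch of the objects defined above, with an invented count vector $c_k$: it evaluates $\log Z(\beta)=\log\sum_k c_k e^{\beta k}$ and simulates the sampling oracle restricted to the statistic $H(x)$, which is all that coefficient estimation observes.

```python
import numpy as np

# Sketch of the setup above: Z(beta) = sum_k c_k e^{beta k} for an integer-valued
# Hamiltonian H taking values in {0, ..., n}.  The counts c_k are invented.
c = np.array([1.0, 5.0, 10.0, 10.0, 5.0, 1.0])    # c_k = #{x : H(x) = k}, so n = 5
k = np.arange(len(c))

def log_Z(beta):
    a = beta * k + np.log(c)
    m = a.max()                                    # log-sum-exp for stability
    return m + np.log(np.exp(a - m).sum())

def sample_H(beta, rng, size=1):
    """Draw H(x) under mu_beta: mimics the sampling oracle, restricted to the
    statistic H(x), which is all that the coefficient estimators need."""
    p = np.exp(beta * k + np.log(c) - log_Z(beta))
    return rng.choice(k, size=size, p=p / p.sum())

rng = np.random.default_rng(0)
print(log_Z(1.0), sample_H(1.0, rng, size=10))
```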

    On Sampling from the Gibbs Distribution with Random Maximum A-Posteriori Perturbations

    In this paper we describe how MAP inference can be used to sample efficiently from Gibbs distributions. Specifically, we provide means for drawing either approximate or unbiased samples from Gibbs distributions by introducing low-dimensional perturbations and solving the corresponding MAP assignments. Our approach also leads to new ways to derive lower bounds on partition functions. We demonstrate empirically that our method excels in the typical "high signal - high coupling" regime, which produces ragged energy landscapes that are challenging for alternative approaches to sampling and/or lower bounds.
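    The classical fact underlying this approach is the Gumbel-max trick: perturbing every configuration's log-potential with an independent Gumbel variable and taking the MAP yields an exact Gibbs sample. The sketch below, on an invented four-state model, illustrates only this fully-perturbed case; the paper's low-dimensional perturbations are not implemented here.

```python
import numpy as np

# Perturb-and-MAP in its exact, fully-perturbed form: adding i.i.d. Gumbel noise
# to every configuration's log-potential and taking the argmax gives an exact
# sample from p(x) ∝ exp(theta(x)).  The log-potentials below are invented.
rng = np.random.default_rng(0)
theta = np.array([1.0, 0.5, -0.2, 2.0])            # log-potentials of 4 states

def perturb_and_map(theta, rng):
    gumbel = rng.gumbel(size=theta.shape)           # one Gumbel per configuration
    return np.argmax(theta + gumbel)                # MAP of the perturbed model

samples = [perturb_and_map(theta, rng) for _ in range(20000)]
empirical = np.bincount(samples, minlength=len(theta)) / len(samples)
exact = np.exp(theta) / np.exp(theta).sum()
print(np.round(empirical, 3), np.round(exact, 3))   # these should roughly agree
```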

    Convergence Rate of Riemannian Hamiltonian Monte Carlo and Faster Polytope Volume Computation

    We give the first rigorous proof of the convergence of Riemannian Hamiltonian Monte Carlo, a general (and practical) method for sampling Gibbs distributions. Our analysis shows that the rate of convergence is bounded in terms of natural smoothness parameters of an associated Riemannian manifold. We then apply the method, with the manifold defined by the log-barrier function, to the problems of (1) uniformly sampling a polytope and (2) computing its volume, the latter by extending Gaussian cooling to the manifold setting. In both cases, the total number of steps needed is $O^{*}(mn^{2/3})$, improving the state of the art. A key ingredient of our analysis is a proof of an analog of the KLS conjecture for Gibbs distributions over manifolds.
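    For orientation, the sketch below implements plain Euclidean HMC for a smooth Gibbs density $\exp(-H(x))$ on a toy Gaussian target. The Riemannian variant analyzed in the paper, with its position-dependent log-barrier metric, is not implemented, and the step size and trajectory length are arbitrary choices.

```python
import numpy as np

# Plain (Euclidean) Hamiltonian Monte Carlo targeting p(x) ∝ exp(-H(x)).
# Baseline sketch only: the paper's Riemannian variant replaces the Gaussian
# kinetic energy with one based on a position-dependent metric.
def hmc_step(x, H, grad_H, rng, eps=0.1, n_leapfrog=20):
    p = rng.normal(size=x.shape)                    # fresh Gaussian momentum
    x_new, p_new = x.copy(), p.copy()
    for _ in range(n_leapfrog):                     # leapfrog integration
        p_new = p_new - 0.5 * eps * grad_H(x_new)
        x_new = x_new + eps * p_new
        p_new = p_new - 0.5 * eps * grad_H(x_new)
    # Metropolis correction with total energy H(x) + |p|^2 / 2
    dE = (H(x_new) + 0.5 * p_new @ p_new) - (H(x) + 0.5 * p @ p)
    return x_new if np.log(rng.uniform()) < -dE else x

# Toy target: standard Gaussian, H(x) = |x|^2 / 2, grad H(x) = x.
H = lambda x: 0.5 * x @ x
grad_H = lambda x: x
rng = np.random.default_rng(0)
x, samples = np.zeros(2), []
for _ in range(2000):
    x = hmc_step(x, H, grad_H, rng)
    samples.append(x)
samples = np.array(samples)
print(samples.mean(axis=0), samples.var(axis=0))    # ≈ 0 mean, ≈ 1 variance
```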

    Approximation algorithms for the normalizing constant of Gibbs distributions

    Consider a family of distributions $\{\pi_{\beta}\}$ where $X\sim\pi_{\beta}$ means that $\mathbb{P}(X=x)=\exp(-\beta H(x))/Z(\beta)$. Here $Z(\beta)$ is the proper normalizing constant, equal to $\sum_x \exp(-\beta H(x))$. Then $\{\pi_{\beta}\}$ is known as a Gibbs distribution, and $Z(\beta)$ is the partition function. This work presents a new method for approximating the partition function to a specified level of relative accuracy using a number of samples that is only $O(\ln(Z(\beta))\ln(\ln(Z(\beta))))$ when $Z(0)\geq 1$. This is a sharp improvement over previous, similar approaches, which used a much more complicated algorithm requiring $O(\ln(Z(\beta))\ln(\ln(Z(\beta)))^5)$ samples. Comment: Published at http://dx.doi.org/10.1214/14-AAP1015 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org)
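    A generic telescoping-product estimator conveys the idea of estimating $Z(\beta)$ from samples: $Z(\beta_{i+1})/Z(\beta_i)=\mathbb{E}_{X\sim\pi_{\beta_i}}[\exp(-(\beta_{i+1}-\beta_i)H(X))]$, and the ratios multiply up. The sketch below uses a toy model and a naive, evenly spaced cooling schedule; the paper's contribution lies in choosing the schedule and sample sizes far more carefully.

```python
import numpy as np

# Telescoping-product sketch for Z(beta)/Z(0):
#   Z(b_{i+1})/Z(b_i) = E_{X ~ pi_{b_i}} [ exp(-(b_{i+1} - b_i) H(X)) ],
# estimated from samples at each level and multiplied together.
# Toy discrete model (so exact sampling is easy): H(x) = x on {0, ..., 20}.
rng = np.random.default_rng(0)
states = np.arange(21)
H = states.astype(float)

def sample_pi(beta, size):
    w = np.exp(-beta * H)
    return rng.choice(states, size=size, p=w / w.sum())

betas = np.linspace(0.0, 2.0, 11)                   # naive, evenly spaced schedule
log_ratio = 0.0
for b0, b1 in zip(betas[:-1], betas[1:]):
    x = sample_pi(b0, size=2000)
    log_ratio += np.log(np.mean(np.exp(-(b1 - b0) * H[x])))

exact = np.log(np.exp(-betas[-1] * H).sum() / len(states))
print(log_ratio, exact)                             # estimate vs. exact log(Z(2)/Z(0))
```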

    How Gibbs distributions may naturally arise from synaptic adaptation mechanisms. A model-based argumentation

    This paper addresses two questions in the context of neuronal network dynamics, using methods from dynamical systems theory and statistical physics: (i) How can the statistical properties of sequences of action potentials ("spike trains") produced by neuronal networks be characterized? (ii) What are the effects of synaptic plasticity on these statistics? We introduce a framework in which spike trains are associated with a coding of membrane potential trajectories, and indeed constitute a symbolic coding in important explicit examples (the so-called gIF models). On this basis, we use the thermodynamic formalism from ergodic theory to show how Gibbs distributions are natural probability measures for describing the statistics of spike trains, given the empirical averages of prescribed quantities. As a second result, we show that Gibbs distributions naturally arise when considering "slow" synaptic plasticity rules, where the characteristic time for synapse adaptation is much longer than the characteristic time for neuron dynamics. Comment: 39 pages, 3 figures
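    The maximum-entropy reading of "Gibbs distributions given empirical averages" can be sketched directly: fit the multipliers of an exponential-family model so that its model averages of prescribed observables match their empirical values. The observables and target averages below are invented, and the fit is plain gradient ascent rather than anything taken from the paper.

```python
import itertools
import numpy as np

# Maximum-entropy sketch: choose p(sigma) ∝ exp(sum_a lambda_a f_a(sigma)) so
# that model averages of the observables f_a match prescribed empirical averages.
N = 3
patterns = np.array(list(itertools.product([0, 1], repeat=N)), dtype=float)
# Observables: the three firing rates and one pairwise coincidence (invented).
F = np.column_stack([patterns[:, 0], patterns[:, 1], patterns[:, 2],
                     patterns[:, 0] * patterns[:, 1]])
target = np.array([0.3, 0.5, 0.2, 0.2])             # prescribed empirical averages

lam = np.zeros(F.shape[1])
for _ in range(2000):                                # plain gradient ascent on log-likelihood
    p = np.exp(F @ lam); p /= p.sum()
    lam += 0.1 * (target - F.T @ p)                  # gradient = empirical minus model averages

p = np.exp(F @ lam); p /= p.sum()
print(np.round(F.T @ p, 3))                          # should be close to target
```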

    Gibbs distributions for random partitions generated by a fragmentation process

    In this paper we study random partitions of $\{1,\ldots,n\}$, where every cluster of size $j$ can be in any of $w_j$ possible internal states. The Gibbs$(n,k,w)$ distribution is obtained by sampling uniformly among such partitions with $k$ clusters. We provide conditions on the weight sequence $w$ allowing the construction of a partition-valued random process in which the state at step $k$ has the Gibbs$(n,k,w)$ distribution, so that the partition is subject to irreversible fragmentation as time evolves. For a particular one-parameter family of weight sequences $w_j$, the time-reversed process is the discrete Marcus-Lushnikov coalescent process with affine collision rate $K_{i,j}=a+b(i+j)$ for some real numbers $a$ and $b$. Under further restrictions on $a$ and $b$, the fragmentation process can be realized by conditioning a Galton-Watson tree with suitable offspring distribution to have $n$ nodes, and cutting the edges of this tree by random sampling of edges without replacement, so as to partition the tree into a collection of subtrees. Suitable offspring distributions include the binomial, negative binomial and Poisson distributions. Comment: 38 pages, 2 figures, version considerably modified. To appear in the Journal of Statistical Physics
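    One concrete handle on the Gibbs$(n,k,w)$ weights is their normalizing constant $B_w(n,k)$, the sum of $\prod_j w_{|B_j|}$ over partitions of $\{1,\ldots,n\}$ into $k$ blocks; it satisfies a simple recursion obtained by conditioning on the size of the block containing element 1. The sketch below uses a placeholder weight sequence ($w_j\equiv 1$), for which $B_w(n,k)$ reduces to a Stirling number of the second kind.

```python
from functools import lru_cache
from math import comb

# Normalizing constant of the Gibbs(n, k, w) distribution: a partition of
# {1, ..., n} into k blocks B_1, ..., B_k has weight prod_j w_{|B_j|}, and
# B(n, k) sums this over all such partitions by conditioning on the size j
# of the block containing element 1.  The weight sequence here is a placeholder.
w = lambda j: 1.0                                   # w_j ≡ 1 (placeholder weights)

@lru_cache(maxsize=None)
def B(n, k):
    if k == 0:
        return 1.0 if n == 0 else 0.0
    return sum(comb(n - 1, j - 1) * w(j) * B(n - j, k - 1)
               for j in range(1, n - k + 2))

# With unit weights, B(5, 2) equals the Stirling number S(5, 2) = 15.
print(B(5, 2))
```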