
    Statistical physics of pairwise probability models

    Statistical models for describing the probability distribution over the states of biological systems are commonly used for dimensional reduction. Among these models, pairwise models are very attractive in part because they can be fit using a reasonable amount of data: knowledge of the means and correlations between pairs of elements in the system is sufficient. Not surprisingly, then, using pairwise models for studying neural data has been the focus of many studies in recent years. In this paper, we describe how tools from statistical physics can be employed for studying and using pairwise models. We build on our previous work on the subject and study the relation between different methods for fitting these models and evaluating their quality. In particular, using data from simulated cortical networks we study how the quality of various approximate methods for inferring the parameters in a pairwise model depends on the time bin chosen for binning the data. We also study the effect of the size of the time bin on the model quality itself, again using simulated data. We show that using finer time bins increases the quality of the pairwise model. We offer new ways of deriving the expressions reported in our previous work for assessing the quality of pairwise models.
    Comment: 25 pages, 3 figures
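
    The abstract's starting point, that means and pairwise correlations suffice to fit the model, can be made concrete with a small sketch. Below is a minimal, hypothetical illustration (function and variable names are mine, not the paper's) of one approximate inversion method commonly compared in this setting, naive mean-field inversion, applied to spike trains binned at a chosen width dt; rerunning it for several dt values exposes the bin-size dependence the paper studies.

        # Minimal sketch, assuming spike_times is a list of per-neuron arrays
        # of spike times and dt is the time-bin width under study.
        import numpy as np

        def bin_spikes(spike_times, t_max, dt):
            """Binarize spike times into {-1, +1} words with bin width dt."""
            n_bins = int(t_max / dt)
            s = -np.ones((len(spike_times), n_bins))
            for i, times in enumerate(spike_times):
                idx = np.minimum((np.asarray(times) / dt).astype(int), n_bins - 1)
                s[i, idx] = 1.0
            return s

        def nmf_fit(s):
            """Naive mean-field inversion: J_ij ~ -(C^-1)_ij for i != j,
            with C the connected correlation matrix of the binned data.
            Assumes no neuron is silent or active in every single bin."""
            m = s.mean(axis=1)
            C = np.cov(s)
            J = -np.linalg.inv(C)
            np.fill_diagonal(J, 0.0)
            h = np.arctanh(m) - J @ m   # mean-field external fields
            return h, J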

    Exact mean field inference in asymmetric kinetic Ising systems

    We develop an elementary mean field approach for fully asymmetric kinetic Ising models, which can be applied to a single instance of the problem. In the case of the asymmetric SK model this method gives the exact values of the local magnetizations and the exact relation between equal-time and time-delayed correlations. It can also be used to efficiently solve the inverse problem, i.e. to determine the couplings and local fields from a set of patterns, including cases where the fields and couplings are time-dependent. This approach generalizes some recent attempts to solve this dynamical inference problem, which were valid only in the limit of weak coupling, and provides the exact solution even for strongly coupled systems. This mean field inference can also be used as an efficient approximate method to infer the couplings and fields in problems which are not infinite range, for instance in diluted asymmetric spin glasses.
    Comment: 10 pages, 7 figures
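
    For the kinetic dynamics in question (each spin updated with probability proportional to exp(+/-(h_i + sum_j J_ij s_j(t)))), the weak-coupling inversion that this work generalizes has a compact closed form: D = A J C, with A_ii = 1 - m_i^2, C the equal-time and D the one-step-delayed connected correlations, so J = A^-1 D C^-1. A minimal sketch of that baseline (names assumed); the exact mean-field method of the paper keeps the same algebraic structure but computes the gain matrix self-consistently instead of from the naive 1 - m_i^2 factor.

        import numpy as np

        def kinetic_nmf_fit(s):
            """s: (N, T) array in {-1, +1}, one column per time step."""
            m = s.mean(axis=1)
            ds = s - m[:, None]
            T = s.shape[1]
            C = ds[:, :-1] @ ds[:, :-1].T / (T - 1)   # equal-time correlations
            D = ds[:, 1:] @ ds[:, :-1].T / (T - 1)    # one-step delayed correlations
            A_inv = np.diag(1.0 / (1.0 - m**2))
            J = A_inv @ D @ np.linalg.inv(C)          # J = A^-1 D C^-1
            h = np.arctanh(m) - J @ m                 # naive mean-field fields
            return h, J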

    Intrinsic limitations of inverse inference in the pairwise Ising spin glass

    We analyze the limits inherent to the inverse reconstruction of a pairwise Ising spin glass based on susceptibility propagation. We establish the conditions under which the susceptibility propagation algorithm is able to reconstruct the characteristics of the network given first- and second-order local observables, evaluate the errors arising from various types of noise in the originally observed data, and discuss the scaling of the problem with the number of degrees of freedom.
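
    One of the intrinsic limits at issue here is algorithm-independent: first- and second-order observables estimated from M samples carry sampling noise of order 1/sqrt(M), which propagates through any inversion scheme. A stylized sketch of this effect (stand-in matrices and a simple matrix-inversion reconstruction, not the paper's setup):

        import numpy as np

        rng = np.random.default_rng(0)
        N = 20
        A = rng.normal(size=(N, N)) / np.sqrt(N)
        C = A @ A.T + np.eye(N)                 # stand-in correlation matrix
        J_true = -np.linalg.inv(C)

        for M in (10**3, 10**4, 10**5):         # number of observed samples
            noise = rng.normal(scale=1 / np.sqrt(M), size=(N, N))
            C_obs = C + (noise + noise.T) / 2   # symmetric sampling noise ~ 1/sqrt(M)
            J_obs = -np.linalg.inv(C_obs)
            err = np.sqrt(np.mean((J_obs - J_true) ** 2))
            print(f"M={M:>7d}  coupling RMS error ~ {err:.4f}")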

    How biased are maximum entropy models?

    Maximum entropy models have become popular statistical models in neuroscience and other areas in biology, and can be useful tools for obtaining estimates of mutual information in biological systems. However, maximum entropy models fit to small data sets can be subject to sampling bias, i.e. the true entropy of the data can be severely underestimated. Here we study the sampling properties of estimates of the entropy obtained from maximum entropy models. We show that if the data is generated by a distribution that lies in the model class, the bias is equal to the number of parameters divided by twice the number of observations. However, in practice, the true distribution is usually outside the model class, and we show here that this misspecification can lead to much larger bias. We provide a perturbative approximation of the maximal expected bias when the true model is outside the model class, and we illustrate our results using numerical simulations of an Ising model, i.e. the second-order maximum entropy distribution on binary data.
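
    The in-model-class result quoted above is easy to apply. For a pairwise Ising model on n binary units there are K = n + n(n-1)/2 parameters (fields plus couplings), so with n = 10 and M = 1000 observations the expected underestimate of the entropy is K/(2M) = 55/2000 = 0.0275 nats (assuming entropy measured in natural-log units). A one-function sketch:

        # Worked example of the quoted bias formula, bias ~ K / (2M), where K
        # counts the parameters of a pairwise Ising model on n binary units.
        def entropy_bias_nats(n_units, n_obs):
            K = n_units + n_units * (n_units - 1) // 2
            return K / (2.0 * n_obs)

        print(entropy_bias_nats(10, 1000))   # 55 params, 1000 samples -> 0.0275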

    Beyond inverse Ising model: structure of the analytical solution for a class of inverse problems

    I consider the problem of deriving the couplings of a statistical model from measured correlations, a task which generalizes the well-known inverse Ising problem. After recalling that this problem can be mapped onto that of expressing the entropy of a system as a function of its corresponding observables, I show the conditions under which this can be done without resorting to iterative algorithms. I find that inverse problems are local (the inverse Fisher information is sparse) whenever the corresponding models have a factorized form, and the entropy can be split into a sum of small cluster contributions. I illustrate these ideas through two examples (the Ising model on a tree and the one-dimensional periodic chain with arbitrary-order interactions) and support the results with numerical simulations. Finally, the extension of these methods to more general scenarios is discussed.
    Comment: 15 pages, 6 figures
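
    The mapping alluded to above is, in standard form, a Legendre duality (notation mine, a hedged restatement rather than the paper's derivation): once the entropy is written as a function of the observables, the magnetizations m_i and pairwise correlations c_ij, the couplings are its first derivatives and the inverse Fisher information is (minus) its Hessian, which is why sparsity of the latter is a statement about locality of the inverse problem.

        S(\{m_i\},\{c_{ij}\}) \;=\; \log Z(\{h_i\},\{J_{ij}\})
            \;-\; \sum_i h_i m_i \;-\; \sum_{i<j} J_{ij} c_{ij},
        \qquad
        h_i = -\frac{\partial S}{\partial m_i}, \quad
        J_{ij} = -\frac{\partial S}{\partial c_{ij}} .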

    Dynamics and Performance of Susceptibility Propagation on Synthetic Data

    We study the performance and convergence properties of the Susceptibility Propagation (SusP) algorithm for solving the Inverse Ising problem. We first study how the temperature parameter (T) in a Sherrington-Kirkpatrick model generating the data influences the performance and convergence of the algorithm. We find that in the high-temperature regime (T > 4), the algorithm performs well and its quality is limited only by the quality of the supplied data. In the low-temperature regime (T < 4), we find that the algorithm typically does not converge, yielding diverging values for the couplings. However, we show that by stopping the algorithm at the right time before divergence becomes serious, good reconstruction can be achieved down to T ~ 2. We then show that dense connectivity, loopiness of the connectivity, and high absolute magnetization all degrade the performance of the algorithm. When absolute magnetization is high, we show that other methods can work better than SusP. Finally, we show that for neural data with high absolute magnetization, SusP performs less well than TAP inversion.
    Comment: 9 pages, 7 figures
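
    The TAP inversion mentioned in the last sentence admits a closed form, which is what makes it a natural baseline for SusP. From the TAP equations one obtains, for i != j, (C^-1)_ij = -J_ij - 2 J_ij^2 m_i m_j, a per-coupling quadratic whose physical root reduces to the naive mean-field answer -(C^-1)_ij as the magnetizations vanish. A minimal sketch (names assumed, a sketch of the standard formula rather than this paper's code):

        import numpy as np

        def tap_couplings(m, C):
            """m: magnetizations (N,); C: connected correlation matrix (N, N)."""
            Cinv = np.linalg.inv(C)
            mm = np.outer(m, m)
            disc = 1.0 - 8.0 * mm * Cinv            # discriminant of the quadratic
            J = (np.sqrt(np.maximum(disc, 0.0)) - 1.0) / (4.0 * mm + 1e-12)
            small = np.abs(mm) < 1e-6               # fall back to nMF where m_i m_j ~ 0
            J[small] = -Cinv[small]
            np.fill_diagonal(J, 0.0)
            return J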

    Stimulus-dependent maximum entropy models of neural population codes

    Neural populations encode information about their stimulus in a collective fashion, by joint activity patterns of spiking and silence. A full account of this mapping from stimulus to neural activity is given by the conditional probability distribution over neural codewords given the sensory input. To be able to infer a model for this distribution from large-scale neural recordings, we introduce a stimulus-dependent maximum entropy (SDME) model, a minimal extension of the canonical linear-nonlinear model of a single neuron to a pairwise-coupled neural population. The model is able to capture the single-cell response properties as well as the correlations in neural spiking due to shared stimulus and due to effective neuron-to-neuron connections. Here we show that in a population of 100 retinal ganglion cells in the salamander retina responding to temporal white-noise stimuli, dependencies between cells play an important encoding role. As a result, the SDME model gives a more accurate account of single cell responses and in particular outperforms uncoupled models in reproducing the distributions of codewords emitted in response to a stimulus. We show how the SDME model, in conjunction with static maximum entropy models of population vocabulary, can be used to estimate information-theoretic quantities like surprise and information transmission in a neural population.
    Comment: 11 pages, 7 figures
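
    As described, the SDME model couples each neuron to the stimulus through a linear filter (the linear-nonlinear part) and to the other neurons through stimulus-independent pairwise terms. A minimal sketch of that conditional distribution (all parameter names assumed; brute-force normalization is only feasible for small populations, not the 100-cell recordings above):

        import itertools
        import numpy as np

        def sdme_logp(word, stimulus, filters, J):
            """log P(word | stimulus) for word in {0, 1}^N.
            filters: (N, len(stimulus)) linear filters giving h_i(stimulus);
            J: (N, N) symmetric stimulus-independent pairwise couplings."""
            h = filters @ stimulus                  # stimulus-dependent fields

            def log_weight(w):
                w = np.asarray(w, dtype=float)
                return h @ w + 0.5 * w @ J @ w

            # Enumerate all 2^N codewords to normalize (small N only).
            logZ = np.logaddexp.reduce(
                [log_weight(w) for w in itertools.product((0, 1), repeat=len(h))])
            return log_weight(word) - logZ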