
    Spatio-temporal spike trains analysis for large scale networks using maximum entropy principle and Monte-Carlo method

    Understanding the dynamics of neural networks is a major challenge in experimental neuroscience. For that purpose, a model of the recorded activity that reproduces the main statistics of the data is required. In the first part, we review recent results on spike train statistics analysis using maximum entropy (MaxEnt) models. Most of these studies have focused on modelling synchronous spike patterns, leaving aside the temporal dynamics of the neural activity. However, the maximum entropy principle can be generalized to the temporal case, leading to Markovian models in which memory effects and time correlations in the dynamics are properly taken into account. In the second part, we present a new method based on Monte-Carlo sampling that is suited to fitting large-scale spatio-temporal MaxEnt models. The formalism and tools presented here will be essential for fitting MaxEnt spatio-temporal models to large neural ensembles. (Comment: 41 pages, 10 figures)
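The kind of MaxEnt fitting described above can be sketched in miniature: a pairwise (Ising-like) model of binary spike patterns sampled with Metropolis Monte-Carlo. This is an illustrative toy, not the paper's method; the parameter values are invented, and the model here is purely spatial (no memory terms), whereas the paper's models are spatio-temporal.

```python
import numpy as np

def metropolis_sample(h, J, n_steps=20000, seed=0):
    """Sample binary spike patterns s in {0,1}^n from a pairwise MaxEnt
    (Ising-like) model P(s) proportional to exp(h.s + s.J.s), using
    Metropolis single-neuron flips."""
    rng = np.random.default_rng(seed)
    n = len(h)
    s = rng.integers(0, 2, n)
    samples = []
    for _ in range(n_steps):
        i = rng.integers(n)
        s_new = s.copy()
        s_new[i] = 1 - s_new[i]  # flip one neuron
        d_logp = (h @ s_new + s_new @ J @ s_new) - (h @ s + s @ J @ s)
        if np.log(rng.random()) < d_logp:  # accept with prob min(1, e^d_logp)
            s = s_new
        samples.append(s.copy())
    return np.array(samples)

# toy 3-neuron model with no couplings: firing rates should match sigmoid(h)
h = np.array([-1.0, 0.5, 0.0])
J = np.zeros((3, 3))
samples = metropolis_sample(h, J)
rates = samples[5000:].mean(axis=0)  # discard burn-in, estimate P(s_i = 1)
```

With `J = 0` the neurons are independent and the stationary firing rate of neuron i is the logistic function of `h[i]`, which gives a quick sanity check that the sampler targets the intended distribution.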

    An Empirical Examination of Maximum Entropy Estimation

    Maximum entropy estimation is a relatively new estimation technique in econometrics. We carry out several Monte Carlo experiments, using real data as a basis, in order to understand the properties of the maximum entropy estimator. We compare the maximum entropy and generalized maximum entropy estimators to traditional estimation techniques in linear regression, binary choice, and multinomial choice models. In addition, we discuss maximum entropy estimation in censored and truncated regression models. We find that the generalized maximum entropy estimator dominates the logit and multinomial logit estimators in Monte Carlo experiments. In discrete choice models, the generalized maximum entropy estimator allows us to jointly estimate the unknown probabilities and the unknown errors, resulting in more uniform predicted probabilities and reducing the variance of the parameter estimates. In the linear regression problem, the generalized maximum entropy estimator allows us to impose nonsample information about the unknown parameters and errors. However, we must impose a set of support points for the unknown parameters and errors, which is not always easy to do. We find that when the nonsample information we specify is correct, the generalized maximum entropy estimator has lower risk than either the ordinary least squares or the inequality-restricted least squares estimators. From our sampling experiments using real data, we conclude that maximum entropy estimation is a viable estimation technique in several econometric models.
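The principle behind these estimators can be illustrated with the classic maximum entropy problem of a die constrained to a given mean (Jaynes' example). This is a sketch of the general MaxEnt machinery only, not the paper's econometric GME estimator with support points; the target mean is an arbitrary choice.

```python
import numpy as np
from scipy.optimize import brentq

def maxent_die(target_mean):
    """Maximum entropy distribution on faces 1..6 subject to a mean
    constraint: p_i is proportional to exp(-lam * i), with the Lagrange
    multiplier lam chosen so the mean matches the constraint."""
    faces = np.arange(1, 7)

    def mean_given(lam):
        w = np.exp(-lam * faces)
        return (faces * w).sum() / w.sum()

    # solve the dual problem: one equation in the multiplier lam
    lam = brentq(lambda l: mean_given(l) - target_mean, -10.0, 10.0)
    w = np.exp(-lam * faces)
    return w / w.sum()

p = maxent_die(4.5)  # mean above 3.5, so mass tilts toward high faces
```

The solution is the exponential-family tilt of the uniform distribution, which is exactly the structure the GME approach exploits when it jointly estimates unknown probabilities and errors.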

    Selection of proposal distributions for generalized importance sampling estimators

    The standard importance sampling (IS) estimator generally does not work well in examples involving simultaneous inference on several targets, as the importance weights can take arbitrarily large values, making the estimator highly unstable. In such situations, alternative generalized IS estimators involving samples from multiple proposal distributions are preferred. Just as with standard IS, the success of these multiple-proposal IS estimators crucially depends on the choice of the proposal distributions, and that choice is the focus of this article. We propose three methods based on (i) a geometric space-filling coverage criterion, (ii) a minimax variance approach, and (iii) a maximum entropy approach. The first two methods are applicable to any multi-proposal IS estimator, whereas the third approach is described in the context of Doss's (2010) two-stage IS estimator. For the first method we propose a suitable measure of coverage based on the symmetric Kullback-Leibler divergence, while the second and third approaches use estimates of the asymptotic variances of Doss's (2010) IS estimator and Geyer's (1994) reverse logistic estimator, respectively; to this end, we provide consistent spectral variance estimators for these asymptotic variances. The proposed methods for selecting proposal densities are illustrated in detailed examples.
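A minimal sketch of a multiple-proposal IS estimator, using deterministic-mixture (balance heuristic) weights; the target, proposals, and sample sizes are illustrative assumptions, and this is not Doss's two-stage estimator or any of the selection methods above.

```python
import numpy as np

def norm_logpdf(x, mu=0.0, sigma=1.0):
    """Log density of a normal distribution."""
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

def multiple_is_mean(target_logpdf, h, proposals, n_per=5000, seed=1):
    """Self-normalized generalized IS estimate of E_target[h(X)]: draw
    equally from each normal proposal (mu, sigma) and weight each draw by
    target(x) / mixture(x), the balance-heuristic mixture density."""
    rng = np.random.default_rng(seed)
    xs = np.concatenate([rng.normal(mu, sd, n_per) for mu, sd in proposals])
    # log of the equal-weight mixture q(x) = (1/k) * sum_j q_j(x)
    logq = np.logaddexp.reduce(
        [norm_logpdf(xs, mu, sd) for mu, sd in proposals], axis=0
    ) - np.log(len(proposals))
    w = np.exp(target_logpdf(xs) - logq)
    return (w * h(xs)).sum() / w.sum()

# target N(0,1); estimate E[X^2] (true value 1) from two shifted proposals
est = multiple_is_mean(norm_logpdf, lambda x: x ** 2,
                       [(-2.0, 1.0), (2.0, 1.0)])
```

Weighting by the mixture rather than by each proposal separately keeps the weights bounded wherever at least one proposal covers the target, which is precisely why proposal placement matters.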

    Numerical Study of the Oscillatory Convergence to the Attractor at the Edge of Chaos

    This paper compares three different types of "onset of chaos" in the logistic and generalized logistic map: the Feigenbaum attractor at the end of the period-doubling bifurcations; the tangent bifurcation at the border of the period-three window; and the transition to chaos in the generalized logistic map with inflection 1/2 ($x_{n+1} = \mu x_n^{1/2}$), in which the main bifurcation cascade, as well as the bifurcations generated by the periodic windows in the chaotic region, collapse to a single point. The occupation number and the Tsallis entropy are studied. The different regimes of convergence to the attractor, starting from two kinds of far-from-equilibrium initial conditions, are distinguished by the presence or absence of log-log oscillations, by different power-law scalings, and by a gap in the saturation levels. We show that the escort distribution implicit in the Tsallis entropy may tune the log-log oscillations or the crossover times. (Comment: 10 pages, 5 figures)
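The occupation number and Tsallis entropy used above can be computed for a toy ensemble. As simplifying assumptions, the bin count and ensemble size are arbitrary, and the map is iterated at the fully chaotic standard logistic parameter ($\mu = 4$) rather than at an edge-of-chaos point.

```python
import numpy as np

def tsallis_entropy(p, q):
    """Tsallis q-entropy S_q = (1 - sum_i p_i^q) / (q - 1); the limit
    q -> 1 recovers the Shannon entropy."""
    p = p[p > 0]
    if abs(q - 1.0) < 1e-12:
        return -(p * np.log(p)).sum()
    return (1.0 - (p ** q).sum()) / (q - 1.0)

def occupation(x, n_bins=1000):
    """Histogram an ensemble on [0,1]; return bin probabilities and the
    occupation number W (number of non-empty bins)."""
    counts, _ = np.histogram(x, bins=n_bins, range=(0.0, 1.0))
    return counts / counts.sum(), int((counts > 0).sum())

# far-from-equilibrium initial condition: all points in one tiny cell
rng = np.random.default_rng(0)
x = 0.5 + 1e-6 * rng.random(10000)
W0 = occupation(x)[1]                  # initially a single occupied bin
for _ in range(50):
    x = 4.0 * x * (1.0 - x)            # chaotic logistic map x -> mu*x*(1-x)
p, W = occupation(x)                   # ensemble has spread over the interval
```

Tracking `W` and `tsallis_entropy(p, q)` against the iteration count is the kind of convergence-to-the-attractor diagnostic the paper studies.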

    Origins of the Combinatorial Basis of Entropy

    The combinatorial basis of entropy, given by Boltzmann, can be written $H = N^{-1} \ln \mathbb{W}$, where $H$ is the dimensionless entropy, $N$ is the number of entities and $\mathbb{W}$ is the number of ways in which a given realization of a system can occur (its statistical weight). This can be broadened to give generalized combinatorial (or probabilistic) definitions of entropy and cross-entropy: $H = \kappa (\phi(\mathbb{W}) + C)$ and $D = -\kappa (\phi(\mathbb{P}) + C)$, where $\mathbb{P}$ is the probability of a given realization, $\phi$ is a convenient transformation function, $\kappa$ is a scaling parameter and $C$ an arbitrary constant. If $\mathbb{W}$ or $\mathbb{P}$ satisfy the multinomial weight or distribution, then using $\phi(\cdot) = \ln(\cdot)$ and $\kappa = N^{-1}$, $H$ and $D$ asymptotically converge to the Shannon and Kullback-Leibler functions. In general, however, $\mathbb{W}$ or $\mathbb{P}$ need not be multinomial, nor need they approach an asymptotic limit. In such cases, the entropy or cross-entropy function can be \emph{defined} so that its extremization ("MaxEnt" or "MinXEnt"), subject to the constraints, gives the "most probable" ("MaxProb") realization of the system. This gives a probabilistic basis for MaxEnt and MinXEnt, independent of any information-theoretic justification. This work examines the origins of the governing distribution $\mathbb{P}$.... (truncated) (Comment: MaxEnt07 manuscript, version 4 revised)
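The asymptotic convergence of $H = N^{-1} \ln \mathbb{W}$ to the Shannon function for a multinomial weight can be checked numerically; the probabilities below are arbitrary examples.

```python
import math

def combinatorial_entropy(counts):
    """H = (1/N) * ln W with W = N! / prod(n_i!), the multinomial
    statistical weight, computed via log-gamma for numerical stability."""
    n = sum(counts)
    log_w = math.lgamma(n + 1) - sum(math.lgamma(c + 1) for c in counts)
    return log_w / n

def shannon(probs):
    """Shannon entropy -sum p_i ln p_i."""
    return -sum(p * math.log(p) for p in probs if p > 0)

# same composition p = (0.5, 0.3, 0.2) at growing N: the combinatorial
# entropy approaches the Shannon entropy (Stirling asymptotics)
p = [0.5, 0.3, 0.2]
errors = {
    n: abs(combinatorial_entropy([int(n * pi) for pi in p]) - shannon(p))
    for n in (100, 10000)
}
```

The gap shrinks roughly like $(\ln N)/N$, which is the finite-$N$ correction hidden by taking the asymptotic limit in the text.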

    Quantitative Comparison of Abundance Structures of Generalized Communities: From B-Cell Receptor Repertoires to Microbiomes

    The \emph{community}, the assemblage of organisms co-existing in a given space and time, has the potential to become one of the unifying concepts of biology, especially with the advent of high-throughput sequencing experiments that reveal genetic diversity exhaustively. In this spirit we show that a tool from community ecology, the Rank Abundance Distribution (RAD), can be turned by the new MaxRank normalization method into a generic, expressive descriptor for the quantitative comparison of communities in many areas of biology. To illustrate the versatility of the method, we analyze RADs from various \emph{generalized communities}, i.e.\ assemblages of genetically diverse cells or organisms, including human B cells, gut microbiomes under antibiotic treatment and of different ages and countries of origin, and other human and environmental microbial communities. We show that normalized RADs enable novel quantitative approaches that help to understand the structures and dynamics of complex generalized communities.
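The RAD construction, with a simplified stand-in for MaxRank normalization (truncating all RADs at a common maximum rank and renormalizing), can be sketched as follows. The count vectors are invented examples, and the paper's exact normalization procedure may differ in detail.

```python
import numpy as np

def rad(counts):
    """Rank Abundance Distribution: relative abundances sorted in
    descending order (rank 1 = most abundant type)."""
    c = np.sort(np.asarray(counts, dtype=float))[::-1]
    c = c[c > 0]
    return c / c.sum()

def maxrank_normalize(rads, r=None):
    """Simplified sketch of MaxRank normalization: truncate every RAD at a
    common maximum rank r and renormalize, so communities of different
    richness become directly comparable."""
    if r is None:
        r = min(len(a) for a in rads)   # largest rank shared by all RADs
    return [a[:r] / a[:r].sum() for a in rads]

# hypothetical communities of different richness (counts are made up)
b_cells = rad([500, 120, 60, 10, 5, 1, 1])
microbiome = rad([300, 280, 150, 90, 40])
norm = maxrank_normalize([b_cells, microbiome])
```

After normalization the two curves live on the same rank axis and sum to one, so their shapes (steep clonal dominance versus a flatter profile) can be compared quantitatively.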