
    An Invariance Principle for Maintaining the Operating Point of a Neuron

    Sensory neurons adapt to changes in the natural statistics of their environments through processes such as gain control and firing-threshold adjustment. It has been argued that neurons early in sensory pathways adapt according to information-theoretic criteria, perhaps maximising their coding efficiency or information rate. Here, we draw a distinction between how a neuron's preferred operating point is determined and how it is maintained through adaptation. We propose that a neuron's preferred operating point can be characterised by the probability density function (PDF) of its output spike rate, and that adaptation maintains an invariant output PDF, regardless of how this output PDF is initially set. Considering a sigmoidal transfer function for simplicity, we derive simple adaptation rules for a neuron with one sensory input that permit adaptation to the lower-order statistics of the input, independent of how the preferred operating point of the neuron is set. Thus, if the preferred operating point is, in fact, set according to information-theoretic criteria, then these rules nonetheless maintain the neuron at that point. Our approach generalises from the unimodal case to the multimodal case, for a neuron with inputs from distinct sensory channels, and we briefly consider this case too.
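    As a toy illustration of the kind of rule the abstract describes (not the paper's derived rules), the sketch below adapts the offset and gain of a sigmoidal transfer function so that the mean output rate and the degree of output saturation hold steady while the input statistics drift; the target rate, learning rates, and exact update forms are all illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        # Assumed sigmoidal transfer: input s -> output spike rate r.
        r_max = 100.0              # maximum firing rate (Hz)
        theta, sigma = 0.0, 1.0    # adaptive offset (threshold) and gain

        # Assumed targets defining the "preferred operating point".
        r_target = 50.0                      # target mean output rate
        eta_theta, eta_sigma = 1e-3, 1e-3    # learning rates (illustrative)

        def transfer(s):
            return r_max / (1.0 + np.exp(-(s - theta) / sigma))

        # Non-stationary input stream: shifted and rescaled relative to baseline.
        for t in range(50_000):
            s = 2.0 + 0.5 * rng.standard_normal()
            r = transfer(s)
            # Offset tracks the input mean: pushes the mean rate toward r_target.
            theta += eta_theta * (r - r_target) / r_max
            # Gain tracks the input spread: widens when outputs pile up at the
            # rails (saturation near 1), narrows when they cluster mid-range.
            saturation = abs(r - r_max / 2.0) / (r_max / 2.0)
            sigma = max(sigma + eta_sigma * (saturation - 0.5), 1e-3)

        print(f"adapted offset = {theta:.2f}, gain = {sigma:.2f}")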

    A Maximum Entropy Procedure to Solve Likelihood Equations

    In this article, we provide initial findings regarding the problem of solving likelihood equations by means of a maximum entropy (ME) approach. Unlike standard procedures that require equating the score function of the maximum likelihood problem to zero, we propose an alternative strategy in which the score is instead used as an external informative constraint on the maximization of the concave Shannon entropy function. The problem involves reparameterizing the score parameters as expected values of discrete probability distributions whose probabilities need to be estimated. This leads to a simpler situation in which parameters are searched in a smaller (hyper-)simplex space. We assessed our proposal by means of empirical case studies and a simulation study, the latter involving the most critical case of logistic regression under data separation. The results suggested that the maximum entropy reformulation of the score problem solves the likelihood equations. Moreover, when maximum likelihood estimation is difficult, as in the case of logistic regression under separation, the maximum entropy proposal achieved results numerically comparable to those obtained by Firth's bias-corrected approach. Overall, these first findings reveal that a maximum entropy solution can be considered an alternative technique for solving the likelihood equations.
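    A minimal one-parameter sketch of this idea, assuming a toy logistic regression: the slope is reparameterized as the expected value of a discrete distribution over a fixed grid, and Shannon's entropy is maximized with the score equation imposed as an equality constraint. The grid, the synthetic data, and the SLSQP solver are illustrative choices, not the authors' implementation.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(1)

        # Toy logistic-regression data with a single slope parameter beta.
        x = rng.standard_normal(200)
        beta_true = 1.5
        y = rng.random(200) < 1.0 / (1.0 + np.exp(-beta_true * x))

        # Fixed support: beta is reparameterized as E_p[z] over this grid.
        z = np.linspace(-5.0, 5.0, 41)

        def score(beta):
            p = 1.0 / (1.0 + np.exp(-beta * x))
            return np.sum((y - p) * x)       # d/d(beta) of the log-likelihood

        def neg_entropy(p):
            p = np.clip(p, 1e-12, None)
            return np.sum(p * np.log(p))     # minimizing -H(p) maximizes entropy

        constraints = [
            {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},  # p is a distribution
            {"type": "eq", "fun": lambda p: score(p @ z)},     # score constraint
        ]
        p0 = np.full(z.size, 1.0 / z.size)
        res = minimize(neg_entropy, p0, bounds=[(0.0, 1.0)] * z.size,
                       constraints=constraints, method="SLSQP")

        print(f"ME estimate of beta: {res.x @ z:.3f}")   # close to the MLE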

    Entropy balancing is doubly robust

    Covariate balance is a conventional key diagnostic for methods that estimate causal effects from observational studies. Recently, there has been emerging interest in directly incorporating covariate balance into the estimation. We study a recently proposed entropy maximization method called Entropy Balancing (EB), which exactly matches the covariate moments of the different experimental groups in its optimization problem. We show that EB is doubly robust with respect to linear outcome regression and logistic propensity score regression, and that it reaches the asymptotic semiparametric variance bound when both regressions are correctly specified. This is surprising because there is no attempt to model the outcome or the treatment assignment in the original proposal of EB. Our theoretical results and simulations suggest that EB is a very appealing alternative to conventional weighting estimators that estimate the propensity score by maximum likelihood.
    Comment: 23 pages, 6 figures, Journal of Causal Inference 201
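    A sketch of EB's primal problem on assumed synthetic data: maximize the entropy of the control-unit weights subject to exactly matching the treated group's covariate means. Practical implementations usually solve the low-dimensional dual instead; this direct form is just for illustration.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(2)

        # Synthetic observational data: two covariates, treated group shifted.
        x_treated = rng.normal(1.0, 1.0, size=(100, 2))
        x_control = rng.normal(0.0, 1.0, size=(150, 2))
        target = x_treated.mean(axis=0)      # covariate moments to match
        n = x_control.shape[0]

        def neg_entropy(w):
            w = np.clip(w, 1e-12, None)
            return np.sum(w * np.log(w))

        constraints = [
            {"type": "eq", "fun": lambda w: np.sum(w) - 1.0},           # weights sum to 1
            {"type": "eq", "fun": lambda w: x_control.T @ w - target},  # exact moment match
        ]
        w0 = np.full(n, 1.0 / n)
        res = minimize(neg_entropy, w0, bounds=[(0.0, 1.0)] * n,
                       constraints=constraints, method="SLSQP")

        print("reweighted control means:", x_control.T @ res.x)  # ~= treated means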

    Moment Closure - A Brief Review

    Moment closure methods appear in myriad scientific disciplines in the modelling of complex systems. The goal is to achieve a closed form of a large, usually even infinite, set of coupled differential (or difference) equations. Each equation describes the evolution of one "moment", a suitable coarse-grained quantity computable from the full state space. If the system is too large for analytical and/or numerical methods, then one aims to reduce it by finding a moment closure relation expressing "higher-order moments" in terms of "lower-order moments". In this brief review, we focus on highlighting how moment closure methods occur in different contexts. We also conjecture, via a geometric explanation, why it has been difficult to rigorously justify many moment closure approximations even though they work very well in practice.
    Comment: short survey paper (max 20 pages) for a broad audience in mathematics, physics, chemistry and quantitative biology
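    A minimal worked example of the simplest (mean-field) closure, on an assumed stochastic logistic model rather than any system from the review: the exact first-moment equation d<n>/dt = r<n> - (r/K)<n^2> is not closed, because it involves the second moment, and the closure substitutes <n^2> by <n>^2.

        from scipy.integrate import solve_ivp

        # Stochastic logistic growth: births at rate r*n, deaths at rate (r/K)*n^2.
        r, K = 1.0, 100.0

        def closed_mean(t, m):
            # Mean-field closure: the unclosed term <n^2> is replaced by <n>^2,
            # recovering the deterministic logistic equation for the mean.
            return r * m - (r / K) * m**2

        sol = solve_ivp(closed_mean, (0.0, 10.0), [5.0])
        print(f"closed-mean prediction at t = 10: {sol.y[0, -1]:.1f} (K = {K})")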