
    Parameters estimation for spatio-temporal maximum entropy distributions: application to neural spike trains

    We propose a numerical method to learn Maximum Entropy (MaxEnt) distributions with spatio-temporal constraints from experimental spike trains. This extends two earlier papers, [10] and [4], which proposed the estimation of parameters where only spatial constraints were taken into account. The extension we propose makes it possible to properly handle memory effects in spike statistics for large neural networks. Comment: 34 pages, 33 figures
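    The paper's own numerical method is not reproduced here. As a minimal sketch of the spatial-only case that [10] and [4] address, the following Python fragment fits a pairwise (Ising-like) MaxEnt model to binary spike data by exact enumeration of states and gradient ascent on the moment-matching conditions. Function and variable names are illustrative; exact enumeration is only feasible for small networks, whereas the paper targets large networks with additional temporal (memory) constraints.

```python
import itertools
import numpy as np

def fit_pairwise_maxent(spikes, n_iter=2000, lr=0.1):
    """Fit a pairwise (Ising-like) MaxEnt model to binary spike data.

    spikes: array of shape (T, N) with 0/1 entries (T time bins, N neurons).
    Returns fields h (N,) and couplings J (N, N) such that
    P(s) is proportional to exp(h.s + 0.5 * s.J.s), matching the empirical
    firing rates and pairwise moments. Exact enumeration of the 2**N states
    is used, so this only works for small N.
    """
    T, N = spikes.shape
    emp_mean = spikes.mean(axis=0)         # empirical firing rates <s_i>
    emp_corr = spikes.T @ spikes / T       # empirical pairwise moments <s_i s_j>

    states = np.array(list(itertools.product([0, 1], repeat=N)), dtype=float)
    h = np.zeros(N)
    J = np.zeros((N, N))

    for _ in range(n_iter):
        # model distribution over all 2**N spike patterns
        energies = states @ h + 0.5 * np.einsum('si,ij,sj->s', states, J, states)
        p = np.exp(energies - energies.max())
        p /= p.sum()
        # model moments under the current parameters
        mod_mean = p @ states
        mod_corr = (states * p[:, None]).T @ states
        # gradient ascent on the log-likelihood = moment matching
        h += lr * (emp_mean - mod_mean)
        J += lr * (emp_corr - mod_corr)
        np.fill_diagonal(J, 0.0)

    return h, J
```

    At the fixed point the model moments equal the empirical ones, which is exactly the MaxEnt condition for this choice of (purely spatial) constraints.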

    Differential Entropy on Statistical Spaces

    We show that the previously introduced concept of distance on statistical spaces leads to a straightforward definition of differential entropy on these spaces. These spaces are characterized by the fact that their points can only be localized within a certain volume and thus exhibit a feature of fuzziness. This implies that Riemann integrability of the relevant integrals is no longer guaranteed. Some discussion of the specialization of this formalism to quantum states concludes the paper. Comment: 4 pages, to appear in the proceedings of the joint meeting of the 2nd International Conference on Cybernetics and Information Technologies, Systems and Applications (CITSA 2005) and the 11th International Conference on Information Systems Analysis and Synthesis (ISAS 2005), to be held in Orlando, USA, on July 14-17, 2005
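    For reference (this is background, not part of the abstract), the ordinary differential entropy that this construction generalizes is the usual Boltzmann–Shannon functional:

```latex
% Ordinary differential entropy of a density p on a space X:
\[
  h(p) \;=\; -\int_{\mathcal{X}} p(x)\,\log p(x)\,\mathrm{d}x .
\]
% On the statistical spaces considered in the paper, points are only
% localized within a finite volume ("fuzziness"), so this Riemann integral
% is not guaranteed to exist and the definition must be adapted accordingly.
```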

    A simple derivation and classification of common probability distributions based on information symmetry and measurement scale

    Commonly observed patterns typically follow a few distinct families of probability distributions. Over one hundred years ago, Karl Pearson provided a systematic derivation and classification of the common continuous distributions. His approach was phenomenological: a differential equation that generated the common distributions, without any underlying conceptual basis for why they have particular forms or what explains their familial relations. Pearson's system and its descendants remain the most popular systematic classification of probability distributions. Here, we unify the disparate forms of common distributions into a single system based on two meaningful and justifiable propositions. First, distributions follow maximum entropy subject to constraints, where maximum entropy is equivalent to minimum information. Second, different problems associate magnitude with information in different ways, an association we describe in terms of the relation between information invariance and measurement scale. Our framework relates the different continuous probability distributions through the variations in measurement scale that change each family of maximum entropy distributions into a distinct family. Comment: 17 pages, 0 figures
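    To make the first proposition concrete (this worked example is standard material, not taken from the paper): maximizing entropy subject to a single moment constraint already pins down a familiar family, and changing which quantity is constrained, i.e. the measurement scale, changes the family.

```latex
% Maximize differential entropy on [0, \infty) subject to a fixed mean \mu:
\[
  \max_{p}\; -\int_0^\infty p(x)\log p(x)\,\mathrm{d}x
  \quad\text{s.t.}\quad
  \int_0^\infty p(x)\,\mathrm{d}x = 1, \qquad
  \int_0^\infty x\,p(x)\,\mathrm{d}x = \mu .
\]
% Stationarity of the Lagrangian forces p(x) \propto e^{-\lambda x}, i.e.
\[
  p(x) = \frac{1}{\mu}\, e^{-x/\mu},
\]
% the exponential distribution. Constraining \langle x^2 \rangle on all of
% \mathbb{R} gives the Gaussian instead, and constraining \langle \log x \rangle
% (a change of measurement scale) gives a power-law / Pareto-type form.
```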