
    Non-orthogonal eigenvectors, fluctuation-dissipation relations and entropy production

    The celebrated fluctuation-dissipation theorem (FDT), linking the response function to time-dependent correlations of observables measured in the reference unperturbed state, is one of the central results of equilibrium statistical mechanics. In this letter we discuss an extension of the standard FDT to the case when the multidimensional matrix representing the transition probabilities is strictly non-normal. This feature dramatically modifies the dynamics by incorporating the effect of eigenvector non-orthogonality via the associated overlap matrix of Chalker-Mehlig type. In particular, the rate of entropy production per unit time is strongly enhanced by that matrix. We suggest that this mechanism has an impact on studies of collective phenomena in neural matrix models, leading, via transient behavior, to phenomena such as synchronisation and the emergence of memory. We also expect that the described mechanism generating the entropy production is generic for a wide class of phenomena where the dynamics is driven by non-normal operators. For the case of driving by a large Ginibre matrix the entropy production rate is evaluated analytically, as it is for the Rajan-Abbott model of neural networks.
    Comment: 3 figures, 8 pages. Important references added; calculation of entropy production rates for the Rajan-Abbott model of neural networks and for the Ginibre ensemble completed; title changed.
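    The eigenvector non-orthogonality invoked here can be probed numerically. Below is a minimal sketch (not taken from the paper) that draws a complex Ginibre matrix and computes the diagonal Chalker-Mehlig overlaps O_nn = (L_n^† L_n)(R_n^† R_n) from biorthonormalised left and right eigenvectors; for a normal matrix every O_nn equals 1, whereas for a Ginibre matrix they are typically of order N. Matrix size, normalisation and seed are arbitrary choices.

```python
import numpy as np
from scipy.linalg import eig

# Hedged sketch: diagonal Chalker-Mehlig overlaps for a sample Ginibre matrix.
N = 200
rng = np.random.default_rng(0)
# Complex Ginibre matrix with E|X_ij|^2 = 1/N, so the spectrum fills the unit disk
X = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2 * N)

# Left eigenvectors L (columns) satisfy L_n^† X = lambda_n L_n^†,
# right eigenvectors R (columns) satisfy X R_n = lambda_n R_n.
lam, L, R = eig(X, left=True, right=True)

# Enforce biorthonormality L_m^† R_n = delta_mn by rescaling the left vectors.
c = np.einsum('in,in->n', L.conj(), R)          # c_n = L_n^† R_n
L = L / c.conj()

# Diagonal overlaps O_nn = (L_n^† L_n)(R_n^† R_n); O_nn >= 1, with equality
# only when the eigenvectors are orthogonal (normal matrix).
O_diag = (np.einsum('in,in->n', L.conj(), L).real
          * np.einsum('in,in->n', R.conj(), R).real)
print("min overlap:", O_diag.min(), " mean overlap:", O_diag.mean())
```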

    Dynamical Entropy Production in Spiking Neuron Networks in the Balanced State

    We demonstrate deterministic extensive chaos in the dynamics of large sparse networks of theta neurons in the balanced state. The analysis is based on numerically exact calculations of the full spectrum of Lyapunov exponents, the entropy production rate and the attractor dimension. Extensive chaos is found in inhibitory networks and becomes more intense when an excitatory population is included. We find a strikingly high rate of entropy production that would limit information representation in cortical spike patterns to the immediate stimulus response.
    Comment: 4 pages, 4 figures.
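    In this literature the entropy production rate is usually identified with the Kolmogorov-Sinai entropy, bounded via the Pesin identity by the sum of positive Lyapunov exponents, and the attractor dimension is estimated with the Kaplan-Yorke formula. The sketch below shows these two standard reductions of a Lyapunov spectrum; the toy spectrum is invented, not taken from the paper.

```python
import numpy as np

def kaplan_yorke_dimension(lyap):
    """Kaplan-Yorke (Lyapunov) dimension from a Lyapunov spectrum."""
    lyap = np.sort(np.asarray(lyap, dtype=float))[::-1]   # descending order
    cum = np.cumsum(lyap)
    if cum[0] < 0:                       # no expanding direction at all
        return 0.0
    k = np.max(np.where(cum >= 0)[0])    # largest k with non-negative partial sum
    if k == len(lyap) - 1:
        return float(len(lyap))
    return (k + 1) + cum[k] / abs(lyap[k + 1])

def ks_entropy_bound(lyap):
    """Pesin bound on the KS entropy rate: sum of positive exponents."""
    lyap = np.asarray(lyap, dtype=float)
    return float(np.sum(lyap[lyap > 0]))

# Toy spectrum (made-up numbers, not the paper's results)
lyap = [0.9, 0.3, 0.0, -0.4, -1.2]
print(kaplan_yorke_dimension(lyap), ks_entropy_bound(lyap))
```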

    Structured chaos shapes spike-response noise entropy in balanced neural networks

    Large networks of sparsely coupled excitatory and inhibitory cells occur throughout the brain. A striking feature of these networks is that they are chaotic. How does this chaos manifest in the neural code? Specifically, how variable are the spike patterns that such a network produces in response to an input signal? To answer this, we derive a bound for the entropy of multi-cell spike pattern distributions in large recurrent networks of spiking neurons responding to fluctuating inputs. The analysis is based on results from random dynamical systems theory and is complemented by detailed numerical simulations. We find that the spike pattern entropy is an order of magnitude lower than what would be extrapolated from single cells. This holds despite the fact that network coupling becomes vanishingly sparse as network size grows -- a phenomenon that depends on "extensive chaos," as previously discovered for balanced networks without stimulus drive. Moreover, we show how spike pattern entropy is controlled by temporal features of the inputs. Our findings provide insight into how neural networks may encode stimuli in the presence of inherently chaotic dynamics.
    Comment: 9 pages, 5 figures.
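    A toy version of the comparison made here -- joint spike-pattern entropy versus the number extrapolated from single cells -- can be written down directly with a plug-in estimator. The raster below is synthetic (a shared rate fluctuation correlates the cells) and is not meant to reproduce the paper's networks or its bound.

```python
import numpy as np
from collections import Counter

def plugin_entropy(samples):
    """Plug-in (maximum-likelihood) entropy in bits of a list of hashable symbols."""
    counts = np.array(list(Counter(samples).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
n_cells, n_bins = 8, 200_000
# Toy correlated raster: a shared fluctuating drive makes the cells co-vary.
drive = rng.random(n_bins)
spikes = (rng.random((n_cells, n_bins)) < 0.15 * drive).astype(np.uint8)

# Joint spike-word entropy per time bin vs. the sum of single-cell entropies
# (the "extrapolated from single cells" number).
words = [tuple(col) for col in spikes.T]
H_joint = plugin_entropy(words)
H_indep = sum(plugin_entropy(list(spikes[i])) for i in range(n_cells))
print(f"joint: {H_joint:.3f} bits/bin, single-cell extrapolation: {H_indep:.3f} bits/bin")
```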

    Max-Pooling Loss Training of Long Short-Term Memory Networks for Small-Footprint Keyword Spotting

    We propose a max-pooling based loss function for training Long Short-Term Memory (LSTM) networks for small-footprint keyword spotting (KWS), with low CPU, memory, and latency requirements. The max-pooling loss training can be further guided by initializing with a cross-entropy loss trained network. A posterior smoothing based evaluation approach is employed to measure keyword spotting performance. Our experimental results show that LSTM models trained using cross-entropy loss or max-pooling loss outperform a cross-entropy loss trained baseline feed-forward Deep Neural Network (DNN). In addition, a max-pooling loss trained LSTM with a randomly initialized network performs better than a cross-entropy loss trained LSTM. Finally, the max-pooling loss trained LSTM initialized with a cross-entropy pre-trained network shows the best performance, yielding a 67.6% relative reduction in the Area Under the Curve (AUC) measure compared to the baseline feed-forward DNN.
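    The max-pooling loss idea can be sketched as follows: the LSTM emits frame-level keyword posteriors, and for a keyword-containing utterance only the peak-posterior frame contributes to the cross-entropy, while negative utterances use frame-wise cross-entropy against the background class. The PyTorch sketch below is an illustration under those assumptions, not the authors' implementation; the class indices, the absence of keyword-region masking, and the toy model at the end are placeholders.

```python
import torch
import torch.nn as nn

class MaxPoolKWSLoss(nn.Module):
    """Hedged sketch of a max-pooling style loss for keyword spotting.

    Positive utterance: back-propagate cross-entropy only through the frame
    where the keyword posterior peaks. Negative utterance: frame-wise
    cross-entropy against the background class (index 0). Details follow
    common practice, not necessarily the cited paper.
    """

    def __init__(self):
        super().__init__()
        self.ce = nn.CrossEntropyLoss()

    def forward(self, frame_logits, is_keyword):
        # frame_logits: (T, num_classes) logits from an LSTM for one utterance
        if is_keyword:
            post = torch.softmax(frame_logits, dim=-1)[:, 1]   # keyword posterior
            t = int(torch.argmax(post))                        # peak-posterior frame
            target = torch.tensor([1], device=frame_logits.device)
            return self.ce(frame_logits[t:t + 1], target)
        target = torch.zeros(frame_logits.shape[0], dtype=torch.long,
                             device=frame_logits.device)
        return self.ce(frame_logits, target)

# Toy usage with a tiny, untrained LSTM posterior model (shapes only).
lstm = nn.LSTM(input_size=40, hidden_size=64, batch_first=False)
head = nn.Linear(64, 2)
feats = torch.randn(100, 1, 40)          # 100 frames of 40-dim features, batch of 1
h, _ = lstm(feats)
logits = head(h.squeeze(1))              # (100, 2)
loss = MaxPoolKWSLoss()(logits, is_keyword=True)
```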

    Unified framework for the entropy production and the stochastic interaction based on information geometry

    We show a relationship between the entropy production in stochastic thermodynamics and the stochastic interaction in integrated information theory. To clarify this relationship, we introduce an information-geometric interpretation of the entropy production for a total system and of the partial entropy productions for its subsystems. We show that the violation of the additivity of the entropy productions is related to the stochastic interaction. This framework provides a thermodynamic foundation for integrated information theory. We also show that our information-geometric formalism leads to a novel expression of the entropy production related to an optimization problem minimizing the Kullback-Leibler divergence. We analytically illustrate this interpretation using the spin model.
    Comment: 13 pages, 4 figures.
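    The non-additivity discussed here can be illustrated on a toy joint Markov chain: compute the total entropy production of a two-spin chain and compare it with the sum of the entropy productions of its coarse-grained single-spin marginals. The sketch below is such a toy calculation (transition rates are random placeholders), not the paper's information-geometric construction.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4                                    # joint states (a, b), a, b in {0, 1}; x = 2*a + b
W = rng.random((n, n))
W /= W.sum(axis=0, keepdims=True)        # column-stochastic: W[x_next, x]

# Stationary distribution: eigenvector of W with eigenvalue 1
vals, vecs = np.linalg.eig(W)
p = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
p /= p.sum()

def entropy_production(W, p):
    """sigma = sum_{x, x'} p(x) W(x'|x) ln[ p(x) W(x'|x) / (p(x') W(x|x')) ]."""
    J = W * p[np.newaxis, :]             # flux J[x', x] = W(x'|x) p(x)
    return float(np.sum(J * np.log(J / J.T)))

def marginal_chain(W, p, keep_bit):
    """Coarse-grain the joint chain onto one spin (keep_bit 0 -> a, 1 -> b)."""
    label = lambda x: (x >> 1) if keep_bit == 0 else (x & 1)
    Wm, pm = np.zeros((2, 2)), np.zeros(2)
    for x in range(4):
        pm[label(x)] += p[x]
    for x in range(4):
        for x2 in range(4):
            Wm[label(x2), label(x)] += W[x2, x] * p[x]
    Wm /= pm[np.newaxis, :]              # normalise columns to get W_m(a'|a)
    return Wm, pm

sigma_total = entropy_production(W, p)
sigma_parts = sum(entropy_production(*marginal_chain(W, p, k)) for k in (0, 1))
print(sigma_total, sigma_parts, sigma_total - sigma_parts)   # gap = non-additivity
```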