    Dynamical Synapses Enhance Neural Information Processing: Gracefulness, Accuracy and Mobility

    Experimental data have revealed that neuronal connection efficacy exhibits two forms of short-term plasticity: short-term depression (STD) and short-term facilitation (STF). Their time constants reside between those of fast neural signaling and rapid learning, and they may serve as substrates by which neural systems manipulate temporal information on the relevant time scales. The present study investigates the impact of STD and STF on the dynamics of continuous attractor neural networks (CANNs) and their potential roles in neural information processing. We find that STD endows the network with slow-decaying plateau behaviors: a network stimulated into an active state decays to the silent state very slowly, on the time scale of STD rather than that of neural signaling. This provides a mechanism for neural systems to hold sensory memory easily and shut off persistent activity gracefully. With STF, we find that the network can hold a memory trace of external inputs in the facilitated neuronal interactions, which stabilizes the network response to noisy inputs and improves the accuracy of population decoding. Furthermore, we find that STD increases the mobility of the network states; the increased mobility enhances the tracking performance of the network in response to time-varying stimuli, leading to anticipative neural responses. In general, we find that STD and STF tend to have opposite effects on network dynamics and complementary computational advantages, suggesting that the brain may weight them differentially depending on the computational purpose. Comment: 40 pages, 17 figures.
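
    A minimal sketch, not taken from the paper, of a one-dimensional CANN with Tsodyks-Markram-style short-term depression, illustrating the slow plateau decay described above; the Gaussian interaction kernel, divisive inhibition, and all parameter values are assumptions:

```python
import numpy as np

# Minimal 1D continuous attractor network (CANN) with short-term
# depression (STD). All parameter values are illustrative assumptions.
N = 128                            # neurons on a ring
x = np.linspace(-np.pi, np.pi, N, endpoint=False)
dx = 2 * np.pi / N
a = 0.5                            # width of the interaction kernel
tau, tau_d = 1.0, 50.0             # neural vs. STD time scales (tau_d >> tau)
beta, k = 0.1, 0.5                 # STD strength, divisive-inhibition strength

# Translation-invariant Gaussian coupling on the ring
d = np.angle(np.exp(1j * (x[:, None] - x[None, :])))   # wrapped distance
J = np.exp(-d**2 / (2 * a**2)) / (np.sqrt(2 * np.pi) * a)

u = np.zeros(N)                    # synaptic input to each neuron
p = np.ones(N)                     # STD variable: available synaptic resources
dt, T_stim = 0.05, 20.0

for step in range(int(500 / dt)):
    t = step * dt
    # Transient external bump at x = 0, switched off at t = T_stim
    I_ext = 5.0 * np.exp(-x**2 / (4 * a**2)) if t < T_stim else 0.0
    r = np.maximum(u, 0)**2 / (1 + k * dx * np.sum(np.maximum(u, 0)**2))
    u += dt / tau * (-u + dx * J @ (p * r) + I_ext)
    p += dt / tau_d * (1 - p) - dt * beta * p * r
# After the stimulus is removed, the bump decays on the slow STD time
# scale tau_d rather than the fast neural time scale tau.
```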

    Information Scrambling in Quantum Neural Networks

    The quantum neural network is one of the promising applications for near-term noisy intermediate-scale quantum computers. A quantum neural network distills the information from the input wave function into the output qubits. In this Letter, we show that this process can also be viewed from the opposite direction: the quantum information in the output qubits is scrambled into the input. This observation motivates us to use the tripartite information, a quantity recently developed to characterize information scrambling, to diagnose the training dynamics of quantum neural networks. We empirically find a strong correlation between the dynamical behavior of the tripartite information and the loss function during training, from which we identify two stages in the training of randomly initialized networks. In the early stage, the network performance improves rapidly and the tripartite information increases linearly with a universal slope, meaning that the neural network becomes less scrambled than a random unitary. In the later stage, the network performance improves slowly while the tripartite information decreases. We present evidence that the network constructs local correlations in the early stage and learns large-scale structure in the later stage. We believe this two-stage training dynamics is universal and applicable to a wide range of problems. Our work builds a bridge between two research subjects, quantum neural networks and information scrambling, opening up a new perspective on understanding quantum neural networks.
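
    As a rough illustration of the diagnostic used here, the sketch below estimates the tripartite information I3(A:C:D) of a unitary by evaluating entropies on the channel's dual state (the input fed with halves of maximally entangled pairs). The Haar-random unitary standing in for an untrained network and the one-qubit partitions are assumptions:

```python
import numpy as np

# Tripartite information I3(A:C:D) of a unitary, evaluated on the state
# dual to the channel. Input qubits = (A, B), output qubits = (C, D).
rng = np.random.default_rng(0)
n = 2                                  # total qubits per register

def haar_unitary(dim):
    """Haar-random unitary via QR decomposition with phase fixing."""
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

def entropy(psi, keep):
    """von Neumann entropy (bits) of the qubits in `keep` for pure state psi."""
    t = np.moveaxis(psi.reshape((2,) * (2 * n)), keep, range(len(keep)))
    m = t.reshape(2 ** len(keep), -1)
    lam = np.linalg.eigvalsh(m @ m.conj().T)
    lam = lam[lam > 1e-12]
    return float(-np.sum(lam * np.log2(lam)))

U = haar_unitary(2 ** n)
# Dual state: feed each input qubit with half of a Bell pair; qubit
# order in the flattened vector is (A, B, C, D).
psi = (U.T / np.sqrt(2 ** n)).reshape(-1)

A, C, D = [0], [2], [3]
I_AC  = entropy(psi, A) + entropy(psi, C) - entropy(psi, A + C)
I_AD  = entropy(psi, A) + entropy(psi, D) - entropy(psi, A + D)
I_ACD = entropy(psi, A) + entropy(psi, C + D) - entropy(psi, A + C + D)
I3 = I_AC + I_AD - I_ACD               # negative for scrambling unitaries
print(f"I3 = {I3:.3f} bits")
```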

    A Markovian event-based framework for stochastic spiking neural networks

    In spiking neural networks, information is conveyed by the spike times, which depend on the intrinsic dynamics of each neuron, the input it receives, and the connections between neurons. In this article we study the Markovian nature of the sequence of spike times in stochastic neural networks, and in particular the ability to deduce the next spike time from a spike train, and therefore to describe the network activity based on the spike times alone, without reference to the membrane-potential process. To study this question rigorously, we introduce and analyze an event-based description of networks of noisy integrate-and-fire neurons, i.e., one based on the computation of the spike times. We show that the firing times of the neurons in the network constitute a Markov chain whose transition probability is related to the probability distribution of the interspike intervals of the neurons in the network. In the cases where the Markovian model can be developed, the transition probability is derived explicitly for such classical neural network models as the linear integrate-and-fire neuron with excitatory and inhibitory interactions, for different types of synapses, possibly featuring noisy synaptic integration, transmission delays, and absolute and relative refractory periods. This covers most of the cases that have been investigated in the event-based description of deterministic spiking neural networks.
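
    A minimal sketch of the event-based idea for the simplest case, a single noisy (perfect) integrate-and-fire neuron: between spikes the membrane potential is a drifted Brownian motion, so the next spike time can be sampled directly from the first-passage (inverse Gaussian) distribution instead of integrating the potential. Parameter values are assumptions; the paper's construction extends this to interacting neurons:

```python
import numpy as np

# Event-based description of one noisy perfect integrate-and-fire neuron:
# dV = mu dt + sigma dW, spike when V hits theta, then reset to 0.
# The first-passage time of drifted Brownian motion to a threshold is
# inverse-Gaussian distributed, so interspike intervals can be sampled
# directly, with no membrane-potential simulation.
rng = np.random.default_rng(1)
mu, sigma, theta = 1.0, 0.5, 1.0      # drift, noise amplitude, threshold

def next_spike_interval():
    # Inverse Gaussian with mean theta/mu and shape theta^2/sigma^2
    return rng.wald(theta / mu, theta**2 / sigma**2)

# Build a spike train purely from the event-based description:
spike_times = np.cumsum([next_spike_interval() for _ in range(1000)])
isi = np.diff(spike_times)
print(f"mean ISI = {isi.mean():.3f} (theory: {theta / mu:.3f})")
```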

    Simulation of a neural network-driven fuzzy controller

    A software simulation package was developed to facilitate the analysis of a fuzzy logic tracking system constructed by first training a neural network. The adaptive vector quantization neural network used a competitive learning algorithm to classify control data from a controller in a noisy environment. The neural network memory generated rules for a fuzzy controller by mapping the state of the network into a predetermined fuzzy database. The software is intended to be expanded to allow further analysis of neural dynamics and to compare the performance of the resulting fuzzy controller with that of conventional controllers.
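
    A toy sketch of the pipeline described above: winner-take-all competitive learning quantizes noisy controller data, and each learned codebook vector is then mapped into a predetermined fuzzy database. The data, labels, and learning rate are assumptions:

```python
import numpy as np

# Competitive-learning vector quantization of noisy controller data,
# followed by a mapping of codebook vectors to fuzzy rules.
rng = np.random.default_rng(2)

# Noisy controller samples: state = (error, error_rate), one cluster
# per operating regime (illustrative data).
centers = np.array([[-1.0, 0.0], [0.0, 0.0], [1.0, 0.0]])
data = np.vstack([c + 0.15 * rng.normal(size=(200, 2)) for c in centers])
rng.shuffle(data)

# Competitive learning: only the winning codebook vector moves toward x
codebook = rng.normal(size=(3, 2))
eta = 0.05
for x in data:
    w = np.argmin(np.linalg.norm(codebook - x, axis=1))  # winner
    codebook[w] += eta * (x - codebook[w])

# Map each learned codebook vector into a predetermined fuzzy database:
labels = ["NEGATIVE", "ZERO", "POSITIVE"]
for v in codebook[np.argsort(codebook[:, 0])]:
    lbl = labels[int(np.argmin(np.abs(centers[:, 0] - v[0])))]
    print(f"IF error is {lbl:8s} THEN rule centered at {v.round(2)}")
```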

    A recurrent neural network with ever changing synapses

    A recurrent neural network with noisy input is studied analytically on the basis of a discrete-time master equation, which is derived from a biologically realizable learning rule for the connection weights. A numerical study finds that the fixed points of the network dynamics are time-dependent, implying that the brain's representation of a fixed piece of information (e.g., a word to be recognized) is not fixed in time. Comment: 17 pages, LaTeX, 4 figures.
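
    A minimal sketch of the reported phenomenon, not the paper's master-equation formulation: a small recurrent network whose weights keep adapting under a Hebbian-style rule driven by noisy presentations of a stored pattern, so the retrieved fixed point drifts in time. The learning rule and parameters are assumptions:

```python
import numpy as np

# "Ever changing synapses": ongoing Hebbian-style weight updates under
# noisy input make the attractor retrieved for the same cue drift.
rng = np.random.default_rng(3)
N = 100
xi = rng.choice([-1.0, 1.0], size=N)          # pattern to be "recognized"
W = np.outer(xi, xi) / N
np.fill_diagonal(W, 0.0)

def retrieve(W, cue, steps=20):
    """Run the deterministic retrieval dynamics from a cue."""
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(W @ s + 1e-12)
    return s

eps, eta = 0.1, 0.02                          # input noise, learning rate
fixed_points = []
for t in range(200):
    # Noisy re-presentation: flip a fraction eps of the pattern's bits
    noisy = xi * np.where(rng.random(N) < eps, -1.0, 1.0)
    W += eta * (np.outer(noisy, noisy) / N - W)   # ongoing weight update
    np.fill_diagonal(W, 0.0)
    fixed_points.append(retrieve(W, noisy))

# The retrieved state is not constant in time:
overlaps = [fp @ xi / N for fp in fixed_points]
print(f"overlap with stored pattern drifts in "
      f"[{min(overlaps):.2f}, {max(overlaps):.2f}]")
```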