
    Linear stability analysis of retrieval state in associative memory neural networks of spiking neurons

    We study associative memory neural networks of Hodgkin-Huxley-type spiking neurons in which multiple periodic spatio-temporal patterns of spike timing are memorized as limit-cycle-type attractors. In encoding the spatio-temporal patterns, we assume spike-timing-dependent synaptic plasticity with an asymmetric time window. Analysis of the periodic solution of the retrieval state reveals that if the area of the negative part of the time window equals that of the positive part, the crosstalk among encoded patterns vanishes. A phase transition due to the loss of stability of the periodic solution is observed when we assume a fast alpha function for the direct interaction among neurons. To evaluate the critical point of this phase transition, we employ Floquet theory, in which the stability problem for infinitely many spiking neurons interacting via the alpha function is reduced to an eigenvalue problem for a finite-size matrix. Numerical integration of the single-body dynamics yields the explicit entries of this matrix, which enables us to determine the critical point of the phase transition with a high degree of precision.
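    As a toy illustration of the balance condition this abstract mentions, the following sketch defines a hypothetical asymmetric exponential STDP window and checks that its negative lobe has the same area as its positive lobe; all parameter values are assumptions for illustration, not values from the paper.

```python
import numpy as np

# Hypothetical asymmetric STDP window: exponential potentiation when the
# presynaptic spike precedes the postsynaptic spike (dt > 0), exponential
# depression otherwise. Parameter values are illustrative assumptions.
A_plus, A_minus = 1.0, 0.5
tau_plus, tau_minus = 20.0, 40.0  # ms

def stdp_window(dt):
    """Synaptic change as a function of spike-time difference dt = t_post - t_pre."""
    return np.where(dt >= 0.0,
                    A_plus * np.exp(-dt / tau_plus),
                    -A_minus * np.exp(dt / tau_minus))

# Balance condition from the abstract: crosstalk among encoded patterns
# vanishes when the area of the negative lobe equals that of the positive
# lobe. For exponential lobes the areas are A * tau in closed form.
pos_area = A_plus * tau_plus    # integral of the window over [0, inf)
neg_area = A_minus * tau_minus  # integral of its magnitude over (-inf, 0]
print(pos_area, neg_area)       # 20.0 20.0: the balance condition holds
```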

    Unital Quantum Channels - Convex Structure and Revivals of Birkhoff's Theorem

    The set of doubly stochastic quantum channels and its subset of mixtures of unitaries are investigated. We provide a detailed analysis of their structure together with computable criteria for separating the two sets. When applied to O(d)-covariant channels, this leads to a complete characterization and reveals a remarkable feature: instances of channels which are not in the convex hull of unitaries can return to it when one either takes finitely many copies of them or supplements them with a completely depolarizing channel. In these scenarios, a channel whose noise initially resists any environment-assisted attempt at correction can become perfectly correctable.
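    The defining property of these channels can be stated concretely: a channel with Kraus operators {K_i} is doubly stochastic (trace-preserving and unital) exactly when both sum_i K_i^† K_i = I and sum_i K_i K_i^† = I hold. A minimal sketch checking both conditions on an illustrative mixture of two unitaries:

```python
import numpy as np

# Verify that a channel given by Kraus operators {K_i} is doubly stochastic:
# sum_i K_i^† K_i = I (trace preservation) and sum_i K_i K_i^† = I (unitality).
# The example channel, a uniform mixture of two unitaries, is illustrative.
d = 2
I = np.eye(d, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)  # Pauli X
kraus = [np.sqrt(0.5) * I, np.sqrt(0.5) * X]   # mixture of unitaries

def is_doubly_stochastic(kraus_ops, tol=1e-12):
    dim = kraus_ops[0].shape[0]
    tp = sum(K.conj().T @ K for K in kraus_ops)   # trace-preserving condition
    un = sum(K @ K.conj().T for K in kraus_ops)   # unitality condition
    return (np.allclose(tp, np.eye(dim), atol=tol)
            and np.allclose(un, np.eye(dim), atol=tol))

print(is_doubly_stochastic(kraus))  # True: unital by construction
```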

    Spatial representation of temporal information through spike timing dependent plasticity

    We suggest a mechanism based on spike-timing-dependent plasticity (STDP) of synapses to store, retrieve and predict temporal sequences. The mechanism is demonstrated in a model system of simplified integrate-and-fire neurons densely connected by STDP synapses. All synapses are modified according to the so-called normal STDP rule observed in various real biological synapses. After conditioning through repeated input of a limited number of temporal sequences, the system is able to complete a temporal sequence upon receiving only a fraction of it as input. This is an example of effective unsupervised learning in a biologically realistic system. We investigate the dependence of learning success on entrainment time, system size and the presence of noise. Possible applications include learning of motor sequences, recognition and prediction of temporal sensory information in the visual as well as the auditory system, and late processing in the olfactory system of insects.
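    A minimal sketch of the pair-based form of the "normal" STDP rule referred to here, applied to two hypothetical spike trains; the amplitudes and time constant are illustrative assumptions, not the paper's values.

```python
import numpy as np

# Pairwise STDP: a synapse is potentiated when the presynaptic spike precedes
# the postsynaptic one and depressed otherwise. Parameter values are assumed.
A_plus, A_minus = 0.01, 0.012
tau = 20.0  # ms; the same decay constant for both lobes in this sketch

def stdp_update(w, pre_spikes, post_spikes, w_max=1.0):
    """Apply all-to-all pairwise STDP to weight w given two spike-time lists."""
    for t_pre in pre_spikes:
        for t_post in post_spikes:
            dt = t_post - t_pre
            if dt > 0:      # pre before post: potentiation
                w += A_plus * np.exp(-dt / tau)
            elif dt < 0:    # post before pre: depression
                w -= A_minus * np.exp(dt / tau)
    return float(np.clip(w, 0.0, w_max))

w = stdp_update(0.5, pre_spikes=[10.0, 35.0], post_spikes=[12.0, 40.0])
print(w)  # the weight grows: pre spikes consistently lead post spikes
```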

    Self-organization in the olfactory system: one shot odor recognition in insects

    We show, in a model of spiking neurons, that synaptic plasticity in the mushroom bodies, in combination with the general fan-in/fan-out properties of the early processing layers of the olfactory system, might be sufficient to account for its efficient recognition of odors. For a large variety of initial conditions the model system consistently finds a working solution without any fine-tuning, and is therefore inherently robust. We demonstrate that gain control through the known feedforward inhibition of lateral horn interneurons increases the capacity of the system but is not essential for its general function. We also predict an upper limit for the number of odor classes Drosophila can discriminate, based on the number and connectivity of its olfactory neurons.

    Associative memory storing an extensive number of patterns based on a network of oscillators with distributed natural frequencies in the presence of external white noise

    We study associative memory based on temporal coding, in which successful retrieval is realized as entrainment in a network of simple phase oscillators with distributed natural frequencies under the influence of white noise. The memory patterns are assumed to be given by uniformly distributed random numbers on $[0, 2\pi)$, so that the patterns encode the phase differences of the oscillators. To derive the macroscopic order-parameter equations for the network with an extensive number of stored patterns, we introduce an effective transfer function by assuming a fixed-point equation of the form of the TAP equation, which describes the time-averaged output as a function of the effective time-averaged local field. Properties of the networks associated with synchronization phenomena are studied for a discrete symmetric natural frequency distribution with three frequency components, based on the order-parameter equations, and are shown to be in good agreement with the results of numerical simulations. Two types of retrieval states, differing in their degree of synchronization, are found to occur as the width of the natural frequency distribution is changed.
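    A rough sketch of this model class, assuming Kuramoto-style phase dynamics with Hebbian couplings built from random phase patterns, a three-component frequency distribution, and Euler-Maruyama integration of the white noise; all parameters are illustrative, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Phase oscillators with distributed natural frequencies and Hebbian phase
# couplings storing random patterns on [0, 2*pi), driven by white noise.
N, P = 200, 3                      # oscillators, stored patterns
D, dt, steps = 0.05, 0.01, 5000    # noise intensity, time step, iterations

xi = rng.uniform(0.0, 2 * np.pi, size=(P, N))        # random phase patterns
J = np.zeros((N, N))
for mu in range(P):                                   # Hebbian phase coupling
    J += np.cos(xi[mu][:, None] - xi[mu][None, :])
J /= N

omega = rng.choice([-0.5, 0.0, 0.5], size=N)          # three frequency components
phi = xi[0] + 0.3 * rng.standard_normal(N)            # noisy cue of pattern 0

for _ in range(steps):
    drift = omega + (J * np.sin(phi[None, :] - phi[:, None])).sum(axis=1)
    phi += dt * drift + np.sqrt(2 * D * dt) * rng.standard_normal(N)

# Retrieval overlap with pattern 0 (order parameter m near 1 means success)
m = np.abs(np.mean(np.exp(1j * (phi - xi[0]))))
print(f"overlap m = {m:.2f}")
```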

    An associative memory of Hodgkin-Huxley neuron networks with Willshaw-type synaptic couplings

    We discuss an associative memory in neural networks consisting of N (= 100) spiking Hodgkin-Huxley (HH) neurons with time-delayed couplings, which memorize P patterns in their synaptic weights. In addition to excitatory synapses, whose strengths are modified according to the Willshaw-type learning rule with a 0/1 code for quiescent/active states, the network includes uniform inhibitory synapses, which are introduced to reduce cross-talk noise. Our simulations of the HH neuron network in the noise-free state yield fairly good performance, with a storage capacity of $\alpha_c = P_{\rm max}/N \sim 0.4 - 2.4$ for the low neuron activity $f \sim 0.04 - 0.10$. This storage capacity of our temporal-code network is comparable to that of the rate-code model with Willshaw-type synapses. Our HH neuron network is shown not to be vulnerable to the distribution of time delays in the couplings. The variability of the interspike interval (ISI) of output spike trains during the retrieval of stored patterns is also discussed.
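    A rate-based sketch of the storage scheme described here, assuming clipped-Hebbian (Willshaw) weights over sparse 0/1 patterns and a uniform inhibitory term; the paper's spiking HH dynamics are replaced by a single threshold step, and the inhibition strength g is an assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

# Willshaw-type storage: sparse 0/1 patterns with activity f, clipped-Hebbian
# excitatory weights, and a uniform inhibitory term against cross-talk noise.
N, P, f, g = 100, 20, 0.1, 0.7

patterns = (rng.random((P, N)) < f).astype(float)    # sparse 0/1 patterns

# Willshaw rule: weight 1 iff pre and post are co-active in some pattern.
W = (patterns.T @ patterns > 0).astype(float)
np.fill_diagonal(W, 0.0)

cue = patterns[0].copy()
cue[np.flatnonzero(cue)[:2]] = 0.0                   # corrupt the cue slightly

h = W @ cue - g * cue.sum()                          # field with uniform inhibition
retrieved = (h > 0).astype(float)

overlap = retrieved @ patterns[0] / max(patterns[0].sum(), retrieved.sum())
print(f"overlap with stored pattern: {overlap:.2f}")  # close to 1 for typical draws
```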

    Efficient Analysis of High Dimensional Data in Tensor Formats

    In this article we introduce new methods for the analysis of high-dimensional data in tensor formats, where the underlying data come from a stochastic elliptic boundary value problem. After discretisation of the deterministic operator as well as of the random fields via the Karhunen-Loève expansion (KLE) and polynomial chaos expansion (PCE), the resulting high-dimensional operator can be approximated by sums of elementary tensors. This tensor representation can be used effectively for computing quantities of interest, such as the maximum norm, level sets and the cumulative distribution function. The basic concept of data analysis in high dimensions is discussed for tensors represented in the canonical format; however, the approach can easily be used in other tensor formats. As an intermediate step we describe efficient iterative algorithms for computing the characteristic and sign functions as well as the pointwise inverse in the canonical tensor format. Since the representation rank grows during most algebraic operations and iteration steps, we use low-rank approximation and inexact recursive iteration schemes.
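    A minimal sketch of the canonical format the article builds on: a d-dimensional tensor stored as a sum of rank-one terms, with entry evaluation that never forms the full array, and an addition routine that exhibits the rank growth motivating the low-rank re-approximation step. Shapes and values are illustrative.

```python
import numpy as np

# Canonical (CP) format: a d-dimensional tensor is stored as a sum of r
# elementary (rank-one) tensors, keeping r * d * n numbers instead of n**d.
d, n, r = 4, 10, 3
rng = np.random.default_rng(0)

# factors[k] has shape (r, n): row j holds the k-th factor of the j-th term.
factors = [rng.standard_normal((r, n)) for _ in range(d)]

def entry(idx):
    """Evaluate a single entry A[i1, ..., id] without forming the full tensor."""
    terms = np.ones(r)
    for k, i in enumerate(idx):
        terms *= factors[k][:, i]
    return terms.sum()

def add(f_a, f_b):
    """Sum of two canonical tensors: the ranks add, illustrating the rank
    growth that forces re-approximation after most algebraic operations."""
    return [np.vstack([a, b]) for a, b in zip(f_a, f_b)]

print(entry((1, 2, 3, 4)))
print(add(factors, factors)[0].shape)  # (6, 10): rank doubled from 3 to 6
```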

    Dynamical principles in neuroscience

    Dynamical modeling of neural systems and brain functions has a history of success over the last half century. This includes, for example, the explanation and prediction of some features of neural rhythmic behaviors. Many interesting dynamical models of learning and memory based on physiological experiments have been suggested over the last two decades, and dynamical models even of consciousness now exist. Usually these models and results are based on traditional approaches and paradigms of nonlinear dynamics, including dynamical chaos. Neural systems are, however, an unusual subject for nonlinear dynamics for several reasons: (i) Even the simplest neural network, with only a few neurons and synaptic connections, has an enormous number of variables and control parameters. These make neural systems adaptive and flexible, and are critical to their biological function. (ii) In contrast to traditional physical systems described by well-known basic principles, the first principles governing the dynamics of neural systems are unknown. (iii) Many different neural systems exhibit similar dynamics despite having different architectures and different levels of complexity. (iv) The network architecture and connection strengths are usually not known in detail, and therefore the dynamical analysis must, in some sense, be probabilistic. (v) Since nervous systems are able to organize behavior based on sensory inputs, the dynamical modeling of these systems has to explain the transformation of temporal information into combinatorial or combinatorial-temporal codes, and vice versa, for memory and recognition. In this review these problems are discussed in the context of two stimulating questions: What can neuroscience learn from nonlinear dynamics, and what can nonlinear dynamics learn from neuroscience?

    Cell Microscopic Segmentation with Spiking Neuron Networks

    Spiking Neuron Networks (SNNs) go beyond the computational power of neural networks made of threshold or sigmoidal units. Indeed, SNNs add a new dimension, the temporal axis, to the representation capacity and the processing abilities of neural networks. In this paper, we present how SNNs can be applied effectively to cell microscopic image segmentation. The results obtained confirm the validity of the approach. The strategy is performed on cytological color images, and quantitative measures are used to evaluate the resulting segmentations.

    Emergence of Connectivity Motifs in Networks of Model Neurons with Short- and Long-term Plastic Synapses

    Recent evidence from rodent cerebral cortex and olfactory bulb suggests that the short-term dynamics of excitatory synaptic transmission is correlated with stereotypical connectivity motifs. It was observed that neurons with short-term facilitating synapses form predominantly reciprocal pairwise connections, while neurons with short-term depressing synapses form unidirectional pairwise connections. The cause of these structural differences in synaptic microcircuits is unknown. We propose that these connectivity motifs emerge from the interactions between short-term synaptic dynamics (SD) and long-term spike-timing-dependent plasticity (STDP). While the impact of STDP on SD has been shown in vitro, the mutual interactions between STDP and SD in large networks are still the subject of intense research. We formulate a computational model combining SD and STDP, which faithfully captures short- and long-term dependence on both spike times and frequency. As a proof of concept, we simulate recurrent networks of spiking neurons with random initial connection efficacies in which synapses are either all short-term facilitating or all depressing. For identical background inputs, and as a direct consequence of internally generated activity, we find that networks with depressing synapses evolve unidirectional connectivity motifs, while networks with facilitating synapses evolve reciprocal connectivity motifs. This also holds for heterogeneous networks including both facilitating and depressing synapses. Our study highlights the conditions under which SD-STDP interactions might underlie the correlation between facilitation and reciprocal connectivity motifs, as well as between depression and unidirectional motifs. We further suggest experiments to validate the proposed mechanism.
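    A compact sketch of the two ingredients such a model combines, assuming a Tsodyks-Markram-style short-term synapse and a pair-based STDP kernel; the parameter values and update ordering are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Short-term dynamics (SD): resources x deplete and recover, release
# probability u facilitates and decays; parameters set a facilitating regime.
U, tau_rec, tau_fac = 0.2, 200.0, 600.0
A_plus, A_minus, tau_stdp = 0.005, 0.006, 20.0

def sd_response(spike_times):
    """PSP amplitudes for a presynaptic spike train under SD."""
    x, u, t_last, amps = 1.0, U, None, []
    for t in spike_times:
        if t_last is not None:
            dt = t - t_last
            x = 1.0 - (1.0 - x) * np.exp(-dt / tau_rec)   # resources recover
            u = U + (u - U) * np.exp(-dt / tau_fac)       # facilitation decays
        amps.append(u * x)
        x -= u * x                                        # resources consumed
        u += U * (1.0 - u)                                # facilitation builds
        t_last = t
    return amps

def stdp_dw(t_pre, t_post):
    """Pair-based STDP weight change for one pre/post spike pair."""
    dt = t_post - t_pre
    return (A_plus * np.exp(-dt / tau_stdp) if dt > 0
            else -A_minus * np.exp(dt / tau_stdp))

print(sd_response([0.0, 20.0, 40.0, 60.0]))  # amplitudes grow: facilitation
print(stdp_dw(10.0, 15.0))                   # positive: potentiation
```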