Linear stability analysis of retrieval state in associative memory neural networks of spiking neurons
We study associative memory neural networks of Hodgkin-Huxley-type spiking
neurons in which multiple periodic spatio-temporal patterns of spike timing
are memorized as limit-cycle attractors. To encode the spatio-temporal
patterns, we assume spike-timing-dependent synaptic plasticity with an
asymmetric time window. Analysis of the periodic solution of the retrieval
state reveals that if the area of the negative part of the time window equals
that of the positive part, crosstalk among the encoded patterns vanishes. A
phase transition due to loss of stability of the periodic solution is
observed when we assume a fast alpha function for the direct interaction
among neurons. To evaluate the critical point of this phase transition, we
employ Floquet theory, by which the stability problem for an infinite number
of spiking neurons interacting through the alpha function is reduced to an
eigenvalue problem for a finite-size matrix. Numerical integration of the
single-body dynamics yields the explicit matrix elements, which enables us to
determine the critical point of the phase transition with high precision.
Comment: Accepted for publication in Phys. Rev.
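The balance condition described above can be sketched numerically. The exponential window shape and the parameter names (a_plus, tau_plus, etc.) are illustrative assumptions, not the paper's exact kernel:

```python
import numpy as np

# Hypothetical asymmetric STDP time window: exponential potentiation for
# post-after-pre (dt >= 0), exponential depression for pre-after-post.
def stdp_window(dt, a_plus=1.0, a_minus=None, tau_plus=20.0, tau_minus=40.0):
    """Weight change for spike lag dt = t_post - t_pre (in ms)."""
    if a_minus is None:
        # Balance condition: a_plus * tau_plus == a_minus * tau_minus,
        # i.e. the negative lobe's area equals the positive lobe's --
        # the regime in which crosstalk among stored patterns vanishes.
        a_minus = a_plus * tau_plus / tau_minus
    dt = np.asarray(dt, dtype=float)
    return np.where(dt >= 0,
                    a_plus * np.exp(-np.abs(dt) / tau_plus),
                    -a_minus * np.exp(-np.abs(dt) / tau_minus))

# Analytic lobe areas under the balance condition
pos_area = 1.0 * 20.0            # a_plus * tau_plus
neg_area = (20.0 / 40.0) * 40.0  # a_minus * tau_minus

# Numerical check: the balanced window integrates to (approximately) zero
dt = np.linspace(-400.0, 400.0, 160001)
total_area = stdp_window(dt).sum() * (dt[1] - dt[0])
```

With the default parameters the depression amplitude comes out half the potentiation amplitude but acts over twice the time constant, so the two lobes cancel exactly.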
Neural Avalanches at the Critical Point between Replay and Non-Replay of Spatiotemporal Patterns
We model spontaneous cortical activity with a network of coupled spiking
units, in which multiple spatio-temporal patterns are stored as dynamical
attractors. We introduce an order parameter, which measures the overlap
(similarity) between the activity of the network and the stored patterns. We
find that, depending on the excitability of the network, different working
regimes are possible. For high excitability, the dynamical attractors are
stable, and a collective activity that replays one of the stored patterns
emerges spontaneously, while for low excitability, no replay is induced.
Between these two regimes, there is a critical region in which the dynamical
attractors are unstable, and intermittent short replays are induced by noise.
At the critical spiking threshold, the order parameter goes from zero to one,
and its fluctuations are maximized, as expected for a phase transition (and as
observed in recent experimental results in the brain). Notably, in this
critical region, the avalanche size and duration distributions follow power
laws. The critical exponents are consistent with a scaling relationship
recently observed in neural avalanche measurements. In conclusion, our simple
model suggests that avalanche power laws in cortical spontaneous activity may
be the effect of a network at the critical point between the replay and
non-replay of spatio-temporal patterns.
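The overlap order parameter can be illustrated with a minimal sketch for binary unit states; the +/-1 coding and the pattern size here are illustrative assumptions, not the paper's exact units:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000
pattern = rng.choice([-1, 1], size=N)   # one stored activity pattern

def overlap(activity, stored):
    """Order parameter m: 1 for perfect replay of the stored pattern,
    near 0 when the activity is uncorrelated with it."""
    return float(activity @ stored) / len(stored)

m_replay = overlap(pattern, pattern)                      # exact replay
m_noise = overlap(rng.choice([-1, 1], size=N), pattern)   # ~ O(1/sqrt(N))
```

In the critical regime described above, this quantity would intermittently jump between the two extremes, which is what maximizes its fluctuations.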
Capacity, Fidelity, and Noise Tolerance of Associative Spatial-Temporal Memories Based on Memristive Neuromorphic Network
We have calculated the key characteristics of associative
(content-addressable) spatial-temporal memories based on neuromorphic networks
with restricted connectivity - "CrossNets". Such networks may be naturally
implemented in nanoelectronic hardware using hybrid CMOS/memristor circuits,
which may feature extremely high energy efficiency, approaching that of
biological cortical circuits, at much higher operation speed. Our numerical
simulations, in some cases confirmed by analytical calculations, have shown
that the characteristics depend substantially on the method of information
recording into the memory. Of the four methods we have explored, two look
especially promising: one based on quadratic programming, and the other a
specific discrete version of gradient descent. The latter method provides a
slightly lower memory capacity (at the same fidelity) than the former, but it
allows local recording, which may be more readily implemented in
nanoelectronic hardware. Most importantly, in synchronous retrieval both
methods provide a capacity higher than that of the well-known ternary
content-addressable memories with the same number of nonvolatile memory cells
(e.g., memristors), though the input noise immunity of the CrossNet memories
is somewhat lower.
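The local character of a discrete recording rule can be sketched as follows. This is a generic sign-quantized, perceptron-style update assumed for illustration; it is not the paper's exact CrossNet procedure:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 64, 8                              # cells per layer, stored pattern pairs
xi = rng.choice([-1, 1], size=(M, N))     # input patterns
eta = rng.choice([-1, 1], size=(M, N))    # desired associated outputs

W = np.zeros((N, N))
for _ in range(200):                      # training epochs
    for mu in range(M):
        out = np.sign(W @ xi[mu])
        err = eta[mu] - out
        # Local, discrete update: each weight moves by +/-1 based only on
        # its own pre/post signals -- the property that maps naturally
        # onto crossbar (memristor) hardware.
        W += np.sign(np.outer(err, xi[mu]))

bit_accuracy = np.mean([(np.sign(W @ xi[mu]) == eta[mu]).mean()
                        for mu in range(M)])
```

At this low load (M << N) the rule converges to essentially exact recall of all stored associations.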
Storage of phase-coded patterns via STDP in fully-connected and sparse network: a study of the network capacity
We study the storage and retrieval of phase-coded patterns as stable
dynamical attractors in recurrent neural networks, for both an analog and an
integrate-and-fire spiking model. The synaptic strength is determined by a
learning rule based on spike-time-dependent plasticity, with an asymmetric time
window depending on the relative timing between pre- and post-synaptic
activity. We store multiple patterns and study the network capacity.
For the analog model, we find that the network capacity scales linearly with
the network size, and that both capacity and the oscillation frequency of the
retrieval state depend on the asymmetry of the learning time window. In
addition to fully-connected networks, we study sparse networks, where each
neuron is connected only to a small number z << N of other neurons. Connections
can be short range, between neighboring neurons placed on a regular lattice, or
long range, between randomly chosen pairs of neurons. We find that a small
fraction of long range connections is able to amplify the capacity of the
network. This implies that a small-world network topology is optimal, as a
compromise between the cost of long-range connections and the capacity
increase.
In the spiking integrate-and-fire model, too, the crucial result of storage
and retrieval of multiple phase-coded patterns is observed. The capacity of
the fully-connected spiking network is investigated, together with the
relation between the oscillation frequency of the retrieval state and the
asymmetry of the window.
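The sparse topology described above, z << N local links plus a small fraction of long-range ones, can be sketched as a Watts-Strogatz-style rewiring; the parameter values are illustrative:

```python
import numpy as np

def small_world_adjacency(N=200, z=8, p=0.1, seed=0):
    """Ring of N neurons, each linked to its z nearest neighbours; a
    fraction p of the edges is rewired to random long-range targets."""
    rng = np.random.default_rng(seed)
    A = np.zeros((N, N), dtype=bool)
    for i in range(N):
        for k in range(1, z // 2 + 1):      # short-range neighbours
            j = (i + k) % N
            if rng.random() < p:            # rewire to a long-range target
                j = int(rng.integers(N))
                while j == i or A[i, j]:
                    j = int(rng.integers(N))
            A[i, j] = A[j, i] = True        # undirected connection
    return A

A = small_world_adjacency()
mean_degree = A.sum(axis=1).mean()          # stays close to z
```

Even at small p, the few long-range shortcuts sharply reduce path lengths across the ring, which is the mechanism behind the capacity amplification reported above.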
Experimental study of artificial neural networks using a digital memristor simulator
© 2018 IEEE. This paper presents a fully digital implementation of a memristor hardware simulator, as the core of an emulator, based on a behavioral model of voltage-controlled threshold-type bipolar memristors. Compared to other analog solutions, the proposed digital design is compact, easily reconfigurable, demonstrates very good matching with the mathematical model on which it is based, and complies with all the required features for memristor emulators. We validated its functionality using the Altera Quartus II and ModelSim tools, targeting low-cost yet powerful field-programmable gate array (FPGA) families. We tested its suitability for complex memristive circuits as well as its synaptic function in artificial neural networks (ANNs), implementing examples of associative memory and unsupervised learning of spatio-temporal correlations in parallel input streams using a simplified STDP rule. We provide the full circuit schematics of all our digital circuit designs and comment on the required hardware resources and their scaling trends, thus presenting a design framework for applications based on our hardware simulator.
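A minimal sketch of the kind of behavioral model named above, a voltage-controlled threshold-type bipolar memristor, might look as follows; the thresholds, rates, and conductance bounds are illustrative assumptions, not the paper's fitted parameters:

```python
class ThresholdMemristor:
    """Voltage-controlled, threshold-type, bipolar memristor sketch: the
    normalized internal state w in [0, 1] changes only when the applied
    voltage exceeds a threshold, and the polarity sets the direction."""

    def __init__(self, w=0.5, v_set=1.0, v_reset=-1.0,
                 k=1e3, g_on=1e-3, g_off=1e-5):
        self.w = w                            # normalized state
        self.v_set, self.v_reset = v_set, v_reset
        self.k = k                            # state-change rate (1/(V*s))
        self.g_on, self.g_off = g_on, g_off   # conductance bounds (S)

    def step(self, v, dt):
        """Apply voltage v for dt seconds; return the resulting current."""
        if v > self.v_set:                    # SET: drive w toward 1
            self.w += self.k * (v - self.v_set) * dt
        elif v < self.v_reset:                # RESET: drive w toward 0
            self.w += self.k * (v - self.v_reset) * dt
        self.w = min(1.0, max(0.0, self.w))   # clip state to [0, 1]
        g = self.g_off + (self.g_on - self.g_off) * self.w
        return g * v

m = ThresholdMemristor()
m.step(2.0, 1e-3)             # above threshold: state saturates at w = 1
i_read = m.step(0.5, 1e-3)    # sub-threshold read: state unchanged
```

Sub-threshold pulses read the device without disturbing it, which is what makes such a model usable as a non-volatile synapse in the ANN experiments described above.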
Brain-Inspired Spatio-Temporal Associative Memories for Neuroimaging Data Classification: EEG and fMRI
Humans learn from many information sources to make decisions. Once this information is learned, the brain makes spatio-temporal associations, connecting all these sources (variables) in space and time, represented as brain connectivity. In reality, to make a decision we usually have only part of the information: a limited number of variables, limited time to make the decision, or both. The brain functions as a spatio-temporal associative memory (STAM). Inspired by this ability of the human brain, a brain-inspired STAM was proposed earlier that utilizes the NeuCube brain-inspired spiking neural network framework. Here we apply the STAM framework to neuroimaging data, in the cases of EEG and fMRI, resulting in STAM-EEG and STAM-fMRI. This paper shows that once a NeuCube STAM classification model is trained on complete spatio-temporal EEG or fMRI data, it can be recalled using only part of the time series and/or only part of the variables used. We evaluate both temporal and spatial association and the corresponding generalization accuracy. This is a pilot study that opens the field for the development of classification systems on other neuroimaging data, such as longitudinal MRI data, trained on complete data but recalled on partial data. Future research includes STAMs that work on data collected across different settings, in different labs and clinics, which may vary in the variables and time of data collection, along with other parameters. The proposed STAM will be further investigated for early diagnosis and prognosis of brain conditions and for diagnostic/prognostic marker discovery.
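The train-on-complete, recall-on-partial idea can be illustrated with a toy classifier (nearest centroid on synthetic multichannel series). This is only a stand-in for illustration, not NeuCube or the STAM model itself:

```python
import numpy as np

rng = np.random.default_rng(2)
T, C = 100, 8                    # time steps, channels (e.g. EEG electrodes)
means = np.stack([np.zeros((C, 1)), np.ones((C, 1))])  # two synthetic classes

def make_samples(mean, n):
    return mean + 0.5 * rng.standard_normal((n, C, T))

train = [make_samples(m, 20) for m in means]
centroids = [x.mean(axis=0) for x in train]   # "trained" on complete data

def classify(sample, t_keep=T, ch_keep=range(C)):
    """Classify using only the first t_keep steps and the ch_keep channels."""
    ch = list(ch_keep)
    d = [np.linalg.norm(sample[ch][:, :t_keep] - c[ch][:, :t_keep])
         for c in centroids]
    return int(np.argmin(d))

probe = make_samples(means[1], 1)[0]
label_full = classify(probe)                               # all data
label_part = classify(probe, t_keep=30, ch_keep=range(4))  # prefix + subset
```

As long as the partial view retains enough class-discriminative signal, the model trained on complete data still recalls the correct label, the property the paper evaluates on EEG and fMRI.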
Self-organization in the olfactory system: one shot odor recognition in insects
We show in a model of spiking neurons that synaptic plasticity in the mushroom bodies, in combination with the general fan-in, fan-out properties of the early processing layers of the olfactory system, might be sufficient to account for its efficient recognition of odors. For a large variety of initial conditions the model system consistently finds a working solution without any fine-tuning, and is therefore inherently robust. We demonstrate that gain control through the known feedforward inhibition of lateral horn interneurons increases the capacity of the system but is not essential for its general function. We also predict an upper limit on the number of odor classes Drosophila can discriminate, based on the number and connectivity of its olfactory neurons.
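The fan-in, fan-out expansion underlying this account can be sketched as a sparse random projection onto a larger layer with top-k winner-take-all firing, a common abstraction of the projection-neuron-to-Kenyon-cell stage; all sizes and the row normalization are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n_in, n_kc, k = 50, 2000, 100    # input PNs, Kenyon cells, active cells

# Sparse random fan-in weights; rows are normalized so that selection is
# not dominated by cells that happen to receive more connections.
mask = rng.random((n_kc, n_in)) < 0.1
J = mask * rng.standard_normal((n_kc, n_in))
norms = np.linalg.norm(J, axis=1)
norms[norms == 0] = 1.0          # guard the rare all-zero row
J /= norms[:, None]

def sparse_code(odor):
    """Map a dense odor vector to a sparse binary Kenyon-cell code."""
    h = J @ odor
    code = np.zeros(n_kc, dtype=bool)
    code[np.argsort(h)[-k:]] = True   # winner-take-all: top-k cells fire
    return code

a = sparse_code(rng.standard_normal(n_in))
b = sparse_code(rng.standard_normal(n_in))
code_overlap = (a & b).sum() / k      # small for unrelated odors
```

Unrelated odors land on nearly disjoint sparse codes, and the number of reliably separable codes, set by the layer size and connectivity, is the kind of quantity behind the predicted upper limit on discriminable odor classes.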