On the number of limit cycles in asymmetric neural networks
Understanding the mechanisms underlying the functioning of complex
interconnected networks is one of the main goals of neuroscience. In this
work, we investigate how the structure of recurrent connectivity influences a
network's ability to store patterns, and in particular limit cycles, by
modeling a recurrent neural network of McCulloch-Pitts neurons as a
content-addressable memory system.
A key role in such models is played by the connectivity matrix, which, for
neural networks, corresponds to a schematic representation of the "connectome":
the set of chemical synapses and electrical junctions among neurons. The shape
of the recurrent connectivity matrix plays a crucial role in the process of
storing memories. This relation was already explored by Tanaka and Edwards,
who present a theoretical approach to evaluating the mean number of fixed
points in a fully connected model in the thermodynamic limit.
Interestingly, further studies on the same kind of model but with a finite
number of nodes have shown how the symmetry parameter influences the types of
attractors featured in the system. Our study extends the work of Tanaka and
Edwards by providing a theoretical evaluation of the mean number of attractors
of any given length for different degrees of symmetry in the connectivity
matrices.
Comment: 35 pages, 12 figures
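The parallel dynamics studied in this abstract can be illustrated with a short simulation: a McCulloch-Pitts network updated synchronously, with a symmetry parameter interpolating between the symmetric and antisymmetric parts of a random coupling matrix. This is a minimal sketch with hypothetical parameters, not the authors' analytical calculation:

```python
import numpy as np

def limit_cycle_length(J, s0, max_steps=5000):
    """Iterate the parallel dynamics s(t+1) = sign(J s(t)) from state s0
    and return the period of the attractor reached (1 = fixed point)."""
    seen, s = {}, s0.copy()
    for t in range(max_steps):
        key = s.tobytes()
        if key in seen:
            return t - seen[key]          # period of the limit cycle
        seen[key] = t
        s = np.where(J @ s >= 0, 1, -1)
    return None                           # no recurrence within max_steps

rng = np.random.default_rng(0)
n, eta = 12, 0.5                          # size and symmetry degree (hypothetical values)
A = rng.standard_normal((n, n))
J = eta * (A + A.T) / 2 + (1 - eta) * (A - A.T) / 2  # tune symmetric vs. antisymmetric part

lengths = [limit_cycle_length(J, rng.choice([-1, 1], size=n)) for _ in range(50)]
print(sorted({l for l in lengths if l}))  # distinct attractor lengths observed
```

For a fully symmetric matrix, parallel dynamics are known to admit only attractors of period 1 or 2; varying `eta` toward the antisymmetric side is what makes longer cycles appear.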
Modeling Fault Propagation Paths in Power Systems: A New Framework Based on Event SNP Systems With Neurotransmitter Concentration
Revealing fault propagation paths is one of the most critical tasks in the analysis of
power system security; however, it is rather difficult. This paper proposes a new framework for modeling
the fault propagation paths of power systems based on membrane computing. We first model the fault
propagation paths by proposing event spiking neural P systems (Ev-SNP systems) with neurotransmitter
concentration, which can intuitively reveal the fault propagation path thanks to their graphical models
and parallel knowledge reasoning. The neurotransmitter concentration is used to represent the probability
and severity of fault propagation across synapses. Then, to reduce the dimension of the Ev-SNP
system and make it suitable for large-scale power systems, we propose a model reduction method
for the Ev-SNP system and devise its simplified model, called the reduction-SNP system (RSNP system),
by constructing single-input and single-output neurons. Moreover, we apply the RSNP system to the IEEE
14- and 118-bus systems to study their fault propagation paths. The proposed approach extends
SNP systems to a large-scale application in critical infrastructures, moving from single-element to system-wise
investigation and from ex-post fault diagnosis to ex-ante fault propagation path prediction,
and the simulation results demonstrate a promising new approach for the engineering domain.
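The fault-propagation idea can be sketched as a simple confidence-passing procedure on a directed graph, with each synapse weighted by a "neurotransmitter concentration". This is an illustrative toy with a hypothetical four-bus topology, not the paper's Ev-SNP formalism:

```python
# Hypothetical sketch: each neuron holds a fault confidence; when it fires,
# it forwards confidence along its synapses, attenuated by the synapse's
# neurotransmitter concentration (a stand-in for propagation probability).

def propagate_faults(synapses, sources, threshold=0.2):
    """synapses: {pre: [(post, concentration), ...]};
    sources: {neuron: initial fault confidence}.
    Returns the final confidence reached at every neuron."""
    conf = dict(sources)
    frontier = list(sources)
    while frontier:
        pre = frontier.pop()
        for post, c in synapses.get(pre, []):
            new = conf[pre] * c                   # attenuate along the synapse
            if new > conf.get(post, 0.0) and new >= threshold:
                conf[post] = new                  # a fault path reaches this neuron
                frontier.append(post)
    return conf

# toy topology and concentrations (illustrative, not an IEEE test case)
synapses = {"bus1": [("bus2", 0.9), ("bus3", 0.4)],
            "bus2": [("bus4", 0.8)],
            "bus3": [("bus4", 0.5)]}
print(propagate_faults(synapses, {"bus1": 1.0}))
```

The highest-confidence path to `bus4` goes through `bus2` (0.9 × 0.8 = 0.72), which is the kind of path ranking the abstract describes.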
Data-driven modeling of the olfactory neural codes and their dynamics in the insect antennal lobe
Recordings from neurons in the insect olfactory primary processing center,
the antennal lobe (AL), reveal that the AL is able to process the input from
chemical receptors into distinct neural activity patterns, called olfactory
neural codes. These exciting results show the importance of neural codes and
their relation to perception. The next challenge is to \emph{model the
dynamics} of neural codes. In our study, we perform multichannel recordings
from the projection neurons in the AL driven by different odorants. We then
derive a neural network from the electrophysiological data. The network
consists of lateral-inhibitory neurons and excitatory neurons, and is capable
of producing unique olfactory neural codes for the tested odorants.
Specifically, we (i) design a projection, an odor space, for the neural
recordings from the AL, which discriminates between distinct odorant
trajectories; (ii) characterize scent recognition, i.e., decision-making based
on olfactory signals; and (iii) infer the wiring of the neural circuit, the
connectome of the AL. We show that the constructed model is consistent with
biological observations, such as contrast enhancement and robustness to noise.
The study answers a key biological question: how lateral inhibitory neurons
can be wired to excitatory neurons to permit robust activity patterns.
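The contrast-enhancement effect of lateral inhibition can be illustrated with a small firing-rate sketch. The uniform inhibition weight and the input pattern below are hypothetical, not the network inferred from the recordings:

```python
import numpy as np

def al_response(odor_input, w_inh=0.3, steps=30):
    """Firing-rate caricature of an antennal-lobe circuit: excitatory units
    driven by receptor input, with uniform lateral inhibition from all other
    units. Iterates a rectified-linear rate update to its fixed point."""
    n = len(odor_input)
    W = -w_inh * (np.ones((n, n)) - np.eye(n))   # inhibit everyone but yourself
    r = np.zeros(n)
    for _ in range(steps):
        r = np.maximum(0.0, odor_input + W @ r)  # rates cannot go negative
    return r

odor = np.array([1.0, 0.8, 0.2, 0.1])            # hypothetical receptor activations
print(np.round(al_response(odor), 3))
```

The strongly driven units survive while the weakly driven ones are silenced entirely, i.e. the contrast enhancement mentioned in the abstract.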
A Comprehensive Workflow for General-Purpose Neural Modeling with Highly Configurable Neuromorphic Hardware Systems
In this paper we present a methodological framework that meets novel
requirements emerging from upcoming types of accelerated and highly
configurable neuromorphic hardware systems. We describe in detail a device with
45 million programmable and dynamic synapses that is currently under
development, and we sketch the conceptual challenges that arise from taking
this platform into operation. More specifically, we aim at the establishment of
this neuromorphic system as a flexible and neuroscientifically valuable
modeling tool that can be used by non-hardware-experts. We consider various
functional aspects to be crucial for this purpose, and we introduce a
consistent workflow with detailed descriptions of all involved modules that
implement the suggested steps: The integration of the hardware interface into
the simulator-independent model description language PyNN; a fully automated
translation between the PyNN domain and appropriate hardware configurations; an
executable specification of the future neuromorphic system that can be
seamlessly integrated into this biology-to-hardware mapping process as a test
bench for all software layers and possible hardware design modifications; an
evaluation scheme that deploys models from a dedicated benchmark library,
compares the results generated by virtual or prototype hardware devices with
reference software simulations and analyzes the differences. The integration of
these components into one hardware-software workflow provides an ecosystem for
ongoing preparative studies that support the hardware design process and
represents the basis for the maturity of the model-to-hardware mapping
software. The functionality and flexibility of the latter are demonstrated with a
variety of experimental results.
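The automated translation step from a simulator-independent description to a hardware configuration can be caricatured as follows. All names, parameters, and the all-to-all connector are illustrative assumptions, not the actual PyNN or hardware API:

```python
def map_to_hardware(populations, projections, synapse_budget=45_000_000):
    """Translate an abstract model description into a flat hardware config.
    populations: [(name, size, cell_params)];
    projections: [(pre, post, weight)], taken as all-to-all for brevity.
    The budget mirrors the 45 million programmable synapses of the device."""
    config = {"neurons": {}, "synapses": []}
    next_id = 0
    for name, size, params in populations:       # assign hardware neuron ids
        config["neurons"][name] = {"ids": list(range(next_id, next_id + size)),
                                   "params": params}
        next_id += size
    for pre, post, w in projections:             # expand each projection
        for i in config["neurons"][pre]["ids"]:
            for j in config["neurons"][post]["ids"]:
                config["synapses"].append((i, j, w))
    if len(config["synapses"]) > synapse_budget:
        raise ValueError("model exceeds hardware synapse capacity")
    return config

cfg = map_to_hardware(
    populations=[("exc", 4, {"tau_m": 10.0}), ("inh", 2, {"tau_m": 5.0})],
    projections=[("exc", "inh", 0.5), ("inh", "exc", -1.0)])
print(len(cfg["synapses"]))  # 4*2 + 2*4 = 16 hardware synapses
```

A real mapping layer must additionally handle parameter discretization and routing constraints; the point here is only the shape of the biology-to-hardware translation.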
BrainFrame: A node-level heterogeneous accelerator platform for neuron simulations
Objective: The advent of High-Performance Computing (HPC) in recent years has
led to its increasing use in brain study through computational models. The
scale and complexity of such models are constantly increasing, leading to
challenging computational requirements. Even though modern HPC platforms can
often deal with such challenges, the vast diversity of the modeling field does
not permit a single acceleration (or homogeneous) platform to effectively
address the complete array of modeling requirements. Approach: In this paper we
propose and build BrainFrame, a heterogeneous acceleration platform,
incorporating three distinct acceleration technologies, a Dataflow Engine, a
Xeon Phi and a GP-GPU. The PyNN framework is also integrated into the platform.
As a challenging proof of concept, we analyze the performance of BrainFrame on
different instances of a state-of-the-art neuron model, modeling the Inferior
Olivary Nucleus using a biophysically meaningful, extended Hodgkin-Huxley
representation. The model instances take into account not only the
neuronal-network dimensions but also different network-connectivity circumstances that
can drastically change application workload characteristics. Main results: The
combination of the three HPC technologies demonstrated that BrainFrame is
better able to cope with the modeling diversity encountered. Our performance
analysis clearly shows that the model instance directly affects performance and
that all three technologies are required to cover all the model use cases.
Comment: 16 pages, 18 figures, 5 tables
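For a flavor of the workload being accelerated, here is a forward-Euler integration of the classic Hodgkin-Huxley equations with standard squid-axon parameters. This is far simpler than the extended inferior-olive model benchmarked in the paper, but it shows the per-neuron arithmetic that dominates such simulations:

```python
import numpy as np

def hh_step(V, m, h, n, I_ext, dt=0.01):
    """One forward-Euler step (dt in ms) of the classic Hodgkin-Huxley
    model: three gating variables and one membrane potential per neuron."""
    am = 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10)); bm = 4 * np.exp(-(V + 65) / 18)
    ah = 0.07 * np.exp(-(V + 65) / 20);                 bh = 1 / (1 + np.exp(-(V + 35) / 10))
    an = 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10)); bn = 0.125 * np.exp(-(V + 65) / 80)
    I_Na = 120.0 * m**3 * h * (V - 50.0)   # sodium current
    I_K  = 36.0 * n**4 * (V + 77.0)        # potassium current
    I_L  = 0.3 * (V + 54.387)              # leak current
    V += dt * (I_ext - I_Na - I_K - I_L)   # C_m = 1 uF/cm^2
    m += dt * (am * (1 - m) - bm * m)
    h += dt * (ah * (1 - h) - bh * h)
    n += dt * (an * (1 - n) - bn * n)
    return V, m, h, n

V, m, h, n = -65.0, 0.05, 0.6, 0.32        # resting-state initial conditions
trace = []
for _ in range(20000):                     # 200 ms at dt = 0.01 ms
    V, m, h, n = hh_step(V, m, h, n, I_ext=10.0)
    trace.append(V)
print(max(trace))
```

Multiplying this state update by tens of thousands of coupled, multi-compartment cells is what motivates the Dataflow/Xeon Phi/GP-GPU comparison in the abstract.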
Neuro-Fuzzy Computing System with the Capacity of Implementation on Memristor-Crossbar and Optimization-Free Hardware Training
In this paper, first we present a new explanation for the relation between
logical circuits and artificial neural networks, logical circuits and fuzzy
logic, and artificial neural networks and fuzzy inference systems. Then, based
on these results, we propose a new neuro-fuzzy computing system which can
effectively be implemented on the memristor-crossbar structure. One important
feature of the proposed system is that its hardware can directly be trained
using the Hebbian learning rule, without the need for any optimization. The
system also has a very good capability to deal with huge numbers of input-output
training data without facing problems like overtraining.
Comment: 16 pages, 11 images, submitted to IEEE Trans. on Fuzzy Systems
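The optimization-free training idea can be illustrated with a one-shot Hebbian rule on a toy pattern-association task. The bipolar coding and the data below are illustrative assumptions, not the paper's neuro-fuzzy system:

```python
import numpy as np

def hebbian_train(X, Y):
    """One-shot Hebbian rule: W = sum over samples of y x^T. Each weight
    grows with the correlation of its input and output lines -- the kind of
    purely local update a memristor crossbar can realize without running
    any global optimization loop."""
    return Y.T @ X

# toy task: associate two 4-bit bipolar patterns with 2-bit bipolar labels
X = np.array([[1, 1, -1, -1],
              [-1, -1, 1, 1]], dtype=float)
Y = np.array([[1, -1],
              [-1, 1]], dtype=float)

W = hebbian_train(X, Y)          # weights computed in a single pass
pred = np.sign(X @ W.T)          # recall: threshold the crossbar output
print((pred == Y).all())
```

Because the update is a single pass of local products, it maps naturally onto crossbar hardware; the price is limited capacity compared to gradient-trained networks.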