Intraneuronal information processing, directional selectivity and memory for spatio-temporal sequences.
Interacting intracellular signalling pathways can perform computations on a scale that is slower, but more fine-grained, than the interactions between neurons upon which we normally build our computational models of the brain (Bray D 1995 Nature 376 307-12). What computations might these potentially powerful intraneuronal mechanisms be performing? The answer suggested here is the storage of spatio-temporal trajectories, a task for which neurons possess some of the required capacities. In the retina, it is suggested that calcium-induced calcium release (CICR) may provide the basis for directional selectivity. In the cortex, if activation mechanisms with different delays could be separately reinforced at individual synapses, then each such Hebbian super-synapse would store a memory trace of the delay between pre- and post-synaptic activity, forming an ideal basis for the memory of, and response to, phase sequences.
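The directional-selectivity idea above can be illustrated with a toy delay-and-coincide detector (a generic sketch of the principle, not the abstract's CICR mechanism; all names and values here are illustrative):

```python
# Toy direction detector: pairing a delayed copy of input A with the
# current value of neighbouring input B responds to A-to-B motion but
# not to the reverse sweep.

def detect(seq_a, seq_b, delay=1):
    """Correlate input A delayed by `delay` steps with input B."""
    return sum(seq_a[t - delay] * seq_b[t] for t in range(delay, len(seq_b)))

# A stimulus moving from A to B: A fires first, B one step later.
a_then_b = ([1, 0, 0, 0], [0, 1, 0, 0])
b_then_a = ([0, 1, 0, 0], [1, 0, 0, 0])

preferred = detect(*a_then_b)   # delayed A coincides with B -> 1
null = detect(*b_then_a)        # no coincidence -> 0
```

In the abstract's proposal, the slow CICR dynamics would play the role of the delay line.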
Racing to Learn: Statistical Inference and Learning in a Single Spiking Neuron with Adaptive Kernels
This paper describes the Synapto-dendritic Kernel Adapting Neuron (SKAN), a simple spiking neuron model that performs statistical inference and unsupervised learning of spatiotemporal spike patterns. SKAN is the first proposed neuron model to investigate the effects of dynamic synapto-dendritic kernels and demonstrate their computational power even at the single-neuron scale. The rule-set defining the neuron is simple: there are no complex mathematical operations such as normalization, exponentiation or even multiplication. The functionalities of SKAN emerge from the real-time interaction of simple additive and binary processes. Like a biological neuron, SKAN is robust to signal and parameter noise, and can utilize both in its operations. At the network scale, neurons are locked in a race with each other, with the fastest neuron to spike effectively hiding its learnt pattern from its neighbors. The robustness to noise, high speed and simple building blocks not only make SKAN an interesting neuron model in computational neuroscience, but also make it ideal for implementation in digital and analog neuromorphic systems, as demonstrated through an implementation in a Field Programmable Gate Array (FPGA).
Comment: In submission to Frontiers in Neuroscience
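The "additive and binary only" flavour of kernel adaptation can be sketched in a few lines (this is my own drastic simplification for illustration, not SKAN's actual rule-set): each input channel keeps an integer delay, nudged by pure increments until all delayed spikes of the target pattern land on a common instant.

```python
# Minimal sketch of delay/kernel adaptation using only additions and
# comparisons: channels whose spikes arrive early are delayed by one
# more step per epoch, so all arrivals converge onto the latest one.

def train(spike_times, delays, epochs=20):
    for _ in range(epochs):
        arrival = [t + d for t, d in zip(spike_times, delays)]
        latest = max(arrival)
        # Nudge early channels later by one step (no multiplication).
        delays = [d + 1 if a < latest else d
                  for d, a in zip(delays, arrival)]
    return delays

pattern = [3, 7, 5]                 # spike times of the target pattern
delays = train(pattern, [0, 0, 0])
arrival = [t + d for t, d in zip(pattern, delays)]
# After training, all delayed spikes coincide at t = 7.
```

A coincidence of all delayed arrivals is exactly what would drive the earliest threshold crossing in the race between neurons described above.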
Effects of Active Conductance Distribution over Dendrites on the Synaptic Integration in an Identified Nonspiking Interneuron
The synaptic integration in an individual central neuron is critically affected by how active conductances are distributed over its dendrites. It has been well known that the dendrites of central neurons are richly endowed with voltage- and ligand-regulated ion conductances. Nonspiking interneurons (NSIs), almost exclusively characteristic of arthropod central nervous systems, do not generate action potentials and hence lack voltage-regulated sodium channels, yet have a variety of voltage-regulated potassium conductances on their dendritic membrane, including one similar to the delayed-rectifier type potassium conductance. It remains unknown, however, how the active conductances are distributed over dendrites and how synaptic integration is affected by those conductances in NSIs and other invertebrate neurons, where the cell body is not included in the signal pathway from input synapses to output sites. In the present study, we quantitatively investigated, by computer simulation, the functional significance of the active conductance distribution pattern in the spatio-temporal spread of synaptic potentials over the dendrites of an identified NSI in the crayfish central nervous system. We systematically changed the distribution pattern of active conductances in the neuron's multicompartment model and examined how the synaptic potential waveform was affected by each distribution pattern. It was revealed that specific patterns of nonuniform distribution of potassium conductances, but not others, were consistent with the waveform of compound synaptic potentials recorded physiologically in the major input-output pathway of the cell, suggesting that a nonuniform distribution of potassium conductances over the dendrite remains as plausible as a uniform one.
Local synaptic circuits involving input and output synapses on the same branch or on the same side were found to be potentially affected under nonuniform distribution, while operation of the major input-output pathway from the soma side to the opposite side remained the same under both uniform and nonuniform distributions of potassium conductances over the NSI dendrite.
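The core manipulation, redistributing a fixed potassium conductance over compartments and comparing the resulting synaptic potential waveforms, can be sketched with a toy two-compartment model (illustrative parameters of my own choosing, not the paper's crayfish NSI model):

```python
# Two passive compartments coupled by an axial conductance; a brief
# synaptic input arrives distally and the potential is recorded
# proximally. A fixed total K+ conductance is either split uniformly
# or concentrated on the distal compartment.

def simulate(g_k_dist, g_k_prox, steps=400, dt=0.1):
    e_leak, e_k, g_leak, g_axial, c = 0.0, -20.0, 1.0, 2.0, 1.0
    v_dist = v_prox = e_leak
    trace = []
    for i in range(steps):
        g_syn = 5.0 if 10 <= i < 30 else 0.0   # brief distal input
        i_axial = g_axial * (v_prox - v_dist)
        dv_dist = (g_syn * (70.0 - v_dist) + g_leak * (e_leak - v_dist)
                   + g_k_dist * (e_k - v_dist) + i_axial) / c
        dv_prox = (g_leak * (e_leak - v_prox) + g_k_prox * (e_k - v_prox)
                   - i_axial) / c
        v_dist += dt * dv_dist
        v_prox += dt * dv_prox
        trace.append(v_prox)
    return trace

uniform = simulate(g_k_dist=0.5, g_k_prox=0.5)
distal = simulate(g_k_dist=1.0, g_k_prox=0.0)
# Same total K+ conductance, different placement: the proximal PSP
# waveform differs between the two distributions.
```

The study's logic is then to ask which placements reproduce the physiologically recorded waveform.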
Investigation of Synapto-dendritic Kernel Adapting Neuron models and their use in spiking neuromorphic architectures
The motivation for this thesis is the idea that abstract, adaptive, hardware-efficient, inter-neuronal transfer functions (or kernels), which carry information in the form of postsynaptic membrane potentials, are the most important (and heretofore missing) element in neuromorphic implementations of Spiking Neural Networks (SNNs). In the absence of such abstract kernels, spiking neuromorphic systems must realize very large numbers of synapses and their associated connectivity. The resultant hardware and bandwidth limitations create difficult tradeoffs which diminish the usefulness of such systems.
In this thesis a novel model of spiking neurons is proposed. The proposed Synapto-dendritic Kernel Adapting Neuron (SKAN) uses the adaptation of its synapto-dendritic kernels in conjunction with an adaptive threshold to perform unsupervised learning and inference on spatio-temporal spike patterns. The hardware and connectivity requirements of the neuron model are minimized through the use of simple accumulator-based kernels as well as through the use of timing information to perform a winner-take-all operation between the neurons. The learning and inference operations of SKAN are characterized and shown to be robust across a range of noise environments.
Next, the SKAN model is augmented with a simplified, hardware-efficient model of Spike Timing Dependent Plasticity (STDP). In biology, STDP is the mechanism which allows neurons to learn spatio-temporal spike patterns. When the proposed SKAN model is augmented with a simplified STDP rule, where the synaptic kernel is used as a binary flag that enables synaptic potentiation, the result is a synaptic encoding of afferent Signal-to-Noise Ratio (SNR). In this combined model the neuron not only learns the target spatio-temporal spike patterns but also weighs each channel independently according to its signal-to-noise ratio. Additionally, a novel approach is presented to achieving homeostatic plasticity in digital hardware, which reduces hardware cost by eliminating the need for multipliers.
Finally, the behavior and potential utility of this combined model is investigated in a range of noise conditions, and the digital hardware resource utilization of SKAN and SKAN + STDP is detailed using Field Programmable Gate Arrays (FPGAs).
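The claim that a potentiate-or-depress rule gated by a binary kernel flag yields per-channel SNR weighting can be illustrated with a toy counter model (my own stand-in for the thesis's rule; probabilities, trial counts and names are illustrative):

```python
import random

# Each channel's weight is a plain counter: incremented when its spike
# lands inside the learned kernel window (the binary flag is set),
# decremented otherwise. Reliable channels accumulate large weights;
# noisy channels hover near zero. No multipliers needed.

def learn_weights(p_in_window, trials=2000, seed=0):
    rng = random.Random(seed)
    weights = [0] * len(p_in_window)
    for _ in range(trials):
        for i, p in enumerate(p_in_window):
            if rng.random() < p:      # spike inside the kernel window
                weights[i] += 1       # potentiate (pure increment)
            else:
                weights[i] -= 1       # depress (pure decrement)
    return weights

# Channel 0 is reliable (90% in-window), channel 1 is noisy (55%).
w = learn_weights([0.9, 0.55])
# The reliable channel ends with a much larger weight.
```

The expected weight after N trials is N(2p - 1), so the counter directly encodes how far a channel's reliability sits above chance.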
Neuromorphic analogue VLSI
Neuromorphic systems emulate the organization and function of nervous systems. They are usually composed of analogue electronic circuits that are fabricated in the complementary metal-oxide-semiconductor (CMOS) medium using very large-scale integration (VLSI) technology. However, these neuromorphic systems are not another kind of digital computer in which abstract neural networks are simulated symbolically in terms of their mathematical behavior. Instead, they directly embody, in the physics of their CMOS circuits, analogues of the physical processes that underlie the computations of neural systems. The significance of neuromorphic systems is that they offer a method of exploring neural computation in a medium whose physical behavior is analogous to that of biological nervous systems and that operates in real time irrespective of size. The implications of this approach are both scientific and practical. The study of neuromorphic systems provides a bridge between levels of understanding. For example, it provides a link between the physical processes of neurons and their computational significance. In addition, the synthesis of neuromorphic systems transposes our knowledge of neuroscience into practical devices that can interact directly with the real world in the same way that biological nervous systems do.
Liquid State Machine with Dendritically Enhanced Readout for Low-power, Neuromorphic VLSI Implementations
In this paper, we describe a new neuro-inspired, hardware-friendly readout stage for the liquid state machine (LSM), a popular model for reservoir computing. Compared to the parallel perceptron architecture trained by the p-delta algorithm, which is the state of the art in terms of readout-stage performance, our readout architecture and learning algorithm can attain better performance with significantly fewer synaptic resources, making it attractive for VLSI implementation. Inspired by the nonlinear properties of dendrites in biological neurons, our readout stage incorporates neurons having multiple dendrites with a lumped nonlinearity. The number of synaptic connections on each branch is significantly lower than the total number of connections from the liquid neurons, and the learning algorithm tries to find the best 'combination' of input connections on each branch to reduce the error. Hence, the learning involves network rewiring (NRW) of the readout network, similar to the structural plasticity observed in its biological counterparts. We show that compared to a single perceptron using analog weights, this architecture for the readout can attain, even by using the same number of binary-valued synapses, up to 3.3 times less error for a two-class spike train classification problem and 2.4 times less error for an input rate approximation task. Even with 60 times as many synapses, a group of 60 parallel perceptrons cannot attain the performance of the proposed dendritically enhanced readout. An additional advantage of this method for hardware implementations is that the 'choice' of connectivity can be easily implemented exploiting address event representation (AER) protocols commonly used in current neuromorphic systems, where the connection matrix is stored in memory. Also, due to the use of binary synapses, our proposed method is more robust against statistical variations.
Comment: 14 pages, 19 figures, Journal
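The network-rewiring idea, binary connections on branches with a lumped nonlinearity, improved by swapping one connection at a time, can be sketched with a greedy toy version (a stand-in of my own, not the paper's NRW algorithm; the task and constants are illustrative):

```python
import random

# Each dendritic branch holds a few binary connections to "liquid"
# neurons; a branch sums its inputs and applies a lumped square
# nonlinearity. Learning proposes one connection swap at a time and
# keeps it only if the total squared error does not increase.

def branch_out(conns, x):
    s = sum(x[i] for i in conns)
    return s * s                       # lumped branch nonlinearity

def error(branches, data):
    return sum((sum(branch_out(b, x) for b in branches) - y) ** 2
               for x, y in data)

def rewire(branches, data, n_inputs, steps=200, seed=1):
    rng = random.Random(seed)
    for _ in range(steps):
        b = rng.randrange(len(branches))
        j = rng.randrange(len(branches[b]))
        old, new = branches[b][j], rng.randrange(n_inputs)
        before = error(branches, data)
        branches[b][j] = new
        if error(branches, data) > before:   # keep swap only if it helps
            branches[b][j] = old
    return branches

# Toy task: inputs 0 and 1 together predict the target; 2 and 3 are noise.
data = [([1, 1, 0, 1], 4), ([1, 1, 1, 0], 4),
        ([0, 1, 1, 1], 1), ([1, 0, 0, 0], 1)]
branches = rewire([[2, 3]], data, n_inputs=4)
```

Because only the connection matrix changes, this kind of learning maps naturally onto AER-style memory-stored connectivity, as the abstract notes.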
Homogeneous Spiking Neuromorphic System for Real-World Pattern Recognition
A neuromorphic chip that combines CMOS analog spiking neurons and memristive synapses offers a promising solution to brain-inspired computing, as it can provide massive neural network parallelism and density. Previous hybrid analog CMOS-memristor approaches required extensive CMOS circuitry for training, and thus eliminated most of the density advantages gained by the adoption of memristor synapses. Further, they used different waveforms for pre- and post-synaptic spikes, which added undesirable circuit overhead. Here we describe a hardware architecture that can feature a large number of memristor synapses to learn real-world patterns. We present a versatile CMOS neuron that combines integrate-and-fire behavior, drives passive memristors, implements competitive learning in a compact circuit module, and enables in-situ plasticity in the memristor synapses. We demonstrate handwritten-digit recognition with the proposed architecture using transistor-level circuit simulations. As the described neuromorphic architecture is homogeneous, it realizes a fundamental building block for large-scale, energy-efficient, brain-inspired silicon chips that could lead to next-generation cognitive computing.
Comment: This is a preprint of an article accepted for publication in IEEE Journal on Emerging and Selected Topics in Circuits and Systems, vol 5, no. 2, June 2015