Neuromorphic Online Learning for Spatiotemporal Patterns with a Forward-only Timeline
Spiking neural networks (SNNs) are bio-plausible computing models with high
energy efficiency. The temporal dynamics of neurons and synapses enable them to
detect temporal patterns and generate sequences. While Backpropagation Through
Time (BPTT) is traditionally used to train SNNs, it is not suitable for
online learning in embedded applications due to its high computation and
memory cost as well as its extended latency. Previous works have proposed
online learning
algorithms, but they often utilize highly simplified spiking neuron models
without synaptic dynamics and reset feedback, resulting in subpar performance.
In this work, we present Spatiotemporal Online Learning for Synaptic Adaptation
(SOLSA), specifically designed for online learning of SNNs composed of Leaky
Integrate and Fire (LIF) neurons with exponentially decayed synapses and soft
reset. The algorithm not only learns the synaptic weight but also adapts the
temporal filters associated with the synapses. Compared to the BPTT
algorithm, SOLSA has a much lower memory requirement and achieves a more
balanced temporal workload distribution. Moreover, SOLSA incorporates
enhancement techniques such as scheduled weight updates, early-stop
training, and adaptive synapse filters, which speed up convergence and
enhance learning performance. Compared to other non-BPTT-based SNN
learning algorithms, SOLSA demonstrates an average learning accuracy
improvement of 14.2%. Furthermore, compared to BPTT, SOLSA achieves a 5%
higher average learning accuracy with a 72% reduction in memory cost.
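For concreteness, here is a minimal Python/NumPy sketch of the neuron model SOLSA is designed for: a LIF layer with exponentially decayed synaptic current and a soft (subtractive) reset. The function name and all constants (tau_mem, tau_syn, v_th) are illustrative assumptions, not values from the paper.

    import numpy as np

    def lif_step(v, i_syn, spikes_in, w, tau_mem=20.0, tau_syn=5.0, v_th=1.0, dt=1.0):
        """One discrete-time update for a layer of LIF neurons."""
        i_syn = i_syn * np.exp(-dt / tau_syn) + w @ spikes_in  # exponentially decayed synapse
        v = v * np.exp(-dt / tau_mem) + i_syn                  # leaky membrane integration
        spikes_out = (v >= v_th).astype(float)                 # threshold crossing
        v = v - v_th * spikes_out                              # soft reset: subtract threshold
        return v, i_syn, spikes_out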
Memristor-Based HTM Spatial Pooler with On-Device Learning for Pattern Recognition
This article investigates the hardware implementation of hierarchical temporal memory (HTM), a brain-inspired machine learning algorithm that mimics key functions of the neocortex and is applicable to many machine learning tasks. The spatial pooler (SP) is one of the two main parts of HTM, designed to learn the spatial information of input patterns and obtain their sparse distributed representations (SDRs). The other part is the temporal memory (TM), which aims to learn the temporal information of inputs. The memristor, an appropriate synapse emulator for neuromorphic systems, can be used as the synapse in SP and TM circuits. In this article, a memristor-based SP (MSP) circuit structure is designed to accelerate the execution of the SP algorithm. The presented MSP models both the synaptic permanence and the synaptic connection state within a single synapse, and supports on-device, parallel learning. Simulation results on statistical metrics and classification tasks over several real-world datasets substantiate the validity of the MSP.
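For orientation, a hedged sketch of the SP computation that the MSP circuit accelerates follows: overlaps of the binary input with each column's connected synapses, a k-winners-take-all selection that yields the SDR, and a Hebbian-style permanence update for on-device learning. Column counts, thresholds, and increments are illustrative assumptions, not values from the article.

    import numpy as np

    def sp_step(x, perm, connect_th=0.5, k=40, inc=0.05, dec=0.02):
        """One spatial-pooler step: overlap, k-winners-take-all, permanence update."""
        connected = (perm >= connect_th).astype(float)  # synaptic connection state
        overlap = connected @ x               # per-column overlap with the binary input
        active = np.argsort(overlap)[-k:]     # k winning columns form the SDR
        # on-device learning: strengthen synapses aligned with active inputs
        perm[active] += np.where(x > 0, inc, -dec)
        np.clip(perm, 0.0, 1.0, out=perm)
        sdr = np.zeros(perm.shape[0])
        sdr[active] = 1.0
        return sdr, perm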
Learning of chunking sequences in cognition and behavior
We often learn and recall long sequences in smaller segments, such as a phone number 858 534 22 30 memorized as four segments. Behavioral experiments suggest that humans and some animals employ this strategy of breaking down cognitive or behavioral sequences into chunks in a wide variety of tasks, but the dynamical principles of how this is achieved remain unknown. Here, we study the temporal dynamics of chunking for learning cognitive sequences in a chunking representation, using a dynamical model of competing modes arranged to evoke hierarchical Winnerless Competition (WLC) dynamics. Sequential memory is represented as trajectories along a chain of metastable fixed points at each level of the hierarchy, and bistable Hebbian dynamics enables the learning of such trajectories in an unsupervised fashion. Using computer simulations, we demonstrate the learning of a chunking representation of sequences and their robust recall. During learning, the dynamics associates a set of modes with each information-carrying item in the sequence and encodes their relative order. During recall, hierarchical WLC guarantees the robustness of the sequence order when the sequence is not too long. The resulting patterns of activity share several features observed in behavioral experiments, such as the pauses between chunk boundaries, their size, and their duration. Failures in learning chunking sequences provide new insights into the dynamical causes of neurological disorders such as Parkinson's disease and schizophrenia.
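As a sketch of the kind of dynamics involved, the generalized Lotka-Volterra system below produces winnerless competition: an asymmetric inhibition matrix creates a chain of metastable saddle points that the activity visits in a fixed order. The connectivity is hand-wired here for illustration; in the model described above it is learned by bistable Hebbian dynamics.

    import numpy as np

    N = 5
    rho = np.full((N, N), 1.5)           # strong mutual inhibition between modes
    np.fill_diagonal(rho, 1.0)           # self-interaction
    for i in range(N):
        rho[i, (i - 1) % N] = 0.5        # weak inhibition from the predecessor mode

    rng = np.random.default_rng(0)
    a = rng.uniform(0.1, 0.2, N)         # mode activities
    dt = 0.01
    for _ in range(20000):
        # generalized Lotka-Volterra update; argmax(a) visits the modes in a
        # fixed cyclic order, dwelling near each metastable saddle in turn
        a += dt * a * (1.0 - rho @ a) + 1e-6 * rng.uniform(0.0, 1.0, N)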
Semantic learning in autonomously active recurrent neural networks
The human brain is autonomously active, characterized by self-sustained
neural activity that would be present even in the absence of external
sensory stimuli. Here we study the interrelation between the
self-sustained activity of autonomously active recurrent neural nets and
external sensory stimuli.
There is no a priori semantic relation between the influx of external
stimuli and the patterns generated internally by the autonomous, ongoing
brain dynamics. The question then arises of when and how semantic
correlations between internal and external dynamical processes are
learned and built up.
We study this problem within the paradigm of transient state dynamics for the
neural activity in recurrent neural nets, i.e. for an autonomous neural
activity characterized by an infinite time-series of transiently stable
attractor states. We propose that external stimuli will be relevant during the
sensitive periods, viz. the transition period between one transient state
and the subsequent semi-stable attractor. A diffusive learning signal is
generated, in an unsupervised fashion, whenever the stimulus influences
the internal dynamics qualitatively.
For testing, we presented the model system with stimuli corresponding to
the bars-and-stripes problem. We found that the system performs a
non-linear independent component analysis on its own while remaining
continuously and autonomously active. This emergent cognitive capability
results from a general principle for the neural dynamics: the competition
between neural ensembles.
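A toy sketch of transient-state dynamics in this spirit, with all constants invented for illustration: competing ensembles with slow adaptation hop autonomously through semi-stable states, and a weak external stimulus can bias which ensemble wins during a transition.

    import numpy as np

    rng = np.random.default_rng(1)
    N = 6
    W = 2.0 * np.eye(N) - np.ones((N, N))   # self-excitation plus mutual inhibition
    x = rng.uniform(0.0, 1.0, N)            # ensemble activities
    phi = np.ones(N)                        # slow adaptation ("reservoir") levels

    for t in range(5000):
        stim = rng.uniform(0.0, 0.05, N)    # weak external sensory influx
        drive = np.maximum(0.0, phi * (W @ x) + stim)
        x += 0.1 * (-x + np.tanh(drive))    # fast competition between ensembles
        phi += 0.002 * (1.0 - phi) - 0.02 * x * phi  # winners deplete, losers recover
        # np.argmax(x) dwells in one transient state, then hops to the next;
        # a stronger stim during a transition biases which ensemble wins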
A Biologically Based Spatio-Temporal Framework for the Matching and Encoding of Data
This thesis presents a neuron model and a framework for the architecture and interaction of neurons in order to accomplish two tasks: 1) data matching, and 2) the storage and retrieval of information. The tasks are approached from the basis of biologically inspired spiking neural network theory. The fundamental aspects of this model are extracted and implemented in conjunction with the designed framework, resulting in a model that takes advantage of the spatio-temporal nature of neurons to match, store, and retrieve data. The driving features are the rest, or refractory, period of the neurons and the finite, positively sloped post-synaptic responses. When superposed, these responses may push a neuron's potential past the threshold, causing the neuron to fire. A framework, the Competitive Classifying Unit, composed of groups of dynamic-threshold neurons, is used to match binary strings, with and without noise present. In the absence of noise, results show an increase in accuracy with decreasing standard deviation in the randomness of the neuron threshold. With noise present, the framework retains its ability to identify the specified sequence.
To realize the second task, an additional architectural structure for storing and retrieving data based on spike arrival times is presented. Training pulse arrival times, in conjunction with firings caused by upstream neurons, result in synapse weight adjustments. Ultimately, the data is storable and retrievable due to the synapse connections developed between neurons in a network, connections that are either strengthened or pared away during training. Due to the precise timing requirements of the system, a clock is required to measure the passage of time. This necessity implies periodicity; synchronicity is supported by the number of upstream neuron firings required to cause a downstream neuron to fire, and both are supported by homeostasis constraints. Finally, the limits on data storage capacity (i.e., the number and length of binary strings) are determined based on the number of neurons in a neuron cluster.
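The driving mechanism of the first task can be sketched as follows (response shape and constants are illustrative assumptions): finite, positively sloped post-synaptic responses superpose on a neuron's potential, a threshold crossing causes a firing, and a refractory rest period follows.

    import numpy as np

    def psp(t):
        """Finite, positively sloped post-synaptic response lasting 5 steps."""
        return 0.3 * (t + 1) if 0 <= t < 5 else 0.0

    T = 50
    spike_times = [3, 4, 6, 20]              # upstream spike arrival times
    v = np.zeros(T)
    fired, refractory_until = [], -1
    for t in range(T):
        if t < refractory_until:             # rest (refractory) period: inputs ignored
            continue
        v[t] = sum(psp(t - s) for s in spike_times)  # superposed responses
        if v[t] >= 1.0:                      # potential pushed past threshold
            fired.append(t)
            refractory_until = t + 8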
Unleashing the Potential of Spiking Neural Networks by Dynamic Confidence
This paper presents a new methodology to alleviate the fundamental trade-off
between accuracy and latency in spiking neural networks (SNNs). The approach
involves decoding confidence information over time from the SNN outputs and
using it to develop a decision-making agent that can dynamically determine when
to terminate each inference.
The proposed method, Dynamic Confidence, provides several significant
benefits to SNNs. 1. It can effectively optimize latency dynamically at
runtime, setting it apart from many existing low-latency SNN algorithms. Our
experiments on CIFAR-10 and ImageNet datasets have demonstrated an average 40%
speedup across eight different settings after applying Dynamic Confidence. 2.
The decision-making agent in Dynamic Confidence is straightforward to construct
and highly robust in parameter space, making it extremely easy to implement. 3.
The proposed method enables visualizing the potential of any given SNN, which
sets a target for current SNNs to approach. For instance, if an SNN can
terminate at the most appropriate time point for each input sample, a ResNet-50
SNN can achieve an accuracy as high as 82.47% on ImageNet within just 4.71 time
steps on average. Unlocking the potential of SNNs requires constructing a
highly reliable decision-making agent and feeding it a high-quality
estimation of the ground truth. In this regard, Dynamic Confidence
represents a meaningful step toward realizing the potential of SNNs.
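Schematically, the inference loop reads like the hedged sketch below: output spikes are accumulated per time step, a confidence value is decoded from the running counts (a softmax is used here as one plausible decoder), and inference terminates once confidence clears a threshold. snn_step and the 0.9 threshold are placeholders, not the paper's exact agent.

    import numpy as np

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    def infer(snn_step, x, num_classes=10, max_steps=64, conf_th=0.9):
        """Run an SNN step by step; stop as soon as decoded confidence is high."""
        counts = np.zeros(num_classes)
        for t in range(1, max_steps + 1):
            counts += snn_step(x, t)          # output-layer spikes at time step t
            conf = softmax(counts).max()      # decoded confidence at this step
            if conf >= conf_th:               # decision agent: terminate inference
                return counts.argmax(), t
        return counts.argmax(), max_steps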
Synaptic plasticity and memory addressing in biological and artificial neural networks
Biological brains are composed of neurons, interconnected by synapses to create large complex networks. Learning and memory occur, in large part, due to synaptic plasticity -- modifications in the efficacy of information transmission through these synaptic connections. Artificial neural networks model these with neural "units" which communicate through synaptic weights. Models of learning and memory propose synaptic plasticity rules that describe and predict the weight modifications. An equally important but under-evaluated question is the selection of which synapses should be updated in response to a memory event. In this work, we attempt to separate the question of synaptic plasticity from that of memory addressing.
Chapter 1 provides an overview of the problem of memory addressing and a summary of the solutions that have been considered in computational neuroscience and artificial intelligence, as well as those that may exist in biology. Chapter 2 presents in detail a solution to memory addressing and synaptic plasticity in the context of familiarity detection, suggesting strong feedforward weights and anti-Hebbian plasticity as the respective mechanisms. Chapter 3 proposes a model of recall, with storage performed by addressing through local third factors and neo-Hebbian plasticity, and retrieval by content-based addressing. In Chapter 4, we consider the problem of concurrent memory consolidation and memorization. Both storage and retrieval are performed by content-based addressing, but the plasticity rule itself is implemented by gradient descent, modulated according to whether an item should be stored in a distributed manner or memorized verbatim. However, the classical method for computing gradients in recurrent neural networks, backpropagation through time, is generally considered unbiological. In Chapter 5 we suggest a more realistic implementation through an approximation of recurrent backpropagation.
Taken together, these results propose a number of potential mechanisms for memory storage and retrieval, each of which separates the mechanism of synaptic updating -- plasticity -- from that of synapse selection -- addressing. Explicit studies of memory addressing may find applications not only in artificial intelligence but also in biology. In artificial networks, for example, selectively updating memories in large language models can help improve user privacy and security. In biological ones, understanding memory addressing can help improve health outcomes and the treatment of memory-based illnesses such as Alzheimer's disease or PTSD.
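As one concrete reading of the Chapter 2 mechanism summarized above, the toy sketch below combines strong random feedforward weights with an anti-Hebbian update that depresses the readout's response to stored patterns, so familiar inputs evoke weaker responses than novel ones. Dimensions and the learning rate are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    n, eta = 200, 0.002
    w = rng.normal(0.0, 1.0 / np.sqrt(n), n)  # strong random feedforward weights

    def store(x):
        """Anti-Hebbian update: depress weights aligned with the response."""
        global w
        w = w - eta * (w @ x) * x

    def response(x):
        return abs(w @ x)                     # low response => familiar

    x_old = rng.choice([-1.0, 1.0], n)
    for _ in range(20):
        store(x_old)                          # repeated exposure stores the pattern
    x_new = rng.choice([-1.0, 1.0], n)
    print(response(x_old) < response(x_new))  # True: the stored pattern is familiar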