5 research outputs found
Neuromemristive Architecture of HTM with On-Device Learning and Neurogenesis
Hierarchical temporal memory (HTM) is a biomimetic sequence memory algorithm
that holds promise for invariant representations of spatial and spatiotemporal
inputs. This paper presents a comprehensive neuromemristive crossbar
architecture for the spatial pooler (SP) and the sparse distributed
representation classifier, which are fundamental to the algorithm. There are
several unique features in the proposed architecture that tightly link with the
HTM algorithm. A memristor that is suitable for emulating the HTM synapses is
identified and a new Z-window function is proposed. The architecture exploits
the concept of synthetic synapses to enable potential synapses in the HTM. The
crossbar for the SP avoids dark spots caused by unutilized crossbar regions and
supports rapid on-chip training within 2 clock cycles. This research also
leverages plasticity mechanisms such as neurogenesis and homeostatic intrinsic
plasticity to strengthen the robustness and performance of the SP. The proposed
design is benchmarked for image recognition tasks using MNIST and Yale faces
datasets, and is evaluated using different metrics including entropy,
sparseness, and noise robustness. Detailed power analysis at different stages
of the SP operations is performed to demonstrate the suitability for mobile
platforms.
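As a rough illustration of the spatial pooler computation the abstract describes (mapping an input to a sparse distributed representation via permanence-gated synapses), here is a minimal software sketch. All parameters, the k-winners selection, and the Hebbian update rule are assumptions for illustration, not the paper's crossbar design:

```python
import numpy as np

# Minimal spatial pooler sketch (illustrative, not the paper's circuit).
# Hypothetical sizes: 64 input bits, 128 columns, 8 winners per step.
rng = np.random.default_rng(0)
n_in, n_cols, k_active = 64, 128, 8
perm_thresh, perm_inc, perm_dec = 0.5, 0.05, 0.02

# Each column holds a potential synapse to every input bit,
# with a randomly initialized permanence value.
perm = rng.uniform(0.3, 0.7, size=(n_cols, n_in))

def spatial_pool(x, learn=True):
    """Map a binary input vector to a sparse distributed representation."""
    connected = perm >= perm_thresh        # permanence threshold gates connectivity
    overlap = connected @ x                # overlap score per column
    active = np.zeros(n_cols, dtype=bool)
    active[np.argsort(overlap)[-k_active:]] = True  # k-winners-take-all sparsification
    if learn:
        # Hebbian update: winning columns strengthen synapses to active
        # input bits and weaken synapses to inactive ones.
        perm[active] += np.where(x.astype(bool), perm_inc, -perm_dec)
        np.clip(perm, 0.0, 1.0, out=perm)
    return active

x = (rng.random(n_in) < 0.3).astype(float)
sdr = spatial_pool(x)
print(int(sdr.sum()))  # 8 active columns out of 128
```

The permanence threshold is what makes "potential" synapses cheap: every synapse exists as an analog value, but only those above threshold contribute to the overlap, mirroring how the paper's memristive synapses gate connectivity.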
Metaplasticity in Multistate Memristor Synaptic Networks
Recent studies have shown that metaplastic synapses can retain information
longer than simple binary synapses and are beneficial for continual learning.
In this paper, we explore the multistate metaplastic synapse characteristics in
the context of high retention and reception of information. Inherent behavior
of a memristor emulating the multistate synapse is employed to capture the
metaplastic behavior. An integrated neural-network study of learning and
memory retention is performed by integrating the synapse into a crossbar at
the circuit level and into a network at the architectural level. On-device
training circuitry enables dynamic learning in the network. In this network,
the multistate synapse is observed to classify 2.1x as many input patterns as
a simple binary synapse model, at a mean accuracy of 75%.
Memristor-Based HTM Spatial Pooler with On-Device Learning for Pattern Recognition
This article investigates a hardware implementation of hierarchical temporal memory (HTM), a brain-inspired machine learning algorithm that mimics key functions of the neocortex and is applicable to many machine learning tasks. The spatial pooler (SP) is one of the two main parts of HTM, designed to learn spatial information and produce sparse distributed representations (SDRs) of input patterns; the other, temporal memory (TM), learns the temporal information of the inputs. The memristor, an appropriate synapse emulator for neuromorphic systems, can serve as the synapse in SP and TM circuits. In this article, a memristor-based SP (MSP) circuit structure is designed to accelerate execution of the SP algorithm. The MSP models both the synaptic permanence and the synaptic connection state within a single synapse, and supports on-device, parallel learning. Simulation results on statistical metrics and classification tasks over several real-world datasets substantiate the validity of the MSP.
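The "permanence and connection state within a single synapse" property can be illustrated with a small model in which one analog value (standing in for the memristor conductance) stores the graded permanence, and thresholding it yields the binary connection state the SP algorithm reads. Threshold and step sizes here are illustrative assumptions, not device parameters from the article:

```python
# Sketch: one analog state variable provides both the graded permanence
# and, via a threshold, the binary connection state (hypothetical values).
class MemristiveSynapse:
    def __init__(self, permanence=0.4, threshold=0.5):
        self.p = permanence        # analog state (memristor conductance)
        self.threshold = threshold

    @property
    def connected(self):
        # The binary connection state is derived from the same stored
        # value, so no separate storage element is needed.
        return self.p >= self.threshold

    def train(self, coactive, inc=0.1, dec=0.05):
        # On-device Hebbian-style rule: strengthen when pre- and
        # post-synaptic activity coincide, otherwise decay.
        self.p = min(1.0, self.p + inc) if coactive else max(0.0, self.p - dec)

syn = MemristiveSynapse()
print(syn.connected)       # False: 0.4 is below the 0.5 threshold
syn.train(coactive=True)   # permanence rises to 0.5
print(syn.connected)       # True: the synapse is now connected
```

In the MSP this dual role is what lets a single memristor per crossbar junction replace the separate permanence memory and connectivity table a software SP would keep.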
Energy Efficient Neocortex-Inspired Systems with On-Device Learning
Shifting compute workloads from the cloud toward edge devices can significantly improve overall latency for inference and learning. However, this paradigm shift exacerbates the resource constraints on edge devices. Neuromorphic computing architectures, inspired by neural processes, are natural substrates for edge devices. They offer co-located memory, in-situ training, energy efficiency, high memory density, and compute capacity in a small form factor. Owing to these features, there has recently been a rapid proliferation of hybrid CMOS/Memristor neuromorphic computing systems. However, most of these systems offer limited plasticity, target either spatial or temporal input streams, and have not been demonstrated on large-scale heterogeneous tasks. There is a critical knowledge gap in designing scalable neuromorphic systems that can support hybrid plasticity for spatio-temporal input streams on edge devices.
This research proposes Pyragrid, a low-latency and energy-efficient neuromorphic computing system for processing spatio-temporal information natively on the edge. Pyragrid is a full-scale custom hybrid CMOS/Memristor architecture with analog computational modules and an underlying digital communication scheme. Pyragrid is designed for hierarchical temporal memory, a biomimetic sequence memory algorithm inspired by the neocortex. It features a novel synthetic-synapse representation that enables dynamic synaptic pathways with reduced memory usage and interconnects. The dynamic growth of the synaptic pathways is emulated through the physical behavior of the memristor device, while the synaptic modulation is enabled through a custom training scheme optimized for area and power.
Pyragrid features data reuse, in-memory computing, and event-driven sparse local computing to reduce data movement by ~44x and to improve system throughput and power efficiency by ~3x and ~161x, respectively, over a custom CMOS digital design. The innate sparsity in Pyragrid yields overall robustness to noise and device failure, particularly when processing visual input and predicting time-series sequences. Porting the proposed system to edge devices can enhance their computational capability, response time, and battery life.