Memory and information processing in neuromorphic systems
A striking difference between brain-inspired neuromorphic processors and
current von Neumann processor architectures is the way in which memory and
processing are organized. While Information and Communication Technologies
continue to address the need for increased computational power by increasing
the number of cores within a digital processor, neuromorphic engineers and
scientists can complement this approach by building processor architectures
where memory is distributed with the processing. In this paper we present a
survey of
brain-inspired processor architectures that support models of cortical networks
and deep neural networks. These architectures range from serial clocked
implementations of multi-neuron systems to massively parallel asynchronous ones
and from purely digital systems to mixed analog/digital systems that implement
more biologically realistic models of neurons and synapses, together with a
suite of adaptation and learning mechanisms analogous to those found in biological
nervous systems. We describe the advantages of the different approaches being
pursued and present the challenges that need to be addressed for building
artificial neural processing systems that can display the richness of behaviors
seen in biological systems.
Comment: Submitted to Proceedings of the IEEE; a review of recently proposed
neuromorphic computing platforms and systems.
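A minimal sketch (not taken from the survey) of the leaky integrate-and-fire
neuron model that many of the reviewed mixed analog/digital platforms emulate
in silicon; all names and parameter values here are illustrative assumptions.

    import numpy as np

    def lif_neuron(input_current, dt=1e-4, tau=20e-3, v_rest=0.0,
                   v_thresh=1.0, v_reset=0.0):
        """Simulate a leaky integrate-and-fire neuron; return spike times."""
        v = v_rest
        spike_times = []
        for step, i_in in enumerate(input_current):
            # Leaky integration: dv/dt = (v_rest - v) / tau + i_in
            v += dt * ((v_rest - v) / tau + i_in)
            if v >= v_thresh:          # threshold crossing emits a spike
                spike_times.append(step * dt)
                v = v_reset            # reset membrane after the spike
        return spike_times

    # A constant suprathreshold input yields a regular spike train.
    spikes = lif_neuron(np.full(10000, 100.0))
    print(f"{len(spikes)} spikes in 1 s of simulated time")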
Building Blocks for Spikes Signals Processing
Neuromorphic engineers study models and implementations of systems that
mimic the behavior of neurons in the brain. Neuro-inspired systems commonly
use spikes to represent information. This representation has several
advantages: robustness to noise thanks to repetition, a continuous and
analog representation of information using digital pulses, and a capacity
for pre-processing during transmission time, among others. Furthermore,
spiking is an efficient way, found by nature, to encode, transmit, and
process information. In this paper we propose, design, and analyze
neuro-inspired building blocks that implement spike-based analog filters
for signal processing. We present a VHDL implementation for FPGA. The
presented building blocks take advantage of the spike-rate-coded
representation to perform massively parallel processing without complex
hardware units, such as floating-point arithmetic units or large memories.
These low hardware requirements allow a large number of blocks to be
integrated on a single FPGA, making it possible to process several
spike-coded signals fully in parallel.
Junta de Andalucía P06-TIC-O1417
Ministerio de Ciencia e Innovación TEC2009-10639-C04-02
Ministerio de Ciencia e Innovación TEC2006-11730-C03-0
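As a rough illustration of the rate-coded principle these blocks exploit, the
Python sketch below encodes a signal as a stochastic spike train and low-pass
filters it by leaky integration of the spikes. The paper's actual blocks are
VHDL circuits for FPGA; every name and constant here is an illustrative
assumption.

    import numpy as np

    rng = np.random.default_rng(0)

    def rate_encode(signal, max_rate=1000.0, dt=1e-3):
        """Encode a signal in [0, 1] as a stochastic spike train (rate coding)."""
        return rng.random(signal.shape) < signal * max_rate * dt

    def spike_lowpass(spikes, alpha=0.02):
        """Leaky integration of incoming spikes, approximating an analog RC
        filter: each spike adds a fixed charge that then decays exponentially."""
        y, out = 0.0, []
        for s in spikes:
            y = (1 - alpha) * y + alpha * float(s)
            out.append(y)
        return np.array(out)

    t = np.linspace(0, 1, 1000)
    x = 0.5 + 0.4 * np.sign(np.sin(2 * np.pi * 3 * t))  # square-wave input
    filtered = spike_lowpass(rate_encode(x))            # smoothed estimate of x
    print(filtered[-5:])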
Energy Efficient Neocortex-Inspired Systems with On-Device Learning
Shifting compute workloads from the cloud toward edge devices can significantly improve the overall latency of inference and learning. However, this paradigm shift exacerbates the resource constraints on edge devices. Neuromorphic computing architectures, inspired by neural processes, are natural substrates for edge devices: they offer co-located memory, in-situ training, energy efficiency, high memory density, and compute capacity in a small form factor. Owing to these features, hybrid CMOS/memristor neuromorphic computing systems have proliferated rapidly in the recent past. However, most of these systems offer limited plasticity, target either spatial or temporal input streams, and have not been demonstrated on large-scale heterogeneous tasks. There is a critical knowledge gap in designing scalable neuromorphic systems that can support hybrid plasticity for spatio-temporal input streams on edge devices.
This research proposes Pyragrid, a low-latency, energy-efficient neuromorphic computing system for processing spatio-temporal information natively on the edge. Pyragrid is a full-scale custom hybrid CMOS/memristor architecture with analog computational modules and an underlying digital communication scheme. Pyragrid is designed for hierarchical temporal memory, a biomimetic sequence memory algorithm inspired by the neocortex. It features a novel synthetic synapses representation that enables dynamic synaptic pathways with reduced memory usage and interconnects. The dynamic growth of synaptic pathways is emulated in the physical behavior of the memristor devices, while synaptic modulation is enabled through a custom training scheme optimized for area and power.
Pyragrid features data reuse, in-memory computing, and event-driven sparse local computing to reduce data movement by ~44x and improve system throughput and power efficiency by ~3x and ~161x, respectively, over a custom digital CMOS design. The innate sparsity in Pyragrid results in overall robustness to noise and device failure, particularly when processing visual input and predicting time-series sequences. Porting the proposed system onto edge devices can enhance their computational capability, response time, and battery life.
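For readers unfamiliar with hierarchical temporal memory, the toy sketch below
illustrates the underlying idea of sequence prediction over sparse distributed
representations. It is a drastically simplified first-order model whose every
structure is an assumption for illustration; it is not Pyragrid's algorithm or
its synthetic synapses representation.

    import numpy as np

    N, ACTIVE = 256, 8           # SDR width and number of active bits

    def sdr(seed):
        """Deterministic sparse distributed representation for one symbol."""
        rng = np.random.default_rng(seed)
        v = np.zeros(N, dtype=bool)
        v[rng.choice(N, ACTIVE, replace=False)] = True
        return v

    # Hebbian-style transitions: strengthen synapses from bits active at
    # time t to bits active at time t + 1.
    W = np.zeros((N, N))
    sequence = [sdr(s) for s in (1, 2, 3)]
    for prev, nxt in zip(sequence, sequence[1:]):
        W[np.ix_(prev, nxt)] += 1.0

    def predict(active):
        """Predict the next SDR as the columns most driven by active bits."""
        drive = W[active].sum(axis=0)
        return set(np.argsort(drive)[-ACTIVE:])

    # The stored successor of the first element is recalled (prints True).
    print(predict(sequence[0]) == set(np.flatnonzero(sequence[1])))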
Neuro-memristive Circuits for Edge Computing: A review
The volume, veracity, variability, and velocity of data produced by the
ever-increasing network of sensors connected to the Internet pose challenges
for the power management, scalability, and sustainability of cloud computing
infrastructure. Increasing the data processing capability of edge computing
devices at lower power requirements can reduce several overheads for cloud
computing solutions. This paper provides a review of neuromorphic
CMOS-memristive architectures that can be integrated into edge computing
devices. We discuss why neuromorphic architectures are useful for edge
devices and show the advantages, drawbacks, and open problems in the field
of neuro-memristive circuits for edge computing.
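The core advantage such architectures exploit can be stated in a few lines: a
memristive crossbar computes a vector-matrix product in a single analog step,
with Ohm's law doing the multiplications and Kirchhoff's current law doing the
additions. The sketch below models an idealized crossbar; the conductance
range, the differential-column mapping of signed weights, and all values are
illustrative assumptions.

    import numpy as np

    def crossbar_vmm(voltages, conductances):
        """Ideal crossbar: column currents I = G^T V."""
        return conductances.T @ voltages

    # Map signed weights onto a bounded conductance range (e.g. 1-100 uS)
    # using one positive and one negative column per output.
    weights = np.array([[0.2, -0.5], [0.8, 0.1], [-0.3, 0.9]])
    g_min, g_max = 1e-6, 100e-6
    g_pos = g_min + np.clip(weights, 0, None) * (g_max - g_min)
    g_neg = g_min + np.clip(-weights, 0, None) * (g_max - g_min)

    v = np.array([0.1, 0.2, 0.05])   # input vector applied as read voltages
    currents = crossbar_vmm(v, g_pos) - crossbar_vmm(v, g_neg)
    print(currents)                  # proportional to weights.T @ v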
Forward Table-Based Presynaptic Event-Triggered Spike-Timing-Dependent Plasticity
Spike-timing-dependent plasticity (STDP) incurs both causal and acausal
synaptic weight updates, for negative and positive time differences,
respectively, between pre-synaptic and post-synaptic spike events. To realize
such updates in
neuromorphic hardware, current implementations either require forward and
reverse lookup access to the synaptic connectivity table, or rely on
memory-intensive architectures such as crossbar arrays. We present a novel
method for realizing both causal and acausal weight updates using only forward
lookup access of the synaptic connectivity table, permitting memory-efficient
implementation. A simplified implementation in FPGA, using a single timer
variable for each neuron, closely approximates exact STDP cumulative weight
updates for neuron refractory periods greater than 10 ms, and reduces to exact
STDP for refractory periods greater than the STDP time window. Compared to a
conventional crossbar implementation, the forward table-based implementation
leads to substantial memory savings for sparsely connected networks,
supporting scalable neuromorphic systems with fully reconfigurable synaptic
connectivity and plasticity.
Comment: Submitted to BioCAS 201
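A simplified software model of the presynaptic event-triggered scheme may
help: on each presynaptic spike, a single forward lookup of the connectivity
table, combined with one last-spike timer per neuron, suffices to apply both
the causal and the acausal update. The STDP constants, table layout, and
function names below are illustrative assumptions; the paper's implementation
is an FPGA design.

    import math

    A_PLUS, A_MINUS, TAU = 0.01, 0.012, 20.0   # STDP parameters (ms)

    fanout = {0: [1, 2]}                       # forward table: pre -> posts
    weights = {(0, 1): 0.5, (0, 2): 0.5}
    last_spike = {0: -math.inf, 1: -math.inf, 2: -math.inf}

    def on_pre_spike(pre, t):
        """One forward lookup updates every outgoing synapse of `pre`."""
        t_prev_pre = last_spike[pre]
        for post in fanout[pre]:
            t_post = last_spike[post]
            if t_prev_pre < t_post:
                # Post fired after the previous pre spike: causal potentiation.
                weights[(pre, post)] += A_PLUS * math.exp(-(t_post - t_prev_pre) / TAU)
            if t_post > -math.inf:
                # Post fired before this pre spike: acausal depression.
                weights[(pre, post)] -= A_MINUS * math.exp(-(t - t_post) / TAU)
        last_spike[pre] = t

    def on_post_spike(post, t):
        last_spike[post] = t                   # only the timer is touched

    on_pre_spike(0, 10.0)
    on_post_spike(1, 15.0)
    on_pre_spike(0, 40.0)   # applies the causal update for the 10 -> 15 pair
    print(weights)

The single-timer approximation holds when a neuron fires at most once between
two consecutive presynaptic spikes, which mirrors the refractory-period
condition stated in the abstract.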