NeuroBench: Advancing Neuromorphic Computing through Collaborative, Fair and Representative Benchmarking
The field of neuromorphic computing holds great promise in terms of advancing computing efficiency and capabilities by following brain-inspired principles. However, the rich diversity of techniques employed in neuromorphic research has resulted in a lack of clear standards for benchmarking, hindering effective evaluation of the advantages and strengths of neuromorphic methods compared to traditional deep-learning-based methods. This paper presents a collaborative effort, bringing together members from academia and industry, to define benchmarks for neuromorphic computing: NeuroBench. The goal of NeuroBench is to be a collaborative, fair, and representative benchmark suite developed by the community, for the community. In this paper, we discuss the challenges associated with benchmarking neuromorphic solutions and outline the key features of NeuroBench. We believe that NeuroBench will be a significant step towards defining standards that can unify the goals of neuromorphic computing and drive its technological progress. Please visit neurobench.ai for the latest updates on the benchmark tasks and metrics.
NeuroBench: A Framework for Benchmarking Neuromorphic Computing Algorithms and Systems
Neuromorphic computing shows promise for advancing computing efficiency and capabilities of AI applications using brain-inspired principles. However, the neuromorphic research field currently lacks standardized benchmarks, making it difficult to accurately measure technological advancements, compare performance with conventional methods, and identify promising future research directions. Prior neuromorphic computing benchmark efforts have not seen widespread adoption due to a lack of inclusive, actionable, and iterative benchmark design and guidelines. To address these shortcomings, we present NeuroBench: a benchmark framework for neuromorphic computing algorithms and systems. NeuroBench is a collaboratively designed effort from an open community of nearly 100 co-authors across over 50 institutions in industry and academia, aiming to provide a representative structure for standardizing the evaluation of neuromorphic approaches. The NeuroBench framework introduces a common set of tools and a systematic methodology for inclusive benchmark measurement, delivering an objective reference framework for quantifying neuromorphic approaches in both hardware-independent (algorithm track) and hardware-dependent (system track) settings. In this article, we present initial performance baselines across various model architectures on the algorithm track and outline the system track benchmark tasks and guidelines. NeuroBench is intended to continually expand its benchmarks and features to foster and track the progress made by the research community.
Energy-efficient Event-based Vision Sensors and Compute-In-Memory Architectures for Neuromorphic and Machine Learning Applications
Neuromorphic engineering pursues the design of electronic systems that emulate the function and structural organization of biological neural systems in silicon integrated circuits embodying similar physical principles. The work in this dissertation presents advances in the field of neuromorphic engineering by demonstrating the design and applications of energy-efficient event-based sensors, compute-in-memory architectures, event-based learning algorithms, and asynchronous data converters. This dissertation focuses on neuromorphic very large scale integration (VLSI) architecture and algorithm design for the implementation of sensors and processors that are highly energy-efficient, emulating brain function through event-based sensory processing. In particular, three novel contributions are presented that work towards the goal of integrated visual cortical processing on silicon hardware. First, a novel hybrid approach to vision sensing, called query-driven dynamic vision, is presented that achieves the best energy efficiency reported to date; various applications enabled by such sensors are then shown, with improved performance compared to conventional sensors. Second, an integrated compute-in-memory (CIM) architecture is presented that combines an emerging device called resistive random-access memory (ReRAM) with complementary metal oxide semiconductor (CMOS) technology. This design achieves the highest versatility in terms of reconfigurable dataflow and multiple modes of neuron activation using a single topology, along with the best energy efficiency reported to date for CMOS-RRAM CIM architectures.
Third, a learning rule called the inverted synaptic time dependent plasticity (iSTDP) rule is presented that can learn temporal patterns using only spike event timing information. Combining the above three works, it is possible to realize a preliminary form of biological vision on hardware, where the artificial silicon retina (qDVS) provides the event-based visual stimulus to primary visual cortex layers implemented on a CIM architecture using convolutional neural networks (CNNs), which can deploy event-based learning algorithms for temporal pattern recognition.
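To make the spike-timing idea concrete, the sketch below shows a minimal pair-based STDP weight update driven purely by pre/post spike times. Note this is the textbook STDP form, not the dissertation's iSTDP variant, and the amplitudes, time constant, and spike times are illustrative assumptions only.

```python
import numpy as np

# Illustrative pair-based STDP update. The dissertation's iSTDP rule is a
# variant in this family; the constants below are assumed for illustration.
A_plus, A_minus = 0.01, 0.012   # potentiation / depression amplitudes
tau = 20.0                      # plasticity time constant (ms)

def stdp_dw(t_pre, t_post):
    """Weight change from one pre/post spike pair, using timing alone."""
    dt = t_post - t_pre
    if dt > 0:                  # pre fires before post -> potentiate
        return A_plus * np.exp(-dt / tau)
    else:                       # post fires before pre -> depress
        return -A_minus * np.exp(dt / tau)

w = 0.5                         # initial synaptic weight
for t_pre, t_post in [(10.0, 15.0), (30.0, 28.0), (50.0, 52.0)]:
    w += stdp_dw(t_pre, t_post)
print(f"weight after three spike pairs: {w:.4f}")
```

The key property, shared by the event-based rules the abstract describes, is that the update needs only spike event times, never a dense activation trace.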
Error correction algorithm for high accuracy bio-impedance measurement in wearable healthcare applications
Implantable and ambulatory measurement of physiological signals such as bio-impedance using miniature biomedical devices needs a careful tradeoff between limited power budget, measurement accuracy, and complexity of implementation. This paper addresses this tradeoff through an extensive analysis of different stimulation and demodulation techniques for accurate bio-impedance measurement. Three cases are considered for rigorous analysis of a generic impedance model, with multiple poles, which is stimulated using a square/sinusoidal current and demodulated using a square/sinusoidal clock. For each case, the error in determining pole parameters (resistance and capacitance) is derived and compared. An error correction algorithm is proposed for square wave demodulation which reduces the peak estimation error from 9.3% to 1.3% for a simple tissue model. Simulation results in Matlab using ideal RC values show an average accuracy of for single pole and for two pole RC networks. Measurements using ideal components for a single pole model give an overall and readings from saline phantom solution (primarily resistive) give an . A figure of merit is derived based on the ability to accurately resolve multiple poles in an unknown impedance with minimal measurement points per decade, for a given frequency range and supply current budget. This analysis is used to arrive at an optimal tradeoff between accuracy and power. Results indicate that the algorithm is generic and can be used for any application that involves resolving the poles of an unknown impedance. It can be implemented as a post-processing technique for error correction or even incorporated into wearable signal monitoring ICs.
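The square-wave harmonic error that motivates such a correction can be illustrated numerically. The sketch below is a toy model, not the paper's algorithm: the single-pole tissue model, component values, and stimulation frequency are all assumptions. It stimulates the model with a square-wave current (built from its odd Fourier harmonics, each seeing the impedance at its own frequency) and demodulates with a square-wave clock, showing how the harmonics bias the in-phase impedance estimate.

```python
import numpy as np

# Hypothetical single-pole tissue model: R0 in series with (R1 || C1).
R0, R1, C1 = 100.0, 500.0, 100e-9

def Z(w):
    """Complex impedance of the assumed single-pole model."""
    return R0 + R1 / (1 + 1j * w * R1 * C1)

f0 = 10e3                        # stimulation frequency (Hz), assumed
w0 = 2 * np.pi * f0
t = np.linspace(0, 1 / f0, 10000, endpoint=False)

# Voltage response to a unit square-wave current: sum the odd harmonics,
# each scaled and phase-shifted by the impedance at that harmonic.
v = np.zeros_like(t)
for k in range(1, 200, 2):
    Ik = 4 / (k * np.pi)         # harmonic amplitude of a unit square wave
    Zk = Z(k * w0)
    v += Ik * np.abs(Zk) * np.sin(k * w0 * t + np.angle(Zk))

# In-phase demodulation with a square-wave clock. For a purely resistive
# load this recovers R exactly; the capacitive roll-off at the higher
# harmonics biases the estimate away from the true Re{Z(w0)}.
ref = np.sign(np.sin(w0 * t))
R_est = np.mean(v * ref)
R_true = Z(w0).real
print(f"true Re(Z) = {R_true:.1f} ohm, square-demod estimate = {R_est:.1f} ohm")
print(f"error = {100 * (R_est - R_true) / R_true:.1f} %")
```

A post-processing correction of the kind the paper proposes would estimate and subtract this harmonic contribution; the sketch only exhibits the uncorrected error.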
A Fully Integrated 144 MHz Wireless-Power-Receiver-on-Chip with an Adaptive Buck-Boost Regulating Rectifier and Low-Loss H-Tree Signal Distribution
An adaptive buck-boost resonant regulating rectifier (B²R³) with an integrated on-chip coil and low-loss H-Tree power/signal distribution is presented for efficient and robust wireless power transfer (WPT) over a wide range of input and load conditions. The B²R³, integrated on a 9 mm² chip, powers integrated neural interfacing circuits as a load, with a TX-load power conversion efficiency of 2.64% at 10 mm distance, resulting in a WPT system efficiency FoM of 102.
A compute-in-memory chip based on resistive random-access memory.
Realizing increasingly complex artificial intelligence (AI) functionalities directly on edge devices calls for unprecedented energy efficiency of edge hardware. Compute-in-memory (CIM) based on resistive random-access memory (RRAM) [1] promises to meet such demand by storing AI model weights in dense, analogue and non-volatile RRAM devices, and by performing AI computation directly within RRAM, thus eliminating power-hungry data movement between separate compute and memory [2-5]. Although recent studies have demonstrated in-memory matrix-vector multiplication on fully integrated RRAM-CIM hardware [6-17], it remains a goal for a RRAM-CIM chip to simultaneously deliver high energy efficiency, versatility to support diverse models and software-comparable accuracy. Although efficiency, versatility and accuracy are all indispensable for broad adoption of the technology, the inter-related trade-offs among them cannot be addressed by isolated improvements on any single abstraction level of the design. Here, by co-optimizing across all hierarchies of the design from algorithms and architecture to circuits and devices, we present NeuRRAM, a RRAM-based CIM chip that simultaneously delivers versatility in reconfiguring CIM cores for diverse model architectures, energy efficiency that is two times better than previous state-of-the-art RRAM-CIM chips across various computational bit-precisions, and inference accuracy comparable to software models quantized to four-bit weights across various AI tasks, including accuracy of 99.0 percent on MNIST [18] and 85.7 percent on CIFAR-10 [19] image classification, 84.7 percent accuracy on Google speech command recognition [20], and a 70-percent reduction in image-reconstruction error on a Bayesian image-recovery task.
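The notion of "software models quantized to four-bit weights" can be made concrete with a small numerical sketch. This is a generic symmetric uniform quantizer applied to random data, not the chip's actual weight mapping; the matrix sizes and seed are arbitrary assumptions. It performs the same matrix-vector multiplication (MVM) with full-precision and with 4-bit integer weights and reports the relative error.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize_weights(W, bits=4):
    """Symmetric uniform quantization to signed `bits`-bit integers,
    loosely mimicking how a CIM array stores low-precision weights."""
    qmax = 2 ** (bits - 1) - 1           # 7 for 4-bit weights
    scale = np.abs(W).max() / qmax       # one scale for the whole matrix
    Wq = np.clip(np.round(W / scale), -qmax - 1, qmax)
    return Wq, scale

W = rng.standard_normal((64, 128))       # arbitrary layer weights
x = rng.standard_normal(128)             # arbitrary input activation

Wq, scale = quantize_weights(W, bits=4)
y_full = W @ x                           # full-precision MVM
y_cim = (Wq @ x) * scale                 # integer MVM, rescaled afterwards

rel_err = np.linalg.norm(y_cim - y_full) / np.linalg.norm(y_full)
print(f"relative MVM error with 4-bit weights: {rel_err:.3f}")
```

In a trained network this per-layer error is partly absorbed by quantization-aware training, which is how hardware with analogue low-precision storage can approach software-comparable task accuracy.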