2,987 research outputs found
An energy efficient rate selection algorithm for voltage quantized dynamic voltage scaling
©2001 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.
The paper presents a highly energy-efficient alternative to the conventional workload-averaging technique for voltage-quantized dynamic voltage scaling. The algorithm combines the strengths of the conventional workload-averaging technique and our previously proposed Rate Selection Algorithm, yielding higher energy savings while minimizing the buffer size requirement and improving overall system stability by reducing the number of voltage transitions. Our experimental work, using the Forward Mapped Inverse Discrete Cosine Transform (FMIDCT) computation as the variable-workload computation, nine 300-frame MPEG-2 video sequences as test data, and a 4-level voltage quantization, shows that our algorithm produces better energy savings than the workload-averaging technique in all test cases, with a maximum energy saving of 23%.
Lama H. Chandrasena, Priyadarshana Chandrasena, Michael J. Liebel
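To make the general mechanism concrete, the following is a minimal sketch of voltage-quantized DVS level selection: pick the lowest discrete voltage/frequency pair whose predicted execution time still meets the deadline. The 4-level table, the moving-average workload predictor, and the per-frame deadline are illustrative assumptions; the paper's Rate Selection Algorithm itself is not reproduced here.

```python
# Illustrative voltage-quantized DVS: choose the lowest discrete
# voltage/frequency level whose predicted execution time meets the deadline.
# The level table and the simple moving-average predictor are placeholders.

from collections import deque

# (voltage in V, clock frequency in MHz) -- hypothetical 4-level quantization
LEVELS = [(0.9, 100), (1.1, 150), (1.3, 200), (1.5, 250)]

class QuantizedDvsController:
    def __init__(self, history_len=8):
        self.history = deque(maxlen=history_len)

    def predict_cycles(self):
        """Naive workload predictor: average of recent per-frame workloads."""
        if not self.history:
            return 0.0
        return sum(self.history) / len(self.history)

    def select_level(self, deadline_ms):
        """Return the lowest (voltage, frequency) level that meets the deadline."""
        cycles = self.predict_cycles()
        for voltage, freq_mhz in LEVELS:              # sorted from lowest energy
            exec_time_ms = cycles / (freq_mhz * 1e3)  # freq_mhz*1e3 cycles per ms
            if exec_time_ms <= deadline_ms:
                return voltage, freq_mhz
        return LEVELS[-1]                             # fall back to the top level

    def record(self, actual_cycles):
        self.history.append(actual_cycles)

# Example: frame-processing loop with a 33 ms per-frame deadline
ctrl = QuantizedDvsController()
for frame_cycles in [2.1e6, 2.5e6, 1.8e6, 3.0e6]:
    v, f = ctrl.select_level(deadline_ms=33.0)
    ctrl.record(frame_cycles)
    print(f"run at {v:.1f} V / {f} MHz")
```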
MATIC: Learning Around Errors for Efficient Low-Voltage Neural Network Accelerators
As a result of the increasing demand for deep neural network (DNN)-based
services, efforts to develop dedicated hardware accelerators for DNNs are
growing rapidly. However, while accelerators with high performance and
efficiency on convolutional deep neural networks (Conv-DNNs) have been
developed, less progress has been made with regards to fully-connected DNNs
(FC-DNNs). In this paper, we propose MATIC (Memory Adaptive Training with
In-situ Canaries), a methodology that enables aggressive voltage scaling of
accelerator weight memories to improve the energy-efficiency of DNN
accelerators. To enable accurate operation with voltage overscaling, MATIC
combines the characteristics of destructive SRAM reads with the error
resilience of neural networks in a memory-adaptive training process.
Furthermore, PVT-related voltage margins are eliminated using bit-cells from
synaptic weights as in-situ canaries to track runtime environmental variation.
Demonstrated on a low-power DNN accelerator fabricated in 65 nm CMOS, MATIC enables up to 60-80 mV of voltage overscaling (a 3.3x total energy reduction versus the nominal voltage), or an 18.6x reduction in application error.
Comment: 6 pages, 12 figures, 3 tables. Published at Design, Automation and Test in Europe Conference and Exhibition (DATE) 201
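As a rough illustration of the memory-adaptive training ingredient, the sketch below corrupts quantized weights with random bit flips (standing in for reads from voltage-overscaled SRAM) before they are used in a training step. The 8-bit weight format and the bit-error rate are assumptions for illustration, not the fabricated chip's measured error model.

```python
# Sketch of the "learn around errors" idea: during training, weights read
# from (simulated) voltage-overscaled SRAM come back with random bit flips,
# so the network is exposed to the error pattern it will see at runtime.

import numpy as np

def flip_bits(q_weights: np.ndarray, bit_error_rate: float,
              rng: np.random.Generator) -> np.ndarray:
    """Flip each bit of the int8 weight array independently with prob. BER."""
    w = q_weights.view(np.uint8).copy()
    for bit in range(8):
        mask = rng.random(w.shape) < bit_error_rate
        w[mask] ^= np.uint8(1 << bit)
    return w.view(np.int8)

rng = np.random.default_rng(0)
weights = rng.integers(-128, 128, size=(256, 256), dtype=np.int8)

# A "memory-adaptive" forward pass would use the corrupted copy:
noisy = flip_bits(weights, bit_error_rate=1e-3, rng=rng)
print("fraction of weights changed:", np.mean(noisy != weights))
```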
Energy-Efficient HOG-based Object Detection at 1080HD 60 fps with Multi-Scale Support
In this paper, we present a real-time and energy-efficient multi-scale object detector using Histogram of Oriented Gradients (HOG) features and Support Vector Machine (SVM) classification. Parallel detectors with balanced workloads are used to process multiple scales and increase throughput so that voltage scaling can be applied to reduce energy consumption. Image pre-processing is also introduced to further reduce the power and area cost of image scale generation. The design operates on 1080HD video at 60 fps in real time with a clock rate of 270 MHz, and consumes 45.3 mW (0.36 nJ/pixel) based on post-layout simulations. The ASIC has an area of 490 kgates and 0.538 Mbit of on-chip memory in a 45 nm SOI CMOS process.
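For readers unfamiliar with the software baseline, below is a toy analogue of multi-scale HOG plus linear-SVM detection; the hardware design processes the scales in parallel, whereas here the per-scale work is simply mapped over a small image pyramid. The window size, scale factors, and randomly initialised SVM weights are placeholders, not the parameters of the ASIC.

```python
# Toy software analogue of multi-scale HOG + linear-SVM sliding-window detection.

import numpy as np
from skimage.feature import hog
from skimage.transform import rescale

WINDOW = (128, 64)                      # detection window (rows, cols), assumed
SCALES = [1.0, 0.75, 0.5]               # small illustrative pyramid

def detect_at_scale(image, scale, svm_w, svm_b, step=16, thresh=0.0):
    img = rescale(image, scale, anti_aliasing=True)
    hits = []
    rows, cols = img.shape
    for r in range(0, rows - WINDOW[0] + 1, step):
        for c in range(0, cols - WINDOW[1] + 1, step):
            feat = hog(img[r:r + WINDOW[0], c:c + WINDOW[1]],
                       orientations=9, pixels_per_cell=(8, 8),
                       cells_per_block=(2, 2))
            score = feat @ svm_w + svm_b
            if score > thresh:
                hits.append((r / scale, c / scale, score))  # map back to full res
    return hits

rng = np.random.default_rng(0)
frame = rng.random((270, 480))          # stand-in for one luminance frame
feat_dim = hog(np.zeros(WINDOW), orientations=9,
               pixels_per_cell=(8, 8), cells_per_block=(2, 2)).shape[0]
svm_w, svm_b = rng.normal(size=feat_dim) * 0.01, -1.0
detections = [d for s in SCALES for d in detect_at_scale(frame, s, svm_w, svm_b)]
print(len(detections), "windows above threshold")
```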
EnforceSNN: Enabling Resilient and Energy-Efficient Spiking Neural Network Inference considering Approximate DRAMs for Embedded Systems
Spiking Neural Networks (SNNs) have shown capabilities of achieving high
accuracy under unsupervised settings and low operational power/energy due to
their bio-plausible computations. Previous studies identified that DRAM-based
off-chip memory accesses dominate the energy consumption of SNN processing.
However, state-of-the-art works do not optimize the DRAM energy-per-access,
thereby hindering the SNN-based systems from achieving further energy
efficiency gains. To substantially reduce the DRAM energy-per-access, an
effective solution is to decrease the DRAM supply voltage, but it may lead to
errors in DRAM cells (i.e., so-called approximate DRAM). Towards this, we
propose EnforceSNN, a novel design framework that provides a solution
for resilient and energy-efficient SNN inference using reduced-voltage DRAM for
embedded systems. The key mechanisms of our EnforceSNN are: (1) employing
quantized weights to reduce the DRAM access energy; (2) devising an efficient
DRAM mapping policy to minimize the DRAM energy-per-access; (3) analyzing the
SNN error tolerance to understand its accuracy profile considering different
bit error rate (BER) values; (4) leveraging the information for developing an
efficient fault-aware training (FAT) that considers different BER values and
bit error locations in DRAM to improve the SNN error tolerance; and (5)
developing an algorithm to select the SNN model that offers good trade-offs
among accuracy, memory, and energy consumption. The experimental results show that our EnforceSNN maintains accuracy (i.e., no accuracy loss for BER ≤ 10^-3) compared to the baseline SNN with accurate DRAM, while achieving up to 84.9% DRAM energy savings and up to 4.1x speed-up of DRAM data throughput across different network sizes.
Comment: Accepted for publication at Frontiers in Neuroscience - Section Neuromorphic Engineering
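The fault-aware training (FAT) idea can be sketched as a loop that, on each step, samples a bit-error rate from the values the deployment DRAM might exhibit, corrupts the quantized weights accordingly, and computes the update with the corrupted copy. In the sketch below, a tiny logistic-regression model stands in for the SNN purely to keep the example runnable; the BER set and the 8-bit weight format are assumptions.

```python
# Sketch of a fault-aware training loop with injected DRAM bit errors.

import numpy as np

rng = np.random.default_rng(0)
BERS = [1e-5, 1e-4, 1e-3]            # candidate DRAM bit-error rates (assumed)

def quantize(w, scale=64.0):
    return np.clip(np.round(w * scale), -128, 127).astype(np.int8), scale

def inject_bit_errors(qw, ber):
    noisy = qw.view(np.uint8).copy()
    for bit in range(8):
        flips = rng.random(noisy.shape) < ber
        noisy[flips] ^= np.uint8(1 << bit)
    return noisy.view(np.int8)

# toy data and stand-in model (not an SNN)
X = rng.normal(size=(512, 32))
y = (X[:, 0] > 0).astype(float)
w = np.zeros(32)

for step in range(200):
    qw, scale = quantize(w)
    noisy_w = inject_bit_errors(qw, ber=rng.choice(BERS)).astype(float) / scale
    p = 1.0 / (1.0 + np.exp(-X @ noisy_w))      # forward with faulty weights
    grad = X.T @ (p - y) / len(y)               # update the clean shadow copy
    w -= 0.5 * grad

print("train accuracy with faulty weights:", np.mean((p > 0.5) == (y > 0.5)))
```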
Energy Efficient Learning with Low Resolution Stochastic Domain Wall Synapse Based Deep Neural Networks
We demonstrate that extremely low resolution quantized (nominally 5-state)
synapses with large stochastic variations in Domain Wall (DW) position can be
both energy efficient and achieve reasonably high testing accuracies compared
to Deep Neural Networks (DNNs) of similar size that use floating-point synaptic weights. Specifically, voltage-controlled DW devices demonstrate
stochastic behavior as modeled rigorously with micromagnetic simulations and
can encode only a limited number of states; however, they can be extremely energy efficient
during both training and inference. We show that by implementing suitable
modifications to the learning algorithms, we can address the stochastic
behavior as well as mitigate the effect of their low resolution to achieve high
testing accuracies. In this study, we propose both in-situ and ex-situ training
algorithms, based on modification of the algorithm proposed by Hubara et al.
[1] which works well with quantization of synaptic weights. We train several
5-layer DNNs on the MNIST dataset using 2-, 3-, and 5-state DW devices as synapses.
For in-situ training, a separate high precision memory unit is adopted to
preserve and accumulate the weight gradients, which are then quantized to
program the low precision DW devices. Moreover, a sizeable noise tolerance
margin is used during the training to address the intrinsic programming noise.
For ex-situ training, a precursor DNN is first trained based on the
characterized DW device model and a noise tolerance margin, which is similar to
the in-situ training. Remarkably, for in-situ inference, the energy dissipation to program the devices is only 13 pJ per inference, given that the training is performed over the entire MNIST dataset for 10 epochs.
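A minimal sketch of the in-situ training scheme described above: full-precision shadow weights accumulate the gradients, while the forward pass uses a 5-state quantized, noisy device copy that is reprogrammed only when the target state moves beyond a tolerance margin. The state levels, noise magnitude, and margin below are illustrative, not the characterized device model from the paper.

```python
# Shadow-weight training with low-resolution, noisy "device" synapses.

import numpy as np

rng = np.random.default_rng(0)
STATES = np.linspace(-1.0, 1.0, 5)        # nominal 5-state DW conductances (assumed)
NOISE_STD = 0.05                          # programming noise (assumed)
MARGIN = 0.15                             # noise tolerance margin (assumed)

def program(shadow, device):
    """Quantize shadow weights to the nearest state; reprogram a device only
    when the target state moves by more than the tolerance margin."""
    target = STATES[np.abs(shadow[..., None] - STATES).argmin(axis=-1)]
    update = np.abs(target - device) > MARGIN
    return np.where(update,
                    target + rng.normal(0.0, NOISE_STD, target.shape),
                    device)

# toy single-layer training loop (stand-in for the 5-layer MNIST DNN)
X = rng.normal(size=(256, 16))
y = (X.sum(axis=1) > 0).astype(float)
shadow = rng.normal(0.0, 0.1, size=16)    # high-precision memory unit
device = program(shadow, np.zeros(16))    # low-precision DW devices

for _ in range(100):
    p = 1.0 / (1.0 + np.exp(-X @ device))   # forward pass uses device weights
    shadow -= 0.1 * X.T @ (p - y) / len(y)  # gradients accumulate in shadow copy
    device = program(shadow, device)

print("accuracy with 5-state noisy synapses:", np.mean((p > 0.5) == (y > 0.5)))
```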
Efficient and Robust Neuromorphic Computing Design
In recent years, brain-inspired neuromorphic computing systems (NCS) have been intensively studied at both the circuit and architecture levels. NCS has demonstrated remarkable advantages in energy efficiency, extremely compact area, and parallel data processing. However, due to limited hardware resources, severe IR-drop and process-variation problems in synapse crossbars, and limited synapse device resolution, it is still a great challenge for hardware NCS design to keep pace with the fast development of software deep neural networks (DNNs). This dissertation explores model compression and acceleration methods for deep neural networks to save both memory and computation resources in the hardware implementation of DNNs. First, a weight quantization approach is presented that uses three orthogonal methods to learn synapses with one-level precision, namely distribution-aware quantization, quantization regularization, and bias tuning, making image classification accuracy comparable to the state-of-the-art. Then a two-step framework named Group Scissor, comprising rank clipping and group connection deletion, is presented to address the problems of large synapse crossbar consumption and high routing congestion between crossbars.
Results show that after applying the weight quantization methods, the accuracy drop can be kept at a negligible level for the MNIST and CIFAR-10 datasets, compared to an ideal system without quantization. With the Group Scissor framework, crossbar area and routing area could be reduced to 8% (at most) of the original size, indicating that the hardware implementation area is greatly reduced. Furthermore, system scalability is improved significantly.
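As a rough illustration of the rank-clipping component of Group Scissor, the sketch below factors a fully-connected layer with a truncated SVD so that it maps onto two smaller crossbars instead of one large one. The layer size and retained rank are arbitrary, and the dissertation learns the clipped rank during training rather than applying the one-shot decomposition shown here.

```python
# Truncated-SVD factorization of a layer into two smaller crossbars.

import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(512, 256))            # original layer: one 512x256 crossbar

U, s, Vt = np.linalg.svd(W, full_matrices=False)
rank = 32                                  # clipped rank (assumed)
A = U[:, :rank] * s[:rank]                 # 512 x 32 crossbar
B = Vt[:rank, :]                           # 32 x 256 crossbar

x = rng.normal(size=256)
full = W @ x
clipped = A @ (B @ x)                      # two small matrix-vector products

orig_cells = W.size
clipped_cells = A.size + B.size
print(f"crossbar cells: {orig_cells} -> {clipped_cells} "
      f"({100 * clipped_cells / orig_cells:.0f}% of original)")
print("relative output error:", np.linalg.norm(full - clipped) / np.linalg.norm(full))
```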
- ā¦