6 research outputs found
Cross-Layer Inexact Design for Low-Power Applications
Approximate and error-tolerant circuits are a radical new approach that trades calculation accuracy for better speed, power, area and yield. The IcySoC project revisits low-power and low-voltage VLSI design through a cross-layer, combined inexact design framework.
Hardware-Software Inexactness in Noise-aware Design of Low-Power Body Sensor Nodes
Wireless Body Sensor Nodes (WBSNs) are miniaturized and ultra-low-power devices, able to acquire and wirelessly transmit biosignals such as electrocardiograms (ECG) for extended periods of time and with little discomfort for subjects [1]. Energy efficiency is of paramount importance for WBSNs, because it allows higher wearability (by requiring a smaller battery) and/or an increased mean time between charges. In this paper, we investigate how noise-aware design choices can be made to minimize energy consumption in WBSNs. Noise is unavoidable in biosignal acquisition, due either to external factors (in the case of ECGs, muscle contractions and respiration of subjects [2]) or to the design of the front-end analog acquisition block. From this observation stems the opportunity to apply inexact strategies such as on-node lossy compression to minimize the bandwidth over the energy-hungry wireless link [3], as long as the output quality of the signal, when reconstructed on the receiver side, is not compromised by the performed compression. To maximize gains, ultra-low-power platforms must be employed to perform the above-mentioned Digital Signal Processing (DSP) techniques. To this end, we propose an under-designed (but extremely efficient) architecture that only guarantees the correctness of operations performed on the most significant data (i.e., the data most affecting the final results), while allowing sporadic errors for the less significant data.
Warp: A Hardware Platform for Efficient Multi-Modal Sensing with Adaptive Approximation
We present Warp, the first open hardware platform designed explicitly to support research in approximate computing. Warp incorporates 21 sensors, computation, and circuit-level facilities designed explicitly to enable approximate computing research, in a 3.6 cm × 3.3 cm × 0.5 cm volume. Warp uses these facilities to support a wide range of precision and accuracy versus power and performance tradeoffs.
Precision-Energy-Throughput Scaling Of Generic Matrix Multiplication and Convolution Kernels Via Linear Projections
Generic matrix multiplication (GEMM) and one-dimensional convolution/cross-correlation (CONV) kernels often constitute the bulk of the compute- and memory-intensive processing within image/audio recognition and matching systems. We propose a novel method to scale the energy and processing throughput of GEMM and CONV kernels for such error-tolerant multimedia applications by adjusting the precision of computation. Our technique applies linear projections to the input matrix or signal data during the top-level GEMM and CONV blocking and reordering. The GEMM and CONV kernel processing then uses the projected inputs, and the results are accumulated to form the final outputs. Throughput and energy scaling take place by changing the number of projections computed by each kernel, which in turn produces approximate results, i.e., changes the precision of the performed computation. Results derived from a voltage- and frequency-scaled ARM Cortex A15 processor running face recognition and music matching algorithms demonstrate that the proposed approach allows for a 280%–440% increase in processing throughput and a 75%–80% decrease in energy consumption against optimized GEMM and CONV kernels, without any impact on the obtained recognition or matching accuracy. Even higher gains can be obtained if one is willing to tolerate some reduction in the accuracy of the recognition and matching applications.
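The projection-count-as-precision-knob idea can be sketched with NumPy. Note this sketch uses a random Gaussian projection of the inner (shared) dimension as a stand-in for the paper's structured linear projections; the function name and fixed seed are illustrative assumptions:

```python
import numpy as np

def approx_gemm(A: np.ndarray, B: np.ndarray, n_proj: int) -> np.ndarray:
    """Approximate C = A @ B by projecting the shared inner dimension onto
    `n_proj` random directions and accumulating the projected partial
    products.  More projections -> higher precision, at proportionally
    higher compute cost.  Sketch only; the paper uses structured projections."""
    k = A.shape[1]
    rng = np.random.default_rng(0)                          # fixed seed for reproducibility
    P = rng.standard_normal((k, n_proj)) / np.sqrt(n_proj)  # random projection matrix
    return (A @ P) @ (P.T @ B)                              # accumulate in projected space
```

Sweeping `n_proj` trades accuracy for work: with few projections the product is cheap but noisy, and the approximation error shrinks as the projection count grows, which is the precision-energy-throughput scaling behavior the abstract describes.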
Error-efficient computing systems
This survey explores the theory and practice of techniques to make computing systems faster or more energy-efficient by allowing them to make controlled errors. In the same way that systems which only use as much energy as necessary are referred to as energy-efficient, you can think of the class of systems addressed by this survey as error-efficient: they only prevent as many errors as they need to. The definition of what constitutes an error varies across the parts of a system, and the errors which are acceptable depend on the application at hand. In computing systems, making errors when correct behavior would be too expensive can conserve resources. The resource conserved may be time: by making some errors, systems may be faster. The resource may also be energy: a system may draw less power from its batteries or from the electrical grid by avoiding only certain errors while tolerating benign errors that are associated with reduced power consumption. The resource in question may be an even more abstract quantity, such as consistency of ordering of the outputs of a system. This survey is for anyone interested in an end-to-end view of one set of techniques that address the theory and practice of making computing systems more efficient by trading errors for improved efficiency.
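A concrete software instance of trading errors for time, commonly discussed in this literature, is loop perforation: skipping a fraction of loop iterations for a roughly proportional speedup at the cost of a controlled error. A minimal sketch (not taken from the survey itself; the function name and stride parameter are illustrative):

```python
def perforated_mean(xs: list, skip: int = 2) -> float:
    """Estimate the mean of `xs` while visiting only every `skip`-th element:
    roughly `skip` times less work, in exchange for a bounded estimation
    error.  Illustrative loop-perforation sketch."""
    sample = xs[::skip]                  # perforate: skip most iterations
    return sum(sample) / len(sample)
```

Here an "error" is a deliberate deviation from the exact mean, accepted because, for many error-tolerant applications, the cheaper estimate is good enough; this is the error-efficiency trade the survey formalizes.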