Machine Learning-Based Self-Compensating Approximate Computing
Dedicated hardware accelerators are well suited to parallel computational
tasks, and many of their target workloads can tolerate inexact results. These
hardware accelerators are extensively used in image processing and computer
vision applications, e.g., to process the dense 3-D maps required for
self-driving cars. Such error-tolerant hardware accelerators can be designed
approximately for reduced power consumption and/or processing time. However,
since for some inputs the output errors may reach unacceptable levels, the main
challenge is to \textit{enhance the accuracy} of the results of approximate
accelerators and keep the error magnitude within an allowed range. Towards this
goal, in this paper, we propose novel machine learning-based
self-compensating approximate accelerators for energy-efficient systems. The
proposed error \textit{compensation module}, which is integrated within the
architecture of approximate hardware accelerators, efficiently reduces the
accumulated error at its output. It utilizes \textit{lightweight supervised
machine learning techniques, i.e., decision trees}, to capture the input
dependency of the error. We consider an image blending application in
multiplication mode to
demonstrate a practical application of self-compensating approximate computing.
Simulation results show that the proposed self-compensating approximate
accelerator achieves about a 9\% accuracy improvement, with negligible
overhead in the other performance measures, i.e., power, area, delay, and
energy.

Comment: This work has been accepted at the 14th Annual IEEE International
Systems Conference, to be held April 20-23 in Montreal, Quebec, Canada.
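The core idea can be illustrated with a minimal sketch: an approximate multiplier whose error is predicted from the inputs by a small decision tree and added back at the output. Everything here is an assumption for illustration only: the truncation-based `approx_mul`, the hand-built depth-2 tree in `train_tree`, and the 8-bit operand range are not the paper's actual accelerator or compensator design.

```python
def approx_mul(a, b, trunc=4):
    # Truncation-based approximate multiply: zero the low `trunc` bits
    # of each operand before multiplying. (An illustrative approximation;
    # the paper's accelerator design may differ.)
    mask = ~((1 << trunc) - 1)
    return (a & mask) * (b & mask)

def train_tree(samples, trunc=4):
    # Build a tiny regression tree by hand: a depth-2 split on whether
    # each 8-bit operand's high half is large, with the mean observed
    # error stored in each leaf. This mimics the role of a lightweight
    # supervised compensator capturing input dependency of the error.
    leaves = {}
    for a, b in samples:
        err = a * b - approx_mul(a, b, trunc)
        key = (a >= 128, b >= 128)
        leaves.setdefault(key, []).append(err)
    return {k: sum(v) // len(v) for k, v in leaves.items()}

def compensated_mul(a, b, tree, trunc=4):
    # Approximate product plus the tree-predicted error correction.
    return approx_mul(a, b, trunc) + tree.get((a >= 128, b >= 128), 0)
```

In use, the tree is trained offline on sampled input pairs and then queried at runtime alongside the approximate datapath, so the compensation adds only a shallow lookup rather than an exact multiplier.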