    Approximate Computing Survey, Part I: Terminology and Software & Hardware Approximation Techniques

    The rapid growth of demanding applications in domains such as multimedia processing and machine learning has marked a new era for edge and cloud computing. These applications involve massive data and compute-intensive tasks, and thus typical computing paradigms in embedded systems and data centers are stressed to meet the worldwide demand for high performance. Concurrently, the semiconductor landscape of the last 15 years has established power as a first-class design concern. As a result, the computing systems community has been forced to find alternative design approaches that deliver high-performance and/or power-efficient computing. Among the examined solutions, Approximate Computing has attracted ever-increasing interest, with research works applying approximations across the entire traditional computing stack, i.e., at the software, hardware, and architectural levels. Over the last decade, a plethora of approximation techniques has emerged in software (programs, frameworks, compilers, runtimes, languages), hardware (circuits, accelerators), and architectures (processors, memories). The current article is Part I of our comprehensive survey on Approximate Computing: it reviews its motivation, terminology, and principles, and it classifies and presents the technical details of state-of-the-art software and hardware approximation techniques.
    Comment: Under review at ACM Computing Surveys
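
    As a concrete illustration of the software side of this design space, the sketch below shows loop perforation, a classic compiler/runtime approximation technique of the kind such surveys classify. The function and the stride parameter are illustrative choices, not drawn from the survey itself.

```c
#include <stdio.h>

/* Loop perforation: execute only every stride-th iteration and estimate
   the result from the sampled subset, trading output quality for a
   roughly proportional reduction in compute. */
double mean_perforated(const double *x, int n, int stride) {
    double sum = 0.0;
    int taken = 0;
    for (int i = 0; i < n; i += stride) {   /* run ~1/stride of iterations */
        sum += x[i];
        taken++;
    }
    return taken ? sum / taken : 0.0;       /* estimate from the sample */
}

int main(void) {
    double x[1000];
    for (int i = 0; i < 1000; i++) x[i] = (double)(i % 17);
    printf("exact=%f approx=%f\n",
           mean_perforated(x, 1000, 1),     /* stride 1: exact mean */
           mean_perforated(x, 1000, 4));    /* stride 4: ~4x less work */
    return 0;
}
```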

    Designing Approximate Computing Circuits with Scalable and Systematic Data-Driven Techniques

    Semiconductor feature size has shrunk significantly over the past decades. This decreasing feature size leads to faster processing speeds as well as lower area and power consumption. Among these attributes, power consumption has emerged as the primary concern in integrated circuit design in recent years, owing to the rapidly growing demand for energy-efficient Internet of Things (IoT) devices. As a result, low-power design approaches for digital circuits have attracted great interest. To this end, approximate computing has emerged as a promising hardware design technique: it creates opportunities to improve timing and energy efficiency by relaxing computing quality. The technique is feasible because many emerging, resource-hungry computational applications, such as multimedia processing and machine learning, are error-resilient, so it is reasonable to trade an acceptable amount of computing quality for energy savings. In the literature, most prior work on approximate circuit design focuses on manually redesigning fundamental computational blocks such as adders and multipliers. Manual design techniques, however, do not scale to system-level hardware because of its much higher design complexity. To tackle this challenge, we focus on scalable, systematic, and general design methodologies that are applicable to arbitrary circuits. In this paper, we present two novel approximate circuit design methods based on machine learning techniques. Both methods skip the complicated manual analysis steps and derive approximate circuits primarily from the given input-error patterns. Our first work presents a feature-selection-based framework for designing the compensation block, an essential component of many approximate circuits. Our second work extends and optimizes this framework and integrates data-driven considerations into the design. Several case studies on fixed-width multipliers and other approximate circuits demonstrate the effectiveness of the proposed design methods. The experimental results show that both methods can automatically and efficiently design low-error approximate circuits.
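
    To make the fixed-width setting concrete, here is a minimal C sketch of an 8x8 multiplier truncated to 8 output bits with a compensation term. The fixed carry estimate is a hand-picked placeholder; the data-driven methods described above would instead learn which input bits (features) best predict the discarded carries.

```c
#include <stdint.h>
#include <stdio.h>

/* Fixed-width multiplier sketch: the exact 8x8 product is 16 bits, but a
   fixed-width design keeps only the upper 8. Only partial-product bits
   that land in the upper half are summed; the carries that would arrive
   from the discarded lower half are replaced by a compensation estimate. */
uint8_t fixed_width_mul8(uint8_t a, uint8_t b) {
    uint16_t upper = 0;
    for (int i = 0; i < 8; i++) {
        if ((b >> i) & 1)
            upper += ((uint16_t)a << i) >> 8;  /* keep upper-half bits only */
    }
    upper += 2;  /* compensation block: fixed carry estimate (illustrative) */
    return (uint8_t)(upper > 255 ? 255 : upper);
}

int main(void) {
    uint8_t a = 200, b = 180;
    printf("exact=%u approx=%u\n",
           (unsigned)((a * b) >> 8), (unsigned)fixed_width_mul8(a, b));
    return 0;
}
```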

    Power and area efficient cascaded effectless GDI approximate adder for accelerating multimedia applications using deep learning model

    Approximate computing is an emerging technique that accelerates processing with less computational effort while keeping admissible accuracy in error-tolerant applications such as multimedia and deep learning. The inherent error resilience of the deep learning process lets the designer simplify the circuitry and increase computation speed at the cost of result accuracy. The high computational complexity and low-power requirements of portable devices in the dark silicon era call for suitable alternatives to Complementary Metal Oxide Semiconductor (CMOS) technology. Gate Diffusion Input (GDI) logic is one of the promising alternatives to CMOS logic for reducing transistor count and enabling low-power design. In this work, a novel energy- and area-efficient 1-bit GDI-based full-swing Energy and Area efficient Full Adder (EAFA) with minimum error distance is proposed. The proposed architecture mitigates the cascading-effect problem in GDI-based circuits, which is demonstrated by extending the proposed 1-bit GDI-based adder into different 16-bit Energy and Area Efficient High-Speed Error-Tolerant Adders (EAHSETA), segmented into accurate and inaccurate adder circuits. The proposed adder's design metrics, in terms of delay, area, and power dissipation, are verified through simulation using the Cadence tool. As a case study, the proposed logic is deployed to accelerate the convolution process in the Low-Weight Digit Detector neural network for real-time handwritten digit classification on an Intel Cyclone IV Field Programmable Gate Array (FPGA). The results confirm that the proposed EAHSETA occupies fewer logic elements and improves operation speed, with a speed-up factor of 1.29 over similar techniques, while delivering 95% classification accuracy.
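
    The accurate/inaccurate segmentation can be sketched in software as follows. This is a generic lower-part-OR style error-tolerant adder under assumed parameters (an 8-bit approximate low segment), not the paper's GDI circuit.

```c
#include <stdint.h>
#include <stdio.h>

/* Segmented error-tolerant adder: the low LOW_BITS are approximated with
   a bitwise OR (no carry chain, so shorter critical path and less logic),
   while the high bits are added accurately with no carry-in from the low
   segment. LOW_BITS is an illustrative split point. */
#define LOW_BITS 8

uint16_t segmented_eta_add16(uint16_t a, uint16_t b) {
    uint16_t low_mask = (1u << LOW_BITS) - 1;
    uint16_t low  = (a | b) & low_mask;   /* approximate: OR replaces add */
    uint16_t high = (uint16_t)(((a >> LOW_BITS) + (b >> LOW_BITS))
                               & (0xFFFFu >> LOW_BITS));
    return (uint16_t)((high << LOW_BITS) | low);
}

int main(void) {
    uint16_t a = 0x1234, b = 0x0567;
    printf("exact=0x%04X approx=0x%04X\n",
           (unsigned)(uint16_t)(a + b),
           (unsigned)segmented_eta_add16(a, b));
    return 0;
}
```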

    A Survey on Approximate Multiplier Designs for Energy Efficiency: From Algorithms to Circuits

    Given the stringent energy-efficiency requirements of Internet-of-Things edge devices, approximate multipliers, as a basic component of many processors and accelerators, have been constantly proposed and studied for decades, especially for error-resilient applications. The computation error and energy efficiency largely depend on how and where the approximation is introduced into a design. This article therefore aims to provide a comprehensive review of the approximation techniques in multiplier designs, ranging from algorithms and architectures to circuits. We have implemented representative approximate multiplier designs in each category to understand the impact of the design techniques on accuracy and efficiency. The designs can then be effectively deployed in high-level applications, such as machine learning, to gain energy efficiency at the cost of a slight accuracy loss.
    Comment: 38 pages, 37 figures
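
    As one example of the algorithm-level techniques such a review spans, the sketch below implements Mitchell's logarithmic multiplier: approximate log2 of each operand as its integer exponent plus a linear fraction, add the two logs, and convert back. The Q16 fixed-point width and 16-bit operands are illustrative choices.

```c
#include <stdint.h>
#include <stdio.h>

/* Position of the most significant set bit (integer part of log2). */
static int msb_pos(uint32_t x) { int k = 0; while (x >>= 1) k++; return k; }

/* Mitchell's logarithmic multiplier: log2(x) ~= k + f with x = 2^k (1 + f),
   so a*b ~= 2^(ka+kb+fa+fb). The linear fraction makes the result always
   underestimate the exact product, by up to about 11%. */
uint32_t mitchell_mul(uint16_t a, uint16_t b) {
    if (a == 0 || b == 0) return 0;
    int ka = msb_pos(a), kb = msb_pos(b);
    /* fractional parts f = (x - 2^k) / 2^k, in Q16 fixed point */
    uint32_t fa = (uint32_t)(((uint64_t)(a - (1u << ka)) << 16) >> ka);
    uint32_t fb = (uint32_t)(((uint64_t)(b - (1u << kb)) << 16) >> kb);
    int k = ka + kb;
    uint64_t f = (uint64_t)fa + fb;                 /* add approximate logs */
    if (f >= (1u << 16)) { k++; f -= (1u << 16); }  /* carry from fraction */
    return (uint32_t)((((uint64_t)(1u << 16) | f) << k) >> 16); /* (1+f)*2^k */
}

int main(void) {
    printf("exact=%u approx=%u\n", 123u * 456u, mitchell_mul(123, 456));
    return 0;
}
```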