
    Design and Analysis of Majority Logic Based Approximate Adders and Multipliers

    As a new paradigm for nanoscale technologies, approximate computing deals with error tolerance in the computational process to improve performance and reduce power consumption. Majority logic (ML) is applicable to many emerging nanotechnologies; its basic building block, the 3-input majority voter (MV), has been extensively used for digital circuit design. In this paper, designs of approximate adders and multipliers based on ML are proposed; the proposed multipliers utilize approximate compressors and a reduction circuitry with so-called complement bits. An influence factor is defined and analyzed to assess the importance of different complement bits depending on the size of the multiplier; a scheme for selecting the complement bits is also presented. The proposed designs are evaluated using hardware metrics (such as delay and gate complexity) as well as error metrics. Compared with other ML-based designs found in the technical literature, the proposed designs are found to offer superior performance. Case studies of error-resilient applications are also presented to show the validity of the proposed designs.
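    To make the majority-logic building block concrete: a 3-input majority voter computes MV(a, b, c) = ab + bc + ca, which is exactly the carry-out of a full adder. The Python sketch below models the voter and one generic ML approximation (taking the sum as the complement of the MV carry, which is wrong only when all three inputs agree); it illustrates the design style only, not the paper's specific adder or multiplier circuits.

```python
# Bit-level model of majority-logic (ML) addition. MV(a,b,c) is exactly
# the carry-out of a full adder; the approximate sum rule below
# (sum ~ NOT carry) is a generic ML approximation for illustration,
# not necessarily the specific design proposed in the paper.

def mv(a: int, b: int, c: int) -> int:
    """3-input majority voter: MV(a, b, c) = ab + bc + ca."""
    return (a & b) | (b & c) | (a & c)

def exact_full_adder(a, b, cin):
    carry = mv(a, b, cin)   # carry-out is exactly MV(a, b, cin)
    s = a ^ b ^ cin         # one known ML form: MV(NOT carry, cin, MV(a, b, NOT cin))
    return s, carry

def approx_full_adder(a, b, cin):
    carry = mv(a, b, cin)   # keep the exact ML carry
    s = 1 - carry           # approximate sum as NOT carry
    return s, carry         # wrong only when a == b == cin

# Exhaustive check: the approximate sum differs on 2 of the 8 input patterns.
errors = sum(exact_full_adder(a, b, c)[0] != approx_full_adder(a, b, c)[0]
             for a in (0, 1) for b in (0, 1) for c in (0, 1))
print(f"approximate sum wrong on {errors}/8 patterns")   # -> 2/8
```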

    Approximate Computing Survey, Part I: Terminology and Software & Hardware Approximation Techniques

    The rapid growth of demanding applications in domains applying multimedia processing and machine learning has marked a new era for edge and cloud computing. These applications involve massive data and compute-intensive tasks, and thus, typical computing paradigms in embedded systems and data centers are stressed to meet the worldwide demand for high performance. Concurrently, the landscape of the semiconductor field over the last 15 years has established power as a first-class design concern. As a result, the computing systems community is forced to find alternative design approaches to facilitate high-performance and/or power-efficient computing. Among the examined solutions, Approximate Computing has attracted ever-increasing interest, with research works applying approximations across the entire traditional computing stack, i.e., at the software, hardware, and architectural levels. Over the last decade, a plethora of approximation techniques has emerged in software (programs, frameworks, compilers, runtimes, languages), hardware (circuits, accelerators), and architectures (processors, memories). The current article is Part I of our comprehensive survey on Approximate Computing: it reviews its motivation, terminology, and principles, and it classifies and presents the technical details of the state-of-the-art software and hardware approximation techniques.
    Comment: Under review at ACM Computing Surveys
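    For a concrete taste of the software-level techniques such surveys classify, the sketch below illustrates loop perforation, a classic program-level approximation that skips a fraction of loop iterations to trade accuracy for speed. The mean-estimation use case and function names are illustrative assumptions, not examples drawn from the survey itself.

```python
# Loop perforation: a classic software-level approximation technique.
# Skipping iterations trades output accuracy for execution time; the
# use case here (estimating a mean) is an illustrative assumption.

def mean_exact(xs):
    return sum(xs) / len(xs)

def mean_perforated(xs, skip_factor=4):
    # Visit only every skip_factor-th element, cutting work ~skip_factor x.
    sampled = xs[::skip_factor]
    return sum(sampled) / len(sampled)

data = [float(i % 97) for i in range(1_000_000)]
exact = mean_exact(data)
approx = mean_perforated(data, skip_factor=4)
print(f"exact={exact:.4f} approx={approx:.4f} "
      f"rel_err={abs(exact - approx) / exact:.2%}")
```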

    Approximation Opportunities in Edge Computing Hardware : A Systematic Literature Review

    With the increasing popularity of the Internet of Things and massive Machine Type Communication technologies, the number of connected devices is rising. However, while these devices bring valuable benefits to our lives, bandwidth and latency constraints challenge Cloud processing of the data volumes they generate. A promising answer to these challenges is the combination of Edge and approximate computing techniques, which allows data to be processed nearer to the user. This paper surveys the potential benefits of the intersection of these two paradigms. We provide a state-of-the-art review of circuit-level and architecture-level hardware techniques and popular applications, and we outline essential future research directions.
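    As a flavor of the circuit-level techniques such reviews cover, the sketch below is a software model of a lower-part-OR adder (LOA), a well-known approximate adder in which the low-order bits are combined with cheap OR gates and only the upper bits use exact addition. It is an illustrative example of the technique family, not a circuit taken from this particular review.

```python
# Software model of a lower-part-OR adder (LOA), a representative
# circuit-level approximation: the low k bits are "added" with cheap OR
# gates while only the upper bits use an exact adder.

def loa_add(a: int, b: int, width: int = 16, k: int = 4) -> int:
    mask = (1 << k) - 1
    low = (a & mask) | (b & mask)    # OR gates replace the lower adder
    # Carry into the exact upper part: AND of the top bits of the lower part
    carry = ((a >> (k - 1)) & 1) & ((b >> (k - 1)) & 1)
    high = ((a >> k) + (b >> k) + carry) & ((1 << (width - k)) - 1)
    return (high << k) | low

print(loa_add(1003, 2005), 1003 + 2005)   # approximate (3007) vs exact (3008)
```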

    On the Improving of Approximate Computing Quality Assurance

    Approximate computing (AC) has been predominantly recommended for implementation in error-tolerant applications, as it offers reduced resource usage, e.g., area and power, in exchange for a trade-off in output quality. However, AC has not yet been adopted in commercial designs, as it still falls short of providing good enough quality. Thus, continued research in the field of improving the quality of AC designs is indispensable. In this direction, a recent study exploited the use of machine learning (ML) to improve output quality. Nonetheless, the idea of quality assurance in AC designs can be improved in many aspects. In the work we present in this thesis, we propose several practical methods to improve an ML-based quality assurance methodology, which consists of an ML model that selects the most suitable design from a library of AC circuits. For instance, we extend the library of AC designs used for the ML-based approach with larger data path circuits. Larger designs, however, result in an exponential growth of complexity. Thus, we propose the use of data pre-processing to reduce this hurdle by prioritizing designs based on their physical properties. Another direction for improving AC circuit designs in general, and the ML-based model in particular, is design space exploration (DSE). We therefore propose a novel DSE that drastically reduces the design space based on the targets set for the area, latency, and power of the AC circuit. Moreover, even with a narrowed design space, the number of AC designs to be assessed for their quality can be enormous. Thus, as part of this thesis, we propose a DSE that uses intricate mathematical modeling of designs to assess their quality. In another effort to improve quality assurance for AC designs, we introduce a highly reliable model that incurs minimal overhead. This is achieved by using redundant AC modules to form an approximate quadruple modular redundancy (AQMR) design. The proposed AQMR is superior to exact triple modular redundancy (TMR), offering better reliability on top of the resource savings resulting from the use of AC.
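    To illustrate the redundancy comparison at the end of the abstract: TMR masks a single faulty module by taking a bitwise majority of three exact replicas, while an AQMR design votes over four approximate replicas. The sketch below assumes a simple per-bit majority with a tie-break for the 4-module case; the actual voting scheme proposed in the thesis may differ.

```python
# Redundancy voting sketch. TMR takes a bitwise majority of three exact
# module outputs; the 4-module "AQMR" voter below uses a simple illustrative
# scheme (per-bit majority with a tie-break), which is an assumption,
# not necessarily the exact voting circuit proposed in the thesis.

def tmr_vote(r1: int, r2: int, r3: int) -> int:
    # Bitwise majority of three replicas masks any single faulty module.
    return (r1 & r2) | (r2 & r3) | (r1 & r3)

def aqmr_vote(outputs, width=8):
    # Per-bit vote over four approximate replicas; ties fall back to
    # the first replica's bit (an assumed, illustrative tie-break rule).
    result = 0
    for bit in range(width):
        ones = sum((o >> bit) & 1 for o in outputs)
        if ones > 2:
            result |= 1 << bit
        elif ones == 2:
            result |= ((outputs[0] >> bit) & 1) << bit
    return result

# One replica is faulty (a bit is flipped); voting still recovers the value.
print(tmr_vote(0b1010, 0b1010, 0b0010))             # -> 10
print(aqmr_vote([0b1010, 0b1010, 0b1011, 0b1010]))  # -> 10
```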

    AxOMaP: Designing FPGA-based Approximate Arithmetic Operators using Mathematical Programming

    With the increasing application of machine learning (ML) algorithms in embedded systems, there is a rising necessity to design low-cost computer arithmetic for these resource-constrained systems. As a result, emerging models of computation, such as approximate and stochastic computing, that leverage the inherent error-resilience of such algorithms are being actively explored for implementing ML inference on resource-constrained systems. Approximate computing (AxC) aims to provide disproportionate gains in the power, performance, and area (PPA) of an application by allowing some level of reduction in its behavioral accuracy (BEHAV). Using approximate operators (AxOs) for computer arithmetic forms one of the more prevalent methods of implementing AxC. AxOs provide additional scope for finer-grained optimization compared to precision scaling of computer arithmetic alone. To this end, designing platform-specific and cost-efficient approximate operators forms an important research goal. Recently, multiple works have reported using AI/ML-based approaches for synthesizing novel FPGA-based AxOs. However, most such works limit the use of AI/ML to designing surrogate functions employed during iterative optimization. To address this, we propose a novel data-analysis-driven, mathematical-programming-based approach to synthesizing approximate operators for FPGAs. Specifically, we formulate mixed integer quadratically constrained programs based on the results of correlation analysis of the characterization data, and we use the solutions to enable a more directed search for evolutionary optimization algorithms. Compared to traditional evolutionary-algorithm-based optimization, we report up to 21% improvement in hypervolume, for the joint optimization of PPA and BEHAV, in the design of signed 8-bit multipliers.
    Comment: 23 pages, under review at ACM TRETS
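    The 21% figure refers to hypervolume, the standard indicator for comparing Pareto fronts in multi-objective optimization: for two minimized objectives it is the area dominated by a front up to a reference point. The sketch below computes it for two hypothetical fronts; the points are made up for illustration and are not results from the paper.

```python
# Hypervolume for a two-objective minimization problem (e.g., normalized
# PPA cost vs. behavioral error). The metric measures the area dominated
# by a Pareto front up to a reference point; the points below are made up
# for illustration and are not results from the paper.

def hypervolume_2d(front, ref):
    """Area dominated by `front` (list of (f1, f2), both minimized),
    bounded by the reference point `ref`."""
    # Keep only points that dominate the reference point, sorted by f1.
    pts = sorted(p for p in front if p[0] < ref[0] and p[1] < ref[1])
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                    # skip dominated points
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

front_a = [(0.2, 0.9), (0.4, 0.5), (0.7, 0.2)]   # hypothetical fronts
front_b = [(0.3, 0.8), (0.5, 0.6), (0.8, 0.3)]
ref = (1.0, 1.0)
print(hypervolume_2d(front_a, ref))   # -> 0.41
print(hypervolume_2d(front_b, ref))   # -> 0.30
```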