23 research outputs found

    A low-cost approach for determining the impact of Functional Approximation

    Approximate Computing (AxC) trades off the level of accuracy required by the user against the actual precision provided by the computing system to achieve several optimizations, such as performance improvement and energy and area reduction. Several AxC techniques have been proposed so far in the literature. They work at different abstraction levels and propose both hardware and software implementations. The common issue of all existing approaches is the lack of a methodology to estimate the impact of a given AxC technique on the application-level accuracy. In this paper, we propose a probabilistic approach to predict the relation between component-level functional approximation and application-level accuracy. Experimental results on a set of benchmark applications show that the proposed approach is able to estimate the approximation error with good accuracy and very low computation time.
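    The abstract does not detail the probabilistic model, but the general idea of predicting application-level error from component-level approximation can be sketched with a Monte Carlo estimate. The lower-part-OR adder and the toy accumulation "application" below are illustrative assumptions, not the paper's actual method:

    ```python
    import random

    def approx_add(a, b, cut=4):
        # Illustrative lower-part-OR adder: OR the low `cut` bits instead
        # of adding them, so no carry chain is needed in the low part.
        mask = (1 << cut) - 1
        low = (a | b) & mask
        high = ((a & ~mask) + (b & ~mask)) & 0xFFFF
        return (high | low) & 0xFFFF

    def estimate_app_error(samples=10_000, cut=4, seed=0):
        """Monte Carlo estimate of the mean relative error of a toy
        application (an 8-term accumulation) built on the approximate adder."""
        rng = random.Random(seed)
        total = 0.0
        for _ in range(samples):
            xs = [rng.randrange(0, 256) for _ in range(8)]
            exact = sum(xs)
            approx = 0
            for x in xs:
                approx = approx_add(approx, x, cut)
            total += abs(exact - approx) / max(exact, 1)
        return total / samples
    ```

    Sampling a few thousand random inputs yields the error estimate in milliseconds, versus exhaustively simulating every input combination.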

    A Genetic-algorithm-based Approach to the Design of DCT Hardware Accelerators

    As modern applications demand an unprecedented level of computational resources, traditional computing system design paradigms are no longer adequate to guarantee significant performance enhancement at an affordable cost. Approximate Computing (AxC) has been introduced as a potential candidate to achieve better computational performance by relaxing non-critical functional system specifications. In this article, we propose a systematic, high-abstraction-level approach allowing the automatic generation of near Pareto-optimal approximate configurations for a Discrete Cosine Transform (DCT) hardware accelerator. We obtain the approximate variants by using approximate operations with a configurable approximation degree rather than full-precision ones. We use a genetic search algorithm to find the appropriate tuning of the approximation degree, leading to optimal trade-offs between accuracy and gains. Finally, to evaluate the actual hardware gains, we synthesize the non-dominated approximate DCT variants for two different target technologies, namely Field Programmable Gate Arrays (FPGAs) and Application Specific Integrated Circuits (ASICs). Experimental results show that the proposed approach allows performing a meaningful exploration of the design space to find the best trade-offs in a reasonable time. Indeed, compared to the state-of-the-art work on approximate DCT, the proposed approach achieves an 18% average energy improvement while simultaneously improving image quality.
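    The search loop can be sketched with a minimal genetic algorithm that tunes a per-operation truncation degree; the error and gain models below are placeholders standing in for the actual DCT accuracy and hardware-cost evaluations:

    ```python
    import random

    N_OPS = 8          # one approximation knob per DCT operation (illustrative)
    MAX_DEG = 7        # maximum truncation degree per operation

    def error_of(cfg):
        # Placeholder accuracy model: error grows with the truncated bits
        return sum(d * d for d in cfg) / (N_OPS * MAX_DEG ** 2)

    def gain_of(cfg):
        # Placeholder energy model: savings grow linearly with truncation
        return sum(cfg) / (N_OPS * MAX_DEG)

    def fitness(cfg, err_budget=0.3):
        # Reward gain; penalize configurations exceeding the error budget
        e = error_of(cfg)
        return gain_of(cfg) if e <= err_budget else gain_of(cfg) - 10 * (e - err_budget)

    def evolve(pop_size=30, gens=50, seed=0):
        rng = random.Random(seed)
        pop = [[rng.randint(0, MAX_DEG) for _ in range(N_OPS)]
               for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=fitness, reverse=True)
            parents = pop[:pop_size // 2]
            children = []
            while len(children) < pop_size - len(parents):
                a, b = rng.sample(parents, 2)
                cut = rng.randrange(1, N_OPS)       # one-point crossover
                child = a[:cut] + b[cut:]
                if rng.random() < 0.2:              # mutation
                    child[rng.randrange(N_OPS)] = rng.randint(0, MAX_DEG)
                children.append(child)
            pop = parents + children
        return max(pop, key=fitness)
    ```

    In the real flow, `error_of` and `gain_of` would be replaced by image-quality evaluation and synthesis-based cost estimation, and the final population would be filtered for non-dominated configurations.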

    Efficient Neural Network Approximation via Bayesian Reasoning

    Approximate Computing (AxC) trades off the accuracy required by the user against the precision provided by the computing system to achieve several optimizations, such as performance improvement and energy and area reduction. Several AxC techniques have been proposed so far in the literature. They work at different abstraction levels and propose both hardware and software implementations. The common issue of all existing approaches is the lack of a methodology to estimate the impact of a given AxC technique on the application-level accuracy. This paper proposes a probabilistic approach based on Bayesian networks to quickly estimate the impact of a given approximation technique on application-level accuracy. Moreover, we also show how Bayesian networks enable a backtracking analysis that automatically identifies the most sensitive components. This influence analysis dramatically reduces the space exploration for approximation techniques. Preliminary results on a simple artificial neural network show the efficiency of the proposed approach.
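    The paper's Bayesian-network formulation is not reproduced in the abstract, but the backtracking (influence) analysis can be illustrated with a simple noisy-OR sketch over hypothetical per-component error probabilities:

    ```python
    # Hypothetical probabilities that each approximated component corrupts
    # its output (not taken from the paper)
    P_ERR = {"mul1": 0.05, "mul2": 0.20, "add1": 0.01}

    def p_app_error(p_err):
        """P(application output wrong), assuming independent components whose
        errors always propagate to the output (a noisy-OR combination)."""
        p_ok = 1.0
        for p in p_err.values():
            p_ok *= 1.0 - p
        return 1.0 - p_ok

    def sensitivity():
        """Influence of each component: how much the application-level error
        drops if that component alone is made exact."""
        base = p_app_error(P_ERR)
        return {c: base - p_app_error({**P_ERR, c: 0.0}) for c in P_ERR}
    ```

    Ranking components by this drop flags `mul2` as the one whose approximation hurts accuracy most, pruning the exploration space for approximation techniques.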

    Design, Verification, Test and In-Field Implications of Approximate Computing Systems

    Today, the concept of approximation in computing is increasingly a “hot topic” for investigating how computing systems can be made more energy efficient, faster, and less complex. Intuitively, instead of performing exact computations and, consequently, requiring a large amount of resources, Approximate Computing aims at selectively relaxing the specifications, trading accuracy off for efficiency. While Approximate Computing shows great promise regarding systems’ performance, energy efficiency, and complexity, it poses significant challenges regarding the design, verification, test, and in-field reliability of Approximate Computing systems. This tutorial paper covers these aspects, leveraging the authors’ experience in the field to present state-of-the-art solutions to apply during the different development phases of an Approximate Computing system.

    Multi-Objective Application-Driven Approximate Design Method

    The Approximate Computing (AxC) paradigm aims at designing computing systems that can satisfy rising performance demands and improve energy efficiency. AxC exploits the gap between the level of accuracy required by the users and the actual precision provided by the computing system to achieve diverse optimizations. Various AxC techniques have been proposed so far in the literature, at different abstraction levels from hardware to software. These techniques have been successfully utilized and combined to realize approximate implementations of applications in various domains (e.g., data analytics, scientific computing, multimedia and signal processing, and machine learning). Unfortunately, state-of-the-art approximation methodologies focus on a single abstraction level, for instance combining elementary components (e.g., arithmetic operations) that are first approximated using component-level metrics and then combined to provide a good trade-off between efficiency and accuracy at the application level. This hinders the possibility for designers to explore different approximation opportunities, optimized for different applications and implementation targets. Therefore, we designed and implemented E-IDEA, an automatic framework that provides an application-driven approximation approach to find the best approximate versions of a given application targeting different implementations (i.e., hardware and software). E-IDEA combines (i) a source-to-source manipulation tool and (ii) an evolutionary search engine to automatically realize approximate application variants and perform a Design-Space Exploration (DSE). The latter results in a set of non-dominated approximate solutions in terms of the trade-off between accuracy and efficiency. Experimental results validate the effectiveness and flexibility of the approach in generating optimized approximate implementations of different applications, using different approximation techniques, different accuracy/error metrics, and different implementation targets.
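    The non-dominated filtering step at the end of such a DSE can be sketched as follows; the (error, cost) pairs are illustrative, since E-IDEA's actual metrics are configurable:

    ```python
    def non_dominated(points):
        """Keep the (error, cost) pairs not dominated by another pair
        (both objectives are minimized)."""
        return [p for p in points
                if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                           for q in points)]

    # Hypothetical approximate variants: (accuracy error, energy cost)
    variants = [(0.10, 5.0), (0.20, 3.0), (0.30, 4.0), (0.05, 9.0)]
    front = non_dominated(variants)   # (0.30, 4.0) is dominated by (0.20, 3.0)
    ```

    The surviving front is what the designer inspects to pick the accuracy/efficiency trade-off that suits the target application.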

    Estimating dynamic power consumption for memristor-based CiM architecture

    Nowadays, Computing-in-Memory (CiM) represents one of the most relevant solutions to deal with CMOS technological issues, and several works have been proposed so far targeting front- and back-end synthesis. However, a given CiM architecture can be synthesized with different parameters, leading to implementations that differ in area, power consumption, and performance. It is thus mandatory to have an evaluation framework that characterizes an actual implementation along these dimensions. This is even more important during the design-exploration phase, in which many different implementations are explored to identify the best candidate w.r.t. the user requirements. In this work, we focus on estimating the dynamic power consumption of a given CiM implementation. Instead of resorting to simulation-based power estimation, we propose an analytical approach that dramatically speeds up the estimation, since no simulations are required. By comparing the proposed approach against the simulation-based method over a massive experimental campaign, we show that the estimation accuracy turns out to be very high.
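    The paper's memristor-specific model is not given in the abstract; as a generic illustration of simulation-free estimation, the standard activity-based formula computes dynamic power directly from signal probabilities and node capacitances:

    ```python
    def switching_activity(p_one):
        """Expected toggle probability per cycle for a signal that is '1'
        with probability p_one, assuming temporally independent cycles."""
        return 2.0 * p_one * (1.0 - p_one)

    def dynamic_power(nodes, vdd=1.0, freq=1e8):
        """Analytical estimate P = sum_i alpha_i * C_i * Vdd^2 * f.
        `nodes` maps node name -> (signal probability, capacitance in F)."""
        return sum(switching_activity(p) * c
                   for p, c in nodes.values()) * vdd ** 2 * freq
    ```

    Because the estimate is a closed-form sum over nodes, it evaluates in microseconds, whereas a simulation-based flow must replay full input traces.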

    Formal Design Space Exploration for memristor-based crossbar architecture

    The unceasing shrinking of CMOS technology is approaching its physical limits, impacting several aspects such as performance, power consumption, and many others. Alternative solutions are under investigation to overcome CMOS limitations; among them, the memristor is one of the most promising technologies. Several works have been proposed so far describing how to synthesize Boolean logic functions on memristor-based crossbar architectures. However, depending on the synthesis parameters, different architectures can be obtained. Design Space Exploration (DSE) is therefore mandatory to guide the designer in selecting the best crossbar configuration. In this paper, we present a formal DSE approach. Its main advantage is that it does not require any simulation and thus avoids any runtime overhead. Preliminary results show a huge gain in runtime compared to simulation-based DSE.
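    The formal approach itself is not specified in the abstract; the following sketch only illustrates the simulation-free idea, enumerating candidate crossbar configurations under a purely analytical (and here invented) cost model:

    ```python
    from itertools import product

    def crossbar_cost(rows, cols, steps):
        """Illustrative closed-form cost model: area grows with crossbar
        size, latency with the number of synthesis steps (no simulation)."""
        return {"rows": rows, "cols": cols, "area": rows * cols,
                "latency": steps}

    def formal_dse(latency_bound=20):
        """Evaluate every candidate configuration analytically and return
        the minimum-area one meeting the latency bound."""
        candidates = [crossbar_cost(r, c, s)
                      for r, c, s in product([8, 16, 32], [8, 16], [10, 25])]
        feasible = [c for c in candidates if c["latency"] <= latency_bound]
        return min(feasible, key=lambda c: c["area"], default=None)
    ```

    Since each candidate is scored by a formula rather than a simulation run, the whole space can be swept without runtime overhead.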

    Predicting the Impact of Functional Approximation: From Component- to Application-Level

    Approximate Computing (AxC) trades off the level of accuracy required by the user against the actual precision provided by the computing system to achieve several optimizations, such as performance improvement and energy and area reduction. Several AxC techniques have been proposed so far in the literature. They work at different abstraction levels and propose both hardware and software implementations. The common issue of all existing approaches is the lack of a methodology to estimate the impact of a given AxC technique on the application-level accuracy. In this paper, we propose a probabilistic approach to predict the relation between component-level functional approximation and application-level accuracy. Experimental results on a set of benchmark applications show that the proposed approach is able to estimate the approximation error with good accuracy and very low computation time.

    Testing approximate digital circuits: Challenges and opportunities

    Approximate Computing (AxC) is based on the observation that a significant class of applications can inherently tolerate a certain amount of errors (i.e., the output quality is still acceptable to the user). AxC exploits this characteristic to apply selective approximations or occasional relaxations of the specifications. The benefit is a significant gain in energy efficiency and area reduction for Integrated Circuits (ICs). During mission mode, the IC can be affected by faults caused by environmental perturbations (e.g., radiation, electromagnetic interference) or aging-related phenomena. These faults may propagate through the IC structure to the outputs and thus lead to observable errors. Such errors may compound the accuracy reduction already introduced by the AxC and possibly render it unacceptable. This paper investigates the challenges and opportunities related to the test of AxC ICs.

    A Test Pattern Generation Technique for Approximate Circuits Based on an ILP-Formulated Pattern Selection Procedure

    The intrinsic resiliency of many of today's applications opens new design opportunities. Some computation accuracy loss within the so-called resilient kernels does not affect the global quality of results. This has led the scientific community to introduce the approximate computing paradigm, which exploits this concept to boost computing-system performance. By applying approximation at different layers, it is possible to design more efficient systems (in terms of energy, area, and performance) at the cost of a slight accuracy loss. In particular, at the hardware level, this has led to approximate integrated circuits. From the test perspective, this class of integrated circuits raises new challenges. On the other hand, it also offers the opportunity to relax test constraints, at the cost of a careful selection of the so-called approximation-redundant faults. Such faults are classified as tolerable because of the slight error they introduce. It follows that improvements in yield and test-cost reduction can be achieved. Nevertheless, conventional automatic test pattern generation (ATPG) algorithms, when not aware of the introduced approximation, generate test vectors covering approximation-redundant faults, thus reducing the yield gain. In this work, we show experimental evidence of this problem and present a novel ATPG technique to deal with it. We then extensively evaluate the proposed technique and show an average yield improvement ranging from 19% up to 36% (compared to conventional ATPG) in terms of approximation-redundant fault-coverage reduction. In some cases, the improvement reaches 100%.
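    The ILP formulation itself is not given in the abstract; the greedy stand-in below only conveys the selection objective: cover all critical faults while discarding patterns that would detect approximation-redundant (tolerable) faults. Pattern and fault names are hypothetical:

    ```python
    def select_patterns(coverage, tolerable):
        """Greedy stand-in for the ILP selection. `coverage` maps a test
        pattern to the set of faults it detects; `tolerable` is the set of
        approximation-redundant faults that must NOT be detected."""
        critical = set().union(*coverage.values()) - tolerable
        # Keep only patterns that detect no tolerable fault
        usable = {p: f for p, f in coverage.items() if not (f & tolerable)}
        selected, covered = [], set()
        while covered != critical:
            # Pick the pattern covering the most still-uncovered faults
            best = max(usable, key=lambda p: len(usable[p] - covered),
                       default=None)
            if best is None or not (usable[best] - covered):
                break  # remaining critical faults only reachable via tolerable ones
            selected.append(best)
            covered |= usable[best]
        return selected, covered

    # Hypothetical fault table: "ar1" is an approximation-redundant fault
    coverage = {"t1": {"f1", "f2"}, "t2": {"f3", "ar1"}, "t3": {"f3"}}
    selected, covered = select_patterns(coverage, {"ar1"})
    ```

    Here pattern `t2` is discarded because it would also detect the tolerable fault `ar1`, mirroring how approximation-aware ATPG preserves the yield gain.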