
    Evaluating fault tolerance on asymmetric multicore systems-on-chip using iso-metrics

    The end of Dennard scaling has promoted low power consumption into a first-order concern for computing systems. However, conventional power-conservation schemes such as voltage and frequency scaling are reaching their limits when used in performance-constrained environments. New technologies are required to break the power wall while sustaining performance on future processors. Low-power embedded processors and near-threshold voltage computing (NTVC) have been proposed as viable solutions to tackle the power wall in future computing systems. Unfortunately, these technologies may also compromise per-core performance and, in the case of NTVC, reliability. These limitations would make them unsuitable for HPC systems and datacenters. To demonstrate that emerging low-power processing technologies can effectively replace conventional ones, this study relies on ARM's big.LITTLE processors as both an actual and an emulation platform, together with state-of-the-art implementations of the conjugate gradient (CG) solver. For NTVC in particular, the study describes how efficient algorithm-based fault-tolerance schemes preserve the power and energy benefits of very low voltage operation.

    We thank F. D. Igual, from Universidad Complutense de Madrid, for his help with the Odroid board. Sandra Catalán and Enrique S. Quintana-Ortí were supported by projects TIN2011-23283 and TIN2014-53495-R of the MINECO and FEDER, and by the EU FP7 project 318793 ‘EXA2GREEN’. This work was partially conducted while this author was visiting Queen’s University of Belfast. This research has also been supported in part by the European Commission under grant agreements FP7-323872 (ScoRPiO) and FP6-610509 (NanoStreams), and by the UK Engineering and Physical Sciences Research Council under grant agreements EP/L000055/1 (ALEA), EP/L004232/1 (ENPOWER) and EP/K017594/1 (GEMSCLAIM). Invited paper from EEHCO at HiPEAC.
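
    As a concrete illustration of the algorithm-based fault-tolerance idea mentioned in the abstract, the sketch below shows a CG loop that periodically recomputes the true residual b - A x and compares it with the recurrence residual, restarting the recurrence when the two drift apart. This is a minimal, hypothetical sketch rather than the paper's scheme; the function name ft_cg and the parameters check_every and drift_tol are assumptions introduced for the example.

    # Illustrative sketch, not the paper's implementation: a CG solver with a
    # simple algorithm-based fault-tolerance check. Every `check_every`
    # iterations the true residual b - A @ x is recomputed and compared with
    # the recurrence residual r; a large gap signals a silent error (e.g. under
    # very low voltage operation) and the recurrence is restarted from the
    # trusted value.
    import numpy as np

    def ft_cg(A, b, tol=1e-8, max_iter=1000, check_every=10, drift_tol=1e-6):
        x = np.zeros_like(b, dtype=float)
        r = b - A @ x                    # recurrence residual
        p = r.copy()
        rs_old = r @ r
        for it in range(max_iter):
            Ap = A @ p
            alpha = rs_old / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if (it + 1) % check_every == 0:
                true_r = b - A @ x       # consistency check (one extra SpMV)
                if np.linalg.norm(true_r - r) > drift_tol * np.linalg.norm(b):
                    r = true_r           # recover: restart the recurrence
                    p = r.copy()
                    rs_old = r @ r
                    continue
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs_old) * p
            rs_old = rs_new
        return x

    In the fault-free case the check costs one extra matrix-vector product every check_every iterations, which is the typical overhead-versus-coverage trade-off for this family of checks.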

    DARE: Data-Access Aware Refresh via spatial-temporal application resilience on commodity servers

    Power consumption and reliability of memory components are two of the most important hurdles in realizing exascale systems. Dynamic random access memory (DRAM) scaling projections predict significant performance and power penalties due to the conventional use of pessimistic refresh periods catering for worst-case cell retention times. Recent approaches relax those pessimistic refresh rates only on "strong" cells, or build on application-specific error resilience for data placement. However, these approaches cannot reveal the full potential of a relaxed-refresh paradigm shift, since they neglect additional application resilience properties related to the inherent functioning of DRAM. In this article, we elevate Refresh-by-Access to a first-class property of application resilience. We develop a complete, non-intrusive system stack, armed with low-cost Data-Access Aware Refresh (DARE) methods, to facilitate aggressive refresh relaxation and ensure non-disruptive operation on commodity servers. Essentially, our proposed access-aware scheduling of application tasks intelligently amplifies the impact of the implicit refresh performed by memory accesses, extending the period during which hardware refresh remains disabled while limiting the number of potential errors and hence their impact on an application's output quality. The stack, implemented on an off-the-shelf server running a full-fledged Linux OS, captures for the first time the intricate time-dependent system and data interactions in the presence of hardware errors, in contrast to previous architectural simulation approaches of limited detail. Results demonstrate that by applying DARE it is possible to completely disable hardware refresh, with minor quality loss ranging from 2% to 18%.
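
    The scheduling idea described above, ordering tasks so that their memory accesses double as refresh, can be sketched roughly as follows. This is a hypothetical illustration and not the DARE system stack; the Task abstraction, the access_aware_schedule function, and the retention_s parameter are assumptions introduced for the example.

    # Illustrative sketch, not the DARE implementation: a greedy, access-aware
    # task scheduler. Each task declares the data regions it touches; the
    # scheduler always runs the pending task whose regions have gone longest
    # without an access, so ordinary memory accesses act as an implicit refresh
    # and hardware refresh can stay disabled for longer.
    import time

    class Task:
        def __init__(self, name, regions, fn):
            self.name, self.regions, self.fn = name, regions, fn

    def access_aware_schedule(tasks, retention_s=1.0):
        # Time at which each data region was last touched (its implicit refresh).
        last_access = {r: time.monotonic() for t in tasks for r in t.regions}
        pending = list(tasks)
        while pending:
            now = time.monotonic()
            # Pick the task whose most-stale region has waited the longest.
            task = max(pending,
                       key=lambda t: max(now - last_access[r] for r in t.regions))
            if max(now - last_access[r] for r in task.regions) > retention_s:
                # A real system would fall back to a one-off hardware refresh
                # here before the most at-risk region decays; the sketch proceeds.
                pass
            task.fn()                    # executing the task touches its regions
            for r in task.regions:
                last_access[r] = time.monotonic()
            pending.remove(task)

    The greedy most-stale-first policy is only one plausible heuristic, chosen here for brevity; a caller would wrap its kernels as Task objects listing the buffers they read or write and pass them to access_aware_schedule.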