
    Exploiting Natural On-chip Redundancy for Energy Efficient Memory and Computing

    Power density is currently the primary design constraint across most computing segments and the main performance-limiting factor. For years, industry has kept power density constant while increasing frequency and lowering transistor supply (Vdd) and threshold (Vth) voltages. However, Vth scaling has stopped because leakage current grows exponentially as Vth decreases. Transistor count and integration density keep doubling every process generation (Moore's Law), but the power budget caps the amount of hardware that can be active at the same time, leading to dark silicon. With each new generation there are more resources available, but we cannot fully exploit their performance potential.

    In recent years, several research trends have explored how to cope with dark silicon and unlock the energy efficiency of chips, including Near-Threshold voltage Computing (NTC) and approximate computing. NTC aggressively lowers Vdd to values near Vth. This allows a substantial reduction in power, as dynamic power scales quadratically with supply voltage. The resulting power reduction could be used to activate more chip resources and potentially achieve performance improvements. Unfortunately, Vdd scaling is limited by the tight functionality margins of on-chip SRAM transistors. When Vdd is scaled down to near-threshold values, manufacturing-induced parameter variations affect the functionality of SRAM cells, which eventually become unreliable. Many emerging applications, on the other hand, feature intrinsic error resilience, tolerating a certain amount of noise. In this context, approximate computing takes advantage of this observation and exploits the gap between the level of accuracy required by the application and the level of accuracy delivered by the computation, provided that reducing the accuracy translates into an energy gain. However, deciding which instructions and data to approximate, and with which techniques, still poses a major challenge.

    This dissertation contributes in these two directions. First, it proposes a new approach to mitigate the impact of SRAM failures due to parameter variation, enabling effective operation at ultra-low voltages. We identify two levels of natural on-chip redundancy: cache level and content level. The first arises from the replication of blocks in multi-level cache hierarchies. We exploit this redundancy with a cache management policy that allocates blocks to entries taking into account the nature of the cache entry and the use pattern of the block. This policy obtains performance improvements of between 2% and 34% with respect to block disabling, a technique of similar complexity, while incurring no additional storage overhead. The second (content-level redundancy) arises from the redundancy of data in real-world applications. We exploit this redundancy by compressing cache blocks to fit them into partially functional cache entries. At the cost of a slight overhead increase, we can obtain performance within 2% of that obtained when the cache is built with fault-free cells, even if more than 90% of the cache entries have at least one faulty cell.
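    The content-level idea can be pictured with a small sketch. The Python fragment below is illustrative only and is not the mechanism proposed in the dissertation: it assumes a 64-byte cache entry, a per-byte fault bitmap, and a generic software compressor (zlib) standing in for a hardware cache compressor, and simply checks whether a compressed block fits into an entry's fault-free bytes.

    # Illustrative sketch only -- not the dissertation's mechanism.
    # Assumptions: 64-byte entries, a per-byte fault bitmap, and zlib as a
    # stand-in for a hardware cache compressor.
    from dataclasses import dataclass
    from typing import Optional
    import zlib

    ENTRY_SIZE = 64  # bytes per cache entry (a typical 64 B line)

    @dataclass
    class CacheEntry:
        fault_map: int          # bitmask, one bit per byte; 1 = faulty SRAM byte
        data: Optional[bytes] = None

        @property
        def usable_bytes(self) -> int:
            # Capacity left once faulty bytes are discounted.
            return ENTRY_SIZE - bin(self.fault_map).count("1")

    def try_allocate(block: bytes, entry: CacheEntry) -> bool:
        """Store `block` in `entry` if it still fits after compression."""
        assert len(block) == ENTRY_SIZE
        payload = zlib.compress(block)
        if len(payload) <= entry.usable_bytes:
            entry.data = payload   # real hardware would also steer data around the faulty bytes
            return True
        return False               # fall back, e.g., to a fault-free entry or block disabling

    # A highly redundant block (all zeros) still fits in an entry with 10 faulty bytes.
    entry = CacheEntry(fault_map=(1 << 10) - 1)
    print(try_allocate(bytes(ENTRY_SIZE), entry))   # True

    The point of the sketch is the fit test: data redundancy (compressibility) buys back the capacity lost to faulty cells, which is how performance can stay close to that of a fault-free cache even when most entries contain faults.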
    Then, we analyze how the intrinsic noise tolerance of emerging applications can be exploited to design an approximate Instruction Set Architecture (ISA). Exploiting the redundancy of the ISA, we explore a set of techniques to approximate the execution of instructions across a set of emerging applications, pointing out the potential of reducing the complexity of the ISA and the trade-offs of the approach. In a proof-of-concept implementation, the ISA is shrunk along two dimensions: Breadth (i.e., simplifying instructions) and Depth (i.e., dropping instructions). This proof of concept shows that energy can be reduced by 20.6% on average, at around 14.9% accuracy loss.
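    To make the Breadth/Depth terminology concrete, here is a minimal sketch on a toy accumulator "ISA". The opcodes, the power-of-two operand rounding used for Breadth, and the drop-every-Nth policy used for Depth are assumptions chosen for illustration, not the techniques evaluated in the dissertation.

    # Toy illustration of Breadth (simplify instructions) and Depth (drop
    # instructions) on a two-opcode accumulator "ISA". All policies here are
    # assumptions for illustration, not the dissertation's techniques.
    import math

    def run(program, breadth=False, depth_drop_every=0):
        """Execute (opcode, operand) pairs, optionally approximating."""
        acc = 0.0
        for i, (op, val) in enumerate(program):
            # Depth: drop every Nth instruction outright.
            if depth_drop_every and (i + 1) % depth_drop_every == 0:
                continue
            if op == "add":
                acc += val
            elif op == "mul":
                # Breadth: replace the multiply with a cheaper variant that
                # rounds the operand to the nearest power of two.
                if breadth and val != 0:
                    val = math.copysign(2 ** round(math.log2(abs(val))), val)
                acc *= val
        return acc

    prog = [("add", 3.0), ("mul", 1.9), ("add", 0.4), ("mul", 2.2)]
    exact = run(prog)
    approx = run(prog, breadth=True, depth_drop_every=4)
    print(exact, approx, abs(exact - approx) / abs(exact))  # relative error of the approximate run

    In a real evaluation, the energy saving and the accuracy loss would be measured per application; that trade-off is exactly what the 20.6% energy / 14.9% accuracy figures above summarize.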

    Evaluating follow-up and complexity in cancer clinical trials (EFACCT): an eDelphi study of research professionals’ perspectives.

    Objectives: To evaluate patient follow-up and complexity in cancer clinical trial delivery, using consensus methods to: (1) identify research professionals’ priorities, (2) understand localised challenges, and (3) define study complexity and workloads, supporting the development of a trial rating and complexity assessment tool (TRACAT).
    Design: A classic eDelphi completed in three rounds, conducted as the launch study of a multiphase national project (evaluating follow-up and complexity in cancer clinical trials).
    Setting: Multicentre online survey involving professionals at National Health Service secondary care hospital sites in Scotland and England, varying in scale, geographical location and patient populations.
    Participants: Principal investigators at 13 hospitals across nine clinical research networks recruited 33 participants using pre-defined eligibility criteria to form a multidisciplinary panel.
    Main outcome measures: Statements achieving a consensus level of 70% on a 7-point Likert-type scale, and ranked trial rating indicators (TRIs) developed by research professionals.
    Results: The panel developed 75 consensus statements illustrating factors contributing to complexity, follow-up intensity and operational performance in trial delivery, and specified 14 ranked TRIs. Seven open questions in the first qualitative round generated 531 individual statements. Iterative survey rounds returned response rates of 82%, 82% and 93%.
    Conclusions: Clinical trials operate within a dynamic, complex healthcare and innovation system where rapid scientific advances present opportunities and challenges for delivery organisations and professionals. Panellists highlighted cultural and organisational factors limiting the profession’s potential to support growing trial complexity and patient follow-up. Enhanced communication, interoperability, funding and capacity have emerged as key priorities. Future operational models should test dialectic Singerian-based approaches respecting open dialogue and shared values. Research capacity building should prioritise innovative, collaborative approaches embedding validated review and evaluation models to understand changing operational needs and challenges. TRACAT provides a mechanism for continual knowledge assimilation to improve decision-making.