
    Online Timing Slack Measurement and its Application in Field-Programmable Gate Arrays

    Reliability, power consumption and timing performance are key concerns for today's integrated circuits. Measurement techniques that can quantify the timing characteristics of a circuit while it is operating offer a range of benefits. Delay variation due to environmental and operational conditions, as well as degradation, can be monitored by tracking changes in timing performance. Using the measurements in a closed loop to control supply voltage or clock frequency allows timing safety margins to be reduced, improving power consumption or throughput through the exploitation of better-than-worst-case operation. This thesis describes a novel online timing slack measurement method which can directly measure the timing performance of a circuit, accurately and with minimal overhead. Enhancements improve its absolute accuracy and resolution. A compilation flow is reported that can automatically instrument arbitrary circuits on FPGAs with the measurement circuitry. On its own, this measurement method is able to track the "health" of an integrated circuit from commissioning through its lifetime, warning of impending failure or instigating pre-emptive degradation mitigation techniques. The use of the measurement method in a closed-loop dynamic voltage and frequency scaling scheme has been demonstrated, achieving significant improvements in power consumption and throughput performance.
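
    The closed-loop idea in the last sentence can be illustrated with a minimal sketch. This is not the thesis's controller; read_slack_ns(), set_clock_mhz() and all numeric constants are hypothetical placeholders for the platform's measurement and clocking hooks.

```python
# A minimal sketch, assuming hypothetical platform hooks read_slack_ns() and
# set_clock_mhz(): one iteration of a closed loop that trades measured timing
# slack against clock frequency while keeping a guard band.

GUARD_BAND_NS = 0.5               # safety margin kept on top of the measured slack (assumed)
STEP_MHZ = 10                     # frequency adjustment granularity (assumed)
F_MIN_MHZ, F_MAX_MHZ = 100, 400   # assumed valid clock range of the design

def control_step(f_mhz, read_slack_ns, set_clock_mhz):
    """Measure the current timing slack, then nudge the clock up or down."""
    slack = read_slack_ns()
    if slack > 2 * GUARD_BAND_NS and f_mhz + STEP_MHZ <= F_MAX_MHZ:
        f_mhz += STEP_MHZ          # ample slack: raise throughput
    elif slack < GUARD_BAND_NS and f_mhz - STEP_MHZ >= F_MIN_MHZ:
        f_mhz -= STEP_MHZ          # slack nearly exhausted: back off
    set_clock_mhz(f_mhz)
    return f_mhz
```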

    Low-energy standby-sparing for hard real-time systems

    Time-redundancy techniques are commonly used in real-time systems to achieve fault tolerance without incurring high energy overhead. However, the reliability requirements of hard real-time systems used in safety-critical applications are so stringent that time-redundancy techniques are sometimes unable to meet them. Standby sparing, as a hardware-redundancy technique, can be used to meet the high reliability requirements of safety-critical applications. However, conventional standby-sparing techniques are not suitable for low-energy hard real-time systems, as they either impose considerable energy overheads or are not appropriate for hard timing constraints. In this paper we provide a technique to use standby sparing for hard real-time systems with limited energy budgets. The principal contribution of this work is an online energy-management technique developed specifically for standby-sparing systems used in hard real-time applications. This technique operates at runtime and exploits dynamic slack to reduce energy consumption while guaranteeing hard deadlines. We compared the low-energy standby-sparing (LESS) system with a low-energy time-redundancy system (from a previous work). The results show that for relaxed time constraints, the LESS system is more reliable and provides about 26% energy saving compared to the time-redundancy system. For tight deadlines, when the time-redundancy system is not sufficiently reliable for safety-critical applications, the LESS system preserves its reliability but with about 49% more energy consumption.
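
    The slack-reclamation step can be sketched as follows. This is not the paper's LESS algorithm, only an illustration of how dynamic slack left by an early-completing job can lower the frequency of the remaining workload while the hard deadline is still met; the names and the 20% minimum-frequency clamp are assumptions.

```python
# A minimal sketch (not the paper's LESS technique): pick the lowest frequency
# that still fits the remaining worst-case work into the time left before the
# hard deadline, turning dynamic slack into energy savings instead of idle time.

def scaled_frequency(f_max_hz, wcet_remaining_s, deadline_s, now_s, f_min_hz=None):
    """Lowest frequency that completes wcet_remaining_s (measured at f_max_hz)
    before deadline_s, assuming execution time scales linearly with 1/frequency."""
    f_min_hz = f_min_hz if f_min_hz is not None else 0.2 * f_max_hz  # assumed floor
    time_left = deadline_s - now_s       # includes any dynamic slack reclaimed so far
    if time_left <= 0:
        return f_max_hz                  # no slack left: run at full speed
    f_needed = f_max_hz * (wcet_remaining_s / time_left)
    return min(f_max_hz, max(f_needed, f_min_hz))

# Example: 2 ms of worst-case work left, 5 ms until the deadline
print(scaled_frequency(100e6, 0.002, 0.010, 0.005))   # -> 40000000.0 (40 MHz)
```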

    Application analyses of ultra-low-energy processor

    Low energy consumption has become a critical design feature in modern systems. The Internet of Things, wearables and other portable devices create increasing demand for low-power design: device size is often dictated by the battery, so lower energy consumption means longer battery life and smaller physical size. These are crucial features for wearables and especially for implantable medical devices. Several low-power and energy-efficiency techniques are applied at different abstraction levels of system design. One technique that typically combines software control with hardware features is dynamic voltage and frequency scaling (DVFS), a dynamic power management technique which decreases processor clock frequency and supply voltage. The reduction in energy consumption is achieved at the cost of reduced performance. A key question with DVFS is how the execution frequencies are defined. This thesis presents a method for frequency optimization for applications executed on a single-core processor. Execution trace data is used to profile the application. The FreeRTOS operating system is used, although tracing can be implemented with any real-time operating system that executes tasks as separate threads. Based on the profiling and user-defined data, task execution frequencies are defined, assuming that execution time scales linearly with frequency. A near-threshold ARM Cortex-M3 with integrated power management and a phase-locked loop is used for the measurements. The measurements show that energy savings can be achieved without affecting correct application execution. However, the reduction in energy consumption depends strongly on the system used and the application's execution profile. Iterative testing and frequency optimization are required to ensure adequate performance. For energy-efficiency optimization, energy consumption needs to be considered in every phase of the design.
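
    The frequency-selection step can be sketched as follows; this is an illustration of the stated linear-scaling assumption, not the thesis's tool. The reference frequency, the frequency set and the task budget are invented example values.

```python
# A minimal sketch: choose a per-task clock frequency from trace-based
# profiling, assuming execution time scales linearly with clock frequency.
# REF_FREQ_MHZ and the example numbers are illustrative, not from the thesis.

REF_FREQ_MHZ = 48   # frequency at which the execution trace was recorded (assumed)

def recommend_freq(exec_ms_at_ref, budget_ms, available_freqs_mhz):
    """Lowest available frequency whose linearly scaled execution time
    still fits within the task's time budget."""
    for f in sorted(available_freqs_mhz):
        if exec_ms_at_ref * REF_FREQ_MHZ / f <= budget_ms:
            return f
    return max(available_freqs_mhz)     # budget cannot be met: run at full speed

# Example: a task profiled at 3.0 ms @ 48 MHz with a 10 ms budget per activation
print(recommend_freq(3.0, 10.0, [12, 24, 48]))   # -> 24
```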

    Evaluating Built-in ECC of FPGA on-chip Memories for the Mitigation of Undervolting Faults

    Voltage underscaling below the nominal level is an effective solution for improving energy efficiency in digital circuits, e.g., Field Programmable Gate Arrays (FPGAs). However, further undervolting below a safe voltage level, without accompanying frequency scaling, leads to timing-related faults, potentially undermining the energy savings. Through experimental voltage underscaling studies on commercial FPGAs, we observed that the rate of these faults increases exponentially for on-chip memories, or Block RAMs (BRAMs). To mitigate these faults, we evaluated the efficiency of the built-in Error-Correction Code (ECC) and observed that more than 90% of the faults are correctable and a further 7% are detectable (but not correctable). This efficiency is a result of the single-bit nature of these faults, which are effectively covered by the Single-Error Correction and Double-Error Detection (SECDED) design of the built-in ECC. Finally, motivated by the above experimental observations, we evaluated an FPGA-based Neural Network (NN) accelerator under low-voltage operation, with the built-in ECC leveraged to mitigate undervolting faults and thus prevent significant NN accuracy loss. As a result, we achieve 40% BRAM power savings through undervolting below the minimum safe voltage level, with negligible NN accuracy loss, thanks to the substantial fault coverage of the built-in ECC. Comment: 6 pages, 2 figures
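
    The SECDED mechanism the abstract relies on can be illustrated with a toy extended Hamming(8,4) code. This is not the FPGA's actual BRAM ECC, which uses a wider (64+8-bit) code, only a minimal demonstration of why single-bit undervolting faults are correctable while double-bit faults remain detectable.

```python
# A toy SECDED (single-error-correct, double-error-detect) code: an extended
# Hamming(8,4) encoder/decoder. Illustrative only; the correction/detection
# logic is analogous to the wider code used by BRAM built-in ECC.

def encode(d1, d2, d3, d4):
    p1 = d1 ^ d2 ^ d4                      # covers positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4                      # covers positions 2,3,6,7
    p4 = d2 ^ d3 ^ d4                      # covers positions 4,5,6,7
    word = [p1, p2, d1, p4, d2, d3, d4]    # positions 1..7
    p0 = sum(word) % 2                     # overall parity bit for double-error detection
    return [p0] + word                     # 8-bit codeword

def decode(cw):
    word = cw[1:]
    syndrome = 0
    for pos, bit in enumerate(word, start=1):
        if bit:
            syndrome ^= pos                # XOR of set positions = error location
    parity_ok = sum(cw) % 2 == 0
    if syndrome == 0 and parity_ok:
        status = "no error"
    elif not parity_ok:                    # odd number of bit flips -> single error
        status = "corrected"
        if syndrome:                       # syndrome 0 here means p0 itself flipped
            word[syndrome - 1] ^= 1
    else:                                  # syndrome != 0 but parity even -> two flips
        status = "double error detected"
    return status, [word[2], word[4], word[5], word[6]]   # recovered d1..d4

cw = encode(1, 0, 1, 1)
cw[5] ^= 1                                 # inject one undervolting-style bit flip
print(decode(cw))                          # -> ('corrected', [1, 0, 1, 1])
```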

    TimeTrader: Exploiting Latency Tail to Save Datacenter Energy for On-line Data-Intensive Applications

    Datacenters running on-line, data-intensive applications (OLDIs) consume significant amounts of energy. However, reducing their energy is challenging due to their tight response-time requirements. A key aspect of OLDIs is that each user query goes to all or many of the nodes in the cluster, so the overall time budget is dictated by the tail of the replies' latency distribution; replies see latency variations in both the network and compute. Previous work proposes to achieve load-proportional energy by slowing down the computation at lower datacenter loads based directly on response times (i.e., at lower loads, the proposal exploits the average slack in the time budget provisioned for the peak load). In contrast, we propose TimeTrader to reduce energy by exploiting the latency slack in the sub-critical replies which arrive before the deadline (e.g., 80% of replies are 3-4x faster than the tail). This slack is present at all loads and subsumes the previous work's load-related slack. While the previous work shifts the leaves' response-time distribution to consume the slack at lower loads, TimeTrader reshapes the distribution at all loads by slowing down individual sub-critical nodes without increasing missed deadlines. TimeTrader exploits slack in both the network and compute budgets. Further, TimeTrader leverages Earliest Deadline First scheduling to largely decouple critical requests from the queuing delays of sub-critical requests, which can then be slowed down without hurting critical requests. A combination of real-system measurements and at-scale simulations shows that, without adding to missed deadlines, TimeTrader saves 15-19% and 41-49% energy at 90% and 30% loading, respectively, in a datacenter with 512 nodes, whereas previous work saves 0% and 31-37%. Comment: 13 pages
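
    A minimal sketch of the per-reply slack idea (not the TimeTrader system itself): a leaf picks the slowest speed whose stretched service time still fits in that reply's remaining budget, so sub-critical replies are slowed while critical ones still run at full speed. The budget, speed set and example times are invented for illustration.

```python
# A minimal sketch of per-reply slack exploitation, not the paper's mechanism:
# slow a sub-critical reply only by the slack it has against the response-time
# budget, so the number of missed deadlines does not increase.

def leaf_speed(budget_ms, elapsed_ms, est_service_ms, speeds=(0.5, 0.75, 1.0)):
    """Slowest speed (fraction of nominal) whose stretched service time still
    fits in the time remaining for this reply."""
    remaining = budget_ms - elapsed_ms
    for s in speeds:                          # try the slowest settings first
        if est_service_ms / s <= remaining:
            return s
    return speeds[-1]                         # critical reply: run at full speed

# Example: 10 ms budget, 2 ms already spent in network and queuing, 3 ms of work
print(leaf_speed(10.0, 2.0, 3.0))             # -> 0.5 (6 ms of stretched work <= 8 ms left)
```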

    Energy-Efficient Scheduling for Homogeneous Multiprocessor Systems

    We present a number of novel algorithms, based on mathematical optimization formulations, for solving a homogeneous multiprocessor scheduling problem while minimizing the total energy consumption. In particular, for a system with a discrete speed set, we propose solving a tractable linear program. Our formulations are based on a fluid model and a global scheduling scheme, i.e., tasks are allowed to migrate between processors. The new methods are compared with three global energy/feasibility-optimal workload allocation formulations. Simulation results illustrate that our methods achieve both feasibility and energy optimality and outperform existing methods for constrained-deadline tasksets. Specifically, the results provided by our algorithm can achieve up to an 80% saving compared to an algorithm without a frequency scaling scheme and up to a 70% saving compared to a constant frequency scaling scheme for some simulated tasksets. Another benefit is that our algorithms can solve the scheduling problem in one step instead of using a recursive scheme. Moreover, our formulations can solve a more general class of scheduling problems, i.e., any periodic real-time taskset with arbitrary deadlines. Lastly, our algorithms can be applied to both online and offline scheduling schemes. Comment: Corrected typos: definition of J_i in Section 2.1; (3b)-(3c); definition of \Phi_A and \Phi_D in the paragraph after (6b). Previous equations were correct only for the special case of p_i = d_i.
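
    A toy linear program in the spirit of the fluid-model formulation described above, not the paper's exact LP: each task time-shares the discrete speeds so that it receives enough cycles per period, runs on at most one processor at a time, and the total processor time does not exceed the m processors, while an assumed cubic power model is minimized. Requires numpy and scipy; the speed set, power model and task densities are invented examples.

```python
# A toy LP in the spirit of a fluid-model, global-scheduling formulation with a
# discrete speed set (illustrative; not the paper's exact formulation).
# x[i,k] = fraction of time task i runs at speed s_k.

import numpy as np
from scipy.optimize import linprog

speeds = np.array([0.5, 0.75, 1.0])     # normalised discrete speed set (assumed)
power = speeds ** 3                     # assumed cubic dynamic-power model
util = np.array([0.4, 0.6, 0.9])        # task densities e_i / T_i at full speed (assumed)
m = 2                                   # number of identical processors
n, K = len(util), len(speeds)

c = np.tile(power, n)                   # objective: total average power

A_ub, b_ub = [], []
for i in range(n):
    rate = np.zeros(n * K)
    rate[i * K:(i + 1) * K] = -speeds   # sum_k x[i,k] * s_k >= u_i   (enough cycles)
    A_ub.append(rate); b_ub.append(-util[i])
    share = np.zeros(n * K)
    share[i * K:(i + 1) * K] = 1.0      # sum_k x[i,k] <= 1   (one processor at a time)
    A_ub.append(share); b_ub.append(1.0)
A_ub.append(np.ones(n * K)); b_ub.append(float(m))   # total processor time <= m

res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, bounds=(0, None))
print(res.x.reshape(n, K))              # per-task time share at each discrete speed
```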

    Fine-grained Energy and Thermal Management using Real-time Power Sensors

    With extensive use of battery powered devices such as smartphones, laptops an

    Dynamic Power Management for Reactive Stream Processing on the SCC Tiled Architecture

    Dynamic voltage and frequency scaling (DVFS) is a means to adjust the computing capacity and power consumption of computing systems to the application demands. DVFS generally provides a compromise between computing demands and power consumption, especially in resource-constrained computing systems. Many modern processors support some form of DVFS. In this article we focus on the development of an execution framework that provides lightweight DVFS support for reactive stream-processing systems (RSPS). RSPS are a common form of embedded control system, operating in direct response to inputs from their environment. Within the execution framework we focus on many-core scheduling support for the parallel execution of concurrent programs. We provide a DVFS strategy for RSPS that is simple and lightweight, to be used for dynamic adaptation of the power consumption at runtime. The simplicity of the DVFS strategy is made possible by focusing solely on the application domain of RSPS. The presented DVFS strategy does not require specific assumptions about the message arrival rate or the underlying scheduling method. While DVFS is a very active field, in contrast to most existing research our approach also works for platforms like many-core processors, where the power settings typically cannot be controlled individually for each computational unit. We also support dynamic scheduling with variable workload. While many research results are obtained with simulators, we present a parallel execution framework with experiments conducted on real hardware, using the SCC many-core processor. The results of our experimental evaluation confirm that our simple DVFS strategy provides the potential for significant energy savings on RSPS.
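
    A minimal sketch of a chip-wide DVFS policy of this flavour (not the article's strategy): because a platform such as the SCC cannot set voltage and frequency per core, one level is chosen for the whole power domain from the backlog of the stream input queues. The voltage/frequency levels and watermarks are invented examples.

```python
# A minimal sketch of chip-wide DVFS for reactive stream processing (not the
# article's framework): pick one voltage/frequency level for the whole power
# domain based on how full the input queues of the stream network are.

LEVELS = [(0.8, 100), (0.9, 200), (1.1, 400)]   # assumed (volts, MHz) operating points

def pick_level(queue_lengths, high_water=64, low_water=8):
    """Scale up when any input queue is filling, scale down when all are nearly empty."""
    backlog = max(queue_lengths)
    if backlog >= high_water:
        return LEVELS[-1]            # bursty input: highest level to keep up
    if backlog <= low_water:
        return LEVELS[0]             # little pending work: lowest level to save power
    return LEVELS[1]

print(pick_level([3, 12, 70]))       # -> (1.1, 400)
```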