
    Chapter One – An Overview of Architecture-Level Power- and Energy-Efficient Design Techniques

    Power dissipation and energy consumption have become the primary design constraints for almost all computer systems over the last 15 years. Both computer architects and circuit designers strive to reduce power and energy (without degrading performance) at all design levels, as power is currently the main obstacle to further scaling according to Moore's law. The aim of this survey is to provide a comprehensive overview of state-of-the-art power- and energy-efficient techniques. We classify the techniques by the component to which they apply, which is the most natural classification from a designer's point of view. We further divide the techniques by the component of power/energy they optimize (static or dynamic), thereby covering the complete low-power design flow at the architectural level. We conclude that only a holistic approach that combines optimizations at all design levels can lead to significant savings.
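    The survey's static/dynamic split follows the standard decomposition of CMOS power; as a reminder, the textbook form is shown below (these are the generic formulas, not equations specific to this survey):

```latex
% Standard decomposition of CMOS power into the two components the survey
% classifies techniques by (alpha = activity factor, C_eff = effective
% switched capacitance, f = clock frequency, I_leak = leakage current):
P_{\text{total}} \;\approx\;
  \underbrace{\alpha\, C_{\text{eff}}\, V_{dd}^{2}\, f}_{\text{dynamic (switching)}}
  \;+\;
  \underbrace{V_{dd}\, I_{\text{leak}}}_{\text{static (leakage)}}
```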

    Multi-objective Pareto front and particle swarm optimization algorithms for power dissipation reduction in microprocessors

    Progress in microelectronics has enabled higher integration densities and driven considerable development of on-board systems, but this growth runs up against the limiting factor of power dissipation. Higher power dissipation causes an immediate spread of the generated heat, which leads to thermal problems. Consequently, the system's total energy consumption increases as the system temperature increases. High temperatures in microprocessors and the large thermal energy of computer systems create serious problems for system reliability, performance, and cooling costs. The power consumed by processors, which grows mainly with the number of cores and the clock frequency, is dissipated in the form of heat and poses thermal challenges for chip designers. As microprocessor performance has increased remarkably in nanometer technology, power dissipation has become non-negligible. To address this problem, this article tackles power dissipation reduction for high-performance processors using multi-objective Pareto front (PF) and particle swarm optimization (PSO) algorithms, treating power dissipation as a primary objective while reducing the real delay of a target microprocessor unit. Simulations verify the conceptual fundamentals and the joint optimization of body and supply voltages (Vth-VDD), showing satisfactory results.
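    As a concrete illustration of the optimization machinery the abstract names, the sketch below runs a small particle swarm over (VDD, Vth) while keeping a Pareto archive of non-dominated (power, delay) points. The power and delay models, voltage bounds, and PSO coefficients are toy assumptions for illustration, not the article's models:

```python
import math, random

random.seed(0)

VDD_RANGE = (0.6, 1.2)   # supply voltage bounds (V) -- illustrative
VTH_RANGE = (0.2, 0.5)   # threshold voltage bounds (V) -- illustrative

def objectives(p):
    """Toy objectives: (power, delay) as functions of (VDD, Vth)."""
    vdd, vth = p
    power = vdd ** 2 + 0.3 * math.exp(-vth / 0.05)   # dynamic + leakage (toy)
    delay = vdd / (vdd - vth) ** 1.3                 # alpha-power-law style (toy)
    return power, delay

def dominates(a, b):
    """Pareto dominance: a is no worse in all objectives and better in one."""
    return all(x <= y for x, y in zip(a, b)) and a != b

def clamp(x, lo, hi):
    return max(lo, min(hi, x))

particles = [[random.uniform(*VDD_RANGE), random.uniform(*VTH_RANGE)]
             for _ in range(30)]
velocity = [[0.0, 0.0] for _ in particles]
pbest = [p[:] for p in particles]
archive = []  # non-dominated (Pareto-front) positions found so far

def update_archive(p):
    f = objectives(p)
    if any(dominates(objectives(a), f) for a in archive):
        return                      # p is dominated by an archived point
    archive[:] = [a for a in archive if not dominates(f, objectives(a))]
    archive.append(p[:])

for _ in range(100):
    for i, p in enumerate(particles):
        update_archive(p)
        if dominates(objectives(p), objectives(pbest[i])):
            pbest[i] = p[:]
        guide = random.choice(archive)   # leader drawn from the Pareto archive
        for d in range(2):
            velocity[i][d] = (0.7 * velocity[i][d]
                              + 1.5 * random.random() * (pbest[i][d] - p[d])
                              + 1.5 * random.random() * (guide[d] - p[d]))
            p[d] += velocity[i][d]
        p[0] = clamp(p[0], *VDD_RANGE)
        p[1] = clamp(p[1], *VTH_RANGE)

for a in sorted(archive, key=lambda p: objectives(p)[0])[:5]:
    pw, dl = objectives(a)
    print(f"VDD={a[0]:.2f} Vth={a[1]:.2f} -> power={pw:.3f} delay={dl:.3f}")
```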

    Dynamic and Leakage Power-Composition Profile Driven Co-Synthesis for Energy and Cost Reduction

    Recent research has shown that combining dynamic voltage scaling (DVS) and adaptive body bias (ABB) techniques achieves the highest reduction in embedded-system energy dissipation [1]. In this paper we show that it is possible to produce energy savings comparable to those obtained using combined DVS and ABB, but at reduced hardware cost, by employing processing elements (PEs) with separate DVS or ABB capability. A co-synthesis methodology that is aware of the tasks' power-composition profile (the ratio of dynamic power to leakage power) is presented. The methodology selects voltage-scaling capabilities (DVS, ABB, or combined DVS and ABB) for the PEs, then maps, schedules, and voltage-scales applications given as task graphs with timing constraints, aiming at dynamic and leakage energy reduction at low hardware cost. We conduct detailed experiments, including a real-life example, to demonstrate the effectiveness of our methodology. We demonstrate that it is possible to produce designs that contain PEs with only the DVS or the ABB technique but whose energy dissipation is only 4.4% higher than that of the same designs employing PEs with combined DVS and ABB capabilities.
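    To make the power-composition idea concrete, here is a minimal sketch of how a co-synthesis step might pick a voltage-scaling capability per PE from the dynamic-to-leakage ratio of the tasks mapped onto it. The thresholds, field names, and numbers are illustrative assumptions, not the paper's actual algorithm:

```python
# A minimal sketch of profile-driven capability selection (assumed
# thresholds and data layout, not the paper's co-synthesis flow).

def select_capability(tasks, lo=0.5, hi=2.0):
    """Pick a voltage-scaling capability for one PE from the
    power-composition profile (dynamic/leakage ratio) of its tasks."""
    ratio = sum(t["p_dyn"] for t in tasks) / sum(t["p_leak"] for t in tasks)
    if ratio > hi:       # dynamic power dominates -> supply scaling pays off
        return "DVS"
    if ratio < lo:       # leakage dominates -> body biasing pays off
        return "ABB"
    return "DVS+ABB"     # mixed profile -> only combined scaling is worthwhile

pe_tasks = [
    {"name": "fft", "p_dyn": 120e-3, "p_leak": 20e-3},   # watts, invented
    {"name": "fir", "p_dyn": 90e-3,  "p_leak": 15e-3},
]
print(select_capability(pe_tasks))   # -> "DVS" for this dynamic-heavy profile
```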

    Dynamic Voltage Scaling Aware Delay Fault Testing

    The application of Dynamic Voltage Scaling (DVS) to reduce energy consumption may have a detrimental impact on the quality of the manufacturing tests employed to detect permanent faults. This paper analyses the influence of different voltage/frequency settings on fault detection within a DVS application. In particular, the effect of supply voltage on different types of delay faults is considered. This paper presents a study of these problems with simulation results. We have demonstrated that the test application time increases as we reduce the test voltage. We have also shown that for newer technologies we do not have to go to very low voltage levels for delay fault testing. We conclude that it is necessary to test at more than one operating voltage and that the lowest operating voltage does not necessarily give the best fault coverage.
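    The finding that test time grows as the test voltage drops follows directly from how gate delay scales with supply voltage. The sketch below uses the common alpha-power-law delay approximation; the parameter values are illustrative assumptions, not figures from the paper:

```python
# Toy use of the alpha-power-law delay model: the test clock must track
# the slower gates at a reduced test voltage, so applying the same
# pattern set takes longer as VDD drops.

def gate_delay(vdd, vth=0.3, alpha=1.3, k=1.0):
    """Alpha-power-law approximation: delay grows as VDD approaches Vth."""
    return k * vdd / (vdd - vth) ** alpha

patterns = 10_000                        # size of the delay-fault pattern set
for vdd in (1.2, 1.0, 0.8, 0.6):
    period = gate_delay(vdd)             # critical-path delay sets the period
    print(f"VDD={vdd:.1f} V: relative test time ~ {patterns * period:,.0f}")
```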

    Exploiting Adaptive Techniques to Improve Processor Energy Efficiency

    Rapid device miniaturization keeps introducing challenges in building energy-efficient microprocessors. As transistor sizes continue to decrease, more uncertainties emerge in their operation. At the same time, integrating more and more transistors on a single chip accentuates the need to lower the supply voltage. This dissertation investigates one of the primary device uncertainties, timing error, which becomes a microprocessor performance bottleneck in the near-threshold computing (NTC) era. It then proposes various innovative techniques that exploit these observations to maintain processor energy efficiency in the context of the emerging challenges. Evaluated with a cross-layer methodology, the proposed approaches achieve substantial improvements in processor energy efficiency compared to other state-of-the-art techniques.

    Power efficient approaches to redundant multithreading

    Noise and radiation-induced soft errors (transient faults) in computer systems have increased significantly over the last few years and are expected to increase even more as we move toward smaller transistor sizes and lower supply voltages. Fault detection and recovery can be achieved through redundancy. The emergence of chip multiprocessors (CMPs) makes it possible to execute redundant threads on a chip and provide relatively low-cost reliability. State-of-the-art implementations execute two copies of the same program as two threads (redundant multithreading), either on the same or on separate processor cores in a CMP, and periodically check results. Although this solution has favorable performance and reliability properties, every redundant instruction flows through a high-frequency complex out-of-order pipeline, thereby incurring a high power consumption penalty. This paper proposes mechanisms that attempt to provide reliability at a modest power and complexity cost. When executing a redundant thread, the trailing thread benefits from the information produced by the leading thread. We take advantage of this property and comprehensively study different strategies to reduce the power overhead of the trailing core in a CMP. These strategies include dynamic frequency scaling, in-order execution, and parallelization of the trailing thread.
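    One of the strategies named above, dynamic frequency scaling of the trailing core, can be pictured as a simple feedback controller: the trailing thread only needs to run fast enough to keep up with the results forwarded by the leading thread. The sketch below is an illustrative model; the buffer size, frequency steps, and occupancy rule are assumptions, not the paper's mechanism:

```python
# Illustrative trailing-core DVFS policy: scale frequency from the
# occupancy of the buffer carrying leading-thread results.

F_STEPS = [1.0, 2.0, 3.0, 4.0]   # available frequencies (GHz), assumed

def trailing_frequency(occupancy, capacity):
    """Map result-buffer occupancy to a frequency step: a fuller buffer
    means the trailer is falling behind and must speed up."""
    fill = occupancy / capacity
    idx = min(int(fill * len(F_STEPS)), len(F_STEPS) - 1)
    return F_STEPS[idx]

for occ in (10, 60, 120, 250):
    f = trailing_frequency(occ, 256)
    print(f"occupancy={occ:3d}/256 -> trailing core at {f} GHz")
```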

    inSense: A Variation and Fault Tolerant Architecture for Nanoscale Devices

    Transistor technology scaling has been the driving force in improving the size, speed, and power consumption of digital systems. As devices approach atomic size, however, their reliability and performance are increasingly compromised due to reduced noise margins, difficulties in fabrication, and emergent nano-scale phenomena. Scaled CMOS devices, in particular, suffer from process variations such as random dopant fluctuation (RDF) and line edge roughness (LER), transistor degradation mechanisms such as negative-bias temperature instability (NBTI) and hot-carrier injection (HCI), and increased sensitivity to single event upsets (SEUs). Consequently, future devices may exhibit reduced performance, diminished lifetimes, and poor reliability. This research proposes a variation and fault tolerant architecture, the inSense architecture, as a circuit-level solution to the problems induced by the aforementioned phenomena. The inSense architecture entails augmenting circuits with introspective and sensory capabilities which are able to dynamically detect and compensate for process variations, transistor degradation, and soft errors. This approach creates "smart" circuits able to function despite the use of unreliable devices and is applicable to current CMOS technology as well as next-generation devices using new materials and structures. Furthermore, this work presents an automated prototype implementation of the inSense architecture targeted to CMOS devices, which is evaluated via implementation in ISCAS '85 benchmark circuits. The automated prototype implementation is functionally verified and characterized: it is found that error detection capability (with error windows from ≈30-400 ps) can be added for less than 2% area overhead for circuits of non-trivial complexity. Single event transient (SET) detection capability (configurable with target set-points) is found to be functional, although it generally tracks the standard DMR implementation with respect to overheads.
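    The "error window" notion above can be modeled behaviorally as window-based late-transition detection, in the spirit of double-sampling error detectors. The sketch below is an illustration of that general idea, not inSense's circuit; the 30-400 ps window bound comes from the abstract, everything else is assumed:

```python
# Behavioral model of window-based timing-error detection: a transition
# arriving after the clock edge but within the detection window is caught.

ERROR_WINDOW_PS = 200   # detection window after the edge (within 30-400 ps)

def detect_timing_error(data_arrival_ps, clock_period_ps):
    """Flag a timing error when data arrives late but still inside the
    sensory window, so the detector can observe the late transition."""
    slack = clock_period_ps - data_arrival_ps
    return -ERROR_WINDOW_PS <= slack < 0

print(detect_timing_error(data_arrival_ps=1100, clock_period_ps=1000))  # True
print(detect_timing_error(data_arrival_ps=900,  clock_period_ps=1000))  # False
```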

    Power Management for Deep Submicron Microprocessors

    As VLSI technology scales, the enhanced performance of smaller transistors comes at the expense of increased power consumption. In addition to the dynamic power consumed by the circuits, there is a tremendous increase in leakage power consumption, which is further exacerbated by increasing operating temperatures. The total power consumption of modern processors is distributed between the processor core, memory, and interconnects. In this research, two novel power management techniques are presented, targeting the functional units and the global interconnects.

    First, since most leakage control schemes for processor functional units are based on circuit-level techniques, such schemes inherently lack information about the operational profile of higher-level components of the system. This is a barrier to the pivotal task of predicting standby time. Without this prediction, it is extremely difficult to assess the value of any leakage control scheme. Consequently, a methodology that can predict standby time is highly beneficial in bridging the gap between the information available at the application level and the circuit implementations. In this work, a novel Dynamic Sleep Signal Generator (DSSG) is presented. It utilizes usage traces extracted from cycle-accurate simulations of benchmark programs to predict the long standby periods associated with the various functional units. The DSSG bases its decisions on the current and previous standby states of the functional units to accurately predict the length of the next standby period. The DSSG presents an alternative to Static Sleep Signal Generation (SSSG), which is based on static counters that trigger the generation of the sleep signal when the functional units have idled for a prespecified number of cycles. The test results of the DSSG are obtained using a modified RISC superscalar processor implemented in SimpleScalar, the most widely accepted open-source vehicle for architectural analysis. In addition, the results are further verified with a simultaneous multithreading simulator implemented in SMTSIM. Leakage saving results show an increase of up to 146% in leakage savings using the DSSG versus the SSSG, with an accuracy of 60-80% in predicting long standby periods.

    Second, chip designers, in their effort to achieve timing closure, have focused on achieving the lowest possible interconnect delay through buffer insertion and routing techniques. This approach, though, taxes the power budget of modern ICs, especially those intended for wireless applications. Also, in order to achieve more functionality, die sizes are constantly increasing. This trend is leading to an increase in the average global interconnect length which, in turn, requires more buffers to achieve timing closure. Unconstrained buffering is bound to adversely affect overall chip performance if power consumption is added as a major performance metric. In fact, the number of global interconnect buffers is expected to reach hundreds of thousands to achieve an appropriate timing closure. To mitigate the impact of the power consumed by the interconnect buffers, a power-efficient multi-pin routing technique is proposed in this research. The problem is based on a graph representation of the routing possibilities, including buffer insertion, and identifies the least-power path between the interconnect source and a set of sinks. The novel multi-pin routing technique is tested by applying it to the ISPD and IBM benchmarks to verify its accuracy, complexity, and solution quality. The results obtained indicate that average power savings as high as 32% are achieved for the 130-nm technology with no impact on the maximum chip frequency.
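    Two sketches make the abstract's techniques more concrete. First, the DSSG can be pictured as a history-based predictor of standby length, in contrast to the SSSG's fixed idle counter; the predictor rule, threshold, and numbers below are illustrative assumptions, not the dissertation's exact design:

```python
# Sketch of the DSSG idea: predict the next standby period of a functional
# unit from its recent standby history, and sleep only when a long standby
# is predicted (the averaging rule and threshold are assumed).

SLEEP_THRESHOLD = 100   # cycles a standby must last for sleeping to pay off

def dssg_sleep(prev_standby, curr_standby):
    """Predict the next standby length from the last two observed standby
    periods (a simple averaging predictor, used here for illustration)."""
    predicted = (prev_standby + curr_standby) / 2
    return predicted >= SLEEP_THRESHOLD

def sssg_sleep(idle_cycles, counter_limit=50):
    """Static scheme: sleep once the unit has idled a fixed cycle count."""
    return idle_cycles >= counter_limit

print(dssg_sleep(prev_standby=150, curr_standby=220))  # True: long standby ahead
print(dssg_sleep(prev_standby=10,  curr_standby=15))   # False: not worth sleeping
```

    Second, the least-power routing step can be sketched as a shortest-path search over the routing graph, with edge weights modeling wire power and node weights modeling inserted-buffer power. For brevity, this toy charges a buffer at every intermediate node, whereas the actual technique decides buffer insertion per candidate location and handles multi-pin nets; the graph and costs are invented for illustration:

```python
import heapq

# Toy routing graph: node -> [(neighbor, wire segment power)].
EDGES = {
    "src":  [("a", 3.0), ("b", 5.0)],
    "a":    [("b", 1.0), ("sink", 6.0)],
    "b":    [("sink", 2.0)],
    "sink": [],
}
BUFFER_POWER = {"a": 0.5, "b": 0.8}   # power of a buffer inserted at a node

def least_power_path(src, dst):
    """Dijkstra over the routing graph, minimizing wire + buffer power."""
    best = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        cost, node = heapq.heappop(heap)
        if node == dst:
            return cost
        if cost > best.get(node, float("inf")):
            continue
        for nxt, wire in EDGES[node]:
            nxt_cost = cost + wire + BUFFER_POWER.get(nxt, 0.0)
            if nxt_cost < best.get(nxt, float("inf")):
                best[nxt] = nxt_cost
                heapq.heappush(heap, (nxt_cost, nxt))
    return float("inf")

print(least_power_path("src", "sink"))  # 3.0 + 0.5 + 1.0 + 0.8 + 2.0 = 7.3
```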

    Dynamic Voltage Scaling of Supply and Body Bias Exploiting Software Runtime Distribution

