
    Performance and power management for multi-core processors

    This dissertation addresses the problem of power and performance management for computing systems ranging from single-voltage-island multi-core processors to power-constrained extreme-scale cloud systems. Balancing power and performance in modern computing systems is a complex optimization problem. This challenge is addressed by the thesis statement: improving performance and power consumption in modern computing systems requires new techniques, and the body of control theory can provide the basis for such solutions. The thesis develops dynamic models for throughput and power that adapt well to workload variations. These models are general and can be applied to various kinds of computing frameworks. Based on these models, we use feedback controllers for throughput regulation and power regulation. The controllers are based on integrators with variable gain, designed to stabilize the closed-loop system and to respond rapidly to workload changes over short time frames. The feedback controllers are robust to model uncertainties and computation errors in the loop, and they converge quickly despite such errors. The thesis addresses performance and power management through three main contributions: (1) effective and efficient power and performance management techniques for a single-voltage-island multi-core processor; (2) maximizing power efficiency under a power cap in a multi-core processor composed of several voltage islands; and (3) a hierarchical power management technique that improves performance and energy efficiency under power budgets in a cloud system.
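    The feedback scheme described above can be illustrated with a minimal sketch: a discrete-time integral controller with variable gain that drives measured throughput toward a setpoint by adjusting core frequency. The platform hooks (measure_throughput, set_frequency), the frequency range, and the gain schedule below are illustrative assumptions, not the dissertation's actual models or interfaces.

```python
# Minimal sketch of a discrete-time integral feedback controller that
# regulates throughput by adjusting core frequency. The platform hooks
# (measure_throughput, set_frequency) are hypothetical placeholders.

F_MIN, F_MAX = 0.8e9, 3.2e9   # allowable frequency range in Hz (assumed)

def clamp(x, lo, hi):
    return max(lo, min(hi, x))

def throughput_regulator(setpoint, measure_throughput, set_frequency,
                         periods, dt=0.01):
    """Integral control loop: drive measured throughput toward `setpoint`."""
    freq = F_MIN
    integral = 0.0
    for _ in range(periods):
        y = measure_throughput(dt)          # e.g. instructions retired per dt
        error = setpoint - y
        # Variable gain: scale the integral gain with the current operating
        # point so the loop stays stable as workload sensitivity changes.
        ki = 0.5 * freq / max(setpoint, 1.0)
        integral += ki * error * dt
        freq = clamp(F_MIN + integral, F_MIN, F_MAX)
        set_frequency(freq)
    return freq
```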

    Dynamic Voltage and Frequency Scaling for Wireless Network-on-Chip

    Previously, research and design of Network-on-Chip (NoC) paradigms were mainly focused on improving the performance of the interconnection network. With an emerging wide range of low-power applications and energy-constrained high-performance applications, it is highly desirable to have NoCs that are energy efficient without incurring a performance penalty. In the design of high-performance massive multi-core chips, power and heat have become dominant constraints. Increased power consumption raises chip temperature, which in turn decreases chip reliability and performance and increases cooling costs. It has been shown that the Small-World Wireless Network-on-Chip (SWNoC) architecture, which replaces multi-hop wire-line paths in a NoC with high-bandwidth single-hop long-range wireless links, reduces overall energy dissipation compared to wire-line mesh-based NoC architectures. However, the overall energy dissipation of the wireless NoC is still dominated by the wire-line links and switches (buffers). Dynamic Voltage and Frequency Scaling (DVFS) is an efficient technique for significant power savings in microprocessors; it has been proposed and deployed in modern microprocessors by exploiting the variance in processor utilization. In a Network-on-Chip, the wire-line links and buffers are likewise rarely fully utilized, even across different applications. Hence, by exploiting these utilization characteristics under varying traffic, DVFS can be applied to the switches and wire-line links for substantial power savings. In this thesis, a history-based DVFS mechanism is proposed. The mechanism uses the past utilization of the wire-line links and buffers to predict future traffic and accordingly tunes the voltage and frequency of the links and buffers dynamically for each time window. It minimizes power consumption dynamically while substantially maintaining high system performance. Performance analysis of the DVFS-enabled wireless NoC shows that overall energy dissipation improves by around 40% compared to the baseline small-world wireless NoC.
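    As a rough illustration of such a history-based policy (not the thesis's exact mechanism), the sketch below predicts the next window's link/buffer utilization from a weighted history and selects the lowest voltage/frequency level whose capacity still covers the prediction; the V/F table, history length, and weighting are assumed values.

```python
# Hedged sketch of history-based DVFS for NoC links/buffers: predict the next
# window's utilization from recent history and pick the lowest V/F level whose
# capacity covers the prediction. The level table and weights are assumptions.

from collections import deque

# (voltage in V, frequency in GHz, max sustainable utilization) - assumed levels
VF_LEVELS = [(0.8, 1.0, 0.25), (0.9, 1.5, 0.50), (1.0, 2.0, 0.75), (1.1, 2.5, 1.00)]

class HistoryDVFS:
    def __init__(self, history_len=4):
        self.history = deque(maxlen=history_len)

    def predict(self):
        """Weighted average of past window utilizations (recent windows weigh more)."""
        if not self.history:
            return 1.0                      # no history yet: assume full load
        weights = range(1, len(self.history) + 1)
        return sum(w * u for w, u in zip(weights, self.history)) / sum(weights)

    def next_level(self, observed_utilization):
        """Record the finished window and choose the V/F level for the next one."""
        self.history.append(observed_utilization)
        predicted = self.predict()
        for volt, freq, cap in VF_LEVELS:
            if predicted <= cap:
                return volt, freq
        return VF_LEVELS[-1][:2]            # fall back to the highest level
```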

    An efficient design space exploration framework to optimize power-efficient heterogeneous many-core multi-threading embedded processor architectures

    By the middle of this decade, uniprocessor architecture performance had hit a roadblock due to a combination of factors: excessive power dissipation from high operating frequencies, growing memory access latencies, diminishing returns on deeper instruction pipelines, and a saturation of available instruction-level parallelism in applications. An attractive and viable alternative embraced by processor vendors was the multi-core architecture, in which throughput is improved through micro-architectural features such as multiple processor cores, interconnects, and low-latency shared caches integrated on a single chip. The individual cores are often simpler than their uniprocessor counterparts, use hardware multi-threading to exploit thread-level parallelism and hide latency, and typically achieve better performance-power figures. The overwhelming success of multi-core microprocessors in both high-performance and embedded computing platforms has motivated chip architects to scale multi-core processors to many-core designs with hundreds of cores on chip to further improve throughput. With such complex large-scale architectures, however, several key design issues must be addressed. First, a wide range of micro-architectural parameters, such as L1 caches, load/store queues, shared cache structures, and interconnection topologies, together with the non-linear interactions among them, define a vast non-linear multivariate micro-architectural design space for many-core processors; the traditional method of extensive in-loop simulation to explore this design space is simply not practical. Second, to accurately evaluate the performance (measured in cycles per instruction, CPI) of a candidate design, contention at the shared cache must be accounted for in addition to the cycle-by-cycle behavior of the large number of cores, which superlinearly increases the number of simulation cycles per iteration of the design exploration. Third, single-thread performance does not scale linearly with the number of hardware threads per core or the number of cores, due to the memory wall effect. This means that at every step of the design process, designers must ensure that single-thread performance is not unacceptably slowed while overall throughput increases. While all these factors affect design decisions in both high-performance and embedded many-core processors, the design of embedded processors for complex applications such as networking, smart power grids, battlefield decision-making, consumer electronics, and biomedical devices is fundamentally different from its high-performance counterpart because of the need to consider (i) low power and (ii) real-time operation. This implies that the design objective for embedded many-core processors cannot simply be to maximize performance, but to improve it in a way that minimizes overall power dissipation and meets all real-time constraints. This necessitates additional power estimation models at the design stage to accurately measure the cost and reliability of candidate designs during the exploration phase. In this dissertation, a statistical machine learning (SML) based design exploration framework is presented which employs an execution-driven cycle-accurate simulator to accurately measure the power and performance of embedded many-core processors. The embedded many-core processor domain considered is network processors (NePs) used to process network IP packets.
    Future-generation NePs, required to operate at terabits-per-second network speeds, capture all the aspects of a complex embedded application: shared data structures, a large volume of compute-intensive and data-intensive real-time tasks, and a high level of task (packet) level parallelism. Statistical machine learning is used to efficiently model the performance and power of candidate designs over wide ranges of micro-architectural parameters. The method inherently minimizes the number of in-loop simulations in the exploration framework and efficiently captures the non-linear interactions between the micro-architectural design parameters. To ensure scalability, the design space is partitioned into (i) core-level micro-architectural parameters, used to optimize single-core architectures subject to the real-time constraints, and (ii) shared-memory-level micro-architectural parameters, used to explore the shared interconnection network and shared cache memory architectures, so that overall optimality is achieved. The cost function of our exploration algorithm is total power dissipation, which is minimized subject to the real-time throughput constraint (determined from the terabit optical network router line speed) required by the IP packet processing embedded application.
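    The exploration loop can be sketched, under stated assumptions, as a surrogate-assisted search: a small sample of designs is simulated, cheap regression models for power and CPI are fitted (with pairwise interaction terms to stand in for non-linear effects), and the remaining candidates are ranked without further in-loop simulation. The simulator interface, sample size, and CPI budget below are hypothetical, not the framework's actual components.

```python
# Hedged sketch of surrogate-assisted design space exploration: train cheap
# regression models for power and CPI on a small set of simulated designs,
# then rank the remaining candidates without further in-loop simulation.

import itertools
import numpy as np

def features(x):
    """Design vector plus pairwise products, to capture non-linear interactions."""
    x = np.asarray(x, dtype=float)
    pairs = [a * b for a, b in itertools.combinations(x, 2)]
    return np.concatenate(([1.0], x, pairs))

def fit(X, y):
    """Least-squares surrogate model: returns a coefficient vector."""
    A = np.array([features(x) for x in X])
    coef, *_ = np.linalg.lstsq(A, np.asarray(y, dtype=float), rcond=None)
    return coef

def explore(candidates, simulate, n_train=32, cpi_budget=2.0):
    """Simulate a small sample, fit power/CPI surrogates, then pick the
    lowest-predicted-power design that meets the CPI (throughput) constraint."""
    rng = np.random.default_rng(0)
    n_train = min(n_train, len(candidates))
    sample = rng.choice(len(candidates), size=n_train, replace=False)
    sims = [simulate(candidates[i]) for i in sample]      # (power, cpi) pairs
    power_model = fit([candidates[i] for i in sample], [p for p, _ in sims])
    cpi_model = fit([candidates[i] for i in sample], [c for _, c in sims])
    best, best_power = None, float("inf")
    for x in candidates:
        p = features(x) @ power_model
        c = features(x) @ cpi_model
        if c <= cpi_budget and p < best_power:
            best, best_power = x, p
    return best
```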

    Maximizing heterogeneous processor performance under power constraints


    Dynamic Thermal and Power Management: From Computers to Buildings

    Thermal and power management have become increasingly important for both computing and physical systems. Computing systems, from real-time embedded systems to data centers, require effective thermal and power management to prevent overheating and save energy. Meanwhile, as major consumers of energy, buildings face the challenge of reducing the energy consumed by air conditioning while maintaining occupant comfort. In this dissertation we investigate dynamic thermal and power management for computer systems and buildings. (1) We present thermal control under utilization bound (TCUB), a novel control-theoretic thermal management algorithm designed for single-core real-time embedded systems. A salient feature of TCUB is that it maintains both the desired processor temperature and real-time performance. (2) To address the unique challenges posed by multicore processors, we develop the real-time multicore thermal control (RT-MTC) algorithm. RT-MTC employs a feedback control loop to enforce the desired temperature and CPU utilization of the multicore platform via dynamic voltage and frequency scaling. (3) We study dynamic thermal management for real-time services running on server clusters, developing control-theoretic thermal balancing (CTB) to dynamically balance server temperatures by distributing clients' service requests across servers. (4) We propose CloudPowerCap, a power cap management system for virtualized cloud computing infrastructure. The novelty of CloudPowerCap lies in an integrated approach that coordinates power budget management and resource management in a cloud computing environment. Finally, we extend our research to the physical environment by exploring several fundamental problems of thermal and power management in buildings. We analyze spatial and temporal data acquired from a real-world auditorium instrumented with a multi-modal sensor network. We propose a data mining technique to determine the appropriate number and placement of temperature sensors for estimating the spatiotemporal temperature distribution of the auditorium. Furthermore, we explore the potential energy savings achievable through occupancy-based HVAC scheduling, based on real occupancy data from the auditorium.
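    In the spirit of the feedback loops above (and only as an assumed sketch, not TCUB or RT-MTC themselves), a controller can step frequency down when temperature exceeds its setpoint and step it up when CPU utilization threatens the schedulable bound; the sensor/actuator hooks, frequency table, and the 0.69 utilization bound below are illustrative choices.

```python
# Minimal sketch of a temperature/utilization feedback loop driven by DVFS:
# lower frequency when temperature exceeds its setpoint, raise it when CPU
# utilization exceeds the schedulable bound. Hooks, levels, and the 0.69
# bound (the classic rate-monotonic bound) are illustrative assumptions.

F_LEVELS = [1.0, 1.4, 1.8, 2.2, 2.6]   # available frequencies in GHz (assumed)

def thermal_control_step(level, temp, util, temp_setpoint=70.0, util_bound=0.69):
    """Return the frequency-level index to use for the next control period."""
    if temp > temp_setpoint:
        level = max(0, level - 1)                   # too hot: step frequency down
    elif util > util_bound:
        level = min(len(F_LEVELS) - 1, level + 1)   # deadlines at risk: step up
    return level

def run(read_temp, read_util, set_freq_ghz, periods=100):
    """Run the control loop for a fixed number of periods."""
    level = len(F_LEVELS) - 1
    for _ in range(periods):
        level = thermal_control_step(level, read_temp(), read_util())
        set_freq_ghz(F_LEVELS[level])
```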