    Dynamic Thermal and Power Management: From Computers to Buildings

    Thermal and power management have become increasingly important for both computing and physical systems. Computing systems, from real-time embedded systems to data centers, require effective thermal and power management to prevent overheating and save energy. In the meantime, buildings, as major consumers of energy, face the challenge of reducing the energy used for air conditioning while maintaining occupant comfort. In this dissertation we investigate dynamic thermal and power management for computer systems and buildings. (1) We present thermal control under utilization bound (TCUB), a novel control-theoretic thermal management algorithm designed for single-core real-time embedded systems. A salient feature of TCUB is that it maintains both the desired processor temperature and real-time performance. (2) To address the unique challenges posed by multicore processors, we develop the real-time multicore thermal control (RT-MTC) algorithm. RT-MTC employs a feedback control loop to enforce the desired temperature and CPU utilization of the multicore platform via dynamic voltage and frequency scaling. (3) We study dynamic thermal management for real-time services running on server clusters. We develop control-theoretic thermal balancing (CTB) to dynamically balance the temperatures of servers by distributing clients' service requests across servers. (4) We propose CloudPowerCap, a power cap management system for virtualized cloud computing infrastructure. The novelty of CloudPowerCap lies in its integrated approach to coordinating power budget management and resource management in a cloud computing environment. Finally, we extend our research to the physical environment by exploring several fundamental problems of thermal and power management in buildings. We analyze spatial and temporal data acquired from a real-world auditorium instrumented with a multi-modal sensor network. We propose a data mining technique to determine the appropriate number and placement of temperature sensors for estimating the spatiotemporal temperature distribution of the auditorium. Furthermore, we explore the potential energy savings achievable through occupancy-based HVAC scheduling, using real occupancy data from the auditorium.
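
    A minimal sketch of the feedback idea behind a controller like RT-MTC: a periodic proportional-integral loop that scales CPU frequency (DVFS) to keep the hottest core near a temperature set point while never dropping below the frequency needed to respect a utilization bound. The platform hooks (read_max_core_temp, read_utilization, set_cpu_frequency), gains, and DVFS levels are illustrative assumptions, not the dissertation's actual interfaces.

```python
TEMP_SET_POINT = 70.0      # desired maximum core temperature (degrees Celsius), assumed
UTIL_BOUND = 0.7           # schedulable utilization bound for the real-time task set, assumed
FREQ_LEVELS = [1.2e9, 1.6e9, 2.0e9, 2.4e9]  # hypothetical DVFS levels (Hz), ascending

def control_step(state, read_max_core_temp, read_utilization, set_cpu_frequency,
                 kp=0.05, ki=0.01):
    """One periodic invocation of the thermal/utilization control loop."""
    temp_error = read_max_core_temp() - TEMP_SET_POINT
    state["integral"] += temp_error

    # PI term maps the temperature error to a change in normalized frequency.
    adjustment = -(kp * temp_error + ki * state["integral"])
    state["freq_norm"] = min(1.0, max(0.0, state["freq_norm"] + adjustment))

    # Do not drop below the frequency required to keep utilization under the
    # bound, otherwise real-time tasks would risk missing deadlines.
    min_norm = min(1.0, read_utilization() / UTIL_BOUND)
    target_norm = max(state["freq_norm"], min_norm)

    # Quantize to the lowest available DVFS level that covers the target.
    target_freq = FREQ_LEVELS[-1]
    for f in FREQ_LEVELS:
        if f >= target_norm * FREQ_LEVELS[-1]:
            target_freq = f
            break
    set_cpu_frequency(target_freq)
    return target_freq

if __name__ == "__main__":
    # Stub sensors/actuator for illustration only.
    state = {"integral": 0.0, "freq_norm": 1.0}
    print(control_step(state, lambda: 78.0, lambda: 0.5, lambda f: None))
```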

    A Survey of Prediction and Classification Techniques in Multicore Processor Systems

    In multicore processor systems, being able to accurately predict the future provides new optimization opportunities that otherwise could not be exploited. For example, an oracle able to predict a certain application's behavior running on a smart phone could direct the power manager to switch to appropriate dynamic voltage and frequency scaling modes that would guarantee minimum levels of desired performance while saving energy and thereby prolonging battery life. Using predictions enables systems to become proactive rather than continuing to operate in a reactive manner. This prediction-based proactive approach has become increasingly popular in the design and optimization of integrated circuits and of multicore processor systems. Prediction ranges from simple forecasting to sophisticated machine-learning-based prediction and classification that learns from existing data, employs data mining, and predicts future behavior. This can be exploited by novel optimization techniques that span all layers of the computing stack. In this survey paper, we present a discussion of the most popular prediction and classification techniques in the general context of computing systems, with emphasis on multicore processors. The paper is far from comprehensive, but it will help the reader interested in employing prediction in the optimization of multicore processor systems.
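
    As a concrete illustration of prediction-driven proactive management, here is a minimal sketch that uses an exponentially weighted moving average (EWMA) predictor to forecast next-interval CPU demand and pick a DVFS level ahead of time. The survey covers far richer machine-learning predictors; the EWMA model, frequency table, and headroom margin below are assumptions for illustration only.

```python
# Hypothetical table of (utilization capacity, frequency in Hz) pairs.
FREQ_TABLE = [(0.25, 0.8e9), (0.50, 1.2e9), (0.75, 1.8e9), (1.00, 2.4e9)]

class EwmaPredictor:
    """Predicts next-interval CPU utilization from the observed history."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.estimate = 0.5  # start from a neutral guess

    def update(self, observed_util):
        self.estimate = self.alpha * observed_util + (1 - self.alpha) * self.estimate
        return self.estimate

def choose_frequency(predicted_util, headroom=0.1):
    """Pick the lowest frequency whose capacity covers the predicted demand."""
    for capacity, freq in FREQ_TABLE:
        if predicted_util + headroom <= capacity:
            return freq
    return FREQ_TABLE[-1][1]

# A proactive power manager would call update() each interval with the measured
# utilization and apply choose_frequency() for the upcoming interval.
predictor = EwmaPredictor()
for util in [0.2, 0.35, 0.6, 0.9]:
    print(choose_frequency(predictor.update(util)))
```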

    Performance Controlled Power Optimization for Virtualized Internet Datacenters

    Modern data centers must provide performance assurance for complex software systems such as web applications. In addition, the power consumption of data centers needs to be minimized to reduce operating costs and avoid system overheating. In recent years, more and more data centers have adopted server virtualization for resource sharing, reducing hardware and operating costs by consolidating applications previously running on multiple physical servers onto a single physical server. In this dissertation, several power-efficient algorithms are proposed to effectively reduce server power consumption while achieving the required application-level performance for virtualized servers. First, at the server level, this dissertation proposes two control solutions based on dynamic voltage and frequency scaling (DVFS) and request batching. The two solutions share a performance balancing technique that keeps all virtual machines at approximately the same performance level relative to their allowed peak values. When the workload intensity is light, request batching is adopted: a controller determines the period for batching incoming requests while the processor is put into sleep mode. When the workload intensity changes from light to moderate, request batching is automatically switched to DVFS, which increases the processor frequency to provide performance guarantees. Second, at the datacenter level, this dissertation proposes a performance-controlled power optimization solution for virtualized server clusters running multi-tier applications. The solution utilizes both DVFS and server consolidation for maximized power savings by integrating feedback control with optimization strategies. At the application level, a multi-input-multi-output controller is designed to achieve the desired performance for applications spanning multiple VMs on a short time scale, by reallocating CPU resources and applying DVFS. At the cluster level, a power optimizer incrementally consolidates VMs onto the most power-efficient servers on a longer time scale. Finally, this dissertation proposes a VM scheduling algorithm that exploits core performance heterogeneity to optimize overall system energy efficiency. The four algorithms at the three different levels are demonstrated with empirical results on hardware testbeds and trace-driven simulations and compared against state-of-the-art baselines.
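
    A minimal sketch of the workload-dependent mode switch described above: under light load the server batches incoming requests and sleeps, under moderate load it falls back to DVFS to protect response times. The threshold, gains, state layout, and the idea of returning an action tuple are illustrative assumptions, not the dissertation's actual controllers.

```python
LIGHT_LOAD_THRESHOLD = 0.3   # arrival rate relative to server capacity, assumed

def clamp(x, lo, hi):
    return max(lo, min(hi, x))

def plan_next_interval(arrival_rate, response_time, target_response_time, state):
    """Return the power-management action for the next control interval."""
    if arrival_rate < LIGHT_LOAD_THRESHOLD:
        # Request batching: lengthen the batching (sleep) period while the
        # measured response time has slack, shorten it when the target is missed.
        slack = target_response_time - response_time
        state["batch_period"] = clamp(state["batch_period"] + 0.5 * slack,
                                      0.0, state["max_batch_period"])
        return ("batch_and_sleep", state["batch_period"])
    # Moderate load: switch to DVFS, raising frequency when the response-time
    # target is violated and lowering it when there is slack.
    error = (response_time - target_response_time) / target_response_time
    state["freq"] = clamp(state["freq"] * (1.0 + 0.2 * error),
                          state["freq_min"], state["freq_max"])
    return ("dvfs", state["freq"])

# Example state for one virtualized server (values are illustrative).
state = {"batch_period": 0.05, "max_batch_period": 0.5,
         "freq": 1.8e9, "freq_min": 0.8e9, "freq_max": 2.4e9}
print(plan_next_interval(0.2, 0.04, 0.1, state))   # light load -> batching
print(plan_next_interval(0.6, 0.15, 0.1, state))   # moderate load -> DVFS
```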

    Intelligent Management of Mobile Systems through Computational Self-Awareness

    Runtime resource management for many-core systems is increasingly complex. The complexity can stem from diverse workload characteristics with conflicting demands, or from limited shared resources such as memory bandwidth and power. Resource management strategies for many-core systems must distribute shared resources appropriately across workloads while coordinating high-level system goals at runtime in a scalable and robust manner. To address the complexity of dynamic resource management in many-core systems, state-of-the-art techniques based on heuristics have been proposed. These methods lack the formalism needed to provide robustness against unexpected runtime behavior. A common solution to this problem is to deploy classical control approaches with bounds and formal guarantees. Traditional control-theoretic methods, however, lack the ability to adapt to (1) changing goals at runtime (i.e., self-adaptivity) and (2) changing dynamics of the modeled system (i.e., self-optimization). In this chapter, we explore adaptive resource management techniques that provide self-optimization and self-adaptivity by employing principles of computational self-awareness, specifically reflection. By supporting these self-awareness properties, the system can reason about the actions it takes by considering the significance of competing objectives, user requirements, and operating conditions while executing unpredictable workloads.
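
    A minimal sketch of how those two self-awareness properties can appear in a resource manager, assuming a deliberately simple one-dimensional self-model: self-optimization refines the model of how an actuator setting affects performance from observed behavior, and self-adaptivity re-reads the current goal on every step so a runtime change of objective takes effect immediately. All names and the linear model are illustrative assumptions; the chapter's reflective framework is considerably richer.

```python
class ReflectiveManager:
    """Adjusts one shared-resource setting (e.g., a power budget) toward a goal."""

    def __init__(self, sensitivity=1.0, learning_rate=0.2):
        self.sensitivity = sensitivity      # self-model: performance gained per unit of resource
        self.learning_rate = learning_rate
        self.setting = 0.5                  # normalized actuator setting in [0, 1]
        self._last = None                   # (setting, performance) from the previous step

    def step(self, measured_performance, current_goal):
        # Self-optimization: refine the model of the system's own dynamics
        # from the observed effect of the previous actuation.
        if self._last is not None:
            prev_setting, prev_perf = self._last
            d_setting = self.setting - prev_setting
            if abs(d_setting) > 1e-6:
                observed = (measured_performance - prev_perf) / d_setting
                self.sensitivity += self.learning_rate * (observed - self.sensitivity)

        # Self-adaptivity: the goal is supplied anew each step, so a change of
        # objective at runtime is honored without reconfiguring the controller.
        error = current_goal - measured_performance
        self._last = (self.setting, measured_performance)
        if abs(self.sensitivity) > 1e-6:
            self.setting = min(1.0, max(0.0, self.setting + error / self.sensitivity))
        return self.setting

mgr = ReflectiveManager()
for perf, goal in [(0.4, 0.8), (0.55, 0.8), (0.7, 0.6)]:   # goal changes at runtime
    print(round(mgr.step(perf, goal), 3))
```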

    Adaptive Quality of Service Control in Distributed Real-Time Embedded Systems

    An increasing number of distributed real-time embedded systems face the critical challenge of providing Quality of Service (QoS) guarantees in open and unpredictable environments. For example, such systems often need to enforce CPU utilization bounds on multiple processors in order to avoid overload and meet end-to-end deadlines, even when task execution times deviate significantly from their estimated values or change dynamically at run-time. This dissertation presents an adaptive QoS control framework which includes a set of control design methodologies to provide robust QoS assurance for systems at different scales. To demonstrate its effectiveness, we have applied the framework to the end-to-end CPU utilization control problem for a common class of distributed real-time embedded systems with end-to-end tasks. We formulate the utilization control problem as a constrained multi-input-multi-output control model. We then present a centralized control algorithm for small- or medium-sized systems, and a decentralized control algorithm for large-scale systems. Both algorithms are designed systematically based on model predictive control theory to dynamically enforce the desired utilizations. We also introduce novel task allocation algorithms to ensure that the system is controllable and feasible for utilization control. Furthermore, we integrate our control algorithms with fault-tolerance mechanisms as an effective way to develop robust middleware systems, which maintain both system reliability and real-time performance even when the system is in the face of malicious external resource contention and permanent processor failures. Both control analysis and extensive experiments demonstrate that our control algorithms and middleware systems can achieve robust utilization guarantees. The control framework has also been successfully applied to other distributed real-time applications, such as end-to-end delay control in real-time image transmission. Our results show that adaptive QoS control middleware is a step towards self-managing, self-healing, and self-tuning distributed computing platforms.
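
    To make the feedback idea concrete, here is a minimal single-processor sketch in which an integral-style controller adapts task invocation rates so that measured CPU utilization tracks a set point below the schedulable bound. The dissertation's algorithms use constrained model predictive control across multiple processors with end-to-end tasks; the gain, set point, and per-task rate ranges below are illustrative assumptions only.

```python
UTIL_SET_POINT = 0.7   # set point below the schedulable utilization bound, assumed

def adapt_task_rates(tasks, measured_util, gain=0.5):
    """tasks: list of dicts with 'rate', 'rate_min', 'rate_max' (invocations/s)."""
    error = UTIL_SET_POINT - measured_util
    # Scale all task rates by a common factor derived from the utilization error;
    # each rate stays within its allowed range, mirroring the rate constraints
    # in the constrained control formulation.
    factor = 1.0 + gain * error
    for t in tasks:
        t["rate"] = min(t["rate_max"], max(t["rate_min"], t["rate"] * factor))
    return tasks

tasks = [{"rate": 20.0, "rate_min": 5.0, "rate_max": 50.0},
         {"rate": 10.0, "rate_min": 2.0, "rate_max": 30.0}]
print(adapt_task_rates(tasks, measured_util=0.9))  # overload -> rates are reduced
```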

    Cloud computing: survey on energy efficiency

    Cloud computing is today's most prominent Information and Communications Technology (ICT) paradigm, directly or indirectly used by almost every online user. Such significance, however, rests on a vast infrastructure that includes large data centers comprising thousands of server units and other supporting equipment. Their share of power consumption amounts to between 1.1% and 1.5% of total worldwide electricity use and is projected to rise further. Such alarming numbers demand rethinking the energy efficiency of these infrastructures. However, before making any changes to infrastructure, an analysis of the current status is required. In this article, we perform a comprehensive analysis of the infrastructure supporting the cloud computing paradigm with regard to energy efficiency. First, we define a systematic approach for analyzing the energy efficiency of the most important data center domains, including server and network equipment, as well as cloud management systems and appliances consisting of software utilized by end users. Second, we apply this approach to the available scientific and industrial literature on state-of-the-art practices in data centers and their equipment. Finally, we extract existing challenges and highlight future research directions.

    Efficient Learning Machines

    Computer science

    Upscaling energy control from building to districts: current limitations and future perspectives

    Due to the complexity and increasing decentralisation of the energy infrastructure, as well as the growing penetration of renewable generation and the proliferation of energy prosumers, the way in which energy consumption in buildings is managed must change. Buildings need to be considered as active participants in a complex and wider district-level energy landscape. To achieve this, the authors argue the need for a new generation of energy control systems capable of adapting to near real-time environmental conditions while maximising the use of renewables and minimising energy demand within a district environment. This will be enabled by cloud-based demand-response strategies through advanced data analytics and optimisation, underpinned by semantic data models, as demonstrated by the Computational Urban Sustainability Platform (CUSP) prototype presented in this paper. The growing popularity of time-of-use tariffs and smart, IoT-connected devices offers opportunities for Energy Service Companies (ESCos) to play a significant role in this new energy landscape. They could provide energy management and cost savings for adaptable users while meeting energy and CO2 reduction targets. The paper provides a critical review and an agenda-setting perspective for energy management in buildings and beyond.
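
    A minimal sketch of the demand-response idea behind time-of-use tariffs: shift a flexible building load (e.g., a pre-cooling cycle) to the cheapest window of the daily tariff. The tariff values, the assumption of a constant 1 kW load, and the single-load scheduler are illustrative assumptions, not the CUSP platform's actual optimisation.

```python
# Hypothetical time-of-use tariff: price per kWh for each hour of the day.
TARIFF = [0.10] * 7 + [0.25] * 9 + [0.35] * 5 + [0.15] * 3   # 24 hourly prices

def cheapest_start_hour(duration_h, earliest=0, latest=24):
    """Find the start hour minimising energy cost for a constant 1 kW load."""
    best_hour, best_cost = None, float("inf")
    for start in range(earliest, latest - duration_h + 1):
        cost = sum(TARIFF[start:start + duration_h])
        if cost < best_cost:
            best_hour, best_cost = start, cost
    return best_hour, best_cost

# Example: a 3-hour pre-cooling cycle is cheapest in the overnight off-peak window.
print(cheapest_start_hour(3))
```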