
    Policy optimization for dynamic power management

    Dynamic power management schemes (also called policies) reduce the power consumption of complex electronic systems by trading off performance for power in a controlled fashion, taking system workload into account. In a power-managed system it is possible to set components into different states, each characterized by a performance level and a power consumption level. The main function of a power management policy is to decide when to perform component state transitions and which transition should be performed, depending on system history, workload, and performance constraints. In the past, power management policies have been formulated heuristically. The main contribution of this paper is to introduce a finite-state, abstract system model for power-managed systems based on Markov decision processes. Under this model, the problem of finding policies that optimally trade off performance for power can be cast as a stochastic optimization problem and solved exactly and efficiently. The applicability and generality of the approach are assessed by formulating the Markov model and optimizing power management policies for several systems.
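    To make the formulation concrete, here is a minimal sketch of optimizing a power management policy over a Markov decision process via value iteration on a toy two-state component; the states, transition probabilities, and costs are illustrative assumptions, not the paper's model or its exact solution method.

        # Hedged sketch: value iteration for a toy two-state power-managed
        # component. All numbers below are invented for illustration.
        import numpy as np

        states = ["sleep", "active"]       # component power states
        actions = ["stay", "switch"]       # keep state or trigger a transition
        gamma = 0.95                       # discount factor

        # P[s][a]: distribution over next states (invented numbers)
        P = {0: {0: [1.0, 0.0], 1: [0.1, 0.9]},
             1: {0: [0.0, 1.0], 1: [0.9, 0.1]}}
        # cost[s][a]: power cost plus a performance (latency) penalty, invented
        cost = {0: {0: 0.1, 1: 1.5},
                1: {0: 1.0, 1: 0.5}}

        V = np.zeros(2)
        for _ in range(500):               # value iteration to (near) convergence
            V = np.array([min(cost[s][a] + gamma * np.dot(P[s][a], V)
                              for a in range(2)) for s in range(2)])

        policy = [min(range(2),
                      key=lambda a: cost[s][a] + gamma * np.dot(P[s][a], V))
                  for s in range(2)]
        print({states[s]: actions[policy[s]] for s in range(2)})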

    Beyond CPU Frequency Scaling for a Fine-grained Energy Control of HPC Systems

    Modern high performance computing (HPC) subsystems - including processor, network, memory, and IO - are provided with power management mechanisms. These include dynamic speed scaling and dynamic resource sleeping. Understanding the behavioral patterns of high performance computing systems at runtime can lead to a multitude of optimization opportunities, including controlling and limiting their energy usage. In this paper, we present a general-purpose methodology for optimizing the energy performance of HPC systems, considering processor, disk, and network. We rely on the concept of an execution vector along with a partial phase recognition technique for on-the-fly dynamic management without any a priori knowledge of the workload. We demonstrate the effectiveness of our management policy under two real-life workloads. Experimental results show that our management policy, in comparison with baseline unmanaged execution, saves up to 24% of energy with less than 4% performance overhead for our real-life workloads.
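    As a rough illustration of the idea, the sketch below matches per-interval "execution vectors" (assumed here to hold normalized CPU, disk, and network activity) against previously seen phases by distance, learning new phases on the fly; the threshold, vector contents, and policy handling are assumptions for illustration, not the authors' exact technique.

        # Hedged sketch of partial phase recognition from execution vectors.
        import numpy as np

        THRESHOLD = 0.2            # assumed similarity threshold for a phase match
        phases = []                # list of (representative_vector, policy) pairs

        def classify(vec, phases, threshold=THRESHOLD):
            """Return the index of a matching phase, or None if it is new."""
            for i, (rep, _) in enumerate(phases):
                if np.linalg.norm(vec - rep) < threshold:
                    return i
            return None

        def on_interval(vec, phases):
            """Called every sampling interval with the current execution vector."""
            i = classify(vec, phases)
            if i is None:
                phases.append((vec, "default-policy"))  # learn a new phase on the fly
                return "default-policy"
            return phases[i][1]                         # reuse the phase's tuned policy

        # usage: a vector of normalized CPU, disk and network activity
        print(on_interval(np.array([0.9, 0.1, 0.2]), phases))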

    CAP (Coordinated Adaptive Power) Management Technique with Adaptive Threshold Policy for Wireless Sensor Nodes

    The small size of sensor nodes constrains both their hardware capabilities and the battery they can carry. Although power optimization is a real challenge in all WSN applications, power saving at the cost of data loss is not acceptable. Very small buffers cannot store the increased data arrivals that occur during events, and data is lost due to buffer overflow. Data loss within the node occurs before transmission, mainly because data either times out or overflows the buffer. Both of these situations can be handled using the CAP management technique at individual sensor nodes. Here, Dynamic Voltage Frequency Scaling (DVFS) and Dynamic Modulation Scaling (DMS) are used to optimize the power consumed by the processor and the transmitter, respectively. DVFS and DMS are coordinated so that whenever the processor changes its processing speed, the transmitter also changes its transmission rate, minimizing data loss at the intermediate buffer. In this paper we discuss an adaptive threshold policy for selecting a particular state of the sensor node and compare the results with a fixed threshold policy. CAP management together with the adaptive threshold policy improves the lifetime of sensor nodes and also reduces data loss before transmission.
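    A minimal sketch of what such an adaptive threshold policy could look like, assuming three coordinated (DVFS, DMS) node states and buffer-occupancy thresholds that tighten as the observed arrival rate rises; the levels, thresholds, and adaptation rule are invented for illustration, not the paper's exact scheme.

        # Hedged sketch: adaptive-threshold state selection for a sensor node.
        # Coordinated states: (processor speed level, modulation/tx-rate level)
        STATES = [(0.5, 0.5), (0.75, 0.75), (1.0, 1.0)]  # low, medium, high

        def adapt_thresholds(arrival_rate, buffer_size):
            """Scale buffer-occupancy thresholds with the observed arrival rate,
            so bursts (e.g. during events) push the node to faster states earlier."""
            base = [0.4, 0.8]                       # fixed-threshold baseline
            scale = max(0.5, 1.0 - arrival_rate)    # higher load -> lower thresholds
            return [t * scale * buffer_size for t in base]

        def select_state(occupancy, arrival_rate, buffer_size=64):
            lo, hi = adapt_thresholds(arrival_rate, buffer_size)
            if occupancy < lo:
                return STATES[0]   # light load: slow CPU and radio to save power
            if occupancy < hi:
                return STATES[1]
            return STATES[2]       # near overflow: run fast to avoid data loss

        print(select_state(occupancy=50, arrival_rate=0.9))  # -> (1.0, 1.0)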

    A Control-based Methodology for Power-performance Optimization in NoCs Exploiting DVFS

    Networks-on-Chip (NoCs) are considered a viable solution to fully exploit the computational power of multi- and many-cores, but their non-negligible power consumption requires ad hoc power-performance design methodologies. In this perspective, several proposals have exploited the possibility of dynamically tuning voltage and frequency for the interconnect, borrowing from traditional CPU-based power management solutions. However, the impact of the actuators, i.e., the limited range of frequencies of a PLL (Phase Locked Loop) or the time needed to increase voltage and frequency in a Dynamic Voltage and Frequency Scaling (DVFS) module, is often not carefully accounted for, thus overestimating the benefits. This paper presents a control-based methodology for NoC power-performance optimization exploiting Dynamic Frequency Scaling (DFS). Both the timing and power overheads of the actuators are considered, thanks to an ad hoc simulation framework. Moreover, the proposed methodology allows for user and/or OS interaction to change between different high-level power-performance modes, i.e., to trigger performance-oriented or power-saving system behaviors. Experimental validation considered a 16-core architecture, comparing our proposal with different settings of threshold-based policies. We achieved a speedup of up to 3x in timing and a reduction of up to 33.17% in the power × time product against the best threshold-based policy. Moreover, our best control-based scheme provides an average power-performance product improvement of 16.50% and 34.79% against the best and second-best considered threshold-based policy settings.
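    The following sketch conveys the control-based flavor under stated assumptions: a proportional controller drives a router's frequency toward a target buffer utilization, snaps to the discrete frequencies a PLL actually offers, and charges a transition delay for each change. Gains, the frequency set, and the overhead numbers are invented, not taken from the paper.

        # Hedged sketch of a control-based DFS loop for a NoC router.
        PLL_FREQS = [0.5, 1.0, 1.5, 2.0]   # GHz; the actuator's limited range
        SWITCH_DELAY = 10                   # cycles lost during a frequency change
        TARGET_UTIL = 0.5                   # desired buffer utilization
        K_P = 2.0                           # proportional gain

        def next_freq(current_freq, buffer_util):
            """One control step: returns (new_freq, overhead_cycles)."""
            error = buffer_util - TARGET_UTIL
            desired = current_freq + K_P * error          # proportional action
            # snap to the nearest frequency the PLL supports
            new = min(PLL_FREQS, key=lambda f: abs(f - desired))
            overhead = SWITCH_DELAY if new != current_freq else 0
            return new, overhead

        print(next_freq(1.0, 0.9))   # congested -> (2.0, 10)
        print(next_freq(1.0, 0.5))   # on target -> (1.0, 0)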

    Adaptive Power Management for Computers and Mobile Devices

    Power consumption has become a major concern in the design of computing systems today. High power consumption increases cooling cost, degrades system reliability, and reduces battery life in portable devices. Modern computing/communication devices support multiple power modes which enable power and performance tradeoffs. Dynamic power management (DPM), dynamic voltage and frequency scaling (DVFS), and dynamic task migration for workload consolidation are system-level power reduction techniques widely used at runtime. In the first part of the dissertation, we concentrate on the dynamic power management of personal computer and server platforms, where the DPM, DVFS and task migration techniques have proved to be highly effective. A hierarchical energy management framework is assumed, where task migration is applied at the upper level to improve server utilization and energy efficiency, and DPM/DVFS is applied at the lower level to manage the power mode of individual processors. This work focuses on estimating the performance impact of workload consolidation and searching for the optimal DPM/DVFS policy that adapts to the changing workload. Machine learning based modeling and reinforcement learning based policy optimization techniques are investigated. Mobile computing has been woven into everyday life to a great extent in recent years. Compared to the traditional personal computer and server environment, the mobile computing environment is clearly more context-rich, and the usage of a mobile computing device is imprinted with the user's personal signature. The ability to learn such a signature enables immense potential in workload prediction and energy or battery life management. In the second part of the dissertation, we present two mobile device power management techniques which take advantage of the context-rich characteristics of the mobile platform and make adaptive energy management decisions based on different user behavior. We first investigate user battery usage behavior modeling and apply the model directly to battery energy management. The first technique aims at maximizing the quality of service (QoS) while keeping the risk of battery depletion below a given threshold. The second technique is a user-aware streaming strategy for energy-efficient smartphone video playback applications (e.g., YouTube) that minimizes the sleep and wake penalty of the cellular module and at the same time avoids the energy waste of excessive downloading. Runtime power and thermal management has attracted substantial interest in multi-core distributed embedded systems. Fast performance evaluation is an essential step in the research of distributed power and thermal management. In the last part of the dissertation, we present an FPGA-based emulator of a multi-core distributed embedded system designed to support research in runtime power/thermal management. Hardware and software support is provided to carry out basic power/thermal management actions including inter-core or inter-FPGA communication, runtime temperature monitoring, and dynamic frequency scaling.
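    As a hedged sketch of the reinforcement-learning direction described above, the snippet below implements tabular Q-learning over processor power modes; the state encoding, action set, and reward shape are illustrative assumptions, not the dissertation's actual design.

        # Hedged sketch: tabular Q-learning for runtime DPM.
        import random

        ACTIONS = ["sleep", "low", "high"]     # processor power modes
        ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1      # learning rate, discount, exploration
        Q = {}                                  # (state, action) -> value

        def choose(state):
            if random.random() < EPS:                       # explore
                return random.choice(ACTIONS)
            return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))

        def update(state, action, reward, next_state):
            """One Q-learning step; the reward trades off power against latency,
            e.g. reward = -(power_used + penalty_weight * requests_delayed)."""
            best_next = max(Q.get((next_state, a), 0.0) for a in ACTIONS)
            q = Q.get((state, action), 0.0)
            Q[(state, action)] = q + ALPHA * (reward + GAMMA * best_next - q)

        # usage: the state could be a discretized (queue length, arrival rate)
        s = (2, 1)
        a = choose(s)
        update(s, a, reward=-1.2, next_state=(1, 1))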

    Computing an Optimal Control Policy for an Energy Storage

    We introduce StoDynProg, a small library created to solve Optimal Control problems arising in the management of Renewable Power Sources, in particular when coupled with an Energy Storage System. The library implements generic Stochastic Dynamic Programming (SDP) numerical methods which can solve a large class of Dynamic Optimization problems. We demonstrate the library's capabilities with a prototype problem: smoothing the power of an Ocean Wave Energy Converter. First we use time series analysis to derive a stochastic Markovian model of this system, since one is required by Dynamic Programming. Then, we briefly describe the "policy iteration" algorithm we have implemented and the numerical tools being used. We show how the API design of the library is generic enough to address Dynamic Optimization problems outside the field of Energy Management. Finally, we solve the power smoothing problem and compare the optimal control with a simpler heuristic control. Comment: Part of the Proceedings of the 6th European Conference on Python in Science (EuroSciPy 2013), Pierre de Buyl and Nelle Varoquaux, editors (2014).
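    To illustrate the class of problem (and deliberately not StoDynProg's actual API), here is a generic value-iteration sketch for storage-based power smoothing on a discretized energy grid, with an invented stochastic input and a quadratic smoothing cost.

        # Hedged sketch: generic SDP for storage power smoothing.
        # Grids, dynamics, and costs are illustrative assumptions.
        import numpy as np

        E = np.linspace(0.0, 1.0, 21)          # storage energy grid (normalized)
        P_IN = [0.0, 0.5, 1.0]                 # stochastic input power levels
        PROB = [0.3, 0.4, 0.3]                 # their probabilities (i.i.d. toy model)
        U = np.linspace(-0.5, 0.5, 11)         # control: storage charge/discharge
        TARGET, GAMMA = 0.5, 0.95              # desired grid power, discount

        V = np.zeros(len(E))
        for _ in range(200):                   # value iteration on the energy grid
            V_new = np.empty_like(V)
            for i, e in enumerate(E):
                best = np.inf
                for u in U:
                    e_next = np.clip(e + u, 0.0, 1.0)
                    j = np.abs(E - e_next).argmin()      # snap to the grid
                    # expected squared deviation of grid power from the target
                    c = sum(p * (p_in - u - TARGET) ** 2
                            for p_in, p in zip(P_IN, PROB))
                    best = min(best, c + GAMMA * V[j])
                V_new[i] = best
            V = V_new
        print(V.round(2))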

    Dynamic Network State Learning Model for Mobility Based WMSN Routing Protocol

    The rising demand for wireless multimedia sensor networks (WMSNs) has motivated academia and industry to develop energy-efficient, Quality of Service (QoS) aware, and delay-sensitive communication systems to meet major real-world demands such as multimedia broadcast, security and surveillance systems, intelligent transport systems, etc. Energy efficiency, QoS, and delay-sensitive transmission are the inevitable requirements of WMSNs. The majority of existing approaches use either physical-layer or system-level schemes, which individually cannot assure an optimal transmission decision that meets these demands. The cumulative efficiency of physical-layer power control, adaptive modulation and coding, and system-level dynamic power management (DPM) is significant in achieving them. With this motivation, in this paper a unified model is derived using enhanced reinforcement learning and a stochastic optimization method. Exploiting physical as well as system-level network state information, our proposed dynamic network state learning model (NSLM) applies stochastic optimization to learn network state-activity, deriving an optimal DPM policy and PHY switching schedule. NSLM applies known as well as unknown network state variables to derive the transmission and PHY switching policy, where it treats DPM as a constrained Markov decision process (MDP) problem. Here, the use of a Hidden Markov Model and Lagrangian relaxation makes NSLM's convergence swift, assuring delay-sensitive, QoS-enriched, and bandwidth- and energy-efficient transmission for WMSNs under uncertain network conditions. Our proposed NSLM DPM model outperforms traditional Q-Learning based DPM in terms of buffer cost, holding cost, overflow, energy consumption, and bandwidth utilization.
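    The Lagrangian-relaxation idea for a constrained MDP can be sketched as a dual-ascent loop: fold the power constraint into the reward with a multiplier and update the multiplier by a subgradient step. The inner solver below is a toy stand-in, and all numbers are assumptions rather than NSLM's formulation.

        # Hedged sketch of Lagrangian relaxation for a constrained MDP.
        POWER_BUDGET = 0.6        # assumed average-power constraint
        ETA = 0.05                # dual step size

        def solve_inner(lmbda):
            """Placeholder for any unconstrained MDP solver (e.g. value iteration)
            maximizing reward(s, a) - lmbda * power(s, a); here we just report the
            average power of the policy it would return (a toy monotone model)."""
            return max(0.0, 1.0 - lmbda)       # higher lambda -> thriftier policy

        lmbda = 0.0
        for _ in range(100):                   # dual ascent on the multiplier
            avg_power = solve_inner(lmbda)
            lmbda = max(0.0, lmbda + ETA * (avg_power - POWER_BUDGET))
        print(f"lambda* = {lmbda:.2f}, avg power = {solve_inner(lmbda):.2f}")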

    Learning and Management for Internet-of-Things: Accounting for Adaptivity and Scalability

    Internet-of-Things (IoT) envisions an intelligent infrastructure of networked smart devices offering task-specific monitoring and control services. The unique features of IoT include extreme heterogeneity, a massive number of devices, and unpredictable dynamics, partially due to human interaction. These call for foundational innovations in network design and management. Ideally, it should allow efficient adaptation to changing environments and low-cost implementation scalable to a massive number of devices, subject to stringent latency constraints. To this end, the overarching goal of this paper is to outline a unified framework for online learning and management policies in IoT through joint advances in communication, networking, learning, and optimization. From the network architecture vantage point, the unified framework leverages a promising fog architecture that enables smart devices to have proximity access to cloud functionalities at the network edge, along the cloud-to-things continuum. From the algorithmic perspective, key innovations target online approaches adaptive to different degrees of nonstationarity in IoT dynamics, and their scalable model-free implementation under limited feedback that motivates blind or bandit approaches. The proposed framework aspires to offer a stepping stone that leads to systematic designs and analysis of task-specific learning and management schemes for IoT, along with a host of new research directions to build on. Comment: Submitted on June 15 to the Proceedings of the IEEE Special Issue on Adaptive and Scalable Communication Networks.
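    As one concrete instance of the bandit approaches the paper motivates, the sketch below runs UCB1 over a few candidate management actions under noisy, limited feedback; the arms and reward model are illustrative assumptions, not the paper's framework.

        # Hedged sketch: UCB1 bandit choosing among management actions.
        import math, random

        ARMS = ["offload-to-fog", "process-locally", "defer"]
        counts = {a: 0 for a in ARMS}
        values = {a: 0.0 for a in ARMS}        # running mean reward per arm

        def select(t):
            for a in ARMS:                     # play each arm once first
                if counts[a] == 0:
                    return a
            return max(ARMS, key=lambda a: values[a] +
                       math.sqrt(2 * math.log(t) / counts[a]))   # UCB1 index

        def observe(arm, reward):
            counts[arm] += 1
            values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean

        for t in range(1, 1001):               # toy feedback, e.g. reward = -latency
            arm = select(t)
            observe(arm, random.gauss({"offload-to-fog": 0.7,
                                       "process-locally": 0.5,
                                       "defer": 0.2}[arm], 0.1))
        print(max(ARMS, key=lambda a: values[a]))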