34 research outputs found

    A survey of dynamic power optimization techniques

    Get PDF
    One of the most important considerations in current VLSI/SoC design is power, which can be divided into power analysis and power optimization. This survey introduces the main concepts of power optimization, including its sources and policies. Among the various approaches, dynamic power management (DPM), which changes device states when devices are not operating at their highest speed or at full capacity, is the most efficient. Our explanations, accompanied by figures, make the abstract concepts of DPM concrete. The paper briefly surveys both heuristic and stochastic policies and discusses their advantages and disadvantages.
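    To make the heuristic side of DPM concrete, below is a minimal sketch of a timeout-based policy, one of the simplest heuristics in the family of techniques the survey covers. The state names, power values, and timeout thresholds are illustrative assumptions, not taken from the paper.

```python
import time

# Hypothetical power states and per-state power draw (watts); values are
# illustrative, not taken from the survey.
POWER_STATES = {"active": 2.0, "idle": 1.0, "sleep": 0.1}

class TimeoutDPM:
    """Heuristic timeout policy: demote the device one power state after
    it has been idle longer than the corresponding threshold."""

    def __init__(self, idle_timeout=5.0, sleep_timeout=30.0):
        self.idle_timeout = idle_timeout    # seconds before active -> idle
        self.sleep_timeout = sleep_timeout  # seconds before idle -> sleep
        self.state = "active"
        self.last_request = time.monotonic()

    def on_request(self):
        # Any device request forces a wake-up back to the active state.
        self.state = "active"
        self.last_request = time.monotonic()

    def tick(self):
        # Called periodically by the power manager to demote idle devices.
        idle_for = time.monotonic() - self.last_request
        if idle_for > self.sleep_timeout:
            self.state = "sleep"
        elif idle_for > self.idle_timeout:
            self.state = "idle"
        return self.state
```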

    Stochastic Learning Feedback Hybrid Automata for Power Management in Embedded Systems

    Full text link
    In this paper we show that a stochastic-learning-automata-based feedback control switching strategy can be used for dynamic power management (DPM) at the system level. DPM strategies are usually incorporated into the operating systems of embedded devices to exploit the multiple power states available in today's ACPI-compliant devices. The idea is to switch between power states depending on device usage, and since device usage times are not deterministic, probabilistic techniques are often used to create stochastic strategies, i.e., strategies that make decisions based on the probabilities of device usage spans. Previous work (Irani et al., 2001) showed how to approximate the probability distribution of device idle times, dynamically update it, and use that knowledge to control power states. Here, we use a stochastic learning automaton (SLA) that interacts with the environment to update such probabilities, and then apply techniques similar to (Irani et al., 2001) to optimize power usage with minimal effect on device response time.
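    As a rough illustration of the SLA machinery the paper builds on, the following is a minimal sketch of a linear reward-inaction automaton over power-state actions. The update scheme, learning rate, and reward definition are assumptions for illustration; the paper's exact formulation may differ.

```python
import random

class StochasticLearningAutomaton:
    """Sketch of a linear reward-inaction (L_RI) automaton over
    power-state actions; a generic SLA, not the paper's exact scheme."""

    def __init__(self, actions, learning_rate=0.1):
        self.actions = actions
        self.lr = learning_rate
        self.probs = [1.0 / len(actions)] * len(actions)

    def choose(self):
        # Sample an action (e.g., a power state to switch to) according
        # to the current action-probability vector.
        return random.choices(range(len(self.actions)), weights=self.probs)[0]

    def reward(self, chosen):
        # On a favorable response (e.g., energy saved without delaying a
        # request), shift probability mass toward the chosen action.
        for i in range(len(self.probs)):
            if i == chosen:
                self.probs[i] += self.lr * (1.0 - self.probs[i])
            else:
                self.probs[i] *= (1.0 - self.lr)
        # On an unfavorable response, L_RI leaves probabilities unchanged.
```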

    Stochastic Learning Feedback Hybrid Automata for Dynamic Power Management in Embedded Systems

    Full text link
    Dynamic power management (DPM) refers to strategies employed at the system level to reduce energy expenditure (i.e., to prolong battery life) in embedded systems. DPM techniques trade off reduced energy consumption against the latency suffered by tasks. Such trade-offs must be decided at runtime, making DPM an online problem. We formulate DPM as a hybrid automaton control problem and integrate stochastic control. The control strategy is learnt dynamically using stochastic learning hybrid automata (SLHA) with feedback learning algorithms. Simulation-based experiments show the expediency of the feedback systems in stationary environments. Further experiments reveal that SLHA attains better trade-offs than several earlier predictive algorithms on certain trace data.
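    The hybrid-automaton view can be sketched as discrete power modes coupled with a continuous idle-time clock whose switching threshold is adjusted by feedback. The sketch below is a deliberately simplified stand-in for the paper's SLHA formulation; the mode set, threshold update, and learning step are all assumptions.

```python
class HybridDPMAutomaton:
    """Sketch of DPM as a hybrid automaton: discrete modes (BUSY, IDLE,
    SLEEP) plus a continuous idle-time clock, with a feedback-adapted
    switching threshold standing in for the learnt control strategy."""

    BUSY, IDLE, SLEEP = range(3)

    def __init__(self, threshold=2.0, step=0.1):
        self.mode = self.BUSY
        self.clock = 0.0          # continuous state: time since last request
        self.threshold = threshold
        self.step = step          # feedback learning rate for the threshold

    def advance(self, dt, request_arrived):
        if request_arrived:
            # Latency penalty when waking from SLEEP: lengthen the
            # threshold so the automaton sleeps less eagerly next time.
            if self.mode == self.SLEEP:
                self.threshold += self.step
            self.mode, self.clock = self.BUSY, 0.0
            return self.mode
        self.clock += dt
        if self.mode == self.BUSY:
            self.mode = self.IDLE  # no request this step: device is idle
        if self.mode == self.IDLE and self.clock >= self.threshold:
            # Long idle period confirmed: sleep, and shorten the threshold
            # slightly so future sleeps start earlier (energy reward).
            self.mode = self.SLEEP
            self.threshold = max(0.0, self.threshold - self.step)
        return self.mode
```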

    A conceptual framework of control, learn, and knowledge for computer power management

    Get PDF
    This conceptual paper observes human inactivity in computer power management and finds that the efficiency of computer power management (CPM) depends on correctly identifying the human inactivity period, since this period otherwise reduces the efficiency of CPM. The study examines the concepts of self-adaptation (SA) and the knowledge repository (KR) to model the framework of a new approach to computer power management. The essential elements and features of these concepts were adapted and applied as a technique in a new implementation, CLK-CPM. As a result, this study proposes a theoretical framework model and demonstrates it through its conceptual framework for the technique.

    Operating-system directed power reduction

    Full text link

    Dynamic Energy Aware Task Scheduling using Run-Queue Peek

    Get PDF
    Scheduling dependent tasks is one of the most challenging problems in parallel and distributed systems; it is known to be computationally intractable in its general form as well as in several restricted cases. An interesting application of scheduling is energy awareness for mobile, battery-operated devices, where minimizing the energy used is the most important scheduling-policy consideration, and a number of heuristics have been developed for it. In this paper, we study the scheduling problem for a particular battery model. We show how to enhance a well-known approach that accounts for the slack generated at runtime by the difference between WCET (Worst Case Execution Time) and AET (Actual Execution Time). Our solution exploits the fact that some tasks, even though they become available based on their actual periodicity, are not executed, because the run queue is determined by the schedule generated in the offline phase (phase I) of the algorithm using the conservative EDF (Earliest Deadline First) algorithm. We peek at the task run queue to find such tasks and thereby eliminate wastage of the generated slack. In the conducted experiments, the proposed algorithm always outperformed or matched the performance of the 2-Phase dynamic task scheduling algorithm.
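    A minimal sketch of the run-queue peek idea follows: when a completed task leaves slack (WCET minus AET), scan the run queue for tasks that have actually arrived and fit into that slack rather than letting it go to waste. The Task fields and function signature are hypothetical, not the paper's data structures.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Task:
    name: str
    wcet: float      # worst-case execution time
    deadline: float

def peek_and_fill_slack(run_queue, ready_tasks, slack):
    """Peek at the run queue and pull forward tasks that have actually
    arrived (their period has elapsed) and whose worst-case demand fits
    in the slack left by an early-finishing task."""
    filled = []
    for task in list(run_queue):
        if task in ready_tasks and task.wcet <= slack:
            slack -= task.wcet
            run_queue.remove(task)
            filled.append(task)
    return filled, slack

# Hypothetical usage: t1 has arrived and fits in the 3.0 units of slack.
t1 = Task("t1", wcet=2.0, deadline=10.0)
t2 = Task("t2", wcet=5.0, deadline=20.0)
filled, left = peek_and_fill_slack([t1, t2], ready_tasks={t1}, slack=3.0)
# filled == [t1], left == 1.0
```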

    Dynamic Energy Aware Task Scheduling for Periodic Tasks using Expected Execution Time Feedback

    Get PDF
    Scheduling dependent tasks is one of the most challenging problems in parallel and distributed systems; it is known to be computationally intractable in its general form as well as in several restricted cases. An interesting application of scheduling is energy awareness for mobile, battery-operated devices, where minimizing the energy used is the most important scheduling-policy consideration, and a number of heuristics have been developed for it. In this paper, we study the scheduling problem for a particular battery model. We show how to enhance a well-known approach that accounts for the slack generated at runtime by the difference between WCET (Worst Case Execution Time) and AET (Actual Execution Time). Our solution exploits the knowledge gained about the AET of the tasks after the first period to compute an EET (Expected Execution Time). We then use the EET as input for the next period, so as to use as much slack as possible and to eliminate wastage of the generated slack, which otherwise occurs because WCET is used to decide whether a task should be executed at runtime. Dynamically adjusting the run queue using EET feedback, based on the previous period's AET, eliminates this wastage. In the conducted experiments, the proposed algorithm always outperformed or matched the performance of the 2-Phase dynamic task scheduling algorithm and the run-queue peek algorithm.
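    The EET feedback loop can be sketched as follows: after each period, fold the observed AET into an EET estimate and admit tasks against EET rather than the pessimistic WCET. The exponential smoothing used here is an assumption; the paper derives EET from the previous period's AET and may update it differently.

```python
class EETFeedback:
    """Sketch of expected-execution-time feedback for one periodic task:
    track an EET estimate from observed AETs and use it, instead of the
    pessimistic WCET, to decide what fits in the generated slack."""

    def __init__(self, wcet, alpha=0.5):
        self.wcet = wcet
        self.eet = wcet      # start pessimistic: no observations yet
        self.alpha = alpha   # smoothing factor (illustrative assumption)

    def observe(self, aet):
        # Blend the latest actual execution time into the estimate.
        self.eet = self.alpha * aet + (1.0 - self.alpha) * self.eet

    def fits_in_slack(self, slack):
        # Admit the task against EET instead of WCET, reclaiming slack
        # that a WCET-based test would waste.
        return self.eet <= slack
```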

    A Hierarchical Framework of Cloud Resource Allocation and Power Management Using Deep Reinforcement Learning

    Full text link
    Automatic decision-making approaches, such as reinforcement learning (RL), have been applied to (partially) solve the resource allocation problem adaptively in cloud computing systems. However, a complete cloud resource allocation framework exhibits high-dimensional state and action spaces, which limit the usefulness of traditional RL techniques. In addition, high power consumption has become one of the critical concerns in the design and control of cloud computing systems, as it degrades system reliability and increases cooling cost. An effective dynamic power management (DPM) policy should minimize power consumption while keeping performance degradation within an acceptable level. Thus, a joint virtual machine (VM) resource allocation and power management framework is critical to the overall cloud computing system, and a novel solution framework is necessary to address the even higher-dimensional state and action spaces of the joint problem. In this paper, we propose a novel hierarchical framework for solving the overall resource allocation and power management problem in cloud computing systems. The proposed framework comprises a global tier for VM resource allocation to the servers and a local tier for distributed power management of the local servers. The emerging deep reinforcement learning (DRL) technique, which can deal with complicated control problems with large state spaces, is adopted to solve the global-tier problem; furthermore, an autoencoder and a novel weight-sharing structure are adopted to handle the high-dimensional state space and accelerate convergence. The local tier of distributed server power management comprises an LSTM-based workload predictor and a model-free RL-based power manager, operating in a distributed manner.
    Comment: accepted by the 37th IEEE International Conference on Distributed Computing Systems (ICDCS 2017).
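    As a rough sketch of the local tier only, the following shows a model-free tabular Q-learning power manager choosing a server power state from a discretized workload level. The paper's local tier uses an LSTM workload predictor, and its global tier uses deep RL with an autoencoder and weight sharing; none of that is reproduced here, and all constants, state encodings, and action names are illustrative assumptions.

```python
import random
from collections import defaultdict

class LocalPowerManager:
    """Sketch of a model-free, tabular Q-learning power manager for one
    server: it maps a (discretized) workload level to a power state.
    A stand-in for the paper's local tier, not its implementation."""

    ACTIONS = ("sleep", "low", "high")   # hypothetical server power states

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)      # (state, action) -> value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, workload_level):
        # Epsilon-greedy action selection over power states.
        if random.random() < self.epsilon:
            return random.choice(self.ACTIONS)
        return max(self.ACTIONS, key=lambda a: self.q[(workload_level, a)])

    def update(self, s, a, reward, s_next):
        # Standard Q-learning backup; the reward would trade off power
        # consumption against performance degradation.
        best_next = max(self.q[(s_next, b)] for b in self.ACTIONS)
        self.q[(s, a)] += self.alpha * (reward + self.gamma * best_next
                                        - self.q[(s, a)])
```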

    Experiences in Implementing an Energy-Driven Task Scheduler in RT-Linux

    Get PDF
    Dynamic voltage scaling (DVS) is increasingly used for power management in embedded systems. Energy is a scarce resource in embedded real-time systems, and energy consumption must be carefully balanced against real-time responsiveness. We describe our experiences implementing an energy-driven task scheduler in RT-Linux. We attempt to minimize the energy consumed by a task set while guaranteeing that all task deadlines are met. Our algorithm, which we call LEDF, follows a greedy approach and schedules as many tasks as possible at a low CPU speed in a power-aware manner. We present simulation results on the energy savings achieved by LEDF, and we validate our approach on an RT-Linux testbed with an AMD Athlon 4 processor. Power measurements taken on the testbed closely match the power estimates obtained by simulation. Our results show that DVS yields significant energy savings for practical, real-life task sets. We also show that when CPU speeds are restricted to only a few discrete values, this approach saves more energy than existing methods.
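    The greedy core of LEDF can be sketched as follows: process tasks in EDF (earliest-deadline-first) order and run each at the lowest discrete CPU speed for which its own deadline and, conservatively, all later deadlines remain feasible. The task representation and the full-speed feasibility check for the remaining tasks are simplifying assumptions, not the paper's implementation.

```python
def ledf_schedule(tasks, speeds):
    """Sketch of a greedy low-energy EDF scheduler: tasks are
    (wcet_at_full_speed, deadline) pairs; speeds are normalized factors
    (1.0 = full speed). Returns a list of (task, chosen_speed) pairs."""
    tasks = sorted(tasks, key=lambda t: t[1])   # EDF order by deadline
    full = max(speeds)
    schedule, now = [], 0.0
    for i, (wcet, deadline) in enumerate(tasks):
        chosen = full                            # fall back to full speed
        for speed in sorted(speeds):             # try the slowest speed first
            finish = now + wcet / speed          # execution stretches at low speed
            feasible = finish <= deadline
            # Conservative check: remaining tasks assumed to run at full speed.
            t = finish
            for w, d in tasks[i + 1:]:
                t += w / full
                if t > d:
                    feasible = False
                    break
            if feasible:
                chosen = speed
                break
        now += wcet / chosen
        schedule.append(((wcet, deadline), chosen))
    return schedule

# Hypothetical usage: both tasks fit at half speed, so both are slowed.
plan = ledf_schedule([(1.0, 4.0), (2.0, 10.0)], speeds=[0.5, 1.0])
# plan == [((1.0, 4.0), 0.5), ((2.0, 10.0), 0.5)]
```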