
    Chapter One – An Overview of Architecture-Level Power- and Energy-Efficient Design Techniques

    Power dissipation and energy consumption have become the primary design constraints for almost all computer systems over the last 15 years. Both computer architects and circuit designers strive to reduce power and energy (without performance degradation) at all design levels, as these are currently the main obstacles to further scaling according to Moore's law. The aim of this survey is to provide a comprehensive overview of state-of-the-art power- and energy-efficient techniques. We classify techniques by the component to which they apply, which is the most natural classification from a designer's point of view. We further divide the techniques by the component of power/energy they optimize (static or dynamic), covering in this way the complete low-power design flow at the architectural level. We conclude that only a holistic approach, one that assumes optimizations at all design levels, can lead to significant savings.
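    As background (these are the standard CMOS power relations, not formulas given in the survey abstract itself), the static/dynamic split referred to above is usually written as:

    ```latex
    P_{\text{total}} = P_{\text{dynamic}} + P_{\text{static}},
    \qquad
    P_{\text{dynamic}} = \alpha\, C\, V_{dd}^{2}\, f,
    \qquad
    P_{\text{static}} \approx V_{dd}\, I_{\text{leak}}
    ```

    where $\alpha$ is the switching activity factor, $C$ the switched capacitance, $V_{dd}$ the supply voltage, $f$ the clock frequency, and $I_{\text{leak}}$ the leakage current. Dynamic-power techniques attack $\alpha$, $C$, $V_{dd}$, or $f$; static-power techniques attack $I_{\text{leak}}$.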

    Multilayer Modeling and Design of Energy Managed Microsystems

    Aggressive energy reduction is one of the key technological challenges that all segments of the semiconductor industry have encountered in the past few years. In addition, the notion of environmental awareness and designing "green" products is another major driver for ultra-low-energy design of electronic systems. Energy management is one of the few solutions that can address the simultaneous requirements of high performance, (ultra) low energy, and greenness in many classes of computing systems, including high-performance, embedded, and wireless. These considerations motivate the focus of this dissertation on improving the energy efficiency of Energy Managed Microsystems (EMM or EM2), with the aim of maximizing the energy efficiency and/or the operational lifetime of these systems. In this thesis we propose solutions that are applicable to many classes of computing systems, including high-performance and mobile computing systems, and that help make such technologies "greener". The proposed solutions are multilayer, since they belong to, and may be applicable to, multiple design abstraction layers. They are also orthogonal to each other: if deployed simultaneously in a vertical system-integration approach, when possible, the net benefit may be as large as the product of the individual benefits. The thesis initially focuses on the modeling and design of interconnects for EM2, proposing a design flow that yields interconnects with minimum energy requirements that meet all the considered performance objectives in all specified system operating states. Next, models for energy-performance estimation of EM2 are proposed. By energy performance, we refer to the energy savings obtained when enhancements are applied to a computing platform. These models are based on the components of the application profile. The adopted method is inspired by Amdahl's law and driven by the fact that energy, like time, is additive. The models can be used for design-space exploration of EM2; being high-level, they are easy to use, yet show fair accuracy, with 9.1% error on average compared to the results of the implemented benchmarks. Finally, models are proposed to estimate the energy consumption of EM2 according to their "activity", i.e., the rate at which they perform a set of predefined application functions. Good estimates of energy requirements are very useful when designing and managing EM2 activity in order to extend battery lifetime. A study of the proposed models on Wireless Sensor Network (WSN) application benchmarks confirms fair accuracy for the energy estimation models, with 3% error on average on the considered benchmarks.
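    The Amdahl-style argument above can be sketched as follows. This is an illustrative reading only: the abstract states that the model is inspired by Amdahl's law and that energy, like time, is additive, but the function name and exact model form here are assumptions, not the thesis's actual equations.

    ```python
    # Amdahl-style energy model (illustrative sketch, not the thesis's model):
    # if an enhancement reduces the energy of a fraction `fraction_enhanced`
    # of the application profile by `energy_reduction_factor`, the overall
    # energy-savings factor follows the same form as Amdahl's speedup law,
    # because total energy is the sum of the enhanced and unenhanced parts.

    def energy_savings(fraction_enhanced: float, energy_reduction_factor: float) -> float:
        """Overall energy-savings factor (old energy / new energy)."""
        remaining = (1.0 - fraction_enhanced) + fraction_enhanced / energy_reduction_factor
        return 1.0 / remaining

    # Example: an enhancement halves the energy of 40% of the profile.
    # New energy fraction = 0.6 + 0.4/2 = 0.8, so overall savings = 1.25x.
    print(energy_savings(0.4, 2.0))
    ```

    As with Amdahl's law for time, the unenhanced fraction bounds the achievable savings no matter how effective the enhancement is.
    
    
    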

    Energy Efficient Cloud Data Center

    Cloud computing has quickly become a widely accepted computing model. Still, research on cloud computing is at an early stage, and the field faces open issues in security, power consumption, software frameworks, QoS, and standardization. Efficient energy management is one of the most challenging of these research issues. The key services of a cloud computing system are SaaS, PaaS, and IaaS. In this thesis, a model of an energy-efficient cloud data center is proposed. The cloud data center is the main part of the IaaS layer of a cloud computing system and consumes a large share of its aggregate energy. Our goal is to provide a better understanding of the energy-management design issues of the IaaS layer. Servers and processors are the main components of the data center. Virtualization technologies, the key feature of the cloud computing environment, provide the ability to migrate VMs between physical servers of the cloud data center to improve energy efficiency. This is called dynamic server consolidation, and it has a direct impact on service response time. An energy-efficient cloud data center reduces the overall energy consumed by the data center, which in turn reduces the cost incurred by the data center, extends the life of hardware components, and supports a green IT environment. Many VM placement and server consolidation techniques have been proposed, but they do not provide an optimal solution in every circumstance; they show optimal results only for certain data sets. They do not consider VM placement and VM migration simultaneously, and they do not attempt to minimize the number of VM migrations during server consolidation. Moreover, aggressive consolidation can degrade performance and may lead to SLA violations, so there is a trade-off between performance and energy.
    A number of heuristics, protocols, and architectures have been explored for server consolidation using VM migration to reduce energy consumption. The primary objective is to minimize the overall energy consumed by servers without violating the SLA. Our proposed model and scheme show better results on most of the data sets. The approach is based on virtualization, VMs, their placement, and their migration. Our study focuses on the large amount of energy consumed by servers and processors; energy consumption is reduced without violating the SLA while meeting a given level of QoS, and server consolidation is performed with a minimum number of VM migrations. We try to achieve maximum utilization of resources, although resource utilization is not compared with existing schemes. Our scheme may show different results for different configurations of the data center on the same data set. The problem is formulated as a knapsack problem, and the proposed scheme inherits features from greedy bin-packing heuristics such as BF, FF, BFD, and FFD. For simulation, the input data set is taken as random values, which are representative of the data sets used in real scenarios and by existing schemes. Simulation shows that the proposed model achieves the desired objectives for a number of data sets, with some percentage loss of the objectives on others.
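    To make the greedy bin-packing connection concrete, here is a minimal sketch of first-fit-decreasing (FFD), one of the heuristics (BF, FF, BFD, FFD) the scheme builds on. This is a generic textbook FFD, not the thesis's scheme: the single-resource CPU-demand model, equal server capacities, and function names are all assumptions for illustration.

    ```python
    # First-fit-decreasing (FFD) VM placement sketch: sort VMs by demand in
    # decreasing order, then place each on the first server with enough
    # remaining capacity, opening a new server only when none fits.

    def ffd_placement(vm_demands, server_capacity):
        """Place VMs (CPU demands) onto servers of equal capacity.
        Returns a list of servers, each a list of the VM demands placed on it."""
        servers = []  # per-server list of placed VM demands
        loads = []    # per-server current load
        for demand in sorted(vm_demands, reverse=True):  # decreasing order
            for i, load in enumerate(loads):
                if load + demand <= server_capacity:     # first fit
                    servers[i].append(demand)
                    loads[i] += demand
                    break
            else:                                        # no server fits
                servers.append([demand])
                loads.append(demand)
        return servers

    # Example: six VMs packed onto capacity-10 servers; FFD uses 3 servers.
    print(len(ffd_placement([5, 7, 3, 2, 6, 4], 10)))
    ```

    In a consolidation setting, packing the active VMs onto fewer servers lets the vacated servers be switched to a low-power state; the trade-off discussed above is that tighter packing risks SLA violations.
    
    
    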

    Energy-Aware Compilation and Hardware Design for VLIW Embedded Systems

    Tomorrow's embedded devices need to run multimedia applications demanding high computational power under low energy consumption constraints. In this context, the register file is a key source of power consumption, and its inappropriate design and management severely affect system power. In this paper, we present a new approach that reduces the energy of shared register files in forthcoming embedded VLIW processors running real-life applications by up to 60% without performance penalty. This approach relies on limited hardware extensions and a compiler-based energy-aware register assignment algorithm to deactivate parts of the register file (i.e., sub-banks) independently at run time.
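    The idea behind such an energy-aware register assignment can be sketched as follows. This is not the paper's actual algorithm: the sub-bank size, the simple sort-and-pack strategy, and all names are assumptions chosen only to illustrate why clustering live registers into few sub-banks enables deactivating the rest.

    ```python
    # Sketch: map live virtual registers onto consecutive physical registers
    # so they cluster into the lowest-numbered sub-banks; every sub-bank with
    # no live register assigned to it can then be deactivated (power-gated).

    SUBBANK_SIZE = 16  # architectural registers per sub-bank (assumed)

    def assign_registers(live_virtual_regs):
        """Return (virtual->physical mapping, set of sub-banks that must stay on)."""
        mapping = {v: phys for phys, v in enumerate(sorted(live_virtual_regs))}
        active_banks = {phys // SUBBANK_SIZE for phys in mapping.values()}
        return mapping, active_banks

    # 20 simultaneously live registers occupy only sub-banks 0 and 1 of a
    # 64-register file; the remaining sub-banks can be turned off.
    _, banks = assign_registers([f"v{i}" for i in range(20)])
    print(sorted(banks))
    ```

    A real compiler pass would do this per program region and emit the hardware hints needed to switch sub-banks on and off at the right points; the sketch only shows the packing step.
    
    
    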

    Dynamic scheduling techniques for adaptive applications on real-time embedded systems

    Ph.D. (Doctor of Philosophy)

    Greedy Coordinate Descent CMP Multi-Level Cache Resizing

    Hardware designers are constantly looking for ways to squeeze waste out of architectures to achieve better power efficiency. Cache resizing is a technique that can remove wasteful power consumption in caches. The idea is to determine the minimum cache capacity a program needs to run at near-peak performance, and then reconfigure the cache to implement this efficient capacity. While there has been significant previous work on cache resizing, existing techniques have focused on controlling resizing for a single level of cache only. This sacrifices significant opportunities for power savings in modern CPU hierarchies, which routinely employ three levels of cache. Moreover, as CMP scaling will likely continue for the foreseeable future, eliminating wasteful power consumption from a CMP multi-level cache hierarchy is crucial to achieving better power efficiency. In this dissertation, we propose a novel technique, greedy coordinate descent CMP multi-level cache resizing, that minimizes power consumption while maintaining high performance. We simultaneously resize all caches in a modern CMP cache hierarchy to minimize power consumption. Specifically, our approach predicts the power consumption and the performance level without direct evaluation, and we develop a greedy coordinate descent method that searches for an optimal cache configuration using the power efficiency gain (PEG) metric we propose in this dissertation. This dissertation makes three contributions to CMP multi-level cache resizing. First, we discover the limits of power savings and performance; this limit study identifies the potential power savings in a CMP multi-level cache hierarchy when wasteful power consumption is eliminated. Second, we propose a prediction-based greedy coordinate descent (GCD) method to find an optimal cache configuration and to orchestrate the per-level resizings. Third, we implement online GCD techniques for CMP multi-level cache resizing.
    Our approach exhibits 13.9% power savings and achieves 91% of the power savings of the static oracle cache hierarchy configuration.
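    The greedy coordinate descent idea can be sketched as below. This is an illustrative stand-in, not the dissertation's algorithm: the real technique predicts power and performance without direct evaluation, whereas here an evaluation callback is used, and the PEG metric is approximated as power saved divided by performance lost, with an assumed acceptance threshold.

    ```python
    # Greedy coordinate descent over a multi-level cache configuration:
    # repeatedly shrink the one cache level whose next size step yields the
    # best power-efficiency gain (PEG), stopping when no step is worthwhile.

    def gcd_resize(levels, evaluate, min_size=1):
        """levels: dict level -> current size (e.g., in ways).
        evaluate: config -> (power, performance)."""
        config = dict(levels)
        power, perf = evaluate(config)
        improved = True
        while improved:
            improved = False
            best_peg, best_level = 0.0, None
            for lvl in config:                       # one coordinate at a time
                if config[lvl] <= min_size:
                    continue
                trial = dict(config)
                trial[lvl] -= 1
                p, f = evaluate(trial)
                saved, lost = power - p, perf - f
                if saved > 0 and lost <= 0:
                    peg = float("inf")               # savings with no perf loss
                elif saved > 0 and lost > 0:
                    peg = saved / lost               # assumed PEG approximation
                else:
                    continue
                if peg > best_peg:
                    best_peg, best_level = peg, lvl
            if best_level is not None and best_peg > 1.0:  # assumed threshold
                config[best_level] -= 1
                power, perf = evaluate(config)
                improved = True
        return config

    # Toy cost model (assumption): power = total capacity; performance =
    # capacity that is actually useful to the program, capped per level.
    need = {"L1": 2, "L2": 4, "L3": 8}
    def evaluate(cfg):
        return sum(cfg.values()), sum(min(cfg[l], need[l]) for l in cfg)

    # Starting from an over-provisioned hierarchy, descent trims each level
    # down to the capacity the (toy) program actually needs.
    print(gcd_resize({"L1": 4, "L2": 8, "L3": 16}, evaluate))
    ```

    Treating each cache level as one coordinate keeps every step cheap while still exploring the joint configuration space, which is what makes the method usable online.
    
    
    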

    Improved Observability for State Estimation in Active Distribution Grid Management
