
    Dynamic Energy Management for Chip Multi-processors under Performance Constraints

    We introduce a novel algorithm for dynamic energy management (DEM) under performance constraints in chip multi-processors (CMPs). Using the novel concept of delayed instructions count, performance loss estimates are calculated at the end of each control period for each core. In addition, a Kalman-filtering-based approach is employed to predict the workload in the next control period, for which voltage-frequency pairs must be selected. This selection is done with a novel dynamic voltage and frequency scaling (DVFS) algorithm whose objective is to reduce energy consumption without degrading performance beyond a user-set threshold. Using our customized Sniper-based CMP system simulation framework, we demonstrate the effectiveness of the proposed algorithm on a variety of benchmarks for 16-core and 64-core network-on-chip based CMP architectures. Simulation results show consistent energy savings across the board. We present our work as an investigation of the tradeoff between the achievable energy reduction via DVFS and different performance penalty thresholds when predictions are made with the Kalman filter.
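
The control loop described above can be sketched in a few lines: a scalar Kalman filter tracks each core's workload, and the predicted value drives the VF-pair selection. Everything below (the random-walk workload model, the VF table, the slowdown proxy, and the 5% loss threshold) is an illustrative assumption, not the paper's actual model.

```python
# Sketch: per-core workload prediction with a scalar Kalman filter, feeding
# a DVFS selector. Model, VF table, and thresholds are illustrative.

class ScalarKalman:
    def __init__(self, q=1e-4, r=1e-2):
        self.x = 0.0   # workload estimate (normalized core activity)
        self.p = 1.0   # estimate variance
        self.q = q     # process-noise variance (random-walk model)
        self.r = r     # measurement-noise variance

    def update(self, z):
        # Predict (random walk), then correct with the measured workload z.
        self.p += self.q
        k = self.p / (self.p + self.r)   # Kalman gain
        self.x += k * (z - self.x)
        self.p *= 1.0 - k
        return self.x                    # prediction for the next period

# Hypothetical VF table: (frequency in GHz, relative power).
VF_LEVELS = [(1.0, 0.3), (2.0, 0.6), (3.0, 1.0)]

def pick_vf(predicted_load, max_loss=0.05):
    """Pick the slowest VF level whose estimated performance loss stays
    within the user-set threshold (a crude linear slowdown proxy)."""
    f_max = VF_LEVELS[-1][0]
    for f, power in VF_LEVELS:
        if predicted_load * (f_max / f - 1.0) <= max_loss:
            return f, power
    return VF_LEVELS[-1]

kf = ScalarKalman()
for measured in [0.10, 0.12, 0.11, 0.50, 0.52]:
    pred = kf.update(measured)   # one control period per measurement
f, power = pick_vf(pred)         # VF pair for the next control period
```

In the paper's setting one such filter would run per core, with the loss estimate coming from the delayed instructions count rather than this linear proxy.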

    Investigation of LSTM Based Prediction for Dynamic Energy Management in Chip Multiprocessors

    In this paper, we investigate the effectiveness of using long short-term memory (LSTM) networks instead of Kalman filtering for prediction when constructing dynamic energy management (DEM) algorithms for chip multi-processors (CMPs). Either of the two prediction methods is employed to estimate the workload in the next control period for each of the processor cores. These estimates are then used to select voltage-frequency (VF) pairs for each core of the CMP during the next control period as part of a dynamic voltage and frequency scaling (DVFS) technique. The objective of the DVFS technique is to reduce energy consumption under performance constraints set by the user. We conduct our investigation using a custom Sniper-based system simulation framework. Simulation results for 16- and 64-core network-on-chip based CMP architectures across several benchmarks demonstrate that LSTM-based prediction is slightly better than Kalman filtering.
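
For contrast with the Kalman predictor, the recurrence at the heart of an LSTM predictor can be written out directly. The sketch below is a single scalar LSTM cell (forward pass only) with toy, untrained weights; a deployed predictor would use trained, vector-valued weights.

```python
# Minimal pure-Python LSTM cell (forward pass only), illustrating the kind of
# recurrent predictor compared against Kalman filtering above. The weights
# here are toy scalars chosen for illustration, not trained values.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h, c, w):
    """One LSTM step for scalar input/state. w holds (input, recurrent, bias)
    weights for the forget, input, output, and candidate gates."""
    f = sigmoid(w['f'][0] * x + w['f'][1] * h + w['f'][2])    # forget gate
    i = sigmoid(w['i'][0] * x + w['i'][1] * h + w['i'][2])    # input gate
    o = sigmoid(w['o'][0] * x + w['o'][1] * h + w['o'][2])    # output gate
    g = math.tanh(w['g'][0] * x + w['g'][1] * h + w['g'][2])  # candidate
    c = f * c + i * g          # new cell state (long-term memory)
    h = o * math.tanh(c)       # new hidden state (the workload prediction)
    return h, c

weights = {k: (1.0, 0.5, 0.0) for k in 'fiog'}
h = c = 0.0
for load in [0.1, 0.2, 0.3, 0.4]:   # per-period workload samples
    h, c = lstm_step(load, h, c, weights)
```

A production predictor would be trained (e.g., via backpropagation through time); the gating structure, however, is what distinguishes the LSTM from the memoryless Kalman update.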


    Energy and performance-aware scheduling and shut-down models for efficient cloud-computing data centers.

    This doctoral dissertation, presented as a set of research contributions, focuses on resource efficiency in data centers. The topic is addressed mainly through the development of several energy-efficiency, resource-management and scheduling policies, as well as the simulation tools required to test them in realistic cloud-computing environments. Several models have been implemented in order to minimize energy consumption in cloud-computing environments, among them: a) fifteen probabilistic and deterministic energy policies that shut down idle machines; b) five energy-aware scheduling algorithms, including several genetic-algorithm models; c) a Stackelberg game-based strategy that models the competition between opposing requirements of cloud-computing systems in order to dynamically apply the most suitable scheduling algorithms and energy-efficiency policies depending on the environment; and d) a productive analysis of resource efficiency in several realistic cloud-computing environments. A novel open-source simulation tool called SCORE was developed in order to test these strategies in large-scale cloud-computing clusters. It can simulate, among many other parameters, data-center size, machine heterogeneity, security levels, workload composition and patterns, scheduling strategies and energy-efficiency policies, as well as three centralized resource managers: monolithic, two-level and shared-state. As results, more than fifty Key Performance Indicators (KPIs) covering overall performance, scheduling and energy show that more than 20% of energy consumption can be saved in realistic high-utilization environments when the proper policies are employed.
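
As a flavor of the probabilistic shut-down policies mentioned above, the sketch below powers off idle machines with a probability that grows with idle time. The hyperbolic probability curve, the `t_half` parameter, and the sample idle times are illustrative assumptions, not the dissertation's actual policies.

```python
# Sketch of a probabilistic shut-down policy: an idle machine is powered off
# with a probability that grows with its idle time. All parameters are toys.
import random

def shutdown_probability(idle_seconds, t_half=300.0):
    """Probability of shutting down grows toward 1 as idle time grows;
    t_half is the idle time at which the probability reaches 0.5."""
    return idle_seconds / (idle_seconds + t_half)

def apply_policy(machines, rng):
    """machines: list of idle times in seconds; returns indices shut down."""
    return [idx for idx, idle in enumerate(machines)
            if rng.random() < shutdown_probability(idle)]

rng = random.Random(42)                       # seeded for reproducibility
victims = apply_policy([0.0, 60.0, 600.0, 3600.0], rng)
```

Deterministic variants would replace the coin flip with a fixed idle-time threshold.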

    Online health-conscious energy management strategy for a hybrid multi-stack fuel cell vehicle based on game theory

    The use of multiple low-power fuel cells (FCs), instead of a single high-power one, in the powertrain of a fuel-cell hybrid electric vehicle (FC-HEV) has recently received considerable attention, mainly because this configuration can lead to higher efficiency, durability, and reliability. However, the added degrees of freedom require an advanced multi-agent energy management strategy (EMS) for effective power distribution among the power sources. This paper puts forward an EMS based on game theory (GT) for a multi-stack FC-HEV with three FCs and a battery pack. GT is a well-established method for characterizing the interactions in multi-agent systems. Unlike other strategies, the proposed EMS is equipped with an online identification system that constantly updates the time-varying characteristics of the power sources. The performance of the suggested strategy is investigated through two case studies. First, a comparative study against two other EMSs, dynamic programming (offline) and a competent rule-based strategy (online), is conducted to gauge the capability of GT. Second, to justify the necessity of online system identification, the effect of each power source's degradation on EMS performance is examined. The studies show that the total cost (hydrogen consumption and degradation) of the proposed strategy is almost 6% lower than that of the rule-based EMS while keeping a reasonable distance from dynamic programming. Moreover, health-unawareness of the power sources can increase hydrogen consumption by up to 7% in the studied system.
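
The game-theoretic power split can be illustrated with a best-response iteration: each stack repeatedly minimizes its own quadratic cost plus a shared demand-tracking penalty until the allocation stops changing, which is a Nash equilibrium. The cost coefficients, the soft penalty, and the assumption that the battery absorbs the residual demand are all illustrative, not the paper's identified models.

```python
# Illustrative best-response iteration for splitting a power demand among
# three fuel-cell stacks. Cost coefficients a[i] stand in for hydrogen and
# degradation costs; all numbers are toys, not identified models.

def best_response_split(demand, a, mu=0.5, iters=200):
    """Each stack i repeatedly minimizes its own cost a[i]*p_i**2 plus a
    shared soft penalty mu*(demand - total)**2; the fixed point is a Nash
    equilibrium. The soft penalty leaves part of the demand unmet, which
    the battery pack is assumed to cover."""
    p = [0.0] * len(a)
    for _ in range(iters):
        for i in range(len(a)):
            others = sum(p) - p[i]
            # Best response: d/dp_i [a_i p_i^2 + mu (demand-others-p_i)^2] = 0
            p[i] = mu * (demand - others) / (a[i] + mu)
    return p

# Healthier stacks (lower cost coefficient a_i) take more of the load.
split = best_response_split(30.0, a=[1.0, 2.0, 4.0])
```

An online identification system, as in the paper, would re-estimate the cost coefficients as the stacks degrade, shifting load away from the weakest stack.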

    Intelligent Management of Mobile Systems through Computational Self-Awareness

    Runtime resource management for many-core systems is increasingly complex. The complexity can be due to diverse workload characteristics with conflicting demands, or to limited shared resources such as memory bandwidth and power. Resource management strategies for many-core systems must distribute shared resources appropriately across workloads while coordinating the high-level system goals at runtime in a scalable and robust manner. To address the complexity of dynamic resource management in many-core systems, state-of-the-art techniques based on heuristics have been proposed. These methods lack the formalism to provide robustness against unexpected runtime behavior. A common solution to this problem is to deploy classical control approaches with bounds and formal guarantees, but traditional control-theoretic methods lack the ability to adapt to (1) changing goals at runtime (i.e., self-adaptivity), and (2) changing dynamics of the modeled system (i.e., self-optimization). In this chapter, we explore adaptive resource management techniques that provide self-optimization and self-adaptivity by employing principles of computational self-awareness, specifically reflection. By supporting these self-awareness properties, the system can reason about the actions it takes by considering the significance of competing objectives, user requirements, and operating conditions while executing unpredictable workloads.
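
The reflection idea can be made concrete with a toy control loop that both tracks a goal that changes at runtime (self-adaptivity) and re-estimates its model of the plant from observed responses (self-optimization). The first-order plant and the update gains below are assumptions for illustration only, not the chapter's actual controllers.

```python
# Toy sketch of computational self-awareness via reflection: the controller
# keeps an internal model (est_gain) of the plant and updates it from
# observed behavior while chasing goals that change at runtime.

def run_reflective_controller(goals, plant_gain=2.0, steps_per_goal=50):
    est_gain = 1.0     # controller's self-model of the plant (initially wrong)
    u = 0.0            # actuation, e.g. a frequency or power knob
    history = []
    for goal in goals:                     # goals may change at runtime
        y = 0.0
        for _ in range(steps_per_goal):
            y = plant_gain * u             # observe the plant's output
            if abs(u) > 1e-9:
                # Reflection: refine the self-model from observed response.
                est_gain += 0.5 * (y / u - est_gain)
            u += (goal - y) / est_gain     # act using the current self-model
        history.append(y)                  # output reached for this goal
    return history

outputs = run_reflective_controller([10.0, 4.0])
```

After the goal changes from 10 to 4, the controller re-converges almost immediately because its self-model of the plant gain is already accurate by then.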

    Mathematical programming-based models for the distribution networks' decarbonization

    Climate change is pushing economies worldwide to decarbonize, forcing fossil-fuel-based power systems to evolve into systems based mainly on renewable energy sources (RES). Increasing the share of renewables in the energy supply mix involves transversal challenges at the operational, market, political and social levels, owing to the stochasticity of these technologies and their capacity to generate energy at small scale close to the consumption point. The resulting generation uncertainty can be handled with battery storage systems (BSS), which have become competitive over the last few years thanks to significant price reductions and are a promising way to mitigate the technical network problems associated with the intermittency of renewables, providing the flexibility to store or supply energy when required. In addition, low-cost generation from small-scale power systems (distributed or decentralized generation, DG) represents an opportunity for both customers and power system operators: customers can generate their own energy, reduce their network dependency, and participate actively in emerging local energy markets (LEM), while the system operator can reduce losses and improve the power system's resilience to unexpected external failures. Nevertheless, incorporating these structures and operational frameworks into distribution networks (DN) requires sophisticated tools to support decision-making on the optimal integration of distributed energy resources (DER) and to assess the performance of new DNs with high DER penetration under different operational scenarios. This thesis addresses the distribution networks' decarbonization challenge by developing novel algorithms and applying different optimization techniques across three subtopics.
The first axis addresses the optimal sizing and allocation of DG and BSS in a DN using deterministic and stochastic approaches, considering the technical network limitations, the presence of electric vehicles (EV), the users' capacity to modify their load consumption, and the DG units' capability to generate reactive power for voltage stability. In addition, a novel algorithm is developed to solve the deterministic and stochastic models for multiple scenarios, providing an accurate estimate of the DER capacity that should be installed to decrease dependency on the external network. The second subtopic assesses the DN's capacity to face unlikely scenarios, such as primary-grid failures or natural disasters that prevent energy supply, through a deterministic model that reconfigures the unbalanced DN topology into multiple balanced virtual microgrids (VM), considering the power supplied by DG and the flexibility provided by storage devices (SD) and demand response (DR). The third axis addresses emerging transactive energy (TE) schemes in DNs with high DER penetration at the residential level through two stochastic approaches that model peer-to-peer (P2P) energy trading. To this end, the capability of a P2P energy-trading scheme to operate in different markets, such as the day-ahead, intraday, flexibility, and ancillary services (AS) markets, is assessed, and an algorithm is developed to manage the users' information under a decentralized design.
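
As a concrete flavor of the P2P trading schemes in the third axis, the sketch below matches prosumer offers against consumer bids, cheapest offers and most generous bids first, clearing each trade at the mid price. The mid-price rule and the toy prices and quantities are assumptions, not the thesis's actual market mechanism.

```python
# Illustrative P2P energy-trading match between prosumers (sellers) and
# consumers (buyers). Prices and quantities are toy values.

def match_p2p(offers, bids):
    """offers/bids: lists of (price_per_kWh, kWh). Sellers are sorted
    cheapest-first, buyers most-generous-first; trading continues while
    the best bid still covers the cheapest ask."""
    offers = sorted(offers)                     # ascending asking price
    bids = sorted(bids, reverse=True)           # descending willingness to pay
    trades = []
    si = bi = 0
    while si < len(offers) and bi < len(bids):
        ask, supply = offers[si]
        bid, demand = bids[bi]
        if bid < ask:
            break                               # no mutually beneficial trade
        qty = min(supply, demand)
        trades.append(((ask + bid) / 2.0, qty)) # clear at the mid price
        offers[si] = (ask, supply - qty)
        bids[bi] = (bid, demand - qty)
        if offers[si][1] == 0:
            si += 1                             # seller fully dispatched
        if bids[bi][1] == 0:
            bi += 1                             # buyer fully served
    return trades

trades = match_p2p(offers=[(0.10, 5.0), (0.20, 5.0)],
                   bids=[(0.25, 4.0), (0.12, 3.0)])
```

Trading stops as soon as the best remaining bid no longer covers the cheapest remaining ask; the unmatched residual would fall back to the regular market.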

    Status and Future Perspectives for Lattice Gauge Theory Calculations to the Exascale and Beyond

    In this and a set of companion whitepapers, the USQCD Collaboration lays out a program of science and computing for lattice gauge theory. These whitepapers describe how calculations using lattice QCD (and other gauge theories) can aid the interpretation of ongoing and upcoming experiments in particle and nuclear physics, as well as inspire new ones. (44 pages; one of the USQCD whitepapers.)

    Thermal-Aware Networked Many-Core Systems

    Advancements in IC processing technology have led to the innovation and growth happening in the consumer electronics sector and to the evolution of the IT infrastructure supporting this exponential growth. One of the most difficult obstacles to this growth is the removal of the large amount of heat generated by the processing and communicating nodes of the system. The scaling down of technology and the increase in power density have a direct and consequential effect on the rise in temperature. This has increased cooling budgets and affects both the lifetime reliability and the performance of the system. Hence, reducing on-chip temperatures has become a major design concern for modern microprocessors. This dissertation addresses the thermal challenges at different levels for both 2D planar and 3D stacked systems. It proposes a self-timed thermal monitoring strategy based on the liberal use of on-chip thermal sensors, which exploits noise-variation-tolerant, leakage-current-based thermal sensing for monitoring purposes. In order to study thermal management issues from the early design stages, accurate thermal modeling and analysis at design time are essential. In this regard, the spatial temperature profile of global Cu nanowires for on-chip interconnects has been analyzed. The dissertation presents a 3D thermal model of a multicore system in order to investigate the effects of hotspots and of the placement of silicon die layers on the thermal performance of a modern flip-chip package. For a 3D stacked system, the primary design goal is to maximize performance within the given power and thermal envelopes. Hence, a thermally efficient routing strategy for 3D NoC-Bus hybrid architectures has been proposed to mitigate on-chip temperatures by herding most of the switching activity to the die closest to the heat sink. Finally, an exploration of various thermal-aware placement approaches for both 2D and 3D stacked systems has been presented.
Various thermal models have been developed and thermal control metrics have been extracted. An efficient thermal-aware application mapping algorithm for a 2D NoC has also been presented; it has been shown that the proposed mapping algorithm reduces the effective area suffering from high temperatures when compared to the state of the art.
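
The thermal-aware mapping idea can be sketched as a greedy placement: tasks are placed hottest-first, each on the free core whose already-placed neighbours contribute the least heat, which spreads hotspots across the mesh. The mesh size and the neighbour-heat proxy below are illustrative assumptions, not the dissertation's algorithm.

```python
# Greedy sketch of thermal-aware task-to-core mapping on a 2D NoC mesh.
# The neighbour-power sum is a crude stand-in for a real thermal model.

def neighbours(x, y, n):
    """4-connected mesh neighbours of core (x, y) on an n x n grid."""
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= x + dx < n and 0 <= y + dy < n:
            yield x + dx, y + dy

def thermal_map(task_powers, n):
    """Map len(task_powers) <= n*n tasks onto an n x n mesh, hottest first,
    each onto the free core with the coolest placed neighbourhood."""
    placement = {}                       # (x, y) -> task power
    for power in sorted(task_powers, reverse=True):
        best = min(
            ((x, y) for x in range(n) for y in range(n)
             if (x, y) not in placement),
            key=lambda c: sum(placement.get(nb, 0.0)
                              for nb in neighbours(*c, n)))
        placement[best] = power
    return placement

mapping = thermal_map([10.0, 9.0, 1.0, 1.0], n=4)
```

The two high-power tasks end up on non-adjacent cores, which is the hotspot-spreading behaviour that thermal-aware placement aims for.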