    A Scheduler-Level Incentive Mechanism for Energy Efficiency in HPC

    Energy consumption has become one of the most important factors in High Performance Computing platforms. However, while there are various algorithmic and programming techniques to save energy, users currently have no incentive to employ them, as they might result in worse performance. We propose to manage the energy budget of a supercomputer through EnergyFairShare (EFS), a FairShare-like scheduling algorithm. FairShare is a classic scheduling rule that prioritizes jobs belonging to users who were assigned a small amount of CPU-seconds in the past. Similarly, EFS keeps track of users' consumption of watt-seconds and prioritizes those whose jobs consumed less energy. EFS therefore incentivizes users to optimize their code for energy efficiency. Having higher priority, their jobs have shorter queuing times and, thus, shorter turnaround times. To validate this principle, we implemented EFS in a scheduling simulator and processed workloads from various HPC centers. The results show that, by reducing their energy consumption, users reduce their stretch (slowdown) compared to increasing it. To validate the general feasibility of our approach, we also implemented EFS as an extension for SLURM, a popular HPC resource and job management system. We validated our plugin both by emulating a large-scale platform and through experiments on a real cluster with monitored energy consumption. We observed shorter waiting times for energy-efficient users.
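
    A minimal sketch of the EnergyFairShare idea described above, assuming a simple priority queue keyed on each user's accumulated watt-seconds (the class and field names are illustrative; the paper's actual implementations are a scheduling simulator and a SLURM plugin):

        # Sketch of an EnergyFairShare-style queue: jobs from users with the
        # lowest past energy usage (watt-seconds) are dispatched first.
        # All names and structures here are illustrative assumptions.
        from collections import defaultdict
        from dataclasses import dataclass, field
        import heapq
        import itertools

        @dataclass(order=True)
        class Job:
            priority: float  # user's past watt-seconds, snapshotted at submission
            seq: int         # submission order, used as a tie-breaker
            user: str = field(compare=False)
            avg_power_w: float = field(compare=False)
            runtime_s: float = field(compare=False)

        class EnergyFairShareQueue:
            def __init__(self):
                self.energy_used = defaultdict(float)  # user -> watt-seconds
                self._counter = itertools.count()
                self._heap = []

            def submit(self, user, avg_power_w, runtime_s):
                job = Job(self.energy_used[user], next(self._counter),
                          user, avg_power_w, runtime_s)
                heapq.heappush(self._heap, job)

            def dispatch(self):
                """Pop the job of the least energy-hungry user and charge it."""
                job = heapq.heappop(self._heap)
                self.energy_used[job.user] += job.avg_power_w * job.runtime_s
                return job

    A user who tunes a code to draw less power accumulates watt-seconds more slowly, so that user's later jobs sort earlier in the queue and wait less, which is the incentive the abstract measures as reduced stretch.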

    Adaptive Resource and Job Management for Limited Power Consumption

    The last decades have been characterized by an ever-growing requirement in terms of computing and storage resources. This tendency has recently put pressure on the ability to efficiently manage the power required to operate the huge number of electrical components in state-of-the-art high performance computing systems. The power consumption of a supercomputer needs to be adjusted to a varying power budget or electricity availability. As a consequence, Resource and Job Management Systems have to be adequately adapted in order to efficiently schedule jobs with optimized performance while limiting power usage whenever needed. We introduce in this paper a new scheduling strategy that can adapt the executed workload to a limited power budget. The originality of this approach relies upon a combination of speed-scaling and node-shutdown techniques for power reduction. It is implemented in the widely used resource and job management system SLURM. Finally, it is validated through large-scale emulations using real production workload traces of the supercomputer Curie.
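
    One way to picture the combination of speed scaling and node shutdown is a planner that, for a given power cap, picks how many nodes to keep on and at which frequency. The toy model and power figures below are assumptions for illustration, not the paper's strategy:

        # Toy planner: fit the cluster under a power cap by combining node
        # shutdown with speed scaling. Frequencies and wattages are assumptions.
        FREQ_POWER_W = {1.2: 120.0, 1.8: 170.0, 2.4: 230.0}  # GHz -> W per node

        def plan_nodes(power_cap_w, total_nodes):
            """Return (nodes_on, freq_ghz) maximizing aggregate speed under the cap.

            Greedy heuristic: more nodes at a lower frequency often beat fewer
            nodes at a higher one; nodes left over are shut down (~0 W).
            """
            best, best_speed = (0, min(FREQ_POWER_W)), 0.0
            for freq, watts in FREQ_POWER_W.items():
                nodes_on = min(total_nodes, int(power_cap_w // watts))
                speed = nodes_on * freq  # aggregate GHz as a crude speed proxy
                if speed > best_speed:
                    best, best_speed = (nodes_on, freq), speed
            return best

        print(plan_nodes(power_cap_w=10_000, total_nodes=64))  # -> (58, 1.8)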

    Improving Backfilling by using Machine Learning to predict Running Times

    The job management system is the HPC middleware responsible for distributing computing power to applications. While such systems generate an ever-increasing amount of data, they are characterized by uncertainty in some parameters, such as job running times. The question raised in this work is: to what extent is it possible, and useful, to take predictions of job running times into account to improve global scheduling? We present a comprehensive study answering this question for the popular EASY backfilling policy. More precisely, we rely on classical machine learning methods and propose new cost functions well adapted to the problem. We then assess our proposed solutions through intensive simulations using several production logs. Finally, we propose a new scheduling algorithm that outperforms the popular EASY backfilling algorithm by 28% on the average bounded slowdown objective.
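
    As one hedged illustration of "cost functions well adapted to the problem", an asymmetric quantile loss penalizes under-predicting a runtime (which can break backfilling reservations) more than over-predicting it. The features, toy data, and loss choice below are assumptions, not the paper's exact method:

        # Sketch: learn job running times from submission-time features with an
        # asymmetric loss, so the model rarely predicts shorter than reality.
        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor

        rng = np.random.default_rng(0)
        n = 2000
        X = np.column_stack([
            rng.uniform(600, 86_400, n),   # requested walltime (s)
            rng.integers(1, 512, n),       # requested cores
            rng.uniform(300, 40_000, n),   # user's mean past runtime (s)
        ])
        # Toy ground truth; real y would come from a production workload trace.
        y = np.maximum(0.4 * X[:, 0] + 0.5 * X[:, 2]
                       + rng.normal(0, 1_000, n), 60.0)

        # Quantile loss at alpha=0.9 charges under-estimates nine times more
        # than over-estimates, one simple asymmetric cost function.
        model = GradientBoostingRegressor(loss="quantile", alpha=0.9)
        model.fit(X, y)
        predicted = model.predict(X[:5])  # would replace user walltime estimates

    The predictions would then stand in for user-provided walltime estimates when EASY backfilling checks whether a candidate job fits into the current schedule hole without delaying the job at the head of the queue.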

    Introducing Energy based fair-share scheduling

    Energy consumption has become one of the most important parameters in High Performance Computing platforms. Fair-share scheduling is a widely used technique in job schedulers to prioritize jobs depending on users' past allocations. In practice, this technique is mainly based on CPU-time usage. Since power is managed as a new type of resource by Slurm and energy consumption can be charged independently, there is a real need for fairness in terms of energy consumption. This presentation introduces fair-share scheduling based on past energy usage in Slurm. The new technique allows users who have optimized their codes to be more energy efficient, or who make better use of DVFS techniques, to improve the stretch times of their workloads.
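
    For context, Slurm's multifactor priority plugin computes a fair-share factor of roughly F = 2^(-usage/shares); the sketch below (with illustrative numbers) simply swaps normalized CPU-time usage for normalized energy usage, which is the substitution this presentation describes:

        # Slurm-style fair-share factor, F = 2 ** (-usage / shares), with
        # normalized energy standing in for CPU-time. Numbers are illustrative.
        def fair_share_factor(user_joules, cluster_joules,
                              user_shares=1.0, total_shares=10.0):
            usage = user_joules / cluster_joules   # normalized energy usage
            shares = user_shares / total_shares    # normalized share of machine
            return 2.0 ** (-usage / shares)

        # With equal shares, the energy-frugal user gets the higher factor:
        print(fair_share_factor(2e9, 100e9))    # ~0.87
        print(fair_share_factor(30e9, 100e9))   # 0.125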

    Mechanisms of trophic partitioning within two fish communities associated with a tropical oceanic island

    Understanding the drivers of trophic partitioning at the community level is an essential prerequisite to the establishment of ecosystem-based management of fisheries. In this study, we identify drivers of trophic partitioning within a community of epipelagic fish and a community of deep-water fish off Reunion Island. The effects of intrinsic variables (species identity, etc.) and environmental variables (fishing zone, month) on stomach content composition and stable isotope ratios were tested using regression trees and linear models, respectively. Our results demonstrated, firstly, the independence of the two communities, with very few common prey even though they occurred in similar localities, and secondly, very different patterns of resource partitioning within each community. The community of epipelagic fish segregated into three trophic guilds composed of species foraging on a limited range of prey. This observation is not consistent with the general view that these high-trophic-level species are opportunistic generalists. Habitat seems to be the main driver of feeding partitioning among deep-water fish, in accordance with the sound-scattering-layer interception hypothesis: deep-water fish would distribute in the water column at different depths, and all species would feed on the same resources at each depth. The results of this study suggest that fisheries management should be very different for epipelagic fish (more species-centred) and deep-water fish (more habitat-centred).

    Introducing Power-capping in Slurm scheduling

    The last decades have been characterized by an ever-growing requirement in terms of computing and storage resources. This tendency has recently put pressure on the ability to efficiently manage the power required to operate the huge number of electrical components in state-of-the-art computing and data centers. The power consumption of a supercomputer needs to be adjusted to a varying power budget or electricity availability. As a consequence, Resource and Job Management Systems have to be adequately adapted in order to efficiently schedule jobs with optimized performance while limiting power usage whenever needed. Our goal is to introduce a new power-adaptive scheduling strategy that provides the capability to autonomously adapt the executed workload to the available or planned power budget. The originality of this approach relies on a combination of DVFS (Dynamic Voltage and Frequency Scaling) and node shut-down techniques.
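
    The leverage behind DVFS can be seen from the textbook CMOS dynamic-power approximation (a standard model, not a formula from this abstract):

        \[
          P_{\text{dyn}} \approx C\,V^{2} f,
          \qquad V \propto f \;\Longrightarrow\; P_{\text{dyn}} \propto f^{3}
        \]

    Since dynamic power falls roughly cubically with frequency while static power persists on idle nodes, pairing frequency reduction with node shut-down attacks both terms of the power bill.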

    Comparison of immersive room and virtual reality, through the consumption episode "Eating a sandwich in a park"

    Conventional approaches to consumer tests, CLT and HUT, lead to high internal or high external validity, respectively. In recent years, several immersive approaches have been developed to achieve both internal and external validity at the same time, i.e. environments close to the actual consumption context while keeping parameters under control. Virtual reality (VR) is one of the most promising. In this study, we assess two immersive strategies in terms of internal and external validity for the consumption episode "having a sandwich for lunch in a park". The two experimental conditions were an immersive room (N = 57, Fig. 1) and a VR environment (N = 55, Fig. 2). We added two control conditions: an actual park in summer (N = 56, Fig. 3) and sensory booths (N = 59, Fig. 4). In each condition, 4 sandwich recipes were assessed in a between-participants design. As one of the recipes was duplicated (for reliability assessment), participants were served 5 samples presented in sequential monadic order. The samples were assessed on several hedonic criteria: liking, product-context appropriateness, and emotional responses. Participants ended the experience with a questionnaire measuring their level of immersion. We hypothesized that (1) the immersive room and virtual reality would reach higher internal validity than the actual park environment, and (2) the two immersive approaches would reach higher external validity than the CLT condition. As expected, the immersive conditions showed good external validity: participants were more immersed and engaged than in the sensory booths. However, on internal validity criteria such as discrimination, both immersive conditions obtained lower results than the actual park condition. This difference in internal validity could be linked to a novelty effect that distracted participants from the product assessment task.