
    Evaluating SLURM simulator with real-machine SLURM and vice versa

    Having a precise and fast job-scheduler model that closely resembles the behavior of real-machine job scheduling software is extremely important in the field of job scheduling. The idea behind the SLURM simulator is to preserve the original code of the core SLURM functions while allowing for all the advantages of a simulator. Since 2011, the SLURM simulator has gone through several iterations of improvement at different research centers. In this work, we present our latest improvements to the SLURM simulator and perform the first-ever validation of the simulator against the real machine. In particular, we improved the simulator's performance by about 2.6 times, made the simulator deterministic across repeated runs with the same set-up, and improved its accuracy: its deviation from the real machine is lowered from the previous 12% to at most 1.7%. Finally, we illustrate with several use cases the value of the simulator for job scheduling researchers, SLURM system administrators, and SLURM developers.
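
    A minimal sketch of how such a validation can be quantified, assuming job records with submit and start timestamps (the field names and the aggregate metric are illustrative, not the authors' exact methodology): replay the same workload on the real machine and in the simulator, then compare an aggregate metric such as mean wait time.

        # Sketch only: compare an aggregate scheduling metric between real and
        # simulated runs of the same workload (assumed job-record fields).
        def mean_wait(jobs):
            """Average wait time (start - submit) over a list of job records."""
            return sum(j["start"] - j["submit"] for j in jobs) / len(jobs)

        def relative_deviation(real_jobs, sim_jobs):
            """Relative deviation of the simulated metric from the real one."""
            real, sim = mean_wait(real_jobs), mean_wait(sim_jobs)
            return abs(sim - real) / real

        # Toy data: two jobs whose simulated start times differ slightly.
        real = [{"submit": 0, "start": 100}, {"submit": 10, "start": 150}]
        sim = [{"submit": 0, "start": 105}, {"submit": 10, "start": 150}]
        print(f"deviation: {relative_deviation(real, sim):.1%}")  # deviation: 2.1%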

    Memory demands in disaggregated HPC: How accurate do we need to be?

    Disaggregated memory has recently been proposed as a way to allow flexible and fine-grained allocation of memory capacity, mitigating the mismatch between fixed per-node resource provisioning and the needs of the submitted jobs. By allowing the sharing of memory capacity among cluster nodes, overall HPC system throughput can be improved, due to the reduction of stranded and underutilized resources. A key parameter that is generally expected to be provided by the user at submission time is the job's memory capacity demand, and it is unrealistic to expect this number to be precise. This paper makes an important step towards understanding the effect of overestimating job memory requirements. We analyse the implications for overall system throughput and job response time, leveraging a disaggregated simulation infrastructure implemented on the popular Slurm resource manager. Our results show that even when a 60% increase in a job's memory demand increases that single job's response time by only 8%, the aggregate result of everybody doing so can be a 25% reduction in throughput and a five-fold increase in response time. These results show that GB-hours should be explicitly allocated in addition to core-hours. This work is part of a project that has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 754337 (EuroEXA); it has been supported by the Spanish Ministry of Science and Innovation (project TIN2015-65316-P and Ramon y Cajal fellowship RYC2018-025628-I), Generalitat de Catalunya (contracts 2014-SGR-1051 and 2014-SGR-1272), and the Severo Ochoa Programme (SEV-2015-0493).
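
    A toy illustration of the aggregate effect described above, with purely assumed numbers (node size, job footprint, and the resulting percentages do not reproduce the paper's measurements): when every user inflates the memory request, fewer jobs fit per node, so cluster-wide throughput drops even if each individual job is barely affected.

        # Toy numbers only; they do not reproduce the paper's results.
        NODE_MEM_GB = 256
        TRUE_JOB_MEM_GB = 64        # what each job actually needs
        OVERESTIMATE = 1.6          # users ask for 60% more "just in case"

        def jobs_per_node(requested_mem_gb):
            """How many identical jobs fit on one node, memory-wise."""
            return NODE_MEM_GB // requested_mem_gb

        accurate = jobs_per_node(TRUE_JOB_MEM_GB)
        inflated = jobs_per_node(int(TRUE_JOB_MEM_GB * OVERESTIMATE))
        print("jobs per node, accurate requests:", accurate)      # 4
        print("jobs per node, inflated requests:", inflated)      # 2
        print(f"throughput loss: {1 - inflated / accurate:.0%}")  # 50%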

    Holistic Slowdown Driven Scheduling and Resource Management for Malleable Jobs

    In job scheduling, the concept of malleability has been explored for many years. Research shows that malleability improves system performance, but its use in HPC has never become widespread. The causes are the difficulty of developing malleable applications and the lack of support and integration across the different layers of the HPC software stack. In recent years, however, malleability in job scheduling has become more important because of the increasing complexity of hardware and workloads. In this context, using nodes in exclusive mode, as in traditional HPC jobs where applications are highly tuned for static allocations but offer no flexibility for dynamic execution, is not always the most efficient solution. This paper proposes a new holistic, dynamic job scheduling policy, Slowdown Driven (SD-Policy), which exploits the malleability of applications as the key technology to reduce the average slowdown and response time of jobs. SD-Policy is based on backfill and node sharing. It applies malleability to running jobs to make room for jobs that will run with a reduced set of resources, only when the estimated slowdown improves over the static approach. We implemented SD-Policy in SLURM and evaluated it in a real production environment and with a simulator using workloads of up to 198K jobs. Results show better resource utilization, with reductions in makespan, response time, slowdown, and energy consumption of up to 7%, 50%, 70%, and 6%, respectively, for the evaluated workloads.
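
    A hedged sketch of the core decision described above, not the actual SLURM implementation (the slowdown estimates and function signatures are assumptions): malleability is applied only when the estimated slowdown of starting the waiting job on reduced resources beats the estimated slowdown of leaving it in the queue.

        # Sketch of the slowdown-driven test; estimates would come from the scheduler.
        def slowdown(wait_time, run_time):
            """Job slowdown: (wait + run) / run."""
            return (wait_time + run_time) / run_time

        def should_apply_malleability(wait_static, run_static,
                                      wait_shrunk, run_shrunk):
            """Shrink running jobs for a waiting job only if slowdown improves."""
            return slowdown(wait_shrunk, run_shrunk) < slowdown(wait_static, run_static)

        # Example: wait 2 h then run 1 h (slowdown 3.0) versus start now on
        # fewer resources and run 2 h (slowdown 1.0): shrinking wins.
        print(should_apply_malleability(7200, 3600, 0, 7200))  # True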

    Improving HPC system throughput and response time using memory disaggregation

    HPC clusters are cost-effective, well understood, and scalable, but the rigid boundaries between compute nodes may lead to poor utilization of compute and memory resources. HPC jobs may vary, by orders of magnitude, in memory consumption per core. Thus, even when the system is provisioned to accommodate both normal and large-capacity nodes, a mismatch between the system and the memory demands of the scheduled jobs can lead to inefficient usage of both memory and compute resources. Disaggregated memory has recently been proposed as a way to mitigate this problem by flexibly allocating memory capacity across cluster nodes. This paper presents a simulation approach for at-scale evaluation of job schedulers with disaggregated memories and introduces a new disaggregation-aware job allocation policy for the Slurm resource manager. Our results show that, depending on the imbalance between the system and the submitted jobs, using disaggregated memory can achieve similar throughput and job response time on a system with up to 33% less total memory provisioning. This work is part of a project that has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 754337 (EuroEXA); it has been supported by the Spanish Ministry of Science and Innovation (project TIN2015-65316-P and Ramon y Cajal fellowship RYC2018-025628-I), Generalitat de Catalunya (contracts 2014-SGR-1051 and 2014-SGR-1272), and the Severo Ochoa Programme (SEV-2015-0493).
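
    The following is an illustrative sketch, not the paper's Slurm plugin, of what a disaggregation-aware allocation might look like under assumed data structures: a node is acceptable if the job's memory demand can be covered by its free local memory plus capacity borrowed from a shared remote pool.

        # Assumed node tuples: (name, free_cores, free_local_memory_gb).
        def pick_node(nodes, remote_pool_gb, job_cores, job_mem_gb):
            """Return (node, local_gb, remote_gb) for the first node that fits."""
            for name, free_cores, free_local_gb in nodes:
                if free_cores < job_cores:
                    continue
                local = min(free_local_gb, job_mem_gb)
                remote = job_mem_gb - local          # borrowed from the pool
                if remote <= remote_pool_gb:
                    return name, local, remote
            return None

        nodes = [("n01", 16, 32), ("n02", 32, 128)]
        print(pick_node(nodes, remote_pool_gb=64, job_cores=24, job_mem_gb=160))
        # ('n02', 128, 32): 128 GB local, 32 GB from the disaggregated pool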

    Supercomputer Emulation For Evaluating Scheduling Algorithms

    Scheduling algorithms have a significant impact on the optimal utilization of HPC facilities, yet the vast majority of research in this area is done using simulations. When working with simulations, many factors that affect a real scheduler, such as its scheduling processing time, communication latencies, and the scheduler's intrinsic implementation complexity, are not considered. As a result, despite theoretical improvements reported in several articles, practically none of the newly proposed algorithms have been implemented in real schedulers, and HPC facilities still use the basic first-come-first-served (FCFS) with backfill scheduling policy. A better approach could therefore be the use of real schedulers in an emulation environment to evaluate new algorithms. This thesis investigates two related challenges in emulation: computational cost and faithfulness of the results to real scheduling environments. It finds that the sampling, shrinking, and shuffling of a trace must be done carefully to keep the classical metrics invariant, or linearly variant, with respect to the size and times of the original workload. This is accomplished by careful control of the submission period and consideration of drifts in the submission period and trace duration. This methodology can help researchers better evaluate their scheduling algorithms and help HPC administrators optimize the parameters of production schedulers. To assess the proposed methodology, we evaluated both the FCFS with backfill and suspend/resume scheduling algorithms. The results strongly suggest that suspend/resume leads to better utilization of a supercomputer when high priorities are given to big jobs.
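
    The sketch below illustrates the flavour of trace shrinking discussed above, under assumed trace fields; the thesis' actual procedure is more careful about submission-period drift: sample every k-th job and compress submission times by the same factor so the offered load stays comparable.

        # Assumed trace format: one dict per job with a 'submit' timestamp.
        def shrink_trace(jobs, k):
            """Keep every k-th job and compress submit times by k to preserve load."""
            sampled = jobs[::k]
            t0 = sampled[0]["submit"]
            return [dict(j, submit=t0 + (j["submit"] - t0) / k) for j in sampled]

        trace = [{"id": i, "submit": 60.0 * i, "runtime": 600} for i in range(100)]
        small = shrink_trace(trace, k=4)
        print(len(small), small[1]["submit"])  # 25 jobs, second submit at t=60.0 s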

    GSaaS: A service to cloudify and schedule GPUs

    Cloud technology is an attractive infrastructure solution that provides customers with almost unlimited on-demand computational capacity using a pay-per-use approach, and allows data centers to increase their energy and economic savings by adopting a virtualized resource-sharing model. However, resources such as graphics processing units (GPUs) have not been fully adapted to this model. Although general-purpose computing on graphics processing units (GPGPU) is becoming more and more popular, cloud providers lack flexibility in managing accelerators because of the extended use of peripheral component interconnect (PCI) passthrough techniques to attach GPUs to virtual machines (VMs). For this reason, we design, develop, and evaluate a service that provides complete management of cloudified GPUs (cGPUs) in public cloud platforms. Our solution enables effective, anonymous, and transparent access from VMs to cGPUs that are previously scheduled and assigned by a full resource manager, taking into account new GPU selection policies and new working modes based on the locality of the physical accelerators and exclusivity when accessing them. This easy-to-adopt tool improves resource availability through different cGPU configurations for end users, while cloud providers are able to achieve better utilization of their infrastructures and offer more competitive services. Scalability results in a real cloud environment demonstrate that our solution introduces virtually no overhead in the deployment of VMs. In addition, performance experiments reveal that GPU-enabled clusters based on cloud infrastructures can benefit from our proposal, not only by exploiting the accelerators better but also by serving more job requests per unit of time.
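
    As a hedged sketch (names, fields, and policy details are assumptions rather than the service's actual API), a locality- and exclusivity-aware cGPU selection could look like this: skip busy accelerators when exclusive access is requested and prefer accelerators hosted on the VM's own physical node.

        # Assumed pool entries: {'id', 'host', 'in_use'}.
        def select_cgpu(cgpus, vm_host, exclusive):
            """Pick a cGPU id, preferring local accelerators; None if none fits."""
            candidates = [g for g in cgpus if not (exclusive and g["in_use"])]
            candidates.sort(key=lambda g: g["host"] != vm_host)  # local first
            return candidates[0]["id"] if candidates else None

        pool = [{"id": "gpu-a", "host": "h2", "in_use": False},
                {"id": "gpu-b", "host": "h1", "in_use": True},
                {"id": "gpu-c", "host": "h1", "in_use": False}]
        print(select_cgpu(pool, vm_host="h1", exclusive=True))  # gpu-c: local and free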

    Development of Data-Driven Dispatching Heuristics for Heterogeneous HPC Systems

    In High-Performance Computing systems, the use of effective dispatching heuristics for scheduling and allocating incoming jobs is fundamental to achieving good levels of Quality of Service. In this thesis we focus on the design and analysis of resource allocation heuristics targeted at heterogeneous HPC systems, in which nodes may be equipped with different types of processing units. We then employ data-driven heuristics for predicting job durations and evaluate everything from the standpoint of system throughput. In particular, we consider Eurora, a heterogeneous HPC system built by CINECA, together with a workload captured from its system log, containing real jobs submitted by users. All of this was made possible by AccaSim, an HPC system simulator developed in the Dipartimento di Informatica - Scienza e Ingegneria (DISI) of the University of Bologna, to which we contributed substantially. This work shows that the impact of different allocation heuristics on the throughput of a heterogeneous HPC system is not negligible, with variations that can peak at an order of magnitude and are more pronounced over short time spans, on the order of months. We also observed that using heuristics for predicting job durations greatly benefits throughput across all allocation heuristics, and especially those that integrate such data-driven elements more deeply. Finally, our analysis provided a complete characterisation of the Eurora system and its workload, allowing us to better understand the effects of the different dispatching methods on it and to extend our considerations to other classes of systems.
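
    A minimal sketch of one such data-driven duration heuristic, assuming a simple per-user history (the predictors evaluated in the thesis on AccaSim may differ): predict a new job's runtime as the mean of the same user's recent runtimes, falling back to the requested walltime when no history exists.

        from collections import defaultdict, deque

        class DurationPredictor:
            """Predicts job runtimes from a short per-user history (illustrative)."""
            def __init__(self, history_len=5):
                self.history = defaultdict(lambda: deque(maxlen=history_len))

            def record(self, user, actual_runtime):
                self.history[user].append(actual_runtime)

            def predict(self, user, requested_walltime):
                past = self.history[user]
                return sum(past) / len(past) if past else requested_walltime

        p = DurationPredictor()
        p.record("alice", 1200)
        p.record("alice", 1800)
        print(p.predict("alice", requested_walltime=7200))  # 1500.0
        print(p.predict("bob", requested_walltime=3600))    # 3600 (no history yet)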

    Market driven elastic secure infrastructure

    In today's data centers, a combination of factors leads to the static allocation of physical servers and switches into dedicated clusters, such that it is difficult to add or remove hardware from these clusters for short periods of time. This silofication of the hardware leads to inefficient use of clusters. This dissertation proposes a novel architecture for improving the efficiency of clusters by enabling them to add or remove bare-metal servers for short periods of time. By implementing a working prototype of the architecture, we demonstrate that such silos can be broken and that it is possible to share servers between clusters that are managed by different tools, have different security requirements, and are operated by tenants of the data center who may not trust each other. Physical servers and switches in a data center are grouped for a combination of reasons. They are used for different purposes (staging, production, research, etc.); they host applications required for servicing specific workloads (HPC, Cloud, Big Data, etc.); and/or they are configured to meet stringent security and compliance requirements. Additionally, the different provisioning systems and tools used to manage these clusters, such as OpenStack Ironic, MaaS, and Foreman, take control of the servers, making it difficult to add or remove hardware from their control. Moreover, these clusters are typically stood up with sufficient capacity to meet the anticipated peak workload. This leads to inefficient usage of the clusters: they are under-utilized during off-peak hours, and when demand exceeds capacity they suffer from degraded quality of service (QoS) or may violate service level objectives (SLOs). Although today's clouds offer huge benefits in terms of on-demand elasticity, economies of scale, and a pay-as-you-go model, many organizations are reluctant to move their workloads to the cloud. Organizations that (i) need total control of their hardware, (ii) have custom deployment practices, (iii) need to meet stringent security and compliance requirements, or (iv) do not want to pay the high costs incurred from running workloads in the cloud prefer to own their hardware and host it in a data center. This includes a large section of the economy, including financial companies, medical institutions, and government agencies, that continues to host its own clusters outside of the public cloud. Considering that not all clusters undergo peak demand at the same time provides an opportunity to improve the efficiency of clusters by sharing resources between them. This dissertation describes the design and implementation of the Market Driven Elastic Secure Infrastructure (MESI) as an alternative to the public cloud and as an architecture for the lowest layer of the public cloud to improve its efficiency. It allows mutually non-trusting physically deployed services to share the physical servers of a data center efficiently. The approach proposed here is to build a system composed of a set of services, each fulfilling a specific functionality. A tenant of MESI has to trust only a minimal functionality of the tenant that offers the hardware resources; the rest of the services can be deployed by each tenant themselves. MESI is based on the idea of enabling tenants to share hardware they own with tenants they may not trust, and between clusters with different security requirements.
The architecture gives tenants control and freedom of choice over whether they deploy and manage these services themselves or use them from a trusted third party. MESI services fit into three layers that build on each other to provide: 1) Elastic Infrastructure, 2) Elastic Secure Infrastructure, and 3) Market-driven Elastic Secure Infrastructure. (1) The Hardware Isolation Layer (HIL), the bottommost layer of MESI, is designed for moving nodes between the multiple tools and schedulers used for managing the clusters. HIL controls the layer-2 switches and bare-metal servers so that tenants can elastically adjust the size of their clusters in response to the changing demand of the workload. It enables the movement of nodes between clusters with minimal to no modifications to the tools and workflows used for managing these clusters. (2) The Elastic Secure Infrastructure (ESI) builds on HIL to enable sharing of servers between clusters with different security requirements and mutually non-trusting tenants of the data center. ESI enables the borrowing tenant to minimize its trust in the node provider and to take control of the trade-offs between cost, performance, and security. This enables sharing of nodes between tenants that are not only part of the same organization but may also belong to different organizations in a co-located data center. (3) The Bare-metal Marketplace is an incentive-based system that uses economic principles of the marketplace to encourage tenants to share their servers with others, not just when they do not need them but also when others need them more. It provides tenants the ability to define their own cluster objectives and sharing constraints, and the freedom to decide the number of nodes they wish to share with others.
MESI is evaluated using prototype implementations at each layer of the architecture. (i) The HIL prototype, implemented in only 3000 lines of code (LOC), is able to support many provisioning tools and schedulers with little to no modification, adds no overhead to the performance of the clusters, and is in active production use at the MOC managing over 150 servers and 11 switches. (ii) The ESI prototype builds on the HIL prototype and adds to it an attestation service, a provisioning service, and a deterministically built open-source firmware. Results demonstrate that it is possible to build a cluster that is secure, elastic, and fairly quick to set up, with the tenant requiring only minimal trust in the provider for the availability of the node. (iii) The MESI prototype demonstrates the feasibility of a one-of-a-kind multi-provider marketplace for trading bare-metal servers in which the providers also use the nodes. Agents trade bare-metal servers in the marketplace to meet the requirements of their clusters. The evaluation of the MESI prototype shows that all the clusters benefit from participating in the marketplace: compared to operating as silos, individual clusters see a 50% improvement in the total work done, up to a 75% reduction in queue waiting times, and up to a 60% improvement in the aggregate utilization of the test bed.
This dissertation makes the following contributions: (i) It defines the MESI architecture, which allows mutually non-trusting tenants of the data center to share resources between clusters with different security requirements. (ii) It demonstrates that it is possible to design a service that breaks the silos of static cluster allocation yet has a small Trusted Computing Base (TCB) and no overhead on the performance of the clusters. (iii) It provides a unique architecture that puts the tenant in control of its own security and minimizes the trust needed in the provider for sharing nodes. (iv) It presents a working prototype of a multi-provider marketplace for bare-metal servers, a first proof-of-concept demonstrating that it is possible to trade real bare-metal nodes at practical time scales, such that moving nodes between clusters is fast enough to get useful work done. (v) Finally, results show that it is possible to encourage even mutually non-trusting tenants to share their nodes with each other without any central authority making allocation decisions. Many smart, dedicated engineers and researchers have contributed to this work over the years. I jointly led the efforts to design the HIL and ESI layers, and led the design and implementation of the bare-metal marketplace and the overall MESI architecture.
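
A toy sketch of the marketplace idea, with all names, fields, and the pricing rule assumed for illustration rather than taken from the MESI agent protocol: providers offer idle bare-metal nodes with an asking price, borrowing clusters bid for nodes, and a greedy matcher pairs them whenever a bid covers an ask.

        # Assumed offer/request records; a greedy matcher for illustration only.
        def match(offers, requests):
            """Match cheapest offers to highest bids while the bid covers the ask."""
            offers = sorted(offers, key=lambda o: o["ask"])
            requests = sorted(requests, key=lambda r: r["bid"], reverse=True)
            trades = []
            for req in requests:
                while req["nodes"] > 0 and offers and offers[0]["ask"] <= req["bid"]:
                    offer = offers[0]
                    n = min(req["nodes"], offer["nodes"])
                    trades.append((offer["provider"], req["tenant"], n))
                    req["nodes"] -= n
                    offer["nodes"] -= n
                    if offer["nodes"] == 0:
                        offers.pop(0)
            return trades

        offers = [{"provider": "clusterA", "nodes": 4, "ask": 2},
                  {"provider": "clusterB", "nodes": 2, "ask": 5}]
        requests = [{"tenant": "clusterC", "nodes": 5, "bid": 6}]
        print(match(offers, requests))
        # [('clusterA', 'clusterC', 4), ('clusterB', 'clusterC', 1)]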

    HDeepRM: Deep Reinforcement Learning for Workload Management in Heterogeneous Clusters

    High Performance Computing (HPC) environments offer users computational capability as a service. They are constituted by computing clusters, which are groups of resources available for processing jobs sent by the users. Heterogeneous configurations of these clusters allow resources to be fitted to a wider spectrum of workloads than traditional homogeneous approaches, which in turn improves the computational and energy efficiency of the service. Scheduling of resources for incoming jobs is undertaken by a workload manager following an established policy. Classic policies have been developed for homogeneous environments, with the literature focusing on improving job selection policies. In heterogeneous configurations, however, resource selection is just as relevant for optimizing the offered service. The complexity of scheduling policies grows with the number of resources and the degree of heterogeneity in the system. Deep Reinforcement Learning (DRL) has recently been evaluated in homogeneous workload management scenarios as an alternative for dealing with such complex patterns. It introduces an artificial agent that learns to estimate the optimal scheduling policy for a given system. In this thesis, HDeepRM, a novel framework for the study of DRL agents in heterogeneous clusters, is designed, implemented, tested, and distributed. It leverages a state-of-the-art simulator and offers users a clean interface for developing their own bespoke agents, as well as for evaluating them before going into production. Evaluations have been undertaken to demonstrate the validity of the framework. Two agents based on well-known reinforcement learning algorithms are implemented on top of HDeepRM, and the comparison with classic policies shows the research potential of this area for the scientific community.
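
    A minimal sketch of the decision an HDeepRM-style agent learns; the framework's real observation and action spaces and its simulator hooks are richer, and everything below is an illustrative assumption: the agent maps a cluster observation to a joint choice of job-selection and resource-selection policy.

        import random

        JOB_POLICIES = ["first_arrived", "shortest"]
        RESOURCE_POLICIES = ["highest_flops", "lowest_power"]

        class RandomAgent:
            """Stand-in for a trained DRL policy: maps an observation to an action."""
            def decide(self, observation):
                # A trained agent would feed `observation` through a neural
                # network; here we sample a (job, resource) policy pair at random.
                return random.choice(JOB_POLICIES), random.choice(RESOURCE_POLICIES)

        obs = {"queue_size": 12, "mean_wait": 340.0, "idle_cores": 64, "idle_gpus": 2}
        print(RandomAgent().decide(obs))  # e.g. ('shortest', 'lowest_power')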