
    ANALYZING SUPERCOMPUTER UTILIZATION UNDER QUEUING WITH A PRIORITY FORMULA AND A STRICT BACKFILL POLICY

    Supercomputers have become increasingly important in recent years due to the growing amount of data available and the increasing demand for quicker results in the scientific community. Since supercomputers carry a high cost to build and maintain, efficiency is important to the owners, administrators, and users of these systems. One important factor in a supercomputer's efficiency is the scheduling of the jobs submitted by its users. Previous work has dealt with optimizing the schedule on the system's end while the users are blinded from the process. The work presented in this thesis investigates the scheduling system, with a backfilling policy, implemented on Kraken, the Oak Ridge National Laboratory (ORNL) supercomputer, and attempts to outline the optimal methods from the user's point of view, along with using a simulation approach to optimize the priority formula. Normally users have no idea which scheduling algorithms are used, but the users at ORNL not only know how the scheduling works but can also view the current activity of the system. This gives an advantage to users willing to exploit this knowledge by applying some elementary game theory to optimize their strategies. The results will show a benefit, through competition, both to the users, since they will be able to process their jobs sooner, and to the system, since it will be better utilized at little expense to the administrators. Queuing models and simulation have been well studied in almost all relevant aspects of the modern world. Higher efficiency is the goal of many researchers in several different fields; supercomputer queues are no different. Efficient use of resources pleases the system administrator while benefiting the users with more timely results. Studying these queuing models through simulation should help all parties involved by increasing utilization. The simulation will be validated, and the utilization improvement will be measured and reported. User-defined formulas will be developed to help future users maximize utilization and minimize wait times.
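
    To make the mechanics concrete, below is a minimal Python sketch of a priority-formula scheduler with strict backfilling of the kind the thesis studies. The linear priority weights, the Job fields, and the helper names are illustrative assumptions, not the thesis's actual formula or the scheduler deployed on Kraken.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Job:
        id: int
        nodes: int       # nodes requested
        walltime: float  # user-estimated runtime (hours)
        submit: float    # submission time (hours)

    def priority(job, now, w_age=1.0, w_size=0.1):
        # Assumed linear formula: waiting longer and requesting more
        # nodes both raise a job's rank.
        return w_age * (now - job.submit) + w_size * job.nodes

    def shadow_time(job, free_nodes, running):
        # Earliest time enough nodes free up for `job`, given running
        # jobs as (end_time, nodes) pairs.
        avail = free_nodes
        for end, nodes in sorted(running):
            avail += nodes
            if avail >= job.nodes:
                return end
        return float("inf")

    def schedule_step(queue, free_nodes, running, now):
        """One pass: start the top-priority job if it fits; otherwise
        backfill jobs that cannot delay its reservation."""
        queue.sort(key=lambda j: priority(j, now), reverse=True)
        started = []
        if queue and queue[0].nodes <= free_nodes:
            started.append(queue.pop(0))
            free_nodes -= started[0].nodes
        elif queue:
            shadow = shadow_time(queue[0], free_nodes, running)
            for j in list(queue[1:]):
                # Strict rule: fit now AND finish before the shadow time.
                if j.nodes <= free_nodes and now + j.walltime <= shadow:
                    queue.remove(j)
                    started.append(j)
                    free_nodes -= j.nodes
        return started
    ```

    In this strict variant a lower-priority job is backfilled only if it fits on the currently free nodes and its estimated finish precedes the reservation ("shadow") time of the top-priority job, so the head of the queue is never delayed.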

    An Elastic Scheduling Algorithm For Resource Co-Allocation Based on System Generated Predictions With Priority

    Resource co-allocation is used to execute multi-site jobs in large-scale computing environments in a secure, fault-tolerant, and transparent manner. In essence, multiple resources are allocated to different jobs while taking the time parameter into account. Here we use a scheduling queue and resource co-allocation to reduce turnaround time, adding the concept of system-generated predictions with priority. Existing work schedules co-allocation requests from user runtime estimates, which are usually very imprecise. In the proposed work, we schedule co-allocation requests based on system-generated predictions, obtained through a discovery service, and on priority (fairness and user experience), determined through a topological sorting technique. System-generated predictions are better parameters than user runtime estimates for co-allocation scheduling because they reduce scheduling time through a proxy-server-based discovery service; we validate this claim with experimental data. The proposed work considers priorities such as advance reservation, system-generated predictions, negotiation, co-scheduling, and policy (SLA, price, trust) for resource co-allocation. End users need no knowledge of the grid or its resources; they simply submit jobs to the portal, which handles resource co-allocation automatically in a fast and efficient manner.
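
    As a concrete illustration of the priority step, here is a minimal Python sketch of ordering co-allocation requests with a topological sort (Kahn's algorithm), the technique the abstract names for its priority ordering. The edge semantics, request a must be scheduled before request b, are my assumption for illustration.

    ```python
    from collections import defaultdict, deque

    def topo_order(requests, before):
        """requests: iterable of request ids.
        before: (a, b) pairs meaning a must be scheduled before b."""
        indeg = {r: 0 for r in requests}
        succ = defaultdict(list)
        for a, b in before:
            succ[a].append(b)
            indeg[b] += 1
        ready = deque(r for r, d in indeg.items() if d == 0)
        order = []
        while ready:
            r = ready.popleft()
            order.append(r)
            for s in succ[r]:
                indeg[s] -= 1
                if indeg[s] == 0:
                    ready.append(s)
        if len(order) != len(indeg):
            raise ValueError("cycle in scheduling constraints")
        return order

    # Example: reservation r1 must precede co-scheduled requests r2 and r3.
    print(topo_order(["r1", "r2", "r3"], [("r1", "r2"), ("r1", "r3")]))
    ```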

    Provisioning Spot Market Cloud Resources to Create Cost-effective Virtual Clusters

    Infrastructure-as-a-Service providers are offering their unused resources in the form of variable-priced virtual machines (VMs), known as "spot instances", at prices significantly lower than those of their standard fixed-priced resources. To lease spot instances, users specify a maximum price they are willing to pay per hour, and VMs run only when the current price is lower than the user's bid. This paper proposes a resource allocation policy that addresses the problem of running deadline-constrained compute-intensive jobs on a pool composed solely of spot instances, while exploiting variations in price and performance to run applications in a fast and economical way. Our policy relies on job runtime estimations to decide which VM types are best for running each job and when jobs should run. Several estimation methods are evaluated and compared using trace-based simulations, which take as input real price variation traces obtained from Amazon Web Services as well as an application trace from the Parallel Workload Archive. Results demonstrate the effectiveness of running computational jobs on spot instances, at a fraction (up to 60% lower) of the price the jobs would normally cost on fixed-priced resources. (Comment: 14 pages, 4 figures; 11th International Conference on Algorithms and Architectures for Parallel Processing (ICA3PP-11), Lecture Notes in Computer Science, Vol. 7016, 2011.)
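
    A hedged sketch of the core decision such a policy makes: given a job runtime estimate, choose the spot VM type that meets the deadline at the lowest expected cost. The VM speeds, prices, and hourly billing model below are made-up stand-ins, not values or the exact policy from the paper.

    ```python
    import math

    VM_TYPES = {
        # name: (relative_speed, current_spot_price_per_hour) -- assumed
        "small":  (1.0, 0.03),
        "medium": (2.0, 0.06),
        "large":  (4.0, 0.15),
    }

    def pick_vm(est_runtime_hours, deadline_hours):
        """Return (vm_type, expected_cost) for the cheapest type that
        can finish the job before its deadline, or None if none can."""
        best = None
        for name, (speed, price) in VM_TYPES.items():
            runtime = est_runtime_hours / speed
            if runtime > deadline_hours:
                continue  # this type cannot finish before the deadline
            cost = math.ceil(runtime) * price  # assumed hourly billing
            if best is None or cost < best[1]:
                best = (name, cost)
        return best

    print(pick_vm(est_runtime_hours=6.0, deadline_hours=4.0))
    # -> ('medium', ~0.18): 'small' misses the deadline, 'large' costs more
    ```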

    Using Unused: Non-Invasive Dynamic FaaS Infrastructure with HPC-Whisk

    Modern HPC workload managers and their careful tuning contribute to the high utilization of HPC clusters. However, due to inevitable uncertainty, it is impossible to completely avoid node idleness. Although such idle slots are usually too short for any HPC job, they are too long to ignore. The Function-as-a-Service (FaaS) paradigm promises to fill this gap and is a good match, as typical FaaS functions last seconds, not hours. Here we show how to build a FaaS infrastructure on idle nodes in an HPC cluster in such a way that it does not significantly affect the performance of the HPC jobs. We dynamically adapt to a changing set of idle physical machines by integrating the open-source software Slurm and OpenWhisk. We designed and implemented a prototype solution that allowed us to cover up to 90% of the idle time slots on a 50k-core cluster that runs production workloads.
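
    The non-invasive idea can be sketched as a small control loop: periodically ask Slurm which nodes are idle and adjust the FaaS pool accordingly. The sinfo invocation below is standard Slurm; the register/drain hooks are hypothetical placeholders, since HPC-Whisk's actual integration with OpenWhisk's invoker management is more involved.

    ```python
    import subprocess
    import time

    def idle_nodes():
        # `sinfo -h -t idle -o %n` prints one idle node hostname per line.
        out = subprocess.run(["sinfo", "-h", "-t", "idle", "-o", "%n"],
                             capture_output=True, text=True, check=True)
        return set(out.stdout.split())

    pool = set()
    while True:
        idle = idle_nodes()
        for node in idle - pool:
            print(f"register {node} as FaaS invoker")  # hypothetical hook
        for node in pool - idle:
            print(f"drain FaaS functions off {node}")  # hypothetical hook
        pool = idle
        time.sleep(10)  # poll faster than typical idle-slot durations
    ```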

    Supercomputing Frontiers

    This open access book constitutes the refereed proceedings of the 6th Asian Supercomputing Conference, SCFA 2020, which was planned for February 2020 but whose physical meeting was cancelled due to the COVID-19 pandemic. The 8 full papers presented in this book were carefully reviewed and selected from 22 submissions. They cover a range of topics including file systems, memory hierarchy, HPC cloud platforms, container image configuration workflows, large-scale applications, and scheduling.

    Matching Renewable Energy Supply and Demand in Green Datacenters

    In this paper, we propose GreenSlot, a scheduler for parallel batch jobs in a datacenter powered by a photovoltaic solar array and the electrical grid (as a backup). GreenSlot predicts the amount of solar energy that will be available in the near future and schedules the workload to maximize green energy consumption while meeting the jobs' deadlines. If grid energy must be used to avoid deadline violations, the scheduler selects times when it is cheap. Our results for both scientific computing and data processing workloads demonstrate that GreenSlot can increase solar energy consumption by up to 117% and decrease energy cost by up to 39%, compared to conventional schedulers. Based on these positive results, we conclude that green datacenters and green-energy-aware scheduling can play a significant role in building a more sustainable IT ecosystem.
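
    A minimal sketch of the central idea, assuming a per-hour solar forecast and grid price: place each job in the feasible start hour that minimizes the cost of the energy it must draw from the grid. The forecast, the prices, and the one-job-at-a-time simplification (no machine capacity constraint) are assumptions, not the paper's models.

    ```python
    def plan(jobs, solar_kw, grid_price):
        """jobs: (name, length_h, power_kw, deadline_h) tuples.
        solar_kw[h]: predicted solar power in hour h; grid_price[h]: $/kWh."""
        schedule = {}
        for name, length, power, deadline in sorted(jobs, key=lambda j: j[3]):
            def grid_cost(start):
                # Energy not covered by solar is bought from the grid.
                return sum(max(0.0, power - solar_kw[h]) * grid_price[h]
                           for h in range(start, start + length))
            latest = min(deadline, len(solar_kw)) - length
            schedule[name] = min(range(latest + 1), key=grid_cost)
        return schedule

    solar = [0, 0, 1, 3, 5, 5, 3, 1, 0, 0]  # kW, assumed forecast
    price = [0.08] * 6 + [0.15] * 4         # $/kWh: off-peak, then peak
    print(plan([("job1", 2, 4.0, 8)], solar, price))  # -> {'job1': 4}
    ```

    The example job lands at hour 4, the midday window where the assumed forecast covers its full 4 kW draw, rather than in the cheaper but solar-free early hours.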

    ADEPT Runtime/Scalability Predictor in support of Adaptive Scheduling

    A job scheduler determines the order and duration of the allocation of resources, e.g. CPU cores, to the tasks waiting to run on a computer. Round-Robin and First-Come-First-Serve are examples of algorithms for making such resource allocation decisions. Parallel job schedulers make resource allocation decisions for applications that need multiple CPU cores, on computers consisting of many CPU cores connected by different interconnects. An adaptive parallel scheduler is one capable of adjusting its resource allocation decisions based on current resource usage and demand. Adaptive parallel schedulers that decide the number of CPU cores to allocate to a parallel job provide more flexibility than non-adaptive schedulers and can significantly improve performance for both local and grid job scheduling. A major reason why adaptive schedulers are not yet used in practice is the lack of knowledge of applications' scalability curves and the high cost of existing white-box approaches to scalability prediction. We show that a runtime and scalability prediction tool can be developed to meet three requirements: accuracy comparable to white-box methods, applicability, and robustness. Applicability means relying only on knowledge that is feasible to gain in a production environment; robustness means addressing anomalous behaviour and unreliable predictions. We present ADEPT, a speedup and runtime prediction tool that satisfies all three criteria, both for a single problem size and across different problem sizes of a parallel application. ADEPT is also capable of handling anomalies and judging the reliability of its predictions. We demonstrate these capabilities through experiments with MPI and OpenMP implementations of the NAS benchmarks and seven real applications.
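
    Although the abstract does not specify ADEPT's model, the black-box flavor of the task can be illustrated by fitting a simple speedup law to a few measured runs and extrapolating. Amdahl's law is my stand-in model here, and the measurements below are synthetic example inputs, not ADEPT's data.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def amdahl_runtime(p, t1, serial_frac):
        # Runtime on p cores = serial part + parallelizable part / p.
        return t1 * (serial_frac + (1.0 - serial_frac) / p)

    cores    = np.array([1, 2, 4, 8])               # measured configurations
    runtimes = np.array([100.0, 55.0, 32.5, 21.3])  # seconds (synthetic)

    (t1, f), _ = curve_fit(amdahl_runtime, cores, runtimes,
                           p0=[100.0, 0.1], bounds=([0, 0], [np.inf, 1]))
    for p in (16, 32, 64):
        pred = amdahl_runtime(p, t1, f)
        print(f"{p:3d} cores -> predicted {pred:6.1f} s "
              f"(speedup {t1 / pred:.1f}x)")
    ```

    A tool in ADEPT's spirit would additionally inspect the fit residuals to flag anomalous runs and to judge when its own extrapolation is unreliable.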