
    Topology-aware equipartitioning with coscheduling on multicore systems

    Over the last decade, multicore architectures have become omnipresent. Today, they are used across the whole product range, from server systems to handheld computers. The deployed software is still undergoing the slow transition from sequential to parallel. This transition, however, is gaining momentum due to the increasing availability of more sophisticated parallel programming environments, which replace the sometimes crude results of ad-hoc parallelization. Combined with the ever increasing complexity of multicore architectures, this results in a scheduling problem that differs from the traditional one, because features such as non-uniform memory access, shared caches, or simultaneous multithreading have to be considered. In this paper, we compare different ways of scheduling multiple parallel applications. Because of emerging parallel programming environments, we consider only malleable applications, i.e., applications whose degree of parallelism can be changed on the fly. We propose a topology-aware scheduling scheme that combines equipartitioning and coscheduling. It does not suffer from the drawbacks of the individual concepts and also allows applications to run at different degrees of parallelism without compromising fairness. We find that topology awareness increases performance for all evaluated workloads. The combination with coscheduling is more sensitive to the executed workload; however, the gained versatility allows new use cases to be explored that were not possible before.
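
    To make the idea concrete, below is a minimal Python sketch of the equipartitioning step with topology awareness: cores are split evenly among malleable applications, and each application's share is packed onto as few NUMA nodes as possible. The names and data structures (MalleableApp, topology_aware_equipartition) are illustrative assumptions rather than the scheduler described in the paper, and the coscheduled time-slicing of application groups is omitted.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class MalleableApp:
    name: str
    max_parallelism: int  # upper bound on threads the app can usefully run

def topology_aware_equipartition(apps: List[MalleableApp],
                                 numa_nodes: List[List[int]]) -> Dict[str, List[int]]:
    """Divide cores evenly among malleable apps, packing each app's share
    onto as few NUMA nodes as possible (illustrative sketch only)."""
    total_cores = sum(len(node) for node in numa_nodes)
    share = total_cores // len(apps)            # equal share per application
    free = [list(node) for node in numa_nodes]  # mutable copy of the topology
    allocation: Dict[str, List[int]] = {}
    for app in apps:
        want = min(share, app.max_parallelism)  # malleable: shrink to what the app can use
        cores: List[int] = []
        # Prefer the nodes with the most free cores so an app spans few nodes.
        for node in sorted(free, key=len, reverse=True):
            while node and len(cores) < want:
                cores.append(node.pop())
            if len(cores) == want:
                break
        allocation[app.name] = cores
    return allocation

if __name__ == "__main__":
    topology = [[0, 1, 2, 3], [4, 5, 6, 7]]  # two NUMA nodes, four cores each
    apps = [MalleableApp("lu", 8), MalleableApp("fft", 8)]
    print(topology_aware_equipartition(apps, topology))
```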

    High-Throughput Computing on High-Performance Platforms: A Case Study

    The computing systems used by LHC experiments have historically consisted of the federation of hundreds to thousands of distributed resources, ranging from small to mid-size. In spite of the impressive scale of the existing distributed computing solutions, the federation of small to mid-size resources will be insufficient to meet projected future demands. This paper is a case study of how the ATLAS experiment has embraced Titan, a DOE leadership facility, in conjunction with traditional distributed high-throughput computing to reach sustained production scales of approximately 52M core-hours a year. The three main contributions of this paper are: (i) a critical evaluation of design and operational considerations to support the sustained, scalable and production usage of Titan; (ii) a preliminary characterization of a next-generation executor for PanDA to support new workloads and advanced execution modes; and (iii) early lessons on how current and future experimental and observational systems can be integrated with production supercomputers and other platforms in a general and extensible manner.

    Enhancing the genetic-based scheduling in computational grids by a structured hierarchical population

    Independent Job Scheduling is one of the most useful versions of scheduling in grid systems. It aims at computing an efficient and optimal mapping of jobs and/or applications submitted by independent users to the grid resources. Besides traditional restrictions, the mapping of jobs to resources must be computed under a high degree of resource heterogeneity, the large scale of the system, and its dynamics. Because of the complexity of the problem, heuristic and meta-heuristic approaches are the most feasible scheduling methods in grids due to their ability to deliver high-quality solutions in reasonable computing time. One class of such meta-heuristics is the Hierarchic Genetic Strategy (HGS). It is defined as a variant of Genetic Algorithms (GAs) which differs from other genetic methods by its capability of concurrent search of the solution space. In this work, we present an implementation of HGS for Independent Job Scheduling in dynamic grid environments. We consider the bi-objective version of the problem in which makespan and flowtime are simultaneously optimized. Building on our previous work, we improve the HGS scheduling strategy by enhancing its main branching operations. The resulting HGS-based scheduler is evaluated under heterogeneity, large-scale, and dynamic conditions using a grid simulator. The experimental study shows that the HGS implementation outperforms existing GA-based schedulers proposed in the literature.
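
    As a point of reference for the two objectives, here is a minimal Python sketch that evaluates one candidate schedule under the commonly used expected-time-to-compute (ETC) model: makespan is the completion time of the busiest machine, and flowtime is the sum of the individual job completion times. The function name, the ETC layout, and the example numbers are illustrative assumptions and do not reproduce the paper's simulator or its HGS branching operators.

```python
from typing import Sequence, Tuple

def makespan_and_flowtime(etc: Sequence[Sequence[float]],
                          assignment: Sequence[int]) -> Tuple[float, float]:
    """Evaluate one schedule of independent jobs on heterogeneous machines.

    etc[j][m]     -- expected time to compute job j on machine m
    assignment[j] -- machine chosen for job j (jobs are queued in index order)
    Returns (makespan, flowtime): the completion time of the busiest machine
    and the sum of the job completion times.
    """
    ready = [0.0] * (max(assignment) + 1)    # time at which each machine becomes free
    flowtime = 0.0
    for job, machine in enumerate(assignment):
        ready[machine] += etc[job][machine]  # the job finishes when its machine does
        flowtime += ready[machine]
    return max(ready), flowtime

if __name__ == "__main__":
    # Three jobs, two machines; the times are purely illustrative.
    etc = [[3.0, 5.0], [2.0, 1.0], [4.0, 6.0]]
    print(makespan_and_flowtime(etc, [0, 1, 0]))  # -> (7.0, 11.0)
```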

    Workload Schedulers - Genesis, Algorithms and Comparisons

    In this article we provide brief descriptions of three classes of schedulers: Operating Systems Process Schedulers, Cluster Systems Jobs Schedulers, and Big Data Schedulers. We describe their evolution from early adoptions to modern implementations, considering both the algorithms used and their features. We then discuss the differences between the presented classes of schedulers and trace their chronological development. In conclusion, we highlight similarities in the design of scheduling strategies that apply to both local and distributed systems.