Enhancing Job Scheduling of an Atmospheric Intensive Data Application
Nowadays, e-Science applications involve a great deal of data in order to achieve more accurate analysis. One such application domain is Radio Occultation, which manages satellite data. Grid Processing Management is a geographically distributed physical infrastructure, based on Grid Computing, implemented for the overall Radio Occultation processing analysis. After a brief description of the algorithms adopted to characterize atmospheric profiles, the paper presents an improvement of job scheduling aimed at decreasing processing time and optimizing resource utilization. The capacity of the existing physical Grid is extended with virtual machines in order to satisfy temporary job requests. Scheduling also plays an important role in the infrastructure; it is handled by a pair of schedulers developed to manage data automatically.
Multi-core job submission and grid resource scheduling for ATLAS AthenaMP
AthenaMP is the multi-core implementation of the ATLAS software framework and allows the efficient sharing of memory pages between multiple threads of execution. This has now been validated for production and delivers a significant reduction in the overall application memory footprint with negligible CPU overhead. Before AthenaMP can be routinely run on the LHC Computing Grid, it must be determined how the computing resources available to ATLAS can best exploit the notable improvements delivered by switching to this multi-process model. A study of the effectiveness and scalability of AthenaMP in a production environment will be presented. Best practices for configuring the main LRMS implementations currently used by grid sites will be identified in the context of multi-core scheduling optimisation.
Workload Schedulers - Genesis, Algorithms and Comparisons
In this article we provide brief descriptions of three classes of schedulers: Operating System Process Schedulers, Cluster System Job Schedulers, and Big Data Schedulers. We describe their evolution from early adoptions to modern implementations, considering both the use and the features of their algorithms. We then discuss the differences between the presented classes of schedulers and their chronological development. In conclusion, we highlight similarities in the design of scheduling strategies that apply to both local and distributed systems.
A Distributed Economics-based Infrastructure for Utility Computing
Existing attempts at utility computing revolve around two approaches. The
first consists of proprietary solutions involving renting time on dedicated
utility computing machines. The second requires the use of heavy, monolithic
applications that are difficult to deploy, maintain, and use.
We propose a distributed, community-oriented approach to utility computing.
Our approach provides an infrastructure built on Web Services in which modular
components are combined to create a seemingly simple, yet powerful system. The
community-oriented nature generates an economic environment which results in
fair transactions between consumers and providers of computing cycles while
simultaneously encouraging improvements in the infrastructure of the
computational grid itself.
A new job migration algorithm to improve data center efficiency
Underexploitation of the available resources risks being one of the main
problems for a computing center. The growing demand for computational power
necessarily entails more complex approaches to the management of computing
resources, with particular attention to the batch queue system scheduler. In a
heterogeneous batch queue system, open to both serial single-core processes
and parallel multi-core jobs, it may happen that one or more of the
computational nodes composing the cluster are not fully occupied, running
fewer jobs than their actual capability. A typical case is several
single-core jobs each running on a different multi-core server, while
parallel jobs - requiring all the available cores of a host - are queued. A
job rearrangement executed at runtime can free these extra resources in order
to host new processes. We present an efficient method to improve the
exploitation of computing resources.
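The rearrangement idea can be sketched as a greedy consolidation: migrate single-core jobs off the least-loaded hosts onto hosts that still have spare cores, until entire nodes are emptied and can host a queued parallel job. This is an illustrative sketch under assumed data structures (node names, a homogeneous core count, a job map), not the paper's actual algorithm.

```python
# Illustrative sketch (not the paper's algorithm): consolidate
# single-core jobs onto the fewest hosts so that whole nodes are
# freed for queued parallel jobs needing every core of one host.

CORES_PER_NODE = 4  # assumed homogeneous core count per node

def plan_migrations(nodes):
    """nodes maps node name -> list of single-core job ids.
    Returns (migrations, freed_nodes), where migrations is a list
    of (job_id, source_node, destination_node) tuples."""
    order = sorted(nodes, key=lambda n: len(nodes[n]))  # least loaded first
    migrations, freed = [], []
    i, j = 0, len(order) - 1
    while i < j:
        src = order[i]
        # Drain the lightly loaded node into the most loaded nodes
        # that still have spare cores.
        while nodes[src] and j > i:
            dst = order[j]
            if len(nodes[dst]) >= CORES_PER_NODE:
                j -= 1           # destination full, try the next one
                continue
            job = nodes[src].pop()
            nodes[dst].append(job)
            migrations.append((job, src, dst))
        if not nodes[src]:
            freed.append(src)    # this host can now take a parallel job
        i += 1
    return migrations, freed
```

For example, with three 4-core nodes running 1, 1 and 3 single-core jobs, a single migration empties one host, which can then accept a queued 4-core parallel job.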
Priority-enabled Scheduling for Resizable Parallel Applications
In this paper, we illustrate the impact of dynamic resizability on parallel scheduling.
Our ReSHAPE framework includes an application scheduler that supports dynamic resizing of parallel applications. We propose and evaluate new scheduling policies made possible by ReSHAPE. The framework also provides a platform to experiment with more interesting and sophisticated scheduling policies and scenarios for resizable parallel applications. The proposed policies support scheduling of parallel applications with and without user-assigned priorities. Experimental results show that these policies significantly improve individual application turnaround time as well as overall cluster utilization.
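One concrete policy that dynamic resizing enables can be sketched as follows: when a high-priority job arrives and too few processors are free, shrink lower-priority resizable jobs toward their minimum sizes to make room. This is a hypothetical illustration of the idea, not ReSHAPE's actual policy; the field names and job records are assumptions.

```python
# Hypothetical sketch of a priority-aware resizing policy: reclaim
# processors from lower-priority resizable jobs so a newly arrived
# high-priority job can start. All field names are illustrative.

def make_room(free_procs, arriving, running):
    """arriving/running entries are dicts with 'name', 'priority',
    'size', 'min_size', 'resizable'. Returns (shrink_plan, ok):
    shrink_plan maps job name -> its new (smaller) processor count."""
    needed = arriving["size"] - free_procs
    plan = {}
    # Take processors from the lowest-priority jobs first.
    for job in sorted(running, key=lambda j: j["priority"]):
        if needed <= 0:
            break
        if not job["resizable"] or job["priority"] >= arriving["priority"]:
            continue  # never shrink equal- or higher-priority jobs
        give = min(job["size"] - job["min_size"], needed)
        if give > 0:
            plan[job["name"]] = job["size"] - give
            needed -= give
    return plan, needed <= 0
```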
Development of a tool to optimize the performance of a Maui Cluster Scheduler
The use of Linux cluster computing in scientific and heterogeneous environments has been growing very fast in the past years. The often conflicting user requests for shared resources are quite difficult for administrators to satisfy and usually lower the overall system efficiency. In this scenario, a new tool to study and optimize the Maui Cluster Scheduler has been developed, together with a new set of metrics to evaluate any given configuration. The main idea is to use the Maui internal simulator, fed by workloads produced either by a real cluster or by an ad hoc one, to test several scheduler configurations and then, using a genetic algorithm, to choose the best solution. In this work the architecture of the proposed tool is described, together with the first results.
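The genetic-algorithm search described above can be sketched generically: encode a scheduler configuration as a parameter vector, score each candidate with the simulator, and evolve a population through selection, crossover, and mutation. In this sketch the parameter names and the fitness function are placeholders (a real run would score each configuration by driving the Maui simulator on a recorded workload), so treat it as an outline, not the tool itself.

```python
# Generic GA outline for tuning scheduler parameters. Parameter
# names and the toy fitness function are assumptions, standing in
# for a simulator run over a real or synthetic workload.
import random

PARAM_SPACE = {                # hypothetical tunable weights
    "queue_weight": (0, 10),
    "fairshare_weight": (0, 10),
    "backfill_depth": (1, 50),
}

def random_config():
    return {k: random.randint(lo, hi) for k, (lo, hi) in PARAM_SPACE.items()}

def fitness(cfg):
    # Stand-in for a simulator run (e.g. negated mean wait time);
    # a toy quadratic with a known optimum, higher is better.
    return -((cfg["queue_weight"] - 7) ** 2
             + (cfg["fairshare_weight"] - 3) ** 2
             + (cfg["backfill_depth"] - 20) ** 2)

def crossover(a, b):
    return {k: random.choice((a[k], b[k])) for k in PARAM_SPACE}

def mutate(cfg, rate=0.2):
    for k, (lo, hi) in PARAM_SPACE.items():
        if random.random() < rate:
            cfg[k] = random.randint(lo, hi)
    return cfg

def evolve(pop_size=20, generations=40):
    pop = [random_config() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)
```

Because the top half of each generation is kept unchanged, the best configuration found never regresses between generations, mirroring how a tuning tool can safely iterate over many simulated scheduler setups before touching the production cluster.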