
    Scheduling tasks sharing files from distributed repositories

    This paper is devoted to scheduling a large collection of independent tasks onto a large distributed heterogeneous platform composed of a set of servers. Each server is a processor cluster equipped with a file repository. The tasks to be scheduled depend upon (input) files which initially reside on the server repositories. A given file may well be shared by several tasks. For each task, the problem is to decide on which server to execute it, and to transfer the required files (those the task depends upon) to that server's repository. The objective is to find a task allocation, and to schedule the induced communications, so as to minimize the total execution time. The contribution of this paper is twofold. On the theoretical side, we establish new complexity results that assess the difficulty of the problem. On the practical side, we design several new heuristics, including an extension of the min-min heuristic to the decentralized framework, and several lower-cost heuristics, which we compare through extensive simulations.
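    The min-min extension mentioned in this abstract lends itself to a compact illustration. The following is a hedged Python sketch, not the authors' implementation: it assumes a single uniform link bandwidth, serializes each task's file transfers before its computation, and all names (Server, Task, transfer_time, min_min_schedule) and data are illustrative rather than taken from the paper.

```python
# Sketch of a min-min-style heuristic for tasks sharing input files,
# under simplifying assumptions (uniform bandwidth, transfers serialized
# before computation). All identifiers are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Server:
    name: str
    speed: float                                  # work units processed per time unit
    repository: set = field(default_factory=set)  # files currently stored here
    ready_time: float = 0.0                       # when the server is next free

@dataclass(frozen=True)
class Task:
    name: str
    work: float        # computation volume
    files: frozenset   # input files the task depends on

def transfer_time(task, server, file_sizes, bandwidth):
    """Time to fetch the files the server's repository is still missing."""
    missing = task.files - server.repository
    return sum(file_sizes[f] for f in missing) / bandwidth

def min_min_schedule(tasks, servers, file_sizes, bandwidth):
    """Repeatedly pick the (task, server) pair with the earliest completion
    time, then commit the induced file transfers to that server."""
    schedule = []
    remaining = set(tasks)
    while remaining:
        best = None
        for t in remaining:
            for s in servers:
                finish = (s.ready_time
                          + transfer_time(t, s, file_sizes, bandwidth)
                          + t.work / s.speed)
                if best is None or finish < best[0]:
                    best = (finish, t, s)
        finish, t, s = best
        s.repository |= t.files   # transferred files now reside at the server
        s.ready_time = finish
        remaining.remove(t)
        schedule.append((t.name, s.name, finish))
    return schedule

if __name__ == "__main__":
    # Illustrative usage with made-up data: two tasks sharing file "f1".
    sizes = {"f1": 10.0, "f2": 4.0}
    servers = [Server("s1", speed=2.0, repository={"f1"}),
               Server("s2", speed=1.0)]
    tasks = [Task("t1", work=6.0, files=frozenset({"f1"})),
             Task("t2", work=6.0, files=frozenset({"f1", "f2"}))]
    print(min_min_schedule(tasks, servers, sizes, bandwidth=1.0))
```

    Because transfer cost vanishes for files already present in a repository, this greedy rule tends to co-locate tasks that share large files, which is the kind of behaviour the paper's simulations weigh against cheaper heuristics.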

    Scheduling Tasks Sharing Files from Distributed Repositories (revised version)

    This paper is devoted to scheduling a large collection of independent tasks onto a large distributed heterogeneous platform composed of a set of servers. Each server is a processor cluster equipped with a file repository. The tasks to be scheduled depend upon (input) files which initially reside on the server repositories. A given file may well be shared by several tasks. For each task, the problem is to decide which server will execute it, and to transfer the required files (those the task depends upon) to that server's repository. The objective is to find a task allocation, and to schedule the induced communications, so as to minimize the total execution time. The contribution of this paper is twofold. On the theoretical side, we establish complexity results that assess the difficulty of the problem. On the practical side, we design several new heuristics, including an extension of the min-min heuristic to the decentralized framework, and several lower-cost heuristics, which we compare through extensive simulations. This report is a revised version of the LIP research report no. 2003-49 / INRIA research report no. 4976, which it replaces.

    HTC Scientific Computing in a Distributed Cloud Environment

    This paper describes the use of a distributed cloud computing system for high-throughput computing (HTC) scientific applications. The distributed cloud computing system is composed of a number of separate Infrastructure-as-a-Service (IaaS) clouds that are utilized as a unified infrastructure. The distributed cloud has been in production-quality operation for two years, completing approximately 500,000 jobs; a typical workload consists of 500 simultaneous embarrassingly parallel jobs, each running for approximately 12 hours. We review the design and implementation of the system, which is based on pre-existing components and a number of custom components. We discuss the operation of the system, and describe our plans for expansion to more sites and increased computing capacity.

    A Taxonomy of Data Grids for Distributed Data Sharing, Management and Processing

    Data Grids have been adopted as the platform for scientific communities that need to share, access, transport, process and manage large data collections distributed worldwide. They combine high-end computing technologies with high-performance networking and wide-area storage management techniques. In this paper, we discuss the key concepts behind Data Grids and compare them with other data sharing and distribution paradigms such as content delivery networks, peer-to-peer networks and distributed databases. We then provide comprehensive taxonomies that cover various aspects of architecture, data transportation, data replication, and resource allocation and scheduling. Finally, we map the proposed taxonomy to various Data Grid systems, not only to validate the taxonomy but also to identify areas for future exploration. Through this taxonomy, we aim to categorise existing systems to better understand their goals and their methodology, which would help evaluate their applicability for solving similar problems. This taxonomy also provides a "gap analysis" of this area, through which researchers can potentially identify new issues for investigation. Finally, we hope that the proposed taxonomy and mapping also provide an easy way for new practitioners to understand this complex area of research.
    Comment: 46 pages, 16 figures, Technical Report