8 research outputs found

    Scheduling in Grid Computing Environment

    Full text link
    Scheduling in Grid computing has been an active area of research since its beginning. However, beginners find it difficult to understand the related concepts because of the steep learning curve of Grid computing. There is therefore a need for a concise treatment of scheduling in the Grid computing area. This paper strives to present such a concise understanding of scheduling and of the Grid computing systems it operates on. The paper describes the overall picture of Grid computing and discusses the important sub-systems that make Grid computing possible. It then discusses the concepts of resource scheduling and application scheduling and presents a classification of scheduling algorithms. Furthermore, it presents the methodologies used for evaluating scheduling algorithms, covering both real-system and simulation-based approaches. This concise treatment of the scheduling system, scheduling algorithms, and evaluation methodology should be useful to both users and researchers.
    Comment: Fourth International Conference on Advanced Computing & Communication Technologies (ACCT), 201

    Energy-aware simulation with DVFS

    Get PDF
    In recent years, research has been conducted in the area of models of large systems, especially distributed systems, to analyze and understand their behavior. Simulators are now commonly used in this area and are becoming more complex. Most of them provide frameworks for simulating application scheduling in various Grid infrastructures; others are specifically developed for modeling networks; but only a few of them simulate energy-efficient algorithms. This article describes which tools need to be implemented in a simulator in order to support energy-aware experimentation. The emphasis is on DVFS simulation, from its implementation in the simulator CloudSim to the whole methodology adopted to validate its functioning. In addition, a scientific application is used as a use case in both experiments and simulations, where the close relationship between DVFS efficiency and hardware architecture is highlighted. A second use case using Cloud applications represented by DAGs, which is also a new functionality of CloudSim, demonstrates that DVFS efficiency also depends on the intrinsic middleware behavior.
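    The energy/performance trade-off that DVFS simulation captures can be illustrated with the classic CMOS dynamic-power relation P = C·V²·f. The sketch below is a hypothetical toy model, not CloudSim's implementation; the frequency/voltage operating points and capacitance value are invented for illustration.

    ```python
    # Toy DVFS energy model (assumption: dynamic power dominates).
    # Lowering the frequency (and the voltage that goes with it)
    # reduces power, but stretches the runtime of a fixed workload.

    # Invented P-states: frequency (GHz) -> supply voltage (V).
    P_STATES = {1.0: 0.9, 1.6: 1.0, 2.4: 1.2}
    CAPACITANCE = 1.8  # effective switched capacitance, invented units

    def dynamic_power(freq_ghz: float) -> float:
        """Dynamic power P = C * V^2 * f at a given P-state."""
        volt = P_STATES[freq_ghz]
        return CAPACITANCE * volt ** 2 * freq_ghz

    def energy_for_task(giga_cycles: float, freq_ghz: float) -> float:
        """Energy (joules) to run a fixed cycle count at one P-state."""
        runtime_s = giga_cycles / freq_ghz  # runtime grows as f drops
        return dynamic_power(freq_ghz) * runtime_s

    if __name__ == "__main__":
        for f in sorted(P_STATES):
            print(f"{f} GHz -> {energy_for_task(10.0, f):.2f} J")
    ```

    With these invented values the lowest P-state is also the most energy-efficient, but on real hardware static power and idle time can reverse that ordering, which is exactly the hardware-architecture dependence the article highlights.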

    Quality of service based data-aware scheduling

    Get PDF
    Distributed supercomputers have been widely used for solving complex computational problems and modeling complex phenomena such as black holes, the environment, supply-chain economics, etc. In this work we analyze the use of these distributed supercomputers for time-sensitive data-driven applications. We present the scheduling challenges involved in running deadline-sensitive applications on shared distributed supercomputers running large parallel jobs, and introduce a "data-aware" scheduling paradigm that overcomes these challenges by making use of Quality of Service classes for running applications on shared resources. We evaluate the new data-aware scheduling paradigm using an event-driven hurricane simulation framework which attempts to run various simulations modeling storm surge, wave height, etc. in a timely fashion for use by first responders and emergency officials. We further generalize the work and demonstrate with examples how data-aware computing can be used in other applications with similar requirements.
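    The core dispatch rule such a paradigm implies can be sketched in a few lines: jobs in a higher QoS class are dispatched first, and within a class the earliest deadline wins (EDF). This is a hypothetical simplification with invented job names and classes, not the paper's actual scheduler.

    ```python
    # Sketch of QoS-class deadline dispatch (assumed policy, not the
    # paper's implementation): order by QoS class, then by deadline.
    from dataclasses import dataclass

    @dataclass
    class Job:
        name: str
        qos_class: int   # 0 = urgent (e.g. a storm-surge run), 1 = best effort
        deadline: float  # seconds from now

    def dispatch_order(jobs):
        """Sort jobs for dispatch: QoS class first, then earliest deadline."""
        return sorted(jobs, key=lambda j: (j.qos_class, j.deadline))

    jobs = [
        Job("wave-height", 0, 1800.0),
        Job("batch-analysis", 1, 600.0),
        Job("storm-surge", 0, 900.0),
    ]
    print([j.name for j in dispatch_order(jobs)])
    ```

    Note that under this rule a best-effort job never delays an urgent one, even when the best-effort job has the nearer deadline.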

    Virtual Organization Clusters: Self-Provisioned Clouds on the Grid

    Get PDF
    Virtual Organization Clusters (VOCs) provide a novel architecture for overlaying dedicated cluster systems on existing grid infrastructures. VOCs provide customized, homogeneous execution environments on a per-Virtual-Organization basis, without the cost of physical cluster construction or the overhead of per-job containers. Administrative access and overlay network capabilities are granted to Virtual Organizations (VOs) that choose to implement VOC technology, while the system remains completely transparent to end users and non-participating VOs. Unlike alternative systems that require explicit leases, VOCs are autonomically self-provisioned according to configurable usage policies. As a grid computing architecture, VOCs are designed to be technology agnostic and are implementable by any combination of software and services that follows the Virtual Organization Cluster Model. As demonstrated through simulation testing and evaluation of an implemented prototype, VOCs are a viable mechanism for increasing end-user job compatibility on grid sites. On existing production grids, where jobs are frequently submitted to a small subset of sites and thus experience high queuing delays relative to average job length, the grid-wide addition of VOCs does not adversely affect mean job sojourn time. By load-balancing jobs among grid sites, VOCs can reduce the total amount of queuing on a grid to a level sufficient to counteract the performance overhead introduced by virtualization.

    The Inter-cloud meta-scheduling

    Get PDF
    Inter-cloud is a recently emerging approach that expands cloud elasticity. By facilitating an adaptable setting, it aims at realizing scalable resource provisioning that allows a diversity of cloud user requirements to be handled efficiently. This study's contribution is the inter-cloud performance optimization of job executions using meta-scheduling concepts. This includes the development of the inter-cloud meta-scheduling (ICMS) framework, the ICMS optimal schemes, and the SimIC toolkit. The ICMS model is an architectural strategy for managing and scheduling user services in virtualized, dynamically inter-linked clouds. It is realized through a set of algorithms, namely the Service-Request, Service-Distribution, Service-Availability and Service-Allocation algorithms. Together with the resource-management optimal schemes, these provide the novel functionality of the ICMS: message exchanging implements the job distribution method, VM deployment offers the VM management features, and the local resource management system details the management of the local cloud schedulers. The resulting system offers great flexibility by facilitating a lightweight resource management methodology while handling the heterogeneity of different clouds through advanced service-level-agreement coordination. Experimental results show that the proposed ICMS model improves the performance of service distribution on a variety of criteria, such as service execution times, makespan, turnaround times, utilization levels and energy consumption rates, for various inter-cloud entities, e.g. users, hosts and VMs. For example, ICMS optimizes the performance of a non-meta-brokering inter-cloud by 3%, while ICMS with full optimal schemes achieves 9% optimization for the same configurations.
    The whole experimental platform is implemented in the inter-cloud simulation toolkit (SimIC) developed by the author, which is a discrete-event simulation framework.

    Quality-of-service-constrained scheduling in clouds (Ordonnancement sous contraintes de qualité de service dans les clouds)

    Get PDF
    In recent years, new issues have arisen from the environmental concerns increasingly prominent in our society. In the field of Information Technology, data centers currently consume about 1.5% of the world's electricity, and this consumption keeps increasing due to changes in many areas, especially Cloud Computing. Besides this environmental aspect, the management of energy consumption has become an important part of the Quality of Service (QoS) for which Cloud providers are responsible. These providers propose a QoS contract called an SLA (Service Level Agreement), which specifies the level of QoS offered to users. The QoS level offered directly influences the quality of the users' experience, but also the overall energy consumption and performance of the computing resources, which strongly affect the Cloud providers' profits. Cloud computing is intrinsically linked to the virtualization of computing resources. A model of hardware and software architecture is proposed in order to define the characteristics of the environment considered. Then, a detailed model of QoS parameters in terms of performance, dependability, security and cost is proposed, and QoS metrics associated with these parameters are defined in order to extend the possibilities for evaluating SLAs. These models represent the first contribution of this thesis. It is then shown how the use and interpretation of several QoS metrics enable a more complex and precise analysis of the behavior of placement algorithms. This multi-criteria approach provides useful information about the system's status that can be analyzed to manage the level of each QoS parameter. Thus, four antagonistic metrics, including energy consumption, are selected and used together in several scheduling algorithms, showing their relevance, the enrichment they bring to these algorithms, and how a Cloud provider can take advantage of the results of this kind of multi-objective optimization.
    The second contribution presents a genetic algorithm (GA) and two greedy algorithms. The analysis of the genetic algorithm's behavior demonstrates the various benefits of a multi-criteria optimization applied to QoS metrics that are usually ignored in studies dedicated to Cloud Computing. The third contribution of this thesis is a study of the impact of using QoS metrics in virtual-machine scheduling. The simulator CloudSim has been used and extended to improve its energy-aware tools: DVFS (Dynamic Voltage & Frequency Scaling), providing highly accurate dynamic management of CPU frequencies, virtual-machine reconfiguration, and dynamic event management have been added. The simulations exercise all of these energy tools together with the placement algorithms and evaluate each selected QoS metric, showing how these metrics evolve over time depending on the algorithms used and on the behavior of the GA under different optimization configurations. This makes it possible to analyze, from different angles, the behavior of the greedy algorithms, the impact of the GA optimizations, and the influence of the metrics on one another. This work was carried out in collaboration with the CLOUDS Laboratory in Melbourne, led by Prof. Rajkumar Buyya.
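    The idea of weighing antagonistic QoS metrics against each other can be sketched with a simple weighted-sum score over candidate hosts. This is a deliberate simplification of the thesis's multi-objective GA: the four metric names match the thesis's QoS families, but the weights, host names, and metric values below are all invented, and a weighted sum is only one (scalarized) way to compare placements.

    ```python
    # Sketch of multi-criteria VM placement scoring (assumed weights).
    # Each candidate host is scored on four normalized metrics in [0, 1];
    # lower total score = better placement. 'robustness' is a benefit
    # rather than a cost, so it is inverted before weighting.
    WEIGHTS = {"energy": 0.4, "response": 0.3, "robustness": 0.2, "cost": 0.1}

    def score(metrics: dict) -> float:
        """Weighted sum of normalized metrics; lower is better."""
        m = dict(metrics)
        m["robustness"] = 1.0 - m["robustness"]  # invert the benefit metric
        return sum(WEIGHTS[k] * m[k] for k in WEIGHTS)

    # Invented candidate hosts: host-a is power-hungry but fast and robust,
    # host-b is frugal but slower and less robust.
    hosts = {
        "host-a": {"energy": 0.8, "response": 0.2, "robustness": 0.9, "cost": 0.3},
        "host-b": {"energy": 0.3, "response": 0.6, "robustness": 0.5, "cost": 0.4},
    }
    best = min(hosts, key=lambda h: score(hosts[h]))
    print(best, {h: round(score(m), 2) for h, m in hosts.items()})
    ```

    With these invented weights the two hosts score almost identically, which illustrates why the metrics are called antagonistic: shifting weight toward energy flips the decision, and a GA exploring many such trade-offs gives the provider a whole front of compromises rather than one scalarized answer.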