
    An Elastic Scheduling Algorithm For Resource Co-Allocation Based on System Generated Predictions With Priority

    Resource co-allocation is used to execute multi-site jobs in large-scale computing environments in a secure, fault-free, and transparent manner. In essence, multiple resources are allocated to different jobs while taking the time parameter into account. Here we use a scheduling queue and resource co-allocation to reduce turnaround time, combined with the concept of system-generated predictions and priority. Existing work schedules co-allocation requests from user runtime estimates, which are usually very imprecise. In the proposed work, co-allocation requests are scheduled from system-generated predictions obtained through a discovery service, with priority (fairness and user experience) determined by a topological-sorting technique. System-generated predictions are a better parameter than user runtime estimates for co-allocation scheduling because they reduce scheduling time through a proxy-server-based discovery service. The proposed work considers priorities such as advance reservation, system-generated predictions, negotiation, co-scheduling, and policy (SLA, price, trust) for resource co-allocation. Experimental data confirm that system-generated predictions outperform user runtime estimates for co-allocation scheduling. The end user needs no grid or resource knowledge and only submits a job to the portal; the proposed portal handles all knowledge about resource co-allocation automatically, quickly, and efficiently.
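The abstract gives no implementation details, so the following sketch is purely illustrative (all names are hypothetical, and the mean-of-history prediction is an assumption standing in for the discovery-service prediction): co-allocation requests whose priority relations form a dependency graph are ordered with a topological sort, and a system-generated runtime prediction replaces the user's estimate when choosing among ready requests.

```python
# Hypothetical sketch: order co-allocation requests with a topological sort
# over priority dependencies, using a system-generated runtime prediction
# (here, a simple mean of past runtimes) instead of the user's estimate.
from collections import defaultdict, deque
from dataclasses import dataclass, field

@dataclass
class Request:
    job_id: str
    history: list                                      # past runtimes of similar jobs (s)
    depends_on: list = field(default_factory=list)     # higher-priority jobs

    def predicted_runtime(self) -> float:
        # System-generated prediction: average of historical runtimes.
        return sum(self.history) / len(self.history) if self.history else 0.0

def priority_order(requests: dict) -> list:
    """Topologically sort requests so every job follows the jobs it depends on."""
    indegree = {j: 0 for j in requests}
    children = defaultdict(list)
    for job in requests.values():
        for dep in job.depends_on:
            children[dep].append(job.job_id)
            indegree[job.job_id] += 1
    # Among ready jobs, favour the shortest predicted runtime.
    ready = deque(sorted((j for j, d in indegree.items() if d == 0),
                         key=lambda j: requests[j].predicted_runtime()))
    order = []
    while ready:
        job = ready.popleft()
        order.append(job)
        for child in children[job]:
            indegree[child] -= 1
            if indegree[child] == 0:
                ready.append(child)
    return order

if __name__ == "__main__":
    reqs = {
        "A": Request("A", [120, 140]),
        "B": Request("B", [30, 35], depends_on=["A"]),
        "C": Request("C", [60]),
    }
    print(priority_order(reqs))   # ['C', 'A', 'B']
```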

    Survey On Fault Tolerance In Grid Computing


    DRIVE: A Distributed Economic Meta-Scheduler for the Federation of Grid and Cloud Systems

    The computational landscape is littered with islands of disjoint resource providers, including commercial Clouds, private Clouds, national Grids, institutional Grids, clusters, and data centers. These providers are independent and isolated due to a lack of communication and coordination, and they are often proprietary, without standardised interfaces, protocols, or execution environments. This lack of standardisation and global transparency has the effect of binding consumers to individual providers. With the increasing ubiquity of computation providers, there is an opportunity to create federated architectures that span both Grid and Cloud computing providers, effectively creating a global computing infrastructure. To realise this vision, secure and scalable mechanisms to coordinate resource access are required. This thesis proposes a generic meta-scheduling architecture to facilitate federated resource allocation in which users can provision resources from a range of heterogeneous (service) providers.

    Efficient resource allocation is difficult in large-scale distributed environments due to the inherent lack of centralised control. In a Grid model, local resource managers govern access to a pool of resources within a single administrative domain, but they have only a local view of the Grid and are unable to collaborate when allocating jobs. Meta-schedulers act at a higher level and can submit jobs to multiple resource managers; however, they are most often deployed on a per-client basis, are therefore concerned only with their own allocations, and essentially compete against one another. In a federated environment, the widespread adoption of the utility computing models seen in commercial Cloud providers has re-motivated the need for economically aware meta-schedulers. Economies provide a way to represent the different goals and strategies that exist in a competitive distributed environment. The use of economic allocation principles effectively creates an open service market that provides efficient allocation and incentives for participation.

    The major contributions of this thesis are the architecture and prototype implementation of the DRIVE meta-scheduler. DRIVE is a Virtual Organisation (VO) based distributed economic meta-scheduler in which members of the VO collaboratively allocate services or resources. Providers joining the VO contribute obligation services to the VO. These contributed services are in effect membership “dues” and are used in the running of the VO's operations, for example allocation, advertising, and general management. DRIVE is independent of any particular class of provider (Service, Grid, or Cloud) and of any specific economic protocol. This independence enables allocation in federated environments composed of heterogeneous providers in vastly different scenarios. Protocol independence facilitates the use of arbitrary protocols based on specific requirements and infrastructural availability. For instance, within a single organisation where internal trust exists, users can achieve maximum allocation performance by choosing a simple economic protocol; in a global utility Grid no such trust exists, and the same meta-scheduler architecture can be used with a secure protocol that ensures the allocation is carried out fairly in the absence of trust. DRIVE establishes contracts between participants as the result of allocation. A contract describes the individual requirements and obligations of each party. A unique two-stage contract negotiation protocol is used to minimise the effect of allocation latency. In addition, due to the cooperative nature of the architecture and the use of secure, privacy-preserving protocols, DRIVE can be deployed in a distributed environment without requiring large-scale dedicated resources.

    This thesis presents several other contributions related to meta-scheduling and open service markets. To overcome the perceived performance limitations of economic systems, four high-utilisation strategies have been developed and evaluated. Each strategy is shown to improve occupancy, utilisation, and profit using synthetic workloads based on a production Grid trace. The gRAVI service wrapping toolkit is presented to address the difficulty of web-enabling existing applications. The gRAVI toolkit has been extended for this thesis such that it creates economically aware (DRIVE-enabled) services that can be transparently traded in a DRIVE market without requiring developer input. The final contribution of this thesis is the definition and architecture of a Social Cloud: a dynamic Cloud computing infrastructure composed of virtualised resources contributed by members of a Social network. The Social Cloud prototype is based on DRIVE and highlights the ease with which dynamic DRIVE markets can be created and used in different domains.
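DRIVE's actual protocols are not reproduced in the abstract; as a generic, hedged illustration of economic allocation in a provider federation (not DRIVE's protocol, and with all names hypothetical), the sketch below runs a single sealed-bid round in which providers quote prices for a job and the cheapest feasible bid yields a simple contract record.

```python
# Generic illustration of an economic allocation round: providers submit
# sealed bids for a job, and the lowest-priced bid that satisfies the job's
# resource requirement wins, producing a simple "contract" record.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Bid:
    provider: str
    price: float           # cost quoted for running the job
    cores_offered: int

@dataclass
class Contract:
    job_id: str
    provider: str
    price: float

def allocate(job_id: str, cores_needed: int, bids: List[Bid]) -> Optional[Contract]:
    """Pick the cheapest bid that meets the resource requirement."""
    feasible = [b for b in bids if b.cores_offered >= cores_needed]
    if not feasible:
        return None                      # no provider can host the job
    winner = min(feasible, key=lambda b: b.price)
    return Contract(job_id, winner.provider, winner.price)

if __name__ == "__main__":
    bids = [Bid("cloud-a", 0.40, 16), Bid("grid-b", 0.25, 8), Bid("cluster-c", 0.30, 32)]
    print(allocate("job-42", cores_needed=16, bids=bids))
    # Contract(job_id='job-42', provider='cluster-c', price=0.3)
```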

    Scheduling and Dynamic Management of Applications over Grids

    The work presented in this thesis is about scheduling applications in computational Grids. We study how to better manage jobs in a grid middleware in order to improve the performance of the platform. Our solutions are designed to work at the middleware layer, thus leaving the underlying architecture unmodified. First, we propose a reallocation mechanism to dynamically tackle errors that occur during scheduling. It is often necessary to provide a runtime estimate when submitting a job to a parallel computer so that it can compute a schedule. However, estimates are inherently inaccurate, so scheduling decisions are based on incorrect data and are therefore wrong. The reallocation mechanism we propose tackles this problem by moving waiting jobs between several parallel machines in order to reduce the scheduling errors due to inaccurate runtime estimates. Our second interest in the thesis is the scheduling of a climatology application on the Grid. To provide the best possible performance, we modeled the application as a Directed Acyclic Graph (DAG) and then proposed specific scheduling heuristics. To execute the application on the Grid, the middleware uses this knowledge of the application to find the best schedule.
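The thesis's middleware code is not shown in the abstract; the sketch below is a simplified, hypothetical model of the reallocation idea only: waiting jobs are periodically re-examined and moved to whichever parallel machine currently predicts the earliest start, compensating for inaccurate runtime estimates.

```python
# Rough sketch (hypothetical names, not the thesis's middleware code):
# waiting jobs are moved to another machine if it now predicts an earlier start.
from dataclasses import dataclass, field

@dataclass
class Job:
    job_id: str
    estimate: float                 # runtime estimate in seconds (may be revised)

@dataclass
class Machine:
    name: str
    queue: list = field(default_factory=list)   # waiting jobs, FIFO order

    def start_time_if_appended(self) -> float:
        # A new job would start after everything already queued.
        return sum(j.estimate for j in self.queue)

    def start_time_of(self, job: Job) -> float:
        # A queued job starts after the jobs ahead of it.
        ahead = self.queue[: self.queue.index(job)]
        return sum(j.estimate for j in ahead)

def reallocate(machines: list) -> None:
    """Move a waiting job when another machine offers an earlier predicted start."""
    for machine in machines:
        for job in list(machine.queue):
            current_start = machine.start_time_of(job)
            target = min(machines, key=lambda m: m.start_time_if_appended())
            if target is not machine and target.start_time_if_appended() < current_start:
                machine.queue.remove(job)
                target.queue.append(job)

if __name__ == "__main__":
    m1 = Machine("m1", [Job("a", 3600), Job("b", 600)])
    m2 = Machine("m2", [])
    reallocate([m1, m2])
    print([j.job_id for j in m1.queue], [j.job_id for j in m2.queue])
    # Job 'b' moves to the idle machine: ['a'] ['b']
```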

    A Flexible Resource Co-Allocation Model based on Advance Reservations with Rescheduling Support

    Several parallel and distributed applications require simultaneous access to resources located in multiple administrative domains. Current research on resource co-allocation relies on either rigid advance reservations or non-booking-in-advance mechanisms. The first approach leads to high fragmentation inside the resource provider’s scheduling queue, whereas the second approach offers no starting time guarantees of user applications. In this work, we propose a new model for resource co-allocation based on flexible advance reservations and processor remapping. The model allows the metascheduler to reschedule the co-allocation requests by modifying the starting time of each subtask and remapping the number of processors used by them in each resource provider. We evaluate our model and algorithms in a scenario where users are not able to provide accurate runtime estimations of their applications—using job response time and system utilization as metrics. The results show that rescheduling co-allocation requests brings benefits for both local and multi-site applications especially when the runtime estimation quality is low and there is a reduced number of small jobs in the system.
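As a rough illustration only (the paper's model and algorithms are not reproduced here, and all names are hypothetical), the sketch below reschedules a reserved request by shifting its start time and remapping its processor count, under the simplifying assumption that total processor-hours are conserved, as for a moldable job.

```python
# Illustrative sketch (not the paper's algorithm): a co-allocation request is
# rescheduled by moving its start time earlier and remapping processors,
# assuming total processor-hours stay constant.
from dataclasses import dataclass

@dataclass
class Reservation:
    start: float        # hours from now
    processors: int
    duration: float     # hours

    @property
    def area(self) -> float:
        return self.processors * self.duration

def remap(res: Reservation, free_processors: int) -> Reservation:
    """Fit the same processor-hours onto the processors free right now."""
    if free_processors <= 0:
        return res                                   # nothing free: keep the reservation
    new_procs = min(res.processors, free_processors)
    new_duration = res.area / new_procs              # conserve processor-hours
    return Reservation(start=0.0, processors=new_procs, duration=new_duration)

if __name__ == "__main__":
    r = Reservation(start=4.0, processors=8, duration=2.0)   # 16 processor-hours
    print(remap(r, free_processors=4))
    # Starts immediately on 4 processors for 4 hours instead of waiting 4 hours.
```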

    On-demand distributed image processing over an adaptive Campus-Grid

    This thesis explores how scientific applications that are based upon short jobs (seconds and minutes) can capitalize upon the idle workstations of a Campus-Grid. These resources are donated on a voluntary basis; consequently, the Campus-Grid is constantly adapting and the availability of workstations changes. Typically, a Condor system or equivalent would be used to utilize these resources. However, such systems are designed with different trade-offs and incentives in mind and therefore do not provide intrinsic support for short jobs. The motivation for creating a provisioning scenario for short jobs is that image processing, as well as other areas of scientific analysis, is typically composed of short-running jobs but still requires parallel solutions. Much of the literature in this area comments on the challenges of performing such analysis efficiently and effectively even when dedicated resources are in use. The main challenges are latency and scheduling penalties, granularity, and the potential for very short jobs. A volunteer Grid retains these challenges and adds further ones, which can be summarized as unpredictable resource availability and longevity, and multiple machine owners and administrators who directly affect the operating environment. Ultimately, this creates the requirement for well-conceived and effective fault management strategies. However, these are typically not in place to enable transparent, fault-free job administration for the user. This research demonstrates that these challenges are answerable, and that in doing so opportunistically sourced Campus-Grid resources can host disparate applications constituted of short-running jobs of as little as one second in length. This is demonstrated by the significant improvements in performance when the system presented here is compared to a well-established Condor system: job efficiency increases from 60–70% to 95–100%, application makespan is reduced by up to 99%, and the efficiency of resource utilization increases by up to 13000%. The Condor pool in use is approximately 1,600 workstations distributed across 27 administrative domains of Cardiff University. The application domain of this research is Matlab-based image processing, and the application area used to demonstrate the approach is the analysis of Magnetic Resonance Imagery (MRI). However, the presented approach is generalizable to any application domain with similar characteristics.