
    Workflow scheduling for service oriented cloud computing

    Service Orientation (SO) and grid computing are two computing paradigms that, when combined using Internet technologies, promise to provide a scalable yet flexible computing platform for a diverse set of distributed computing applications. This practice gives rise to the notion of a computing cloud that addresses some previous limitations of interoperability, resource sharing and utilization within distributed computing. In such a Service Oriented Computing Cloud (SOCC), applications are formed by composing a set of services together. In addition, hierarchical service layers are also possible, where general-purpose services at lower layers are composed to deliver more domain-specific services at higher layers. In general, an SOCC is a horizontally scalable computing platform that offers its resources as services in a standardized fashion. Workflow-based applications are a suitable target for SOCC, where workflow tasks are executed via service calls within the cloud. One or more workflows can be deployed over an SOCC, and their execution requires scheduling of services to workflow tasks as the tasks become ready following their interdependencies. In this thesis, heuristic-based scheduling policies are evaluated for scheduling workflows over a collection of services offered by the SOCC. Various execution scenarios and workflow characteristics are considered to understand the implications of heuristic-based workflow scheduling.
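    The core scheduling loop this abstract describes - dispatching workflow tasks to services as their predecessors complete - can be caricatured in a few lines. The sketch below uses a minimum-completion-time heuristic as one plausible policy; the Task/Service classes and their fields (deps, available_at, estimate) are illustrative assumptions, not the thesis's actual interface or policies.

```python
# Minimal sketch of heuristic list scheduling over SOCC services, assuming
# hypothetical Task objects (with a .deps set of predecessor tasks) and
# Service objects (busy until .available_at, with an estimate() of runtime).
def schedule_workflow(tasks, services):
    """Dispatch tasks to services as their dependencies complete (MCT heuristic)."""
    finished, pending = set(), list(tasks)
    while pending:
        # A task is ready once every predecessor in the workflow DAG has finished.
        for task in [t for t in pending if all(d in finished for d in t.deps)]:
            # Minimum-completion-time heuristic: choose the service that would
            # finish this task soonest, given its current backlog.
            best = min(services, key=lambda s: s.available_at + s.estimate(task))
            best.available_at += best.estimate(task)
            finished.add(task)
            pending.remove(task)
```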

    Dual-phase just-in-time workflow scheduling in P2P grid systems

    This paper presents a fully decentralized just-in-time workflow scheduling method for P2P Grid systems. The proposed solution allows each peer node to autonomously dispatch inter-dependent tasks of workflows to run on geographically distributed computers. To reduce workflow completion time and enhance overall execution efficiency, not only does each node act as a scheduler that distributes its tasks to execution nodes (or resource nodes), but the resource nodes also set execution priorities for the received tasks. Taking into account the unpredictability of tasks' finish times, we devise an efficient task scheduling heuristic, dynamic shortest makespan first (DSMF), which can be applied at both scheduling phases to determine the priority of workflow tasks. We compare the performance of the proposed algorithm against seven other heuristics by simulation. Our algorithm achieves a 20%~60% reduction in average completion time and a 37.5%~90% improvement in average workflow execution efficiency over other decentralized algorithms. © 2010 IEEE. In Proceedings of the 39th International Conference on Parallel Processing (ICPP 2010), San Diego, CA, 13-16 September 2010, p. 238-24
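    As a rough illustration of the "shortest makespan first" idea, the sketch below orders tasks by an estimated-makespan key, shortest first. The estimate function is an assumption, and the paper's dynamic re-estimation across its two scheduling phases is not reproduced here; this is only the spirit of the heuristic, not the published formulation.

```python
import heapq

def dsmf_order(tasks, estimate):
    """Yield tasks by increasing estimated makespan (shortest first).

    A sketch in the spirit of DSMF: because finish times are unpredictable,
    a real scheduler would re-estimate priorities dynamically as tasks finish.
    """
    heap = [(estimate(t), i, t) for i, t in enumerate(tasks)]  # i breaks ties
    heapq.heapify(heap)
    while heap:
        _, _, task = heapq.heappop(heap)
        yield task
```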

    STaRS: A scalable task routing approach to distributed scheduling

    Scheduling many tasks across environments of millions of unreliable nodes is a major challenge. The best-known computing platforms usually rely on managing the full state of both nodes and applications in a centralized component, which limits their scalability and fault tolerance. A decentralized model can overcome these problems but, to the best of our knowledge, no solution proposed so far offers satisfactory results. In this thesis, we present a decentralized scheduling model with three goals: to scale to millions of nodes without a crippling loss of performance; to tolerate high failure rates; and to support the implementation of several scheduling policies for different situations. Our proposal consists of three main elements: a generic data model to represent the availability of execution nodes; an aggregation scheme that propagates this information through a hierarchical network layer; and a forwarding algorithm that, using the aggregated information, routes tasks toward the most suitable execution nodes. These three elements are easily extensible to provide diverse scheduling policies; in particular, we have implemented five: a policy that simply assigns tasks to idle nodes; a policy that minimizes the completion time of the overall workload; a policy that meets the deadline requirements of bag-of-tasks applications; a policy that meets the deadline requirements of workflow applications; and a policy that grants each application a fair share of the platform. Scalability is achieved through the aggregation scheme, which supplies enough availability information to the upper levels of the hierarchy without flooding them, and the forwarding algorithm, which searches for execution nodes in several branches of the hierarchy concurrently. As a result, communication costs are bounded and allocation costs grow nearly logarithmically with system size. A thousand tasks are allocated in a network of 100,000 nodes in under 3.5 seconds, so our model can be considered even for tasks only a few minutes long. To the best of our knowledge, no similar work has been tested with more than 10,000 nodes. Failures are handled with a best-effort strategy: when a node failure is detected, the tasks it was executing are resubmitted by their owners, and the availability information it managed is rebuilt by its neighbors. In this way, our model degrades its performance proportionally to the number of failed nodes and then recovers its full functionality. To demonstrate this, we have run tests under both average failure rates and catastrophic failures. Even with nodes failing with a median period of only 5 minutes, our scheduler keeps providing service. At the same time, it can recover from the failure of a substantial fraction of the nodes, as long as the hierarchical network layer underlying the system can withstand it. After verifying that policies with very different objectives can be implemented on our scheduling model, we have also evaluated their performance. We compared each policy against a centralized version with full knowledge of the state of every execution node. The result is that they perform close to a centralized implementation, even in large-scale environments with high failure rates.
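    The interplay of the model's three elements - per-node availability data, aggregation up the hierarchy, and forwarding down branches with spare capacity - can be sketched briefly. The Node structure and free_slots field below are illustrative assumptions; a real deployment would aggregate asynchronously and route into several branches concurrently rather than sequentially.

```python
# Hedged sketch of hierarchical availability aggregation and task routing.
class Node:
    def __init__(self, free_slots=0, children=()):
        self.free_slots = free_slots    # local availability (execution slots)
        self.children = list(children)  # empty for leaf execution nodes
        self.aggregate = 0              # summarized availability of the subtree

def aggregate_up(node):
    """Propagate availability summaries toward the root of the overlay."""
    node.aggregate = node.free_slots + sum(aggregate_up(c) for c in node.children)
    return node.aggregate

def route(node, n_tasks):
    """Forward tasks down branches that advertise spare capacity."""
    n_tasks -= min(node.free_slots, n_tasks)      # run what fits locally
    for child in node.children:
        if n_tasks == 0:
            break
        take = min(child.aggregate, n_tasks)      # branch can absorb this many
        route(child, take)
        n_tasks -= take
```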

    Cluster Computing: A Novel Peer-to-Peer Cluster for Generic Application Sharing

    Ph.D. thesis (Doctor of Philosophy).

    A distributed platform for the volunteer execution of workflows on a local area network

    Thesis submitted in fulfilment of the requirements for the Degree of Master of Science in Computer Science. Albatroz Engineering has developed a framework for overhead power line inspection data acquisition and analysis, which includes hardware and software. The framework's software components include inspection data analysis and reporting tools, commonly known as the PLMI2 application/platform. In PLMI2, the analysis of overhead power line maintenance inspection data consists of a sequence of Automatic Tasks (ATs) interleaved with Manual Tasks (MTs). An AT consists of a set of algorithms that receives one or more datasets as input, processes them and returns new datasets. In turn, an MT enables human supervisors (also known as line inspection operators) to correct, improve and validate the results of ATs. ATs run faster than MTs and, in the overall work cycle, take less than 10% of total processing time, but still take a few minutes. There are data-flow dependencies among the tasks, which can be modelled as a workflow; even if MTs are omitted from this workflow, it is possible to carry out the sequence of ATs, postponing the MTs. In fact, if the computing cost and waiting time are negligible, it may be advantageous to run ATs earlier in the workflow, prior to validation. To address this opportunity, Albatroz Engineering has invested in a new procedure to stream the data through all ATs fully unattended. In these scenarios, it would be useful to have a system capable of detecting available workstations at a given instant and distributing the ATs to them. In this way, operators could schedule the execution of future ATs for one inspection dataset while performing the MTs of another. The requirements of such a system fall within the field of Volunteer Computing Systems, and we address some of the challenges posed by these kinds of systems, namely host volatility and failures. Volunteer Computing is a type of distributed computing that exploits idle CPU cycles from computing resources donated by volunteers and connected through the Internet/intranet to compute large-scale simulations. This thesis proposes and designs a new distributed task scheduling system in the context of Volunteer Computing Systems, able to schedule the ATs of PLMI2 and exploit idle CPU cycles from workstations within the company's local area network (LAN) to accelerate the data analysis, while remaining aware of data-flow interdependencies. To evaluate the proposed system, a prototype has been implemented; simulation results have shown that it is scalable and supports fault tolerance of task execution by employing a rescheduling mechanism.
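    As a toy illustration of that rescheduling mechanism, the sketch below dispatches ready ATs to randomly chosen volunteer workstations and re-queues any AT whose host fails mid-execution. All names (run_ats, deps, fail_prob) and the failure model are assumptions made for illustration, not the prototype's actual design.

```python
import random

def run_ats(ats, deps, workstations, fail_prob=0.1):
    """Run ATs on volunteer hosts; failed executions are rescheduled (best effort)."""
    done, queue, placement = set(), list(ats), {}
    while queue:
        # An AT is ready once its data-flow dependencies have all completed.
        for at in [a for a in queue if deps[a] <= done]:
            placement[at] = random.choice(workstations)  # pick an idle volunteer
            if random.random() < fail_prob:              # volatile host failed:
                continue                                 # AT stays queued, retried
            done.add(at)
            queue.remove(at)
    return placement
```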

    Contributions to Desktop Grid Computing : From High Throughput Computing to Data-Intensive Sciences on Hybrid Distributed Computing Infrastructures

    Since the mid-1990s, Desktop Grid Computing - i.e. the idea of using a large number of remote PCs distributed over the Internet to execute large parallel applications - has proved to be an efficient paradigm for providing large computational power at a fraction of the cost of a dedicated computing infrastructure. This document presents my contributions over the last decade to broadening the scope of Desktop Grid Computing. My research has followed three directions. The first established new methods to observe and characterize Desktop Grid resources and developed experimental platforms to test and validate our approach under conditions close to reality. The second line of research focused on integrating Desktop Grids into e-science Grid infrastructures (e.g. EGI), which requires addressing many challenges such as security, scheduling and quality of service. The third direction investigated how to support large-scale data management and data-intensive applications on such infrastructures, including support for new and emerging data-oriented programming models. This manuscript reports not only on the scientific achievements and the technologies developed to support our objectives, but also on the international collaborations and projects I have been involved in, as well as the scientific mentoring which motivates my candidature for the Habilitation à Diriger les Recherches.

    Advances in Grid Computing

    This book approaches grid computing from the perspective of the latest achievements in the field, providing an insight into current research trends and advances, and presenting a broad range of innovative research papers. The topics covered include resource and data management, grid architectures and development, and grid-enabled applications. New ideas employing heuristic methods from swarm intelligence or genetic algorithms, as well as quantum encryption, are considered in order to address two main aspects of grid computing: resource management and data management. The book also covers aspects of grid computing concerning architecture and development, and includes a diverse range of applications, including a possible human grid computing system, simulation of the fusion reaction, ubiquitous healthcare service provisioning and complex water systems.