97 research outputs found
Checkpointing Orchestrated Web Services
Web services are built on a service-oriented architecture, which is based on the notion of building applications by discovering and orchestrating services available on the web. Complex business processes can be realized by discovering and orchestrating these already available services. To make orchestrated web services resilient to faults, we previously proposed a simple and elegant checkpointing policy, Call-based Global Checkpointing of Orchestrated Web Services, which specifies that when a web service calls another web service, the calling service must save its state. However, the checkpointing overhead of this policy reduces the performance of the web services that implement it. To improve the policy, we propose in this paper a checkpointing policy that uses the Predicted Execution Time and Mean Time Between Failures of the called web services to make checkpointing decisions. This policy aims to reduce the required number of call-based checkpoints while maintaining the resilience of web services to faults.
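The decision rule described above can be sketched as follows. This is a hypothetical illustration, not the paper's algorithm: the function name, parameters, and the `risk_threshold` value are assumptions.

```python
# Hypothetical sketch of an MTBF-based checkpointing decision: checkpoint
# before a call only when the called service's predicted execution time is
# a significant fraction of its mean time between failures, i.e. when a
# failure during the call is reasonably likely.

def should_checkpoint(predicted_exec_time: float,
                      mtbf: float,
                      risk_threshold: float = 0.1) -> bool:
    """Return True when the estimated failure risk justifies saving state.

    predicted_exec_time and mtbf are in the same time unit; risk_threshold
    (an assumed parameter) bounds the acceptable ratio of call duration to
    expected failure interval.
    """
    if mtbf <= 0:  # unknown or unreliable service: always checkpoint
        return True
    return (predicted_exec_time / mtbf) >= risk_threshold

# A long call to a failure-prone service triggers a checkpoint, while a
# short call to a stable service skips the overhead.
print(should_checkpoint(predicted_exec_time=30.0, mtbf=100.0))   # True
print(should_checkpoint(predicted_exec_time=0.5, mtbf=10000.0))  # False
```

Under this rule, checkpoints are placed only where the expected cost of losing work exceeds the checkpointing overhead.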
Execution Time Prediction for a Web Service Instance
The availability of services on the Internet has provided a unique opportunity for both customers and providers to conduct e-business. This new business paradigm can succeed only if service selection satisfies customers in terms of service delivery time as well as service quality. Instead of leaving the execution time for the service provider to declare, we propose a strategy for forecasting the execution time of the web service being called. The paper presents the model details and proposes a framework for implementing the model. We also demonstrate its usability in scenarios such as checkpointing web services.
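One simple way to forecast a service's execution time from its invocation history is an exponentially weighted moving average. The sketch below is an assumed baseline for illustration, not the paper's model.

```python
# Minimal sketch (assumed, not the paper's model) of forecasting a web
# service's execution time from recent invocation history using an
# exponentially weighted moving average (EWMA).

def forecast_exec_time(history, alpha=0.3):
    """Predict the next execution time from past observations.

    history: list of past execution times, oldest first.
    alpha:   smoothing factor; higher values weight recent calls more.
    """
    if not history:
        raise ValueError("no past invocations to forecast from")
    estimate = history[0]
    for observed in history[1:]:
        estimate = alpha * observed + (1 - alpha) * estimate
    return estimate

# A recent slowdown pulls the forecast upward, but only gradually.
print(round(forecast_exec_time([1.0, 1.0, 1.0, 2.0]), 2))  # 1.3
```

A production predictor would also weigh input size, server load, and network latency, but the smoothing idea stays the same.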
Run-time application migration using checkpoint/restore in userspace
This paper presents an empirical study on the feasibility of using
Checkpoint/Restore In Userspace (CRIU) for run-time application migration
between hosts, with a particular focus on edge computing and cloud
infrastructures. The paper provides experimental support for CRIU in Docker and
offers insights into the impact of application memory usage on checkpoint size,
time, and resources. Through a series of tests, we find that the time to
checkpoint is linearly proportional to the size of the memory allocation of the
container, while restore time is less so. Our findings contribute to the
understanding of CRIU's performance and its potential use in edge computing
scenarios. To obtain accurate and meaningful findings, we monitored system
telemetry while using CRIU to observe its impact on the host machine's CPU and
RAM. Although our results may not be groundbreaking, they offer a good overview
and a technical report on the feasibility of using CRIU on edge devices. This
study's findings and experimental support for CRIU in Docker could serve as a
useful reference for future research on performance optimization and
application migration using CRIU.
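The reported near-linear relationship between a container's memory allocation and its checkpoint time can be checked with an ordinary least-squares fit. The measurements below are illustrative placeholders, not the paper's data.

```python
# Fit a line to (memory allocation, checkpoint time) samples to estimate
# the per-byte cost of a CRIU checkpoint. Data points are invented for
# illustration; they are not the paper's measurements.

def linear_fit(xs, ys):
    """Return (slope, intercept) of the least-squares line through the points."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

# Memory allocated to the container (MiB) vs. observed checkpoint time (s).
mem = [128, 256, 512, 1024]
t = [0.7, 1.2, 2.2, 4.2]
slope, intercept = linear_fit(mem, t)
print(round(slope * 1024, 2))  # 4.0 -> seconds of checkpoint time per GiB
```

If checkpoint time really is linear in memory size, the fitted slope gives a simple planning figure for migration budgets on edge devices.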
Decentralized Orchestration of Open Services: Achieving High Scalability and Reliability with Continuation-Passing Messaging
The papers of this thesis are not available in Munin.
Paper I: Yu, W., Haque, A. A. M. “Decentralised web-services orchestration with continuation-passing messaging”. Available in International Journal of Web and Grid Services 2011, 7(3):304–330.
Paper II: Haque, A. A. M., Yu, W.: “Peer-to-peer orchestration of web mashups”. Available in
International Journal of Adaptive, Resilient and
Autonomic Systems 2014, 5(3):40-60.
Paper V: Haque, A. A. M., Yu, W.: “Decentralized and reliable orchestration of open services”. In: Service Computation 2014. International Academy, Research and Industry Association (IARIA) 2014. ISBN 978-1-61208-337-7.
An ever-increasing number of web applications are providing open services to a wide range of applications. While traditional centralized approaches to service orchestration are successful for enterprise service-oriented systems, they are subject to serious limitations when orchestrating the wider range of open services. Dealing with these limitations calls for decentralized approaches. However, decentralized approaches face their own challenges, including the possible loss of dynamic run-time state that is spread over the distributed environment. This thesis presents a fully decentralized approach to the orchestration of open services. Our flow-aware dynamic replication scheme supports exception handling, tolerates failure of orchestration agents, and recovers from failure situations. During execution, open services are conducted by a network of orchestration agents which collectively orchestrate them using continuation-passing messaging. Our performance study showed that decentralized orchestration improves the scalability and enhances the reliability of open services. Our orchestration approach has a clear performance advantage over traditional centralized orchestration, as well as over the current practice of web mashups, where application servers themselves conduct the execution of the composition of open web services. Finally, our empirical study presents the overhead of the replication approach for services orchestration.
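The core idea of continuation-passing messaging can be sketched in a few lines: each message carries the remaining workflow steps (the continuation), so no central engine needs to hold run-time state. The function names and message layout below are assumptions for illustration, not the thesis's protocol.

```python
# Minimal sketch of continuation-passing messaging: each "message" carries
# the remaining workflow steps, so whichever agent receives it can continue
# the workflow without consulting a central orchestrator.

def run_step(payload, continuation):
    """Execute one service step, then forward the rest of the workflow."""
    step, *rest = continuation
    result = step(payload)
    if rest:
        # In a real deployment this would be a message sent to the next
        # orchestration agent; here we simply invoke it directly.
        return run_step(result, rest)
    return result

# A three-step "workflow" passed along as a continuation.
workflow = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]
print(run_step(10, workflow))  # (10 + 1) * 2 - 3 = 19
```

Because the continuation travels with the message, replicating the message (as in the thesis's replication scheme) is enough to let another agent resume the workflow after a failure.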
An Architecture for Programming Distributed Applications on Fog to Cloud Systems
This paper presents a framework to develop and execute applications in distributed and highly dynamic computing systems composed of cloud resources and fog devices such as mobile phones, cloudlets, and micro-clouds. The work builds on the COMPSs programming framework, which includes a programming model and a runtime already validated in HPC and cloud environments for the transparent execution of parallel applications. As part of the proposed contribution, COMPSs has been enhanced to support the execution of applications on mobile platforms that offer GPUs and CPUs. The scheduling component of COMPSs is under design to be able to offload computation to other fog devices at the same level of the hierarchy and to cloud resources when more computational power is required. The framework has been tested by executing a sample application on a mobile phone, offloading tasks to a laptop and a private cloud.
This work is partly supported by the Spanish Ministry of Science and Technology through project TIN2015-65316-P and grant BES-2013-067167, by the Generalitat de Catalunya under contracts 2014-SGR-1051 and 2014-SGR-1272, and by the European Union through the Horizon 2020 research and innovation programme under grant 730929 (mF2C Project).
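The placement policy the paper describes — run locally, offload to a peer fog device, or fall back to the cloud — can be sketched as follows. This is an illustrative sketch, not the COMPSs scheduler or its API; the capacity model and device names are assumptions.

```python
# Illustrative fog-to-cloud placement decision: run a task locally when
# the device can handle it, offload to a peer fog device with capacity,
# and fall back to the cloud when more computational power is required.

def place_task(task_work, local_capacity, fog_capacities):
    """Pick an execution site for a task of the given estimated size.

    task_work:      estimated amount of work in the task.
    local_capacity: spare capacity of this device.
    fog_capacities: dict of peer device name -> spare capacity.
    """
    if task_work <= local_capacity:
        return "local"
    for device, capacity in fog_capacities.items():
        if task_work <= capacity:
            return device  # offload to a peer at the same fog level
    return "cloud"         # last resort: cloud resources

peers = {"laptop": 50, "cloudlet": 200}
print(place_task(10, local_capacity=20, fog_capacities=peers))   # local
print(place_task(100, local_capacity=20, fog_capacities=peers))  # cloudlet
print(place_task(500, local_capacity=20, fog_capacities=peers))  # cloud
```

The real scheduler would also weigh data locality, battery, and network cost, but the tiered fallback structure is the same.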
A Taxonomy of Workflow Management Systems for Grid Computing
With the advent of Grid and application technologies, scientists and
engineers are building more and more complex applications to manage and process
large data sets, and execute scientific experiments on distributed resources.
Such application scenarios require means for composing and executing complex
workflows. Therefore, many efforts have been made towards the development of
workflow management systems for Grid computing. In this paper, we propose a
taxonomy that characterizes and classifies various approaches for building and
executing workflows on Grids. We also survey several representative Grid
workflow systems developed by various projects world-wide to demonstrate the
comprehensiveness of the taxonomy. The taxonomy not only highlights the design
and engineering similarities and differences of state-of-the-art in Grid
workflow systems, but also identifies the areas that need further research.
Comment: 29 pages, 15 figures
Triggerflow: Trigger-based Orchestration of Serverless Workflows
As more applications are being moved to the Cloud thanks to serverless
computing, it is increasingly necessary to support native life cycle execution
of those applications in the data center. But existing systems either focus on
short-running workflows (like IBM Composer or Amazon Express Workflows) or
impose considerable overheads for synchronizing massively parallel jobs (Azure
Durable Functions, Amazon Step Functions, Google Cloud Composer). None of them
are open systems enabling extensible interception and optimization of custom
workflows. We present Triggerflow: an extensible Trigger-based Orchestration
architecture for serverless workflows built on top of Knative Eventing and
Kubernetes technologies. We demonstrate that Triggerflow is a novel serverless
building block capable of constructing different reactive schedulers (State
Machines, Directed Acyclic Graphs, Workflow as code). We also validate that it
can support high-volume event processing workloads, auto-scale on demand and
transparently optimize scientific workflows.
Comment: The 14th ACM International Conference on Distributed and Event-based Systems (DEBS 2020)
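The trigger-based model underlying such systems is the classic event-condition-action pattern: when an event arrives, each trigger whose condition matches fires its action. The sketch below illustrates that pattern; it is an assumption for clarity, not Triggerflow's actual API.

```python
# Event-condition-action sketch of trigger-based orchestration: workflow
# logic is encoded as triggers, and schedulers (state machines, DAGs,
# workflow-as-code) are built by choosing conditions and actions.

class TriggerProcessor:
    def __init__(self):
        self.triggers = []

    def add_trigger(self, condition, action):
        """condition: event -> bool; action: event -> result."""
        self.triggers.append((condition, action))

    def dispatch(self, event):
        """Run the actions of all triggers whose condition matches."""
        return [action(event)
                for condition, action in self.triggers
                if condition(event)]

proc = TriggerProcessor()
# Advance a state machine when a function-completion event arrives.
proc.add_trigger(lambda e: e.get("type") == "function.done",
                 lambda e: f"next state after {e['name']}")
print(proc.dispatch({"type": "function.done", "name": "step1"}))
# ['next state after step1']
print(proc.dispatch({"type": "timer.tick"}))  # [] -- no trigger matches
```

Because triggers are just (condition, action) pairs stored in the event fabric, new scheduler semantics can be added by registering new trigger types rather than modifying a central engine.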