181 research outputs found

    Decentralized Orchestration of Open Services: Achieving High Scalability and Reliability with Continuation-Passing Messaging

    The papers of this thesis are not available in Munin. Paper I: Yu, W., Haque, A. A. M.: “Decentralised web-services orchestration with continuation-passing messaging”. Available in International Journal of Web and Grid Services 2011, 7(3):304–330. Paper II: Haque, A. A. M., Yu, W.: “Peer-to-peer orchestration of web mashups”. Available in International Journal of Adaptive, Resilient and Autonomic Systems 2014, 5(3):40–60. Paper V: Haque, A. A. M., Yu, W.: “Decentralized and reliable orchestration of open services”. In: Service Computation 2014. International Academy, Research and Industry Association (IARIA) 2014. ISBN 978-1-61208-337-7.

    An ever-increasing number of web applications provide open services to a wide range of applications. Whilst traditional centralized approaches to services orchestration are successful for enterprise service-oriented systems, they are subject to serious limitations when orchestrating the wider range of open services. Dealing with these limitations calls for decentralized approaches. However, decentralized approaches face challenges of their own, including the possible loss of dynamic run-time state spread across the distributed environment. This thesis presents a fully decentralized approach to the orchestration of open services. Our flow-aware dynamic replication scheme supports exception handling, tolerates the failure of orchestration agents, and recovers from failure situations. During execution, open services are conducted by a network of orchestration agents which collectively orchestrate them using continuation-passing messaging. Our performance study showed that decentralized orchestration improves the scalability and enhances the reliability of open services.
Our orchestration approach has a clear performance advantage over traditional centralized orchestration, as well as over the current practice of web mashups in which application servers themselves conduct the execution of compositions of open web services. Finally, our empirical study quantified the overhead of the replication approach for services orchestration.
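The core continuation-passing idea can be sketched as follows. This is a hypothetical illustration, not the thesis's implementation: each message carries the remaining workflow steps (the continuation), so no central engine needs to hold the run-time state. The service names and the `services` registry are invented for the example.

```python
# Hypothetical sketch of continuation-passing messaging: each message
# carries the rest of the workflow, so coordination is decentralised.

def make_service(name, transform):
    """A service applies its transform, then forwards the result to the
    next step named in the continuation it received."""
    def handle(payload, continuation):
        result = transform(payload)
        if not continuation:                 # workflow finished
            return result
        next_step, *rest = continuation      # peel off the next step
        return services[next_step](result, rest)
    return handle

services = {
    "validate": make_service("validate", lambda x: x.strip()),
    "enrich":   make_service("enrich",   lambda x: x + " [enriched]"),
    "store":    make_service("store",    lambda x: f"stored:{x}"),
}

# The initiator sends the payload plus the full continuation; after that,
# each agent only ever sees the tail of the workflow.
result = services["validate"]("  order-42  ", ["enrich", "store"])
```

In a real deployment the recursive call would be an asynchronous message to another node rather than a local function call.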

    A resource-oriented architecture for business process systems

    Background: The REpresentational State Transfer (REST) design principles treat all concepts in the world as link-connected resources, and support a Resource-Oriented Architecture (ROA) for Web applications. REST and ROA are responsible for the adaptability achieved in the Web. Some recent design approaches for Web-based business process systems have evolved towards RESTful designs in order to inherit this adaptability. However, none of these approaches actually improves the adaptability of the produced systems. Aims: To propose a systematic approach for the design and execution of Web-based business processes that improves the adaptability of the produced systems. Methods: This research followed an empirical research methodology, which evaluates research solutions with real-world cases. On one hand, the research solution was derived by 1) tailoring the REST principles towards business process systems; 2) proposing REST annotations on existing business process modelling; 3) mapping the concepts of business processes to HTTP/URI specifications; and 4) designing a format for process context information. On the other hand, the research solution was evaluated through three real-world case studies. Two of the case studies included comparative analyses of the adaptability of the systems produced by the proposed approach and by two alternatives, namely SOA and MEST (MESsage Transfer). The analysis is based on metrics including LOC difference, change locality, coupling and cohesion, and an analysis framework called BASE.
Results: The research solution is ROA4BP, which includes 1) an architecting approach for the design and implementation of Web-based business processes, providing a development guideline; 2) a set of REST-related annotations on existing process modelling to ensure compatibility with existing techniques; 3) a systematic mapping between business processes and HTTP/URI specifications to utilize the advanced mechanisms provided by the Web infrastructure; and 4) a communication format for exchanging structured process context information among process participants at runtime. A modelling tool, a programming API, and a runtime engine were implemented to support the approach and simplify the implementation of the case studies. The case studies demonstrated that ROA4BP can produce more adaptable business process systems than the two alternatives. Conclusion: ROA4BP can help to design and execute RESTful business process systems with better adaptability at design-time and runtime.
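The kind of mapping described in item 3 can be illustrated as follows. This is not the actual ROA4BP mapping; the URI templates and HTTP verbs below are assumptions chosen only to show the idea of expressing business-process operations as HTTP requests on link-connected resources.

```python
# Illustrative (hypothetical) mapping of business-process concepts onto
# HTTP methods and URI templates, in the spirit of a resource-oriented
# architecture for business processes.

PROCESS_RESOURCE_MAP = {
    # concept           (HTTP method, URI template)
    "start process":    ("POST",   "/processes/{definition}"),
    "read state":       ("GET",    "/processes/{definition}/{instance}"),
    "complete task":    ("PUT",    "/processes/{definition}/{instance}/tasks/{task}"),
    "cancel process":   ("DELETE", "/processes/{definition}/{instance}"),
}

def to_request(concept, **params):
    """Translate a process-level concept into a concrete HTTP request."""
    method, template = PROCESS_RESOURCE_MAP[concept]
    return method, template.format(**params)

method, uri = to_request("complete task",
                         definition="order-fulfilment",
                         instance="42", task="pack")
```

Because every process instance and task becomes an addressable resource, standard Web mechanisms (caching, links, uniform verbs) apply to process execution.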

    GinFlow: A Decentralised Adaptive Workflow Execution Manager

    Workflow-based computing has become a dominant paradigm for designing and executing scientific applications. After the initial breakthrough of now-standard workflow management systems, several approaches have recently been proposed to decentralise the coordination of execution. In particular, shared space-based coordination has been shown to provide appropriate building blocks for such decentralised execution. Uncertainty also remains a major concern in scientific workflows: the ability to adapt a workflow, change its shape, and switch to alternate scenarios on the fly is still missing from workflow management systems. In this paper, based on the shared space model, we first devise a programmatic way to specify such adaptive workflows. We use a reactive, rule-based programming model to modify the workflow description by changing its associated directed acyclic graph on the fly, without needing to stop and restart the execution from the beginning. Second, we present the GinFlow middleware, a resilient decentralised workflow execution manager implementing these concepts. Through a set of deployments of adaptive workflows with different characteristics, we discuss GinFlow's performance and resilience and show the limited overhead of the adaptiveness mechanism, making it a promising decentralised adaptive workflow execution manager.
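The on-the-fly DAG rewriting can be sketched in miniature. This is a toy rendition (not GinFlow's actual rule language): a reactive rule fires when a task fails and substitutes an alternate task, re-wiring the graph without restarting the workflow. Task names are illustrative.

```python
# Toy sketch of reactive, rule-based adaptation of a workflow DAG:
# on task failure, substitute an alternate branch in place.

workflow = {           # task -> set of prerequisite tasks
    "fetch":   set(),
    "analyse": {"fetch"},
    "publish": {"analyse"},
}

def on_failure(dag, failed, alternate):
    """Rule: replace `failed` with `alternate`, which inherits the failed
    task's prerequisites; every dependent task is re-wired."""
    failed_deps = set(dag[failed])
    new = {t: set(d) for t, d in dag.items() if t != failed}
    new[alternate] = failed_deps
    for deps in new.values():
        if failed in deps:
            deps.discard(failed)
            deps.add(alternate)
    return new

adapted = on_failure(workflow, "analyse", "analyse_fallback")
```

In a decentralised setting each worker would apply such rules locally against the shared space, rather than against a single in-memory dict.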

    A Chemistry-Inspired Workflow Management System for a Decentralized Composite Service Execution

    With the recent widespread adoption of service-oriented architecture, the dynamic composition of services is now a crucial issue in distributed computing. The coordination and execution of composite Web services are today typically conducted by heavyweight centralized workflow engines, leading to an increasing probability of processing and communication bottlenecks and failures. In addition, centralization induces higher deployment costs, such as the computing infrastructure needed to support the workflow engine, which is not affordable for a large number of small businesses and end-users. Last but not least, central workflow engines have undesirable consequences for privacy and energy consumption. In a world where platforms are increasingly dynamic and elastic, as promised by cloud computing, decentralized and dynamic interaction schemes are required. Addressing the characteristics of such platforms, nature-inspired analogies have recently regained attention as a means of providing autonomous service coordination on top of dynamic large-scale platforms. In this report, we propose a decentralized approach for the execution of composite Web services based on an unconventional programming paradigm that relies on the chemical metaphor. It provides a high-level execution model that allows composite services to be executed in a fully decentralized manner. With services communicating through a persistent shared space containing the control and data flows between them, our architecture allows the composition to be distributed among nodes without any centralized coordination. A proof of concept is given through the deployment of a software prototype implementing these concepts, showing the viability of an autonomic vision of service composition.
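The chemical metaphor can be sketched very simply. In this hedged toy version (not the report's actual engine), the shared space is a multiset of "molecules" (data and control tokens) and a reaction rule fires whenever its reactants are present; any node observing both reactants could apply the rule, which is what removes the need for central coordination. The molecule names are invented for the example.

```python
# Toy chemical execution model: a shared multiset of molecules plus a
# reaction rule that consumes reactants and produces a result molecule.
from collections import Counter

space = Counter({"invoke:translate": 1, "text:hello": 1})

def rule_translate(multiset):
    """Reaction: an invocation molecule plus its input datum react,
    producing a result molecule in the shared space."""
    if multiset["invoke:translate"] and multiset["text:hello"]:
        multiset.subtract({"invoke:translate": 1, "text:hello": 1})
        multiset.update({"result:bonjour": 1})
        return True
    return False

fired = rule_translate(space)
```

Because rules are enabled purely by the contents of the space, execution order emerges from data availability rather than from a central scheduler.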

    An Embryonics Inspired Architecture for Resilient Decentralised Cloud Service Delivery

    Data-driven artificial intelligence applications arising from Internet of Things (IoT) technologies can have profound, wide-reaching societal benefits at the intersection of the cyber and physical domains, and use cases are expanding rapidly. For example, smart homes and smart buildings provide intelligent monitoring, resource optimisation, safety, and security for their inhabitants; smart cities can manage transport, waste, energy, and crime at large scale; and smart manufacturing can autonomously produce goods through the self-management of factories and logistics. As these use cases expand further, the requirement to process data accurately and in a timely manner becomes ever more crucial, as many of these applications are safety-critical: loss of life and economic damage are likely possibilities in the event of system failure. While the typical service delivery paradigm, cloud computing, is strong because it operates on economies of scale, the physical distance between cloud data centres and these applications introduces network latency that is incompatible with safety-critical applications. To complicate matters further, the environments they operate in are becoming increasingly hostile, with resource-constrained and mobile wireless networking commonplace. These issues drive the need for new service delivery architectures which operate closer to, or even upon, the network devices, sensors, and actuators which compose these IoT applications at the network edge. Such hostile and resource-constrained environments require the adaptation of traditional cloud service delivery models to decentralised mobile and wireless environments. The resulting architectures need to provide persistent service delivery in the face of a variety of internal and external changes: in other words, resilient decentralised cloud service delivery.
While the current state of the art proposes numerous techniques to enhance the resilience of services in this manner, none provides an architecture for delivering data processing services in a cloud manner that is inherently resilient. Adopting techniques from autonomic computing, whose characteristics are resilient by nature, this thesis presents a biologically-inspired platform modelled on embryonics. Embryonic systems have the ability to self-heal and self-organise whilst showing the capacity to support decentralised data processing. An initial model for embryonics-inspired resilient decentralised cloud service delivery is derived from the decentralised cloud and resilience requirements given for this work. Next, this model is simulated using cellular automata, which illustrate the embryonic concept's ability to provide self-healing service delivery under varying degrees of system component loss. This highlighted optimisation techniques including application complexity bounds, differentiation optimisation, self-healing aggression, and varying system starting conditions, all of which can be adjusted to tune the resilience of the system to different resource capabilities and environmental hostility. Next, a proof-of-concept implementation is developed and validated, illustrating the efficacy of the solution. This proof-of-concept is evaluated at a larger scale, where batches of tests highlighted the different performance criteria and constraints of the system. One key finding was the considerable quantity of redundant messages produced under successful scenarios; these were helpful in enabling resilience yet could increase network contention, so balancing these attributes according to the use case is important.
Finally, graph-based resilience algorithms were executed across all tests to understand the structural resilience of the system and whether this enabled suitable measurement or prediction of the application's resilience. Interestingly, this study highlighted that although the system was not considered structurally resilient, applications were still being executed in the face of many continued component failures. This showed that the autonomic embryonic functionality developed was succeeding in executing applications resiliently, illustrating that structural and application resilience do not necessarily coincide. Additionally, one graph metric, assortativity, was found to be predictive of application resilience, although not of structural resilience.
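The embryonic self-healing principle can be sketched as follows. This is a minimal hypothetical illustration, not the thesis's platform: every cell carries the full "genome" (the set of application roles) and differentiates by position; when a cell dies, an idle spare re-differentiates to take over the lost role. The roles and grid size are invented for the example.

```python
# Minimal sketch of embryonic self-healing: spare cells re-differentiate
# to recover roles lost when cells fail.

GENOME = ["ingest", "filter", "aggregate"]   # roles the application needs

class Cell:
    def __init__(self, idx):
        self.idx = idx
        self.alive = True
        # the first len(GENOME) cells differentiate; the rest idle as spares
        self.role = GENOME[idx] if idx < len(GENOME) else None

def self_heal(cells):
    """Assign roles lost to dead cells to idle spare cells."""
    active = {c.role for c in cells if c.alive and c.role}
    missing = [r for r in GENOME if r not in active]
    spares = [c for c in cells if c.alive and c.role is None]
    for role, spare in zip(missing, spares):
        spare.role = role            # spare differentiates into lost role
    return {c.role for c in cells if c.alive and c.role}

cells = [Cell(i) for i in range(5)]  # 3 working cells + 2 spares
cells[1].alive = False               # the "filter" cell fails
roles_after = self_heal(cells)
```

The "self-healing aggression" mentioned above would correspond to how eagerly, and with how much messaging, this re-differentiation is triggered.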

    Mobiilse vĂ€rkvĂ”rgu protsessihaldus (Business Process Management for the Mobile Internet of Things)

    The Internet of Things (IoT) promotes solutions such as the smart city, where everyday objects connect with information systems and with each other. One example is a road condition monitoring system, in which connected vehicles, such as buses, capture video that is then processed to detect potholes and snow build-up. Building such a solution typically involves establishing a complex centralised system that relies on constant connectivity to all involved devices in order to make decisions, such as which vehicles to involve in the process; this centralised approach may become a bottleneck as the number of IoT devices keeps growing. Designing, automating, managing, and monitoring such processes can be greatly supported by the standards and software systems provided by the field of Business Process Management (BPM). However, BPM techniques are not directly applicable to new computing paradigms, such as Fog Computing and Edge Computing, on which the future of IoT relies. Here, much of the decision-making and processing is moved from central data centres to devices at the network edge, near the end-users and IoT sensors. For example, video could be processed in mini-datacentres deployed throughout the city, e.g., at bus stops. This load distribution reduces the risk of the ever-growing number of IoT devices overloading the data centre. This thesis studies how to reorganise process execution in this decentralised fashion, where processes must dynamically adapt to a volatile edge environment filled with moving devices. Namely, connectivity is intermittent, so decision-making and planning need to take into account factors such as the movement trajectories of mobile devices. We examined this issue in simulations and with a prototype for Android smartphones.
We also showcase the STEP-ONE toolset, which allows researchers to conveniently simulate and analyse these issues in realistic scenarios, such as those of a smart city. https://www.ester.ee/record=b552551
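Trajectory-aware planning of the kind mentioned above can be sketched simply. This is a hypothetical illustration (not the thesis's planner): among candidate mobile devices, pick the one whose known trajectory keeps it within communication range of a task site for the most upcoming time steps. Coordinates, range, and device names are invented.

```python
# Hypothetical trajectory-aware task placement: prefer the device that
# stays in range of the task site the longest.

def in_range(pos, site, radius=5.0):
    return ((pos[0] - site[0]) ** 2 + (pos[1] - site[1]) ** 2) ** 0.5 <= radius

def choose_device(trajectories, site):
    """trajectories: device -> list of future (x, y) positions, one per
    time step. Pick the device in range for the most consecutive steps."""
    def steps_in_range(points):
        count = 0
        for p in points:
            if not in_range(p, site):
                break    # connectivity is intermittent: stop at first gap
            count += 1
        return count
    return max(trajectories, key=lambda d: steps_in_range(trajectories[d]))

trajectories = {
    "bus-7":  [(0, 0), (2, 0), (9, 0)],   # leaves range after 2 steps
    "bus-12": [(1, 1), (1, 2), (2, 2)],   # stays near the site
}
best = choose_device(trajectories, site=(0, 0))
```

A real planner would also weigh device load and energy, but the principle of folding predicted movement into placement decisions is the same.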

    An Approach to Automatically Distribute and Access Knowledge within Networked Embedded Systems in Factory Automation

    This thesis presents a novel approach for automatically distributing and accessing knowledge within factory automation systems built from networked embedded systems. Developments in information, communication, and computational technologies are making it possible to distribute tasks among different control resources, which are networked and work towards a common objective while optimizing desired parameters. A fundamental prerequisite for introducing autonomy to these systems is the ability to represent knowledge distributed within the automation network and to ensure its availability through access mechanisms. This research work focuses on the processes for automatically distributing and accessing that knowledge.

Recently, the industrial world has embraced service-oriented architecture (SOA) patterns to reduce the software integration costs of factory automation systems. This pattern defines service providers, which offer particular functionality, and service requesters, which are entities looking to have their needs satisfied. Several technologies allow implementing a SOA solution; among these, Web technologies are gaining special attention because of their solid presence in other application fields. Providers and requesters that use Web technologies to express their skills and needs are called Web Services. One of the main advantages of services is that the service requester need not know how the service provider accomplishes the functionality or where the service is executed. This benefit has recently been reinforced by the emergence of Cloud Computing, which allows certain processes to be executed by cloud resources.

The capture of human knowledge and its representation in a machine-interpretable manner has been an active research topic for decades. A well-established mechanism for representing knowledge is the use of ontologies, which allow machines to access that knowledge and apply reasoning engines to it. A knowledge base also enables better identification of web services, achievable by adding semantic annotations to the service descriptors; the resulting services are called semantic web services.

With the latest advances in computational resources, systems can be built from a large number of constrained yet easily connected devices, forming a network of computational nodes dedicated to executing control and communication tasks. These tasks are commanded by high-level systems such as Manufacturing Execution Systems (MES) and Enterprise Resource Planning (ERP) modules. The aforementioned technologies allow a vertical approach for communicating commands from MES and ERP directly to the control nodes. This scenario makes it possible to break monolithic MES systems down into small distributed functionalities; if these functionalities use Web standards for interaction and a knowledge base as their main source of information, we arrive at the concept of Open Knowledge-Driven MES (OKD-MES).

The automatic distribution of the knowledge base in an OKD-MES and the accomplishment of the reasoning process in a distributed manner are the main objectives of this research. This research work therefore describes the decentralization and management of the knowledge descriptions currently handled by the Representation Layer (RPL) of the OKD-MES framework. This is achieved through the encapsulation of ontology modules, which may be integrated by a distributed reasoning process on incoming requests. Furthermore, this dissertation presents the concept, principles, and architecture for implementing Private Local Automation Clouds (PLACs), built from cyber-physical systems (CPS).

The thesis is an article thesis, composed of 9 original refereed articles and supported by 7 further articles presented by the author.
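The decentralised knowledge access described above can be sketched in miniature. This is an invented illustration (not the OKD-MES implementation): the knowledge base is split into ontology modules hosted on different nodes, and a query is routed to whichever module covers the requested concept. The module contents and node names are assumptions.

```python
# Toy sketch of distributed ontology-module lookup: each node hosts a
# fragment of the knowledge base; a query is answered by the node whose
# module covers the concept.

MODULES = {
    "node-a": {"Drill":    {"subClassOf": "MachiningTool"}},
    "node-b": {"Conveyor": {"subClassOf": "TransportEquipment"}},
}

def resolve(concept):
    """Ask each node's module in turn. In a real deployment this would be
    a semantic query sent over the network, not a local dict lookup."""
    for node, module in MODULES.items():
        if concept in module:
            return node, module[concept]["subClassOf"]
    return None, None

node, parent = resolve("Conveyor")
```

Distributed reasoning then amounts to combining answers from several such modules when a single request spans more than one fragment.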

    Multiparty session types for dynamic verification of distributed systems

    In large-scale distributed systems, each application is realised through interactions among distributed components. To guarantee safe communication (no deadlocks or communication mismatches) we need programming languages and tools that structure, manage, and policy-check these interactions. Multiparty session types (MPST), a typing discipline for structured interactions between communicating processes, offer a promising approach. To date, however, applications of session types have been limited to static verification, which is not always feasible and is often restrictive in terms of the programming API and the policies that can be specified. This thesis investigates the design and implementation of a runtime verification framework that ensures conformance between programs and specifications. Specifications are written in Scribble, a protocol description language formally founded on MPST. The central idea of the approach is a dynamic monitor, which takes the form of a communicating finite state machine automatically generated from Scribble specifications, together with a communication runtime stipulating a message format. We extend and apply Scribble-based runtime verification in manifold ways. First, we implement a Python library equipped with session primitives and a verification runtime, and integrate it in a large cyber-infrastructure project for oceanography. Second, we examine multiple communication patterns, which reveal and motivate two novel extensions: asynchronous interrupts for the verification of exception-handling behaviours, and time constraints for the enforcement of real-time protocols. Third, we apply the verification framework to actor programming by augmenting an actor library in Python with protocol annotations. For both implementations, measurements show that Scribble-based dynamic checking incurs minimal overhead and allows expressive specifications.
Finally, we explore a static analysis of Scribble specifications to efficiently compute a safe global state from which a monitored system of interacting processes can be recovered after a failure. We provide an implementation of a verification framework for recovery in Erlang. Benchmarks show our recovery strategy outperforms Erlang's built-in static recovery strategy on a number of use cases. Open Access
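The monitoring idea can be shown with a toy finite state machine. In the thesis the FSM is generated from a Scribble specification; here it is hand-written, and the protocol states and message labels are invented, purely to illustrate how a dynamic monitor rejects out-of-order interactions.

```python
# Toy dynamic monitor: a finite state machine checks each message label
# against the protocol and flags communication mismatches.

PROTOCOL = {            # state -> {message label -> next state}
    "start":   {"request": "pending"},
    "pending": {"response": "done", "error": "done"},
}

def monitor(messages, state="start", end="done"):
    """Return True iff the message trace conforms to the protocol."""
    for label in messages:
        if label not in PROTOCOL.get(state, {}):
            return False        # mismatch: message not allowed here
        state = PROTOCOL[state][label]
    return state == end         # trace must reach a terminal state

ok = monitor(["request", "response"])
bad = monitor(["response"])     # reply before request: rejected
```

Checking each message against a pre-generated FSM is what keeps the runtime overhead of this style of verification low.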

    An Agent-based Approach for Improving the Performance of Distributed Business Processes in Maritime Port Community

    In recent years, the concept of the “port community” has been adopted by the maritime transport industry in order to achieve a higher degree of coordination and cooperation amongst the organizations involved in the transfer of goods through the port area. The business processes of the port community supply chain form a complicated whole involving several process steps, multiple actors, and numerous information exchanges. One widely used application of ICT in ports is the Port Community System (PCS), implemented in ports to reduce paperwork and facilitate the information flow related to port operations and cargo clearance. However, existing PCSs are limited in the functionalities that facilitate the management and coordination of material, financial, and information flows within the port community supply chain. This research programme addresses the use of agent technology to introduce business process management functionalities, which are vital for port communities, aiming to enhance the performance of the port community supply chain. The investigation begins with an examination of the current state from both a business and a technical perspective. The business perspective focuses on understanding the nature of the port community, its main characteristics, and its problems; accordingly, a number of requirements are identified as essential amendments to information systems in seaports. The technical perspective focuses on technologies suited to solving problems in business process management within port communities, concentrating on three: workflow technology, agent technology, and service orientation. An analysis of information systems across port communities enables an examination of current PCSs with regard to their coordination and workflow management capabilities.
The most important finding of this analysis is that the performance of the business processes, and in particular the performance of the port community supply chain, is not within the scope of the examined PCSs. Accordingly, the Agent-Based Middleware for Port Community Management (ABMPCM) is proposed as an approach to provide essential functionalities that facilitate collaborative planning and business process management. As a core component of the ABMPCM, the Collaborative Planning Facility (CPF) is described in further detail. A CPF prototype has been developed as an agent-based system for the domain of inland container transport to demonstrate its practical effectiveness. To evaluate the practical application of the CPF, a simulation environment was introduced to facilitate the evaluation process. The research started with the definition of a multi-agent simulation framework for the port community supply chain; a prototype was then implemented and employed for the evaluation of the CPF. The results of the simulation experiments demonstrate that our agent-based approach effectively enhances the performance of business processes in the port community.
