
    Distributed Web Service Coordination for Collaboration Applications and Biological Workflows

    In this dissertation work, we have investigated the main research thrust of decentralized coordination of workflows over web services. To address distributed workflow coordination, we first developed “Web Coordination Bonds”, a capable set of dependency modeling primitives that enable each web service to manage its own dependencies. Web bond primitives are as powerful as extended Petri nets and have sufficient modeling and expressive capabilities to model workflow dependencies. We have designed and prototyped our “Web Service Coordination Management Middleware” (WSCMM) system, which enhances the current web services infrastructure to accommodate web-bond-enabled web services. Finally, based on the core concepts of web coordination bonds and WSCMM, we have developed the “BondFlow” system, which allows easy configuration and distributed coordination of workflows. The footprint of the BondFlow runtime is 24 KB, and the additional third-party software packages, a SOAP client and an XML parser, account for 115 KB.
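
    The abstract does not spell out the bond primitives themselves; as a purely illustrative sketch (not the WSCMM or BondFlow API), the following Python fragment shows one way a service could record and check its own dependency "bonds" before firing. All class and method names are assumptions made for the example.

    # Illustrative sketch only: a toy representation of dependency "bonds" kept by
    # each web service so the service itself decides when it may fire.
    # Class and method names are hypothetical, not the WSCMM/BondFlow API.
    class Service:
        def __init__(self, name):
            self.name = name
            self.bonds = []          # prerequisite services this service tracks itself
            self.completed = False

        def bond_to(self, prerequisite):
            # Record that this service may only fire after `prerequisite` completes.
            self.bonds.append(prerequisite)

        def can_fire(self):
            return all(dep.completed for dep in self.bonds)

        def fire(self):
            if not self.can_fire():
                raise RuntimeError(f"{self.name}: unsatisfied dependencies")
            print(f"invoking {self.name}")
            self.completed = True

    # Example: pay_invoice may only run after approve_order has completed.
    approve = Service("approve_order")
    pay = Service("pay_invoice")
    pay.bond_to(approve)
    approve.fire()
    pay.fire()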

    A System for Rapid Configuration of Distributed Workflows over Web Services and their Handheld-Based Coordination

    Web services technology has lately stirred tremendous interest in industry as well as academia. Web services are self-contained, platform-independent units of functionality available over the internet; they are defined, discovered, and accessed using standard protocols such as WSDL, UDDI, and SOAP. With the advent of service-oriented architecture and the need for more complex applications, it became evident that these independent entities needed a way to collaborate coherently to provide higher-level functionality. The problem of service composition is not an easy one, however; one reason is the self-contained, loosely coupled interaction style, which is also the single most important reason for the technology's popularity. We propose a prototype system for distributed coordination of web services, based on the Web Bonds model for coordination. The system, dubbed the BondFlow system, allows configuration and execution of workflows composed over web services. The BondFlow system presently supports both centralized and distributed coordination of workflows over handhelds, which we regard as an engineering feat and, to our knowledge, unique work in this area.
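
    Since the abstract names SOAP as the access protocol, a minimal sketch of invoking a single workflow step as a SOAP call is shown below; the endpoint URL, namespace, and operation name are placeholders, and this is not the BondFlow interface.

    # Illustrative sketch: invoking one workflow step as a SOAP call.
    # The endpoint, namespace, and operation name are hypothetical placeholders.
    import urllib.request

    SOAP_ENVELOPE = """<?xml version="1.0" encoding="UTF-8"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body>
        <approveOrder xmlns="http://example.org/orders">
          <orderId>42</orderId>
        </approveOrder>
      </soap:Body>
    </soap:Envelope>"""

    request = urllib.request.Request(
        "http://example.org/services/OrderService",   # placeholder endpoint
        data=SOAP_ENVELOPE.encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8",
                 "SOAPAction": "approveOrder"},
    )
    try:
        with urllib.request.urlopen(request, timeout=5) as response:
            print(response.read().decode("utf-8"))
    except OSError as exc:        # the placeholder host will not resolve
        print("call failed:", exc)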

    Interaction Expressions - A Powerful Formalism for Describing Inter-Workflow Dependencies

    Current workflow management technology does not provide adequate means for describing and implementing workflow ensembles, i.e., dynamically evolving collections of more or less independent workflows which have to synchronize only now and then. Interaction expressions are proposed as a simple yet powerful formalism to remedy this shortcoming. Besides operators for sequential composition, iteration, and selection, which are well known from regular expressions, they provide parallel composition and iteration, conjunction, and several advanced features such as parametric expressions, multipliers, and quantifiers. The paper introduces interaction expressions semi-formally, gives examples of their typical use, and describes their implementation and integration with state-of-the-art workflow technology. Major design principles, such as orthogonality and implicit and predictive choice, are discussed and compared with several related approaches.
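
    As a rough illustration of the operator set summarised above (sequential composition, selection, iteration, and parallel composition), the following toy checker tests whether a trace of activities is permitted by an expression, using Brzozowski-style derivatives. It is not the paper's implementation and omits conjunction, multipliers, and quantifiers; all names are assumed for the example.

    # Toy checker (illustrative only) for a regex-like subset of inter-workflow
    # dependency operators: sequence, selection, iteration, parallel composition.
    from dataclasses import dataclass

    class Expr: ...

    @dataclass
    class Empty(Expr): pass            # accepts the empty trace only

    @dataclass
    class Fail(Expr): pass             # accepts nothing

    @dataclass
    class Atom(Expr):                  # a single activity, e.g. "review"
        name: str

    @dataclass
    class Seq(Expr):                   # a ; b  (sequential composition)
        left: Expr
        right: Expr

    @dataclass
    class Alt(Expr):                   # a | b  (selection)
        left: Expr
        right: Expr

    @dataclass
    class Star(Expr):                  # a*     (iteration)
        body: Expr

    @dataclass
    class Par(Expr):                   # a || b (parallel, i.e. interleaved, composition)
        left: Expr
        right: Expr

    def nullable(e):
        # Can the expression accept the empty trace?
        match e:
            case Empty() | Star(_): return True
            case Fail() | Atom(_): return False
            case Seq(l, r) | Par(l, r): return nullable(l) and nullable(r)
            case Alt(l, r): return nullable(l) or nullable(r)

    def deriv(e, a):
        # Residual expression after observing activity `a`.
        match e:
            case Empty() | Fail(): return Fail()
            case Atom(name): return Empty() if name == a else Fail()
            case Alt(l, r): return Alt(deriv(l, a), deriv(r, a))
            case Star(body): return Seq(deriv(body, a), Star(body))
            case Seq(l, r):
                d = Seq(deriv(l, a), r)
                return Alt(d, deriv(r, a)) if nullable(l) else d
            case Par(l, r):
                return Alt(Par(deriv(l, a), r), Par(l, deriv(r, a)))

    def accepts(e, trace):
        for a in trace:
            e = deriv(e, a)
        return nullable(e)

    # (review || edit*) ; publish — publishing only after the review, edits may interleave.
    spec = Seq(Par(Atom("review"), Star(Atom("edit"))), Atom("publish"))
    print(accepts(spec, ["edit", "review", "edit", "publish"]))   # True
    print(accepts(spec, ["publish"]))                             # False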

    A language and toolkit for the specification, execution and monitoring of dependable distributed applications

    PhD thesis. This thesis addresses the problem of specifying the composition of distributed applications out of existing applications, possibly legacy ones. With the automation of business processes on the increase, more and more applications of this kind are being constructed. The resulting applications can be quite complex, are usually long-lived, and are executed in a heterogeneous environment. In a distributed environment, long-lived activities need support for fault tolerance and dynamic reconfiguration: it is likely that the environment where they run will change (nodes may fail, services may be moved elsewhere or withdrawn) during their execution, and the specification will have to be modified. There is also a need for modularity, scalability and openness, yet most existing systems only consider part of these requirements. A new area of research, called workflow management, has been trying to address these issues. This work first looks at what needs to be addressed to support the specification and execution of these new applications in a heterogeneous, distributed environment. A co-ordination language (scripting language) is developed that fulfils the requirements of specifying the composition and inter-dependencies of distributed applications with the properties of dynamic reconfiguration, fault tolerance, modularity, scalability and openness. The architecture of the overall workflow system and its implementation are then presented. The system has been implemented as a set of CORBA services, and the execution environment is built using a transactional workflow management system. Next, the thesis describes the design of a toolkit to specify, execute and monitor distributed applications. The design of the co-ordination language and the toolkit represents the main contribution of the thesis.
    Funding: UK Engineering and Physical Sciences Research Council, CaberNet, Northern Telecom (Nortel).
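
    The thesis's scripting language is not reproduced in the abstract; the sketch below merely conveys, in Python and with an assumed dictionary-based format, the kind of behaviour such a co-ordination script declares: task inter-dependencies plus a fallback binding so the composition survives a withdrawn service.

    # Illustrative sketch only: the flavour of a co-ordination script composing
    # existing applications, with a fallback binding to tolerate a withdrawn service.
    # The dictionary-based "script" format and all names are assumed for the example.
    SCRIPT = {
        "fetch_order":   {"after": [],                "service": "legacy_order_db"},
        "bill_customer": {"after": ["fetch_order"],   "service": "billing_v2",
                          "fallback": "billing_v1"},
        "ship_goods":    {"after": ["bill_customer"], "service": "warehouse"},
    }

    AVAILABLE = {"legacy_order_db", "billing_v1", "warehouse"}   # billing_v2 withdrawn

    def enact(script):
        done = set()
        while len(done) < len(script):
            for task, spec in script.items():
                if task in done or not set(spec["after"]) <= done:
                    continue
                service = spec["service"]
                if service not in AVAILABLE:                      # reconfigure on the fly
                    service = spec.get("fallback")
                    if service not in AVAILABLE:
                        raise RuntimeError(f"no service available for task {task}")
                print(f"{task} -> {service}")
                done.add(task)

    enact(SCRIPT)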

    Data Management in Microservices: State of the Practice, Challenges, and Research Directions

    We are witnessing an increased adoption of microservice architectures by industry, aimed at achieving scalability through functional decomposition, fault tolerance through the deployment of small and independent services, and polyglot persistence through the adoption of different database technologies suited to the needs of each service. Despite the accelerating industrial adoption and the extensive research on microservices, there is a lack of thorough investigation into the state of the practice and the major challenges practitioners face with regard to data management. To bridge this gap, this paper presents a detailed investigation of data management in microservices. Our exploratory study is based on the following methodology: we conducted a systematic literature review of articles reporting the adoption of microservices in industry, in which more than 300 articles were filtered down to 11 representative studies; we analyzed a set of 9 popular open-source microservice-based applications, selected out of more than 20 open-source projects; and, to strengthen our evidence, we conducted an online survey that we used to cross-validate the findings of the previous steps against the perceptions and experiences of over 120 practitioners and researchers. Through this process, we were able to categorize the state of practice and reveal several principled challenges that cannot be solved by software engineering practices alone, but rather need system-level support to alleviate the burden on practitioners. Based on these observations, we also identify a series of research directions for achieving this goal. Fundamentally, novel database systems and data management tools that support isolation for microservices, including fault isolation, performance isolation, data ownership, and independent schema evolution across microservices, must be built to address the needs of this growing architectural style.
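
    To make the data-ownership point concrete, the following sketch shows each service keeping its own store and learning about changes only through events. The in-memory dictionaries and queue stand in for separate databases and a message broker, and none of the names come from the paper.

    # Illustrative sketch of per-service data ownership with event-based propagation.
    # In-memory structures stand in for separate databases and a message broker.
    from collections import deque

    orders_db = {}        # owned exclusively by the Orders service
    shipping_db = {}      # owned exclusively by the Shipping service
    event_bus = deque()   # stand-in for a broker such as Kafka or RabbitMQ

    def orders_place_order(order_id, item):
        # Orders service: write to its own store, then publish an event.
        orders_db[order_id] = {"item": item, "status": "PLACED"}
        event_bus.append({"type": "OrderPlaced", "order_id": order_id, "item": item})

    def shipping_consume_events():
        # Shipping service: build its own view from events; it never reads orders_db.
        while event_bus:
            event = event_bus.popleft()
            if event["type"] == "OrderPlaced":
                shipping_db[event["order_id"]] = {"item": event["item"],
                                                  "state": "AWAITING_PICKUP"}

    orders_place_order(1, "book")
    shipping_consume_events()
    print(shipping_db)    # Shipping now holds its own, eventually consistent copy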

    A Multiagent System for the Reliable Execution of Automatically Composed Ad-hoc Processes

    This article presents an architecture for automatically creating ad-hoc processes for complex value-added services and executing them in a reliable way. The distinguishing feature of ad-hoc processes is that they support users not only in standardized situations, as traditional workflows do, but also in unique, non-recurring situations. Based on user requirements, a service composition engine generates such ad-hoc processes, which integrate individual services in order to provide the desired functionality. Our infrastructure executes ad-hoc processes through transactional agents in a peer-to-peer style, so that process execution is performed under transactional guarantees. Moreover, the service composition engine is used to re-plan in the case of execution failure.
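
    A minimal sketch of the execution model described above, with assumed step names and a stub composition engine rather than the article's components: steps run in order, already completed steps are compensated when one fails, and the composer is then asked for a new plan.

    # Illustrative sketch: execute a composed ad-hoc process step by step; if a step
    # fails, compensate the completed steps and ask a (stub) composer for a new plan.
    def compose(goal, avoid=()):
        # Stub composition engine: swap in an alternative for any step to avoid.
        base = [("reserve_flight", "cancel_flight"),
                ("reserve_hotel", "cancel_hotel")]
        alternatives = {"reserve_flight": ("reserve_train", "cancel_train"),
                        "reserve_hotel": ("reserve_hostel", "cancel_hostel")}
        return [alternatives[step] if step in avoid else (step, comp)
                for step, comp in base]

    def execute(plan, failing_step=None):
        # Run steps in order; on failure, run compensations in reverse order.
        done = []
        for step, compensation in plan:
            if step == failing_step:                   # simulated execution failure
                for _, comp in reversed(done):
                    print("compensate:", comp)
                return False, step
            print("execute:", step)
            done.append((step, compensation))
        return True, None

    ok, failed = execute(compose("book_trip"), failing_step="reserve_hotel")
    if not ok:                                         # re-plan and try again
        print("re-planning without", failed)
        execute(compose("book_trip", avoid=(failed,)))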

    A programming system for process coordination in virtual organisations

    PhD thesis. Distributed business applications are increasingly being constructed by composing them from services provided by various online businesses. Typically, this leads to trading partners coming together to form virtual organisations (VOs). Each member of a VO maintains their autonomy, except with respect to their agreed goals. The structure of the virtual organisation may contain one dominant organisation that dictates the method of achieving the goals, or the members may be considered peers of equal importance. The goals of VOs can be defined by the shared global business processes they contain. To be able to execute these business processes, VOs require a flexible enactment model, as there may be no single ‘owner’ of the business process and therefore no natural place to enact the business processes. One solution is centralised enactment using a trusted third party, but in some cases this may not be acceptable (for instance, for security reasons). This thesis presents a programming system that allows centralised as well as distributed enactment, in which each organisation enacts part of the business process. To achieve distributed enactment we must address the problem of specifying the business process in a manner that is amenable to distribution. The first contribution of this thesis is the presentation of the Task Model, a set of languages and notations for describing workflows that can be enacted in a centralised or decentralised manner. The business processes that we specify coordinate the services that each organisation owns. The second contribution of this thesis is a method of describing the observable behaviour of these services. The language we present, SSDL, provides a flexible and extensible way of describing the messaging behaviour of Web Services. We present a method for checking that a set of services described in SSDL are compatible with each other, and also that a workflow interacts with a service in the desired manner. The final contribution of this thesis is an abstract architecture and prototype implementation of a decentralised workflow engine. The prototype is able to enact workflows described in the Task Model notation in either a centralised or decentralised scenario.
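
    SSDL itself is an XML-based description of messaging behaviour; the toy check below (not SSDL syntax, names invented) only conveys the compatibility idea mentioned above: two services are compatible when every message one side sends, the other side expects to receive at that point in the exchange.

    # Toy illustration of protocol compatibility between two service descriptions.
    # Each protocol is a list of (direction, message) pairs; not SSDL syntax.
    def compatible(protocol_a, protocol_b):
        if len(protocol_a) != len(protocol_b):
            return False
        for (dir_a, msg_a), (dir_b, msg_b) in zip(protocol_a, protocol_b):
            # Messages must match, and one side must send while the other receives.
            if msg_a != msg_b or {dir_a, dir_b} != {"send", "recv"}:
                return False
        return True

    buyer  = [("send", "PurchaseOrder"), ("recv", "Invoice"), ("send", "Payment")]
    seller = [("recv", "PurchaseOrder"), ("send", "Invoice"), ("recv", "Payment")]
    print(compatible(buyer, seller))       # True: the exchanges mirror each other
    print(compatible(buyer, seller[:2]))   # False: the seller stops too early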

    Enhancing Data Processing on Clouds with Hadoop/HBase

    In the current information age, large amounts of data are being generated and accumulated rapidly in various industrial and scientific domains. This imposes important demands on data processing capabilities that can extract sensible and valuable information from the large amount of data in a timely manner. Hadoop, the open-source implementation of Google's data processing framework (MapReduce, Google File System and BigTable), is becoming increasingly popular and is being used to solve data processing problems in various application scenarios. However, having been originally designed for handling very large data sets that can easily be divided into parts to be processed independently with limited inter-task communication, Hadoop lacks applicability to a wider range of use cases. As a result, many projects are under way to enhance Hadoop for different application needs, such as data warehouse applications, machine learning and data mining applications, etc. This thesis is one such research effort in this direction. The goal of the thesis research is to design novel tools and techniques to extend and enhance the large-scale data processing capability of Hadoop/HBase on clouds, and to evaluate their effectiveness in performance tests on prototype implementations. Two main research contributions are described. The first contribution is a light-weight computational workflow system called "CloudWF" for Hadoop. The second contribution is a client library called "HBaseSI" supporting transactional snapshot isolation (SI) in HBase, Hadoop's database component. CloudWF addresses the problem of automating the execution of scientific workflows composed of both MapReduce and legacy applications on clouds with Hadoop/HBase. CloudWF is the first computational workflow system built directly on Hadoop/HBase. It uses novel methods for handling workflow directed-acyclic-graph decomposition, storing and querying dependencies in HBase sparse tables, transparent file staging, and decentralized workflow execution management relying on the MapReduce framework for task scheduling and fault tolerance. HBaseSI addresses the problem of maintaining strong transactional data consistency in HBase tables and is the first SI mechanism developed for HBase. HBaseSI uses novel methods for handling distributed transactional management autonomously by individual clients. These methods greatly simplify the design of HBaseSI and can be generalized to other column-oriented stores with an architecture similar to that of HBase. As a result of the simplicity of the design, HBaseSI adds low overhead to HBase performance and directly inherits many desirable properties of HBase. HBaseSI is non-intrusive to existing HBase installations and user data, and is designed to scale to large clouds in terms of both data size and the number of nodes.
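
    As an illustration of the decentralised dependency handling described for CloudWF, the sketch below stores a workflow DAG as per-task counters of unfinished parents and releases a child task once its counter reaches zero. A plain dictionary stands in for HBase sparse tables, and the task names are invented; this is not CloudWF's actual schema.

    # Illustrative sketch of decentralised dependency tracking for a workflow DAG.
    # A dict stands in for HBase sparse tables; task names are hypothetical.
    workflow_edges = {            # parent -> children
        "align_reads":   ["call_variants"],
        "call_variants": ["annotate", "summarise"],
        "annotate":      [],
        "summarise":     [],
    }

    # Remaining-parent counters, the row-per-task analogue each handler would update.
    pending = {task: 0 for task in workflow_edges}
    for children in workflow_edges.values():
        for child in children:
            pending[child] += 1

    def complete(task):
        # Called when a task finishes: release children whose parents are all done.
        for child in workflow_edges[task]:
            pending[child] -= 1
            if pending[child] == 0:
                print("now runnable:", child)

    print("now runnable:", [t for t, n in pending.items() if n == 0])  # the roots
    complete("align_reads")
    complete("call_variants")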
