
    A Multiagent System for the Reliable Execution of Automatically Composed Ad-hoc Processes

    This article presents an architecture that automatically creates ad-hoc processes for complex value-added services and executes them reliably. Unlike traditional workflows, ad-hoc processes support users not only in standardized situations but also in unique, non-recurring ones. Based on user requirements, a service composition engine generates such ad-hoc processes, which integrate individual services to provide the desired functionality. Our infrastructure executes ad-hoc processes through transactional agents in a peer-to-peer style, so that process execution is performed under transactional guarantees. Moreover, the service composition engine is used to re-plan in the case of execution failure.
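    A rough sketch of the execute-then-re-plan loop described above may make it concrete. The names below (execute_process, replan, ServiceFailure) are illustrative assumptions, not the paper's API; the point is only that a failed step hands the completed prefix back to a composition engine, which plans the remaining work anew.

    class ServiceFailure(Exception):
        pass

    def execute_process(steps, replan, max_replans=3):
        """Run each step; on failure, hand the completed prefix to the
        composition engine so it can plan the remaining work anew."""
        done = []
        while steps:
            try:
                done.append(steps[0]())   # invoke the next individual service
                steps = steps[1:]
            except ServiceFailure:
                if max_replans == 0:
                    raise                 # give up after repeated failures
                max_replans -= 1
                steps = replan(done)      # fresh plan for the remaining goal
        return done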

    Recovery within long running transactions

    As computer systems continue to grow in complexity, the possibilities of failure increase. At the same time, the increasing pervasiveness of computer systems in day-to-day activities has brought increased expectations of their reliability. This has led to the need for effective and automatic error recovery techniques to resolve failures. Transactions enable the handling of failure propagation over concurrent systems due to dependencies, restoring the system to the point before the failure occurred. However, in various settings, especially when interacting with the real world, reversal is not possible. The notion of compensations has long been advocated as a way of addressing this issue, through the specification of activities which can be executed to undo partial transactions. Still, there is no accepted standard theory; the literature offers a plethora of distinct formalisms and approaches. In this survey, we review compensations from a theoretical point of view by: (i) giving a historic account of the evolution of compensating transactions; (ii) delineating and describing a number of design options involved; (iii) presenting a number of formalisms found in the literature, exposing similarities and differences; (iv) comparing formal notions of compensation correctness; (v) giving insights regarding the application of compensations in practice; and (vi) discussing current and future research trends in the area.
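    The compensation idea the survey studies can be illustrated with a minimal saga-style sketch: each activity carries a compensating activity, and when a later activity fails, the completed ones are undone in reverse order. All names here are illustrative and are not drawn from any particular formalism in the survey.

    def run_saga(activities):
        """activities: list of (action, compensation) pairs."""
        completed = []
        try:
            for action, compensation in activities:
                action()
                completed.append(compensation)
        except Exception:
            # Undo partial work: compensate in reverse order of completion.
            for compensation in reversed(completed):
                compensation()
            raise

    # Example: cancelling a booking compensates for making it.
    run_saga([
        (lambda: print("book flight"), lambda: print("cancel flight")),
        (lambda: print("book hotel"),  lambda: print("cancel hotel")),
    ])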

    Adaptive Composition in Dynamic Service Environments

    Due to distribution, participant autonomy, and lack of local control, service-based systems operate in highly dynamic and uncertain environments. In the face of such dynamism and volatility, the ability to manage service changes and exceptions during composite service execution is a vital requirement. Most current adaptive composition approaches, however, fail to address service changes without causing undesirable disruptions in execution or considerably degrading the quality of the composite application. In response, this paper presents a novel adaptive execution approach which efficiently handles service changes occurring at execution time, for both repair and optimisation purposes. The adaptation is performed as soon as possible and in parallel with the execution process, thus reducing interruption time, increasing the chance of a successful recovery, and producing the best solution for the current environment state. The effectiveness of the proposed approach is demonstrated both analytically and empirically through a case study evaluation in the framework of learning object composition. In particular, the results show that, even with frequent changes (e.g. 20 changes per service execution), or in cases where interference with execution is unavoidable (e.g. when an executed service delivers unanticipated quality values), our approach manages to recover from the situation with minimal interruption.
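    The repair-in-parallel-with-execution idea can be sketched roughly as follows. This is an assumed scheme for illustration, not the paper's implementation: while the current service runs, a background worker looks for a replacement for an upcoming service that has become unavailable, so adaptation does not interrupt execution.

    from concurrent.futures import ThreadPoolExecutor

    def execute_with_parallel_repair(plan, is_available, find_replacement):
        """plan: list of callables; is_available/find_replacement consult
        the (changing) service environment."""
        with ThreadPoolExecutor(max_workers=1) as repair_pool:
            for i in range(len(plan)):
                pending = None
                # If a later step has broken, start repairing it now ...
                if i + 1 < len(plan) and not is_available(plan[i + 1]):
                    pending = repair_pool.submit(find_replacement, plan[i + 1])
                plan[i]()   # ... while execution proceeds unblocked
                if pending is not None:
                    plan[i + 1] = pending.result()   # swap in the replacement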

    Long Running Transactions Within Enterprise Resource Planning Systems

    Recently, one of the major problems in various countries has been the management of complex organisations in an increasingly competitive marketplace. This problem can be addressed using Enterprise Resource Planning (ERP) systems, which offer an integrated, real-time view of the whole business process within an organisation. However, such systems have complicated workflows and are costly to analyse when managing the whole business process. Thus, Long Running Transaction (LRT) models have been proposed as a solution: they simplify the analysis of ERP system workflows, help manage the whole organisational process, and ensure that transactions completed in one business process are not processed again in any other. In practice, LRT models face various problems, such as rollback and check-pointing activities. This has led to the use of Communication Closed Layers (CCLs), which decompose processes into layers that can be analysed easily as sequential programs. The purpose of this work is therefore to develop an advanced approach to implementing and analysing the workflow of an organisation in order to deal with failures in Long Running Transactions (LRTs) within Enterprise Resource Planning (ERP) systems using Communication Closed Layers (CCLs). Furthermore, it examines possible enhancements to the available methodology for ERP systems by studying the suitability and applicability of LRTs for modelling ERP workflows, offering simple and elegant constructs for implementing these complex and expensive ERP workflow systems. The model implemented in this thesis addresses two main challenges: the incompatibilities that result from applying traditional transaction-processing concepts to the ERP context, and the complexity of ERP workflows. The first challenge is addressed by offering new semantics that allow the modelling of concepts such as rollbacks and check-points through various constraints, while the second is addressed through the use of the Communication Closed Layer (CCL) approach. The reconfigurable computational model of an ERP workflow system implemented in this work can simulate real ERP workflow systems and yields a better understanding of the use of ERP systems in enterprise environments. Moreover, a case study is introduced to evaluate the application of the implemented model using three scenarios. The evaluation explores the effectiveness of executable ERP computational models and offers a simple methodology for building such systems using novel approaches. A comparison of the current model with two previous models shows that the new model outperforms them: it retains their useful features while resolving the limitations that made them inappropriate for ERP workflow modelling.
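    A toy sketch of the rollback and check-pointing activities mentioned above, with layers run in the CCL spirit: one layer completes and checkpoints before the next starts, so a failure rolls back a single layer rather than the whole transaction. All names here are hypothetical; the thesis's model is considerably richer.

    import copy

    class LongRunningTransaction:
        def __init__(self, state):
            self.state = state
            self.checkpoints = []

        def checkpoint(self):
            self.checkpoints.append(copy.deepcopy(self.state))

        def rollback(self):
            """Restore the latest checkpoint instead of undoing everything."""
            self.state = self.checkpoints.pop()

    # Each layer completes and checkpoints before the next begins.
    txn = LongRunningTransaction({"orders": []})
    for layer in (lambda s: s["orders"].append("create"),
                  lambda s: s["orders"].append("approve")):
        txn.checkpoint()
        try:
            layer(txn.state)
        except Exception:
            txn.rollback()
            break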

    Knowledge-infused and Consistent Complex Event Processing over Real-time and Persistent Streams

    Emerging applications in the Internet of Things (IoT) and Cyber-Physical Systems (CPS) present novel challenges to Big Data platforms for performing online analytics. Ubiquitous sensors from IoT deployments are able to generate data streams at high velocity that include information from a variety of domains and accumulate to large volumes on disk. Complex Event Processing (CEP) is recognized as an important real-time computing paradigm for analyzing continuous data streams. However, existing work on CEP is largely limited to relational query processing, exposing two distinctive gaps in query specification and execution: (1) infusing the relational query model with higher-level knowledge semantics, and (2) seamless query evaluation across temporal spaces that span past, present, and future events. Closing these gaps allows accessible analytics over data streams having properties from different disciplines, and helps span the velocity (real-time) and volume (persistent) dimensions. In this article, we introduce a Knowledge-infused CEP (X-CEP) framework that provides domain-aware knowledge query constructs along with temporal operators that allow end-to-end queries to span real-time and persistent streams. We translate this query model to efficient query execution over online and offline data streams, proposing several optimizations to mitigate the overheads introduced by evaluating semantic predicates and by accessing high-volume historic data streams. The proposed X-CEP query model and execution approaches are implemented in our prototype semantic CEP engine, SCEPter. We validate our query model using domain-aware CEP queries from a real-world Smart Power Grid application, and experimentally analyze the benefits of our optimizations for executing these queries, using event streams from a campus-microgrid IoT deployment.
    Comment: 34 pages, 16 figures, accepted in Future Generation Computer Systems, October 27, 201
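    A toy example of a CEP-style sequence query with a knowledge-infused predicate may clarify the gap being addressed. The names below are invented for illustration and do not reflect SCEPter's actual query language; the domain predicate (is_hvac) stands in for a condition resolved from background knowledge rather than from a relational attribute.

    from collections import deque

    def match_sequence(events, first, second, window):
        """Yield (e1, e2) with first(e1), second(e2), and e2 at most
        `window` seconds after e1; `events` is ordered by timestamp."""
        candidates = deque()
        for e in events:
            while candidates and e["ts"] - candidates[0]["ts"] > window:
                candidates.popleft()      # expire stale partial matches
            if second(e):
                for c in candidates:
                    yield (c, e)
            if first(e):
                candidates.append(e)

    # Domain-aware predicate: "event comes from an HVAC load", resolved
    # from background knowledge rather than a relational attribute.
    is_hvac = lambda e: e["type"] in {"chiller", "air_handler"}
    spike = lambda e: e["kw"] > 100
    stream = [{"ts": 0, "type": "chiller", "kw": 20},
              {"ts": 30, "type": "meter", "kw": 140}]
    print(list(match_sequence(stream, is_hvac, spike, 60)))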

    Sixth Workshop and Tutorial on Practical Use of Coloured Petri Nets and the CPN Tools Aarhus, Denmark, October 24-26, 2005

    This booklet contains the proceedings of the Sixth Workshop on Practical Use of Coloured Petri Nets and the CPN Tools, October 24-26, 2005. The workshop is organised by the CPN group at the Department of Computer Science, University of Aarhus, Denmark. The papers are also available in electronic form via the web pages: http://www.daimi.au.dk/CPnets/workshop0

    Adaptive object management for distributed systems

    This thesis describes an architecture supporting the management of pluggable software components and evaluates it against the requirements for an enterprise integration platform for the manufacturing and petrochemical industries. In a distributed environment, we need mechanisms to manage objects and their interactions. At the least, we must be able to create objects in different processes on different nodes; we must be able to link them together so that they can pass messages to each other across the network; and we must deliver their messages in a timely and reliable manner. Object-based environments which support these services already exist, for example ANSAware (ANSA, 1989), DEC's ObjectBroker (ACA, 1992), and Iona's Orbix (Orbix, 1994). Yet such environments provide limited support for composing applications from pluggable components. Pluggability is the ability to install and configure a component into an environment dynamically when the component is used, without specifying static dependencies between components when they are produced. Pluggability is supported to a degree by dynamic binding: components may be programmed to import references to other components and to explore their interfaces at runtime, without using static type dependencies. Yet this overloads the component with the responsibility to explore bindings. What is still generally missing is an efficient general-purpose binding model for managing bindings between independently produced components. In addition, existing environments provide no clear strategy for dealing with fine-grained objects: the overhead of runtime binding and remote messaging will severely reduce performance where there are many objects with complex patterns of interaction. We need an adaptive approach to managing configurations of pluggable components according to the needs and constraints of the environment. Management is made difficult by embedding bindings in component implementations and by relying on strong typing as the only means of verifying and validating bindings. To solve these problems we have built a set of configuration tools on top of an existing distributed support environment. Specification tools facilitate the construction of independent pluggable components. Visual composition tools facilitate the configuration of components into applications and the verification of composite behaviours. A configuration model is constructed which maintains the environmental state. Adaptive management is made possible by changing the management policy according to this state. Such policy changes affect the location of objects, their bindings, and the choice of messaging system.
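    The binding problem described here can be illustrated with a minimal registry sketch in which bindings live in configuration rather than in component code. The names are illustrative only; the thesis builds its tools atop an existing distributed support environment, not a toy registry like this.

    class Registry:
        def __init__(self):
            self._bindings = {}

        def bind(self, interface, factory):
            self._bindings[interface] = factory   # set by configuration tools

        def resolve(self, interface):
            return self._bindings[interface]()    # no hard-coded peers

    registry = Registry()
    registry.bind("logger", lambda: print)        # swap implementations freely

    class OrderProcessor:
        def __init__(self, registry):
            self.log = registry.resolve("logger") # binding resolved at runtime

        def process(self, order):
            self.log(f"processing {order}")

    OrderProcessor(registry).process("order-42")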