
    Programming distributed and adaptable autonomous components--the GCM/ProActive framework

    Component-oriented software has become a useful tool for building larger and more complex systems by describing an application in terms of encapsulated, loosely coupled entities called components. At the same time, asynchronous programming patterns allow for the development of efficient distributed applications. While several component models and frameworks have been proposed, most of them tightly integrate the component model with the middleware they run on. This intertwining is generally implicit and not discussed, leading to entangled, hard-to-maintain code. This article describes our efforts in developing the GCM/ProActive framework for providing distributed and adaptable autonomous components. GCM/ProActive integrates a component model designed for execution on large-scale environments with a programming model based on active objects, allowing a high degree of distribution and concurrency. This integrated model provides a more powerful development, composition, and execution environment than other distributed component frameworks. We illustrate that GCM/ProActive is particularly well suited to programming autonomic component systems and to integration into a service-oriented environment.
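    To make the active-object idea more concrete, here is a minimal sketch in Java, assuming a hypothetical ActiveService class; it is not the GCM/ProActive API, but it illustrates how asynchronous method calls can return futures while requests are served sequentially by the object's own thread.

        import java.util.concurrent.CompletableFuture;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        // Hypothetical illustration of the active-object pattern (not the ProActive API):
        // each call becomes a request queued on the object's own thread, and the caller
        // immediately receives a future instead of blocking.
        class ActiveService {
            private final ExecutorService requestQueue = Executors.newSingleThreadExecutor();
            private int state = 0; // internal state, touched only by the service thread

            // Asynchronous method: returns a future completed once the request is served.
            CompletableFuture<Integer> addAndGet(int delta) {
                return CompletableFuture.supplyAsync(() -> {
                    state += delta;      // safe: all requests run sequentially
                    return state;
                }, requestQueue);
            }

            void shutdown() {
                requestQueue.shutdown();
            }
        }

        class ActiveObjectDemo {
            public static void main(String[] args) throws Exception {
                ActiveService service = new ActiveService();
                CompletableFuture<Integer> f1 = service.addAndGet(5);   // non-blocking call
                CompletableFuture<Integer> f2 = service.addAndGet(10);  // queued behind f1
                System.out.println(f1.get() + ", " + f2.get());         // blocks only here: 5, 15
                service.shutdown();
            }
        }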

    Distributed AOP middleware for large-scale scenarios

    In this PhD dissertation we present a distributed middleware proposal for large-scale application development. Our main aim is to separate the distributed concerns of these applications, such as replication, so that they can be integrated independently and transparently. Our approach is based on implementing these concerns using the paradigm of distributed aspects. In addition, our proposal builds on peer-to-peer (P2P) network and aspect-oriented programming (AOP) substrates to provide these concerns in a decentralized, decoupled, efficient, and transparent way. Our middleware architecture is divided into two layers: a composition model and a scalable deployment platform for distributed aspects. Finally, we demonstrate the viability and applicability of our model through the implementation of prototypes and experimentation in real large-scale networks.
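    As a rough illustration of how a replication concern can be integrated transparently, the sketch below uses a plain Java dynamic proxy as a stand-in for around advice; KeyValueStore and the replica list are hypothetical, and this is not the middleware described in the dissertation, which weaves distributed aspects over a P2P substrate rather than local proxies.

        import java.lang.reflect.InvocationHandler;
        import java.lang.reflect.Method;
        import java.lang.reflect.Proxy;
        import java.util.List;

        // Hypothetical target interface used only for this illustration.
        interface KeyValueStore {
            void put(String key, String value);
            String get(String key);
        }

        // "Around advice" implemented with a dynamic proxy: writes are forwarded to
        // replica stores, so the calling code never has to mention replication.
        class ReplicationAspect implements InvocationHandler {
            private final KeyValueStore primary;
            private final List<KeyValueStore> replicas;

            ReplicationAspect(KeyValueStore primary, List<KeyValueStore> replicas) {
                this.primary = primary;
                this.replicas = replicas;
            }

            @Override
            public Object invoke(Object proxy, Method method, Object[] args) throws Exception {
                Object result = method.invoke(primary, args);    // proceed with the original call
                if (method.getName().equals("put")) {             // replicate state-changing calls
                    for (KeyValueStore replica : replicas) {
                        method.invoke(replica, args);
                    }
                }
                return result;
            }

            static KeyValueStore weave(KeyValueStore primary, List<KeyValueStore> replicas) {
                return (KeyValueStore) Proxy.newProxyInstance(
                        KeyValueStore.class.getClassLoader(),
                        new Class<?>[] {KeyValueStore.class},
                        new ReplicationAspect(primary, replicas));
            }
        }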

    Service Oriented Mobile Computing

    The spread of concepts such as Pervasive and Mobile Computing introduces two fundamental aspects into the field of distributed systems: user mobility and interaction with the surrounding environment, both favoured by the growing consumer adoption of mobile devices equipped with wireless connectivity. To extend the functionality that Service-Oriented Architectures (SOA) and the peer-to-peer paradigm have brought to distributed systems to devices with limited resources (in terms of computational capacity, memory, and battery), a lightweight middleware designed with these constraints in mind is required. This thesis presents NAM (Networked Autonomic Machine), a formalism that exhaustively describes such a system; it is a theoretical model for defining hardware and software entities able to share their resources in a completely altruistic way. In particular, the work concentrates on the definition and management of one particular type of resource, services, which can be offered and used by mobile devices through composition and migration mechanisms. NSAM (Networked Service-oriented Autonomic Machine) is a specialization of NAM for sharing services in a peer-to-peer network, and it is based on three fundamental concepts: overlay schemes, dynamic service composition, and peer self-configuration. The thesis also presents several application activities that build on two middleware platforms developed by the Distributed Systems Group (DSG) of the University of Parma: SP2A (Service Oriented Peer-to-peer Architecture), a framework for developing distributed applications through resource sharing in a peer-to-peer network, and Jxta-Soap, which enables sharing Web Services in a JXTA peer-to-peer network. The applications range from logistics to e-learning communities, Ambient Intelligence, and emergency management, and share as a common denominator the creation of heterogeneous networks and resource sharing that includes mobile devices. It is also shown how these applications can be optimized by introducing the NAM framework described here, so that different types of resources can be shared efficiently and proactively.

    How To Touch a Running System

    The increasing importance of distributed and decentralized software architectures brings more and more attention to adaptive software. Obtaining adaptiveness, however, is a difficult task, as the software design needs to foresee and cope with a variety of situations. Using reconfiguration of components facilitates this task, as the adaptivity is handled at the architecture level instead of directly in the code. This results in a separation of concerns: the appropriate reconfiguration can be devised at a coarse level, while the implementation of the components can remain largely unaware of reconfiguration scenarios. We study reconfiguration in component frameworks based on formal theory. We first discuss programming with components, exemplified by the development of the cmc model checker. This highly efficient model checker is made of C++ components and serves as an example of component-based software development practice in general, and also provides insights into the principles of adaptivity. However, its component model focuses on high performance and is not geared towards using the structuring principle of components for controlled reconfiguration. We thus complement this highly optimized model with a message-passing-based component model that takes reconfigurability as its central principle. Supporting reconfiguration in a framework means relieving the programmer of as many of the peculiarities as possible. We use the formal description of the component model to provide a reconfiguration algorithm that retains as much flexibility as possible while avoiding most problems that arise due to concurrency. This algorithm is embedded in a general four-stage adaptivity model inspired by physical control loops. The reconfiguration is designed to work with stateful components, retaining their data and unprocessed messages. Reconfiguration plans, which are given a formal semantics, form the input of the reconfiguration algorithm. We show that the algorithm achieves perceived atomicity of the reconfiguration process for an important class of plans, i.e., the whole process of reconfiguration is perceived as one atomic step, while minimizing the blocking of components. We illustrate the applicability of our approach to reconfiguration with several examples, such as fault tolerance and automated resource control.
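    A minimal sketch of the "stop, transfer, resume" idea behind such a reconfiguration step is given below, assuming hypothetical Component and Reconfigurator types; it is not the algorithm from the thesis, which additionally handles concurrency across a whole component assembly.

        import java.util.Queue;

        // Hypothetical message-passing component; the real model in the thesis is formal
        // and more elaborate, this only illustrates the stop-transfer-resume idea.
        interface Component {
            void deliver(String message);          // enqueue a message for later processing
            Object exportState();                  // hand over internal state
            Queue<String> drainMailbox();          // hand over unprocessed messages
            void importState(Object state, Queue<String> pending);
        }

        class Reconfigurator {
            // Replace 'oldComp' by 'newComp' so that the swap appears as one atomic step:
            // the component is blocked only while state and pending messages are moved.
            static Component replace(Component oldComp, Component newComp) {
                synchronized (oldComp) {                            // block the affected component
                    Object state = oldComp.exportState();           // keep its data ...
                    Queue<String> pending = oldComp.drainMailbox(); // ... and unprocessed messages
                    newComp.importState(state, pending);            // transfer both to the replacement
                }
                return newComp;                                     // rewire connections to the new component
            }
        }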

    Cooperation Patterns and Adaptation Patterns for Service-Based Inter-Organizational Workflows

    Modernization is an effective approach to making existing mainframe and distributed systems more responsive to business needs. SOA (service-oriented architecture) is an adequate paradigm that allows companies to tap into the business value of their current systems and position IT for rapid future changes to the business model. In our research work, we focus on the use of SOA to implement Inter-Organizational WorkFlows (IOWF). The goal is to benefit from the advantages offered by the SOA paradigm, such as interoperability, reusability and flexibility, in order to obtain workflow models that are easily adaptable, evolvable and reusable. This paper focuses on two specific IOWF architectures, "chained execution" and "subcontracting"; the first contribution of this work is the definition of Service-Based Cooperation Patterns (SBCP) suitable for these two architectures. An SBCP is based on SOA; it is defined along three main dimensions: the distribution of services among the partners' sites, the control of instance execution, and the structure of interaction between the workflows involved in the cooperation. The second contribution of the paper concerns the adaptation and evolution of IOWF process models conforming to the defined SBCP. We state the main adaptation operations that can be applied to these models, focusing on adaptation at the process and interactional levels. In line with the three dimensions of SBCP, we define three classes of adaptation patterns: "service adaptation", "control flow adaptation" and "interaction adaptation" patterns. We also distinguish certain adaptation operations, called evolutions of process models, based on two perspectives: the expansion of the global functionality of the process and the expansion of the cooperation; we show that some evolutions are realized by reusing existing IOWF models. For implementation, we consider IOWF process models specified in BPEL.
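    To give a flavour of what an adaptation pattern might look like operationally, here is a minimal sketch under assumed names (ProcessModel, AdaptationPattern and ServiceSubstitution are hypothetical and not taken from the paper), showing a "service adaptation" that rebinds one activity to a different partner service while leaving control flow and interactions untouched.

        import java.util.HashMap;
        import java.util.Map;

        // Hypothetical, highly simplified view of an IOWF process model:
        // each activity name is bound to the endpoint of the partner service realizing it.
        class ProcessModel {
            final Map<String, String> serviceBindings;
            ProcessModel(Map<String, String> serviceBindings) { this.serviceBindings = serviceBindings; }
        }

        // An adaptation pattern transforms one process model into another.
        interface AdaptationPattern {
            ProcessModel apply(ProcessModel model);
        }

        // Example of the "service adaptation" class: substitute the service bound to
        // one activity without touching control flow or the interaction structure.
        class ServiceSubstitution implements AdaptationPattern {
            private final String activity;
            private final String newEndpoint;

            ServiceSubstitution(String activity, String newEndpoint) {
                this.activity = activity;
                this.newEndpoint = newEndpoint;
            }

            @Override
            public ProcessModel apply(ProcessModel model) {
                Map<String, String> bindings = new HashMap<>(model.serviceBindings);
                bindings.put(activity, newEndpoint);
                return new ProcessModel(bindings);
            }
        }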

    A Model for Scientific Workflows with Parallel and Distributed Computing

    In the last decade we witnessed an immense evolution of computing infrastructures in terms of processing, storage and communication. On one hand, developments in hardware architectures have made it possible to run multiple virtual machines on a single physical machine. On the other hand, the increase in available network communication bandwidth has enabled the widespread use of distributed computing infrastructures, for example based on clusters, grids and clouds. The above factors have enabled different scientific communities to aim for the development and implementation of complex scientific applications possibly involving large amounts of data. However, due to their structural complexity, these applications require decomposition models that allow multiple tasks to run in parallel and distributed environments. The scientific workflow concept arises naturally as a way to model applications composed of multiple activities. In fact, in the past decades many initiatives have been undertaken to model application development using the workflow paradigm, both in the business and in the scientific domains. However, despite such intensive efforts, current scientific workflow systems and tools still have limitations, which pose difficulties for the development of emerging large-scale, distributed and dynamic applications. This dissertation proposes the AWARD model for scientific workflows with parallel and distributed computing. AWARD is an acronym for Autonomic Workflow Activities Reconfigurable and Dynamic. The AWARD model has the following main characteristics. It is based on a decentralized execution control model where multiple autonomic workflow activities interact by exchanging tokens through input and output ports. The activities can be executed separately in diverse computing environments, such as a single computer or multiple virtual machines running on distributed infrastructures such as clusters and clouds. It provides basic workflow patterns for parallel and distributed application decomposition, as well as other useful patterns supporting feedback loops and load balancing. The model is suitable for expressing applications based on a finite or infinite number of iterations, thus making it possible to model long-running workflows, which are typical in scientific experimentation. A distinctive contribution of the AWARD model is its support for dynamic reconfiguration of long-running workflows. A dynamic reconfiguration makes it possible to modify the structure of the workflow, for example to introduce new activities or to modify the connections between activity input and output ports. The activity behavior can also be modified, for example by dynamically replacing the activity algorithm. In addition to proposing a new workflow model, this dissertation presents the implementation of a fully functional software architecture that supports the AWARD model. The implemented prototype was used to validate and refine the model across multiple workflow scenarios, with experimental results clearly demonstrating the advantages of the major characteristics and contributions of the AWARD model. The implemented prototype was also used to develop application cases, such as a workflow supporting an implementation of the MapReduce model and a workflow supporting a text mining application developed by an external user. The extensive experimental work confirmed the adequacy of the AWARD model and its implementation for developing applications that exploit parallelism and distribution using the scientific workflow paradigm.
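    The token-exchange behavior of AWARD activities can be sketched with plain Java blocking queues standing in for input and output ports; the names below (Activity, reconfigure, and so on) are hypothetical and not the actual AWARD API, but they illustrate an autonomic activity whose algorithm can be replaced at runtime.

        import java.util.concurrent.BlockingQueue;
        import java.util.function.Function;

        // Hypothetical stand-in for an AWARD-style activity: it repeatedly consumes a token
        // from its input port, applies its (replaceable) algorithm, and emits the result on
        // its output port. Ports are plain blocking queues here; in AWARD the connected
        // activities may run on separate machines.
        class Activity implements Runnable {
            private final BlockingQueue<String> inPort;
            private final BlockingQueue<String> outPort;
            private volatile Function<String, String> algorithm;   // can be swapped at runtime

            Activity(BlockingQueue<String> inPort, BlockingQueue<String> outPort,
                     Function<String, String> algorithm) {
                this.inPort = inPort;
                this.outPort = outPort;
                this.algorithm = algorithm;
            }

            void reconfigure(Function<String, String> newAlgorithm) {
                this.algorithm = newAlgorithm;   // dynamic reconfiguration of the activity behavior
            }

            @Override
            public void run() {
                try {
                    while (true) {
                        String token = inPort.take();        // wait for an input token
                        outPort.put(algorithm.apply(token)); // emit the result token
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();      // stop when the workflow shuts down
                }
            }
        }

    Two such activities connected by sharing a queue already form a minimal pipeline, which is the flavour of decentralized, port-based composition the model describes.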

    Self-managed Workflows for Cyber-physical Systems

    Workflows are a well-established concept for describing business logic and processes in web-based applications and enterprise application integration scenarios at an abstract, implementation-agnostic level. Applying Business Process Management (BPM) technologies to increase autonomy and automate sequences of activities in Cyber-physical Systems (CPS) promises various advantages, including higher flexibility and simplified programming, more efficient resource usage, and easier integration and orchestration of CPS devices. However, traditional BPM notations and engines have not been designed for use in the context of CPS, which raises new research questions arising from the close coupling of the virtual and physical worlds. Among these challenges are the interaction with complex compounds of heterogeneous sensors, actuators, things and humans; the detection and handling of errors in the physical world; and the synchronization of the cyber-physical process execution models. Novel factors related to the interaction with the physical world, including real-world obstacles, inconsistencies and inaccuracies, may jeopardize the successful execution of workflows in CPS and may lead to unanticipated situations. This thesis investigates properties and requirements of CPS relevant for the introduction of BPM technologies into cyber-physical domains. We discuss existing BPM systems and related work regarding the integration of sensors and actuators into workflows, the development of a Workflow Management System (WfMS) for CPS, and the synchronization of the virtual and physical process execution as part of self-* capabilities for WfMSes. Based on the identified research gap, we present concepts and prototypes for the development of a CPS WfMS with respect to all phases of the BPM lifecycle. First, we introduce a CPS workflow notation that supports modelling the interaction of complex sensors, actuators, humans, dynamic services and WfMSes at the business process level. In addition, the effects of the workflow execution can be specified in the form of goals defining success and error criteria for the execution of individual process steps. Along with that, we introduce the notion of Cyber-physical Consistency. Next, we present a system architecture for a corresponding WfMS (PROtEUS) to execute the modelled processes, also in distributed execution settings and with a focus on interactive process management. Subsequently, the integration of a cyber-physical feedback loop to increase the resilience of the process execution at runtime is discussed. Within this MAPE-K loop, sensor and context data are related to the effects of the process execution, deviations from expected behaviour are detected, and compensations are planned and executed. The execution of this feedback loop can be scaled depending on the required level of precision and consistency. Our implementation of the MAPE-K loop proves to be a general framework for adding self-* capabilities to WfMSes. The evaluation of our concepts within a smart home case study shows expected behaviour, reasonable execution times, reduced error rates and high coverage of the identified requirements, which makes our CPS WfMS a suitable system for introducing workflows on top of the systems, devices, things and applications of CPS. (Thesis outline: 1. Introduction; 2. Workflows and Cyber-physical Systems; 3. Related Work; 4. Modelling of Cyber-physical Workflows with Consistency Style Sheets; 5. Architecture of a WfMS for Distributed CPS Workflows; 6. Scalable Execution of Self-managed CPS Workflows; 7. Evaluation; 8. Summary and Future Work.)
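    As a rough, self-contained illustration of the MAPE-K idea referenced above, the following sketch runs one monitor-analyse-plan-execute iteration over a shared knowledge map; the class, the temperature example and the tolerance value are hypothetical, and this is not the PROtEUS feedback service.

        import java.util.HashMap;
        import java.util.Map;

        // Hypothetical MAPE-K loop supervising one workflow step: sensor readings are
        // monitored, deviations from the step's goal are analysed, a compensation is
        // planned, and then executed. Illustration only.
        class WorkflowFeedbackLoop {
            private final Map<String, Double> knowledge = new HashMap<>(); // K: shared knowledge base

            void runIteration(double sensedTemperature, double goalTemperature) {
                // Monitor: collect sensor/context data about the physical effect of the step.
                knowledge.put("temperature", sensedTemperature);

                // Analyse: compare the observed effect with the goal defined for the process step.
                double deviation = goalTemperature - knowledge.get("temperature");
                boolean goalViolated = Math.abs(deviation) > 0.5; // hypothetical tolerance

                // Plan: choose a compensation if the physical world deviates from the model.
                String plan = goalViolated
                        ? (deviation > 0 ? "increase heating" : "decrease heating")
                        : "no action";

                // Execute: trigger the compensating action (here just reported).
                System.out.println("compensation: " + plan);
            }
        }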

    Analysis of current middleware used in peer-to-peer and grid implementations for enhancement by catallactic mechanisms

    This deliverable describes the work done in Task 3.1, Middleware analysis: analysis of current middleware used in peer-to-peer and grid implementations for enhancement by catallactic mechanisms, from Work Package 3, Middleware Implementation. The document is divided into four parts: the introduction with application scenarios and middleware requirements, the Catnets middleware architecture, an evaluation of existing middleware toolkits, and conclusions. The work defines requirements for grid and peer-to-peer middleware architectures and analyses their suitability for a prototype implementation of the Catallaxy. A middleware architecture for implementing the Catallaxy in application-layer networks is presented.