
    Applications of Blockchain in Business Processes: A Comprehensive Review

    Blockchain (BC), as an emerging technology, is revolutionizing Business Process Management (BPM) in multiple ways. Its primary adoption is as a trusted infrastructure that guarantees trust in collaborations among multiple partners in trustless environments. In particular, BC enables trust in information by using Distributed Ledger Technology (DLT). With the power of smart contracts, BC enforces the obligations of counterparties that transact in a business process (BP) by programming the contracts as transactions. This paper studies the state of the art of BC technologies by (1) exploring their applications in BPM, with a focus on how BC provides trust for BPs over their lifecycles; (2) identifying the relations between BPM as the need and BC as the solution, with an assessment against BPM characteristics; (3) discussing the latest progress of critical BC technologies in BPM; and (4) identifying the challenges and research directions for future advancement of the domain. The main conclusions of our comprehensive review are: (1) the adoption of BC in BPM has attracted a great deal of attention, as evidenced by a rapidly growing number of relevant articles; (2) the paradigms of BPM over the Internet of Things (IoT) have shifted from persistent to transient, from static to dynamic, and from centralized to decentralized, and new enabling technologies are in high demand to fulfill emerging functional requirements (FRs) at the design, configuration, diagnosis, and evaluation stages of BP lifecycles; (3) BC has been intensively studied and proven a promising solution for assuring the trustworthiness of both business processes and their executions in decentralized BPM; and (4) most reported BC applications are at an early stage, and future research is needed to meet the technical challenges of interoperation, determination of trusted entities, confirmation of time-sensitive execution, and support of irreversibility.
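
    To make the smart-contract mechanism concrete, the following minimal Python sketch mimics how a contract could enforce the ordered obligations of counterparties in a BP; the class, party names, and tasks are hypothetical illustrations, not code from any system covered in the review.

        # Minimal, hypothetical sketch: a contract object enforcing the
        # ordered obligations of counterparties in a business process.
        # Real smart contracts would run as transactions on a BC platform.

        class ProcessContract:
            def __init__(self, obligations):
                # obligations: ordered (party, task) pairs the BP requires
                self.obligations = obligations
                self.ledger = []   # append-only record of accepted transactions
                self.step = 0      # index of the next obligation to fulfil

            def submit(self, party, task):
                """Accept a transaction only if it fulfils the next obligation."""
                if (party, task) != self.obligations[self.step]:
                    raise PermissionError(f"{party} may not perform '{task}' now")
                self.ledger.append({"step": self.step, "party": party, "task": task})
                self.step += 1

            def completed(self):
                return self.step == len(self.obligations)

        contract = ProcessContract([("buyer", "place order"),
                                    ("seller", "ship goods"),
                                    ("buyer", "pay invoice")])
        contract.submit("buyer", "place order")
        contract.submit("seller", "ship goods")
        contract.submit("buyer", "pay invoice")
        assert contract.completed()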

    Self-managed Workflows for Cyber-physical Systems

    Workflows are a well-established concept for describing business logic and processes on an abstract, implementation-agnostic level in web-based applications and enterprise application integration scenarios. Applying Business Process Management (BPM) technologies to increase autonomy and automate sequences of activities in Cyber-physical Systems (CPS) promises various advantages, including higher flexibility and simplified programming, more efficient resource usage, and easier integration and orchestration of CPS devices. However, traditional BPM notations and engines were not designed for the context of CPS, which raises new research questions arising from the close coupling of the virtual and physical worlds. Among these challenges are the interaction with complex compounds of heterogeneous sensors, actuators, things and humans; the detection and handling of errors in the physical world; and the synchronization of the cyber-physical process execution models. Novel factors related to the interaction with the physical world, including real-world obstacles, inconsistencies and inaccuracies, may jeopardize the successful execution of workflows in CPS and may lead to unanticipated situations. This thesis investigates properties and requirements of CPS relevant for the introduction of BPM technologies into cyber-physical domains. We discuss existing BPM systems and related work regarding the integration of sensors and actuators into workflows, the development of a Workflow Management System (WfMS) for CPS, and the synchronization of the virtual and physical process execution as part of self-* capabilities for WfMSes. Based on the identified research gap, we present concepts and prototypes for the development of a CPS WfMS with respect to all phases of the BPM lifecycle. First, we introduce a CPS workflow notation that supports the modelling of the interaction of complex sensors, actuators, humans, dynamic services and WfMSes at the business process level. In addition, the effects of the workflow execution can be specified in the form of goals defining success and error criteria for the execution of individual process steps. Along with that, we introduce the notion of Cyber-physical Consistency. Next, we present a system architecture for a corresponding WfMS (PROtEUS) to execute the modelled processes, also in distributed execution settings and with a focus on interactive process management. Subsequently, the integration of a cyber-physical feedback loop to increase the resilience of the process execution at runtime is discussed. Within this MAPE-K loop, sensor and context data are related to the effects of the process execution, deviations from expected behaviour are detected, and compensations are planned and executed. The execution of this feedback loop can be scaled depending on the required level of precision and consistency. Our implementation of the MAPE-K loop proves to be a general framework for adding self-* capabilities to WfMSes. The evaluation of our concepts within a smart home case study shows expected behaviour, reasonable execution times, reduced error rates and high coverage of the identified requirements, which makes our CPS WfMS a suitable system for introducing workflows on top of the systems, devices, things and applications of CPS.

    Table of contents:
    1. Introduction (Motivation; Research Issues; Scope & Contributions; Structure of the Thesis)
    2. Workflows and Cyber-physical Systems (Introduction; Two Motivating Examples; Business Process Management and Workflow Technologies; Cyber-physical Systems; Workflows in CPS; Requirements)
    3. Related Work (Introduction; Existing BPM Systems in Industry and Academia; Modelling of CPS Workflows; CPS Workflow Systems; Cyber-physical Synchronization; Self-* for BPM Systems; Retrofitting Frameworks for WfMSes; Conclusion & Deficits)
    4. Modelling of Cyber-physical Workflows with Consistency Style Sheets (Introduction; Workflow Metamodel; Knowledge Base; Dynamic Services; CPS-related Workflow Effects; Cyber-physical Consistency; Consistency Style Sheets; Tools for Modelling of CPS Workflows; Compatibility with Existing Business Process Notations)
    5. Architecture of a WfMS for Distributed CPS Workflows (Introduction; PROtEUS Process Execution System; Internet of Things Middleware; Dynamic Service Selection via Semantic Access Layer; Process Distribution; Ubiquitous Human Interaction; Towards a CPS WfMS Reference Architecture for Other Domains)
    6. Scalable Execution of Self-managed CPS Workflows (Introduction; MAPE-K Control Loops for Autonomous Workflows; Feedback Loop for Cyber-physical Consistency; Feedback Loop for Distributed Workflows; Consistency Levels, Scalability and Scalable Consistency; Self-managed Workflows; Adaptations and Meta-adaptations; Multiple Feedback Loops and Process Instances; Transactions and ACID for CPS Workflows; Runtime View on Cyber-physical Synchronization for Workflows; Applicability of Workflow Feedback Loops to Other CPS Domains; A Retrofitting Framework for Self-managed CPS WfMSes)
    7. Evaluation (Introduction; Hardware and Software; PROtEUS Base System; PROtEUS with Feedback Service; Feedback Service with Legacy WfMSes; Qualitative Discussion of Requirements and Additional CPS Aspects; Comparison with Related Work; Conclusion)
    8. Summary and Future Work (Summary and Conclusion; Advances of this Thesis; Contributions to the Research Area; Relevance; Open Questions; Future Work)
    Bibliography; Acronyms; List of Figures; List of Tables; List of Listings; Appendices
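
    As a rough illustration of such a MAPE-K feedback loop, the sketch below relates sensed context to the modelled effects (goals) of a process step, detects deviations, and plans compensations; all names and data structures are hypothetical and not the PROtEUS API.

        # Hypothetical MAPE-K loop: sensed context is compared against the
        # modelled effects (goals) of a process step; deviations trigger a
        # planned compensation executed through actuators.

        class MapeKLoop:
            def __init__(self, knowledge):
                self.knowledge = knowledge              # K: expected effects per step

            def monitor(self, sensors):
                # M: sample all sensors into a context snapshot
                return {name: read() for name, read in sensors.items()}

            def analyze(self, step, context):
                # A: find goal entries the physical world does not satisfy
                goal = self.knowledge[step]
                return {k: v for k, v in goal.items() if context.get(k) != v}

            def plan(self, deviations):
                # P: one compensating action per deviation
                return list(deviations.items())

            def execute(self, actions, actuators):
                # E: apply each compensation via the matching actuator
                for key, value in actions:
                    actuators[key](value)

        loop = MapeKLoop({"leave_home": {"door": "closed", "lights": "off"}})
        context = loop.monitor({"door": lambda: "open", "lights": lambda: "off"})
        deviations = loop.analyze("leave_home", context)
        loop.execute(loop.plan(deviations), {"door": lambda v: print("door ->", v)})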

    Business Process Management and Process Mining within a Real Business Environment: An Empirical Analysis of Event Logs Data in a Consulting Project

    This thesis explores organizations' attitudes toward the business processes that sustain them: from a near absence of structure, through functional organization, to the advent of Business Process Reengineering and of Business Process Management, which arose to overcome the limits and problems of the preceding model. Within the BPM lifecycle, the process mining methodology enables a level of process analysis based on event log data, i.e., the records of events belonging to all activities supported by an enterprise information system. Process mining can be seen as a natural bridge connecting process-based (but not data-driven) management disciplines with the new developments in business intelligence that can manage and manipulate the enormous amount of data available to companies (but that are not process-driven). The thesis describes the requirements and technologies that enable the discipline, as well as the three techniques it enables: process discovery, conformance checking, and process enhancement. Process mining was used as the main tool in a consulting project carried out by HSPI S.p.A. on behalf of a major Italian client, a provider of IT platforms and solutions. The project I took part in, described in this thesis, aims to support the organization in its plan to improve internal performance, and it made it possible to verify the applicability and the limits of process mining techniques. Finally, the appendix contains a paper I wrote that collects the applications of the discipline in real business contexts, drawing data and information from working papers, business cases, and direct channels. For its validity and completeness, this document has been published on the website of the IEEE Task Force on Process Mining.
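
    As an illustration of what process discovery computes from event log data, the following sketch, which is not the tooling used in the project, derives a directly-follows graph from a small log.

        # Illustrative process discovery step: derive a directly-follows
        # graph (with frequencies) from an event log of (case, activity)
        # records that are already ordered by timestamp within each case.

        from collections import defaultdict

        event_log = [
            ("c1", "register"), ("c1", "check"), ("c1", "approve"),
            ("c2", "register"), ("c2", "check"), ("c2", "reject"),
        ]

        traces = defaultdict(list)
        for case, activity in event_log:
            traces[case].append(activity)

        dfg = defaultdict(int)
        for activities in traces.values():
            for a, b in zip(activities, activities[1:]):
                dfg[(a, b)] += 1

        for (a, b), count in sorted(dfg.items()):
            print(f"{a} -> {b}: {count}")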

    Towards Practical Access Control and Usage Control on the Cloud using Trusted Hardware

    Cloud-based platforms have become the principal way to store, share, and synchronize files online. For individuals and organizations alike, cloud storage not only provides resource scalability and on-demand access at a low cost, but also eliminates the necessity of provisioning and maintaining complex hardware installations. Unfortunately, because cloud-based platforms are frequent victims of data breaches and unauthorized disclosures, data protection requires both access control and usage control to manage user authorization and regulate future data use. Encryption can ensure data security against unauthorized parties, but it complicates file sharing, which now requires distributing keys to authorized users and a mechanism that prevents revoked users from accessing or modifying sensitive content. Further, as user data is stored and processed on remote machines, usage control in a distributed setting requires incorporating the local environmental context at policy evaluation, as well as tamper-proof and non-bypassable enforcement. Existing cryptographic solutions either require server-side coordination, offer limited flexibility in data sharing, or incur significant re-encryption overheads on user revocation. This combination of issues is ill-suited to large-scale distributed environments, where there are many users, user membership and access privileges change dynamically, and resources are shared across organizational domains. Thus, developing a robust security and privacy solution for the cloud requires: fine-grained access control to associate the largest set of users and resources with variable granularity, scalable administration costs when managing policies and access rights, and cross-domain policy enforcement. To address the above challenges, this dissertation proposes a practical security solution that relies solely on commodity trusted hardware to ensure confidentiality and integrity throughout the data lifecycle. The aim is to maintain complete user ownership against external hackers and malicious service providers, without losing the scalability or availability benefits of cloud storage. Furthermore, we develop a principled approach that is: (i) portable across storage platforms without requiring any server-side support or modifications, (ii) flexible in allowing users to selectively share their data using fine-grained access control, and (iii) performant, imposing only modest overheads on standard user workloads. Essentially, our system must be client-side and provide end-to-end data protection and secure sharing without significant degradation in performance or user experience. We introduce NeXUS, a privacy-preserving filesystem that enables cryptographic protection and secure file sharing on existing network-based storage services. NeXUS protects the confidentiality and integrity of file content, as well as file and directory names, while mitigating rollback attacks on the filesystem hierarchy. We also introduce Joplin, a secure access control and usage control system that provides practical attribute-based sharing with decentralized policy administration, including efficient revocation, multi-domain policies, secure user delegation, and mandatory audit logging. Both systems leverage trusted hardware to prevent the leakage of sensitive material such as encryption keys and access control policies; they are completely client-side, easy to install and use, and can be readily deployed across remote storage platforms without requiring any server-side changes or a trusted intermediary. We developed prototypes of NeXUS and Joplin and evaluated their respective overheads in isolation and within a real-world environment. Results show that both prototypes introduce modest overheads on interactive workloads and achieve portability across storage platforms, including Dropbox and AFS. Together, NeXUS and Joplin demonstrate that a client-side solution employing trusted hardware such as Intel SGX can effectively protect remotely stored data on existing file sharing services.
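
    The attribute-based sharing idea can be sketched as follows; this is a simplified, hypothetical model in Python, not Joplin's actual implementation, and it deliberately omits the trusted-hardware enforcement and cryptography.

        # Hypothetical attribute-based access control with revocation, in
        # the spirit of the Joplin design sketched above.

        from dataclasses import dataclass, field

        @dataclass
        class Policy:
            required_attrs: set     # attributes a user must hold
            actions: set            # actions the policy grants

        @dataclass
        class Domain:
            user_attrs: dict = field(default_factory=dict)  # user -> attributes
            revoked: set = field(default_factory=set)

            def grant(self, user, attrs):
                self.user_attrs.setdefault(user, set()).update(attrs)

            def revoke(self, user):
                self.revoked.add(user)   # revocation without re-encryption

            def check(self, user, action, policy):
                if user in self.revoked:
                    return False
                attrs = self.user_attrs.get(user, set())
                return policy.required_attrs <= attrs and action in policy.actions

        domain = Domain()
        domain.grant("alice", {"physician", "cardiology"})
        policy = Policy(required_attrs={"physician"}, actions={"read", "write"})
        print(domain.check("alice", "read", policy))   # True
        domain.revoke("alice")
        print(domain.check("alice", "read", policy))   # False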

    Data Management Architecture for Service-oriented Maritime Testbeds

    In recent years, numerous new approaches that rely on data-intensive methods have been developed for maritime assistance systems, leading to a compelling need for more elaborate verification and validation procedures. Modern testbeds that can meet these demands are often developed separately from the system itself and provided as generically usable services. However, the joint usage of such testbeds by multiple stakeholders from research and industry confronts them with various challenges in terms of data management: data control and protection are required to preserve possible competitive advantages or to comply with legal framework conditions, and the resulting decentralization in data management complicates collaboration, especially in the joint processing and analysis of testbed data. In this paper, we present a decentralized software system that addresses these challenges by modelling the interrelationships between the stakeholders in a data space, taking their various interests into account. A modular data management architecture enables the organization of a shared testbed data basis, the support of verification and validation processes, and the evaluation of data streams. This is achieved with a workflow model for mapping complex and distributed data processing steps. We demonstrate the applicability of the system in an application scenario for the development of a maritime assistance system.
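
    The workflow model for mapping distributed data processing steps can be pictured with a small sketch; the step functions and record fields below are hypothetical, not the paper's architecture.

        # Hypothetical workflow model for distributed data processing:
        # each step is a function over stream records, and the engine
        # chains the steps; in a testbed, steps could run on different
        # stakeholders' nodes.

        def anonymize(record):
            return {**record, "vessel_id": "***"}          # data protection step

        def evaluate(record):
            return {**record, "valid": record["speed"] <= 30.0}

        def run(workflow, stream):
            for record in stream:
                for step in workflow:
                    record = step(record)
                yield record

        stream = [{"vessel_id": "DE-1234", "speed": 12.5},
                  {"vessel_id": "DK-7789", "speed": 33.1}]
        for result in run([anonymize, evaluate], stream):
            print(result)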

    Access and information flow control to secure mobile web service compositions in resource-constrained environments

    The growing use of mobile web services, such as electronic health record systems and applications like Twitter and Facebook, has increased interest in robust mechanisms for ensuring the security of such information-sharing services. Common security mechanisms such as access control and information flow control are either restrictive or weak, in that they prevent applications from sharing data usefully and/or allow private information to leak when used independently. Typically, when services are composed there is a resource that some or all of the services involved in the composition need to share. However, during service composition security problems arise because the resulting service is made up of different services from different security domains. A key issue that arises, and that we address in this thesis, is that of enforcing secure information flow control during service composition to prevent illegal access and propagation of information between the participating services. This thesis describes a model that combines access control and information flow control in one framework. We specifically consider a case study of an e-health service application and examine how constraints such as location and context dependencies impact authentication and authorization. Furthermore, we consider how data-sharing applications such as the e-health service application handle issues of unauthorized users and insecure propagation of information in resource-constrained environments. Our framework addresses this issue of illegitimate information access and propagation by making use of program dependence graphs (PDGs). Program dependence graphs use path conditions as necessary conditions for secure information flow control. The advantage of this approach to securing information sharing is that information is only propagated if the criteria for data sharing are verified. Our solution offers good performance and fast authentication while taking bandwidth limitations into account. A security analysis shows the theoretical improvements our scheme offers. The results obtained confirm that the framework accommodates the CIA triad (the confidentiality, integrity and availability model designed to guide information security policies) and can be used to motivate further research work in this field.
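
    The role of path conditions in a PDG can be illustrated with a small sketch: information may flow from a source to a sink only if the conditions along some dependence path hold in the current context. The graph, conditions, and context below are hypothetical examples, not the thesis's framework.

        # Hypothetical PDG with path conditions: each edge carries a
        # condition, and information may flow from source to sink only if
        # every condition along some dependence path holds in the context.

        pdg = {  # node -> [(successor, condition), ...]
            "patient_record": [("billing", lambda ctx: ctx["role"] == "clerk"),
                               ("diagnosis", lambda ctx: ctx["role"] == "doctor")],
            "diagnosis": [("report", lambda ctx: ctx["consent"])],
        }

        def flows(pdg, source, sink, ctx, seen=frozenset()):
            """True if information may legally reach sink from source."""
            if source == sink:
                return True
            return any(flows(pdg, succ, sink, ctx, seen | {source})
                       for succ, cond in pdg.get(source, [])
                       if succ not in seen and cond(ctx))

        print(flows(pdg, "patient_record", "report",
                    {"role": "doctor", "consent": True}))   # True
        print(flows(pdg, "patient_record", "report",
                    {"role": "clerk", "consent": True}))    # False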

    Decentralized Orchestration of Open Services: Achieving High Scalability and Reliability with Continuation-Passing Messaging

    The papers of this thesis are not available in Munin.
    Paper I: Yu, W., Haque, A. A. M.: "Decentralised web-services orchestration with continuation-passing messaging". Available in International Journal of Web and Grid Services 2011, 7(3):304-330.
    Paper II: Haque, A. A. M., Yu, W.: "Peer-to-peer orchestration of web mashups". Available in International Journal of Adaptive, Resilient and Autonomic Systems 2014, 5(3):40-60.
    Paper V: Haque, A. A. M., Yu, W.: "Decentralized and reliable orchestration of open services". In: Service Computation 2014. International Academy, Research and Industry Association (IARIA) 2014. ISBN 978-1-61208-337-7.
    An ever-increasing number of web applications are providing open services to a wide range of applications. While traditional centralized approaches to service orchestration are successful for enterprise service-oriented systems, they are subject to serious limitations for orchestrating the wider range of open services. Dealing with these limitations calls for decentralized approaches, but these are themselves faced with a number of challenges, including the possible loss of dynamic run-time state that is spread over the distributed environment. This thesis presents a fully decentralized approach to the orchestration of open services. Our flow-aware dynamic replication scheme supports exception handling, tolerates failures of orchestration agents, and recovers from failure situations. During execution, open services are conducted by a network of orchestration agents that collectively orchestrate them using continuation-passing messaging. Our performance study shows that decentralized orchestration improves the scalability and enhances the reliability of open services. Our orchestration approach has a clear performance advantage over traditional centralized orchestration, as well as over the current practice of web mashups, where application servers themselves conduct the execution of the composition of open web services. Finally, our empirical study presents the overhead of the replication approach for service orchestration.
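
    The core idea of continuation-passing messaging can be sketched in a few lines: each orchestration agent executes its own step and forwards the rest of the workflow (the continuation) in the message, so no central orchestrator holds the run-time state. The agent registry and plan format below are illustrative assumptions, not the thesis's implementation.

        # Hypothetical continuation-passing messaging: each agent runs one
        # step of the plan and forwards the remaining plan (the
        # continuation) together with the intermediate data.

        def fetch(data):     return data + ["fetched"]
        def transform(data): return data + ["transformed"]
        def publish(data):   return data + ["published"]

        AGENTS = {"fetch": fetch, "transform": transform, "publish": publish}

        def handle(message):
            step, *continuation = message["plan"]
            result = AGENTS[step](message["data"])
            if continuation:   # hand the continuation to the next agent
                return handle({"plan": continuation, "data": result})
            return result      # workflow finished

        print(handle({"plan": ["fetch", "transform", "publish"], "data": []}))
        # ['fetched', 'transformed', 'published']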

    Enforcement of entailment constraints in distributed service-based business processes

    Context: A distributed business process is executed in a distributed computing environment. The service-oriented architecture (SOA) paradigm is a popular option for the integration of software services and the execution of distributed business processes. Entailment constraints, such as mutual exclusion and binding constraints, are important means to control process execution. Mutually exclusive tasks result from the division of powerful rights and responsibilities to prevent fraud and abuse. In contrast, binding constraints define that a subject who performed one task must also perform the corresponding bound task(s).
    Objective: We aim to provide a model-driven approach for the specification and enforcement of task-based entailment constraints in distributed service-based business processes.
    Method: Based on a generic metamodel, we define a domain-specific language (DSL) that maps the different modeling-level artifacts to the implementation level. The DSL integrates elements from role-based access control (RBAC) with the tasks that are performed in a business process. Process definitions are annotated using the DSL, and our software platform uses automated model transformations to produce executable WS-BPEL specifications that enforce the entailment constraints. We evaluate the impact of constraint enforcement on runtime performance for five service-based processes selected from the existing literature.
    Results: Our evaluation demonstrates that the approach correctly enforces task-based entailment constraints at runtime. The performance experiments illustrate that the runtime enforcement operates with an overhead that scales well up to the order of several tens of thousands of logged invocations. Using our DSL annotations, the user-defined process definition remains declarative and free of security enforcement code.
    Conclusion: Our approach decouples the concerns of (non-technical) domain experts from the technical details of entailment constraint enforcement. The developed framework integrates seamlessly with WS-BPEL and the Web services technology stack. Our prototype implementation shows the feasibility of the approach, and the evaluation points to future work and further performance optimizations.
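
    A minimal sketch of runtime enforcement of mutual-exclusion and binding constraints is given below; it is a plain Python illustration of the constraint semantics, not the paper's WS-BPEL platform, and the task and subject names are hypothetical.

        # Illustrative runtime enforcement of task-based entailment
        # constraints: a mutual-exclusion constraint forbids one subject
        # from performing both tasks of a pair; a binding constraint
        # requires the same subject for both tasks of a pair.

        class EntailmentEnforcer:
            def __init__(self, mutual_exclusions, bindings):
                self.mutual_exclusions = mutual_exclusions  # set of task pairs
                self.bindings = bindings                    # set of task pairs
                self.performed = {}                         # task -> subject

            def execute(self, subject, task):
                for a, b in self.mutual_exclusions:
                    other = b if task == a else a if task == b else None
                    if other and self.performed.get(other) == subject:
                        raise PermissionError(f"{subject}: '{task}' excludes '{other}'")
                for a, b in self.bindings:
                    other = b if task == a else a if task == b else None
                    if other in self.performed and self.performed[other] != subject:
                        raise PermissionError(f"'{task}' is bound to {self.performed[other]}")
                self.performed[task] = subject

        enforcer = EntailmentEnforcer(mutual_exclusions={("prepare", "approve")},
                                      bindings={("prepare", "revise")})
        enforcer.execute("alice", "prepare")
        enforcer.execute("bob", "approve")      # allowed: different subject
        enforcer.execute("alice", "revise")     # allowed: binding satisfied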