15 research outputs found

    Blockchain for Business Process Enactment: A Taxonomy and Systematic Literature Review

    Blockchain has been proposed to facilitate the enactment of interorganisational business processes. For such processes, blockchain can guarantee the enforcement of rules and the integrity of execution traces without the need for a centralised trusted party. However, the enactment of interorganisational processes poses manifold challenges. In this work, we ask what answers the research field offers in response to those challenges. To do so, we conduct a systematic literature review (SLR). As our guiding question, we investigate the guarantees and capabilities of blockchain-based enactment approaches. Based on the resulting empirical evidence, we develop a taxonomy for blockchain-based enactment. We find that a wide range of approaches support traceability and correctness; however, research focusing on flexibility and scalability remains nascent. For all challenges, we point towards future research opportunities. Comment: Preprint, accepted at BPM 2022, Blockchain Forum
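The two guarantees this review attributes to blockchain-based enactment, rule enforcement and trace integrity, can be sketched in a few lines. The following is a hypothetical, much-simplified illustration (the process model, roles, and task names are assumptions, not taken from the paper): a contract-like object rejects tasks that violate the modelled control flow and hash-chains every accepted step so any party can detect tampering.

```python
import hashlib

# Illustrative interorganisational process: which (state, task) pairs are
# allowed, which party may perform them, and the resulting state.
PROCESS_RULES = {
    ("start", "place_order"): ("buyer", "ordered"),
    ("ordered", "ship_goods"): ("seller", "shipped"),
    ("shipped", "confirm_receipt"): ("buyer", "done"),
}

class EnactmentContract:
    """Mimics a smart contract: enforces rules, hash-chains the trace."""

    def __init__(self):
        self.state = "start"
        self.trace = []          # append-only execution trace
        self.head = "genesis"    # head of the hash chain, like a block hash

    def execute(self, party, task):
        rule = PROCESS_RULES.get((self.state, task))
        if rule is None or rule[0] != party:
            raise PermissionError(f"{party} may not do {task!r} in state {self.state!r}")
        self.state = rule[1]
        entry = f"{self.head}|{party}|{task}"
        self.head = hashlib.sha256(entry.encode()).hexdigest()
        self.trace.append((party, task, self.head))
        return self.state
```

Because each trace entry's hash covers the previous head, altering any past step changes every later hash, which is how a blockchain makes execution traces tamper-evident without a central trusted party.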

    Blockchain Support for Collaborative Business Processes

    Blockchain technology provides basic building blocks to support the execution of collaborative business processes involving mutually untrusted parties in a decentralized environment. Several research proposals have demonstrated the feasibility of designing blockchain-based collaborative business processes using a high-level notation, such as the Business Process Model and Notation (BPMN), and thereon automatically generating the code artifacts required to execute these processes on a blockchain platform. In this paper, we present the conceptual foundations of model-driven approaches for blockchain-based collaborative process execution and we compare two concrete approaches, namely Caterpillar and Lorikeet.
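The model-driven idea behind tools like Caterpillar and Lorikeet can be sketched in miniature: a high-level process model is compiled into an executable artifact that enforces the modelled control flow. This is a hypothetical toy, not either tool's actual pipeline; real implementations emit Solidity contracts for Ethereum, whereas here the "generated artifact" is a plain Python closure and the invoice-handling model is an illustrative assumption.

```python
def compile_process(sequence_flows):
    """Compile a list of (source, target) sequence flows, as a BPMN model
    would provide, into an executable step function enforcing them."""
    successors = {}
    for src, dst in sequence_flows:
        successors.setdefault(src, set()).add(dst)

    def step(current, task):
        # Enforce the modelled control flow, as generated contract code would.
        if task not in successors.get(current, set()):
            raise ValueError(f"task {task!r} not enabled after {current!r}")
        return task

    return step

# Toy model: invoice handling between two organisations (illustrative names).
step = compile_process([
    ("start", "submit_invoice"),
    ("submit_invoice", "approve"),
    ("submit_invoice", "reject"),
    ("approve", "pay"),
])
```

The point of the model-driven approach is exactly this separation: analysts edit the high-level model, and the enforcement code is regenerated rather than hand-written.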

    Visualizing Provenance In A Supply chain Using Ethereum Blockchain

    Visualization is widely used in fields such as supply chain management whenever information must be communicated to general users. However, traditional systems have several limitations for visualizing such information: data is controlled by a single authority, so it is mutable and there is no guarantee that a system administrator will not alter it to achieve a desired result. Moreover, such systems are not transparent, and users have no access to the data flow. The main goal of this thesis was to visualize information stored on a blockchain in order to overcome these problems. All records in the system are saved on the blockchain, and data is pulled from the blockchain for visualization. To gain better insight, relevant studies on blockchain, supply chains, and visualization were reviewed. After identifying the gap in the literature, an architecture was proposed and used in the implementation. The implementation comprises a system on top of the Ethereum blockchain and a front end that allows users to interact with it. All information about products, and every transaction that has ever occurred in the system, is recorded on the blockchain. Data is then retrieved from the blockchain and used to visualize the provenance of products on the Google Maps API. After implementation, performance was evaluated to ensure the system can handle varying numbers of clients sending requests simultaneously; as expected, response times grew as the number of concurrent clients increased. The proposed solution fills the gap identified in the literature review: with provenance visualization, users can explore the previous owners and locations of a product in a trustworthy manner.
Future research can focus on data analysis that would allow organizations to make informed decisions about which popular products to sell.
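The retrieval step the thesis describes, pulling records off chain and turning them into an ordered path for a map, can be sketched as follows. To keep the example self-contained, the Ethereum contract is replaced by an in-memory list of records; the field names and products are illustrative assumptions. In the real system these would be contract events fetched through a client library such as web3.py and then handed to the Google Maps front end.

```python
# Stand-in for events read from the blockchain, in chain order.
CHAIN = [
    {"product": "P1", "owner": "Farm A",        "lat": 45.0, "lon": 9.0},
    {"product": "P2", "owner": "Farm B",        "lat": 46.1, "lon": 8.2},
    {"product": "P1", "owner": "Distributor C", "lat": 45.5, "lon": 9.3},
    {"product": "P1", "owner": "Store D",       "lat": 45.9, "lon": 9.6},
]

def provenance(product_id):
    """Return the ordered (owner, lat, lon) path for one product,
    ready to be plotted as markers on a map widget."""
    return [(r["owner"], r["lat"], r["lon"])
            for r in CHAIN if r["product"] == product_id]
```

Because the chain itself is append-only, the ordered list recovered this way is exactly the trustworthy ownership history the visualization is meant to convey.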

    Using Blockchain to Improve Collaborative Business Process Management: Systematic Literature Review

    Blockchain Technology (BCT) has emerged strongly and promises a genuine revolution in business, management, and organizational strategies related to the use of advanced software systems. BCT promotes a decentralized architecture for process management and for collaborative work between entities engaged together in a business process. This paper aims to identify existing proposals for improving any stage of business process management using BCT, since this technology could provide benefits for such management. To this end, the paper presents a systematic literature review in the area of Collaborative Business Processes (CBP) in the BCT domain, identifying opportunities and gaps for further research. The paper concludes that there is rapid and growing interest from public bodies, the scientific community, and the software industry in the opportunities BCT offers for managing CBP in a decentralized manner. However, although the topic is at an early stage, there are very promising lines of research and relevant open issues, but also a lack of scientific rigor in the validation processes of the different studies.

    A Framework for Verifiable and Auditable Collaborative Anomaly Detection

    Collaborative and Federated Learning are emerging approaches for managing cooperation among a group of agents solving Machine Learning tasks, with the goal of improving each agent's performance without disclosing any data. In this paper we present a novel algorithmic architecture that tackles this problem in the particular case of Anomaly Detection (the classification of rare events), a setting where typical applications often involve data containing sensitive information, but where the scarcity of anomalous examples encourages collaboration. We show how Random Forests can be used as a tool for developing accurate classifiers with an effective insight-sharing mechanism that does not break data integrity. Moreover, we explain how the new architecture can be readily integrated into a blockchain infrastructure to ensure verifiable and auditable execution of the algorithm. Furthermore, we discuss how this work may set the basis for a more general approach to designing collaborative ensemble-learning methods beyond the specific task and architecture discussed in this paper.
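The insight-sharing principle, share fitted models, never the data, can be illustrated with a minimal sketch. This is an assumption-laden stand-in, not the paper's method: the Random Forests are replaced by dependency-free interquartile-range threshold detectors, but the structure is the same, each agent fits on its private data, exports only the fitted detector, and the group classifies by majority vote.

```python
def fit_detector(private_data):
    """Fit a simple one-sided anomaly detector on an agent's private data.
    Only the returned function ever leaves the agent, never the data."""
    xs = sorted(private_data)
    q1, q3 = xs[len(xs) // 4], xs[(3 * len(xs)) // 4]
    hi = q3 + 1.5 * (q3 - q1)        # classic IQR anomaly bound
    return lambda x: x > hi

def ensemble_is_anomaly(detectors, x):
    """Majority vote over the shared detectors, mimicking ensemble voting."""
    votes = sum(d(x) for d in detectors)
    return votes * 2 > len(detectors)
```

The same pattern generalizes to tree ensembles: agents exchange trained trees instead of thresholds, which is what makes Random Forests a natural fit for the collaborative setting the paper studies.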

    Immersive Telepresence: A framework for training and rehearsal in a postdigital age


    Engineering Self-Adaptive Collective Processes for Cyber-Physical Ecosystems

    The pervasiveness of computing and networking is creating significant opportunities for building valuable socio-technical systems. However, the scale, density, heterogeneity, interdependence, and QoS constraints of many target systems pose severe operational and engineering challenges. Beyond individual smart devices, cyber-physical collectives can provide services or solve complex problems by leveraging a “system effect” while coordinating and adapting to context or environment change. Understanding and building systems exhibiting collective intelligence and autonomic capabilities represent a prominent research goal, partly covered, e.g., by the field of collective adaptive systems. Therefore, drawing inspiration from and building on the long-time research activity on coordination, multi-agent systems, autonomic/self-* systems, spatial computing, and especially on the recent aggregate computing paradigm, this thesis investigates concepts, methods, and tools for the engineering of possibly large-scale, heterogeneous ensembles of situated components that should be able to operate, adapt and self-organise in a decentralised fashion. The primary contribution of this thesis consists of four main parts. First, we define and implement an aggregate programming language (ScaFi), internal to the mainstream Scala programming language, for describing collective adaptive behaviour, based on field calculi. Second, we conceive of a “dynamic collective computation” abstraction, also called aggregate process, formalised by an extension to the field calculus, and implemented in ScaFi. Third, we characterise and provide a proof-of-concept implementation of a middleware for aggregate computing that enables the development of aggregate systems according to multiple architectural styles. 
Fourth, we apply and evaluate aggregate computing techniques in edge computing scenarios, and characterise a design pattern, called Self-organising Coordination Regions (SCR), that supports adjustable, decentralised decision-making and activity in dynamic environments.
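The kind of collective, self-organising behaviour ScaFi programs express can be conveyed by the classic example from the aggregate computing literature: the self-stabilising gradient field, where every device repeatedly recomputes its hop-count distance to a source using only neighbour values. The sketch below is a dependency-free Python simulation under assumed topology and synchronous rounds, not ScaFi code (ScaFi itself is a Scala-internal DSL).

```python
def gradient_round(values, neighbours, sources):
    """One synchronous round: sources hold 0, everyone else takes the
    minimum neighbour value plus one hop."""
    new = {}
    for device in values:
        if device in sources:
            new[device] = 0
        else:
            best = min((values[n] for n in neighbours[device]),
                       default=float("inf"))
            new[device] = best + 1
    return new

def run_gradient(neighbours, sources, rounds=10):
    """Iterate from an all-unknown field until (in practice) it stabilises."""
    values = {d: float("inf") for d in neighbours}
    for _ in range(rounds):
        values = gradient_round(values, neighbours, sources)
    return values
```

The key property, and the reason gradients recur throughout the field-calculus literature, is self-stabilisation: if the topology or the source set changes, repeating the same local rule makes the field converge to the new correct distances without any central coordination.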

    End-to-End Trust Fulfillment of Big Data Workflow Provisioning over Competing Clouds

    Cloud Computing has emerged as a promising and powerful paradigm for delivering data-intensive, high-performance computation, applications, and services over the Internet. Cloud Computing has enabled the implementation and success of Big Data, a relatively recent phenomenon consisting of the generation and analysis of abundant data from various sources. Accordingly, to satisfy the growing demands of Big Data storage, processing, and analytics, a large market has emerged for Cloud Service Providers, offering a myriad of resources, platforms, and infrastructures. The proliferation of these services often makes it difficult for consumers to select the most suitable and trustworthy provider to fulfill the requirements of building complex workflows and applications in a relatively short time. In this thesis, we first propose a quality specification model to support dual pre- and post-cloud workflow provisioning, consisting of service provider selection and workflow quality enforcement and adaptation. This model captures key properties of the quality of work at different stages of the Big Data value chain, enabling standardized quality specification, monitoring, and adaptation. Subsequently, we propose a two-dimensional trust-enabled framework to facilitate end-to-end Quality of Service (QoS) enforcement that: 1) automates cloud service provider selection for Big Data workflow processing, and 2) maintains the required QoS levels of Big Data workflows during runtime through dynamic orchestration using multi-model architecture-driven workflow monitoring, prediction, and adaptation. The trust-based automatic service provider selection scheme we propose in this thesis is comprehensive and adaptive, as it relies on a dynamic trust model to evaluate the QoS of a cloud provider prior to taking any selection decisions.
It is a multi-dimensional trust model for Big Data workflows over competing clouds that assesses the trustworthiness of cloud providers based on three trust levels: (1) the presence of up-to-date, verified cloud resource capabilities, (2) reputational evidence gathered from neighboring users, and (3) a recorded personal history of experiences with the cloud provider. The trust-based workflow orchestration scheme we propose aims to avoid performance degradation and cloud service interruption. Our workflow orchestration approach is based not only on automatic adaptation and reconfiguration supported by monitoring, but also on predicting cloud resource shortages, thus preventing performance degradation. We formalize the cloud resource orchestration process using a state machine that efficiently captures different dynamic properties of the cloud execution environment. In addition, we use a model checker to validate our monitoring model in terms of reachability, liveness, and safety properties. We evaluate both our automated service provider selection scheme and our cloud workflow orchestration, monitoring, and adaptation schemes on a workflow-enabled Big Data application. A set of scenarios was carefully chosen to evaluate the performance of the service provider selection and of the workflow monitoring and adaptation schemes we implemented. The results demonstrate that our service selection outperforms other selection strategies and ensures trustworthy service provider selection. The evaluation of automated workflow orchestration further shows that our model is self-adapting and self-configuring, reacts efficiently to changes, and adapts accordingly while enforcing the QoS of workflows.
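The three-level trust idea, verified capabilities, neighbour reputation, and personal history combined into one score that drives provider selection, can be sketched as a weighted sum. The weights and the provider figures below are illustrative assumptions, not values from the thesis, which also re-evaluates the scores dynamically at runtime.

```python
# Assumed weights over the thesis's three trust levels (sum to 1.0).
WEIGHTS = {"capability": 0.4, "reputation": 0.3, "history": 0.3}

def trust_score(provider):
    """Aggregate the three per-level scores (each in [0, 1]) into one value."""
    return sum(WEIGHTS[level] * provider[level] for level in WEIGHTS)

def select_provider(providers):
    """Pick the provider with the highest aggregate trust score, as the
    automatic selection scheme would before dispatching a workflow."""
    return max(providers, key=lambda name: trust_score(providers[name]))
```

In the full framework this selection is only the pre-provisioning half; the post-provisioning half keeps updating the history component from monitored QoS, so a provider that degrades at runtime loses trust for future selections.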