3,151 research outputs found

    Controlling Internet of Things devices with Read-Write Linked Data Interfaces using Data-Driven Workflows

    We present a first rough prototype for our approach to specifying and executing agents on Read-Write Linked Data, where agents are given as data-driven workflows; specifically, we build on Guard-Stage-Milestone workflows. We showcase our approach in a setting from the Internet of Things, namely building automation.
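
    To make the idea concrete, the following is a minimal, hypothetical sketch (not the authors' prototype) of a single Guard-Stage-Milestone stage driven by the RDF state of Read-Write Linked Data resources: the guard and milestone are evaluated against the current state obtained via HTTP GET, and the stage body writes the desired device state back via HTTP PUT. The resource URIs, vocabulary and the use of rdflib/requests are assumptions made for the example.

```python
# Hypothetical sketch of one Guard-Stage-Milestone stage on Read-Write
# Linked Data. Resource URIs, the vocabulary and the chosen libraries are
# assumptions for illustration only.
import requests
from rdflib import Graph, Literal, Namespace, URIRef

EX = Namespace("http://example.org/building#")
SENSOR = "http://example.org/building/room1/temperature"   # assumed resource
ACTUATOR = "http://example.org/building/room1/heater"      # assumed resource

def read_state(uri: str) -> Graph:
    """GET a Linked Data resource and parse it into an RDF graph."""
    response = requests.get(uri, headers={"Accept": "text/turtle"})
    return Graph().parse(data=response.text, format="turtle")

def guard_satisfied(state: Graph) -> bool:
    """Guard: the stage opens when the room is too cold (threshold assumed)."""
    value = state.value(URIRef(SENSOR), EX.celsius)
    return value is not None and float(value) < 19.0

def milestone_reached(state: Graph) -> bool:
    """Milestone: the stage closes once the heater reports it is on."""
    return state.value(URIRef(ACTUATOR), EX.powerState) == Literal("on")

def run_stage() -> bool:
    if guard_satisfied(read_state(SENSOR)):
        # Stage body: write the desired device state back with HTTP PUT.
        body = f"<{ACTUATOR}> <{EX.powerState}> \"on\" ."
        requests.put(ACTUATOR, data=body,
                     headers={"Content-Type": "text/turtle"})
    return milestone_reached(read_state(ACTUATOR))
```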

    Specifying and Executing User Agents in an Environment of Reasoning and RESTful Systems Using the Guard-Stage-Milestone Approach

    For Read-Write Linked Data, an environment of reasoning and RESTful interaction, we investigate the use of the Guard-Stage-Milestone approach for specifying and executing user agents. We present an ontology to specify user agents. Moreover, we give operational semantics to the ontology in a rule language that allows for executing user agents on Read-Write Linked Data. We evaluate our approach formally and with respect to performance. Our work shows that, despite the assumptions of this environment differing from those of traditional workflow management systems, the Guard-Stage-Milestone approach can be transferred and successfully applied to the Web of Read-Write Linked Data.
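
    The rule-style operational semantics can be pictured as condition-action rules over the agent's RDF view of the world. Below is a hedged sketch that assumes guards and milestones are expressed as SPARQL ASK queries; the stage lifecycle states and query vocabulary are illustrative and not the paper's ontology.

```python
# Hedged sketch of rule-style GSM semantics: stages move from INACTIVE to
# OPEN when their guard holds and from OPEN to CLOSED when their milestone
# holds. Guards and milestones are modelled as SPARQL ASK queries over the
# agent's view of the Linked Data state; the vocabulary is illustrative.
from dataclasses import dataclass
from rdflib import Graph

@dataclass
class Stage:
    name: str
    guard: str        # SPARQL ASK query
    milestone: str    # SPARQL ASK query
    state: str = "INACTIVE"

def ask(graph: Graph, query: str) -> bool:
    return bool(graph.query(query).askAnswer)

def step(graph: Graph, stages: list) -> None:
    """One rule-application step: apply every enabled transition once."""
    for stage in stages:
        if stage.state == "INACTIVE" and ask(graph, stage.guard):
            stage.state = "OPEN"       # rule: guard satisfied -> open stage
        elif stage.state == "OPEN" and ask(graph, stage.milestone):
            stage.state = "CLOSED"     # rule: milestone reached -> close stage

# Example stage with invented vocabulary.
heating = Stage(
    name="HeatRoom",
    guard="ASK { ?room <http://example.org/#celsius> ?t . FILTER(?t < 19) }",
    milestone="ASK { ?h <http://example.org/#powerState> \"on\" }",
)
```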

    Context-aware management of multi-device services in the home

    More and more functionally complex digital consumer devices are becoming embedded or scattered throughout the home, networked in a piecemeal fashion and supporting increasingly ubiquitous device services. For example, an activity such as watching a home video may require video to be streamed throughout the home and multiple devices to be orchestrated and coordinated, involving multiple user interactions via multiple remote controls. The main aim of this project is to research and develop a service-oriented multi-device framework to support user activities in the home, easing the operation and management of multi-device services through reducing explicit user interaction. To do this, user contexts, i.e., when and where a user activity takes place, and device orchestration using pre-defined rules are utilised. A service-oriented device framework has been designed in four phases. First, a simple framework is designed to utilise OSGi and UPnP functionality in order to orchestrate simple device operation involving device discovery and device interoperability. Second, the framework is enhanced by adding a dynamic user interface portal to access virtual orchestrated services generated by combining multiple devices. Third, the framework supports context-based device interaction and context-based task initiation; context-aware functionality combines information received from several sources, such as sensors that sense the physical and user environment, user-device interaction, and user contexts derived from calendars. Finally, the framework supports a smart home SOA lifecycle using pre-defined rules, a rule engine and workflows.
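
    The rule-based orchestration can be illustrated with a small, hypothetical example: a pre-defined rule combines user context (location and calendar activity) and, when it holds, triggers the actions of several devices as a single virtual service. Device names, context sources and the rule format are assumptions for the sketch, not the framework's actual API.

```python
# Illustrative sketch of a pre-defined, context-based rule orchestrating
# several devices for one user activity. Device names, context sources and
# the rule format are invented for the example.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Context:
    user_location: str        # e.g. derived from presence sensors
    calendar_activity: str    # e.g. derived from the user's calendar

@dataclass
class Rule:
    condition: Callable[[Context], bool]
    actions: List[Callable[[], None]]    # one action per orchestrated device

def orchestrate(ctx: Context, rules: List[Rule]) -> None:
    """Fire the device actions of every rule whose context condition holds."""
    for rule in rules:
        if rule.condition(ctx):
            for action in rule.actions:
                action()

# When the calendar says "home video" and the user is in the living room,
# start the media stream, switch the TV input and dim the lights -- three
# remote-control interactions replaced by one context-triggered rule.
movie_rule = Rule(
    condition=lambda c: c.calendar_activity == "home video"
    and c.user_location == "living room",
    actions=[
        lambda: print("media server: start stream"),
        lambda: print("TV: switch to streaming input"),
        lambda: print("lights: dim to 20%"),
    ],
)

orchestrate(Context("living room", "home video"), [movie_rule])
```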

    An ontology supporting planning, analysis, and simulation of evolving Digital Ecosystems

    This paper introduces the PERICLES model-driven approach to the long-term preservation of digital objects in evolving ecosystems. It describes the Digital Ecosystem Model (DEM), an ontology for modelling such complex digital entities (DEs), and the EcoBuilder tool, which supports scenario experts in modelling the aspects of interest of their DEs with the DEM for further investigation and maintenance.
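
    At its core, such an ecosystem model captures digital entities and the dependencies between them, so that the impact of a change can be traced. The following minimal sketch illustrates that idea with an invented dependency graph; the DEM ontology itself is considerably richer.

```python
# Minimal sketch: digital entities and their dependencies form a graph, and
# a change to one entity is propagated to everything that (transitively)
# depends on it. Entity names and relations are invented; the DEM ontology
# is far richer.
from collections import defaultdict

depends_on = defaultdict(set)    # entity -> entities it depends on
dependents = defaultdict(set)    # inverse relation, used for impact analysis

def add_dependency(entity: str, dependency: str) -> None:
    depends_on[entity].add(dependency)
    dependents[dependency].add(entity)

def affected_by_change(changed: str) -> set:
    """Transitively collect every entity impacted by a change."""
    impacted, stack = set(), [changed]
    while stack:
        for dependant in dependents[stack.pop()]:
            if dependant not in impacted:
                impacted.add(dependant)
                stack.append(dependant)
    return impacted

add_dependency("RenderingService", "VideoCodec v1")
add_dependency("ExhibitionCatalogue", "RenderingService")
print(affected_by_change("VideoCodec v1"))
# {'RenderingService', 'ExhibitionCatalogue'}
```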

    Linked Research on the Decentralised Web

    This thesis is about research communication in the context of the Web. I analyse literature which reveals how researchers are making use of Web technologies for knowledge dissemination, as well as how individuals are disempowered by the centralisation of certain systems, such as academic publishing platforms and social media. I share my findings on the feasibility of a decentralised and interoperable information space where researchers can control their identifiers whilst fulfilling the core functions of scientific communication: registration, awareness, certification, and archiving. The contemporary research communication paradigm operates under a diverse set of sociotechnical constraints, which influence how units of research information and personal data are created and exchanged. Economic forces and non-interoperable system designs mean that researcher identifiers and research contributions are largely shaped and controlled by third-party entities; participation requires the use of proprietary systems. From a technical standpoint, this thesis takes a deep look at the semantic structure of research artifacts, and how they can be stored, linked and shared in a way that is controlled by individual researchers, or delegated to trusted parties. Further, I find that the ecosystem was lacking a technical Web standard able to fulfill the awareness function of research communication. Thus, I contribute a new communication protocol, Linked Data Notifications (published as a W3C Recommendation), which enables decentralised notifications on the Web, and provide implementations pertinent to the academic publishing use case. So far we have seen decentralised notifications applied in research dissemination and collaboration scenarios, as well as in archival activities and scientific experiments. Another core contribution of this work is a Web standards-based implementation of a client-side tool, dokieli, for decentralised article publishing, annotations and social interactions. dokieli can be used to fulfill the scholarly functions of registration, awareness, certification, and archiving, all in a decentralised manner, returning control of research contributions and discourse to individual researchers. The overarching conclusion of the thesis is that Web technologies can be used to create a fully functioning ecosystem for research communication. Using the framework of Web architecture, and loosely coupling the four functions, an accessible and inclusive ecosystem can be realised whereby users are able to use and switch between interoperable applications without interfering with existing data. Technical solutions alone do not suffice, of course, so this thesis also takes into account the need for a change in the traditional mode of thinking amongst scholars, and presents the Linked Research initiative as an ongoing effort toward researcher autonomy in a social system, and universal access to human- and machine-readable information. Outcomes of this outreach work so far include an increase in the number of individuals self-hosting their research artifacts, workshops publishing accessible proceedings on the Web, in-the-wild experiments with open and public peer review, and semantic graphs of contributions to conference proceedings and journals (the Linked Open Research Cloud).
Some of the future challenges include addressing the social implications of decentralised Web publishing, as well as the design of ethically grounded interoperable mechanisms; cultivating privacy-aware information spaces; personal or community-controlled on-demand archiving services; and further design of decentralised applications that are aware of the core functions of scientific communication.
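
    As an illustration of the awareness contribution, the following is a minimal sender-side sketch of the Linked Data Notifications pattern: discover the inbox advertised by a target resource, then POST a JSON-LD notification to it. The URLs and notification payload are invented, and receiver/consumer behaviour and error handling are omitted.

```python
# Sender-side sketch of Linked Data Notifications: discover the inbox a
# resource advertises (body triple or Link header), then POST a JSON-LD
# notification to it. URLs and payload are invented for the example.
import json

import requests
from rdflib import Graph, URIRef

LDP_INBOX = "http://www.w3.org/ns/ldp#inbox"

def discover_inbox(target_url: str) -> str:
    """Return the inbox URL advertised by the target resource."""
    response = requests.get(target_url, headers={"Accept": "text/turtle"})
    graph = Graph().parse(data=response.text, format="turtle")
    inbox = graph.value(URIRef(target_url), URIRef(LDP_INBOX))
    if inbox is None:
        # Fall back to the HTTP Link header, if the server provides one.
        inbox = response.links.get(LDP_INBOX, {}).get("url")
    return str(inbox)

def send_notification(target_url: str, payload: dict) -> int:
    inbox = discover_inbox(target_url)
    response = requests.post(inbox, data=json.dumps(payload),
                             headers={"Content-Type": "application/ld+json"})
    return response.status_code   # typically 201 Created on success

announcement = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Announce",
    "object": "https://example.org/reviews/1",      # assumed URL
    "target": "https://example.org/articles/42",    # assumed URL
}
# send_notification("https://example.org/articles/42", announcement)
```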

    Self-managed Workflows for Cyber-physical Systems

    Workflows are a well-established concept for describing business logic and processes in web-based applications and enterprise application integration scenarios on an abstract, implementation-agnostic level. Applying Business Process Management (BPM) technologies to increase autonomy and automate sequences of activities in Cyber-physical Systems (CPS) promises various advantages, including higher flexibility and simplified programming, more efficient resource usage, and easier integration and orchestration of CPS devices. However, traditional BPM notations and engines have not been designed for use in the context of CPS, which raises new research questions arising from the close coupling of the virtual and physical worlds. Among these challenges are the interaction with complex compounds of heterogeneous sensors, actuators, things and humans; the detection and handling of errors in the physical world; and the synchronization of the cyber-physical process execution models. Novel factors related to the interaction with the physical world, including real-world obstacles, inconsistencies and inaccuracies, may jeopardize the successful execution of workflows in CPS and may lead to unanticipated situations. This thesis investigates properties and requirements of CPS relevant for the introduction of BPM technologies into cyber-physical domains. We discuss existing BPM systems and related work regarding the integration of sensors and actuators into workflows, the development of a Workflow Management System (WfMS) for CPS, and the synchronization of the virtual and physical process execution as part of self-* capabilities for WfMSes. Based on the identified research gap, we present concepts and prototypes for the development of a CPS WfMS covering all phases of the BPM lifecycle. First, we introduce a CPS workflow notation that supports modelling the interaction of complex sensors, actuators, humans, dynamic services and WfMSes on the business process level. In addition, the effects of the workflow execution can be specified in the form of goals defining success and error criteria for the execution of individual process steps. Along with that, we introduce the notion of Cyber-physical Consistency. Next, we present a system architecture for a corresponding WfMS (PROtEUS) to execute the modelled processes, also in distributed execution settings and with a focus on interactive process management. Subsequently, the integration of a cyber-physical feedback loop to increase the resilience of the process execution at runtime is discussed. Within this MAPE-K loop, sensor and context data are related to the effects of the process execution, deviations from expected behaviour are detected, and compensations are planned and executed. The execution of this feedback loop can be scaled depending on the required level of precision and consistency. Our implementation of the MAPE-K loop proves to be a general framework for adding self-* capabilities to WfMSes. The evaluation of our concepts within a smart home case study shows expected behaviour, reasonable execution times, reduced error rates and high coverage of the identified requirements, which makes our CPS WfMS a suitable system for introducing workflows on top of the systems, devices, things and applications of CPS.
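
    The cyber-physical feedback loop can be sketched as a single MAPE-K iteration: monitor a sensor, analyse the reading against the goal of a process step, and plan and execute a compensation if the expected physical effect has not occurred. The goal format, sensor access and compensation below are assumptions made for illustration, not the PROtEUS implementation.

```python
# Schematic single iteration of the MAPE-K loop: monitor a sensor, analyse
# the reading against the goal (expected physical effect) of a process step,
# plan and execute a compensation on deviation. The goal format, sensor
# access and compensation are assumptions, not the PROtEUS implementation.
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class Goal:
    """Expected effect of a process step in the physical world."""
    sensor: str
    predicate: Callable[[float], bool]    # e.g. temperature >= 21 degrees
    compensation: Callable[[], None]      # e.g. retry step, switch actuator

@dataclass
class Knowledge:
    """Shared knowledge: last known sensor and context values."""
    readings: Dict[str, float] = field(default_factory=dict)

def mape_k_iteration(goal: Goal, knowledge: Knowledge,
                     read_sensor: Callable[[str], float]) -> bool:
    knowledge.readings[goal.sensor] = read_sensor(goal.sensor)        # Monitor
    deviation = not goal.predicate(knowledge.readings[goal.sensor])   # Analyse
    if deviation:
        plan = goal.compensation                                      # Plan
        plan()                                                        # Execute
    return not deviation

# A "heat the room" step only counts as successful once the temperature
# sensor confirms its physical effect; otherwise a compensation is run.
goal = Goal(sensor="room1/temperature",
            predicate=lambda celsius: celsius >= 21.0,
            compensation=lambda: print("compensation: restart heater step"))
print(mape_k_iteration(goal, Knowledge(), read_sensor=lambda _: 18.5))  # False
```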

    HELIN LIbrary Consortium ILS Task Force Recommendation

    Final report and recommendation of the HELIN ILS task force to migrate the consortium to OCLC WorldShare Management Services.

    makeSense: Simplifying the Integration of Wireless Sensor Networks into Business Processes

    A wide gap exists between the state of the art in developing Wireless Sensor Network (WSN) software and current practices concerning the design, execution, and maintenance of business processes. WSN software is most often developed based on low-level OS abstractions, whereas business process development leverages high-level languages and tools. This state of affairs places WSNs at the fringe of industry. The makeSense system addresses this problem by simplifying the integration of WSNs into business processes. Developers use BPMN models extended with WSN-specific constructs to specify the application behavior across both traditional business process execution environments and the WSN itself, which is to be equipped with application-specific software. We compile these models into a high-level intermediate language—also directly usable by WSN developers—and then into OS-specific deployment-ready binaries. Key to this process is the notion of meta-abstraction, which we define to capture fundamental patterns of interaction with and within the WSN. The concrete realization of meta-abstractions is application-specific; developers tailor the system configuration by selecting concrete abstractions out of the existing codebase or by providing their own. Our evaluation of makeSense shows that i) users perceive our approach as a significant advance over the state of the art, providing evidence of the increased developer productivity when using makeSense; ii) in large-scale simulations, our prototype exhibits an acceptable system overhead and good scaling properties, demonstrating the general applicability of makeSense; and, iii) our prototype—including the complete tool-chain and underlying system support—sustains a real-world deployment where estimates by domain specialists indicate the potential for drastic reductions in the total cost of ownership compared to wired and conventional WSN-based solutions.
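
    The meta-abstraction idea can be illustrated as follows: a process step names an abstract interaction pattern with the WSN, and the toolchain binds it to one of several concrete implementations from a codebase before generating deployment-ready code. The pattern names and implementations in this sketch are invented for illustration and do not reflect makeSense's actual constructs.

```python
# Illustrative sketch of the meta-abstraction idea: a process step names an
# abstract WSN interaction pattern ("aggregate"), and the toolchain binds it
# to a concrete abstraction from the codebase before code generation. The
# pattern names and implementations are invented for the example.
from dataclasses import dataclass

@dataclass
class WsnStep:
    meta_abstraction: str    # abstract interaction pattern used in the model
    parameters: dict         # application-specific configuration

# Concrete abstractions registered in a (hypothetical) codebase.
CODEBASE = {
    "aggregate": {
        "tree-routing": "collect readings along a spanning tree",
        "gossip": "epidemic in-network aggregation",
    },
}

def bind(step: WsnStep, choice: str) -> str:
    """Select a concrete abstraction for the step's meta-abstraction."""
    implementations = CODEBASE[step.meta_abstraction]
    if choice not in implementations:
        raise ValueError(f"no concrete abstraction {choice!r} "
                         f"for {step.meta_abstraction!r}")
    return f"{step.meta_abstraction}/{choice}: {implementations[choice]}"

step = WsnStep("aggregate", {"sensor": "temperature", "period_s": 60})
print(bind(step, "tree-routing"))
```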

    Medical Device Interoperability With Provable Safety Properties

    Applications that can communicate with and control multiple medical devices have the potential to radically improve patient safety and the effectiveness of medical treatment. Medical device interoperability requires devices to have an open, standards-based interface that allows communication with any other device that implements the same interface. This will enable applications and functionality that can improve patient safety and outcomes. To build interoperable systems, we need to match up the capabilities of the medical devices with the needs of the application. An application that requires heart rate as an input and provides a control signal to an infusion pump requires a source of heart rate and a pump that will accept the control signal. We present a means for devices to describe their capabilities and a methodology for automatically checking an application’s device requirements against those capabilities. If such applications are going to be used for patient care, there needs to be convincing proof of their safety. The safety of a medical device is closely tied to its intended use and use environment. Medical device manufacturers create a hazard analysis of their device, in which they explore the hazards associated with its intended use. We describe hazard analysis for interoperable devices and how to derive system safety properties from these hazard analyses. The use environment of the application includes the application, connected devices, patient, and clinical workflow. The patient model is specific to each application and represents the patient’s response to treatment. We introduce the Clinical Application Modeling Language (CAML), based on Extended Finite State Machines, and use model checking to test safety properties from the hazard analysis against the parallel composition of the application, patient model, clinical workflow, and the device models of connected devices.
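
    A minimal sketch of the capability-matching step, under the assumption that capabilities and requirements can be reduced to sets of named data sources and control signals (the real interface descriptions are standards-based and far richer):

```python
# Sketch of capability matching: the application declares the data it needs
# and the control signals it emits, each device declares what it provides
# and accepts, and the check verifies that every requirement is covered.
# The capability vocabulary is invented for the example.
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class Device:
    name: str
    provides: Set[str] = field(default_factory=set)   # data it can source
    accepts: Set[str] = field(default_factory=set)    # control signals it accepts

@dataclass
class AppRequirements:
    inputs: Set[str]     # data the application needs
    outputs: Set[str]    # control signals the application sends

def compatible(app: AppRequirements, devices: List[Device]) -> bool:
    provided = set().union(*(d.provides for d in devices))
    accepted = set().union(*(d.accepts for d in devices))
    return app.inputs <= provided and app.outputs <= accepted

# The abstract's example: the app needs a heart-rate source and a pump that
# accepts an infusion control signal.
app = AppRequirements(inputs={"heart_rate"}, outputs={"infusion_rate"})
monitor = Device("patient monitor", provides={"heart_rate", "spo2"})
pump = Device("infusion pump", accepts={"infusion_rate"})
print(compatible(app, [monitor, pump]))   # True
```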