244 research outputs found

    A Mixed Reality Approach to 3D Interactive Prototyping for Participatory Design of Ambient Intelligence

    Get PDF
    Ambient Intelligence (AmI) is a multi-disciplinary approach aimed at enriching physical environments with a network of distributed devices in order to support humans in achieving their everyday goals. However, in current research and development, AmI is still largely treated as an engineering concern, with an undeveloped relationship to architecture. Yet architectural design substantially aims to address the same requirement, supporting people in carrying out their everyday activities, tasks and practices, through spatial strategies. Since these aims are common to AmI's objectives and purposes, we consider the possibility, and even the necessity, of investigating a design approach accessible to an architectural context. For end users, AmI is a new type of service. Designing and evaluating the AmI experience before resources are spent on the processes and technology needed to eventually run the service can save large amounts of time and money. It is therefore essential to create an environment in which designers can involve real people in trying out service design proposals as early as possible in the design process. Existing cases of stakeholder-engaged AmI design have primarily focused on engineering implementation and generally present only the final outcome to stakeholders for user evaluation. Researchers have been able to build AmI prototypes for design communication; however, most of these prototypes are built without involving stakeholders and architects in the conceptual design stage. Concepts designed solely by engineers may not be user-centric and may even contain safety risks. The key research question of this thesis is: "How can Ambient Intelligence be designed through a participatory process that involves stakeholders and prospective users?"
The thesis consists of the following components: 1) identification of a novel participatory design process for modelling AmI scenarios; 2) identification of the requirements for supporting the prototyping of AmI designs, resulting in a conceptual framework that both "lowers the floor" (i.e., makes it easier for designers to build AmI prototypes) and "raises the ceiling" (i.e., deepens the ability of stakeholders and end users to participate in the design process); 3) prototyping of an experimental Mixed Reality Modelling (MRM) platform that facilitates the participatory design of AmI and supports the identified requirements, design process and scenario prototyping; 4) a case study applying the MRM platform to the participatory design of a Smart Laser Cutting Workshop (LCW), used to evaluate the proposed MRM-based AmI design approach. The results show that the MRM-based participatory design approach can effectively support the design of AmI.

    Cross-display attention switching in mobile interaction with large displays

    Get PDF
    Mobile devices equipped with features (e.g., camera, network connectivity and media player) are increasingly being used for different tasks such as web browsing, document reading and photography. While the portability of mobile devices makes them desirable for pervasive access to information, their small screen real-estate often imposes restrictions on the amount of information that can be displayed and manipulated on them. On the other hand, large displays have become commonplace in many outdoor as well as indoor environments. While they provide an efficient way of presenting and disseminating information, they provide little support for digital interactivity or physical accessibility. Researchers argue that mobile phones provide an efficient and portable way of interacting with large displays, and the latter can overcome the limitations of the small screens of mobile devices by providing a larger presentation and interaction space. However, distributing user interface (UI) elements across a mobile device and a large display can cause switching of visual attention and that may affect task performance. This thesis specifically explores how the switching of visual attention across a handheld mobile device and a vertical large display can affect a single user's task performance during mobile interaction with large displays. It introduces a taxonomy based on the factors associated with the visual arrangement of Multi Display User Interfaces (MDUIs) that can influence visual attention switching during interaction with MDUIs. It presents an empirical analysis of the effects of different distributions of input and output across mobile and large displays on the user's task performance, subjective workload and preference in the multiple-widget selection task, and in visual search tasks with maps, texts and photos. 
Experimental results show that the selection of multiple widgets replicated on the mobile device as well as on the large display, versus those shown only on the large display, is faster despite the cost of initial attention switching in the former. On the other hand, a hybrid UI configuration where the visual output is distributed across the mobile and large displays is the worst, or equivalent to the worst, configuration in all the visual search tasks. A mobile device-controlled large display configuration performs best in the map search task and equal to best (i.e., tied with a mobile-only configuration) in text- and photo-search tasks.
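The design space the taxonomy describes can be pictured as combinations of where input is captured and where visual output is shown across the two displays. A minimal sketch of that idea, with hypothetical names (`Display`, `MDUIConfig`) not taken from the thesis:

```python
from dataclasses import dataclass
from enum import Enum

class Display(Enum):
    MOBILE = "mobile"   # handheld device
    LARGE = "large"     # vertical large display
    BOTH = "both"       # replicated/distributed across the two

@dataclass(frozen=True)
class MDUIConfig:
    """One point in the design space: where input is captured and
    where visual output is presented."""
    input_on: Display
    output_on: Display

    def output_is_distributed(self) -> bool:
        # The 'hybrid' case: visual output split across both displays.
        return self.output_on is Display.BOTH

# The kinds of configurations compared in the experiments, roughly:
mobile_controls_large = MDUIConfig(Display.MOBILE, Display.LARGE)
mobile_only = MDUIConfig(Display.MOBILE, Display.MOBILE)
hybrid = MDUIConfig(Display.MOBILE, Display.BOTH)

print(hybrid.output_is_distributed())        # True
print(mobile_only.output_is_distributed())   # False
```

Enumerating configurations this way makes it explicit that the hybrid case, the one found worst in the visual search tasks, is the only one that splits visual output.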

    Development of actuated Tangible User Interfaces: new interaction concepts and evaluation methods

    Get PDF
    Riedenklau E. Development of actuated Tangible User Interfaces: new interaction concepts and evaluation methods. Bielefeld: Universität Bielefeld; 2016. Making information understandable and literally graspable is the main goal of tangible interaction research. By giving digital data physical representations (Tangible User Interface Objects, or TUIOs), these data can be used and manipulated like everyday objects, with the users' natural manipulation skills. Such physical interaction is essentially uni-directional, directed from the user to the system, which limits the possible interaction patterns: the system has no means to actively support the physical interaction. Within the frame of tabletop tangible user interfaces, this problem has been addressed by the introduction of actuated TUIOs that are controllable by the system. In this thesis, we present the development of our own actuated TUIOs and address several interaction concepts we identified as research gaps in the literature on actuated Tangible User Interfaces (TUIs). Gestural interaction is a natural means for humans to communicate non-verbally with their hands, and TUIs should support it, since our hands are already heavily involved in the interaction; this has rarely been investigated in the literature. For a tangible social-network client application, we investigate two methods for collecting user-defined gestures that our system should interpret to trigger actions. Versatile systems often understand a wide palette of commands, and another approach to triggering actions is the use of menus. We explore the design space of menu metaphors used in TUIs and present our own actuated dial-based approach. Rich interaction modalities may support the understandability of the represented data and make interaction more appealing, but they also place high demands on real-time processing.
We highlight new research directions for integrated feature-rich and multi-modal interaction, such as graphical display, sound output, tactile feedback, our actuated menu, and automatically maintained relations between actuated TUIOs within a remote collaboration application. We also tackle the introduction of more sophisticated measures for the evaluation of TUIs, to provide further evidence for theories on tangible interaction, and test our enhanced measures in a comparative study. Since speed is one of the key factors in effective manual interaction, we benchmarked the human hand's manipulation speed and compared it with the capabilities of our own implementation of actuated TUIOs and of the systems described in the literature. After briefly discussing applications that lie beyond the scope of this thesis, we conclude with a collection of design guidelines gathered in the course of this work and integrate them, together with our findings, into a larger frame.
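The core of any actuated TUIO is a control loop that moves the object toward a system-chosen target at a bounded speed. A minimal sketch of one such tick, under the assumption of a simple velocity-capped controller (the thesis's own control scheme may differ):

```python
import math

def step_towards(pos, target, max_speed, dt):
    """One control tick for an actuated TUIO: move toward `target`
    by at most `max_speed` units per second."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist <= max_speed * dt:
        return target  # close enough to snap onto the target this tick
    scale = max_speed * dt / dist
    return (pos[0] + dx * scale, pos[1] + dy * scale)

# Drive a TUIO from the origin to (10, 0) at 5 units/s with 10 Hz ticks.
pos = (0.0, 0.0)
for _ in range(25):
    pos = step_towards(pos, (10.0, 0.0), max_speed=5.0, dt=0.1)
print(pos)  # (10.0, 0.0): reached after ~2 s of simulated motion
```

The `max_speed` bound is exactly the quantity the speed benchmark in the thesis compares against the human hand's manipulation speed.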

    Development platform for elderly-oriented tabletop games

    Get PDF
    Integrated master's thesis. Informatics and Computing Engineering. Universidade do Porto, Faculdade de Engenharia. 201

    A comprehensive framework for the rapid prototyping of ubiquitous interaction

    Get PDF
    In the interaction between humans and computational systems, many advances have been made in terms of hardware (e.g., smart devices with embedded sensors and multi-touch surfaces) and software (e.g., algorithms for the detection and tracking of touches, gestures and full-body movements). Now that we have the computational power and devices to manage interactions between the physical and the digital world, the question is: what should we do? For the Human-Computer Interaction research community, answering this question means materializing Mark Weiser's vision of Ubiquitous Computing. In the desktop computing paradigm, the desktop metaphor is implemented by a graphical user interface operated via mouse and keyboard. Users are accustomed to employing artificial control devices whose operation has to be learned, and they interact in an environment that inhibits their faculties; the mouse, for example, allows movements in a two-dimensional space, limiting the twenty-three degrees of freedom of the human hand. Ubiquitous Computing is an evolution in the history of computation: it aims at making the interface disappear and at integrating information processing into everyday objects with computational capabilities. In this way humans would no longer be forced to adapt to machines; instead, the technology would harmonize with the surrounding environment. Unlike the desktop case, ubiquitous systems make use of heterogeneous input/output devices (e.g., motion sensors, cameras and touch surfaces, among others) and interaction techniques such as touchless, multi-touch, and tangible interaction. By reducing the physical constraints on interaction, ubiquitous technologies can enable interfaces with more expressive power (e.g., free-hand gestures) and are therefore expected to provide users with better tools to think, create and communicate.
It appears clear that approaches based on classical user interfaces from the desktop computing world do not fit ubiquitous needs, for they were conceived for a single user interacting with a single computing system, seated at a workstation and looking at a vertical screen. To overcome the inadequacy of the existing paradigm, new models started to be developed that let users employ their skills effortlessly and lower the cognitive burden of interacting with computational machines. Ubiquitous interfaces are pervasive and thus invisible to their users, or they become invisible with successive interactions in which users feel they are instantly and continuously successful. All the benefits advocated for ubiquitous interaction, like the invisible interface and a more natural interaction, come at a price: the design and development of such interactive systems raise new conceptual and practical challenges. Ubiquitous systems communicate with the real world by means of sensors, emitters and actuators: sensors convert real-world inputs into digital data, while emitters and actuators are mostly used to provide digital or physical feedback (e.g., a speaker emitting sounds). Employing such a variety of hardware devices in a real application can be difficult, because their use requires knowledge of the underlying physics and many hours of programming work. Furthermore, data integration can be cumbersome, for every device vendor uses different programming interfaces and communication protocols. All these factors make the rapid prototyping of ubiquitous systems a challenging task. Prototyping is a pivotal activity for fostering innovation and creativity through the exploration of a design space. Nevertheless, while there are many prototyping tools and guidelines for traditional user interfaces, very few solutions have been developed for the holistic prototyping of ubiquitous systems.
The tremendous variety of input devices, interaction techniques and physical environments envisioned by researchers poses a severe challenge for general and comprehensive development tools. All of this makes it difficult to work in a design and development space where practitioners need to be familiar with several related subjects involving both software and hardware. Moreover, the technological context is further complicated by the fact that many ubiquitous technologies have only recently grown out of an embryonic stage and are still maturing; they thus lack stability, reliability and homogeneity. For these reasons, it is compelling to develop tool support for the programming of ubiquitous interaction, and this thesis addresses that topic. The goal is to develop a general conceptual and software framework that uses hardware abstraction to lighten the prototyping process in the design of ubiquitous systems. The thesis is that, by abstracting from low-level details, it is possible to provide unified, coherent and consistent access to interaction devices independently of their implementation or communication protocols. This dissertation reviews the existing literature and points out the need for frameworks that provide such comprehensive and integrated support. It then describes the objectives, the methodology for fulfilling them, and the major contributions of this work. Finally, it presents the design of the proposed framework, its development as a set of software libraries, its evaluation with real users, and a use case. Through the evaluation and the use case it is demonstrated that, by encompassing heterogeneous devices in a unique design, it is possible to reduce the effort users need to develop interaction in ubiquitous environments.
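The hardware-abstraction idea at the heart of this thesis can be sketched as a uniform device facade: each vendor-specific adapter translates its own protocol into one shared event format, so a prototype subscribes once and never touches vendor APIs. The class and event names below are illustrative assumptions, not the framework's actual API:

```python
from abc import ABC, abstractmethod
from typing import Any, Callable, Dict, List

Event = Dict[str, Any]

class InputDevice(ABC):
    """Uniform facade over heterogeneous hardware: every adapter
    emits the same kind of event dictionaries."""
    def __init__(self) -> None:
        self._listeners: List[Callable[[Event], None]] = []

    def subscribe(self, callback: Callable[[Event], None]) -> None:
        self._listeners.append(callback)

    def _emit(self, event: Event) -> None:
        for callback in self._listeners:
            callback(event)

    @abstractmethod
    def poll(self) -> None:
        """Read one vendor-specific frame and emit it as a unified event."""

class TouchSurface(InputDevice):
    def __init__(self, raw_frames):
        super().__init__()
        self._frames = iter(raw_frames)

    def poll(self) -> None:
        x, y = next(self._frames)          # vendor frame: (x, y) tuple
        self._emit({"type": "touch", "x": x, "y": y})

class MotionSensor(InputDevice):
    def __init__(self, raw_frames):
        super().__init__()
        self._frames = iter(raw_frames)

    def poll(self) -> None:
        joint, x, y, z = next(self._frames)  # vendor frame: joint + 3D position
        self._emit({"type": "motion", "joint": joint, "pos": (x, y, z)})

# The prototype code no longer cares which hardware produced each event.
events: List[Event] = []
for device in (TouchSurface([(0.3, 0.7)]), MotionSensor([("hand", 1, 2, 3)])):
    device.subscribe(events.append)
    device.poll()
print([e["type"] for e in events])  # ['touch', 'motion']
```

Swapping a camera-based tracker for a touch surface then only means writing a new adapter, which is what makes rapid prototyping across heterogeneous devices feasible.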

    Exploring human-object interaction through force vector measurement

    Get PDF
    Thesis: S.M., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2019. Cataloged from the PDF version of the thesis. Includes bibliographical references (pages 101-107). I introduce SCALE, a project aiming to further understand Human-Object Interaction through the real-time analysis of force vector signals, which I define in this thesis as "Force-based Interaction". Force conveys fundamental information in Force-based Interaction, including force intensity, its direction, and object weight, information otherwise difficult to access or infer from other sensing modalities. To explore the design space of force-based interaction, I developed the SCALE toolkit, composed of modularized 3-axis force sensors and application APIs. In collaboration with large industry partners, this system has been applied to a variety of application domains and settings, including a retail store, a smart home and a farmers market. In this thesis I propose the base system SCALE, together with two additional advanced projects, KI/OSK and DepthTouch, which build upon it. By Takatoshi Yoshida.
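The three cues the abstract names, intensity, direction, and weight, all fall out of one 3-axis force sample by simple vector decomposition. A minimal sketch (the function name and the reading of weight as the vertical component for an object at rest are illustrative assumptions, not SCALE's API):

```python
import math

def force_features(fx: float, fy: float, fz: float) -> dict:
    """Decompose one 3-axis force sample into intensity (vector
    magnitude), direction (unit vector), and vertical load, which for
    a static object approximates its weight."""
    intensity = math.sqrt(fx * fx + fy * fy + fz * fz)
    if intensity > 0.0:
        direction = (fx / intensity, fy / intensity, fz / intensity)
    else:
        direction = (0.0, 0.0, 0.0)
    return {"intensity": intensity, "direction": direction, "vertical_load": fz}

# Sample with a 3-4-5 triangle in the y-z plane for easy checking:
sample = force_features(0.0, 3.0, 4.0)
print(sample["intensity"])  # 5.0
```

Running this per-sample over a sensor stream yields exactly the real-time force vector signal the thesis analyses.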

    Simulation in Contexts Involving an Interactive Table and Tangible Objects

    No full text
    By using an interactive table, several people (decision-makers) can interact simultaneously and collaboratively around the table during a simulation session. Thanks to the RFID technology with which the table is fitted, tangible objects can be given a unique identity so that they can be included and considered in the simulation. The paper describes a context model that takes into consideration the specificities of interactive tables. The TangiSense interactive table is presented; it is connected to a multi-agent system that gives the table a certain level of adaptation: each tangible object can be associated with an agent that brings roles to the object (i.e., a role is the equivalent of a set of behaviors). The multi-agent system proposed in this paper is modeled according to an architecture adapted to the exploitation of tangible and virtual objects during simulation on an interactive table. A case study is presented concerning the simulation of road traffic management. The illustrations give an outline of the potential of the simulation system as regards context awareness, following both the actions of the decision-makers involved in the simulation and the agents composing the road traffic simulation.
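The object-agent-role pattern described above can be sketched in a few lines: an agent is keyed by the object's RFID identity, and each role it is assigned contributes a set of behaviors that respond to table events. Class names and the traffic-light example details are illustrative assumptions, not the paper's implementation:

```python
class Role:
    """A role bundles behaviors (event name -> handler) that an agent
    can lend to a tangible object."""
    def __init__(self, name, behaviors):
        self.name = name
        self.behaviors = behaviors

class ObjectAgent:
    """Agent paired, via RFID identity, with one tangible object."""
    def __init__(self, rfid):
        self.rfid = rfid
        self.roles = []

    def assign(self, role):
        self.roles.append(role)

    def handle(self, event, *args):
        # Dispatch the table event to every behavior the roles provide.
        return [role.behaviors[event](*args)
                for role in self.roles if event in role.behaviors]

# Road-traffic example: a token tagged "tag-42" acts as a traffic light.
light = ObjectAgent("tag-42")
light.assign(Role("traffic_light",
                  {"moved": lambda x, y: f"light relocated to ({x}, {y})"}))
print(light.handle("moved", 2, 5))  # ['light relocated to (2, 5)']
```

Because roles are attached at runtime, the same physical token can switch from representing a traffic light to, say, a road block, which is the level of adaptation the multi-agent coupling is meant to provide.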

    Supporting Tangible User Interaction with Integrated Paper and Electronic Document Management Systems

    Get PDF
    Although electronic technology has had a significant impact on the way offices manage documents, in most cases electronic documents have not completely replaced paper documents. As a result, many present-day offices use a combination of paper and electronic documents in their normal workflow. The problem is that information and document management become fragmented between the paper and electronic forms. There is, therefore, a need for better integration of the management of paper and electronic documents, in order to reduce this fragmentation and, where possible, bring the advantages of electronic document management to paper documents. Previous research has investigated methods of incorporating the management and tracking of paper documents into electronic document management systems. However, better integration between paper and electronic document management is still needed, and could potentially be achieved by augmenting elements of the physical document management system with electronic circuitry so that they can support tangible user interaction with the integrated document management system. The aim of this thesis has therefore been to investigate this. The approach taken began by identifying the requirements of such integrated systems through studies of the document management needs of a number of real-world offices. This was followed by the development of a series of prototype systems designed to function as tangible user interfaces to the integrated document management system. These prototypes were then evaluated against the identified requirements, and a user study was conducted to evaluate their usability.
The results of these evaluations demonstrate that it is possible to develop systems that can utilise tangible user interaction techniques to enhance the integration of paper and electronic document management, and thus better bridge the divide between the physical and virtual worlds of documents.
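One way to picture the integration the thesis argues for is a shared registry in which a physical document's tag links its electronic copy to its last-scanned physical location, so both forms stay in one record. This is a minimal sketch under assumed names (`DocumentRegistry`, `scan`), not the thesis's prototype design:

```python
class DocumentRegistry:
    """Shared record store: physical documents carry tags; scanning a
    tag at an augmented location (tray, shelf) updates the same record
    that electronic clients query."""
    def __init__(self):
        self._records = {}  # tag id -> record

    def register(self, tag, title, electronic_copy):
        self._records[tag] = {"title": title,
                              "electronic_copy": electronic_copy,
                              "location": None}

    def scan(self, tag, location):
        # Fired by augmented furniture when the tagged document arrives.
        self._records[tag]["location"] = location

    def locate(self, tag):
        return self._records[tag]["location"]

registry = DocumentRegistry()
registry.register("tag-7", "Invoice 2024-03", "/docs/invoice-2024-03.pdf")
registry.scan("tag-7", "tray-B")
print(registry.locate("tag-7"))  # tray-B
```

Because the electronic copy and the physical location live in one record, a search in the electronic system can answer "which tray is the paper original in?", which is the fragmentation the thesis sets out to reduce.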

    Designing for Cross-Device Interactions

    Get PDF
    Driven by technological advancements, we now own and operate an ever-growing number of digital devices, leading to an increased amount of digital data we produce, use, and maintain. However, while there is a substantial increase in computing power and in the availability of devices and data, many of the tasks we conduct with our devices are not well connected across multiple devices. We conduct our tasks sequentially instead of in parallel, while collaborative work across multiple devices is cumbersome to set up or simply not possible. To address these limitations, this thesis is concerned with cross-device computing; in particular, it aims to conceptualise, prototype, and study interactions in cross-device computing. This thesis contributes to the field of Human-Computer Interaction (HCI), and more specifically to the area of cross-device computing, in three ways. First, this work conceptualises previous work through a taxonomy of cross-device computing, resulting in an in-depth understanding of the field that identifies underexplored research areas and enables the transfer of key insights into the design of interaction techniques. Second, three case studies were conducted that show how cross-device interactions can support curation work as well as augment users' existing devices for individual and collaborative work; these case studies incorporate novel interaction techniques for supporting cross-device work. Third, through studying cross-device interactions and group collaboration, this thesis provides insights into how researchers can understand and evaluate multi- and cross-device interactions for individual and collaborative work. We provide a visualization and querying tool that facilitates interaction analysis of spatial measures and video recordings to support such evaluations of cross-device work.
Overall, the work in this thesis advances the field of cross-device computing with its taxonomy guiding research directions, novel interaction techniques and case studies demonstrating cross-device interactions for curation, and insights into and tools for the effective evaluation of cross-device systems.
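A querying tool over spatial measures, of the kind mentioned above, boils down to filtering a time-stamped log of per-device measurements so the matching moments can be lined up with video. A minimal sketch with hypothetical field names, not the thesis's actual tool:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SpatialSample:
    t: float          # seconds into the session
    device: str       # which device the sample belongs to
    distance: float   # metres between this device and a partner device

def query(samples: List[SpatialSample], device: str,
          max_distance: float) -> List[float]:
    """Timestamps at which `device` was within `max_distance` of its
    partner; these are the moments to review in the video recording."""
    return [s.t for s in samples
            if s.device == device and s.distance <= max_distance]

log = [SpatialSample(0.0, "tabletA", 1.5),
       SpatialSample(1.0, "tabletA", 0.4),
       SpatialSample(2.0, "tabletB", 0.3)]
print(query(log, "tabletA", 0.5))  # [1.0]
```

The same filter shape extends to other spatial measures (orientation, overlap of personal territories) by adding fields and predicates.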

    Self-managed Workflows for Cyber-physical Systems

    Get PDF
    Workflows are a well-established concept for describing business logic and processes in web-based applications and enterprise application integration scenarios at an abstract, implementation-agnostic level. Applying Business Process Management (BPM) technologies to increase autonomy and automate sequences of activities in Cyber-physical Systems (CPS) promises various advantages, including higher flexibility and simplified programming, more efficient resource usage, and easier integration and orchestration of CPS devices. However, traditional BPM notations and engines were not designed to be used in the context of CPS, which raises new research questions arising from the close coupling of the virtual and physical worlds. Among these challenges are the interaction with complex compounds of heterogeneous sensors, actuators, things and humans; the detection and handling of errors in the physical world; and the synchronization of the cyber-physical process execution models. Novel factors related to interaction with the physical world, including real-world obstacles, inconsistencies and inaccuracies, may jeopardize the successful execution of workflows in CPS and lead to unanticipated situations. This thesis investigates the properties and requirements of CPS relevant for the introduction of BPM technologies into cyber-physical domains. We discuss existing BPM systems and related work regarding the integration of sensors and actuators into workflows, the development of a Workflow Management System (WfMS) for CPS, and the synchronization of the virtual and physical process execution as part of self-* capabilities for WfMSes. Based on the identified research gap, we present concepts and prototypes for the development of a CPS WfMS with respect to all phases of the BPM lifecycle.
First, we introduce a CPS workflow notation that supports the modelling of the interaction of complex sensors, actuators, humans, dynamic services and WfMSes at the business process level. In addition, the effects of the workflow execution can be specified in the form of goals defining success and error criteria for the execution of individual process steps. Along with that, we introduce the notion of Cyber-physical Consistency. We then present a system architecture for a corresponding WfMS (PROtEUS) to execute the modelled processes, also in distributed execution settings and with a focus on interactive process management. Subsequently, the integration of a cyber-physical feedback loop to increase the resilience of the process execution at runtime is discussed. Within this MAPE-K loop, sensor and context data are related to the effects of the process execution, deviations from expected behaviour are detected, and compensations are planned and executed. The execution of this feedback loop can be scaled depending on the required level of precision and consistency, and our implementation of the MAPE-K loop proves to be a general framework for adding self-* capabilities to WfMSes. The evaluation of our concepts within a smart home case study shows expected behaviour, reasonable execution times, reduced error rates and high coverage of the identified requirements, which makes our CPS WfMS a suitable system for introducing workflows on top of the systems, devices, things and applications of CPS.
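The MAPE-K feedback loop around a workflow step can be sketched in a few lines: monitor the physical world, analyze the observation against the step's expected effects from the knowledge base, and plan and execute a compensation when they diverge. The toy smart-home example and function names are illustrative assumptions, not the PROtEUS implementation:

```python
def mape_k_tick(monitor, analyze, plan, execute, knowledge):
    """One pass of a MAPE-K loop: sense, compare against the expected
    effects stored in the knowledge base, and compensate on deviation."""
    observed = monitor()
    deviation = analyze(observed, knowledge["expected"])
    if deviation is not None:
        execute(plan(deviation))
    return deviation

# Toy "turn lamp on" step whose expected effect is sufficient brightness.
knowledge = {"expected": {"lamp_brightness_min": 0.8}}
world = {"lamp_brightness": 0.2}  # the switch command silently failed

deviation = mape_k_tick(
    monitor=lambda: dict(world),
    analyze=lambda obs, exp: ("lamp_dim"
                              if obs["lamp_brightness"] < exp["lamp_brightness_min"]
                              else None),
    plan=lambda dev: {"action": "retry_switch"},
    execute=lambda cmd: world.update(lamp_brightness=1.0),  # compensation succeeds
    knowledge=knowledge,
)
print(deviation, world["lamp_brightness"])  # lamp_dim 1.0
```

Scaling the loop, as the abstract describes, then amounts to choosing how often `mape_k_tick` runs and how strict the `analyze` criteria are for a given consistency level.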