
    Passive Control Architectures for Collaborative Virtual Haptic Interaction and Bilateral Teleoperation over Unreliable Packet-Switched Digital Network

    This PhD dissertation consists of two major parts: collaborative haptic interaction (CHI) and bilateral teleoperation over the Internet. For CHI, we propose a novel hybrid peer-to-peer (P2P) architecture comprising the shared virtual environment (SVE) simulation, the coupling between the haptic device and the VE, and P2P synchronization control among all VE copies. This framework guarantees interaction stability for all users over a general unreliable packet-switched communication network, which is the most challenging problem in CHI control framework design. Stability is achieved by enforcing our novel passivity condition, which fully accounts for time-varying non-uniform communication delays and random packet loss, swapping, and duplication on each communication channel. A topology optimization method based on graph algebraic connectivity is also developed to achieve optimal performance under communication bandwidth limitations. For validation, we implement a four-user collaborative haptic system with simulated unreliable packet-switched network connections; both the hybrid P2P architecture design and the performance improvement due to the topology optimization are verified. In the second part, two novel hybrid passive bilateral teleoperation control architectures are proposed to address the challenging stability and performance issues caused by general Internet communication unreliability (e.g. varying time delay, packet loss, data duplication). The first method, Direct PD Coupling (DPDC), extends traditional PD control to the hybrid teleoperation system. Under the assumption that the Internet communication unreliability is upper bounded, a passive gain-setting condition is derived that guarantees interaction stability when the teleoperation system interacts with unknown/unmodeled passive humans and environments.
    However, the performance of DPDC degrades drastically when communication unreliability is severe, because its feasible gain region is limited by the device's viscous damping. The second method, Virtual Proxy Based PD Coupling (VPDC), is proposed to improve performance while providing the same interaction stability guarantee. Experimental and quantitative comparisons between DPDC and VPDC validate both the interaction stability and the performance differences.
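    The topology optimization above ranks candidate P2P communication topologies by graph algebraic connectivity, i.e. the Fiedler value: the second-smallest eigenvalue of the graph Laplacian. The dissertation's exact formulation is not reproduced here; the following is a minimal sketch of computing that metric for candidate four-user topologies, with NumPy and the example graphs as illustrative assumptions.

```python
import numpy as np

def algebraic_connectivity(adj):
    """Second-smallest eigenvalue (Fiedler value) of the graph Laplacian.

    Higher values indicate a better-connected synchronization topology,
    which is the quantity the topology optimization seeks to maximize
    under a bandwidth (edge-count) budget.
    """
    adj = np.asarray(adj, dtype=float)
    degree = np.diag(adj.sum(axis=1))   # diagonal degree matrix D
    laplacian = degree - adj            # L = D - A
    eigenvalues = np.sort(np.linalg.eigvalsh(laplacian))
    return eigenvalues[1]

# A fully connected 4-user topology versus a chain of the same 4 users.
complete = np.ones((4, 4)) - np.eye(4)
chain = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]])
print(round(algebraic_connectivity(complete), 6))  # 4.0
print(round(algebraic_connectivity(chain), 6))     # 0.585786
```

    As expected, the complete graph (more channels, more bandwidth) has much higher connectivity than the chain, which is the trade-off the optimization navigates.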

    New Waves of IoT Technologies Research – Transcending Intelligence and Senses at the Edge to Create Multi Experience Environments

    The next wave of the Internet of Things (IoT) and the Industrial Internet of Things (IIoT) brings new technological developments that incorporate radical advances in Artificial Intelligence (AI), edge computing, new sensing capabilities, stronger security protection, and autonomous functions, accelerating progress towards IoT systems that can self-develop, self-maintain, and self-optimise. The emergence of hyper-autonomous IoT applications with enhanced sensing, distributed intelligence, edge processing and connectivity, combined with human augmentation, has the potential to power the transformation and optimisation of industrial sectors and to change the innovation landscape. This chapter reviews the most recent advances in the next wave of the IoT by looking not only at the technology enabling the IoT but also at the platforms and smart-data aspects that will bring intelligence, sustainability, dependability, and autonomy, and will support human-centric solutions.

    Low-latency Networking: Where Latency Lurks and How to Tame It

    While the current generation of mobile and fixed communication networks has been standardized for mobile broadband services, the next generation is driven by the vision of the Internet of Things and mission-critical communication services requiring latencies on the order of milliseconds or below. These new stringent requirements have a large technical impact on the design of all layers of the communication protocol stack. The cross-layer interactions are complex due to the multiple design principles and technologies that contribute to the layers' design and fundamental performance limitations. We will be able to develop low-latency networks only if we address these complex interactions from the new point of view of sub-millisecond latency. In this article, we propose a holistic analysis and classification of the main design principles and enabling technologies that will make it possible to deploy low-latency wireless communication networks. We argue that these design principles and enabling technologies must be carefully orchestrated to meet the stringent requirements and to manage the inherent trade-offs between low latency and traditional performance metrics. We also review ongoing standardization activities in prominent standards associations and discuss open problems for future research.

    NFC based remote control of services for interactive spaces

    Ubiquitous computing (one person, many computers) is the third era in the history of computing. It follows the mainframe era (many people, one computer) and the PC era (one person, one computer). Ubiquitous computing empowers people to communicate with services by interacting with their surroundings. Most of these so-called smart environments contain sensors that sense users' actions and try to predict the users' intentions and needs based on sensor data. The main drawback of this approach is that the system might perform unexpected or unwanted actions, making the user feel out of control. In this master's thesis we propose a different approach based on Interactive Spaces: instead of predicting users' intentions from sensor data, the system reacts to users' explicit predefined actions. To that end, we present REACHeS, a server platform which enables communication among services, resources and users located in the same environment. With REACHeS, a user controls services and resources by interacting with everyday objects, using a mobile phone as a mediator between the user, the system and the environment. REACHeS' user interfaces are built upon NFC (Near Field Communication) technology. NFC tags are attached to objects in the environment; a tag stores commands that are sent to services when a user touches the tag with an NFC-enabled device. The prototypes and usability tests presented in this thesis show the great potential of NFC for building such user interfaces.
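    The tag-touch interaction described above (a tag stores a command; touching it forwards the command to the service responsible for a resource) can be sketched as a simple dispatch loop. All names and the "service:command" payload format below are hypothetical illustrations, not REACHeS' actual API.

```python
# Hypothetical sketch of tag-to-service dispatch: an NFC tag stores a
# payload naming a target service and a command; touching the tag sends
# the command to whatever handler is registered for that service.

SERVICES = {}

def register_service(name, handler):
    """Register a callable that executes commands for a named service."""
    SERVICES[name] = handler

def on_tag_touched(tag_payload):
    """Parse a 'service:command' payload read from an NFC tag and dispatch it."""
    service, _, command = tag_payload.partition(":")
    handler = SERVICES.get(service)
    if handler is None:
        raise KeyError(f"no service registered for {service!r}")
    return handler(command)

# Illustrative service: a projector in the interactive space.
register_service("projector", lambda cmd: f"projector executed {cmd}")
print(on_tag_touched("projector:next_slide"))  # projector executed next_slide
```

    The key property this sketch captures is that the environment stays passive: nothing happens until the user's explicit touch delivers a predefined command.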

    Virtual Reality Games for Motor Rehabilitation

    This paper presents a fuzzy-logic-based method to track user satisfaction without the need for devices that monitor users' physiological conditions. User satisfaction is the key to any product's acceptance; computer applications and video games provide a unique opportunity to tailor the environment to each user to better suit their needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in Unreal Tournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature suggesting that physiological measurements are needed; we show that it is possible to estimate user emotion with a software-only method.
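    As an illustration of the kind of rule firing a FLAME-style emotional component performs, the sketch below evaluates one fuzzy appraisal rule. The membership functions, input ranges, and the rule itself are illustrative assumptions; the paper's actual rule base and FLAME's full appraisal model are not reproduced here.

```python
def triangular(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def joy_intensity(desirability, likelihood):
    """Fire the appraisal rule 'IF the event is desirable AND the event was
    expected THEN joy', using min() as the fuzzy AND, on inputs in [0, 1]."""
    desirable = triangular(desirability, 0.0, 1.0, 2.0)
    expected = triangular(likelihood, 0.0, 1.0, 2.0)
    return min(desirable, expected)

# A highly desirable, mostly expected game event yields strong joy.
print(joy_intensity(0.9, 0.8))  # 0.8
```

    A full model would aggregate many such rules over in-game events (the only inputs the paper's software-only approach needs) and decay the resulting emotion intensities over time.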

    Enhanced Virtuality: Increasing the Usability and Productivity of Virtual Environments

    With steadily increasing display resolution, more accurate tracking, and falling prices, virtual reality (VR) systems are on the verge of establishing themselves in the market. Various tools help developers create complex multi-user interactions within adaptive virtual environments. However, the spread of VR systems also brings additional challenges: diverse input devices with unfamiliar shapes and button layouts prevent intuitive interaction, and the limited feature set of existing software forces users to fall back on conventional PC- or touch-based systems. Collaborating with other users at the same location raises challenges regarding the calibration of different tracking systems and collision avoidance, while in remote collaboration the interaction is further affected by latency and connection losses. Finally, users have different requirements for the visualization of content within the virtual worlds, e.g. size, orientation, color, or contrast. A strict replication of real environments in VR wastes potential and cannot accommodate users' individual needs. To address these problems, this thesis presents solutions in the areas of input, collaboration, and augmentation of virtual worlds and users, aiming to increase the usability and productivity of VR. First, PC-based hardware and software are transferred into the virtual world to preserve the familiarity and feature set of existing applications in VR. Virtual stand-ins for physical devices, e.g. keyboard and tablet, and a VR mode for applications allow users to carry their real-world skills over into the virtual world.
    Furthermore, an algorithm is presented that calibrates multiple co-located VR devices with high accuracy, low hardware requirements, and little effort. Since VR headsets occlude the user's real surroundings, the relevance of full-body avatar visualization for collision avoidance and remote collaboration is demonstrated. In addition, personalized spatial and temporal modifications are introduced that increase users' usability, work performance, and social presence. Discrepancies between the virtual worlds that arise from these personal adaptations are compensated by avatar redirection methods. Finally, several of the methods and findings are integrated into an exemplary application to demonstrate their practical applicability. This thesis shows that virtual environments can build on real skills and experiences to ensure familiar and easy interaction and collaboration between users. Moreover, individual augmentations of virtual content and avatars make it possible to overcome real-world limitations and enhance the experience of VR environments.

    Enhancing interaction in mixed reality

    With continuous technological innovation, we observe mixed reality emerging from research labs into the mainstream. The arrival of capable mixed reality devices transforms how we are entertained, consume information, and interact with computing systems; the most recent devices can present synthesized stimuli to any of the human senses and substantially blur the boundaries between the real and virtual worlds. In order to build expressive and practical mixed reality experiences, designers, developers, and stakeholders need to understand and meet the upcoming challenges. This research contributes a novel taxonomy for categorizing mixed reality experiences and guidelines for designing them. We present the results of seven studies examining the challenges and opportunities of mixed reality experiences, the impact of modalities and interaction techniques on the user experience, and how to enhance these experiences. We begin with a study determining user attitudes towards mixed reality in domestic and educational environments, followed by six research probes that each investigate an aspect of reality or virtuality. In the first, a levitating steerable projector enables us to investigate how the real world can be enhanced without instrumenting the user. We show that presenting in-situ instructions for navigational tasks leads to a significantly higher ability to observe and recall real-world landmarks. With the second probe, we enhance the perception of reality by superimposing information usually not visible to the human eye. In amplifying human vision, we enable users to perceive thermal radiation visually. Further, we examine the effect of substituting physical components with non-functional tangible proxies or entirely virtual representations. With the third research probe, we explore how to enhance virtuality to enable a user to input text on a physical keyboard while immersed in the virtual world.
    Our prototype tracked the user's hands and keyboard to enable generic text input, and our analysis of text entry performance showed the importance and effect of different hand representations. We then investigate how to touch virtuality by simulating generic haptic feedback for virtual reality, and show how tactile feedback through quadcopters can significantly increase the sense of presence. Our final research probe investigates the usability and input space of smartphones within mixed reality environments, pairing the user's smartphone as an input device with a secondary physical screen. Based on our learnings from these individual research probes, we developed a novel taxonomy for categorizing mixed reality experiences and guidelines for designing them. The taxonomy is based on the human sensory system and human capabilities of articulation. We showcased its versatility, and set our research probes into perspective, by organizing them inside the taxonomic space. The design guidelines are divided into user-centered and technology-centered guidelines. It is our hope that these will contribute to the bright future of mixed reality systems while emphasizing the new underlying interaction paradigm.

    An aesthetics of touch: investigating the language of design relating to form

    How well can designers communicate qualities of touch? This paper presents evidence that they have some capability to do so, much of which appears to have been learned, but that at present they make limited use of such language. Interviews with graduate designer-makers suggest that they are aware of and value the importance of touch and materiality in their work, but lack a vocabulary that matches the detail of their explanations of other aspects, such as their intent or selection of materials. We believe that more attention should be paid to the verbal dialogue that happens in the design process, particularly as other researchers show that even making-based learning has a strong verbal element. However, verbal language alone does not appear to be adequate for a comprehensive language of touch. Graduate designer-makers' descriptive practices combined non-verbal manipulation with verbal accounts. We thus argue that haptic vocabularies do not simply describe material qualities, but rather are situated competences that physically demonstrate the presence of haptic qualities. Such competences are more important than verbal vocabularies in isolation. Design support for developing and extending haptic competences must take this wide range of considerations into account to comprehensively improve designers' capabilities.

    Robot-Assisted Minimally Invasive Surgery-Surgical Robotics in the Data Age

    Telesurgical robotics, as a technical solution for robot-assisted minimally invasive surgery (RAMIS), has become the first domain within medico-surgical robotics to achieve true global clinical adoption. Its relative success (with total market penetration still in the low single-digit percent range) is rooted in its particular human-in-the-loop control, in which the trained surgeon is always kept responsible for the clinical outcome achieved by the robot-actuated invasive tools. Nowadays, this paradigm is challenged by the need for improved surgical performance, traceability, and safety reaching beyond human capabilities. Partially due to technical complexity and financial burden, the adoption of telesurgical robotics has not come close to reaching its full potential. Apart from the market-dominating da Vinci surgical system, there are already more than 60 emerging RAMIS robot types, of which 15 have achieved some form of regulatory clearance. This article aims to connect technological advancement with the principles of commercialization, particularly looking at engineering components that are under development and have the potential to bring significant advantages to clinical practice. Current RAMIS robots often do not exceed the functionality of their mechatronics, due to the lack of data-driven assistance and smart human-machine collaboration. Computer assistance is gradually gaining significance within emerging RAMIS systems. Enhanced manipulation capabilities, refined sensors, advanced vision, task-level automation, smart safety features, and data integration together mark the inception of a new era in telesurgical robotics, infiltrated by machine learning (ML) and artificial intelligence (AI) solutions. As other domains show, a key requirement for robust AI is high-quality data, acquired and shared properly so that ML-based solutions can be built in real time.
    Emerging RAMIS technologies are reviewed from both a historical and a forward-looking perspective.

    Art and Design Practices as a Driver for Deformable Controls, Textures and Screen Interactions

    In this thesis, we demonstrate innovative uses of deformable interfaces that help develop future digital art and design interactions. The great benefits of advancing digital art can often come at the cost of tactile feeling and physical expression, while traditional methods celebrate diverse sets of physical tools and materials. We identified these sets of tools and materials to inform the development of new art and design interfaces that offer rich physical mediums for digital artists and designers. To bring forth these unique interactions, we draw on the latest advances in deformable interface technology. Our research thus contributes a set of understandings about how deformable interfaces can be harnessed for art and design interfaces. We identify and discuss the following contributions: insights into the tangible and digital practices of artists and designers; prototypes to probe the benefits and possibilities of deformable displays and materials in support of digital-physical art and design; user-centred evaluations of these prototypes to inform future developments; and broader insights into deformable interface research. Each chapter of this thesis investigates a specific element of art and design alongside an aspect of deformable interfaces, resulting in a new prototype. We begin the thesis by studying the use of physical actuation to simulate artists' tools in deformable surfaces; our evaluations in this chapter highlight the merits of improved user experiences and insights into eyes-free interactions. We then turn to deformable textures. Driven by the tactile feeling of mixing paints, we present a gel-based interface capable of simulating the feeling of paints on the back of mobile devices. Our evaluations showed how artists endorsed these interactions, which hold potential for digital oil painting. Our final chapter presents research conducted with digital designers.
    We explore their colour-picking processes and develop a digital version of physical swatches using a modular screen system. This use of tangible proxies in digital-based processes brought a level of playfulness and holds potential to support collaborative workflows across disciplines. To conclude, we share how the outcomes of these studies could help shape the broader space of art and design interactions and deformable interface research, and we suggest future work and directions based on our findings.