33 research outputs found

    Haptic wearables as sensory replacement, sensory augmentation and trainer - a review

    Sensory impairments decrease quality of life and can slow or hinder rehabilitation. Small, computationally powerful electronics have enabled the recent development of wearable systems aimed at improving function for individuals with sensory impairments. The purpose of this review is to synthesize current haptic wearable research for clinical applications involving sensory impairments. We define haptic wearables as untethered, ungrounded, body-worn devices that interact with the skin directly or through clothing and can be used in natural environments outside a laboratory. Results of this review are categorized by degree of sensory impairment. Total impairment, such as in an amputee, blind, or deaf individual, involves haptics acting as sensory replacement; partial impairment, as is common in rehabilitation, involves haptics as sensory augmentation; and no impairment involves haptics as trainer. This review found that wearable haptic devices improved function for a variety of clinical applications, including rehabilitation, prosthetics, vestibular loss, osteoarthritis, vision loss, and hearing loss. Future haptic wearable development should focus on clinical needs, intuitive and multimodal haptic displays, low energy demands, and biomechanical compliance for long-term usage

    Augmented visual, auditory, haptic, and multimodal feedback in motor learning: A review

    It is generally accepted that augmented feedback, provided by a human expert or a technical display, effectively enhances motor learning. However, how to provide augmented feedback most effectively remains controversial. Related studies have focused primarily on simple or artificial tasks enhanced by visual feedback. Recently, technical advances have made it possible also to investigate more complex, realistic motor tasks and to implement not only visual, but also auditory, haptic, or multimodal augmented feedback. The aim of this review is to address the potential of augmented unimodal and multimodal feedback in the framework of motor learning theories. The review addresses the reasons for the different impacts of feedback strategies within or between the visual, auditory, and haptic modalities and the challenges that need to be overcome to provide appropriate feedback in these modalities, either in isolation or in combination. Accordingly, design criteria for successful visual, auditory, haptic, and multimodal feedback are elaborated

    Exodex Adam—A Reconfigurable Dexterous Haptic User Interface for the Whole Hand

    Applications for dexterous robot teleoperation and immersive virtual reality are growing. Haptic user input devices need to allow the user to intuitively command and seamlessly “feel” the environment they work in, whether virtual or a remote site through an avatar. We introduce the DLR Exodex Adam, a reconfigurable, dexterous, whole-hand haptic input device. The device comprises multiple modular, three degrees of freedom (3-DOF) robotic fingers, whose placement on the device can be adjusted to optimize manipulability for different user hand sizes. Additionally, the device is mounted on a 7-DOF robot arm to increase the user’s workspace. Exodex Adam uses a front-facing interface, with robotic fingers coupled to two of the user’s fingertips, the thumb, and two points on the palm. Including the palm, as opposed to only the fingertips as is common in existing devices, enables accurate tracking of the whole hand without additional sensors such as a data glove or motion capture. By providing “whole-hand” interaction with omnidirectional force-feedback at the attachment points, we enable the user to experience the environment with the complete hand instead of only the fingertips, thus realizing deeper immersion. Interaction using Exodex Adam can range from palpation of objects and surfaces to manipulation using both power and precision grasps, all while receiving haptic feedback. This article details the concept and design of the Exodex Adam, as well as use cases where it is deployed with different command modalities. These include mixed-media interaction in a virtual environment, gesture-based telemanipulation, and robotic hand–arm teleoperation using adaptive model-mediated teleoperation. Finally, we share the insights gained during our development process and use case deployments

    Haptics: Science, Technology, Applications

    This open access book constitutes the proceedings of the 13th International Conference on Human Haptic Sensing and Touch Enabled Computer Applications, EuroHaptics 2022, held in Hamburg, Germany, in May 2022. The 36 regular papers included in this book were carefully reviewed and selected from 129 submissions. They were organized in topical sections as follows: haptic science; haptic technology; and haptic applications

    Closed-loop prosthetic hand: understanding sensorimotor and multisensory integration under uncertainty.

    To make sense of our unpredictable world, humans use sensory information streaming through billions of peripheral neurons. Uncertainty and ambiguity plague each sensory stream, yet remarkably our perception of the world is seamless, robust and often optimal in the sense of minimising perceptual variability. Moreover, humans have a remarkable capacity for dexterous manipulation. Initiation of precise motor actions under uncertainty requires awareness of not only the statistics of our environment but also the reliability of our sensory and motor apparatus. What happens when our sensory and motor systems are disrupted? Upper-limb amputees fitted with state-of-the-art prostheses must learn to both control and make sense of their robotic replacement limb. Tactile feedback is not a standard feature of these open-loop limbs, fundamentally limiting the degree of rehabilitation. This thesis introduces a modular closed-loop upper-limb prosthesis, a modified Touch Bionics ilimb hand with a custom-built linear vibrotactile feedback array. To understand the utility of the feedback system in the presence of multisensory and sensorimotor influences, three fundamental open questions were addressed: (i) What are the mechanisms by which subjects compute sensory uncertainty? (ii) Do subjects integrate an artificial modality with visual feedback as a function of sensory uncertainty? (iii) What are the influences of open-loop and closed-loop uncertainty on prosthesis control? To optimally handle uncertainty in the environment, people must acquire estimates of the mean and uncertainty of sensory cues over time. A novel visual tracking experiment was developed in order to explore the processes by which people acquire these statistical estimators. Subjects were required to simultaneously report their evolving estimate of the mean and uncertainty of visual stimuli over time.
This revealed that subjects could accumulate noisy evidence over the course of a trial to form an optimal continuous estimate of the mean, hindered only by natural kinematic constraints. Although subjects had explicit access to a measure of their continuous objective uncertainty, acquired from sensory information available within a trial, this was limited by a conservative margin for error. In the Bayesian framework, sensory evidence (from multiple sensory cues) and prior beliefs (knowledge of the statistics of sensory cues) are combined to form a posterior estimate of the state of the world. Multiple studies have revealed that humans behave as optimal Bayesian observers when making binary decisions in forced-choice tasks. In this thesis these results were extended to a continuous spatial localisation task. Subjects could rapidly accumulate evidence presented via vibrotactile feedback (an artificial modality) and integrate it with visual feedback. The weight attributed to each sensory modality was chosen so as to minimise the overall objective uncertainty. Since subjects were able to combine multiple sources of sensory information with respect to their sensory uncertainties, it was hypothesised that vibrotactile feedback would benefit prosthesis wearers in the presence of either sensory or motor uncertainty. The closed-loop prosthesis served as a novel manipulandum to examine the role of feed-forward and feed-back mechanisms for prosthesis control, known to be required for successful object manipulation in healthy humans. Subjects formed economical grasps in idealised (noise-free) conditions and this was maintained even when visual feedback, tactile feedback, or both were removed. However, when uncertainty was introduced into the hand controller, performance degraded significantly in the absence of visual or tactile feedback.
These results reveal the complementary nature of feed-forward and feed-back processes in simulated prosthesis wearers, and highlight the importance of tactile feedback for control of a prosthesis
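The reliability-weighted (minimum-variance) cue combination tested in this thesis can be sketched in a few lines. The function name and the example numbers below are illustrative, not taken from the study:

```python
def fuse_cues(mu_v, var_v, mu_t, var_t):
    """Minimum-variance fusion of a visual estimate (mu_v, var_v) and a
    vibrotactile estimate (mu_t, var_t) of the same quantity: each cue is
    weighted by its inverse variance, so the more reliable cue dominates."""
    w_v = (1 / var_v) / (1 / var_v + 1 / var_t)  # weight on vision
    mu = w_v * mu_v + (1 - w_v) * mu_t           # fused mean
    var = 1 / (1 / var_v + 1 / var_t)            # fused variance, never worse than either cue
    return mu, var

# Vision twice as reliable as touch: the fused estimate sits closer to vision.
mu, var = fuse_cues(mu_v=10.0, var_v=1.0, mu_t=13.0, var_t=2.0)
# mu = 11.0, var = 2/3
```

The fused variance is always smaller than that of either cue alone, which is why integrating even a noisy artificial modality can reduce overall perceptual uncertainty.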

    An enactive approach to perceptual augmentation in mobility

    Event predictions are an important constituent of situation awareness, which is a key objective for many applications in human-machine interaction, in particular in driver assistance. This work focuses on facilitating event predictions in dynamic environments. Its primary contributions are 1) the theoretical development of an approach for enabling people to expand their sampling and understanding of spatiotemporal information, 2) the introduction of exemplary systems that are guided by this approach, 3) the empirical investigation of the effects that functional prototypes of these systems have on human behavior and safety in a range of simulated road traffic scenarios, and 4) a connection of the investigated approach to work on cooperative human-machine systems. More specific contents of this work are summarized as follows: The first part introduces several challenges for the formation of situation awareness as a requirement for safe traffic participation. It reviews existing work on these challenges in the domain of driver assistance, resulting in an identification of the need to better inform drivers about dynamically changing aspects of a scene, including event probabilities, spatial and temporal distances, as well as a suggestion to expand the scope of assistance systems to start informing drivers about relevant scene elements at an early stage. Novel forms of assistance can be guided by different fundamental approaches that target either replacement, distribution, or augmentation of driver competencies. A subsequent differentiation of these approaches concludes that an augmentation-guided paradigm, characterized by an integration of machine capabilities into human feedback loops, can be advantageous for tasks that rely on active user engagement, the preservation of awareness and competence, and the minimization of complexity in human-machine interaction.
Consequently, findings and theories about human sensorimotor processes are connected to develop an enactive approach that is consistent with an augmentation perspective on human-machine interaction. The approach is characterized by enabling drivers to exercise new sensorimotor processes through which safety-relevant spatiotemporal information may be sampled. In the second part of this work, a concept and functional prototype for augmenting the perception of traffic dynamics is introduced as a first example for applying principles of this enactive approach. As a loose expression of functional biomimicry, the prototype utilizes a tactile interface that communicates temporal distances to potential hazards continuously through stimulus intensity. In a driving simulator study, participants quickly gained an intuitive understanding of the assistance without instructions and demonstrated higher driving safety in safety-critical highway scenarios. But this study also raised new questions such as whether benefits are due to a continuous time-intensity encoding and whether utility generalizes to intersection scenarios or highway driving with low criticality events. Effects of an expanded assistance prototype with lane-independent risk assessment and an option for binary signaling were thus investigated in a separate driving simulator study. Subjective responses confirmed quick signal understanding and a perception of spatial and temporal stimulus characteristics. Surprisingly, even for a binary assistance variant with a constant intensity level, participants reported perceiving a danger-dependent variation in stimulus intensity. They further felt supported by the system in the driving task, especially in difficult situations. But in contrast to the first study, this support was not expressed by changes in driving safety, suggesting that perceptual demands of the low criticality scenarios could be satisfied by existing driver capabilities.
But what happens if such basic capabilities are impaired, e.g., due to poor visibility conditions or other situations that introduce perceptual uncertainty? In a third driving simulator study, the driver assistance was employed specifically in such ambiguous situations and produced substantial safety advantages over unassisted driving. Additionally, an assistance variant that adds an encoding of spatial uncertainty was investigated in these scenarios. Participants had no difficulty understanding and utilizing this added signal dimension to improve safety. Despite being inherently less informative than spatially precise signals, users rated uncertainty-encoding signals as equally useful and satisfying. This appreciation for transparency of variable assistance reliability is a promising indicator for the feasibility of an adaptive trust calibration in human-machine interaction and marks one step towards a closer integration of driver and vehicle capabilities. A complementary step on the driver side would be to increase transparency about the driver’s mental states and thus allow for mutual adaptation. The final part of this work discusses how such prerequisites of cooperation may be achieved by monitoring mental state correlates observable in human behavior, especially in eye movements. Furthermore, the outlook for an addition of cooperative features also raises new questions about the bounds of identity as well as practical consequences of human-machine systems in which co-adapting agents may exercise sensorimotor processes through one another.
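The continuous time-intensity encoding described in this abstract can be illustrated with a minimal sketch. Both the linear law and the 4-second onset threshold below are assumptions for illustration, not the prototype's actual parameters:

```python
def ttc_to_intensity(ttc_s, ttc_max_s=4.0):
    """Map time-to-collision (seconds) to a normalised vibration amplitude
    in [0, 1]. Hazards further away than ttc_max_s produce no stimulus;
    intensity rises linearly to full as the hazard becomes imminent."""
    if ttc_s >= ttc_max_s:
        return 0.0
    return 1.0 - max(ttc_s, 0.0) / ttc_max_s

# 2 s to collision -> half intensity; 0 s -> full intensity.
```

A monotone, continuous mapping like this is what lets drivers read off temporal distance from stimulus strength alone, which is the property the binary variant in the second study deliberately removed.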

    Is Multimedia Multisensorial? - A Review of Mulsemedia Systems

    © 2018 Copyright held by the owner/author(s). Mulsemedia - multiple sensorial media - makes possible the inclusion of layered sensory stimulation and interaction through multiple sensory channels. The recent upsurge in technology and wearables provides mulsemedia researchers a vehicle for potentially boundless choice. However, in order to build systems that integrate various senses, there are still some issues that need to be addressed. This review deals with mulsemedia topics that remained insufficiently explored by previous work, with a focus on the multi-multi (multiple media - multiple senses) perspective, where multiple types of media engage multiple senses. Moreover, it addresses the evolution of previously identified challenges in this area and formulates new exploration directions. This article was funded by the European Union’s Horizon 2020 Research and Innovation program under Grant Agreement no. 688503

    Enhancing tele-operation - Investigating the effect of sensory feedback on performance

    The decline in the number of healthcare service providers in comparison to the growing numbers of service users prompts the development of technologies to improve the efficiency of healthcare services. One such technology which could offer support are assistive robots, remotely tele-operated to provide assistive care and support for older adults with assistive care needs and people living with disabilities. Tele-operation makes it possible to provide human-in-the-loop robotic assistance while also addressing safety concerns in the use of autonomous robots around humans. Unlike many other applications of robot tele-operation, safety is particularly significant as the tele-operated assistive robots will be used in close proximity to vulnerable human users. It is therefore important to provide as much information about the robot (and the robot workspace) as possible to the tele-operators to ensure safety, as well as efficiency. Since robot tele-operation is relatively unexplored in the context of assisted living, this thesis explores different feedback modalities that may be employed to communicate sensor information to tele-operators. The thesis presents research as it transitioned from identifying and evaluating additional feedback modalities that may be used to supplement video feedback, to exploring different strategies for communicating the different feedback modalities. Due to the fact that some of the sensors and feedback needed are not readily available, different design iterations were carried out to develop the necessary hardware and software for the studies carried out. The first human study was carried out to investigate the effect of feedback on tele-operator performance. Performance was measured in terms of task completion time, ease of use of the system, number of robot joint movements, and success or failure of the task. The effect of verbal feedback between the tele-operator and service users was also investigated. 
Feedback modalities have differing effects on performance metrics and, as a result, the choice of optimal feedback may vary from task to task. Results show that participants preferred scenarios with verbal feedback over scenarios without it, which was also reflected in their performance. Gaze metrics from the study also showed that it may be possible to understand how tele-operators interact with the system based on their areas of interest as they carry out tasks. These findings suggest that such studies can be used to improve the design of tele-operation systems. The need for social interaction between the tele-operator and service user suggests that visual and auditory feedback modalities will be engaged as tasks are carried out. This further reduces the number of available sensory modalities through which information can be communicated to tele-operators. A wrist-worn, Wi-Fi-enabled haptic feedback device was therefore developed and a study was carried out to investigate haptic sensitivities across the wrist. Results suggest that different locations on the wrist have varying sensitivities to haptic stimulation with and without video distraction, duration of haptic stimulation, and varying amplitudes of stimulation. This suggests that dynamic control of haptic feedback can be used to improve haptic perception across the wrist, and that it may be possible to display more than one type of sensor data to tele-operators during a task. The final study was designed to investigate whether participants could differentiate between different types of sensor data conveyed through different locations on the wrist via haptic feedback. The effect of an increased number of attempts on performance was also investigated. Total task completion time decreased with task repetition. Participants with prior gaming and robot experience had a more significant reduction in total task completion time when compared to participants without prior gaming and robot experience.
Reduction in task completion time was noticed for all stages of the task, but participants with additional feedback had higher task completion times than participants without supplementary feedback. Reduction in task completion time varied for different stages of the task. Even though gripper trajectory length reduced with task repetition, participants with supplementary feedback had longer gripper trajectories than participants without it, while participants with prior gaming experience had shorter gripper trajectories than participants without prior gaming experience. Perceived workload was also found to reduce with task repetition, but participants with feedback reported higher perceived workload than participants without feedback. However, participants without feedback reported higher frustration than participants with feedback. Results show that the effect of feedback may not be significant where participants can get the necessary information from video feedback. However, participants were fully dependent on supplementary feedback when video feedback could not provide the requisite information. The findings presented in this thesis have potential applications in healthcare and other applications of robot tele-operation and feedback. Findings can be used to improve feedback designs for tele-operation systems to ensure safe and efficient tele-operation. The thesis also shows ways in which visual feedback can be used with other feedback modalities. The haptic feedback device designed in this research may also be used to provide situational awareness for the visually impaired
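The idea of displaying several sensor channels through different wrist locations can be sketched as follows. The channel names, actuator sites, and linear scaling are hypothetical placeholders, not the thesis's actual design:

```python
# Hypothetical assignment of tele-operation sensor channels to wrist actuator sites.
ACTUATOR_SITE = {"gripper_force": "dorsal", "proximity": "ulnar"}

def wrist_cue(channel, value, v_min, v_max):
    """Normalise a sensor reading into a [0, 1] vibration amplitude for the
    actuator site assigned to that channel, clamping out-of-range values."""
    amplitude = (value - v_min) / (v_max - v_min)
    return ACTUATOR_SITE[channel], min(max(amplitude, 0.0), 1.0)

# A half-scale grip force vibrates the dorsal actuator at 50% amplitude.
```

Because sensitivity varies across wrist locations, a real implementation would additionally scale each site's amplitude by a per-site gain calibrated from the sensitivity study.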

    Advancing proxy-based haptic feedback in virtual reality

    This thesis advances haptic feedback for Virtual Reality (VR). Our work is guided by Sutherland's 1965 vision of the ultimate display, which calls for VR systems to control the existence of matter. To push towards this vision, we build upon proxy-based haptic feedback, a technique characterized by the use of passive tangible props. The goal of this thesis is to tackle the central drawback of this approach, namely its inflexibility, which still keeps it from fulfilling the vision of the ultimate display. Guided by four research questions, we first showcase the applicability of proxy-based VR haptics by employing the technique for data exploration. We then extend the VR system's control over users' haptic impressions in three steps. First, we contribute the class of Dynamic Passive Haptic Feedback (DPHF) alongside two novel concepts for conveying kinesthetic properties, like virtual weight and shape, through weight-shifting and drag-changing proxies. Conceptually orthogonal to this, we study how visual-haptic illusions can be leveraged to unnoticeably redirect the user's hand when reaching towards props. Here, we contribute a novel perception-inspired algorithm for Body Warping-based Hand Redirection (HR), an open-source framework for HR, and psychophysical insights. The thesis concludes by proving that the combination of DPHF and HR can outperform the individual techniques in terms of the achievable flexibility of proxy-based haptic feedback
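Body-warping hand redirection of the kind mentioned in this abstract is commonly implemented as a progress-scaled offset between a physical prop and its virtual counterpart. The sketch below shows that generic linear blend, not the thesis's perception-inspired algorithm; all names are illustrative:

```python
import math

def warped_hand(real, start, virtual_target, prop):
    """Generic body-warping hand redirection: as the real hand moves from
    `start` towards the physical prop, blend in the offset between prop and
    virtual target, so the rendered hand arrives at the virtual target
    exactly when the real hand touches the prop. Points are 3-tuples."""
    progress = min(max(math.dist(start, real) / math.dist(start, prop), 0.0), 1.0)
    return tuple(r + progress * (v - p)
                 for r, v, p in zip(real, virtual_target, prop))
```

At zero progress no offset is applied, and the full offset is only reached at the prop itself; whether such a warp stays below users' detection thresholds is exactly what psychophysical studies like those in this thesis examine.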

    Computer-supported movement guidance: investigating visual/visuotactile guidance and informing the design of vibrotactile body-worn interfaces

    This dissertation explores the use of interactive systems to support movement guidance, with applications in various fields such as sports, dance, physiotherapy, and immersive sketching. The research focuses on visual, haptic, and visuohaptic approaches and aims to overcome the limitations of traditional guidance methods, such as dependence on an expert and high costs for the novice. The main contributions of the thesis are (1) an evaluation of the suitability of various types of displays and visualizations of the human body for posture guidance, (2) an investigation into the influence of different viewpoints/perspectives, the addition of haptic feedback, and various movement properties on movement guidance in virtual environments, (3) an investigation into the effectiveness of visuotactile guidance for hand movements in a virtual environment, (4) two in-depth studies of haptic perception on the body to inform the design of wearable and handheld interfaces that leverage tactile output technologies, and (5) an investigation into new interaction techniques for tactile guidance of arm movements. The results of this research advance the state of the art in the field, provide design and implementation insights, and pave the way for new investigations in computer-supported movement guidance