
    Effects of haptic feedback in dual-task teleoperation of a mobile robot

    Teleoperation systems are challenging for human operators: their predominantly visual interfaces limit the ability to acquire situation awareness (e.g., to maintain safe teleoperation). This limitation, coupled with the dual-task problem of teleoperating a mobile robot, increases the operator's cognitive load and impairs motor performance. Our motivation is to offload some of the visual information to a secondary perceptual channel (haptic) by proposing an assisted teleoperation system. This system uses haptic feedback to alert the operator to obstacle proximity without directly influencing the operator's command inputs. The objective of this paper is to evaluate and validate the efficacy of our system's haptic feedback in providing obstacle proximity information to the operator. A user experiment was conducted to emulate the dual-task problem by adding a concurrent task for cognitive distraction. Our results showed significant differences in the time to complete the navigation task and the duration of collisions between the haptic feedback condition and the control condition.
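
    A minimal sketch of how such proximity-based feedback might be generated; the alert threshold, amplitude ramp, and all names are illustrative assumptions, not details from the paper:

        # Sketch: map obstacle distance to vibration amplitude for an assisted
        # teleoperation alert channel. All constants are illustrative assumptions.

        ALERT_DISTANCE_M = 1.0   # start vibrating when an obstacle is closer than this
        MIN_DISTANCE_M = 0.15    # treat anything closer as maximum urgency

        def proximity_to_amplitude(distance_m: float) -> float:
            """Return a vibration amplitude in [0, 1] that grows as the obstacle nears.

            The feedback only informs the operator; it never modifies the
            operator's command inputs, matching the paper's design goal.
            """
            if distance_m >= ALERT_DISTANCE_M:
                return 0.0
            clamped = max(distance_m, MIN_DISTANCE_M)
            # Linear ramp from 0 (at the alert boundary) to 1 (at minimum distance).
            return (ALERT_DISTANCE_M - clamped) / (ALERT_DISTANCE_M - MIN_DISTANCE_M)

        if __name__ == "__main__":
            for d in (1.2, 0.8, 0.4, 0.1):
                print(f"distance {d:.2f} m -> amplitude {proximity_to_amplitude(d):.2f}")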

    Improved mutual understanding for human-robot collaboration: Combining human-aware motion planning with haptic feedback devices for communicating planned trajectory

    In a collaborative scenario, communication between humans and robots is fundamental to achieving efficiency and ergonomics in task execution. Much research has addressed enabling a robot system to understand and predict human behaviour, allowing the robot to adapt its motion to avoid collisions with human workers. When the production task has a high degree of variability, the robot's movements can be difficult to predict, leading to a feeling of anxiety in the worker when the robot changes its trajectory and approaches, since the worker has no information about the robot's planned movement. Additionally, without such information, the human worker cannot effectively plan their own activity without forcing the robot to constantly replan its movement. We propose a novel approach to communicating the robot's intentions to a human worker: haptic feedback devices that notify the worker about the robot's currently planned trajectory and changes in its status. To verify the effectiveness of the developed human-machine interface under the conditions of a shared collaborative workspace, a user study was designed and conducted among 16 participants, whose objective was to accurately recognise the goal position of the robot during its movement. Data collected during the experiment included both objective and subjective parameters. Statistically significant results indicated that all participants improved their task completion time by over 45% and were generally more subjectively satisfied when completing the task with the haptic feedback devices equipped. The results also suggest the usefulness of the developed notification system, since it improved users' awareness of the robot's motion plan.
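
    A minimal sketch of how such a notification might be triggered when the planner commits to a new trajectory; the device interface, pulse encoding, and message shape are hypothetical, not the paper's actual design:

        # Sketch: notify a worker over a haptic wristband when the robot's planner
        # commits to a new trajectory. Device API and message shape are assumptions.

        from dataclasses import dataclass

        @dataclass
        class PlannedTrajectory:
            goal_id: int          # which of the known goal positions the robot heads to
            replanned: bool       # True if this replaces a previously announced plan

        class HapticWristband:
            """Stand-in for a real haptic device driver (hypothetical interface)."""
            def pulse(self, count: int, duration_s: float = 0.2) -> None:
                print(f"wristband: {count} pulse(s) of {duration_s}s")

        def announce(trajectory: PlannedTrajectory, device: HapticWristband) -> None:
            # One pulse per goal index communicates the target; a longer extra pulse
            # flags a replanning event so the worker knows the plan changed.
            device.pulse(count=trajectory.goal_id + 1)
            if trajectory.replanned:
                device.pulse(count=1, duration_s=0.6)

        if __name__ == "__main__":
            announce(PlannedTrajectory(goal_id=2, replanned=True), HapticWristband())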

    Haptics: Science, Technology, Applications

    This open access book constitutes the proceedings of the 12th International Conference on Human Haptic Sensing and Touch Enabled Computer Applications, EuroHaptics 2020, held in Leiden, The Netherlands, in September 2020. The 60 papers presented in this volume were carefully reviewed and selected from 111 submissions. They were organized in topical sections on haptic science, haptic technology, and haptic applications. This year's focus is on accessibility.

    Head-mounted Sensory Augmentation System for Navigation in Low Visibility Environments

    Sensory augmentation can be used to assist in tasks where sensory information is limited or sparse. This thesis focuses on the design and investigation of a head-mounted vibrotactile sensory augmentation interface to assist navigation in low visibility environments, such as firefighters' navigation or travel aids for visually impaired people. A novel head-mounted vibrotactile interface comprising a 1-by-7 vibrotactile display worn on the forehead is developed. A series of psychophysical studies is carried out with this display to (1) determine the vibrotactile absolute threshold, (2) investigate the accuracy of vibrotactile localization, and (3) evaluate the funneling illusion and apparent motion as sensory phenomena that could be used to communicate navigation signals. The results of these studies provide guidelines for the design of head-mounted interfaces. A second-generation head-mounted sensory augmentation interface called the Mark-II Tactile Helmet is developed for the application of firefighters' navigation. It consists of a ring of ultrasound sensors mounted on the outside of a helmet, a microcontroller, two batteries and a refined vibrotactile display composed of seven vibration motors, based on the results of the aforementioned psychophysical studies. A 'tactile language', that is, a set of distinguishable vibrotactile patterns, is developed for communicating navigation commands with the Mark-II Tactile Helmet. Four possible combinations of two command presentation modes (continuous, discrete) and two command types (recurring, single) are evaluated for their effectiveness in guiding users along a virtual wall in a structured environment. Continuous and discrete presentation modes use spatiotemporal patterns that induce the experience of apparent movement and discrete movement on the forehead, respectively. The recurring command type presents the tactile command repeatedly with an interval between patterns of 500 ms, while the single command type presents the tactile command just once, when there is a change in the command. The effectiveness of this tactile language is evaluated according to the objective measures of the users' walking speed and the smoothness of their trajectory parallel to the virtual wall, and subjective measures of utility and comfort employing Likert-type rating scales. The Recurring Continuous (RC) commands, which exploit the phenomenon of apparent motion, are most effective in generating efficient routes and fast travel, and are most preferred. Finally, the optimal tactile language (RC) is compared with audio guidance using verbal instructions to investigate effectiveness in delivering navigation commands. The results show that haptic guidance leads to better performance as well as lower cognitive workload compared to auditory feedback. This research demonstrates that a head-mounted sensory augmentation interface can enhance spatial awareness in low visibility environments and could help firefighters' navigation by providing them with supplementary sensory information.
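
    A minimal sketch of how the winning Recurring Continuous (RC) mode might be scheduled on the seven-tactor display; the 500 ms inter-pattern interval comes from the abstract, while the per-tactor timing and driver interface are illustrative assumptions:

        # Sketch: schedule a Recurring Continuous (RC) "turn right" command on a
        # 1-by-7 forehead tactor array. Timings other than the 500 ms inter-pattern
        # interval from the thesis are illustrative assumptions.

        import time

        NUM_TACTORS = 7
        INTER_PATTERN_INTERVAL_S = 0.5   # 500 ms between repetitions (recurring mode)
        STEP_S = 0.08                    # per-tactor onset spacing (assumed value)

        def activate(tactor: int, on: bool) -> None:
            print(f"tactor {tactor}: {'on' if on else 'off'}")  # replace with motor driver

        def sweep(direction: int) -> None:
            """Sweep activation across the array; direction=+1 right, -1 left.

            Closely spaced sequential onsets across adjacent tactors evoke the
            apparent-motion percept used by the continuous presentation mode."""
            order = range(NUM_TACTORS) if direction > 0 else range(NUM_TACTORS - 1, -1, -1)
            for t in order:
                activate(t, True)
                time.sleep(STEP_S)
                activate(t, False)

        def recurring_continuous(direction: int, repetitions: int) -> None:
            # Recurring mode repeats the pattern at a fixed interval.
            for _ in range(repetitions):
                sweep(direction)
                time.sleep(INTER_PATTERN_INTERVAL_S)

        if __name__ == "__main__":
            recurring_continuous(direction=+1, repetitions=3)  # "turn right" command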

    How to Build an Embodiment Lab: Achieving Body Representation Illusions in Virtual Reality

    Advances in computer graphics algorithms and virtual reality (VR) systems, together with the reduction in cost of associated equipment, have led scientists to consider VR as a useful tool for conducting experimental studies in fields such as neuroscience and experimental psychology. In particular, virtual body ownership, where the feeling of ownership over a virtual body is elicited in the participant, has become a useful tool in the study of body representation in cognitive neuroscience and psychology, which is concerned with how the brain represents the body. Although VR has been shown to be a useful tool for exploring body ownership illusions, integrating the various technologies necessary for such a system can be daunting. In this paper we discuss the technical infrastructure necessary to achieve virtual embodiment. We describe a basic VR system and how it may be used for this purpose, and then extend this system with the introduction of real-time motion capture, a simple haptics system and the integration of physiological and brain electrical activity recordings.
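
    A minimal sketch of the per-frame integration such a lab requires; every interface below is a placeholder standing in for the actual head tracker, motion-capture, and haptics drivers, not the authors' system:

        # Sketch: per-frame integration loop of an embodiment setup, combining head
        # tracking, body motion capture, and a haptic event. All interfaces are
        # placeholders for the lab's actual drivers.

        def read_head_pose():      # from the HMD tracker
            return {"position": (0.0, 1.7, 0.0), "rotation": (0.0, 0.0, 0.0, 1.0)}

        def read_body_pose():      # from the motion-capture system
            return {"joints": {"left_hand": (0.3, 1.2, 0.4)}}

        def render_avatar(head, body) -> None:
            print("render first-person avatar at", head["position"])

        def maybe_trigger_haptics(body) -> None:
            # e.g., vibrate when the tracked hand coincides with a virtual touch,
            # providing the visuo-tactile correlation that supports ownership.
            pass

        def frame() -> None:
            head, body = read_head_pose(), read_body_pose()
            render_avatar(head, body)     # visuomotor synchrony drives the illusion
            maybe_trigger_haptics(body)   # synchronized touch strengthens it

        if __name__ == "__main__":
            frame()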

    Multimodal interaction: developing an interaction concept for a touchscreen incorporating tactile feedback

    The touchscreen, as an alternative user interface for applications that normally require mice and keyboards, has become increasingly commonplace, showing up on mobile devices, on vending machines, on ATMs and in the control panels of machines in industry, where conventional input devices cannot provide intuitive, rapid and accurate user interaction with the content of the display. The exponential growth in processing power on the PC, together with advances in understanding human communication channels, has had a significant effect on the design of usable, human-factored interfaces on touchscreens, and on the number and complexity of applications available on touchscreens. Although computer-driven touchscreen interfaces provide programmable and dynamic displays, the absence of the expected tactile cues on the hard and static surfaces of conventional touchscreens challenges interface design and touchscreen usability, particularly in distracting, low-visibility environments. Current technology allows the human tactile modality to be used in touchscreens. While the visual channel converts graphics and text unidirectionally from the computer to the end user, tactile communication features a bidirectional information flow to and from the user, as the user perceives and acts on the environment and the system responds to changing contextual information. Tactile sensations such as detents and pulses provide users with cues that make selecting and controlling a more intuitive process. Tactile features can compensate for deficiencies in some of the human senses, especially in tasks which carry a heavy visual or auditory burden. In this study, an interaction concept for tactile touchscreens is developed with a view to employing the key characteristics of the human sense of touch effectively and efficiently, especially in distracting environments where vision is impaired and hearing is overloaded. As a first step toward improving the usability of touchscreens through the integration of tactile effects, different mechanical solutions for producing motion in tactile touchscreens are investigated, to provide a basis for selecting suitable vibration directions when designing tactile displays. Building on these results, design know-how regarding tactile feedback patterns is further developed to enable dynamic simulation of UI controls, in order to give users a sense of perceiving real controls on a highly natural touch interface. To study the value of adding tactile properties to touchscreens, haptically enhanced UI controls are then further investigated with the aim of mapping haptic signals to different usage scenarios to perform primary and secondary tasks with touchscreens. The findings of the study are intended for consideration and discussion as a guide to further development of tactile stimuli, haptically enhanced user interfaces and touchscreen applications.
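
    A minimal sketch of the kind of detent simulation the abstract mentions for UI controls; the step size, pulse shape, and actuator call are illustrative assumptions:

        # Sketch: play a "detent" click each time a touchscreen slider crosses a
        # step boundary, simulating the feel of a mechanical control. The actuator
        # call and step size are illustrative assumptions.

        DETENT_STEP = 0.1  # slider travels 0..1; one detent per 10% of travel

        def play_pulse(amplitude: float = 1.0, duration_ms: int = 15) -> None:
            print(f"pulse a={amplitude} t={duration_ms}ms")  # replace with actuator driver

        class DetentSlider:
            def __init__(self) -> None:
                self._last_index = 0

            def on_touch_move(self, position: float) -> None:
                """Called with slider position in [0, 1] as the finger drags."""
                index = int(position / DETENT_STEP)
                if index != self._last_index:
                    play_pulse()            # crisp click marks each step boundary
                    self._last_index = index

        if __name__ == "__main__":
            slider = DetentSlider()
            for p in (0.02, 0.08, 0.12, 0.19, 0.23):
                slider.on_touch_move(p)     # pulses fire at 0.12 and 0.23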


    Somatic ABC's: A Theoretical Framework for Designing, Developing and Evaluating the Building Blocks of Touch-Based Information Delivery

    Situations of sensory overload are steadily becoming more frequent as the ubiquity of technology approaches reality, particularly with the advent of socio-communicative smartphone applications and pervasive, high-speed wireless networks. Although the ease of accessing information has improved our communication effectiveness and efficiency, our visual and auditory modalities (those modalities that today's computerized devices and displays largely engage) have become overloaded, creating possibilities for distraction, delay and high cognitive load, which in turn can lead to a loss of situational awareness, increasing the chances of life-threatening situations such as texting while driving. Surprisingly, alternative modalities for information delivery have seen little exploration. Touch, in particular, is a promising candidate, given that the skin is our largest sensory organ, with impressive spatial and temporal acuity. Although some approaches have been proposed for touch-based information delivery, they are not without limitations, including high learning curves, limited applicability and/or limited expressiveness. This is largely due to the lack of a versatile, comprehensive design theory; specifically, a theory that addresses the design of touch-based building blocks for expandable, efficient, rich and robust touch languages that are easy to learn and use. Moreover, beyond design, there is a lack of implementation and evaluation theories for such languages. To overcome these limitations, a unified theoretical framework, inspired by natural spoken language, is proposed, called Somatic ABC's, for Articulating (designing), Building (developing) and Confirming (evaluating) touch-based languages. To evaluate the usefulness of Somatic ABC's, its design, implementation and evaluation theories were applied to create communication languages for two very distinct application areas: audio-described movies and motor learning. These applications were chosen as they presented opportunities for complementing communication by offloading information, typically conveyed visually and/or aurally, to the skin. For both studies, it was found that Somatic ABC's aided the design, development and evaluation of rich somatic languages with distinct and natural communication units.
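
    In the spirit of the framework's spoken-language analogy, a minimal sketch of composing vibrotactile primitives into reusable 'words'; the primitive inventory, vocabulary, and cue names are hypothetical, not taken from the dissertation:

        # Sketch: compose touch-based "phonemes" (vibrotactile primitives) into
        # reusable words, echoing a spoken-language-inspired touch language.
        # The primitive inventory and vocabulary are illustrative assumptions.

        PHONEMES = {
            "short": [(1.0, 0.10)],                 # (amplitude, duration_s) pairs
            "long":  [(1.0, 0.40)],
            "rise":  [(0.3, 0.15), (0.7, 0.15), (1.0, 0.15)],
        }

        WORDS = {  # hypothetical cue set for an audio-described-movie scenario
            "scene_change": ["long", "short"],
            "speaker_left": ["rise", "short"],
        }

        def render(word: str) -> list[tuple[float, float]]:
            """Flatten a word into an actuator-ready (amplitude, duration) sequence."""
            return [seg for phoneme in WORDS[word] for seg in PHONEMES[phoneme]]

        if __name__ == "__main__":
            print(render("scene_change"))  # [(1.0, 0.4), (1.0, 0.1)]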

    Touch- and Walkable Virtual Reality to Support Blind and Visually Impaired Peoples' Building Exploration in the Context of Orientation and Mobility

    Access to digital content and information is becoming increasingly important for successful participation in today's increasingly digitized civil society. Such information is mostly presented visually, which restricts access for blind and visually impaired people. The most fundamental barrier is often basic orientation and mobility (and consequently, social mobility), including gaining knowledge about unknown buildings before visiting them. To bridge such barriers, technological aids should be developed and deployed. A trade-off is needed between technologically low-threshold, accessible and disseminable aids and interactive-adaptive but complex systems. The adaptation of virtual reality (VR) technology spans a wide range of development and design decisions. The main benefits of VR technology are increased interactivity, updatability, and the possibility to explore virtual spaces as proxies of real ones without real-world hazards or dependence on the limited availability of sighted assistants. However, virtual objects and environments have no physicality. This thesis therefore investigates which VR interaction forms are reasonable (i.e., offer a reasonable dissemination potential) for making virtual representations of real buildings touchable or walkable in the context of orientation and mobility. Although there are already developments and evaluations of VR technology that are disjoint in content and technology, there is a lack of empirical evidence. Additionally, this thesis provides a survey of the different interactions. After considering human physiology, assistive media (e.g., tactile maps), and technological characteristics, the current state of the art of VR is introduced, and its application for blind and visually impaired users, and the way to get there, is discussed by introducing a novel taxonomy. In addition to the interaction itself, characteristics of the user and the device, the application context, and user-centered development and evaluation are used as classifiers. The following chapters are motivated by explorative approaches at 'small scale' (using so-called data gloves) and at 'large scale' (using avatar-controlled VR locomotion). They present empirical studies with blind and visually impaired users and give formative insight into how virtual objects within hands' reach can be grasped using haptic feedback and how different kinds of VR locomotion can be applied to explore virtual environments. From this, device-independent technological possibilities and challenges for further improvement are derived. On the basis of these findings, subsequent research can focus on aspects such as the specific design of interactive elements, temporally and spatially collaborative application scenarios, and the evaluation of an entire application workflow (i.e., scanning the real environment and exploring it virtually for training purposes, as well as designing the entire application in a long-term accessible manner).
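
    A minimal sketch of the 'small scale' interaction the thesis studies, triggering glove vibration when the tracked hand enters a virtual object's volume; the geometry, object names, and driver call are illustrative assumptions:

        # Sketch: vibrate a data glove when the user's hand enters a virtual
        # object's bounding box, making in-reach geometry graspable without
        # vision. Glove interface and room layout are illustrative assumptions.

        from dataclasses import dataclass

        @dataclass
        class Box:
            name: str
            lo: tuple[float, float, float]   # min corner (x, y, z) in metres
            hi: tuple[float, float, float]   # max corner

            def contains(self, p: tuple[float, float, float]) -> bool:
                return all(l <= c <= h for l, c, h in zip(self.lo, p, self.hi))

        ROOM = [Box("door frame", (0.0, 0.0, 0.0), (0.1, 2.0, 1.0)),
                Box("handrail",   (0.5, 0.9, 0.0), (3.0, 1.0, 0.1))]

        def on_hand_moved(hand_pos: tuple[float, float, float]) -> None:
            for obj in ROOM:
                if obj.contains(hand_pos):
                    print(f"glove: vibrate (touching {obj.name})")  # real driver here
                    return

        if __name__ == "__main__":
            on_hand_moved((0.05, 1.0, 0.5))   # inside the door frame -> vibration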

    Enhancing Situational Awareness Through Haptics Interaction In Virtual Environment Training Systems

    Virtual environment (VE) technology offers a viable training option for developing knowledge, skills and attitudes (KSAs) within domains that have limited live training opportunities due to personnel safety and cost (e.g., live fire exercises). However, to ensure these VE training systems provide effective training and transfer, their designers must ensure that training goals and objectives are clearly defined and that VEs are designed to support development of the required KSAs. Perhaps the greatest benefit of VE training is its ability to provide a multimodal training experience, where trainees can see, hear and feel their surrounding environment, thus engaging them in training scenarios to further their expertise. This work focused on enhancing situation awareness (SA) within a training VE through appropriate use of multimodal cues. The Multimodal Optimization of Situation Awareness (MOSA) model was developed to identify theoretical benefits of various environmental and individual multimodal cues on SA components. Specific focus was on the benefits associated with adding cues that activate the haptic system (i.e., the kinesthetic/cutaneous sensory systems) or the vestibular system in a VE. An empirical study was completed to evaluate the effectiveness of adding two independent spatialized tactile cues to a Military Operations on Urbanized Terrain (MOUT) VE training system, and how head tracking (i.e., the addition of rotational vestibular cues) impacted spatial awareness and performance when tactile cues were added during training. Results showed that tactile cues enhanced spatial awareness and performance both during repeated training and within a transfer environment, yet there were costs associated with including the two cues together during training, as each cue focused attention on a different aspect of the global task. In addition, the results suggest that spatial awareness benefits from a single point indicator (i.e., spatialized tactile cues) may be impacted by interaction mode, as performance benefits were seen when tactile cues were paired with head tracking. Future research should further examine the theoretical benefits outlined in the MOSA model, and further validate that benefits can be realized through appropriate activation of multimodal cues for targeted training objectives during training, near transfer and far transfer (i.e., real-world performance).
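
    A minimal sketch of how a spatialized tactile cue could be kept world-anchored using tracked yaw, in the spirit of pairing tactile cues with head tracking; the belt geometry and function names are illustrative assumptions, not the study's apparatus:

        # Sketch: spatialize a tactile cue on an 8-tactor torso belt, correcting
        # the target bearing by tracked yaw so the cue stays world-anchored.
        # Belt geometry and names are illustrative assumptions.

        NUM_TACTORS = 8  # evenly spaced around the torso, tactor 0 at the front

        def select_tactor(target_bearing_deg: float, tracked_yaw_deg: float) -> int:
            """Pick the tactor pointing toward a world-frame target bearing."""
            relative = (target_bearing_deg - tracked_yaw_deg) % 360.0
            return round(relative / (360.0 / NUM_TACTORS)) % NUM_TACTORS

        if __name__ == "__main__":
            # Target due east (90 deg); trainee rotates from north toward east:
            for yaw in (0.0, 45.0, 90.0):
                print(f"yaw {yaw:5.1f} -> tactor {select_tactor(90.0, yaw)}")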