204 research outputs found

    Augmented visual, auditory, haptic, and multimodal feedback in motor learning: A review

    It is generally accepted that augmented feedback, provided by a human expert or a technical display, effectively enhances motor learning. However, how to provide augmented feedback most effectively remains controversial. Related studies have focused primarily on simple or artificial tasks enhanced by visual feedback. Recently, technical advances have made it possible to investigate more complex, realistic motor tasks and to implement not only visual but also auditory, haptic, or multimodal augmented feedback. The aim of this review is to address the potential of augmented unimodal and multimodal feedback in the framework of motor learning theories. The review addresses the reasons for the different impacts of feedback strategies within or between the visual, auditory, and haptic modalities, and the challenges that need to be overcome to provide appropriate feedback in these modalities, either in isolation or in combination. Accordingly, the design criteria for successful visual, auditory, haptic, and multimodal feedback are elaborated.

    Force feedback facilitates multisensory integration during robotic tool use

    The present study investigated the effects of force feedback in relation to tool use on the multisensory integration of visuo-tactile information. Participants learned to control a robotic tool through a surgical robotic interface. Following tool-use training, participants performed a crossmodal congruency task by responding to tactile vibrations applied to their hands while ignoring visual distractors superimposed on the robotic tools. The first experiment found that tool-use training with force feedback facilitates multisensory integration of signals from the tool, as reflected in a stronger crossmodal congruency effect after training with force feedback compared with training without force feedback and with no training. The second experiment extends these findings by showing that training with realistic online force feedback resulted in a stronger crossmodal congruency effect than training in which force feedback was delayed. The present study highlights the importance of haptic information for multisensory integration and extends findings from classical tool-use studies to the domain of robotic tools. We argue that such crossmodal congruency effects are an objective measure of robotic tool integration and propose some potential applications in surgical robotics, robotic tools, and human-tool interaction.
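    The crossmodal congruency effect mentioned in this abstract is conventionally computed as the difference in mean reaction time between incongruent and congruent distractor trials: the larger the cost of an incongruent visual distractor, the stronger the visuo-tactile integration. A minimal sketch of that computation, using invented reaction-time values (milliseconds) purely for illustration:

```python
# Hypothetical per-trial reaction times (ms); all values invented.
congruent_rt = [512, 498, 530, 505]      # distractor at the vibrated location
incongruent_rt = [575, 560, 590, 566]    # distractor at a different location

def mean(xs):
    """Arithmetic mean of a list of numbers."""
    return sum(xs) / len(xs)

# Crossmodal congruency effect: incongruent minus congruent mean RT.
cce = mean(incongruent_rt) - mean(congruent_rt)
print(round(cce, 1))  # -> 61.5
```

A larger positive `cce` after force-feedback training is the pattern the study reports as evidence of stronger tool integration.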

    Multimodality in VR: A Survey

    Virtual reality has the potential to change the way we create and consume content in our everyday life. Entertainment, training, design and manufacturing, communication, and advertising are all applications that already benefit from this new medium reaching consumer level. VR is inherently different from traditional media: it offers a more immersive experience and has the ability to elicit a sense of presence through the place and plausibility illusions. It also gives the user unprecedented capabilities to explore their environment, in contrast with traditional media. In VR, as in the real world, users integrate the multimodal sensory information they receive to create a unified perception of the virtual world. Therefore, the sensory cues that are available in a virtual environment can be leveraged to enhance the final experience. This may include increasing realism or the sense of presence; predicting or guiding the attention of the user through the experience; or increasing their performance if the experience involves the completion of certain tasks. In this state-of-the-art report, we survey the body of work addressing multimodality in virtual reality, and its role and benefits in the final user experience. The works reviewed here thus encompass several fields of research, including computer graphics, human-computer interaction, and psychology and perception. Additionally, we give an overview of different applications that leverage multimodal input in areas such as medicine, training and education, or entertainment; we include works in which the integration of multiple sensory information yields significant improvements, demonstrating how multimodality can play a fundamental role in the way VR systems are designed, and VR experiences created and consumed.

    Haptics: Science, Technology, Applications

    This open access book constitutes the proceedings of the 12th International Conference on Human Haptic Sensing and Touch Enabled Computer Applications, EuroHaptics 2020, held in Leiden, The Netherlands, in September 2020. The 60 papers presented in this volume were carefully reviewed and selected from 111 submissions. They were organized in topical sections on haptic science, haptic technology, and haptic applications. This year's focus is on accessibility.

    Haptics: Science, Technology, Applications

    This open access book constitutes the proceedings of the 13th International Conference on Human Haptic Sensing and Touch Enabled Computer Applications, EuroHaptics 2022, held in Hamburg, Germany, in May 2022. The 36 regular papers included in this book were carefully reviewed and selected from 129 submissions. They were organized in topical sections as follows: haptic science, haptic technology, and haptic applications.

    Multisensory Approaches to Restore Visual Functions


    Multimodality in VR: A survey

    Virtual reality (VR) is rapidly growing, with the potential to change the way we create and consume content. In VR, users integrate the multimodal sensory information they receive to create a unified perception of the virtual world. In this survey, we review the body of work addressing multimodality in VR, its role and benefits in user experience, and the different applications that leverage multimodality across many disciplines. These works encompass several fields of research and demonstrate that multimodality plays a fundamental role in VR: enhancing the experience, improving overall performance, and yielding unprecedented abilities in skill and knowledge transfer.

    Translating novel findings of perceptual-motor codes into the neuro-rehabilitation of movement disorders

    The bidirectional flow of perceptual and motor information has recently proven useful as a rehabilitative tool for re-building motor memories. We analyzed how the visual-motor approach has been successfully applied in neurorehabilitation, leading to surprisingly rapid and effective improvements in action execution. We proposed that the contribution of multiple sensory channels during treatment enables individuals to predict and optimize motor behavior, having a greater effect than visual input alone. We explored how state-of-the-art neuroscience techniques show direct evidence that employment of the visual-motor approach leads to increased motor cortex excitability and synaptic and cortical map plasticity. This super-additive response to multimodal stimulation may maximize neural plasticity, potentiating the effect of conventional treatment, and will be a valuable approach when it comes to advances in innovative methodologies.

    Structuring a virtual environment for sport training: A case study on rowing technique

    Advancements in technology and the possibility of their integration in the domain of virtual environments allow access to new application domains previously limited to highly expensive setups. This is specifically the case for sport training, which can take advantage of the improved quality of measurement systems and computing techniques. Given this, the challenge that emerges relates to how training is performed and how the transfer from the virtual setup to the real case can be evaluated. In this work we discuss the system architecture of a VE for sport training, taking a rowing training system as a case study. The paper addresses in particular the challenges of training technique in rowing.

    Effects of Virtual Reality-Based Multimodal Audio-Tactile Cueing in Patients With Spatial Attention Deficits: Pilot Usability Study.

    BACKGROUND Virtual reality (VR) devices are increasingly being used in medicine and other areas for a broad spectrum of applications. One possible application of VR involves the creation of an environment manipulated in a way that helps patients with disturbances in the spatial allocation of visual attention (so-called hemispatial neglect). One approach to ameliorate neglect is to apply cross-modal cues (ie, cues in sensory modalities other than the visual one, eg, auditory and tactile) to guide visual attention toward the neglected space. So far, no study has investigated the effects of audio-tactile cues in VR on the spatial deployment of visual attention in neglect patients. OBJECTIVE This pilot study aimed to investigate the feasibility and usability of multimodal (audio-tactile) cueing, as implemented in a 3D VR setting, in patients with neglect, and to obtain preliminary results concerning the effects of different types of cues on visual attention allocation compared with noncued conditions. METHODS Patients were placed in a virtual environment using a head-mounted display (HMD). The inlay of the HMD was equipped to deliver tactile feedback to the forehead. The task was to find and flag appearing birds. The birds could appear at 4 different presentation angles (lateral and paracentral on the left and right sides), and with (auditory, tactile, or audio-tactile cue) or without (no cue) a spatially meaningful cue. Task usability and feasibility, and 2 simple in-task measures (performance and early orientation), were assessed in 12 right-hemispheric stroke patients with neglect (5 with and 7 without additional somatosensory impairment). RESULTS The new VR setup showed high usability (mean score 10.2, SD 1.85; maximum score 12) and no relevant side effects (mean score 0.833, SD 0.834; maximum score 21).
A repeated measures ANOVA on task performance data, with presentation angle, cue type, and group as factors, revealed a significant main effect of cue type (F3,30=9.863; P<.001) and a significant 3-way interaction (F9,90=2.057; P=.04). Post hoc analyses revealed that among patients without somatosensory impairment, any cue led to better performance than no cue for targets on the left side, and audio-tactile cues did not seem to have additive effects. Among patients with somatosensory impairment, performance was better with both auditory and audio-tactile cueing than with no cue at every presentation angle; conversely, tactile cueing alone had no significant effect at any presentation angle. Analysis of early orientation data showed that any type of cue triggered better orientation in both groups for lateral presentation angles, possibly reflecting an early alerting effect. CONCLUSIONS Overall, audio-tactile cueing seems to be a promising method to guide patient attention. For instance, it could in the future be used as an add-on method that supports attentional orientation during established therapeutic approaches.
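    The F-ratio behind the cue-type main effect reported above can be illustrated with a hand-computed one-way repeated-measures ANOVA. Everything below is a sketch with invented subject counts and scores; the actual analysis also included presentation angle and group as factors, which this simplified version omits:

```python
# One-way repeated-measures ANOVA sketch: each subject is measured once
# per condition, so between-subject variability is removed from the error term.

def rm_anova(data):
    """data: one list per subject, one score per condition.
    Returns (F, df_condition, df_error)."""
    n = len(data)           # subjects
    k = len(data[0])        # conditions
    grand = sum(sum(row) for row in data) / (n * k)
    cond_means = [sum(row[j] for row in data) / n for j in range(k)]
    subj_means = [sum(row) / k for row in data]
    ss_cond = n * sum((m - grand) ** 2 for m in cond_means)
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_err = ss_total - ss_cond - ss_subj   # residual after removing subject effect
    df_cond, df_err = k - 1, (k - 1) * (n - 1)
    f = (ss_cond / df_cond) / (ss_err / df_err)
    return f, df_cond, df_err

# Hypothetical hit rates for the 4 cue conditions
# (no cue, auditory, tactile, audio-tactile), 3 invented subjects.
scores = [
    [0.55, 0.80, 0.70, 0.85],
    [0.50, 0.75, 0.65, 0.80],
    [0.60, 0.85, 0.72, 0.88],
]
f, df1, df2 = rm_anova(scores)
print(df1, df2)  # -> 3 6
```

With 4 cue levels the numerator degrees of freedom are always 3, which is why the study reports its main effect as F3,30: the error degrees of freedom scale with the number of subjects.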