    Novel Multimodal Feedback Techniques for In-Car Mid-Air Gesture Interaction

    This paper presents an investigation into the effects of different feedback modalities on mid-air gesture interaction for in-car infotainment systems. Car crashes and near-crash events are most commonly caused by driver distraction, and mid-air interaction can reduce this distraction by lowering the visual demand of the infotainment system. Despite a range of available modalities, feedback in mid-air gesture systems is generally provided through visual displays. We conducted a simulated driving study to investigate how different types of multimodal feedback can support in-air gestures, considering the effects of feedback modality on eye-gaze behaviour and on the driving and gesturing tasks. We found that feedback modality influenced gesturing behaviour, and that drivers corrected falsely executed gestures more often in non-visual conditions. Our findings show that non-visual feedback can reduce visual distraction significantly.

    Suppressing visual feedback in written composition: Effects on processing demands and coordination of the writing processes

    The goal of this experiment was to investigate the role of visual feedback during written composition. The effects of suppressing visual feedback were analysed both on processing demands and on the on-line coordination of low-level execution processes and of high-level conceptual and linguistic processes. Writers composed a text and copied it either with or without visual feedback. Processing demands of the writing processes were evaluated with reaction times (RTs) to secondary auditory probes, analysed according to whether participants were handwriting (in the composing and copying tasks) or engaged in high-level processes (when pausing in the composing task). Suppressing visual feedback increased RT interference (secondary RT minus baseline RT) during handwriting in the copying task but not during pauses in the composing task. This suggests that the suppression of visual feedback affected the processing demands of execution processes only, and not those of high-level conceptual and linguistic processes. This is confirmed by an analysis of the quality of the texts produced by participants, which was little, if at all, affected by the suppression of visual feedback. The results also indicate that the increase in the processing demands of execution related to the suppression of visual feedback affected the on-line coordination of the writing processes. Indeed, when visual feedback was suppressed, the RT interference associated with handwriting did not differ reliably between the copying and composing tasks, whereas with visual feedback it did, being lower in the copying task than in the composing task. When visual feedback was suppressed, writers thus activated execution processes and high-level writing processes step by step, whereas they activated these processes concurrently when composing with visual feedback.
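
    The interference measure used here is simply the secondary-probe reaction time minus the baseline reaction time. A minimal sketch of the computation, with hypothetical RT values in milliseconds:

        # Secondary-task reaction time (RT) interference, as defined above:
        # mean RT to auditory probes during writing minus mean baseline RT.
        def rt_interference(probe_rts_ms, baseline_rts_ms):
            mean = lambda xs: sum(xs) / len(xs)
            return mean(probe_rts_ms) - mean(baseline_rts_ms)

        # Hypothetical probe RTs per condition against a shared baseline.
        baseline = [420, 435, 410]
        print(rt_interference([610, 655, 590], baseline))  # handwriting, no visual feedback
        print(rt_interference([540, 560, 525], baseline))  # handwriting, with visual feedback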

    Speech Disruption During Delayed Auditory Feedback with Simultaneous Visual Feedback

    Delayed auditory feedback (DAF) of one's own speech can cause dysfluency. The purpose of this study was to explore whether providing visual feedback in addition to DAF would ameliorate speech disruption. Speakers repeated sentences and heard their auditory feedback delayed, with and without simultaneous visual feedback. DAF led to increased sentence durations and an increased number of speech disruptions. Although visual feedback did not reduce the effects of DAF on duration, a promising but nonsignificant trend towards fewer speech disruptions was observed when visual feedback was provided. This trend was significant in speakers who were overall less affected by DAF. The results suggest the possibility that speakers strategically use alternative sources of feedback.
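
    The delay manipulation itself is straightforward: the speaker's own voice is played back shifted by a fixed interval (the delay used in this study is not stated here; 200 ms is a common choice in DAF work). A minimal sketch of such a delay line, assuming mono samples at 16 kHz and hypothetical names throughout:

        from collections import deque

        def delayed_feedback(samples, delay_ms=200, sample_rate=16000):
            # Emit each input sample delay_ms later; silence fills the initial gap.
            delay_len = int(sample_rate * delay_ms / 1000)
            buf = deque([0.0] * delay_len)
            for s in samples:
                buf.append(s)
                yield buf.popleft()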

    Touching the invisible: Localizing ultrasonic haptic cues

    While mid-air gestures offer new possibilities to interact with or around devices, some situations, such as interacting with applications, playing games, or navigating, may require visual attention to be focused on a main task. Ultrasonic haptic feedback can provide 3D spatial haptic cues that do not demand visual attention in these contexts. In this paper, we present an initial study of active exploration of ultrasonic haptic virtual points that investigates spatial localization with and without the use of the visual modality. Our results show that, when haptic feedback giving the location of a widget is provided, users perform 50% more accurately than with visual feedback alone. When provided with the haptic location of a widget alone, users are more than 30% more accurate than when given a visual location. When aware of the location of the haptic feedback, active exploration decreased the minimum recommended widget size from 2 cm² to 1 cm² compared with the passive exploration of previous studies. Our results will allow designers to create better mid-air interactions using this new form of haptic feedback.

    Variable force and visual feedback effects on teleoperator man/machine performance

    An experimental study was conducted to determine the effects of various forms of visual and force feedback on human performance in several telemanipulation tasks. Experiments were conducted with varying frame rates and subtended visual angles, with and without force feedback.

    Summation of visual and mechanosensory feedback in Drosophila flight control

    The fruit fly Drosophila melanogaster relies on feedback from multiple sensory modalities to control flight maneuvers. Two sensory organs, the compound eyes and the mechanosensory hindwings called halteres, are capable of encoding the angular velocity of the body during flight. Although motor reflexes driven by the two modalities have been studied individually, little is known about how the two sensory feedback channels are integrated during flight. Using a specialized flight simulator, we presented tethered flies with simultaneous visual and mechanosensory oscillations while measuring compensatory changes in stroke kinematics. By varying the relative amplitude, phase, and axis of rotation of the visual and mechanical stimuli, we were able to determine the contribution of each sensory modality to the compensatory motor reflex. Our results show that, over a wide range of experimental conditions, sensory inputs from the halteres and the visual system are combined in a weighted sum. Furthermore, the weighting structure places greater influence on feedback from the halteres than from the visual system.
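
    The weighted-sum model described here has the form r = w_h·s_h + w_v·s_v, with the haltere weight w_h exceeding the visual weight w_v. A minimal sketch of estimating such weights from paired stimulus/response measurements by least squares (illustrative numbers, not data from the paper):

        import numpy as np

        # Columns: haltere (mechanical) and visual stimulus amplitudes;
        # rows: trials varying the relative amplitude of the two channels.
        stimuli = np.array([[1.0, 0.0],
                            [0.0, 1.0],
                            [1.0, 1.0],
                            [2.0, 1.0]])
        responses = np.array([0.8, 0.3, 1.1, 1.9])  # measured stroke-kinematics change

        # Least-squares fit of r = w_h*s_h + w_v*s_v.
        (w_haltere, w_visual), *_ = np.linalg.lstsq(stimuli, responses, rcond=None)
        print(w_haltere > w_visual)  # True: haltere feedback is weighted more heavily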

    Visual feedback alters force control and functional activity in the visuomotor network after stroke.

    Modulating visual feedback may be a viable option to improve motor function after stroke, but the neurophysiological basis for this improvement is not clear. Visual gain can be manipulated by increasing or decreasing the spatial amplitude of an error signal. Here, we combined a unilateral visually guided grip force task with functional MRI to understand how changes in the gain of visual feedback alter brain activity in the chronic phase after stroke. Analyses focused on brain activation when force was produced by the most impaired hand of the stroke group as compared to the non-dominant hand of the control group. Our experiment produced three novel results. First, gain-related improvements in force control were associated with an increase in activity in many regions within the visuomotor network in both the stroke and control groups. These regions include the extrastriate visual cortex, inferior parietal lobule, ventral premotor cortex, cerebellum, and supplementary motor area. Second, the stroke group showed gain-related increases in activity in additional regions of lobules VI and VIIb of the ipsilateral cerebellum. Third, relative to the control group, the stroke group showed increased activity in the ipsilateral primary motor cortex, and activity in this region did not vary as a function of visual feedback gain. The visuomotor network, cerebellum, and ipsilateral primary motor cortex have each been targeted in rehabilitation interventions after stroke. Our observations provide new insight into the role these regions play in processing visual gain during a precisely controlled visuomotor task in the chronic phase after stroke.
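
    The gain manipulation described above scales the displayed error signal rather than the force the hand must produce. A minimal sketch, with all names and values hypothetical:

        def displayed_error(force_n, target_n, visual_gain):
            # Spatial amplitude of the on-screen error: gain scales the true
            # force error without changing the required grip force.
            return visual_gain * (force_n - target_n)

        # The same 0.25 N overshoot looks four times larger at high gain.
        print(displayed_error(2.25, 2.0, visual_gain=1.0))  # low gain  -> 0.25
        print(displayed_error(2.25, 2.0, visual_gain=4.0))  # high gain -> 1.0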

    A feedback model of perceptual learning and categorisation

    Top-down feedback influences are known to have significant effects on visual information processing, and such influences are also likely to affect perceptual learning. This article employs a computational model of the cortical region interactions underlying visual perception to investigate possible influences of top-down information on learning. The results suggest that feedback could bias the way in which perceptual stimuli are categorised and could also facilitate the learning of subordinate-level representations suitable for object identification and perceptual expertise.
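
    As a rough illustration of the general idea (not the authors' model), top-down expectation can be added to bottom-up evidence before category units compete, biasing which category wins. All weights and names here are hypothetical:

        import numpy as np

        def categorise(bottom_up, top_down, feedback_strength=0.5):
            # Softmax competition over category units driven by bottom-up
            # evidence plus feedback-weighted top-down expectation.
            drive = bottom_up + feedback_strength * top_down
            exp = np.exp(drive - drive.max())
            return exp / exp.sum()

        evidence = np.array([1.0, 1.1, 0.2])     # ambiguous between categories 0 and 1
        expectation = np.array([1.0, 0.0, 0.0])  # top-down bias toward category 0
        print(categorise(evidence, expectation).argmax())  # feedback tips the outcome to 0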

    A robot hand testbed designed for enhancing embodiment and functional neurorehabilitation of body schema in subjects with upper limb impairment or loss.

    Many upper limb amputees experience an incessant, post-amputation "phantom limb pain" and report that their missing limbs feel paralyzed in an uncomfortable posture. One hypothesis is that efferent commands no longer generate expected afferent signals, such as proprioceptive feedback from changes in limb configuration, and that the mismatch of motor commands and visual feedback is interpreted as pain. Non-invasive therapeutic techniques for treating phantom limb pain, such as mirror visual feedback (MVF), rely on visualizations of postural changes. Advances in neural interfaces for artificial sensory feedback now make it possible to combine MVF with a high-tech "rubber hand" illusion, in which subjects develop a sense of embodiment with a fake hand when subjected to congruent visual and somatosensory feedback. We discuss clinical benefits that could arise from the confluence of known concepts such as MVF and the rubber hand illusion, and new technologies such as neural interfaces for sensory feedback and highly sensorized robot hand testbeds, such as the "BairClaw" presented here. Our multi-articulating, anthropomorphic robot testbed can be used to study proprioceptive and tactile sensory stimuli during physical finger-object interactions. Conceived for artificial grasp, manipulation, and haptic exploration, the BairClaw could also be used for future studies on the neurorehabilitation of somatosensory disorders due to upper limb impairment or loss. A remote actuation system enables the modular control of tendon-driven hands. The artificial proprioception system enables direct measurement of joint angles and tendon tensions, while temperature, vibration, and skin-deformation measurements are provided by a multimodal tactile sensor. The provision of multimodal sensory feedback that is spatiotemporally consistent with commanded actions could lead to benefits such as reduced phantom limb pain and increased prosthesis use due to improved functionality and reduced cognitive burden.
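
    As a rough sketch of the kind of per-timestep record such a sensorized testbed produces, a hedged illustration follows; the field names are hypothetical and do not describe the BairClaw's actual interface:

        from dataclasses import dataclass
        from typing import List

        @dataclass
        class FingerFrame:
            # One sample of artificial proprioception and multimodal tactile
            # sensing for a single tendon-driven finger.
            joint_angles_rad: List[float]    # proprioception: per-joint angles
            tendon_tensions_n: List[float]   # proprioception: per-tendon tensions
            skin_deformation: List[float]    # tactile: deformation channels
            vibration: float                 # tactile: high-frequency channel
            temperature_c: float             # tactile: thermal channel
            timestamp_s: float = 0.0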