
    Touching the invisible: Localizing ultrasonic haptic cues

While mid-air gestures offer new possibilities to interact with or around devices, some situations, such as interacting with applications, playing games or navigating, may require visual attention to remain on a main task. Ultrasonic haptic feedback can provide 3D spatial haptic cues that do not demand visual attention in these contexts. In this paper, we present an initial study of active exploration of ultrasonic haptic virtual points, investigating spatial localization with and without the visual modality. Our results show that when haptic feedback indicates the location of a widget, users perform 50% more accurately than with visual feedback alone. When given a haptic widget location alone, users are more than 30% more accurate than when given a visual location. When users were aware of the location of the haptic feedback, active exploration reduced the minimum recommended widget size from 2 cm² to 1 cm² compared to passive exploration in previous studies. Our results will allow designers to create better mid-air interactions using this new form of haptic feedback.
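The paper does not include implementation details, but the underlying technique is standard acoustic phased-array focusing: each transducer is driven with a phase offset so that all wavefronts arrive in phase at the target point, creating a localized pressure focus the hand can feel. A minimal Python sketch of that principle, assuming a flat 16×16 transducer grid and a 40 kHz carrier (typical values for ultrasonic haptics; the specific hardware is an assumption, not taken from the paper):

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s in air at roughly 20 °C
CARRIER_HZ = 40_000.0    # common ultrasonic haptics carrier frequency
WAVELENGTH = SPEED_OF_SOUND / CARRIER_HZ

def focus_phases(transducer_xyz: np.ndarray, focal_point: np.ndarray) -> np.ndarray:
    """Per-transducer emission phases (radians) so that all waves
    arrive in phase at `focal_point`, forming a focal pressure point."""
    dists = np.linalg.norm(transducer_xyz - focal_point, axis=1)
    # A wave travelling distance d accumulates phase 2*pi*d/wavelength;
    # emitting each element with the negative of that phase aligns all
    # arrivals at the focus (phases taken modulo 2*pi).
    return (-2.0 * np.pi * dists / WAVELENGTH) % (2.0 * np.pi)

# Hypothetical example: 16x16 grid, 10.5 mm pitch, focus 20 cm above centre.
xs = (np.arange(16) - 7.5) * 0.0105
grid = np.array([[x, y, 0.0] for x in xs for y in xs])
phases = focus_phases(grid, np.array([0.0, 0.0, 0.20]))
```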

    Novel Multimodal Feedback Techniques for In-Car Mid-Air Gesture Interaction

This paper presents an investigation into the effects of different feedback modalities on mid-air gesture interaction with in-car infotainment systems. Car crashes and near-crash events are most commonly caused by driver distraction. Mid-air interaction can reduce driver distraction by lowering the visual demand of infotainment systems. Despite a range of available modalities, feedback in mid-air gesture systems is generally provided through visual displays. We conducted a simulated driving study to investigate how different types of multimodal feedback can support in-air gestures. The effects of different feedback modalities on eye-gaze behaviour and on the driving and gesturing tasks are considered. We found that feedback modality influenced gesturing behaviour. However, drivers corrected falsely executed gestures more often in non-visual conditions. Our findings show that non-visual feedback can reduce visual distraction significantly.
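The abstract does not describe the study software, but the experimental design implies routing each gesture event to whichever feedback channels the current condition enables. A hypothetical Python sketch of such a dispatcher; all names here (Modality, GestureEvent, FeedbackDispatcher) are illustrative assumptions, not from the paper:

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable

class Modality(Enum):
    VISUAL = auto()
    AUDITORY = auto()
    TACTILE = auto()

@dataclass
class GestureEvent:
    name: str           # e.g. "swipe_left"
    recognized: bool    # False for a falsely executed / rejected gesture

class FeedbackDispatcher:
    """Routes each gesture event to the feedback channels enabled
    for the current experimental condition."""
    def __init__(self, channels: dict[Modality, Callable[[GestureEvent], None]]):
        self.channels = channels

    def notify(self, event: GestureEvent) -> None:
        for render in self.channels.values():
            render(event)

# A non-visual condition: audio and tactile feedback, no on-screen cues.
dispatcher = FeedbackDispatcher({
    Modality.AUDITORY: lambda e: print(f"beep: {e.name} ok={e.recognized}"),
    Modality.TACTILE:  lambda e: print(f"pulse: {e.name} ok={e.recognized}"),
})
dispatcher.notify(GestureEvent("swipe_left", recognized=True))
```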

    Affordance of vibrational excitation for music composition and performance

Mechanical vibrations have typically been used in the performance domain within feedback systems to inform musicians of system states or as communication channels between performers. In this paper, we propose the additional taxonomic category of vibrational excitation of musical instruments for sound generation. To explore the variety of possibilities associated with this extended taxonomy, we present the Oktopus, a multi-purpose wireless system capable of motorised vibrational excitation. The system can receive up to eight inputs and generates vibrations as outputs through eight motors that can be positioned accordingly to produce a wide range of sounds from an excited instrument. We demonstrate the usefulness of the proposed system and extended taxonomy through the development and performance of Live Mechanics, a composition for piano and interactive electronics.
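The Oktopus firmware is not published in the abstract; as a rough illustration of the eight-input/eight-motor mapping such a system might use, here is a hypothetical Python sketch that scales normalized input levels to PWM duty cycles driving the vibration motors:

```python
def inputs_to_motor_duty(levels: list[float], gain: float = 1.0) -> list[int]:
    """Map eight normalized input levels (0..1) to eight 8-bit PWM
    duty cycles (0..255), one per vibration motor."""
    if len(levels) != 8:
        raise ValueError("expected eight input channels")
    # Clamp to the valid duty-cycle range after applying the gain.
    return [min(255, max(0, int(level * gain * 255))) for level in levels]

# Example: two strong channels exciting the instrument, six idle.
duties = inputs_to_motor_duty([0.9, 0.7, 0, 0, 0, 0, 0, 0], gain=1.2)
```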

    Perceived synchronization of mulsemedia services

Multimedia synchronization involves a temporal relationship between audio and visual media components. The presentation of "in-sync" data streams is essential to achieve a natural impression, as "out-of-sync" effects are often associated with a decrease in user quality of experience (QoE). Recently, multi-sensory media (mulsemedia) has been shown to provide a highly immersive experience for its users. Unlike traditional multimedia, mulsemedia includes other media types (e.g., haptic, olfactory and gustatory content) in addition to audio and visual content. Therefore, the goal of high-quality mulsemedia transmission is to present few or no synchronization errors between the multiple media components. Achieving this ideal synchronization requires comprehensive knowledge of the synchronization requirements at the user interface. This paper presents the results of a subjective study carried out to explore the temporal boundaries within which haptic and air-flow media objects can be successfully synchronized with video media. Results show that skews between sensorial media and multimedia might still give the impression that the mulsemedia sequence is "in-sync", and they provide constraints under which synchronization errors may be tolerated. The outcomes of the paper are used to provide recommendations for mulsemedia service providers so that their services are associated with acceptable user experience levels: e.g., haptic media could be presented with a delay of up to 1 s behind video content, while air-flow media could be released either 5 s ahead of or 3 s behind video content.
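The quoted tolerances amount to a per-medium skew window. A minimal Python sketch encoding the thresholds reported above; the zero ahead-of-video bound for haptic media is an assumption, since the abstract quotes only the behind-video limit:

```python
# Skew = sensorial-media presentation time minus video time, in seconds;
# positive means the effect arrives *behind* the video.
TOLERANCE_S = {
    # (max seconds ahead, max seconds behind), per the study's findings.
    "haptic":  (0.0, 1.0),   # ahead bound not quoted in the abstract; assumed 0
    "airflow": (5.0, 3.0),
}

def in_sync(media: str, skew_s: float) -> bool:
    """True if the sensorial stream still feels 'in-sync' with the video."""
    max_ahead, max_behind = TOLERANCE_S[media]
    return -max_ahead <= skew_s <= max_behind

assert in_sync("haptic", 0.8)        # 0.8 s behind video: tolerable
assert not in_sync("airflow", -6.0)  # 6 s ahead of video: perceived skew
```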

An empirical examination of feedback: user control and performance in a hapto-audio-visual training environment

Utilising advanced technologies such as virtual environments (VEs) is important to training and education, and the need to develop and effectively apply interactive, immersive 3D VEs continues to grow. As with any emerging technology, user acceptance of new software and hardware devices is often difficult to measure, and guidelines to introduce and ensure adequate and correct usage of such technologies are lacking. It is therefore imperative to obtain a solid understanding of the elements that play a role in effective learning through VEs. In particular, 3D VEs may present unusual and varied interaction and adoption considerations. The major contribution of this study is to investigate a complex set of interrelated factors in the relatively new sphere of VEs for training and education. Although many of these factors appear important in past research, researchers have not explicitly studied a comprehensive set of inter-dependent, empirically validated factors in order to understand how VEs aid complex procedural knowledge and motor skill learning. By integrating theory from research on training, human-computer interaction (HCI), ergonomics and cognitive psychology, this research proposes and validates a model that contributes to application-specific VE efficacy formation. The findings of this study show that visual feedback has a significant effect on performance, while no significant effects were found for tactile/force feedback or auditory feedback. User control is salient for satisfaction and performance. Other factors, such as interactivity, system comfort and level of task difficulty, also showed effects on performance.