
    Novel Multimodal Feedback Techniques for In-Car Mid-Air Gesture Interaction

    This paper presents an investigation into the effects of different feedback modalities on mid-air gesture interaction with in-car infotainment systems. Car crashes and near-crash events are most commonly caused by driver distraction. Mid-air interaction is a way of reducing driver distraction by reducing the visual demand of infotainment systems. Despite a range of available modalities, feedback in mid-air gesture systems is generally provided through visual displays. We conducted a simulated driving study to investigate how different types of multimodal feedback can support in-air gestures. The effects of different feedback modalities on eye gaze behaviour and on the driving and gesturing tasks are considered. We found that feedback modality influenced gesturing behaviour. However, drivers corrected falsely executed gestures more often in non-visual conditions. Our findings show that non-visual feedback can reduce visual distraction significantly.
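    The abstract describes the system only at a high level; as a rough sketch of how gesture confirmation might be dispatched across feedback modalities (the modality set, event names, and cue strings below are assumptions of this illustration, not the authors' design):

```python
from enum import Enum, auto

class Modality(Enum):
    VISUAL = auto()
    AUDITORY = auto()
    TACTILE = auto()

class GestureEvent(Enum):
    RECOGNIZED = auto()   # gesture accepted and executed
    REJECTED = auto()     # gesture detected but not recognized

def feedback_cues(event: GestureEvent, modalities: set[Modality]) -> list[str]:
    """Return the feedback cues to fire for one gesture event.

    A non-visual condition simply omits Modality.VISUAL, so the same
    event is confirmed through audio/haptics without drawing gaze.
    """
    cues = []
    if Modality.AUDITORY in modalities:
        cues.append(f"earcon: {event.name.lower()}")
    if Modality.TACTILE in modalities:
        cues.append(f"haptic pulse: {event.name.lower()}")
    if Modality.VISUAL in modalities:
        cues.append(f"icon flash: {event.name.lower()}")
    return cues

# Non-visual condition: same confirmation, no gaze demand.
print(feedback_cues(GestureEvent.RECOGNIZED, {Modality.AUDITORY, Modality.TACTILE}))
```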

    Classifying public display systems: an input/output channel perspective

    Public display screens are relatively recent additions to our world, and while they may be as simple as a large screen with minimal input/output features, more recent developments have introduced much richer interaction possibilities supporting a variety of interaction styles. In this paper we propose a framework for classifying public display systems with a view to better understanding how they differ in terms of their interaction channels and how future installations are likely to evolve. This framework is explored through 15 existing public display systems which use mobile phones for interaction in the display space.
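    The framework itself is only summarized here; as a minimal sketch of how an input/output-channel classification could be encoded (the channel names and the richness heuristic are assumptions of this illustration, not the paper's taxonomy):

```python
from dataclasses import dataclass, field

@dataclass
class PublicDisplay:
    """Classify a public display installation by its I/O channels."""
    name: str
    inputs: set[str] = field(default_factory=set)   # e.g. {"sms", "bluetooth"}
    outputs: set[str] = field(default_factory=set)  # e.g. {"large screen"}

    def richness(self) -> int:
        # A crude proxy for interaction richness: total distinct channels.
        return len(self.inputs) + len(self.outputs)

kiosk = PublicDisplay("simple signage", outputs={"large screen"})
phone_display = PublicDisplay("phone-interactive display",
                              inputs={"sms", "bluetooth"},
                              outputs={"large screen", "phone screen"})
assert phone_display.richness() > kiosk.richness()
```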

    Seamless and Secure VR: Adapting and Evaluating Established Authentication Systems for Virtual Reality

    Virtual reality (VR) headsets are enabling a wide range of new opportunities for the user. For example, in the near future users may be able to visit virtual shopping malls and virtually join international conferences. These and many other scenarios pose new questions with regard to privacy and security, in particular the authentication of users within the virtual environment. As a first step towards seamless VR authentication, this paper investigates the direct transfer of well-established concepts (PIN, Android unlock patterns) into VR. In a pilot study (N = 5) and a lab study (N = 25), we adapted existing mechanisms and evaluated their usability and security for VR. The results indicate that both PINs and patterns are well suited for authentication in VR. We found that the usability of both methods matched the performance known from the physical world. In addition, the private visual channel makes authentication harder to observe, indicating that authentication in VR using traditional concepts already achieves a good balance in the trade-off between usability and security. The paper contributes to a better understanding of authentication within VR environments by providing the first investigation of established authentication methods within VR, and presents a foundation for the design of future authentication schemes intended specifically for VR environments.
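    The paper evaluates direct transfers of PIN and Android unlock-pattern entry into VR; what changes in the move to VR is the input surface (e.g. controller pointing instead of a touchscreen), while the verification backend stays the same. A minimal sketch of such a backend, assuming a 3x3 pattern encoded as node indices 0-8 (the hashing parameters and salt handling are illustrative, not from the paper):

```python
import hashlib
import hmac

def hash_pattern(pattern: list[int], salt: bytes) -> bytes:
    """Hash an Android-style 3x3 unlock pattern (nodes numbered 0-8)."""
    return hashlib.pbkdf2_hmac("sha256", bytes(pattern), salt, 100_000)

def verify(entered: list[int], stored: bytes, salt: bytes) -> bool:
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(hash_pattern(entered, salt), stored)

salt = b"per-user-random-salt"              # illustrative; use os.urandom in practice
stored = hash_pattern([0, 1, 2, 5, 8], salt)  # an L-shaped pattern
assert verify([0, 1, 2, 5, 8], stored, salt)
assert not verify([0, 3, 6, 7, 8], stored, salt)
```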

    Interoceptive Ingredients of Body Ownership: Affective Touch and Cardiac Awareness in the Rubber Hand Illusion

    The sense of body ownership represents a fundamental aspect of bodily self-consciousness. Using multisensory integration paradigms, recent studies have shown that both exteroceptive and interoceptive information contribute to our sense of body ownership. Interoception refers to the physiological sense of the condition of the body, including afferent signals that originate inside the body. However, it remains unclear whether individual sensitivity to interoceptive modalities is unitary or differs between modalities. It is also unclear whether the effect of interoceptive information on body ownership is caused by exteroceptive ‘visual capture’ of these modalities, or by bottom-up processing of interoceptive information. This study aimed to test these questions in two separate samples. In the first experiment (N = 76), we examined the relationship between two different interoceptive modalities, namely cardiac awareness, based on a heartbeat counting task, and affective touch perception, based on stimulation of the specialized C tactile (CT) afferent system, an interoceptive modality of affective and social significance. In a second experiment (N = 63), we explored whether ‘off-line’ trait interoceptive sensitivity, based on a heartbeat counting task, would modulate the extent to which CT affective touch influences the multisensory process during the rubber hand illusion (RHI). We found that affective touch enhanced the subjective experience of body ownership during the RHI. Nevertheless, interoceptive sensitivity, as measured by a heartbeat counting task, did not modulate this effect, nor did it relate to the perception of ownership or of CT-optimal affective touch more generally. By contrast, this trait measure of interoceptive sensitivity appeared most relevant when the multisensory context of interoception was ambiguous, suggesting that the perception of interoceptive signals and their effects on body ownership may depend on individual abilities to regulate the balance of interoception and exteroception in given contexts.
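    The abstract does not spell out how heartbeat counting is scored; the standard accuracy measure for such tasks (Schandry, 1981) averages, across counting windows, one minus the normalized counting error. A sketch of that formula, with made-up trial data:

```python
def heartbeat_counting_accuracy(actual: list[int], reported: list[int]) -> float:
    """Mean heartbeat-counting accuracy across trials (Schandry, 1981).

    Each trial scores 1 - |actual - reported| / actual; scores near 1
    indicate high interoceptive sensitivity.
    """
    scores = [1 - abs(a - r) / a for a, r in zip(actual, reported)]
    return sum(scores) / len(scores)

# e.g. three counting windows of different durations (hypothetical data)
print(heartbeat_counting_accuracy([35, 45, 70], [30, 40, 68]))  # ~0.91
```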

    Spatial audio in small display screen devices

    Our work addresses the problem of (visual) clutter in mobile device interfaces. The solution we propose involves the translation of techniques from the graphical to the audio domain for exploiting space in information representation. This article presents an illustrative example in the form of a spatialised audio progress bar. In usability tests, participants performed background monitoring tasks significantly more accurately using this spatialised audio (as compared with a conventional visual) progress bar. Moreover, their performance in a simultaneously running, visually demanding foreground task was significantly improved in the eyes-free monitoring condition. These results have important implications for the design of multi-tasking interfaces for mobile devices.
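    The article's exact audio mapping is not reproduced here; one plausible way to translate a progress bar into the spatial audio domain is to pan a tone across the stereo field as the task completes. A sketch under that assumption (the panning law is a choice of this illustration):

```python
import math

def progress_to_pan(progress: float) -> tuple[float, float]:
    """Map task progress in [0, 1] to stereo (left, right) gains.

    The cue starts fully in the left ear and ends fully in the right,
    so progress can be monitored without looking at the screen.
    Constant-power panning (cos/sin) keeps loudness steady en route.
    """
    angle = progress * math.pi / 2   # 0 -> hard left, pi/2 -> hard right
    return math.cos(angle), math.sin(angle)

for p in (0.0, 0.5, 1.0):
    left, right = progress_to_pan(p)
    print(f"{p:4.0%}  L={left:.2f}  R={right:.2f}")
```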

    Multimodal perception of histological images for persons blind or visually impaired

    Currently there is no suitable substitute technology that enables blind or visually impaired (BVI) people to interpret, in real time, the visual scientific data commonly generated during lab experimentation, such as performing light microscopy, spectrometry, and observing chemical reactions. This reliance upon visual interpretation of scientific data impedes students and scientists who are BVI from advancing in careers in medicine, biology, chemistry, and other scientific fields. To address this challenge, a real-time multimodal image perception system was developed to transform standard laboratory blood smear images into a form that BVI persons can perceive, employing a combination of auditory, haptic, and vibrotactile feedback. These sensory feedback channels convey visual information through alternative perceptual channels, creating a palette of multimodal, sensorial information. A Bayesian network was developed to characterize images through two groups of features of interest: primary and peripheral features. Causal links were established between these two groups of features, and a method was conceived for optimally matching primary features to sensory modalities. Experimental results confirmed that this real-time approach achieves higher accuracy in recognizing and analyzing objects within images than tactile images do.
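    The paper's matching method is built on a Bayesian network, which the abstract only names; the sketch below substitutes a simple greedy assignment just to illustrate the feature-to-modality matching problem. All feature names, salience values, and bandwidth weights are hypothetical:

```python
PRIMARY_FEATURES = {   # feature -> estimated salience (hypothetical)
    "cell count": 0.9,
    "cell size": 0.7,
    "cell shape": 0.6,
}
MODALITIES = {         # modality -> perceptual bandwidth (hypothetical)
    "auditory": 0.9,
    "haptic": 0.7,
    "vibrotactile": 0.5,
}

def match_features_to_modalities() -> dict[str, str]:
    """Greedily pair the most salient feature with the highest-bandwidth
    modality, so the most important information gets the richest channel."""
    feats = sorted(PRIMARY_FEATURES, key=PRIMARY_FEATURES.get, reverse=True)
    mods = sorted(MODALITIES, key=MODALITIES.get, reverse=True)
    return dict(zip(feats, mods))

print(match_features_to_modalities())
# {'cell count': 'auditory', 'cell size': 'haptic', 'cell shape': 'vibrotactile'}
```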