196 research outputs found

    Assisting Navigation and Object Selection with Vibrotactile Cues

    Our lives have been drastically altered by information technology in recent decades, leading to evolutionary mismatches between human traits and the modern environment. One particular mismatch occurs when visually demanding information technology overloads the perceptual, cognitive, or motor capabilities of the human nervous system. This information overload could be partly alleviated by complementing visual interaction with haptics. The primary aim of this thesis was to investigate how to assist movement control with vibrotactile cues. Vibrotactile cues refer to technology-mediated vibrotactile signals that notify users of perceptual events, prompt users to make decisions, and give users feedback on their actions. To explore vibrotactile cues, we carried out five experiments in two contexts of movement control: navigation and object selection. The goal was to find ways to reduce information load in these tasks, thus helping users accomplish the tasks more effectively. We employed measurements such as reaction times, error rates, and task completion times. We also used subjective rating scales, short interviews, and free-form participant comments to assess the vibrotactile-assisted interactive systems. The findings of this thesis can be summarized as follows. First, if the context of movement control allows the use of both feedback and feedforward cues, feedback cues are a reasonable first option. Second, when using vibrotactile feedforward cues, using low-level abstractions and supporting the interaction with other modalities can keep the information load as low as possible. Third, the temple area is a feasible actuation location for vibrotactile cues in movement control, including navigation cues and object selection cues with head turns. However, the usability of the area depends on contextual factors such as spatial congruency, the actuation device, and the pace of the interaction task.

    Haptic feedback in eye typing

    Proper feedback is essential in gaze-based interfaces, where the same modality is used for both perception and control. We measured how vibrotactile feedback, a form of haptic feedback, compares with the commonly used visual and auditory feedback in eye typing. Haptic feedback was found to produce results close to those of auditory feedback; both were easy to perceive, and participants liked both the auditory “click” and the tactile “tap” of the selected key. Implementation details (such as the placement of the haptic actuator) were also found to be important.

    Spatial guiding through haptic cues in omnidirectional video

    Omnidirectional video’s extensive amount of visual information challenges users to find and stay focused on the essential parts of the video. I examined how user experience was affected when haptic cues in the head area were used to guide the viewer’s gaze towards the essential parts of omnidirectional video. User experiences with different omnidirectional video types combined with haptic guiding were compared and analyzed. Another part of the research aimed to find out how the haptic and auditory modalities and their combination affected the user experience. The participants used an Oculus Rift headset to watch omnidirectional video material, and two actuators were placed on their forehead to indicate whether the essential part was located to the left or to the right. The results of the questionnaires and the comments showed that haptic guiding was useful and effective, though it was not experienced as a necessary feature during easy-to-follow and slow-paced videos. The combination of haptic guiding and audio was rated the most positive use of modalities. This feature has a lot of potential to enhance the user experience of omnidirectional videos. Further studies on long-term usage of the feature are required to eliminate the novelty effect and gain a more accurate understanding of users’ needs.
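    The left/right guiding described above can be sketched as a small cue-selection routine. This is a hypothetical illustration, not the study's implementation: the function name, the yaw-based model, and the dead-zone parameter are all assumptions.

```python
def guiding_cue(viewer_yaw_deg, target_yaw_deg, deadzone_deg=10.0):
    """Pick which forehead actuator to pulse so the viewer turns toward
    the essential part of the scene (hypothetical sketch)."""
    # Signed angular difference, wrapped to (-180, 180].
    diff = (target_yaw_deg - viewer_yaw_deg + 180.0) % 360.0 - 180.0
    if abs(diff) <= deadzone_deg:
        return None  # target is roughly straight ahead: no cue needed
    return "right" if diff > 0 else "left"

# Example: a viewer facing 350 degrees with the target at 10 degrees
# should be cued to turn right, despite the 0/360 wrap-around.
```

    The wrap-around arithmetic keeps the cue pointing along the shorter rotation, which matters in omnidirectional video where the target can lie anywhere on the full circle.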

    HapticLock: Eyes-Free Authentication for Mobile Devices

    Smartphones provide access to increasing amounts of personal and sensitive information, yet are often only secured using methods that are prone to observational attacks. We present HapticLock, a novel authentication method for mobile devices that uses non-visual interaction modalities for discreet PIN entry that is difficult to attack by shoulder surfing. A usability experiment (N=20) finds effective PIN entry in secure conditions: e.g., in 23.5 s with a 98.3% success rate for a four-digit PIN entered from a random start digit. A shoulder surfing experiment (N=15) finds that HapticLock is highly resistant to observational attacks. Even when the interaction is highly visible, attackers need to guess the first digit when PIN entry begins with a random number, yielding a very low success rate for shoulder surfing. Furthermore, a device can be hidden from view during authentication. Our use of haptic interaction modalities gives privacy-conscious mobile device users a usable and secure authentication alternative for sensitive situations.
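    The observational resistance claimed above follows from the random start digit: an onlooker who recovers every relative movement still cannot pin down the PIN without knowing where entry started. A minimal sketch of that counting argument, assuming a hypothetical model in which each digit is reached by stepping around a 0–9 dial (the function and model are our own, not the paper's implementation):

```python
def candidate_pins(observed_steps):
    """Enumerate the PINs consistent with a fully observed sequence of
    relative steps when the random start digit is unknown."""
    candidates = []
    for start in range(10):          # the unobserved random start digit
        pin, digit = [], start
        for step in observed_steps:  # each step moves around a 0-9 dial
            digit = (digit + step) % 10
            pin.append(digit)
        candidates.append(pin)
    return candidates

# Even a perfect observation of a four-digit entry leaves 10 equally
# likely candidate PINs, i.e. at best a 1-in-10 guess for the attacker.
```

    Because every start digit shifts each PIN position by a constant offset modulo 10, the ten candidates are always distinct, matching the abstract's point that the attacker is reduced to guessing the first digit.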

    Mobile gaze interaction : gaze gestures with haptic feedback

    There has been an increasing need for alternative interaction techniques to support mobile usage contexts. Gaze tracking technology is anticipated to soon appear in commercial mobile devices. There are two important considerations when designing mobile gaze interactions. Firstly, the interaction should be robust to accuracy problems. Secondly, user feedback should be instantaneous, meaningful, and appropriate to ease the interaction. This thesis proposes gaze gesture input with haptic feedback as an interaction technique in the mobile context. This work presents the results of an experiment conducted to understand the effectiveness of vibrotactile feedback in two-stroke gaze-gesture-based mobile interaction and to find the best temporal point, in terms of gesture progression, at which to provide the feedback. Four feedback conditions were used: NO (no tactile feedback), OUT (tactile feedback at the end of the first stroke), FULL (tactile feedback at the end of the second stroke), and BOTH (tactile feedback at the end of the first and second strokes). The results suggest that haptic feedback does help the interaction. The participants completed the tasks with fewer errors when haptic feedback was provided. The feedback conditions OUT and BOTH were found to be equally effective in terms of task completion time. The participants also subjectively rated these feedback conditions as more comfortable and easier to use than the FULL and NO feedback conditions.
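    The four conditions enumerated above can be encoded compactly as a mapping from condition name to the set of stroke endings that trigger a pulse. This is an illustrative sketch only; the data structure and helper function are our own, not from the study:

```python
# Which stroke endings trigger a vibrotactile pulse in each condition
# of the two-stroke gaze gesture experiment (illustrative encoding).
FEEDBACK_CONDITIONS = {
    "NO":   set(),     # no tactile feedback
    "OUT":  {1},       # pulse at the end of the first stroke
    "FULL": {2},       # pulse at the end of the second stroke
    "BOTH": {1, 2},    # pulse at the end of both strokes
}

def should_pulse(condition, completed_stroke):
    """Return True if the actuator should fire after the given stroke."""
    return completed_stroke in FEEDBACK_CONDITIONS[condition]
```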

    Natural Walking in Virtual Reality: A Review


    Multimodality in VR: A Survey

    Virtual reality has the potential to change the way we create and consume content in our everyday life. Entertainment, training, design and manufacturing, communication, and advertising are all applications that already benefit from this new medium reaching the consumer level. VR is inherently different from traditional media: it offers a more immersive experience and has the ability to elicit a sense of presence through the place and plausibility illusions. It also gives the user unprecedented capabilities to explore their environment, in contrast with traditional media. In VR, as in the real world, users integrate the multimodal sensory information they receive to create a unified perception of the virtual world. Therefore, the sensory cues that are available in a virtual environment can be leveraged to enhance the final experience. This may include increasing realism or the sense of presence; predicting or guiding the attention of the user through the experience; or increasing their performance if the experience involves the completion of certain tasks. In this state-of-the-art report, we survey the body of work addressing multimodality in virtual reality, and its role and benefits in the final user experience. The works reviewed here thus encompass several fields of research, including computer graphics, human-computer interaction, and psychology and perception. Additionally, we give an overview of different applications that leverage multimodal input in areas such as medicine, training and education, and entertainment; we include works in which the integration of multiple sensory information yields significant improvements, demonstrating how multimodality can play a fundamental role in the way VR systems are designed and VR experiences are created and consumed.

    Toward multimodality: gesture and vibrotactile feedback in natural human-computer interaction

    In the present work, users’ interaction with advanced systems has been investigated in different application domains and with respect to different interfaces. The methods employed were carefully devised to respond to the peculiarities of the interfaces under examination, and we could extract a set of recommendations for developers. The first application domain examined regards the home. In particular, we addressed the design of a gestural interface for controlling a lighting system embedded into a piece of furniture in the kitchen. A sample of end users was observed while interacting with a virtual simulation of the interface. Based on video analysis of users’ spontaneous behaviors, we could derive a set of significant interaction trends. The second application domain involved the exploration of an urban environment in mobility. In a comparative study, a haptic-audio interface and an audio-visual interface were employed for guiding users towards landmarks and for providing them with information. We showed that the two systems were equally efficient in supporting the users and that both were well received by them. In a navigational task, we compared two tactile displays, each embedded in a different wearable device, i.e., a glove and a vest. Despite the differences in shape and size, both systems successfully directed users to the target. The strengths and flaws of the two devices were pointed out and commented on by users. In a similar context, two devices supporting Augmented Reality technology, i.e., a pair of smartglasses and a smartphone, were compared. The experiment allowed us to identify the circumstances favoring the use of the smartglasses or the smartphone. Considered altogether, our findings suggest a set of recommendations for developers of advanced systems. First, we outline the importance of properly involving end users to unveil intuitive interaction modalities with gestural interfaces. We also highlight the importance of giving users the chance to choose the interaction mode that best fits the contextual characteristics and to adjust the features of each interaction mode. Finally, we outline the potential of wearable devices to support interaction on the move and the importance of finding a proper balance between the amount of information conveyed to the user and the size of the device.

    16th Sound and Music Computing Conference SMC 2019 (28–31 May 2019, Malaga, Spain)

    The 16th Sound and Music Computing Conference (SMC 2019) took place in Malaga, Spain, 28–31 May 2019, and was organized by the Application of Information and Communication Technologies research group (ATIC) of the University of Malaga (UMA). The associated SMC 2019 Summer School took place 25–28 May 2019. The First International Day of Women in Inclusive Engineering, Sound and Music Computing Research (WiSMC 2019) took place on 28 May 2019. The SMC 2019 topics of interest included a wide selection of topics related to acoustics, psychoacoustics, music, technology for music, audio analysis, musicology, sonification, music games, machine learning, serious games, immersive audio, sound synthesis, etc.