    Selection strategies in gaze interaction

    This thesis deals with selection strategies in gaze interaction, specifically in a context where gaze is the sole input modality for users with severe motor impairments. The goal has been to contribute to the subfield of assistive technology where gaze interaction is necessary for the user to achieve autonomous communication and environmental control. From a theoretical point of view, research has been done on the physiology of gaze and on eye tracking technology, and a taxonomy of existing selection strategies has been developed. Empirically, two overall approaches have been taken. Firstly, end-user research has been conducted through interviews and observation, exploring the capabilities, requirements, and wants of end-users. Secondly, several applications have been developed to explore the selection strategy of single stroke gaze gestures (SSGG) and aspects of complex gaze gestures. The main finding is that single stroke gaze gestures can successfully be used as a selection strategy. Among the features of SSGG: horizontal single stroke gaze gestures are faster than vertical ones; completion time differs significantly with gesture length; single stroke gaze gestures can be completed without visual feedback; the gaze tracking equipment used has a significant effect on completion times and error rates; and single stroke gaze gestures are not significantly more prone to selection errors than dwell selection. The overall conclusion is that the future of gaze interaction should focus on developing multi-modal interactions for mono-modal input.
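
    Illustrative aside: the abstract does not specify an implementation, but a minimal SSGG detector can be sketched as classifying a buffered gaze stroke by its net displacement and dominant axis. The sample format, threshold, and function name below are assumptions for illustration, not the thesis's actual method (Python).

        # Minimal sketch of a single stroke gaze gesture (SSGG) detector.
        # Assumed (not from the thesis): gaze samples are (x, y) screen
        # coordinates in pixels, and a stroke counts as a gesture when its
        # net displacement exceeds a length threshold.
        import math

        def classify_ssgg(samples, min_length_px=150.0):
            """Classify a buffered gaze stroke as left/right/up/down,
            or return None if its displacement is too short."""
            if len(samples) < 2:
                return None
            x0, y0 = samples[0]
            x1, y1 = samples[-1]
            dx, dy = x1 - x0, y1 - y0
            if math.hypot(dx, dy) < min_length_px:
                return None
            # The dominant axis decides the gesture direction.
            if abs(dx) >= abs(dy):
                return "right" if dx > 0 else "left"
            return "down" if dy > 0 else "up"

        # Example: a mostly horizontal stroke of ~200 px maps to "right".
        print(classify_ssgg([(100, 300), (180, 305), (310, 310)]))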

    Combined Head Gestures for Improved User Interaction

    Fine-grained gestures such as eye gaze and facial expressions can be useful as input mechanisms for smart devices. However, single-gesture inputs such as eye gaze are inefficient, and it is difficult to perform complex operations with them. This disclosure describes techniques to fuse multiple head gestures, e.g., head, eye, mouth, or eyebrow movement, to provide superior user interaction. Head gestures are classified as analog (e.g., eye movements, which provide continuous input) or binary (e.g., frowns, which indicate a one-time operation). An analog gesture is fused with multiple binary gestures to improve efficiency. For example, to move an object, the user selects the object by looking and frowning at it, moves the object using eye gaze, and frowns again to set it in position. Different facial expressions can be flexibly assigned to accommodate varied user abilities and preferences.
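
    Illustrative aside: one way to picture the fusion described above is a small state machine in which a binary gesture (a frown) toggles between grabbing and releasing an object while the analog gesture (gaze position) drives its movement. The class, event names, and object model below are assumptions for the sketch, not the disclosure's implementation (Python).

        # Illustrative state machine fusing one analog gesture (gaze position)
        # with a binary gesture (frown) to pick up, drag, and drop an object.
        class GazeFrownMover:
            def __init__(self):
                self.dragging = False
                self.object_pos = None

            def on_frown(self, gaze_pos):
                """Binary gesture: the first frown grabs the object under the
                gaze; the second frown sets it in its current position."""
                if not self.dragging:
                    self.object_pos = gaze_pos   # select object at gaze point
                    self.dragging = True
                else:
                    self.dragging = False        # drop object in place

            def on_gaze(self, gaze_pos):
                """Analog gesture: while dragging, the object follows gaze."""
                if self.dragging:
                    self.object_pos = gaze_pos

        # Usage: frown to grab, gaze moves the object, frown again to drop.
        mover = GazeFrownMover()
        mover.on_frown((120, 80))    # grab at (120, 80)
        mover.on_gaze((300, 200))    # object follows gaze
        mover.on_frown((300, 200))   # drop
        print(mover.object_pos)      # (300, 200)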

    Explorations in engagement for humans and robots

    This paper explores the concept of engagement, the process by which individuals in an interaction start, maintain, and end their perceived connection to one another. The paper reports on one aspect of engagement among human interactors: the effect of tracking faces during an interaction. It also describes the architecture of a robot that can participate in conversational, collaborative interactions with engagement gestures. Finally, the paper reports on findings of experiments with human participants who interacted with a robot when it either performed or did not perform engagement gestures. Results of the human-robot studies indicate that people become engaged with robots: they direct their attention to the robot more often in interactions where engagement gestures are present, and they find interactions more appropriate when engagement gestures are present than when they are not.

    GazeTouchPass: Multimodal Authentication Using Gaze and Touch on Mobile Devices

    We propose a multimodal scheme, GazeTouchPass, that combines gaze and touch for shoulder-surfing-resistant user authentication on mobile devices. GazeTouchPass allows passwords with multiple switches between input modalities during authentication. This requires attackers to simultaneously observe the device screen and the user's eyes to find the password. We evaluate the security and usability of GazeTouchPass in two user studies. Our findings show that GazeTouchPass is usable and significantly more secure than single-modal authentication against basic and even advanced shoulder-surfing attacks.
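
    Illustrative aside: the idea of a multimodal password with modality switches can be modeled as an ordered sequence of tokens, each tagged with its input modality (a gaze direction or a touched digit). The token encoding and helper functions below are assumptions for illustration, not GazeTouchPass's actual scheme (Python).

        # Sketch of a multimodal password as an ordered list of
        # (modality, symbol) tokens, e.g. gaze directions and touched digits.
        def count_switches(password):
            """Number of times the input modality changes along the password."""
            return sum(1 for a, b in zip(password, password[1:]) if a[0] != b[0])

        def verify(entered, stored):
            """A match requires every token's modality and symbol to agree."""
            return entered == stored

        stored = [("gaze", "left"), ("touch", "3"), ("touch", "7"), ("gaze", "right")]
        attempt = [("gaze", "left"), ("touch", "3"), ("touch", "7"), ("gaze", "right")]
        print(verify(attempt, stored))   # True
        print(count_switches(stored))    # 2 modality switches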

    Changing the game:exploring infants' participation in early play routines

    Play has proved to have a central role in children’s development, most notably in rule learning (Piaget, 1965; Sutton-Smith, 1979) and in the negotiation of roles and goals (Garvey, 1972; Bruner et al., 1976). Yet very little research has been done on early play. The present study focuses on early social games, i.e. vocal-kinetic play routines that mothers use to interact with infants from very early on. We explored 3-month-old infants and their mothers performing a routine game, first in the usual way, then in two violated conditions: without gestures and without sound. The aim of the study is to investigate infants’ participation and expectations in the game and whether this participation is affected by changes in the multimodal format of the game. Infants’ facial expressions, gaze, and body movements were coded to measure levels of engagement and affective state across the three conditions. Results showed a significant decrease in Limbs Movements and expressions of Positive Affect, and an increase in Gaze Away and Stunned Expression, when the game structure was violated. These results indicate that the violated game conditions were experienced as less engaging, either because of an unexpected break in the established joint routine, or simply because they were weaker versions of the same game. Overall, our results suggest that structured, multimodal play routines may constitute interactional contexts that only work as integrated units of auditory and motor resources, representing early communicative contexts which prepare the ground for later, more complex multimodal interactions, such as verbal exchanges.

    Introduction: Multimodal interaction

    That human social interaction involves the intertwined cooperation of different modalities is uncontroversial. Researchers in several allied fields have, however, only recently begun to document the precise ways in which talk, gesture, gaze, and aspects of the material surround are brought together to form coherent courses of action. The papers in this volume are attempts to develop this line of inquiry. Although the authors draw on a range of analytic, theoretical, and methodological traditions (conversation analysis, ethnography, distributed cognition, and workplace studies), all are concerned to explore and illuminate the inherently multimodal character of social interaction. Recent studies, including those collected in this volume, suggest that different modalities work together not only to elaborate the semantic content of talk but also to constitute coherent courses of action. In this introduction we present evidence for this position. We begin by reviewing some select literature focusing primarily on communicative functions and interactive organizations of specific modalities before turning to consider the integration of distinct modalities in interaction.

    Gaze and Gestures in Telepresence: multimodality, embodiment, and roles of collaboration

    This paper proposes a controlled experiment to further investigate the usefulness of gaze awareness and gesture recognition in the support of collaborative work at a distance. We propose to redesign experiments conducted several years ago with more recent technology that would: a) enable better study of the integration of communication modalities, b) allow users to move freely while collaborating at a distance, and c) avoid asymmetries of communication between collaborators. Position paper, International Workshop New Frontiers in Telepresence 2010, part of CSCW2010, Savannah, GA, USA, 7th of February 2010. http://research.microsoft.com/en-us/events/nft2010

    Novel Multimodal Feedback Techniques for In-Car Mid-Air Gesture Interaction

    This paper presents an investigation into the effects of different feedback modalities on mid-air gesture interaction for infotainment systems in cars. Car crashes and near-crash events are most commonly caused by driver distraction. Mid-air interaction is a way of reducing driver distraction by reducing the visual demand of infotainment. Despite a range of available modalities, feedback in mid-air gesture systems is generally provided through visual displays. We conducted a simulated driving study to investigate how different types of multimodal feedback can support in-air gestures. The effects of different feedback modalities on eye gaze behaviour and on the driving and gesturing tasks are considered. We found that feedback modality influenced gesturing behaviour. However, drivers corrected falsely executed gestures more often in non-visual conditions. Our findings show that non-visual feedback can reduce visual distraction significantly.