13 research outputs found

    Investigations of the Role of Gaze in Mixed-Reality Personal Computing

    This paper investigates how eye tracking and gaze estimation can help create better mixed-reality personal computing systems involving both physical (real-world) and virtual (digital) objects. The role of gaze is discussed in light of the situative space model (SSM), which determines the set of objects (physical and virtual) that a given human agent can perceive, and act on, at any given moment in time. The analysis and discussion result in ideas for extending the SSM to better incorporate the role of gaze in everyday human activities, and for taking advantage of emerging mobile eye-tracking technology.
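    As a rough illustration of the kind of set the SSM determines, the sketch below classifies objects into a perceivable set (within view distance and the gaze field of view) and an actionable set (also within reach). The geometry, names, and thresholds (fov, view_dist, reach) are illustrative assumptions, not the paper's definitions.

    ```python
    import math
    from dataclasses import dataclass

    @dataclass
    class SceneObject:
        name: str
        x: float
        y: float

    def _bearing_offset(ax, ay, heading, obj):
        # Smallest absolute angle between the gaze heading and the object.
        d = math.atan2(obj.y - ay, obj.x - ax) - heading
        return abs((d + math.pi) % (2 * math.pi) - math.pi)

    def situative_spaces(ax, ay, gaze_heading, objects,
                         fov=math.radians(60), view_dist=10.0, reach=1.0):
        """Return (perceivable, actionable) object sets for an agent at (ax, ay)."""
        perceivable, actionable = [], []
        for obj in objects:
            dist = math.hypot(obj.x - ax, obj.y - ay)
            if dist <= view_dist and _bearing_offset(ax, ay, gaze_heading, obj) <= fov / 2:
                perceivable.append(obj)       # agent can currently perceive it
                if dist <= reach:
                    actionable.append(obj)    # agent can also act on it
        return perceivable, actionable
    ```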

    Creating Gaze Annotations in Head Mounted Displays

    No full text
    To facilitate distributed communication in mobile settings, we developed GazeNote for creating and sharing gaze annotations in head-mounted displays (HMDs). Gaze annotations make it possible to point out objects of interest within an image and add a verbal description. To create an annotation, the user simply captures an image using the HMD's camera, looks at an object of interest in the image, and speaks the information to be associated with the object. The gaze location is recorded and visualized with a marker, and the voice is transcribed using speech recognition. Gaze annotations can be shared. Our study showed that users found gaze annotations add precision and expressiveness compared to annotating the image as a whole.
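    A minimal sketch of the annotation flow described in the abstract is shown below: capture an image, record the gaze point, transcribe the spoken note, and bundle the result for sharing. The GazeAnnotation type and the hmd/recognizer interfaces are assumptions for illustration, not GazeNote's actual API.

    ```python
    from dataclasses import dataclass
    from typing import Tuple

    @dataclass
    class GazeAnnotation:
        image: bytes               # snapshot from the HMD camera
        gaze_xy: Tuple[int, int]   # gaze location in image coordinates
        note: str                  # transcribed verbal description

    def create_annotation(hmd, recognizer) -> GazeAnnotation:
        image = hmd.capture_image()                       # 1. capture an image
        gaze_xy = hmd.current_gaze_point()                # 2. record where the user is looking
        note = recognizer.transcribe(hmd.record_audio())  # 3. speech-to-text description
        # 4. the annotation is visualized with a marker at gaze_xy and can be shared
        return GazeAnnotation(image, gaze_xy, note)
    ```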

    Mobile gaze-based screen interaction in 3D environments

    No full text

    Eye-based head gestures for interaction in the car

    No full text