
    Can you see what I am talking about? Human speech triggers referential expectation in four-month-old infants

    Infants’ sensitivity to selectively attend to human speech and to process it in a unique way has been widely reported. However, to successfully acquire language, one must also understand that speech is referential, that is, that words can stand for other entities in the world. While there is some evidence that young infants can make inferences about the communicative intentions of a speaker, whether they also appreciate the direct relationship between a specific word and its referent is still unknown. In the present study we tested four-month-old infants to see whether they would expect to find a referent when they hear human speech. Our results showed that, compared to other auditory stimuli or to silence, infants listening to speech were more prepared to find visual referents of the words, as signalled by their faster orienting towards the visual objects. Hence, our study is the first to report evidence that infants at a very young age already understand the referential relationship between spoken words and physical objects, and thus show a precursor to appreciating the symbolic nature of language, even though they do not yet understand the meanings of words.

    EyeScout: Active Eye Tracking for Position and Movement Independent Gaze Interaction with Large Public Displays

    While gaze holds a lot of promise for hands-free interaction with public displays, remote eye trackers with their confined tracking box restrict users to a single stationary position in front of the display. We present EyeScout, an active eye tracking system that combines an eye tracker mounted on a rail system with a computational method to automatically detect and align the tracker with the user's lateral movement. EyeScout addresses key limitations of current gaze-enabled large public displays by offering two novel gaze-interaction modes for a single user: in "Walk then Interact" the user can walk up to an arbitrary position in front of the display and interact, while in "Walk and Interact" the user can interact even while on the move. We report on a user study showing that EyeScout is well perceived by users, extends a public display's sweet spot into a sweet line, and reduces gaze interaction kick-off time to 3.5 seconds -- a 62% improvement over state-of-the-art solutions. We discuss sample applications that demonstrate how EyeScout can enable position- and movement-independent gaze interaction with large public displays.
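
    The abstract describes an active alignment mechanism: the system estimates the user's lateral position and drives the rail-mounted tracker to follow it. As a rough illustration of that idea, here is a minimal sketch of a proportional control loop in Python; it is not EyeScout's actual implementation, the sensor reading is simulated, and the gain, speed limit, and deadband values are assumptions rather than figures from the paper.

```python
# Minimal sketch (assumed values throughout): keep a rail-mounted eye
# tracker aligned with the user's lateral position via proportional control.
# In a real system, user_x would come from a body-tracking sensor and the
# velocity command would go to the rail motor.

GAIN = 2.0        # proportional gain (1/s), assumed tuning value
MAX_SPEED = 0.5   # carriage speed limit (m/s), assumed
DEADBAND = 0.02   # ignore offsets under 2 cm to avoid jitter
DT = 0.02         # control period (s), i.e. a 50 Hz loop

def step_carriage(carriage_x: float, user_x: float) -> float:
    """One control tick: move the carriage toward the user's x-position."""
    error = user_x - carriage_x
    if abs(error) < DEADBAND:
        return carriage_x
    velocity = max(-MAX_SPEED, min(MAX_SPEED, GAIN * error))
    return carriage_x + velocity * DT

if __name__ == "__main__":
    carriage_x = 0.0
    for tick in range(500):
        user_x = 0.002 * tick          # simulated user strolling rightward
        carriage_x = step_carriage(carriage_x, user_x)
    print(f"user at {user_x:.2f} m, carriage at {carriage_x:.2f} m")
```

    The deadband keeps the carriage still for small tracking noise, which matters for a physical rail; the proportional term alone leaves a small lag behind a walking user, which a real controller might reduce with a velocity feed-forward term.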

    VRpursuits: Interaction in Virtual Reality Using Smooth Pursuit Eye Movements

    Gaze-based interaction using smooth pursuit eye movements (Pursuits) is attractive because it is intuitive and overcomes the Midas touch problem. At the same time, eye tracking is becoming increasingly popular in VR applications. While Pursuits has been shown to be effective in several interaction contexts, it had not previously been explored in depth in VR. In a user study (N=26), we investigated how parameters specific to VR settings influence the performance of Pursuits. For example, we found that Pursuits is robust against different sizes of virtual 3D targets. However, performance improves when the trajectory size (e.g., radius) is larger, particularly if the user is walking while interacting. While walking, selecting moving targets via Pursuits is generally feasible, albeit less accurate than when stationary. Finally, we discuss the implications of these findings and the potential of smooth pursuits for interaction in VR by demonstrating two sample use cases: 1) gaze-based authentication in VR, and 2) a space meteor shooting game.
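
    Pursuits-style selection generally works by comparing the gaze trajectory against each moving target's trajectory and selecting the target whose motion the eyes follow most closely. The sketch below illustrates that correlation-matching idea under stated assumptions, not the paper's exact pipeline: Pearson's r is computed per axis over a window of samples, and the window length and 0.8 threshold are chosen for illustration.

```python
# Sketch of correlation-based Pursuits selection: the target whose
# trajectory best correlates with the gaze over a window is selected once
# the mean per-axis Pearson correlation exceeds a threshold (assumed 0.8).
import numpy as np

CORR_THRESHOLD = 0.8  # assumed; such thresholds are typically tuned empirically

def pursuit_score(gaze: np.ndarray, target: np.ndarray) -> float:
    """Mean Pearson correlation of gaze vs. target motion over the x and y
    axes. Both arrays have shape (n_samples, 2)."""
    rx = np.corrcoef(gaze[:, 0], target[:, 0])[0, 1]
    ry = np.corrcoef(gaze[:, 1], target[:, 1])[0, 1]
    return (rx + ry) / 2.0

def select_target(gaze: np.ndarray, targets: dict):
    """Return the best-matching target name, or None if nothing clears
    the threshold (so stray glances do not trigger a selection)."""
    scores = {name: pursuit_score(gaze, traj) for name, traj in targets.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= CORR_THRESHOLD else None

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.linspace(0, 2 * np.pi, 120)                 # one window of samples
    circle = np.column_stack([np.cos(t), np.sin(t)])   # circular trajectory
    diagonal = np.column_stack([t, t])                 # linear trajectory
    noisy_gaze = circle + rng.normal(0, 0.05, circle.shape)
    print(select_target(noisy_gaze, {"circle": circle, "diagonal": diagonal}))
```

    Because selection depends on matching relative motion rather than absolute gaze position, this scheme tolerates calibration offsets, which is one reason Pursuits suits walk-up and head-mounted settings.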

    Regarding Pilot Usage of Display Technologies for Improving Awareness of Aircraft System States

    Flight deck systems and the procedures for interacting with them are increasing in complexity. This trend places a larger burden on pilots to manage increasing amounts of information and to understand system interactions, increasing the likelihood of loss of airplane state awareness (ASA). One way to gain more insight into this issue is through experimentation using objective measures of visual behavior. This study summarizes an analysis of oculometer data obtained during a high-fidelity flight simulation study that included a variety of complex pilot-system interactions that occur in current flight decks, as well as several planned for the next-generation air transportation system. The study comprised various scenarios designed to induce low- and high-energy aircraft states coupled with other causal factors emulated from recent accidents. Three display technologies were evaluated in this pilot-in-the-loop study conducted at NASA Langley Research Center: a stall recovery guidance algorithm and display concept, an enhanced airspeed control indication of when the automation is no longer actively controlling airspeed, and enhanced synoptic diagrams with corresponding simplified electronic interactive checklists. Multiple data analyses were performed to understand how the 26 participating airline pilots observed ASA-related information provided during different stages of flight and during specific events within these stages.
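
    As a concrete example of the kind of analysis oculometer data supports, the sketch below computes dwell time per area of interest (AOI), a common eye-tracking measure; it is a generic illustration, not the study's actual pipeline, and the AOI names, coordinates, and fixation format are all assumed.

```python
# Generic sketch of an AOI dwell-time analysis (assumed AOIs and data
# format): sum fixation durations over the display region each fixation
# falls in, yielding how long the pilot's gaze dwelt on each display.
from collections import defaultdict

AOIS = {  # axis-aligned regions as (x0, y0, x1, y1) in pixels, assumed
    "primary_flight_display": (0, 0, 400, 300),
    "synoptic_diagram": (400, 0, 800, 300),
    "electronic_checklist": (0, 300, 400, 600),
}

def dwell_times(fixations):
    """Sum fixation durations (s) per AOI; fixations outside all AOIs are
    dropped. Each fixation is an (x, y, duration_s) tuple."""
    totals = defaultdict(float)
    for x, y, dur in fixations:
        for name, (x0, y0, x1, y1) in AOIS.items():
            if x0 <= x < x1 and y0 <= y < y1:
                totals[name] += dur
                break
    return dict(totals)

if __name__ == "__main__":
    sample = [(120, 150, 0.35), (520, 90, 0.60), (130, 420, 0.25), (125, 160, 0.40)]
    print(dwell_times(sample))
    # {'primary_flight_display': 0.75, 'synoptic_diagram': 0.6,
    #  'electronic_checklist': 0.25}
```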