
    Tracking Gaze and Visual Focus of Attention of People Involved in Social Interaction

    The visual focus of attention (VFOA) has been recognized as a prominent conversational cue. We are interested in estimating and tracking the VFOAs associated with multi-party social interactions. We note that in this type of situation the participants either look at each other or at an object of interest, so their eyes are not always visible. Consequently, neither gaze nor VFOA estimation can be based on eye detection and tracking. We propose a method that exploits the correlation between eye gaze and head movements. Both VFOA and gaze are modeled as latent variables in a Bayesian switching state-space model. The proposed formulation leads to a tractable learning procedure and to an efficient algorithm that simultaneously tracks gaze and visual focus. The method is tested and benchmarked using two publicly available datasets that contain typical multi-party human-robot and human-human interactions.
    Comment: 15 pages, 8 figures, 6 tables
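    The abstract's core idea, coupling a discrete VFOA state with continuous gaze dynamics and inferring both from head pose, maps naturally onto a switching linear state-space filter. Below is a minimal, hypothetical sketch of that idea in Python (a 1-D GPB1-style switching Kalman filter), not the paper's actual model: the target angles, the gaze-relaxation rate `tau`, and the head-follows-gaze coupling `kappa` are all illustrative assumptions.

```python
# Minimal sketch (not the paper's exact model) of a switching linear
# state-space filter for joint gaze/VFOA tracking. Assumptions: a 1-D gaze
# angle, K candidate VFOA targets at known angles `targets`, and an
# observed head angle modeled as a fixed fraction `kappa` of gaze plus
# noise (a crude stand-in for the gaze-head correlation the paper exploits).
import numpy as np

def switching_kalman_step(mu, var, probs, y, targets,
                          kappa=0.5, q=0.05, r=0.1, tau=0.3, stick=0.9):
    """One GPB1-style filter step.

    mu, var -- current Gaussian belief over the gaze angle (scalar)
    probs   -- current probabilities over the K VFOA targets
    y       -- observed head angle at this frame
    """
    K = len(targets)
    mus, vars_, liks = np.empty(K), np.empty(K), np.empty(K)
    # VFOA transition: sticky diagonal, uniform elsewhere.
    trans = np.full((K, K), (1 - stick) / (K - 1))
    np.fill_diagonal(trans, stick)
    pred_probs = trans.T @ probs
    for k in range(K):
        # Predict: gaze relaxes toward the attended target k.
        m_pred = mu + tau * (targets[k] - mu)
        v_pred = (1 - tau) ** 2 * var + q
        # Update with the head observation y = kappa * gaze + noise.
        s = kappa ** 2 * v_pred + r                  # innovation variance
        gain = kappa * v_pred / s
        innov = y - kappa * m_pred
        mus[k] = m_pred + gain * innov
        vars_[k] = (1 - gain * kappa) * v_pred
        liks[k] = np.exp(-0.5 * innov ** 2 / s) / np.sqrt(2 * np.pi * s)
    new_probs = pred_probs * liks
    new_probs /= new_probs.sum()
    # Collapse the Gaussian mixture to one Gaussian (moment matching).
    mu_new = new_probs @ mus
    var_new = new_probs @ (vars_ + (mus - mu_new) ** 2)
    return mu_new, var_new, new_probs

# Usage: track over a short synthetic head-angle sequence.
targets = np.array([-0.6, 0.0, 0.6])   # hypothetical VFOA target angles (rad)
mu, var = 0.0, 1.0
probs = np.full(3, 1 / 3)
for y in [0.28, 0.30, 0.31, -0.29, -0.30]:
    mu, var, probs = switching_kalman_step(mu, var, probs, y, targets)
print("gaze estimate:", round(mu, 3), "VFOA probs:", probs.round(2))
```

    Collapsing the per-target Gaussians by moment matching keeps the cost of each step linear in the number of VFOA targets, which is the usual route to a tractable joint gaze/VFOA tracker.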

    Use of the tilt cue in a simulated heading tracking task

    The task was performed with subjects using visual-only cues and combined visual and roll-axis motion cues. Half of the experimental trials were conducted with the simulator rotating about the horizontal axis; to suppress the tilt cue, the remaining trials were conducted with the simulator cab tilted 90 deg so that roll-axis motions were about earth vertical. The presence of the tilt cue allowed a substantial and statistically significant reduction in tracking-error scores. When the tilt cue was suppressed, the availability of motion cues did not result in a significant performance improvement. These effects were accounted for by the optimal-control pilot/vehicle model, wherein the presence or absence of each motion cue was represented by an appropriate definition of the perceptual quantities assumed to be used by the human operator.
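    As a concrete illustration of that last point, the sketch below shows one common way an optimal-control pilot model can encode cue availability: each perceived quantity contributes a row to the observation matrix of the operator's internal Kalman estimator, so suppressing a cue amounts to deleting its row. The dynamics, noise levels, and the extra tilt-cue channel here are hypothetical placeholders, not the study's parameters.

```python
# Hypothetical sketch: cue availability as rows of the observation matrix
# in an optimal-control (LQG) pilot model. More (or cleaner) observation
# channels yield a lower steady-state estimation-error covariance.
import numpy as np
from scipy.linalg import solve_discrete_are

dt = 0.05
A = np.array([[1.0, dt],       # heading error integrates heading-error rate
              [0.0, 1.0]])
W = np.diag([1e-4, 1e-3])      # process (disturbance) noise covariance

def estimation_error(C, V):
    """Steady-state error covariance for observations y = C x + v (dual DARE)."""
    return solve_discrete_are(A.T, C.T, W, V)

# Visual-only condition: the operator perceives error and error rate.
C_vis = np.array([[1.0, 0.0],
                  [0.0, 1.0]])
V_vis = np.diag([0.01, 0.04])  # observation noise on each channel

# With the tilt cue: an extra vestibular channel, modeled crudely here as a
# second, less noisy observation of the error rate.
C_tilt = np.vstack([C_vis, [0.0, 1.0]])
V_tilt = np.diag([0.01, 0.04, 0.01])

for name, C, V in [("visual only", C_vis, V_vis), ("with tilt cue", C_tilt, V_tilt)]:
    P = estimation_error(C, V)
    print(f"{name}: heading-error estimation variance = {P[0, 0]:.4f}")
```

    Under these placeholder numbers the tilt-cue condition yields the smaller estimation variance, mirroring the qualitative finding that the tilt cue improves tracking performance.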