
    Tensity Research Based on the Information of Eye Movement

    During human-robot interaction, the user's mental state is attracting growing attention. Measuring and identifying psychological state, and tension in particular, has practical significance, yet at present there is no suitable method for measuring tension. We first summarize the available eye movement indices. We then study the extraction of eye movement parameters under normal illumination, including face localization, eye localization, pupil diameter measurement, and pupil center characteristics. Tension is then judged from the eye images, and precise gaze direction information is extracted. Finally, experiments demonstrate that the proposed method is effective.
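
    A minimal sketch of the kind of extraction pipeline the abstract describes, assuming OpenCV Haar cascades for face and eye localization and dark-blob thresholding for the pupil; the paper's exact detectors and thresholds are not specified, so everything below is illustrative.

    # Hedged sketch: face -> eye -> pupil center and diameter with OpenCV.
    import cv2

    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")

    def pupil_parameters(gray_frame):
        """Return (center, diameter) of the first detected pupil, or None."""
        for (fx, fy, fw, fh) in face_cascade.detectMultiScale(gray_frame, 1.3, 5):
            face = gray_frame[fy:fy + fh, fx:fx + fw]
            for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(face):
                eye = face[ey:ey + eh, ex:ex + ew]
                # The pupil is the darkest blob: threshold, take largest contour.
                _, mask = cv2.threshold(eye, 40, 255, cv2.THRESH_BINARY_INV)
                contours, _ = cv2.findContours(
                    mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
                if not contours:
                    continue
                (cx, cy), r = cv2.minEnclosingCircle(
                    max(contours, key=cv2.contourArea))
                # Pupil center in full-frame coordinates, diameter in pixels.
                return (fx + ex + cx, fy + ey + cy), 2 * r
        return None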

    Advanced statistical methods for eye movement analysis and modeling: a gentle introduction

    In this Chapter we show that by considering eye movements, and in particular the resulting sequence of gaze shifts, as a stochastic process, a wide variety of tools become available for analyses and modelling beyond conventional statistical methods. Such tools encompass random walk analyses and more complex techniques borrowed from the pattern recognition and machine learning fields. After a brief, though critical, probabilistic tour of current computational models of eye movements and visual attention, we lay down the basis for gaze shift pattern analysis. To this end, the concepts of Markov Processes, the Wiener process and related random walks within the Gaussian framework of the Central Limit Theorem will be introduced. Then, we will deliberately violate fundamental assumptions of the Central Limit Theorem to elicit a larger perspective, rooted in statistical physics, for analysing and modelling eye movements in terms of anomalous, non-Gaussian, random walks and modern foraging theory. Eventually, by resorting to machine learning techniques, we discuss how the analyses of movement patterns can develop into the inference of hidden patterns of the mind: inferring the observer's task, assessing cognitive impairments, classifying expertise. Comment: Draft of Chapter to appear in "An introduction to the scientific foundations of eye movement research and its applications".
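
    As an illustration of the contrast the chapter draws between CLT-governed and anomalous walks, this hedged sketch samples gaze-shift steps from a Gaussian (Wiener-like) walk and from a heavy-tailed Cauchy walk whose variance diverges; all parameters are made up for demonstration.

    # Gaussian vs. heavy-tailed gaze-shift walks (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)
    n_shifts = 1000

    # Gaussian increments: the classical CLT regime.
    gauss_steps = rng.normal(loc=0.0, scale=1.0, size=(n_shifts, 2))

    # Cauchy increments: infinite variance, so the classical CLT no longer
    # applies; paths show rare, very large "Levy-flight-like" saccades.
    cauchy_steps = rng.standard_cauchy(size=(n_shifts, 2))

    for name, steps in [("Gaussian", gauss_steps), ("Cauchy", cauchy_steps)]:
        path = np.cumsum(steps, axis=0)          # cumulative gaze position
        amplitudes = np.linalg.norm(steps, axis=1)
        print(name, "max step amplitude:", round(amplitudes.max(), 1))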

    Gaze-dependent topography in human posterior parietal cortex.

    The brain must convert retinal coordinates into those required for directing an effector. One prominent theory holds that, through a combination of visual and motor/proprioceptive information, head-/body-centered representations are computed within the posterior parietal cortex (PPC). An alternative theory, supported by recent visual and saccade functional magnetic resonance imaging (fMRI) topographic mapping studies, suggests that PPC neurons provide a retinal/eye-centered coordinate system, in which the coding of a visual stimulus location and/or intended saccade endpoints should remain unaffected by changes in gaze position. To distinguish between a retinal/eye-centered and a head-/body-centered coordinate system, we measured how gaze direction affected the representation of visual space in the parietal cortex using fMRI. Subjects performed memory-guided saccades from a central starting point to locations “around the clock.” Starting points varied between left, central, and right gaze relative to the head-/body midline. We found that memory-guided saccadotopic maps throughout the PPC showed spatial reorganization with even subtle changes in starting gaze position, despite constant retinal input and eye movement metrics. Such a systematic shift is inconsistent with models arguing for a retinal/eye-centered coordinate system in the PPC, but it is consistent with head-/body-centered coordinate representations.
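
    A toy sketch of the two coding schemes the study distinguishes: in an eye-centered code the stored location is just the retinal vector and is invariant to gaze, while a head-/body-centered code adds the gaze position. The angles below are hypothetical.

    # Eye-centered vs. head-centered coding of a remembered target (degrees).
    def eye_centered(target_retinal_deg, gaze_deg):
        return target_retinal_deg               # invariant to gaze position

    def head_centered(target_retinal_deg, gaze_deg):
        return target_retinal_deg + gaze_deg    # shifts with gaze position

    for gaze in (-10.0, 0.0, 10.0):             # left, central, right fixation
        print(gaze, eye_centered(5.0, gaze), head_centered(5.0, gaze))
    # A gaze-dependent shift of the map, as observed, matches the second scheme.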

    Saccade Sequence Prediction: Beyond Static Saliency Maps

    Visual attention is a field with a considerable history, with eye movement control and prediction forming an important subfield. Fixation modeling in the past decades has been dominated computationally by a number of highly influential bottom-up saliency models, such as the Itti-Koch-Niebur model. The accuracy of such models has dramatically increased recently due to deep learning. However, on static images these models have largely emphasized unordered prediction of fixations through a saliency map. Very few implemented models can generate temporally ordered, human-like sequences of saccades beyond an initial fixation point. Towards addressing these shortcomings, we present STAR-FC, a novel multi-saccade generator based on a central/peripheral integration of deep learning-based saliency and lower-level feature-based saliency. We have evaluated our model on the CAT2000 database, successfully predicting human patterns of fixation with accuracy and quality equivalent to what can be achieved by using one human sequence to predict another. This is a significant improvement over fixation sequences predicted by state-of-the-art saliency algorithms.
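
    For context, a hedged sketch of the generic baseline that STAR-FC goes beyond: picking ordered fixations greedily from a precomputed saliency map with inhibition of return. STAR-FC's central/peripheral integration is richer than this; the map, radius, and counts below are placeholders.

    # Ordered saccades from a saliency map with inhibition of return.
    import numpy as np

    def saccade_sequence(saliency, n_fixations=5, ior_radius=30):
        sal = saliency.astype(float).copy()
        h, w = sal.shape
        ys, xs = np.mgrid[0:h, 0:w]
        fixations = []
        for _ in range(n_fixations):
            y, x = np.unravel_index(np.argmax(sal), sal.shape)
            fixations.append((y, x))
            # Inhibition of return: suppress a disc around the chosen point
            # so the next fixation moves somewhere new.
            sal[(ys - y) ** 2 + (xs - x) ** 2 <= ior_radius ** 2] = 0.0
        return fixations

    rng = np.random.default_rng(1)
    print(saccade_sequence(rng.random((240, 320))))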

    Ocular attention-sensing interface system

    The purpose of the research was to develop an innovative human-computer interface, the Ocular Attention-Sensing Interface System (OASIS), based on eye movement and voice control. By eliminating the manual interface (keyboard, joystick, etc.), OASIS provides a control mechanism that is natural, efficient, accurate, and low in workload.

    Driver Gaze Region Estimation Without Using Eye Movement

    Automated estimation of the allocation of a driver's visual attention may be a critical component of future Advanced Driver Assistance Systems. In theory, vision-based tracking of the eye can provide a good estimate of gaze location. In practice, eye tracking from video is challenging because of sunglasses, eyeglass reflections, lighting conditions, occlusions, motion blur, and other factors. Estimation of head pose, on the other hand, is robust to many of these effects, but cannot provide as fine-grained a resolution in localizing the gaze. However, for the purpose of keeping the driver safe, it is sufficient to partition gaze into regions. In this effort, we propose a system that extracts facial features and classifies their spatial configuration into six regions in real-time. Our proposed method achieves an average accuracy of 91.4% at an average decision rate of 11 Hz on a dataset of 50 drivers from an on-road study. Comment: Accepted for Publication in IEEE Intelligent Systems.
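
    A sketch of the pipeline shape the abstract describes: a flattened vector of facial-feature coordinates classified into six gaze regions. The landmark count, classifier choice, region names, and random training data below are placeholders, not the authors' exact design.

    # Facial-feature vector -> one of six gaze regions (illustrative).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    REGIONS = ["road", "center_stack", "instrument_cluster",
               "rearview_mirror", "left", "right"]

    rng = np.random.default_rng(0)
    X_train = rng.random((600, 2 * 68))   # e.g. 68 (x, y) landmarks per frame
    y_train = rng.integers(0, len(REGIONS), 600)

    clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)

    frame_features = rng.random((1, 2 * 68))
    print(REGIONS[clf.predict(frame_features)[0]])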

    HeadOn: Real-time Reenactment of Human Portrait Videos

    We propose HeadOn, the first real-time source-to-target reenactment approach for complete human portrait videos that enables transfer of torso and head motion, face expression, and eye gaze. Given a short RGB-D video of the target actor, we automatically construct a personalized geometry proxy that embeds a parametric head, eye, and kinematic torso model. A novel real-time reenactment algorithm employs this proxy to photo-realistically map the captured motion from the source actor to the target actor. On top of the coarse geometric proxy, we propose a video-based rendering technique that composites the modified target portrait video via view- and pose-dependent texturing, and creates photo-realistic imagery of the target actor under novel torso and head poses, facial expressions, and gaze directions. To this end, we propose robust tracking of the face and torso of the source actor. We extensively evaluate our approach and show significant improvements in enabling much greater flexibility in creating realistic reenacted output videos. Comment: Video: https://www.youtube.com/watch?v=7Dg49wv2c_g Presented at Siggraph'18.
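
    A toy sketch of the general view-dependent texturing idea: blending captured textures with weights that favor views angularly close to the novel viewpoint. This is a generic scheme with a made-up sharpness parameter, not HeadOn's actual renderer.

    # Blend k captured textures by angular proximity to a novel view.
    import numpy as np

    def blend_textures(novel_dir, view_dirs, textures, sharpness=10.0):
        """novel_dir: (3,) unit vector; view_dirs: (k, 3) unit vectors;
        textures: (k, H, W, 3) images captured from those views."""
        cos_sim = view_dirs @ novel_dir                 # proximity per view
        weights = np.exp(sharpness * cos_sim)           # favor nearby views
        weights /= weights.sum()
        return np.tensordot(weights, textures, axes=1)  # weighted average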

    Social Navigation Planning Based on People's Awareness of Robots

    When mobile robots maneuver near people, they run the risk of rudely blocking their paths; but not all people behave the same around robots. People who have not noticed the robot are the most difficult to predict. This paper investigates how mobile robots can generate acceptable paths in dynamic environments by predicting human behavior. Human behavior here includes both physical and mental behavior; we focus on the latter. We introduce a simple safe-interaction model: when a person seems unaware of the robot, the robot should avoid approaching too closely. In this study, people around robots are detected and tracked using sensor fusion and filtering techniques. To handle uncertainties in the dynamic environment, a Partially Observable Markov Decision Process (POMDP) is used to formulate the navigation planning problem in the shared environment. People's awareness of robots is inferred and included as a state and reward model in the POMDP. The proposed planner enables a robot to change its navigation plan based on its perception of each person's robot-awareness. As far as we can tell, this is a new capability. We conduct simulations and experiments using the Toyota Human Support Robot (HSR) to validate our approach. We demonstrate that the proposed framework is capable of running in real-time. Comment: 8 pages, 7 figures.
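
    A condensed sketch of the POMDP ingredients described above, with awareness entering both the state and the reward. The state layout, margins, and penalty values are illustrative, not the paper's actual model.

    # Awareness-aware state and reward for a navigation POMDP (illustrative).
    import numpy as np
    from dataclasses import dataclass

    @dataclass
    class State:
        robot_xy: np.ndarray    # robot position
        person_xy: np.ndarray   # tracked person position
        aware: bool             # person's (hidden) awareness of the robot

    def reward(state, goal_xy, unaware_margin=1.5, aware_margin=0.6):
        dist = np.linalg.norm(state.robot_xy - state.person_xy)
        # Keep a wider berth around people who have not noticed the robot.
        margin = aware_margin if state.aware else unaware_margin
        penalty = -10.0 if dist < margin else 0.0
        progress = -np.linalg.norm(state.robot_xy - goal_xy)
        return progress + penalty

    # Awareness is not observed directly: the planner keeps a belief b over it
    # (updated from head-pose/gaze observations) and evaluates expected reward
    # E[r] = b * r_aware + (1 - b) * r_unaware when choosing actions.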

    Modelling Locomotor Control: the advantages of mobile gaze

    In 1958, J.J. Gibson put forward proposals on the visual control of locomotion. Research in the last 50 years has served to clarify the sources of visual and nonvisual information that contribute to successful steering, but has yet to determine how this information is optimally combined under conditions of uncertainty. Here, we test the conditions under which a locomotor robot with a mobile camera can steer effectively using simple visual and extra-retinal parameters, to examine how such models cope with the noisy real-world visual and motor estimates that are available to humans. This applied modeling gives us insight into both the advantages and limitations of using active gaze to sample information when steering.
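
    A minimal sketch of a gaze-driven steering law of the kind such models examine: the body turns toward a fixated target by combining a noisy retinal error with a noisy extra-retinal eye-in-head signal. The gain and noise levels are made up, not taken from the paper.

    # Steering from noisy visual and extra-retinal signals (illustrative).
    import numpy as np

    rng = np.random.default_rng(0)

    def steering_rate(target_angle_body, eye_in_head_angle, k=2.0,
                      visual_noise=0.02, proprio_noise=0.05):
        # Retinal error: target direction relative to the current gaze line.
        retinal = (target_angle_body - eye_in_head_angle
                   + rng.normal(0.0, visual_noise))
        # Extra-retinal estimate of where the eye points relative to the body.
        extra_retinal = eye_in_head_angle + rng.normal(0.0, proprio_noise)
        # Together these reconstruct the body-relative target angle; steer
        # proportionally so heading rotates toward the fixated point.
        return k * (retinal + extra_retinal)

    print(steering_rate(target_angle_body=0.3, eye_in_head_angle=0.25))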