
    Gaze Behavior, Believability, Likability and the iCat

    The iCat is a user-interface robot with the ability to express a range of emotions through its facial features. This paper summarizes our research on whether the application of gaze behaviour can increase the believability and likability of the iCat for its human partners. Gaze behaviour serves several functions during social interaction, such as mediating conversation flow, communicating emotional information, and avoiding distraction by restricting visual input. Several types of eye and head movements are necessary for realizing these functions. We designed and evaluated a gaze behaviour system for the iCat robot that implements realistic models of the major types of eye and head movements found in living beings: vergence, the vestibulo-ocular reflex, smooth pursuit movements, and gaze shifts. We discuss how these models are integrated into the software environment of the iCat and can be used to create complex interaction scenarios. We report on user tests and draw conclusions for future evaluation scenarios.
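    As a hedged illustration of how such a system might arbitrate between these movement types, the Python sketch below selects among a ballistic gaze shift, smooth pursuit, and the vestibulo-ocular reflex based on the current gaze error, with a separate vergence channel. All names, gains, and thresholds are illustrative assumptions, not the iCat's actual software interface.

```python
# Hypothetical gaze-controller sketch for the movement types named above:
# gaze shifts, smooth pursuit, the vestibulo-ocular reflex (VOR), and
# vergence. Every constant and function name here is an assumption.

VOR_GAIN = 1.0            # counter-rotate the eyes against head motion
PURSUIT_GAIN = 0.9        # smooth pursuit tracks slow target motion
SACCADE_THRESHOLD = 0.35  # rad; larger gaze errors trigger a gaze shift

def conjugate_command(target_angle, eye_angle, head_velocity, target_velocity):
    """Return an eye-velocity command (rad/s) for one control tick."""
    error = target_angle - eye_angle
    if abs(error) > SACCADE_THRESHOLD:
        return 8.0 * error  # gaze shift: fast, ballistic correction
    # Pursuit keeps a slowly moving target foveated while the VOR
    # cancels the retinal slip caused by head rotation.
    return PURSUIT_GAIN * target_velocity - VOR_GAIN * head_velocity

def vergence_command(left_angle, right_angle, vergence_demand):
    """Drive the vergence angle (left minus right eye angle) toward demand."""
    return 2.0 * (vergence_demand - (left_angle - right_angle))
```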

    Quantifying interactions between accommodation and vergence in a binocularly normal population

    Stimulation of the accommodation system results in a response in the vergence system via accommodative-vergence cross-link interactions, and stimulation of the vergence system results in an accommodation response via vergence-accommodation cross-link interactions. Cross-link interactions are necessary to ensure simultaneous responses in the accommodation and vergence systems. These interactions are represented most comprehensively by the response AC/A (accommodative vergence) and CA/C (vergence accommodation) ratios, although clinically it is the stimulus AC/A ratio that is measured, and the stimulus CA/C ratio is seldom measured at all. The present study aims to quantify both stimulus and response AC/A and CA/C ratios in a binocularly normal population and to determine the relationship between them. Twenty-five subjects (mean ± SD age 21.0 ± 1.9 years) were recruited from the university population. A significant linear relationship was found between the stimulus and response ratios for both the AC/A (r² = 0.96, p < 0.001) and CA/C (r² = 0.40, p < 0.05) ratios. Good agreement was found between the stimulus and response AC/A ratios (95% CI −0.06 to 0.24 MA/D). Stimulus CA/C ratios were higher than response ratios at low values and lower than response ratios at high values (95% CI −0.46 to 0.42 D/MA). Agreement between stimulus and response CA/C ratios is poorer than that found for the AC/A ratios, due to increased variability in vergence responses when viewing the Gaussian-blurred target. This study has shown that more work is needed to refine the methodology of CA/C ratio measurement.
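    For readers unfamiliar with the notation, the conventional definitions of these ratios, with the units used above, are sketched below; the stimulus ratios substitute the demand placed on each system for the measured response.

```latex
\[
\mathrm{AC/A} = \frac{\Delta V}{\Delta A}\ \left[\mathrm{MA/D}\right],
\qquad
\mathrm{CA/C} = \frac{\Delta A}{\Delta V}\ \left[\mathrm{D/MA}\right]
\]
% For the response ratios, \Delta V and \Delta A are the measured changes
% in vergence (meter angles, MA) and accommodation (diopters, D); for the
% stimulus ratios they are the changes in vergence and accommodative demand.
```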

    The use of cues to convergence and accommodation in naive, uninstructed participants

    A remote haploscopic video refractor was used to assess vergence and accommodation responses in a group of 32 emmetropic, orthophoric, symptom-free young adults who were naïve to vision experiments, in a minimally instructed setting. Picture targets were presented at four positions between 2 m and 33 cm. Blur, disparity, and looming cues were presented in combination or separately to assess their contributions to the total near response in a within-subjects design. Response gain for both vergence and accommodation was markedly reduced whenever disparity was excluded, with much smaller effects when blur and proximity were excluded. Despite the clinical homogeneity of the participant group, there were also some individual differences.
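    Response gain here follows the conventional definition, sketched below: the slope of the measured response against the change in demand across the target positions, with a gain of 1 indicating a response that fully matches the change in target distance.

```latex
\[
G_{\text{vergence}} =
  \frac{\Delta\,\text{vergence response}}{\Delta\,\text{vergence demand}},
\qquad
G_{\text{accommodation}} =
  \frac{\Delta\,\text{accommodation response}}{\Delta\,\text{accommodation demand}}
\]
```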

    Neural Representations for Sensory-Motor Control, II: Learning a Head-Centered Visuomotor Representation of 3-D Target Position

    A neural network model is described for how an invariant head-centered representation of 3-D target position can be autonomously learned by the brain in real time. Once learned, such a target representation may be used to control both eye and limb movements. The target representation is derived from the positions of both eyes in the head and the locations which the target activates on the retinas of both eyes. A Vector Associative Map, or VAM, learns the many-to-one transformation from multiple combinations of eye-and-retinal position to invariant 3-D target position. Eye position is derived from outflow movement signals to the eye muscles. Two successive stages of opponent processing convert these corollary discharges into a head-centered representation that closely approximates the azimuth, elevation, and vergence of the eyes' gaze position with respect to a cyclopean origin located between the eyes. VAM learning combines this cyclopean representation of present gaze position with binocular retinal information about target position into an invariant representation of 3-D target position with respect to the head. VAM learning can use a teaching vector that is externally derived from the positions of the eyes when they foveate the target. A VAM can also autonomously discover and learn the invariant representation, without an explicit teacher, by generating internal error signals from environmental fluctuations in which these invariant properties are implicit. VAM error signals are computed by Difference Vectors, or DVs, that are zeroed by the VAM learning process. VAMs may be organized into VAM cascades for learning and performing both sensory-to-spatial maps and spatial-to-motor maps. These multiple uses clarify why DV-type properties are computed by cells in the parietal, frontal, and motor cortices of many mammals. VAMs are modulated by gating signals that express different aspects of the will-to-act. These signals transform a single invariant representation into movements of different speed (GO signal) and size (GRO signal), and thereby enable VAM controllers to match a planned action sequence to variable environmental conditions.
    National Science Foundation (IRI-87-16960, IRI-90-24877); Office of Naval Research (N00014-92-J-1309)
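    The core VAM learning loop can be caricatured in a few lines: a Difference Vector compares the teaching vector with the map's current output, and the weight update drives that DV toward zero. The sketch below is a minimal linear stand-in under assumed shapes, learning rate, and random inputs; it is not the paper's neural circuit.

```python
import numpy as np

# Minimal linear caricature of VAM learning. The random "environment"
# matrix, dimensions, and learning rate are assumptions for illustration.

rng = np.random.default_rng(0)
n_in, n_out = 6, 3                  # eye/retinal signals -> 3-D target position
W = np.zeros((n_out, n_in))         # adaptive many-to-one mapping
true_map = rng.normal(size=(n_out, n_in))  # stands in for the environment

lr = 0.05
for _ in range(5000):
    x = rng.normal(size=n_in)       # sampled eye-and-retinal position
    teacher = true_map @ x          # e.g. target position when the eyes foveate it
    dv = teacher - W @ x            # Difference Vector (error signal)
    W += lr * np.outer(dv, x)       # update that the learning process zeroes

x = rng.normal(size=n_in)
print("residual |DV| on a new input:", np.abs(true_map @ x - W @ x).max())
```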

    Accommodation Dynamics


    Gaze-based teleprosthetic enables intuitive continuous control of complex robot arm use: Writing & drawing

    Eye tracking is a powerful means of assistive technology for people with movement disorders, paralysis, or amputation. We present a highly intuitive eye-tracking-controlled robot arm, operating in 3-dimensional space based on the user's gaze target point, that enables tele-writing and drawing. Usability and intuitiveness were assessed in a "tele-writing" experiment with 8 subjects who learned to operate the system within minutes of first use. These subjects were naive to the system and the task, and had to write three letters on a whiteboard with a whiteboard pen attached to the robot arm's endpoint. They were instructed to imagine they were writing text with the pen and to look where the pen should go, writing the letters as fast and as accurately as possible given a letter-size template. Subjects were able to perform the task with facility and accuracy, and movements of the arm did not interfere with their ability to control their visual attention so as to enable smooth writing. Across five consecutive trials there was a significant decrease in the total time used and in the total number of commands sent to move the robot arm from the first to the second trial, but no further improvement thereafter, suggesting that within writing 6 letters subjects had mastered control of the system. Our work demonstrates that eye tracking is a powerful means of controlling robot arms in closed loop and in real time, outperforming other invasive and non-invasive approaches to Brain-Machine-Interfaces in terms of calibration time (<2 minutes), training time (<10 minutes), and interface technology costs. We suggest that gaze-based decoding of action intention may well become one of the most efficient ways to interface with robotic actuators, i.e. Brain-Robot-Interfaces, and become useful, beyond paralysed and amputee users, for the general teleoperation of robots and exoskeletons in human augmentation.
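    The closed-loop principle described here, with the gaze target point as the setpoint for the arm's endpoint, can be sketched as a simple proportional controller with a small dead-band to absorb fixation jitter. The gains, dead-band, and function names below are assumptions for illustration, not the system's actual decoder.

```python
import numpy as np

# Hypothetical gaze-to-endpoint control tick. Constants are illustrative.
GAIN = 0.4        # fraction of the remaining error corrected per tick
DEADBAND = 0.005  # m; ignore fixation jitter below this radius

def endpoint_command(gaze_target, endpoint, gain=GAIN):
    """One control tick: move the pen tip toward the fixated 3-D point."""
    error = np.asarray(gaze_target) - np.asarray(endpoint)
    if np.linalg.norm(error) < DEADBAND:
        return np.zeros(3)  # treat small gaze jitter as "hold still"
    return gain * error     # displacement command toward the target

# Example: the pen converges on the fixated point over successive ticks.
pos = np.array([0.0, 0.0, 0.0])
target = np.array([0.10, 0.25, 0.05])
for _ in range(20):
    pos = pos + endpoint_command(target, pos)
```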

    Visual and control aspects of saccadic eye movements

    A physiological, behavioral, and control-systems investigation of rapid saccadic eye movements in humans.

    Eye Movement and Pupil Measures: A Review

    Our subjective visual experience involves complex interactions between our eyes, our brain, and the surrounding world. These interactions give us the sense of sight and support color perception, stereopsis, distance judgment, pattern recognition, motor coordination, and more. The increasing ubiquity of gaze-aware technology brings with it the ability to track gaze and pupil measures with varying degrees of fidelity. With this in mind, a review of the various gaze measures becomes increasingly relevant, especially given our ability to make sense of these signals under different spatio-temporal sampling capacities. In this paper, we selectively review prior work on eye movement and pupil measures. We first describe the main oculomotor events studied in the literature and the characteristics of those events that different measures exploit. Next, we review various eye movement and pupil measures from the prior literature. Finally, we discuss our observations on applications of these measures, the benefits and practical challenges they involve, and our recommendations for future eye-tracking research directions.
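    As a concrete example of the kind of measure such a review covers, the sketch below implements a basic velocity-threshold (I-VT) classifier, a standard way to segment a gaze trace into fixation and saccade samples. The sampling rate and the 30 deg/s threshold are common choices in the eye-tracking literature, not values taken from this paper.

```python
import numpy as np

# Illustrative I-VT classifier; constants are common defaults, assumed here.
SAMPLE_RATE = 250.0   # Hz
VEL_THRESHOLD = 30.0  # deg/s; samples moving faster are labeled saccadic

def classify_ivt(gaze_deg):
    """gaze_deg: (N, 2) array of gaze angles in degrees.
    Returns a boolean array, True where the sample belongs to a saccade."""
    velocity = np.linalg.norm(np.diff(gaze_deg, axis=0), axis=1) * SAMPLE_RATE
    return np.concatenate([[False], velocity > VEL_THRESHOLD])

# Example with synthetic data: a fixation, one quick jump, another fixation.
trace = np.vstack([np.zeros((50, 2)), np.full((50, 2), 5.0)])
print(classify_ivt(trace).sum(), "sample(s) flagged as saccadic")
```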