Recommended from our members
Binocular Eye Movements Are Adapted to the Natural Environment.
Humans and many animals make frequent saccades requiring coordinated movements of the eyes. When landing on the new fixation point, the eyes must converge accurately or double images will be perceived. We asked whether the visual system uses statistical regularities in the natural environment to aid eye alignment at the end of saccades. We measured the distribution of naturally occurring disparities in different parts of the visual field. The central tendency of the distributions was crossed (nearer than fixation) in the lower field and uncrossed (farther) in the upper field in male and female participants. It was uncrossed in the left and right fields. We also measured horizontal vergence after completion of vertical, horizontal, and oblique saccades. When the eyes first landed near the eccentric target, vergence was quite consistent with the natural-disparity distribution. For example, when making an upward saccade, the eyes diverged to be aligned with the most probable uncrossed disparity in that part of the visual field. Likewise, when making a downward saccade, the eyes converged to enable alignment with crossed disparity in that part of the field. Our results show that rapid binocular eye movements are adapted to the statistics of the 3D environment, minimizing the need for large corrective vergence movements at the end of saccades. The results are relevant to the debate about whether eye movements are derived from separate saccadic and vergence neural commands that control both eyes or from separate monocular commands that control the eyes independently.

SIGNIFICANCE STATEMENT We show that the human visual system incorporates statistical regularities in the visual environment to enable efficient binocular eye movements. We define the oculomotor horopter: the surface of 3D positions to which the eyes initially move when stimulated by eccentric targets. The observed movements maximize the probability of accurate fixation as the eyes move from one position to another. This is the first study to show quantitatively that binocular eye movements conform to 3D scene statistics, thereby enabling efficient processing. The results provide greater insight into the neural mechanisms underlying the planning and execution of saccadic eye movements.
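The crossed/uncrossed sign convention used in the abstract follows directly from binocular vergence geometry. The sketch below is not from the paper; the interpupillary distance and viewing distances are illustrative assumptions. It computes the signed horizontal disparity of a midline point relative to a midline fixation point:

```python
import math

def horizontal_disparity_deg(target_dist, fixation_dist, ipd=0.064):
    """Approximate horizontal disparity (degrees) of a point at target_dist
    metres when the eyes fixate at fixation_dist metres, both on the midline.
    Negative = crossed (target nearer than fixation),
    positive = uncrossed (target farther). Simple vergence geometry;
    the 64 mm interpupillary distance is an illustrative assumption."""
    # Vergence angle subtended by the eyes at each distance
    verg_fix = 2 * math.atan(ipd / (2 * fixation_dist))
    verg_tgt = 2 * math.atan(ipd / (2 * target_dist))
    return math.degrees(verg_fix - verg_tgt)

# A point nearer than fixation yields crossed (negative) disparity:
print(horizontal_disparity_deg(0.5, 1.0))
# A point farther than fixation yields uncrossed (positive) disparity:
print(horizontal_disparity_deg(2.0, 1.0))
```

This makes the abstract's finding concrete: after a downward saccade the lower field tends to contain nearer surfaces, so converging toward crossed (negative) disparity is the statistically safe landing vergence.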
A hierarchical system for a distributed representation of the peripersonal space of a humanoid robot
Reaching a target object in an unknown and unstructured environment is easily performed by human beings. However, designing a humanoid robot that executes the same task requires the implementation of complex abilities, such as identifying the target in the visual field, estimating its spatial location, and precisely driving the motors of the arm to reach it. While research usually tackles the development of such abilities individually, in this work we integrate a number of computational models into a unified framework and demonstrate in a humanoid torso the feasibility of an integrated working representation of its peripersonal space. To achieve this goal, we propose a cognitive architecture that connects several models inspired by neural circuits of the visual, frontal, and posterior parietal cortices of the brain. The outcome of the integration process is a system that allows the robot to create its internal model and its representation of the surrounding space by interacting with the environment directly, through a mutual adaptation of perception and action. The robot is eventually capable of executing a set of tasks, such as recognizing, gazing at, and reaching target objects, which can be performed separately or combined to support more structured and effective behaviors.
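The recognize-localize-reach flow described above can be sketched, very loosely, as a chain of stages. The stage names and toy outputs below are hypothetical placeholders, not the paper's actual modules:

```python
def make_pipeline(*stages):
    """Chain perception/action stages: each stage consumes the previous
    stage's output, mirroring the recognize -> localize -> reach flow."""
    def run(x):
        for stage in stages:
            x = stage(x)
        return x
    return run

# Toy stages (illustrative stand-ins for the architecture's modules):
recognize = lambda image: {"label": "ball", "pixel": (120, 80)}      # detect in visual field
localize  = lambda det: {**det, "xyz": (0.3, -0.1, 0.25)}            # estimate 3D position
reach     = lambda tgt: f"reaching {tgt['label']} at {tgt['xyz']}"   # drive the arm

act = make_pipeline(recognize, localize, reach)
print(act(None))
```

The point of the paper, of course, is that these stages are not a fixed feed-forward chain but mutually adapt through interaction with the environment; the sketch only shows the nominal data flow.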
Vector Disparity Sensor with Vergence Control for Active Vision Systems
This paper presents an architecture for computing vector disparity for active vision systems as used in robotics applications. Controlling the vergence angle of a binocular system allows us to efficiently explore dynamic environments, but requires a generalization of the disparity computation with respect to a static camera setup, in which the disparity is strictly 1-D after image rectification. The interaction between vision and motor control allows us to develop an active sensor that achieves high accuracy of the disparity computation around the fixation point and a fast reaction time for the vergence control. In this contribution, we address the development of a real-time architecture for vector disparity computation using an FPGA device. We implement the disparity unit and the control module for vergence, version, and tilt to determine the fixation point. In addition, we discuss two different on-chip alternatives for the vector disparity engines, based on the luminance (gradient-based) and phase information of the binocular images. The multiscale versions of these engines are able to estimate the vector disparity at up to 32 fps on VGA-resolution images with very good accuracy, as shown using benchmark sequences with known ground truth. We discuss the performance of the presented approaches in terms of frame rate, resource utilization, and accuracy. On the basis of these results, our study indicates that the gradient-based approach offers the best trade-off for integration with the active vision system.
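The core of a gradient-based vector disparity engine can be conveyed by a single-scale, single-window least-squares sketch (Lucas-Kanade style). The paper's FPGA implementation is multiscale and real-time; the Python fragment below is illustrative only, and the window size is an assumption:

```python
import numpy as np

def vector_disparity(left, right, win=7):
    """Gradient-based 2-D disparity between a left/right image pair:
    solves the least-squares system A d = b, where A stacks the spatial
    gradients of the left image and b the inter-ocular intensity
    difference over a local window. Returns the estimated shift
    (dx, dy) that maps the left image onto the right, evaluated at
    the image centre (single scale, so valid only for small shifts)."""
    gy, gx = np.gradient(left.astype(float))      # spatial gradients
    gt = right.astype(float) - left.astype(float) # inter-image difference
    h, w = left.shape
    r = win // 2
    cy, cx = h // 2, w // 2
    sl = (slice(cy - r, cy + r + 1), slice(cx - r, cx + r + 1))
    A = np.stack([gx[sl].ravel(), gy[sl].ravel()], axis=1)
    b = -gt[sl].ravel()
    d, *_ = np.linalg.lstsq(A, b, rcond=None)
    return d  # (dx, dy)
```

A multiscale (coarse-to-fine) wrapper around this estimator is what lifts the small-shift restriction, which is the trade-off the paper's multiscale engines address in hardware.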
Modeling Accommodation Control of the Human Eye: Chromatic Aberration and Color Opponency
Accommodation is the process by which the eye lens changes optical power to maintain a clear retinal image as the distance to the fixated object varies. Although luminance blur has long been considered the driving feature for accommodation, it is by definition unsigned (i.e., there is no difference between the defocus of an object closer or farther than the focus distance). Nonetheless, the visual system initially accommodates in the correct direction, implying that it exploits a cue that carries sign information. Here, we present a model of accommodation control based on such a cue: longitudinal chromatic aberration (LCA). The model relies on color-opponent units, much like those observed among retinal ganglion cells, to perform the computation required to use LCA to drive accommodation.
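A toy illustration of why LCA yields a signed cue: blur each of two chromatic channels by its own, chromatically offset defocus and compare their sharpness; the sign of the opponent difference flips with the direction of defocus. All parameters below (the blur model, the 0.4 D chromatic offset) are illustrative assumptions, not the paper's model:

```python
import numpy as np

def accommodation_sign(defocus_D, lca_offset=0.4):
    """Toy LCA cue. With longitudinal chromatic aberration, the short-wave
    (blue) image focuses roughly lca_offset dioptres in front of the
    long-wave (red) image. Blur a step edge in each channel according to
    its own defocus, measure edge sharpness, and return the sign of the
    red-vs-blue (opponent) sharpness difference; the sign indicates which
    direction focus must change. defocus_D is the signed defocus of the
    middle wavelength in dioptres (illustrative units throughout)."""
    x = np.linspace(-1, 1, 201)
    edge = (x > 0).astype(float)

    def edge_sharpness(defocus):
        sigma = 0.02 + 0.1 * abs(defocus)            # blur grows with |defocus|
        kernel = np.exp(-0.5 * (x / sigma) ** 2)
        img = np.convolve(edge, kernel / kernel.sum(), mode='same')
        return np.abs(np.gradient(img)).max()        # peak gradient = sharpness

    c_red = edge_sharpness(defocus_D + lca_offset / 2)
    c_blue = edge_sharpness(defocus_D - lca_offset / 2)
    return np.sign(c_red - c_blue)
```

Because the two channels sit on opposite sides of the mid-wavelength focus, their sharpness difference changes sign with the direction of defocus, which is exactly the information an unsigned luminance-blur cue lacks and which color-opponent units are well placed to extract.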