
    Active Estimation of Distance in a Robotic Vision System that Replicates Human Eye Movement

    Many visual cues, both binocular and monocular, provide 3D information. When an agent moves with respect to a scene, an important cue is the different motion of objects located at different distances. While motion parallax is evident for large translations of the agent, in most head/eye systems a small parallax also occurs during rotations of the cameras. A similar parallax is present in the human eye: during a relocation of gaze, the shift in the retinal projection of an object depends not only on the amplitude of the movement, but also on the distance of the object from the observer. This study proposes a method for estimating distance on the basis of the parallax that emerges from rotations of a camera. A pan/tilt system specifically designed to reproduce the oculomotor parallax of the human eye was used to replicate the oculomotor strategy by which humans scan visual scenes. We show that the oculomotor parallax provides accurate estimation of distance during sequences of eye movements. In a system that actively scans a visual scene, challenging tasks such as image segmentation and figure/ground segregation benefit greatly from this cue. This work was supported by the National Science Foundation (BIC-0432104, CCF-0130851).
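The geometry behind this cue can be made concrete. As a hypothetical sketch (the paper's actual estimator and calibration are not reproduced here), assume a pinhole camera that rotates about an axis displaced a small distance behind its optical center, so every rotation also translates the nodal point; the residual image shift beyond the pure-rotation component then scales inversely with object distance:

```python
import math

def depth_from_oculomotor_parallax(shift_px, theta_rad, focal_px, offset_m):
    """Estimate object distance from the residual parallax of a rotating camera.

    Hypothetical model: a rotation by theta_rad about an axis offset_m behind
    the optical center translates the nodal point by ~offset_m * sin(theta),
    adding a depth-dependent shift on top of the pure-rotation image shift.
    """
    rotation_shift = focal_px * math.tan(theta_rad)  # shift a rotation alone would cause
    parallax = shift_px - rotation_shift             # residual, depth-dependent component
    if abs(parallax) < 1e-9:
        return float("inf")                          # no parallax: object at infinity
    translation = offset_m * math.sin(theta_rad)     # nodal-point translation
    return focal_px * translation / abs(parallax)    # pinhole relation Z = f * t / parallax
```

All parameter names and the small-offset pinhole model are our assumptions for illustration, not quantities taken from the paper.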

    Vergence control system for stereo depth recovery

    This paper describes a vergence control algorithm for a 3D stereo recovery system. The work was developed within the framework of the ROBTET project, whose purpose is to design a teleoperated robotic system for live power line maintenance. The tasks involved require the automatic calculation of paths for standard tasks, collision detection to avoid electrical shocks, force feedback, accurate visual data, and the generation of collision-free real paths. To accomplish these tasks the system needs an exact model of the environment, which is acquired through an active stereoscopic head. A cooperative algorithm using vergence and stereo correlation is presented. The proposed system is based on phase correlation and tries to keep vergence on the object of interest. The sharp vergence changes produced by switching between objects of interest are controlled through an estimate of depth generated by a stereo correspondence system. For some elements of the scene, those aligned with the epipolar plane, large errors arise both in the depth estimation and in the phase correlation. To minimize these errors, a laser lighting system is used to aid fixation, ensuring adequate vergence and depth extraction. The work presented in this paper has been supported by the electric utility IBERDROLA, S.A. under project PIE No. 132.198.
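The phase-correlation step that keeps vergence on the target can be sketched generically; this 1-D illustration (the function names and the controller gain are ours, not taken from the ROBTET system) estimates the shift between the left and right image rows and uses it to drive the vergence angle:

```python
import numpy as np

def phase_correlation_shift(left, right):
    """Estimate the integer shift between two 1-D signals via phase correlation."""
    F1, F2 = np.fft.fft(left), np.fft.fft(right)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12               # keep phase only (spectral whitening)
    corr = np.real(np.fft.ifft(cross))           # sharp peak at the relative shift
    peak = int(np.argmax(corr))
    n = len(left)
    return peak if peak <= n // 2 else peak - n  # unwrap to a signed shift

def vergence_update(angle_rad, disparity_px, gain=1e-3):
    """Proportional vergence controller: rotate so as to null the measured disparity."""
    return angle_rad + gain * disparity_px
```

A real head would repeat this loop per frame, switching to the stereo-correspondence depth estimate when the phase-correlation peak is unreliable, as the abstract describes.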

    A hierarchical system for a distributed representation of the peripersonal space of a humanoid robot

    Reaching a target object in an unknown and unstructured environment is easily performed by human beings. However, designing a humanoid robot that executes the same task requires the implementation of complex abilities, such as identifying the target in the visual field, estimating its spatial location, and precisely driving the motors of the arm to reach it. While research usually tackles the development of such abilities in isolation, in this work we integrate a number of computational models into a unified framework and demonstrate, in a humanoid torso, the feasibility of an integrated working representation of its peripersonal space. To achieve this goal, we propose a cognitive architecture that connects several models inspired by the neural circuits of the visual, frontal and posterior parietal cortices of the brain. The outcome of the integration process is a system that allows the robot to create its internal model and its representation of the surrounding space by interacting with the environment directly, through a mutual adaptation of perception and action. The robot is eventually capable of executing a set of tasks, such as recognizing, gazing at and reaching target objects, which can work separately or cooperate to support more structured and effective behaviors.
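Purely as an illustration of how such abilities compose (every function name and camera parameter below is our invention, not one of the paper's models), the recognize-localize-reach chain can be sketched as three stages that share a common target representation:

```python
def recognize(frame):
    """Stand-in for target identification: return the (row, col) of the brightest pixel."""
    best = max((v, r, c) for r, row in enumerate(frame) for c, v in enumerate(row))
    return (best[1], best[2])

def localize(image_rc, depth_m, focal_px=500.0, center=(120, 160)):
    """Back-project pixel coordinates to a 3-D point using a pinhole camera model."""
    y = (image_rc[0] - center[0]) * depth_m / focal_px
    x = (image_rc[1] - center[1]) * depth_m / focal_px
    return (x, y, depth_m)

def reach(arm_state, target_xyz):
    """Stand-in for the motor stage: record the Cartesian goal for the arm."""
    arm_state["goal"] = target_xyz
    return arm_state
```

An integrated behavior then chains the stages, e.g. `reach(arm, localize(recognize(frame), depth))`; the point of the paper is that in the robot these stages adapt to each other rather than running as a fixed pipeline.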

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task where there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. No significant difference in performance was seen between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) objects may be stored in and retrieved from a pre-attentional store during this task, giving further weight to that argument.
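The spoke manipulation can be made concrete with a little geometry. As a sketch (the viewing distance and screen coordinates are our assumptions, not stated in the abstract), displacing a rectangle by ±1 degree of visual angle along the line from fixation amounts to a radial move whose on-screen size depends on viewing distance:

```python
import math

def shift_along_spoke(pos_cm, delta_deg, fixation_cm=(0.0, 0.0), viewing_cm=57.0):
    """Move a stimulus along the imaginary spoke from fixation by delta_deg of visual angle."""
    dx = pos_cm[0] - fixation_cm[0]
    dy = pos_cm[1] - fixation_cm[1]
    r = math.hypot(dx, dy)                                 # eccentricity on screen (cm)
    step = viewing_cm * math.tan(math.radians(delta_deg))  # visual angle -> cm on screen
    scale = (r + step) / r                                 # scale the radial vector
    return (fixation_cm[0] + dx * scale, fixation_cm[1] + dy * scale)
```

At the conventional ~57 cm viewing distance, 1 degree of visual angle corresponds to roughly 1 cm on the screen, which is why that distance is a common choice in psychophysics.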

    Engineering data compendium. Human perception and performance. User's guide

    The concept underlying the Engineering Data Compendium was the product of a research and development program (the Integrated Perceptual Information for Designers project) aimed at facilitating the application of basic research findings in human performance to the design of military crew systems. The principal objective was to develop a workable strategy for: (1) identifying and distilling information of potential value to system design from the existing research literature, and (2) presenting this technical information in a way that would aid its accessibility, interpretability, and applicability for system designers. The present four volumes of the Engineering Data Compendium represent the first implementation of this strategy. This is the first volume, the User's Guide, containing a description of the program and instructions for its use.

    Perceiving slant about a horizontal axis from stereopsis

    Rotating a surface about a horizontal axis alters the retinal horizontal-shear disparities. Opposed torsional eye movements (cyclovergence) also change horizontal shear. If there were no compensation for the horizontal disparities created by cyclovergence, slant estimates would be erroneous. We asked whether compensation for cyclovergence occurs and, if it does, whether it occurs by use of an extraretinal cyclovergence signal, by use of vertical-shear disparities, or by use of both signals. In four experiments, we found that compensation is nearly veridical when vertical-shear disparities are available and easily measured. When they are not available or easily measured, no compensation occurs. Thus, the visual system does not seem to use an extraretinal cyclovergence signal in stereoscopic slant estimation. We also looked for evidence of an extraretinal cyclovergence signal in a visual direction task and found none. We calculated the statistical reliabilities of slant-from-disparity and slant-from-texture estimates and found that the more reliable of the two means of estimation varies significantly with distance and slant. Finally, we examined how slant about a horizontal axis might be estimated when the eyes look eccentrically.
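The reliability comparison in the last part of the abstract rests on the standard inverse-variance (maximum-likelihood) rule for combining two cues: the cue with the smaller variance gets the larger weight. A minimal sketch, with hypothetical numbers:

```python
def combine_slant_cues(slant_disp, var_disp, slant_tex, var_tex):
    """Inverse-variance weighting of slant-from-disparity and slant-from-texture.

    The more reliable cue (smaller variance) dominates the combined estimate,
    and the combined variance is never larger than either cue's alone.
    """
    w_d, w_t = 1.0 / var_disp, 1.0 / var_tex
    slant = (w_d * slant_disp + w_t * slant_tex) / (w_d + w_t)  # weighted mean
    variance = 1.0 / (w_d + w_t)                                # combined reliability
    return slant, variance
```

Because the relative variances of the two cues change with distance and slant, which cue dominates also changes, exactly the pattern the abstract reports.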