    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
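    The radial-shift manipulation is easy to make concrete. Below is a minimal Python sketch, assuming rectangle centres are given in Cartesian degrees of visual angle with fixation at the origin; the function name, data layout and ring example are illustrative, not taken from the study.

        import numpy as np

        def radial_jitter(centres, shift_deg=1.0, seed=None):
            # Shift each stimulus centre by +/- shift_deg along the imaginary
            # spoke joining central fixation (the origin) to that centre.
            rng = np.random.default_rng(seed)
            centres = np.asarray(centres, dtype=float)
            radii = np.linalg.norm(centres, axis=1, keepdims=True)
            spokes = centres / radii  # unit vectors pointing outward from fixation
            signs = rng.choice([-1.0, 1.0], size=(len(centres), 1))
            return centres + signs * shift_deg * spokes

        # Eight rectangles evenly spaced on a ring 5 deg from fixation.
        angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
        ring = np.column_stack([5 * np.cos(angles), 5 * np.sin(angles)])
        print(radial_jitter(ring, shift_deg=1.0, seed=0))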

    Visually guided vergence in a new stereo camera system

    People move their eyes several times each second, to selectively analyze visual information from specific locations. This is important, because analyzing the whole scene in foveal detail would require a beachball-sized brain and thousands of additional calories per day. As artificial vision becomes more sophisticated, it may face analogous constraints. Anticipating this, we previously developed a robotic head with biologically realistic oculomotor capabilities. Here we present a system for accurately orienting the cameras toward a three-dimensional point. The robot's cameras converge when looking at something nearby, so each camera should ideally centre the same visual feature. At the end of a saccade, we combine priors with cross-correlation of the images from each camera to iteratively fine-tune their alignment, and we use the orientations to set focus distance. This system allows the robot to accurately view a visual target with both eyes.
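    Two pieces of that pipeline lend themselves to a short sketch: estimating the residual left/right misalignment by cross-correlation after a saccade, and triangulating focus distance from the vergence angle. A hedged Python sketch, assuming grayscale patches as NumPy arrays and an illustrative 6 cm baseline; none of these names come from the robot's actual software.

        import numpy as np

        def residual_offset(left_patch, right_patch):
            # Horizontal shift (pixels) that best aligns the right patch to
            # the left one, via normalized cross-correlation of the mean rows.
            l = (left_patch - left_patch.mean()) / (left_patch.std() + 1e-9)
            r = (right_patch - right_patch.mean()) / (right_patch.std() + 1e-9)
            corr = np.correlate(l.mean(axis=0), r.mean(axis=0), mode="full")
            return int(np.argmax(corr)) - (left_patch.shape[1] - 1)

        def focus_distance(vergence_rad, baseline_m=0.06):
            # Distance to the fixated point, assuming the two cameras
            # converge symmetrically about the midline.
            return (baseline_m / 2) / np.tan(vergence_rad / 2)

    In use, the offset would be converted into a small camera rotation and the correlation repeated until the offset falls below a pixel, at which point the angle between the camera axes sets the focus distance.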

    Naturalistic depth perception and binocular vision

    Humans continuously move both their eyes to redirect their foveae to objects at new depths. To correctly execute these complex combinations of saccades, vergence eye movements and accommodation changes, the visual system makes use of multiple sources of depth information, including binocular disparity and defocus. Furthermore, during development, both fine-tuning of oculomotor control and correct eye growth are likely driven by complex interactions between eye movements, accommodation, and the distributions of defocus and depth information across the retina. I have employed photographs of natural scenes taken with a commercial plenoptic camera to examine depth perception while varying perspective, blur and binocular disparity. Using a gaze-contingent display with these natural images, I have shown that disparity and peripheral blur interact to modify eye movements and facilitate binocular fusion. By decoupling visual feedback for each eye, I have found it possible to induce both conjugate and disconjugate changes in saccadic adaptation, which helps us understand to what degree the eyes can be individually controlled. To understand the aetiology of myopia, I have developed geometric models of emmetropic and myopic eye shape, from which I have derived psychophysically testable predictions about visual function. I have then tested the myopic against the emmetropic visual system and have found that some aspects of visual function decrease in the periphery at a faster rate in best-corrected myopic observers than in emmetropes. To study the effects of different depth cues on visual development, I have investigated accommodation response and sensitivity to blur in normal and myopic subjects. This body of work furthers our understanding of oculomotor control and 3D perception, has applied implications for discomfort in virtual reality use, and provides clinically relevant insights regarding the development of refractive error and potential approaches to prevent incorrect emmetropization.
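    The gaze-contingent manipulation described above can be sketched compactly: blend a sharp and a blurred copy of an image so that blur grows with eccentricity from the gaze point. A minimal Python sketch, assuming a grayscale image as a NumPy array; the parameter names and the linear blending ramp are illustrative choices, not the thesis's actual stimulus code.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def gaze_contingent_blur(image, gaze_xy, fovea_px=60, sigma=3.0):
            # Blend sharp and blurred copies so blur strength increases
            # with distance from the current gaze position.
            blurred = gaussian_filter(image, sigma=sigma)
            ys, xs = np.mgrid[0:image.shape[0], 0:image.shape[1]]
            ecc = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1])
            # Weight rises from 0 in the fovea toward 1 in the periphery.
            w = np.clip((ecc - fovea_px) / (2 * fovea_px), 0.0, 1.0)
            return (1 - w) * image + w * blurred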

    A hierarchical system for a distributed representation of the peripersonal space of a humanoid robot

    Reaching a target object in an unknown and unstructured environment is easily performed by human beings. However, designing a humanoid robot that executes the same task requires the implementation of complex abilities, such as identifying the target in the visual field, estimating its spatial location, and precisely driving the motors of the arm to reach it. While research usually tackles the development of such abilities in isolation, in this work we integrate a number of computational models into a unified framework, and demonstrate in a humanoid torso the feasibility of an integrated working representation of its peripersonal space. To achieve this goal, we propose a cognitive architecture that connects several models inspired by neural circuits of the visual, frontal and posterior parietal cortices of the brain. The outcome of the integration process is a system that allows the robot to create its internal model and its representation of the surrounding space by interacting with the environment directly, through a mutual adaptation of perception and action. The robot is eventually capable of executing a set of tasks, such as recognizing, gazing at, and reaching target objects, which can work separately or cooperate to support more structured and effective behaviors.
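    The integration the abstract describes can be caricatured as a thin pipeline in which perception feeds spatial estimation, which in turn drives the arm. A Python sketch with every class and value invented purely for illustration; the actual architecture connects neurally inspired models, not stubs like these.

        class Recognizer:
            def identify(self, frame):
                return "red_ball"  # stub for the visual identification model

        class Localizer:
            def estimate(self, frame, label):
                return (0.30, 0.10, 0.25)  # stub for spatial (x, y, z) estimation

        class Arm:
            def move_to(self, xyz):
                print("reaching toward", xyz)  # stub for motor control

        class PeripersonalSpaceModel:
            # Composes the three abilities the abstract lists: identify the
            # target, estimate its location, drive the arm to reach it.
            def __init__(self, recognizer, localizer, arm):
                self.recognizer, self.localizer, self.arm = recognizer, localizer, arm

            def reach(self, frame):
                label = self.recognizer.identify(frame)
                xyz = self.localizer.estimate(frame, label)
                self.arm.move_to(xyz)  # perception drives action
                return label, xyz

        PeripersonalSpaceModel(Recognizer(), Localizer(), Arm()).reach(frame=None)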

    The spatial averaging of disparities in brief, static random-dot stereograms

    Visual images from the two eyes are transmitted to the brain. Because the eyes are horizontally separated, there is a horizontal disparity between the two images. The amount of disparity between the images of a given point depends on the distance of that point from the viewer's point of fixation. A natural visual environment contains surfaces at many different depths, so the brain must process a spatial distribution of disparities. How are these disparities spatially combined? Brief (about 200 msec) static cyclopean random-dot stereograms were used as stimuli for vergence and depth discrimination to answer this question. The results indicated a large averaging region for vergence and a smaller pooling region for depth discrimination. Vergence responded to the mean disparity of two transparent planes. When a disparate target was present in a fixation-plane surround, vergence improved as target size was increased, saturating at 3-6 degrees. Depth discrimination thresholds improved with target size, reaching a minimum at 1-3 degrees, but increased for larger targets. Depth discrimination also depended on the extent of a disparity pedestal surrounding the target, consistent with vergence facilitation. Vergence might, therefore, implement a coarse-to-fine reduction in binocular matching noise. Interocular decorrelation can be considered as multiple chance matches at different disparities; the spatial pooling limits found for disparity were replicated when interocular decorrelation was discriminated. The disparity of the random dots also influenced the apparent horizontal alignment of neighbouring monocular lines, suggesting that disparity averaging takes place at an early stage of visual processing. Three possible explanations were considered: 1) disparities are detected in different spatial frequency channels (Marr and Poggio, 1979); 2) second-order luminance patterns are matched between the two eyes using non-linear channels; 3) secondary disparity filters process disparities extracted from linear filters.
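    The central pooling idea is simple enough to state in code: average the disparity map inside a circular region whose radius matches the mechanism under study, large for vergence and small for depth discrimination. A minimal Python sketch with invented names and sizes; the transparent-planes example mirrors the finding that vergence responds to the mean disparity.

        import numpy as np

        def pooled_disparity(disparity_map, center_px, radius_deg, deg_per_px):
            # Mean disparity inside a circular pooling region: a large radius
            # mimics the vergence average, a small one the discrimination pool.
            ys, xs = np.mgrid[0:disparity_map.shape[0], 0:disparity_map.shape[1]]
            ecc_deg = np.hypot(xs - center_px[0], ys - center_px[1]) * deg_per_px
            return disparity_map[ecc_deg <= radius_deg].mean()

        # Two transparent planes at +0.2 and -0.2 deg disparity: the pooled
        # value is near zero, so vergence is driven toward the mean depth.
        rng = np.random.default_rng(0)
        dmap = rng.choice([0.2, -0.2], size=(200, 200))
        print(pooled_disparity(dmap, center_px=(100, 100), radius_deg=4, deg_per_px=0.05))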

    Interactive Object Learning and Recognition with Multiclass Support Vector Machines

    • …
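    The abstract for this entry is elided in the listing, so purely as a generic illustration of the technique named in the title: a minimal multiclass SVM recognizer in Python with scikit-learn, using a toy dataset as a stand-in for object features. Nothing here reflects the paper's actual setup.

        from sklearn import svm, datasets
        from sklearn.model_selection import train_test_split

        # Toy stand-in for object feature vectors: the digits dataset.
        X, y = datasets.load_digits(return_X_y=True)
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

        # SVC handles multiclass problems via one-vs-one voting internally.
        clf = svm.SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)
        print("held-out accuracy:", clf.score(X_test, y_test))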