
    Computing optical flow in the primate visual system

    Computing motion on the basis of the time-varying image intensity is a difficult problem for both artificial and biological vision systems. We show how gradient models, a well-known class of motion algorithms, can be implemented within the magnocellular pathway of the primate's visual system. Our cooperative algorithm computes optical flow in two steps. In the first stage, assumed to be located in primary visual cortex, local motion is measured, while spatial integration occurs in the second stage, assumed to be located in the middle temporal area (MT). The final optical flow is extracted in this second stage using population coding, such that the velocity is represented by the vector sum of neurons coding for motion in different directions. Our theory, relating the single-cell to the perceptual level, accounts for a number of psychophysical and electrophysiological observations and illusions.
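
    The population-coding readout described above can be made concrete in a few lines. The sketch below decodes velocity as the vector sum of direction-tuned responses, assuming idealized units with half-rectified cosine tuning; the names and parameters are illustrative, not taken from the paper.

```python
import numpy as np

# Population-vector readout: velocity is represented by the vector sum
# of responses of neurons tuned to different directions (illustrative).

def decode_velocity(preferred_dirs, responses):
    """Vector-sum decoding of local velocity.

    preferred_dirs : (N,) preferred directions in radians
    responses      : (N,) non-negative firing rates
    """
    vx = np.sum(responses * np.cos(preferred_dirs))
    vy = np.sum(responses * np.sin(preferred_dirs))
    return np.array([vx, vy])

# Example: 16 direction-tuned units responding to rightward motion,
# assuming half-rectified cosine tuning curves.
dirs = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)
rates = np.maximum(np.cos(dirs - 0.0), 0.0)
print(decode_velocity(dirs, rates))  # decoded vector points along +x
```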

    Neural Representations for Sensory-Motor Control, III: Learning a Body-Centered Representation of 3-D Target Position

    A neural model is described of how the brain may autonomously learn a body-centered representation of 3-D target position by combining information about retinal target position, eye position, and head position in real time. Such a body-centered spatial representation enables accurate movement commands to the limbs to be generated despite changes in the spatial relationships between the eyes, head, body, and limbs through time. The model learns a vector representation--otherwise known as a parcellated distributed representation--of target vergence with respect to the two eyes, and of the horizontal and vertical spherical angles of the target with respect to a cyclopean egocenter. Such a vergence-spherical representation has been reported in the caudal midbrain and medulla of the frog, as well as in psychophysical movement studies in humans. A head-centered vergence-spherical representation of foveated target position can be generated by two stages of opponent processing that combine corollary discharges of outflow movement signals to the two eyes. Sums and differences of opponent signals define angular and vergence coordinates, respectively. The head-centered representation interacts with a binocular visual representation of non-foveated target position to learn a visuomotor representation of both foveated and non-foveated target position that is capable of commanding yoked eye movements. This head-centered vector representation also interacts with representations of neck movement commands to learn a body-centered estimate of target position that is capable of commanding coordinated arm movements. Learning occurs during head movements made while gaze remains fixed on a foveated target. An initial estimate is stored, and a VOR-mediated gating signal prevents the stored estimate from being reset during a gaze-maintaining head movement. As the head moves, new estimates are compared with the stored estimate to compute difference vectors which act as error signals that drive the learning process, as well as control the on-line merging of multimodal information. Air Force Office of Scientific Research (F49620-92-J-0499); National Science Foundation (IRI-87-16960, IRI-90-24877); Office of Naval Research (N00014-92-J-1309).
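
    The opponent-processing step, in which sums of the two eyes' outflow signals define the angular coordinate and differences define vergence, can be illustrated with a minimal sketch. The function below recovers a cyclopean direction and an approximate target distance from the two horizontal eye angles; the sign convention, the interocular distance, and the small-angle depth formula are assumptions made for illustration, not details of the model.

```python
import numpy as np

# Minimal sketch of opponent processing, assuming eye angles measured
# counterclockwise (viewed from above) with 0 = straight ahead.

def vergence_spherical(theta_L, theta_R, interocular=0.065):
    """Cyclopean direction and depth from horizontal eye-position signals.

    theta_L, theta_R : eye rotations in radians
    interocular      : assumed eye separation in metres
    """
    version = 0.5 * (theta_L + theta_R)   # sum -> cyclopean angle
    vergence = theta_R - theta_L          # difference -> vergence
    # Small-angle geometry: vergence ~ interocular / target distance.
    distance = interocular / vergence if vergence > 1e-9 else float("inf")
    return version, vergence, distance

# A foveated target 0.5 m straight ahead: each eye rotates nasally.
theta = np.arctan(0.0325 / 0.5)
print(vergence_spherical(-theta, theta))  # version ~ 0, distance ~ 0.5 m
```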

    Computing motion in the primate's visual system

    Computing motion on the basis of the time-varying image intensity is a difficult problem for both artificial and biological vision systems. We will show how one well-known gradient-based computer algorithm for estimating visual motion can be implemented within the primate's visual system. This relaxation algorithm computes the optical flow field by minimizing a variational functional of a form commonly encountered in early vision; the computation proceeds in two steps. In the first stage, local motion is computed, while in the second stage spatial integration occurs. Neurons in the second stage represent the optical flow field via a population-coding scheme, such that the vector sum of all neurons at each location codes for the direction and magnitude of the velocity at that location. The resulting network maps onto the magnocellular pathway of the primate visual system, in particular onto cells in the primary visual cortex (V1) as well as onto cells in the middle temporal area (MT). Our algorithm mimics a number of psychophysical phenomena and illusions (perception of coherent plaids, motion capture, motion coherence) as well as electrophysiological recordings. Thus, a single unifying principle, ‘the final optical flow should be as smooth as possible’ (except at isolated motion discontinuities), explains a large number of phenomena and links single-cell behavior with perception and computational theory.
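
    As an illustration of the variational principle quoted above, the sketch below implements the classic Horn and Schunck relaxation, a gradient-based scheme of the kind the paper builds on: each iteration pulls the flow toward its local average (the smoothness term) while re-imposing consistency with the image gradients. The parameter values are illustrative, and the paper's actual neural implementation differs.

```python
import numpy as np
from scipy.signal import convolve2d

def horn_schunck(Ix, Iy, It, alpha=1.0, n_iter=100):
    """Relax toward the smoothest flow (u, v) consistent with the
    spatial gradients Ix, Iy and the temporal gradient It."""
    u = np.zeros_like(Ix, dtype=float)
    v = np.zeros_like(Ix, dtype=float)
    avg = np.array([[0.0, 0.25, 0.0],
                    [0.25, 0.0, 0.25],
                    [0.0, 0.25, 0.0]])
    for _ in range(n_iter):
        # Neighbourhood averages implement the smoothness prior.
        u_bar = convolve2d(u, avg, mode="same", boundary="symm")
        v_bar = convolve2d(v, avg, mode="same", boundary="symm")
        # Jacobi update from the Euler-Lagrange equations of the functional.
        t = (Ix * u_bar + Iy * v_bar + It) / (alpha**2 + Ix**2 + Iy**2)
        u = u_bar - Ix * t
        v = v_bar - Iy * t
    return u, v
```

    Note that the quoted principle exempts isolated motion discontinuities from the smoothness constraint; this basic sketch omits that refinement.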

    Development of lateralization of the magnetic compass in a migratory bird

    The magnetic compass of a migratory bird, the European robin (Erithacus rubecula), was shown to be lateralized in favour of the right eye/left brain hemisphere. However, this seems to be a property of the avian magnetic compass that is not present from the beginning, but develops only as the birds grow older. During their first autumn migration, juvenile robins can orient by their magnetic compass with their right as well as with their left eye. In the following spring, however, the magnetic compass is already lateralized, but this lateralization is still flexible: it could be removed by covering the right eye for 6 h. During the following autumn migration, the lateralization becomes more strongly fixed, with a 6 h occlusion of the right eye no longer having an effect. This change from a bilateral to a lateralized magnetic compass appears to be a maturation process, the first such case known in birds. Because both eyes mediate identical information about the geomagnetic field, brain asymmetry for the magnetic compass could increase efficiency by setting the other hemisphere free for other processes.

    Kiwi Forego Vision in the Guidance of Their Nocturnal Activities

    BACKGROUND: In vision, there is a trade-off between sensitivity and resolution, and any eye which maximises information gain at low light levels needs to be large. This imposes exacting constraints upon vision in nocturnal flying birds. Eyes are essentially heavy, fluid-filled chambers, and in flying birds their increased size is countered by selection for both reduced body mass and the distribution of mass towards the body core. Freed from these mass constraints, it would be predicted that in flightless birds nocturnality should favour the evolution of large eyes and reliance upon visual cues for the guidance of activity. METHODOLOGY/PRINCIPAL FINDINGS: We show that in Kiwi (Apterygidae), flightlessness and nocturnality have, in fact, resulted in the opposite outcome. Kiwi show minimal reliance upon vision, as indicated by eye structure, visual field topography, and brain structures, and an increased reliance upon tactile and olfactory information. CONCLUSIONS/SIGNIFICANCE: This lack of reliance upon vision and increased reliance upon tactile and olfactory information in Kiwi is markedly similar to the situation in nocturnal mammals that exploit the forest floor. That Kiwi and mammals evolved to exploit these habitats quite independently provides evidence for convergent evolution in sensory capacities tuned to a common set of perceptual challenges found in forest-floor habitats at night, challenges which cannot be met by the vertebrate visual system. We propose that the Kiwi visual system has undergone adaptive regressive evolution driven by the trade-off between the relatively low rate of gain of visual information that is possible at low light levels, and the metabolic costs of extracting that information.

    Visuomotor Transformation in the Fly Gaze Stabilization System

    For sensory signals to control an animal's behavior, they must first be transformed into a format appropriate for use by its motor systems. This fundamental problem is faced by all animals, including humans. Beyond simple reflexes, little is known about how such sensorimotor transformations take place. Here we describe how the outputs of a well-characterized population of fly visual interneurons, lobula plate tangential cells (LPTCs), are used by the animal's gaze-stabilizing neck motor system. The LPTCs respond to visual input arising from both self-rotations and translations of the fly. The neck motor system, however, is involved in gaze stabilization and thus mainly controls compensatory head rotations. We investigated how the neck motor system is able to selectively extract rotation information from the mixed responses of the LPTCs. We recorded extracellularly from fly neck motor neurons (NMNs) and mapped the directional preferences across their extended visual receptive fields. Our results suggest that—like the tangential cells—NMNs are tuned to panoramic retinal image shifts, or optic flow fields, which occur when the fly rotates about particular body axes. In many cases, tangential cells and motor neurons appear to be tuned to similar axes of rotation, resulting in a correlation between the coordinate systems the two neural populations employ. However, in contrast to the primarily monocular receptive fields of the tangential cells, most NMNs are sensitive to visual motion presented to either eye. This makes the NMNs more selective for rotation than the LPTCs. Thus, the neck motor system increases its rotation selectivity by a comparatively simple mechanism: the integration of binocular visual motion information.
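
    The closing point, that binocular integration yields rotation selectivity, can be caricatured in a few lines. In the toy model below, local flow is sampled at one lateral viewing direction per eye: a yaw rotation sweeps the panorama front-to-back across one eye and back-to-front across the other, whereas forward translation is front-to-back on both, so a hypothetical neuron summing mirror-opposed monocular preferences is driven by rotation and cancels translation. All names and numbers here are illustrative assumptions, not data from the study.

```python
import numpy as np

# Toy comparison of rotation vs. translation selectivity, assuming one
# lateral flow sample per eye and matched-filter (dot-product) responses.

def lptc_response(flow, preferred):
    """Monocular response: local flow projected onto the preferred direction."""
    return float(np.dot(flow, preferred))

front_to_back = np.array([1.0, 0.0])  # local retinal direction (illustrative)
back_to_front = -front_to_back

# Local flow at the left/right lateral sample points for each self-motion:
yaw = (front_to_back, back_to_front)  # rotation: opposite retinal directions
fwd = (front_to_back, front_to_back)  # translation: same retinal direction

def nmn(left_flow, right_flow):
    """Hypothetical neck motor neuron: binocular sum with
    mirror-opposed monocular preferences."""
    return (lptc_response(left_flow, front_to_back)
            + lptc_response(right_flow, back_to_front))

print("rotation drive:   ", nmn(*yaw))  # 2.0 -> responds
print("translation drive:", nmn(*fwd))  # 0.0 -> cancels
```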