
    Neural Representations for Sensory-Motor Control, II: Learning a Head-Centered Visuomotor Representation of 3-D Target Position

    Full text link
    A neural network model is described for how an invariant head-centered representation of 3-D target position can be autonomously learned by the brain in real time. Once learned, such a target representation may be used to control both eye and limb movements. The target representation is derived from the positions of both eyes in the head, and the locations which the target activates on the retinas of both eyes. A Vector Associative Map, or VAM, learns the many-to-one transformation from multiple combinations of eye-and-retinal position to invariant 3-D target position. Eye position is derived from outflow movement signals to the eye muscles. Two successive stages of opponent processing convert these corollary discharges into a head-centered representation that closely approximates the azimuth, elevation, and vergence of the eyes' gaze position with respect to a cyclopean origin located between the eyes. VAM learning combines this cyclopean representation of present gaze position with binocular retinal information about target position into an invariant representation of 3-D target position with respect to the head. VAM learning can use a teaching vector that is externally derived from the positions of the eyes when they foveate the target. A VAM can also autonomously discover and learn the invariant representation, without an explicit teacher, by generating internal error signals from environmental fluctuations in which these invariant properties are implicit. VAM error signals are computed by Difference Vectors, or DVs, that are zeroed by the VAM learning process. VAMs may be organized into VAM Cascades for learning and performing both sensory-to-spatial maps and spatial-to-motor maps. These multiple uses clarify why DV-type properties are computed by cells in the parietal, frontal, and motor cortices of many mammals. VAMs are modulated by gating signals that express different aspects of the will-to-act. These signals transform a single invariant representation into movements of different speed (GO signal) and size (GRO signal), and thereby enable VAM controllers to match a planned action sequence to variable environmental conditions.
    National Science Foundation (IRI-87-16960, IRI-90-24877); Office of Naval Research (N00014-92-J-1309)
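    The DV-driven learning described in this abstract can be illustrated with a toy linear map; the dimensions, learning rate, and stand-in "true" transform below are illustrative assumptions, not the model's actual circuitry. An adaptive map is updated by the Difference Vector between a teaching vector and its own output, and learning zeroes the DV:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: 6 input components (eye position plus binocular
# retinal position) and 3 output components (azimuth, elevation, vergence).
n_in, n_out = 6, 3
W = np.zeros((n_out, n_in))                 # adaptive map, learned online
true_map = rng.normal(size=(n_out, n_in))   # stand-in for the invariant transform

lr = 0.05
for _ in range(4000):
    x = rng.normal(size=n_in)     # a random eye-and-retina configuration
    teacher = true_map @ x        # teaching vector, e.g. from foveating the target
    dv = teacher - W @ x          # Difference Vector: the error signal
    W += lr * np.outer(dv, x)     # learning drives the DV toward zero

# After learning, DVs are near zero even for novel configurations.
x = rng.normal(size=n_in)
residual = float(np.linalg.norm(true_map @ x - W @ x))
```

    This is only the error-correction skeleton; the model itself embeds it in opponent-processed gaze signals and GO/GRO gating.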

    How Does the Cerebral Cortex Work? Development, Learning, Attention, and 3D Vision by Laminar Circuits of Visual Cortex

    Full text link
    A key goal of behavioral and cognitive neuroscience is to link brain mechanisms to behavioral functions. The present article describes recent progress towards explaining how the visual cortex sees. Visual cortex, like many parts of perceptual and cognitive neocortex, is organized into six main layers of cells, as well as characteristic sub-laminae. Here it is proposed how these layered circuits help to realize the processes of development, learning, perceptual grouping, attention, and 3D vision through a combination of bottom-up, horizontal, and top-down interactions. A key theme is that the mechanisms which enable development and learning to occur in a stable way imply properties of adult behavior. These results thus begin to unify three fields: infant cortical development, adult cortical neurophysiology and anatomy, and adult visual perception. The identified cortical mechanisms promise to generalize to explain how other perceptual and cognitive processes work.
    Air Force Office of Scientific Research (F49620-01-1-0397); Office of Naval Research (N00014-01-1-0624)

    Towards a Unified Theory of Neocortex: Laminar Cortical Circuits for Vision and Cognition

    Full text link
    A key goal of computational neuroscience is to link brain mechanisms to behavioral functions. The present article describes recent progress towards explaining how laminar neocortical circuits give rise to biological intelligence. These circuits embody two new and revolutionary computational paradigms: Complementary Computing and Laminar Computing. Circuit properties include a novel synthesis of feedforward and feedback processing, of digital and analog processing, and of pre-attentive and attentive processing. This synthesis clarifies the appeal of Bayesian approaches but has a far greater predictive range that naturally extends to self-organizing processes. Examples from vision and cognition are summarized. A LAMINART architecture unifies properties of visual development, learning, perceptual grouping, attention, and 3D vision. A key modeling theme is that the mechanisms which enable development and learning to occur in a stable way imply properties of adult behavior. It is noted how higher-order attentional constraints can influence multiple cortical regions, and how spatial and object attention work together to learn view-invariant object categories. In particular, a form-fitting spatial attentional shroud can allow an emerging view-invariant object category to remain active while multiple view categories are associated with it during sequences of saccadic eye movements. Finally, the chapter summarizes recent work on the LIST PARSE model of cognitive information processing by the laminar circuits of prefrontal cortex. LIST PARSE models the short-term storage of event sequences in working memory, their unitization through learning into sequence, or list, chunks, and their read-out in planned sequential performance that is under volitional control. LIST PARSE provides a laminar embodiment of Item and Order working memories, also called Competitive Queuing models, that have been supported by both psychophysical and neurobiological data. 
    These examples show how variations of a common laminar cortical design can embody properties of visual and cognitive intelligence that seem, at least on the surface, to be mechanistically unrelated.
    National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624)
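    The Item and Order (Competitive Queuing) readout that LIST PARSE embodies can be sketched in a few lines; the item labels and activation values below are illustrative. A stored list is encoded as a primacy gradient of activations, and performance repeatedly selects the strongest item and self-inhibits it:

```python
import numpy as np

# Illustrative stored list with a primacy gradient: earlier items are
# encoded with higher activation, as in Item and Order working memories.
items = ["A", "B", "C", "D"]
activation = np.array([1.0, 0.8, 0.6, 0.4])

recalled = []
act = activation.copy()
for _ in range(len(items)):
    winner = int(np.argmax(act))   # competitive choice of the strongest item
    recalled.append(items[winner])
    act[winner] = -np.inf          # self-inhibition deletes the performed item
```

    Reading out the gradient reproduces the stored order, ["A", "B", "C", "D"]; a noisy or flattened gradient produces the order errors such models are used to explain.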

    Intrinsic activity in the fly brain gates visual information during behavioral choices

    Get PDF
    The small insect brain is often described as an input/output system that executes reflex-like behaviors. It can also initiate neural activity and behaviors intrinsically, seen as spontaneous behaviors, different arousal states, and sleep. However, less is known about how intrinsic activity in neural circuits affects sensory information processing in the insect brain and variability in behavior. Here, by simultaneously monitoring Drosophila's behavioral choices and brain activity in a flight simulator system, we identify intrinsic activity that is associated with the act of selecting between visual stimuli. We recorded neural output (multiunit action potentials and local field potentials) in the left and right optic lobes of a tethered flying Drosophila, while its attempts to follow visual motion (yaw torque) were measured by a torque meter. We show that when facing competing motion stimuli on its left and right, Drosophila typically generate large torque responses that flip from side to side. The delayed onset (0.1-1 s) and spontaneous switch-like dynamics of these responses, and the fact that the flies sometimes oppose the stimuli by flying straight, make this behavior different from the classic steering reflexes. Drosophila thus seem to choose one stimulus at a time and attempt to rotate toward its direction. With this behavior, the neural output of the optic lobes alternates, being augmented on the side chosen for body rotation and suppressed on the opposite side, even though the visual input to the fly eyes stays the same. Thus, the flow of information from the fly eyes is gated intrinsically. Such modulation can be noise-induced or intentional, with one possibility being that the fly brain highlights chosen information while ignoring the irrelevant, similar to what is known to occur in higher animals.

    Functional representation of vision within the mind: A visual consciousness model based in 3D default space

    Get PDF
    The human eyes and brain, which have finite boundaries, create a “virtual” space within our central nervous system that interprets and perceives a space that appears boundless and infinite. Using insights from studies on the visual system, we propose a novel fast processing mechanism involving the eyes, visual pathways, and cortex whereby external vision is imperceptibly processed in the brain in real time, creating an internal representation of external space that appears as an external view. We introduce the existence of a three-dimensional default space consisting of intrapersonal body space that serves as the framework in which visual and non-visual sensory information is sensed and experienced. We propose that the thalamus integrates processed information from corticothalamic feedback loops and fills in the neural component of 3D default space with an internal visual representation of external space, leading to the experience of visual consciousness. Because this visual space inherently evades perception, we introduce three easy clinical tests that can assist in experiencing it. We also review visual neuroanatomical pathways, binocular vision, neurological disorders, and visual phenomena to elucidate how the representation of external visible space is recreated within the mind.

    Objective Evaluation Criteria for Shooting Quality of Stereo Cameras over Short Distance

    Get PDF
    Stereo cameras are the basic tools used to obtain stereoscopic image pairs, which can yield high image quality. However, some inappropriate shooting conditions may cause discomfort when viewing stereo images. It is therefore necessary to establish perceptual criteria that can be used to evaluate the shooting quality of stereo cameras. This article proposes objective quality evaluation criteria based on the characteristics of parallel and toed-in camera configurations. Considering their different internal structures and basic shooting principles, this paper focuses on short-distance shooting conditions and establishes assessment criteria for both parallel and toed-in camera configurations. Experimental results show that the proposed evaluation criteria can predict the visual perception of stereoscopic images and effectively evaluate stereoscopic image quality.
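    For the parallel configuration, the geometry underlying such criteria can be sketched as follows. The relation d = f·b/Z is standard stereo geometry; the focal length, baseline, and depths below are illustrative values, not the article's actual test conditions or its full criteria:

```python
# Disparity of a point at depth Z for a parallel stereo rig:
# d = f * b / Z, with focal length f, baseline b, and depth Z in
# consistent units (here millimeters, giving d in millimeters on sensor).
def parallel_disparity(f_mm, baseline_mm, depth_mm):
    return f_mm * baseline_mm / depth_mm

# Illustrative short-distance shot: 35 mm lens, 65 mm baseline.
near = parallel_disparity(35.0, 65.0, 500.0)    # object at 0.5 m
far = parallel_disparity(35.0, 65.0, 5000.0)    # object at 5 m

# Short shooting distances inflate the disparity range (the shot's depth
# budget), one source of the viewing discomfort such criteria must predict.
disparity_range = near - far
```

    A toed-in rig adds keystone distortion and vertical parallax on top of this horizontal-disparity geometry, which is why the two configurations need separate criteria.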

    Involuntary saccades and binocular coordination during visual pursuit in Parkinson's disease

    Get PDF
    Prior studies of oculomotor function in Parkinson's disease (PD) have either focused on saccades while smooth pursuit eye movements were not involved, or tested smooth pursuit without considering the effect of any involuntary saccades. The present study investigated whether these involuntary saccades could serve as a useful biomarker for PD. Ten observers with PD participated in the study along with 10 age-matched normal control (NC) and 10 young control (YC) participants. Observers fixated on a central cross while a disk (target) moved toward it from either side of the screen. Once the target reached the fixation cross, observers began to pursue the moving target until it reached the other side. To vary the difficulty of fixation and pursuit, the moving target was presented on a blank or a moving background. The moving background consisted of uniformly distributed dots that moved in either the same or the opposite direction as the target once the target reached the central fixation cross. To investigate binocular coordination, each background condition was presented under a binocular condition, in which both eyes saw the same stimulus, and under a dichoptic condition, in which one eye saw only the target and the other eye saw only the background. The results showed that in both background conditions, observers with PD made more involuntary saccades than NC and YC during both fixation and pursuit periods, while YC and NC showed no difference. Moreover, the difference between left and right eye positions increased over time during the pursuit period for the PD group but not for the other two groups. This suggests that individuals with PD may be impaired not only in saccade inhibition, but also in binocular coordination during pursuit. [Meeting abstract presented at VSS 2016.]
    Accepted manuscript

    Eye movement control during visual pursuit in Parkinson's disease

    Get PDF
    BACKGROUND: Prior studies of oculomotor function in Parkinson's disease (PD) have either focused on saccades without considering smooth pursuit, or tested smooth pursuit while excluding saccades. The present study investigated the control of saccadic eye movements during pursuit tasks and assessed the quality of binocular coordination as potential sensitive markers of PD. METHODS: Observers fixated on a central cross while a target moved toward it. Once the target reached the fixation cross, observers began to pursue the moving target. To further investigate binocular coordination, the moving target was presented to both eyes (binocular condition) or to one eye only (dichoptic condition). RESULTS: The PD group made more saccades than age-matched normal control adults (NC) during both fixation and pursuit. The difference between left and right gaze positions increased over time during the pursuit period for PD but not for NC. The findings were not related to age, as NC and the young-adult control group (YC) performed similarly on most of the eye movement measures, and were not correlated with classical measures of PD severity (e.g., Unified Parkinson's Disease Rating Scale (UPDRS) score). DISCUSSION: Our results suggest that PD may be associated with impairment not only in saccade inhibition, but also in binocular coordination during pursuit, and these aspects of dysfunction may be useful in PD diagnosis or tracking of disease course.
    This work was supported in part by grants from the National Science Foundation (NSF SBE-0354378 to Arash Yazdanbakhsh and Bo Cao) and Office of Naval Research (ONR N00014-11-1-0535 to Bo Cao, Chia-Chien Wu, and Arash Yazdanbakhsh). There was no additional external funding received for this study. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. (SBE-0354378 - National Science Foundation (NSF); ONR N00014-11-1-0535 - Office of Naval Research)
    Published version
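    The binocular-coordination measure used here, the difference between left- and right-eye positions growing over the pursuit period, can be sketched on synthetic data. The traces, sampling rate, and drift below are invented for illustration and do not reproduce the study's recordings:

```python
import numpy as np

# Synthetic gaze traces (degrees) at 1 kHz; in the study these would come
# from an eye tracker during the pursuit period. The small velocity
# mismatch is an illustrative stand-in for a coordination deficit.
rng = np.random.default_rng(1)
t = np.arange(0.0, 2.0, 0.001)
left = 10.0 * t + rng.normal(0.0, 0.05, t.size)    # pursuit at 10 deg/s
right = 10.2 * t + rng.normal(0.0, 0.05, t.size)   # slightly drifting eye

# Disconjugacy: difference between left- and right-eye positions.
disconjugacy = left - right

# A deficit shows up as a trend: the mean absolute difference in the
# second half of the pursuit exceeds that in the first half.
half = t.size // 2
early = float(np.abs(disconjugacy[:half]).mean())
late = float(np.abs(disconjugacy[half:]).mean())
```

    In the study, this early-versus-late growth separated the PD group from both control groups.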

    Neural Representations for Sensory-Motor Control, III: Learning a Body-Centered Representation of 3-D Target Position

    Full text link
    A neural model is described of how the brain may autonomously learn a body-centered representation of 3-D target position by combining information about retinal target position, eye position, and head position in real time. Such a body-centered spatial representation enables accurate movement commands to the limbs to be generated despite changes in the spatial relationships between the eyes, head, body, and limbs through time. The model learns a vector representation--otherwise known as a parcellated distributed representation--of target vergence with respect to the two eyes, and of the horizontal and vertical spherical angles of the target with respect to a cyclopean egocenter. Such a vergence-spherical representation has been reported in the caudal midbrain and medulla of the frog, as well as in psychophysical movement studies in humans. A head-centered vergence-spherical representation of foveated target position can be generated by two stages of opponent processing that combine corollary discharges of outflow movement signals to the two eyes. Sums and differences of opponent signals define angular and vergence coordinates, respectively. The head-centered representation interacts with a binocular visual representation of non-foveated target position to learn a visuomotor representation of both foveated and non-foveated target position that is capable of commanding yoked eye movements. This head-centered vector representation also interacts with representations of neck movement commands to learn a body-centered estimate of target position that is capable of commanding coordinated arm movements. Learning occurs during head movements made while gaze remains fixed on a foveated target. An initial estimate is stored and a VOR-mediated gating signal prevents the stored estimate from being reset during a gaze-maintaining head movement.
    As the head moves, new estimates are compared with the stored estimate to compute difference vectors, which act as error signals that drive the learning process, as well as control the on-line merging of multimodal information.
    Air Force Office of Scientific Research (F49620-92-J-0499); National Science Foundation (IRI-87-16960, IRI-90-24877); Office of Naval Research (N00014-92-J-1309)
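    The sum-and-difference step described in the abstract (sums of opponent eye-position signals defining angular coordinates, differences defining vergence) can be sketched for the horizontal dimension. The sign conventions and example angles are illustrative assumptions, and the model's full two-stage opponent processing is not reproduced here:

```python
# Horizontal outflow signals to the two eyes (degrees, positive = rightward).
def cyclopean_coordinates(left_eye_deg, right_eye_deg):
    version = (left_eye_deg + right_eye_deg) / 2.0   # sum -> angular coordinate
    vergence = left_eye_deg - right_eye_deg          # difference -> vergence
    return version, vergence

# Both eyes rotated inward on a near target straight ahead:
version, vergence = cyclopean_coordinates(5.0, -5.0)
# version == 0.0 (gaze on the midline); vergence == 10.0 (converged on a near target)
```

    The angular coordinate is referred to the cyclopean origin between the eyes, which is why the average, rather than either eye's position alone, is the natural head-centered azimuth.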