140,252 research outputs found

    Binocular fusion and invariant category learning due to predictive remapping during scanning of a depthful scene with eye movements

    Get PDF
    How does the brain maintain stable fusion of 3D scenes when the eyes move? Every eye movement causes each retinal position to process a different set of scenic features, and thus the brain needs to binocularly fuse new combinations of features at each position after an eye movement. Despite these breaks in retinotopic fusion due to each movement, previously fused representations of a scene in depth often appear stable. The 3D ARTSCAN neural model proposes how the brain does this by unifying concepts about how multiple cortical areas in the What and Where cortical streams interact to coordinate processes of 3D boundary and surface perception, spatial attention, invariant object category learning, predictive remapping, eye movement control, and learned coordinate transformations. The model explains data from single neuron and psychophysical studies of covert visual attention shifts prior to eye movements. The model further clarifies how perceptual, attentional, and cognitive interactions among multiple brain regions (LGN, V1, V2, V3A, V4, MT, MST, PPC, LIP, ITp, ITa, SC) may accomplish predictive remapping as part of the process whereby view-invariant object categories are learned. These results build upon earlier neural models of 3D vision and figure-ground separation and the learning of invariant object categories as the eyes freely scan a scene. A key process concerns how an object's surface representation generates a form-fitting distribution of spatial attention, or attentional shroud, in parietal cortex that helps maintain the stability of multiple perceptual and cognitive processes. Predictive eye movement signals maintain the stability of the shroud, as well as of binocularly fused perceptual boundaries and surface representations. Published version

    Sensorimotor coordination and metastability in a situated HKB model

    Get PDF
    Oscillatory phenomena are ubiquitous in nature and have become particularly relevant for the study of brain and behaviour. One of the simplest, yet explanatorily powerful, models of oscillatory Coordination Dynamics is the Haken–Kelso–Bunz (HKB) model. The metastable regime described by the HKB equation has been hypothesised to be the signature of brain oscillatory dynamics underlying sensorimotor coordination. Despite evidence supporting such a hypothesis, to our knowledge, there are still very few models (if any) where the HKB equation generates spatially situated behaviour and, at the same time, has its dynamics modulated by the behaviour it generates (by means of the sensory feedback resulting from body movement). This work presents a computational model where the HKB equation controls an agent performing a simple gradient climbing task and shows (i) how different metastable dynamical patterns in the HKB equation are generated and sustained by the continuous interaction between the agent and its environment; and (ii) how the emergence of functional metastable patterns in the HKB equation – i.e. patterns that generate gradient climbing behaviour – depends not only on the structure of the agent's sensory input but also on the coordinated coupling of the agent's motor–sensory dynamics. This work contributes to Kelso's theoretical framework and also to the understanding of neural oscillations and sensorimotor coordination
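    The relative-phase dynamics at the heart of this model can be sketched in a few lines. The sketch below is a hypothetical minimal simulation (not the paper's situated agent): it integrates the HKB equation dφ/dt = Δω − a·sin(φ) − 2b·sin(2φ) with forward Euler, using assumed parameter values chosen so that the fixed points have just vanished. In that regime the phase drifts continuously but dwells near the ghost of the former attractor, which is the metastable signature discussed above.

    ```python
    import numpy as np

    def hkb_step(phi, dw, a, b, dt):
        """One forward-Euler step of the HKB relative-phase equation:
        d(phi)/dt = dw - a*sin(phi) - 2*b*sin(2*phi)."""
        return phi + dt * (dw - a * np.sin(phi) - 2.0 * b * np.sin(2.0 * phi))

    def simulate(phi0=0.0, dw=1.35, a=1.0, b=0.25, dt=0.01, steps=5000):
        """Integrate the relative phase; with these (assumed) parameters
        dw slightly exceeds the saddle-node value (~1.30), so no fixed
        point exists and the phase wraps with long dwells near phi = pi/3."""
        phi = phi0
        traj = np.empty(steps)
        for i in range(steps):
            phi = hkb_step(phi, dw, a, b, dt)
            traj[i] = phi
        return traj

    traj = simulate()
    ```

    Plotting `traj` modulo 2π shows the characteristic metastable pattern: long plateaus (near-coordination) separated by rapid phase slips.
    
    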

    Online Discrimination of Nonlinear Dynamics with Switching Differential Equations

    Full text link
    How to recognise whether an observed person walks or runs? We consider a dynamic environment where observations (e.g. the posture of a person) are caused by different dynamic processes (walking or running) which are active one at a time and which may transition from one to another at any time. For this setup, switching dynamic models have been suggested previously, mostly, for linear and nonlinear dynamics in discrete time. Motivated by basic principles of computations in the brain (dynamic, internal models) we suggest a model for switching nonlinear differential equations. The switching process in the model is implemented by a Hopfield network and we use parametric dynamic movement primitives to represent arbitrary rhythmic motions. The model generates observed dynamics by linearly interpolating the primitives weighted by the switching variables and it is constructed such that standard filtering algorithms can be applied. In two experiments with synthetic planar motion and a human motion capture data set we show that inference with the unscented Kalman filter can successfully discriminate several dynamic processes online
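    The generative scheme described above, observations produced by linearly interpolating primitives weighted by switching variables, can be illustrated with a toy sketch. Here sinusoids stand in for the parametric dynamic movement primitives and a hand-set switching schedule stands in for the Hopfield network; both are simplifying assumptions for illustration, not the authors' implementation.

    ```python
    import numpy as np

    def primitive(freq, t):
        """A rhythmic movement primitive: a plain sinusoid standing in
        for a parametric dynamic movement primitive."""
        return np.sin(2.0 * np.pi * freq * t)

    t = np.linspace(0.0, 4.0, 4000)

    # Switching variables: primitive 1 (e.g. "walking") is active for
    # t < 2, primitive 2 (e.g. "running") afterwards; they sum to one.
    s1 = (t < 2.0).astype(float)
    s2 = 1.0 - s1

    # Observed dynamics: linear interpolation of the primitives
    # weighted by the switching variables.
    obs = s1 * primitive(1.0, t) + s2 * primitive(3.0, t)
    ```

    In the full model the switching variables are themselves state variables inferred online (e.g. with an unscented Kalman filter), so the filter recovers which process generated the observation at each time.
    
    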

    Neural Models of Normal and Abnormal Behavior: What Do Schizophrenia, Parkinsonism, Attention Deficit Disorder, and Depression Have in Common?

    Full text link
    Defense Advanced Research Projects Agency and Office of Naval Research (N00014-95-1-0409); National Science Foundation (IRI-97-20333)

    Predict or classify: The deceptive role of time-locking in brain signal classification

    Full text link
    Several experimental studies claim to be able to predict the outcome of simple decisions from brain signals measured before subjects are aware of their decision. Often, these studies use multivariate pattern recognition methods with the underlying assumption that the ability to classify the brain signal is equivalent to predicting the decision itself. Here we show instead that it is possible to correctly classify a signal even if it does not contain any predictive information about the decision. We first define a simple stochastic model that mimics the random decision process between two equivalent alternatives, and generate a large number of independent trials that contain no choice-predictive information. The trials are first time-locked to the time point of the final event and then classified using standard machine-learning techniques. The resulting classification accuracy is above chance level long before the time point of time-locking. We then analyze the same trials using information theory. We demonstrate that the high classification accuracy is a consequence of time-locking and that its time behavior is simply related to the large relaxation time of the process. We conclude that when time-locking is a crucial step in the analysis of neural activity patterns, both the emergence and the timing of the classification accuracy are affected by structural properties of the network that generates the signal. Comment: 23 pages, 5 figures
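    The effect is easy to reproduce with a toy version of such a stochastic model (a minimal sketch with assumed parameters, not the authors' exact model): an unbiased random walk "decides" between two alternatives by hitting a ±threshold, trials are time-locked to the crossing, and the sign of the signal at each pre-crossing offset serves as a trivial classifier. Accuracy rises above chance well before the crossing purely because of the walk's autocorrelation, even though no trial carries choice-predictive information.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def trial(threshold=10.0, sigma=1.0, max_steps=20000):
        """Unbiased random walk until |x| crosses the threshold.
        Returns the trajectory and the sign of the crossed boundary."""
        x, path = 0.0, [0.0]
        for _ in range(max_steps):
            x += sigma * rng.standard_normal()
            path.append(x)
            if abs(x) >= threshold:
                break
        return np.array(path), int(np.sign(x))

    # Generate trials and time-lock them to the crossing (final sample).
    n_trials, lookback = 400, 50
    X = np.empty((n_trials, lookback))
    y = np.empty(n_trials, dtype=int)
    for i in range(n_trials):
        path, choice = trial()
        while len(path) < lookback:          # keep only trials long enough
            path, choice = trial()
        X[i] = path[-lookback:]              # last samples before the crossing
        y[i] = choice

    # Trivial "classifier": the sign of the signal at each time-locked offset.
    acc = [(np.sign(X[:, k]) == y).mean() for k in range(lookback)]
    ```

    `acc` climbs toward 1.0 as the offset approaches the crossing, yet by construction the decision is pure chance; the above-chance accuracy before the "decision" is an artifact of time-locking plus the slow relaxation of the walk.
    
    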

    Attentive Learning of Sequential Handwriting Movements: A Neural Network Model

    Full text link
    Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-1-0409, N00014-92-J-1309); National Science Foundation (IRI-97-20333); National Institutes of Health (I-R29-DC02952-01)

    Sensation and perception

    Get PDF
    One of the oldest and most difficult questions in science is how we are able to develop an awareness of the world around us from our senses. Topics covered under the title 'Sensation and perception' address this very question. Sensation encompasses the processes by which our sense organs (e.g. eyes, ears, etc.) receive information from our environment, whereas perception refers to the processes through which the brain selects, integrates, organises and interprets those sensations. The sorts of questions dealt with by psychologists interested in this area include: 'how does visual information get processed by the brain?', 'how is it that I am able to recognise one face out of many thousands?', and 'what causes visual illusions to occur?'. Within New Zealand there are a number of researchers studying visual perception specifically, and their research interests range from understanding the biological…

    Cortical Networks for Control of Voluntary Arm Movements Under Variable Force Conditions

    Full text link
    A neural model of voluntary movement and proprioception functionally interprets and simulates cell types in movement related areas of primate cortex. The model circuit maintains accurate proprioception while controlling voluntary reaches to spatial targets, exertion of force against obstacles, posture maintenance despite perturbations, compliance with an imposed movement, and static and inertial load compensations. Computer simulations show that model cell properties mimic cell properties in areas 4 and 5. These include delay period activation, response profiles during movement, kinematic and kinetic sensitivities, and latency of activity onset. Model area 4 phasic and tonic cells compute velocity and position commands which activate alpha and gamma motor neurons, thereby shifting the mechanical equilibrium point. Anterior area 5 cells compute limb position using corollary discharges from area 4 and muscle spindle feedback. Posterior area 5 cells use the perceived position and target position signals to compute a desired movement vector. The cortical loop is closed by a volition-gated projection of this movement vector to area 4 phasic cells. Phasic-tonic cells in area 4 incorporate force command components to compensate for static and inertial loads. Predictions are made for both motor and parietal cell types under novel experimental protocols. Office of Naval Research (N00014-92-J-1309, N00014-93-1-1364, N00014-95-1-0409, N00014-92-J-4015); National Science Foundation (IRI-90-24877, IRI-90-00530)

    Self-Organizing Neural Networks for Spatial Planning and Flexible Arm Movement Control

    Full text link
    This talk will survey recent results concerning how the brain self-organizes its planning and control of flexible arm movements to accomplish spatially defined tasks at variable speeds and forces with a redundant arm that may be confronted with obstacles. Recent work from our group on this topic includes the following four themes. Office of Naval Research (N00014-95-1-0657, N00014-95-1-0409)

    How we see

    Get PDF
    The visual world is imaged on the retinas of our eyes. However, "seeing" is not a result of neural functions within the eyes but rather a result of what the brain does with those images. Our visual perceptions are produced by parts of the cerebral cortex dedicated to vision. Although our visual awareness appears unitary, different parts of the cortex analyze color, shape, motion, and depth information. There are also special mechanisms for visual attention, spatial awareness, and the control of actions under visual guidance. Often lesions from stroke or other neurological diseases will impair one of these subsystems, leading to unusual deficits such as the inability to recognize faces, the loss of awareness of half of visual space, or the inability to see motion or color.