
    Temporal Dynamics of Decision-Making during Motion Perception in the Visual Cortex

    How does the brain make decisions? Speed and accuracy of perceptual decisions covary with certainty in the input, and correlate with the rate of evidence accumulation in parietal and frontal cortical "decision neurons." A biophysically realistic model of interactions within and between Retina/LGN and cortical areas V1, MT, MST, and LIP, gated by basal ganglia, simulates dynamic properties of decision-making in response to the ambiguous visual motion stimuli used by Newsome, Shadlen, and colleagues in their neurophysiological experiments. The model clarifies how brain circuits that solve the aperture problem interact with a recurrent competitive network with self-normalizing choice properties to carry out probabilistic decisions in real time. Some scientists claim that perception and decision-making can be described using Bayesian inference or related general statistical ideas that estimate the optimal interpretation of the stimulus given priors and likelihoods. However, such concepts do not propose the neocortical mechanisms that enable perception and decision-making. The present model explains behavioral and neurophysiological decision-making data without an appeal to Bayesian concepts and, unlike other existing models of these data, generates perceptual representations and choice dynamics in response to the experimental visual stimuli. Quantitative model simulations include the time course of LIP neuronal dynamics, as well as behavioral accuracy and reaction time properties, during both correct and error trials at different levels of input ambiguity in both fixed-duration and reaction-time tasks. Model MT/MST interactions compute the global direction of random dot motion stimuli, while model LIP computes the stochastic perceptual decision that leads to a saccadic eye movement.

    National Science Foundation (SBE-0354378, IIS-02-05271); Office of Naval Research (N00014-01-1-0624); National Institutes of Health (R01-DC-02852)
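
    The paper's model is biophysically detailed, but the behavioral regularity it addresses (faster, more accurate choices for less ambiguous motion) is often summarized by a bounded evidence-accumulation process. The sketch below is a generic drift-diffusion simulation, not the model described above; the coherence levels, drift gain, noise, and threshold are illustrative assumptions.

        import numpy as np

        def drift_diffusion_trial(coherence, drift_gain=1.0, noise_sd=1.0,
                                  threshold=1.0, dt=1e-3, max_t=2.0, rng=None):
            """Simulate one trial: noisy evidence accumulates toward
            +threshold (correct) or -threshold (error); timeouts count
            as errors in this sketch."""
            rng = rng or np.random.default_rng()
            x, t = 0.0, 0.0
            drift = drift_gain * coherence  # stronger motion signal -> faster accumulation
            while abs(x) < threshold and t < max_t:
                x += drift * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
                t += dt
            return t, x >= threshold  # reaction time, correctness

        # Accuracy rises and reaction time falls as motion coherence increases.
        rng = np.random.default_rng(0)
        for coh in (0.05, 0.2, 0.5):
            rts, correct = zip(*(drift_diffusion_trial(coh, rng=rng) for _ in range(500)))
            print(f"coherence={coh:.2f}  acc={np.mean(correct):.2f}  mean RT={np.mean(rts):.3f}s")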

    Visual motion processing and human tracking behavior

    The accurate visual tracking of a moving object is a fundamental human skill that reduces the relative slip and instability of the object's image on the retina, thus granting stable, high-quality vision. In order to optimize tracking performance across time, a quick estimate of the object's global motion properties needs to be fed to the oculomotor system and dynamically updated. Concurrently, performance can be greatly improved in terms of latency and accuracy by taking into account predictive cues, especially under variable conditions of visibility and in the presence of ambiguous retinal information. Here, we review several recent studies focusing on the integration of retinal and extra-retinal information for the control of human smooth pursuit. By dynamically probing tracking performance with well-established paradigms from the visual perception and oculomotor literature, we provide the basis to test theoretical hypotheses within the framework of dynamic probabilistic inference. In particular, we present applications of these results in light of state-of-the-art computer vision algorithms.
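
    The review frames pursuit control as dynamic probabilistic inference over retinal and extra-retinal signals. As a minimal sketch of that idea (not a model proposed in the review), the one-dimensional Kalman filter below fuses noisy retinal-slip measurements with an internal prediction of target velocity; the noise parameters q and r are illustrative assumptions.

        import numpy as np

        def kalman_velocity_estimate(retinal_slip, q=0.01, r=0.5):
            """1-D Kalman filter tracking target velocity from noisy
            retinal-slip samples. q = process noise (how fast the target
            may change), r = measurement noise (retinal reliability)."""
            v_hat, p = 0.0, 1.0          # prior belief: estimate and its variance
            estimates = []
            for z in retinal_slip:
                p += q                   # predict: uncertainty grows between samples
                k = p / (p + r)          # gain: weighs measurement by reliability
                v_hat += k * (z - v_hat)
                p *= 1.0 - k
                estimates.append(v_hat)
            return estimates

        # Noisy measurements of a target moving at 10 deg/s.
        rng = np.random.default_rng(1)
        slip = 10.0 + rng.normal(0.0, 2.0, size=50)
        print(kalman_velocity_estimate(slip)[-1])  # converges near 10

    Raising r mimics poor visibility: the filter then leans on its internal prediction rather than the retinal signal, which is qualitatively the retinal/extra-retinal trade-off the reviewed studies probe.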

    Neural Models of Motion Integration, Segmentation, and Probabilistic Decision-Making

    How do brain mechanisms carry out motion integration and segmentation processes that compute unambiguous global motion percepts from ambiguous local motion signals? Consider, for example, a deer running at variable speeds behind forest cover. The forest cover is an occluder that creates apertures through which fragments of the deer's motion signals are intermittently experienced. The brain coherently groups these fragments into a trackable percept of the deer along its trajectory. Form and motion processes are needed to accomplish this, using feedforward and feedback interactions both within and across cortical processing streams. All the cortical areas V1, V2, MT, and MST are involved in these interactions. Figure-ground processes in the form stream through V2, such as the separation of occluding boundaries of the forest cover from the boundaries of the deer, select the motion signals which determine global object motion percepts in the motion stream through MT. Sparse, but unambiguous, feature tracking signals are amplified before they propagate across position and are integrated with far more numerous ambiguous motion signals. Figure-ground and integration processes together determine the global percept. A neural model predicts the processing stages that embody these form and motion interactions. Model concepts and data are summarized about motion grouping across apertures in response to a wide variety of displays, and probabilistic decision-making in parietal cortex in response to random dot displays.

    National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624)
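
    The aperture problem has a standard geometric statement: a local detector sees only the velocity component normal to a contour, so each aperture constrains the true velocity to a line. The intersection-of-constraints computation sketched below recovers a global velocity by least squares; it is a textbook calculation, not the form-motion neural circuitry the model proposes.

        import numpy as np

        def intersection_of_constraints(normals, speeds):
            """Recover a global 2-D velocity v from local measurements.
            Aperture i contributes the constraint n_i . v = s_i; stacking
            the constraints and solving by least squares intersects them."""
            A = np.asarray(normals, dtype=float)   # (k, 2) unit normal directions
            b = np.asarray(speeds, dtype=float)    # (k,)  measured normal speeds
            v, *_ = np.linalg.lstsq(A, b, rcond=None)
            return v

        # Two apertures viewing contours of an object moving at (3, 1):
        true_v = np.array([3.0, 1.0])
        normals = [[1.0, 0.0], [0.0, 1.0]]         # differently oriented contours
        speeds = [np.dot(n, true_v) for n in normals]
        print(intersection_of_constraints(normals, speeds))  # -> [3. 1.]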

    Inside the brain of an elite athlete: The neural processes that support high achievement in sports

    Events like the World Championships in athletics and the Olympic Games raise the public profile of competitive sports. They may also leave us wondering what sets the competitors in these events apart from those of us who simply watch. Here we attempt to link neural and cognitive processes that have been found to be important for elite performance with computational and physiological theories inspired by much simpler laboratory tasks. In this way we hope to inspire neuroscientists to consider how their basic research might help to explain sporting skill at the highest levels of performance.

    Cortical Dynamics of Contextually-Cued Attentive Visual Learning and Search: Spatial and Object Evidence Accumulation

    How do humans use predictive contextual information to facilitate visual search? How are consistently paired scenic objects and positions learned and used to more efficiently guide search in familiar scenes? For example, a certain combination of objects can define a context for a kitchen and trigger a more efficient search for a typical object, such as a sink, in that context. A neural model, ARTSCENE Search, is developed to illustrate the neural mechanisms of such memory-based contextual learning and guidance, and to explain challenging behavioral data on positive/negative, spatial/object, and local/distant cueing effects during visual search. The model proposes how global scene layout at a first glance rapidly forms a hypothesis about the target location. This hypothesis is then incrementally refined by enhancing target-like objects in space as a scene is scanned with saccadic eye movements. The model clarifies the functional roles of neuroanatomical, neurophysiological, and neuroimaging data in visual search for a desired goal object. In particular, the model simulates the interactive dynamics of spatial and object contextual cueing in the cortical What and Where streams, starting from early visual areas through the medial temporal lobe to prefrontal cortex. After learning, model dorsolateral prefrontal cortical cells (area 46) prime possible target locations in posterior parietal cortex based on goal-modulated percepts of spatial scene gist represented in parahippocampal cortex, whereas model ventral prefrontal cortical cells (area 47/12) prime possible target object representations in inferior temporal cortex based on the history of viewed objects represented in perirhinal cortex. The model hereby predicts how the cortical What and Where streams cooperate during scene perception, learning, and memory to accumulate evidence over time to drive efficient visual search of familiar scenes.

    CELEST, an NSF Science of Learning Center (SBE-0354378); SyNAPSE program of the Defense Advanced Research Projects Agency (HR0011-09-3-0001, HR0011-09-C-0011)
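
    As a schematic illustration of the cueing dynamic (not the ARTSCENE Search circuit itself), the sketch below starts from a hypothetical gist-based prior over candidate target locations and multiplies in object evidence from successive fixations, renormalizing each time; all numbers are made up.

        import numpy as np

        def refine_hypothesis(prior, fixation_evidence):
            """Schematic sequential refinement: a gist-based prior over
            candidate locations is multiplied by per-fixation object
            evidence and renormalized, sharpening the search hypothesis."""
            belief = np.asarray(prior, dtype=float)
            for evidence in fixation_evidence:
                belief *= np.asarray(evidence, dtype=float)  # enhance target-like objects
                belief /= belief.sum()
            return belief

        # Four candidate locations; gist suggests the sink is at location 2.
        prior = [0.1, 0.6, 0.2, 0.1]
        fixations = [[0.2, 0.8, 0.6, 0.1],   # evidence from a first scan
                     [0.1, 0.9, 0.3, 0.1]]   # a second look confirms location 2
        print(refine_hypothesis(prior, fixations))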

    Spatial Attention, Precision, and Bayesian Inference: A Study of Saccadic Response Speed

    Inferring the environment's statistical structure and adapting behavior accordingly is a fundamental modus operandi of the brain. A simple form of this faculty based on spatial attentional orienting can be studied with Posner's location-cueing paradigm, in which a cue indicates the target location with a known probability. The present study focuses on a more complex version of this task, where probabilistic context (percentage of cue validity) changes unpredictably over time, thereby creating a volatile environment. Saccadic response speed (RS) was recorded in 15 subjects and used to estimate subject-specific parameters of a Bayesian learning scheme modeling the subjects' trial-by-trial updates of beliefs. Different response models—specifying how computational states translate into observable behavior—were compared using Bayesian model selection. Saccadic RS was most plausibly explained as a function of the precision of the belief about the causes of sensory input. This finding is in accordance with current Bayesian theories of brain function, and specifically with the proposal that spatial attention is mediated by a precision-dependent gain modulation of sensory input. Our results provide empirical support for precision-dependent changes in beliefs about saccade target locations and motivate future neuroimaging and neuropharmacological studies of how Bayesian inference may determine spatial attention.
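
    As a simplified stand-in for the learning scheme used in the study (which additionally tracks volatility in a hierarchical model), the sketch below updates a Beta-distributed belief about cue validity trial by trial and reads out its precision, the quantity that best explained saccadic RS here; the linear response-model coefficients are illustrative assumptions.

        def beta_precision_trace(cue_valid, a=1.0, b=1.0):
            """Trial-by-trial Beta-Bernoulli belief about cue validity.
            Returns the precision (inverse variance) of the belief after
            each trial. Unlike the study's hierarchical learner, this
            fixed-rate update ignores environmental volatility."""
            precisions = []
            for valid in cue_valid:
                a, b = a + valid, b + (1 - valid)
                var = (a * b) / ((a + b) ** 2 * (a + b + 1))  # Beta variance
                precisions.append(1.0 / var)
            return precisions

        # Hypothetical response model: higher precision -> faster saccades.
        cue_valid = [1, 1, 0, 1, 1, 1, 0, 1]
        for t, pi in enumerate(beta_precision_trace(cue_valid), start=1):
            rs = 2.0 + 0.01 * pi      # illustrative linear mapping to response speed
            print(f"trial {t}: precision={pi:6.1f}  predicted RS={rs:.2f}")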

    Eye movements and the maximization of value

    Only the central region of the retina, the fovea, can provide us with high-acuity details of our visual environment. In the periphery, however, resolution fades with increasing eccentricity. As a consequence, humans and other animals with a foveated visual system move their eyes to redirect their gaze towards objects of interest, and with each saccadic eye movement we choose a different region of the visual field for high-acuity processing. In recent decades, the eye movement system has thus become a model system for studying decision-making (Glimcher, 2003), not least because the oculomotor system is sensitive to valuation processes. Moreover, our eye movements are tightly linked to visual perception, because where we look determines what we see, and every eye movement poses a major challenge to the visual system as it shifts the whole visual image on the retina. In three studies, this dissertation project examined whether the eye movement system can adjust saccade latencies to maximize informational and motivational value, and whether the visual system can maximize the information available despite eye movements.

    The first study investigated whether the eye movement system is sensitive to the information that can be gained by executing an eye movement. Participants saccaded to a peripherally appearing target and performed a perceptual task. By exchanging the target while the saccade was in flight, we could independently manipulate pre-saccadic peripheral and post-saccadic foveal visibility, and thus create conditions where participants either lost or gained information by making an eye movement. In the loss condition, the probability of correctly identifying the target increased with saccade latency, because participants could benefit longer from high-resolution peripheral vision. The opposite pattern was observed in the gain condition. However, eye movement latencies did not differ whether participants could gain or lose information, and thus participants could not maximize the information available. Instead, latencies decreased with the probability that visual information at the saccade target was task-relevant, suggesting that saccadic eye movements are influenced by the motivation to foveate task-relevant information, but not by the information that can be gained by saccade execution.

    In Study II, we tested whether the visual system is able to integrate pre-saccadic peripheral and post-saccadic foveal information, and whether it weighs the incoming visual information according to its reliability, that is, according to how well something can be seen. This optimal integration would minimize perceptual uncertainty and thus maximize the information available to the visual system. For every individual, we separately measured discrimination performance in the fovea and the periphery. Using maximum-likelihood integration (Ernst & Bülthoff, 2004), we predicted the optimal weight given to peripheral information as well as the optimal uncertainty associated with the trans-saccadic percept. Both in terms of weighting and uncertainty, trans-saccadic performance was not distinguishable from optimality. We could thus show that the visual system integrates information across saccades, and that it is close to optimal in doing so. This highlights that the visual system can maximize the visual information available despite eye movements.

    Study III investigated whether the influence of expected motivational value on saccades (Milstein & Dorris, 2007, 2011) can only be found in contexts where participants additionally have to choose between multiple rewarded targets. We recorded saccade latencies to rewarded targets differing in reward magnitude and manipulated the proportion of interleaved choices within one block. In choice trials, two targets were displayed and participants could choose between the two to obtain the corresponding reward. Without choices present, we found no evidence that single-target saccades were affected by reward. When choices were interleaved, latencies to less rewarded targets were delayed, and the magnitude of this delay increased with the proportion of choices. This delay was elicited by the expectation of an upcoming choice trial as well as by inter-trial priming: after a choice, saccadic reactions to the non-chosen target were delayed. We could thus show that there is no direct relationship between expected motivational value and saccade latencies; rather, this relationship emerges only in contexts where humans can maximize their reward outcome by preferring one target over the other.

    In sum, the present dissertation shows that there is no direct relationship between saccade latencies on the one hand and motivational value (Study III) or informational value (Study I) on the other. Instead, saccade latencies are sensitive to the probability that information acquired at the saccade target becomes task-relevant (Study I) and to the preference of one target over the other (Study III). For perception, we could show that the visual system optimally integrates information across saccades, and thus that vision does not correspond to disconnected snapshots, but rather to an integrated stream of continuous information (Study II).
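
    Study II's optimality benchmark, maximum-likelihood integration, has a compact closed form: each cue is weighted by its precision (inverse variance), and the combined estimate is more precise than either cue alone. The sketch below implements that standard computation; the peripheral and foveal values are illustrative, not the dissertation's measured data.

        def mle_integration(mu_p, sigma_p, mu_f, sigma_f):
            """Reliability-weighted cue combination (Ernst & Buelthoff style):
            weights are proportional to precision 1/sigma^2, and the fused
            estimate has lower variance than either cue alone."""
            j_p, j_f = 1.0 / sigma_p**2, 1.0 / sigma_f**2   # precisions
            w_p = j_p / (j_p + j_f)                          # optimal peripheral weight
            mu = w_p * mu_p + (1.0 - w_p) * mu_f
            sigma = (j_p + j_f) ** -0.5
            return mu, w_p, sigma

        # A blurry peripheral preview and a sharper post-saccadic foveal view:
        mu, w_p, sigma = mle_integration(mu_p=1.2, sigma_p=2.0, mu_f=0.8, sigma_f=1.0)
        print(f"combined={mu:.2f}, peripheral weight={w_p:.2f}, sd={sigma:.2f}")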