
    Self-Organization of Topographic Mixture Networks Using Attentional Feedback

    Full text link
    This paper proposes a biologically-motivated neural network model of supervised learning. The model possesses two novel learning mechanisms. The first is a network for learning topographic mixtures. The network's internal category nodes are the mixture components, which learn to encode smooth distributions in the input space by taking advantage of topography in the input feature maps. The second mechanism is an attentional biasing feedback circuit. When the network makes an incorrect output prediction, this feedback circuit modulates the learning rates of the category nodes, by amounts based on the sharpness of their tuning, in order to improve the network's prediction accuracy. The network is evaluated on several standard classification benchmarks and shown to perform well in comparison to other classifiers. Possible relationships are discussed between the network's learning properties and those of biological neural networks. Possible future extensions of the network are also discussed. Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-1-0409)
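    As an illustration of the attentional biasing idea described in this abstract, the following is a minimal sketch, assuming Gaussian category nodes and an inverse-width rule for the error-driven learning-rate boost; the class and parameter names (AttentiveMixtureNet, base_lr, and so on) are hypothetical and do not reproduce the paper's actual equations.

```python
import numpy as np

# Minimal sketch (not the paper's equations): Gaussian category nodes classify
# an input; after an incorrect prediction, each node's learning rate is scaled
# by a factor that grows with its tuning sharpness (1 / sigma). The scaling
# rule and all names here are illustrative assumptions.
class AttentiveMixtureNet:
    def __init__(self, n_nodes, n_inputs, n_classes, base_lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.mu = rng.normal(size=(n_nodes, n_inputs))       # node centers
        self.sigma = np.ones(n_nodes)                         # tuning widths
        self.labels = rng.integers(n_classes, size=n_nodes)   # class of each node
        self.base_lr = base_lr

    def _activations(self, x):
        d2 = ((x - self.mu) ** 2).sum(axis=1)
        return np.exp(-d2 / (2.0 * self.sigma ** 2))

    def predict(self, x):
        return self.labels[np.argmax(self._activations(x))]

    def train_step(self, x, y):
        a = self._activations(x)
        # Attentional feedback: only an incorrect prediction triggers the boost,
        # and sharply tuned nodes (small sigma) receive a larger boost (assumed).
        boost = 1.0 if self.predict(x) == y else 1.0 + 1.0 / self.sigma
        lr = self.base_lr * boost * a
        toward = np.where(self.labels == y, 1.0, -1.0)        # attract same-class nodes
        self.mu += (lr * toward)[:, None] * (x - self.mu)
```

    A training loop would simply call train_step(x, y) on each labeled example and monitor prediction accuracy.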

    Neural Models of Motion Integration, Segmentation, and Probabilistic Decision-Making

    Full text link
    How do brain mechanisms carry out motion integration and segmentation processes that compute unambiguous global motion percepts from ambiguous local motion signals? Consider, for example, a deer running at variable speeds behind forest cover. The forest cover is an occluder that creates apertures through which fragments of the deer's motion signals are intermittently experienced. The brain coherently groups these fragments into a trackable percept of the deer in its trajectory. Form and motion processes are needed to accomplish this using feedforward and feedback interactions both within and across cortical processing streams. All the cortical areas V1, V2, MT, and MST are involved in these interactions. Figure-ground processes in the form stream through V2, such as the separation of occluding boundaries of the forest cover from the boundaries of the deer, select the motion signals which determine global object motion percepts in the motion stream through MT. Sparse, but unambiguous, feature tracking signals are amplified before they propagate across position and are integrated with far more numerous ambiguous motion signals. Figure-ground and integration processes together determine the global percept. A neural model predicts the processing stages that embody these form and motion interactions. Model concepts and data are summarized about motion grouping across apertures in response to a wide variety of displays, and probabilistic decision making in parietal cortex in response to random dot displays. National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624)
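    To make the feature-tracking idea concrete, here is a toy sketch (not the model's actual circuit) in which a few high-confidence motion estimates propagate across position and capture the far more numerous low-confidence, ambiguous estimates; the confidence values and the smoothing rule are illustrative assumptions.

```python
import numpy as np

# Toy sketch (not the model's circuit): a 1D row of local motion estimates.
# Most positions carry ambiguous (aperture) signals with low confidence; a few
# carry unambiguous feature-tracking signals with high confidence. Repeated
# confidence-weighted smoothing lets the sparse reliable signals capture the
# ambiguous ones. All values here are illustrative assumptions.
def integrate_motion(measured, confidence, n_iter=300, coupling=0.5):
    v = measured.astype(float).copy()
    for _ in range(n_iter):
        neighbor_mean = 0.5 * (np.roll(v, 1) + np.roll(v, -1))
        # Confident positions stay anchored to their measurement; uncertain
        # positions drift toward the estimates propagating from their neighbors.
        v = confidence * measured + (1 - confidence) * (
            (1 - coupling) * v + coupling * neighbor_mean)
    return v

measured = np.full(20, 0.2)          # ambiguous local (aperture) signals
conf = np.full(20, 0.05)
measured[[0, -1]] = 1.0              # unambiguous feature-tracking signals
conf[[0, -1]] = 1.0
print(integrate_motion(measured, conf).round(2))
```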

    Coverage, Continuity and Visual Cortical Architecture

    Get PDF
    The primary visual cortex of many mammals contains a continuous representation of visual space, with a roughly repetitive aperiodic map of orientation preferences superimposed. It was recently found that orientation preference maps (OPMs) obey statistical laws which are apparently invariant among species widely separated in eutherian evolution. Here, we examine whether one of the most prominent models for the optimization of cortical maps, the elastic net (EN) model, can reproduce this common design. The EN model generates representations which optimally trade off stimulus space coverage and map continuity. While this model has been used in numerous studies, no analytical results about the precise layout of the predicted OPMs have been obtained so far. We present a mathematical approach to analytically calculate the cortical representations predicted by the EN model for the joint mapping of stimulus position and orientation. We find that in all previously studied regimes, predicted OPM layouts are perfectly periodic. An unbiased search through the EN parameter space identifies a novel regime of aperiodic OPMs with pinwheel densities lower than found in experiments. In an extreme limit, aperiodic OPMs quantitatively resembling experimental observations emerge. Stabilization of these layouts results from strong nonlocal interactions rather than from a coverage-continuity-compromise. Our results demonstrate that optimization models for stimulus representations dominated by nonlocal suppressive interactions are in principle capable of correctly predicting the common OPM design. They call into question whether visual cortical feature representations can be explained by a coverage-continuity-compromise. Comment: 100 pages, including an Appendix, 21 + 7 figures
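    The coverage-continuity trade-off at the heart of the EN model can be sketched in a few lines. The update below is a generic, textbook-style elastic-net step for a joint (position, orientation) feature vector; the parameter values, grid size, and periodic boundary handling are assumptions for illustration and do not correspond to the parameter regimes or the analytical treatment of the paper.

```python
import numpy as np

# Generic elastic-net (EN) sketch for a joint retinotopy + orientation map.
# Each cortical unit carries a feature vector (x, y, q1, q2), where (q1, q2)
# encodes preferred orientation as a 2D vector. A soft assignment to random
# stimuli drives coverage; a discrete Laplacian term drives map continuity.
# Parameter values and boundary handling are illustrative assumptions.
rng = np.random.default_rng(0)
n = 16
units = np.zeros((n, n, 4))
units[..., 0], units[..., 1] = np.meshgrid(np.linspace(0, 1, n),
                                           np.linspace(0, 1, n), indexing="ij")
units[..., 2:] = 0.05 * rng.normal(size=(n, n, 2))      # weak initial orientation bias
alpha, beta, sigma = 0.2, 0.002, 0.1                     # coverage vs continuity weights

for _ in range(20000):
    theta = rng.uniform(0, np.pi)
    stim = np.array([rng.uniform(), rng.uniform(),
                     0.2 * np.cos(2 * theta), 0.2 * np.sin(2 * theta)])
    d2 = ((units - stim) ** 2).sum(axis=-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    w /= w.sum()                                         # soft assignment (coverage term)
    laplacian = (np.roll(units, 1, 0) + np.roll(units, -1, 0) +
                 np.roll(units, 1, 1) + np.roll(units, -1, 1) - 4 * units)
    units += alpha * w[..., None] * (stim - units) + beta * laplacian   # continuity term

opm = 0.5 * np.arctan2(units[..., 3], units[..., 2])     # predicted orientation map
```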

    Change blindness: eradication of gestalt strategies

    Get PDF
    Arrays of eight, texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task where there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference seen in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored and retrieved from a pre-attentional store during this task

    A Neural Model of Visually Guided Steering, Obstacle Avoidance, and Route Selection

    Full text link
    A neural model is developed to explain how humans can approach a goal object on foot while steering around obstacles to avoid collisions in a cluttered environment. The model uses optic flow from a 3D virtual reality environment to determine the position of objects based on motion discontinuities, and computes heading direction, or the direction of self-motion, from global optic flow. The cortical representation of heading interacts with the representations of a goal and obstacles such that the goal acts as an attractor of heading, while obstacles act as repellers. In addition, the model maintains fixation on the goal object by generating smooth pursuit eye movements. Eye rotations can distort the optic flow field, complicating heading perception, and the model uses extraretinal signals to correct for this distortion and accurately represent heading. The model explains how motion processing mechanisms in cortical areas MT, MST, and VIP can be used to guide steering. The model quantitatively simulates human psychophysical data about visually-guided steering, obstacle avoidance, and route selection. Air Force Office of Scientific Research (F4960-01-1-0397); National Geospatial-Intelligence Agency (NMA201-01-1-2016); National Science Foundation (NSF SBE-0354378); Office of Naval Research (N00014-01-1-0624)
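    The attractor/repeller interaction among heading, goal, and obstacle representations can be illustrated with a simple heading-dynamics sketch in the spirit of behavioral steering models; the gains, the exponential distance falloff, and all names below are assumptions rather than the model's cortical circuitry.

```python
import numpy as np

# Heading-dynamics sketch of the attractor/repeller idea (in the spirit of
# behavioral steering models, not the paper's cortical circuit): heading turns
# toward the goal and away from obstacles, with obstacle influence decaying
# with distance. Gains k_g, k_o and the decay constant are assumed values.
def steer(position, heading, goal, obstacles, k_g=2.0, k_o=4.0, decay=1.5, dt=0.05):
    def bearing(target):
        return np.arctan2(target[1] - position[1], target[0] - position[0])

    d_heading = -k_g * np.sin(heading - bearing(goal))                  # goal attracts heading
    for obs in obstacles:
        dist = np.linalg.norm(obs - position)
        d_heading += k_o * np.sin(heading - bearing(obs)) * np.exp(-decay * dist)  # obstacle repels
    return heading + dt * d_heading

pos, heading = np.array([0.0, 0.0]), 0.0
goal, obstacles = np.array([10.0, 0.0]), [np.array([4.0, 0.3])]
for _ in range(400):
    heading = steer(pos, heading, goal, obstacles)
    pos = pos + 0.05 * np.array([np.cos(heading), np.sin(heading)])
print("final position:", pos.round(2))
```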

    Feature Topography and Sound Intensity Level Encoding in Primary Auditory Cortex

    Get PDF
    The primary auditory cortex (A1) in mammals is one of the first areas in the neocortex that receives auditory related spiking activity from the thalamus. Because the neocortex is implicated in regulating high-level brain phenomena, such as attention and perception, it is important with regard to these high-level behaviors to understand how sounds are represented and transformed by neuronal circuits in this area. The topographic organization of neuronal responses to auditory features in A1 provides evidence for potential mechanisms and functional roles of this neural circuitry. This dissertation presents results from models of topographic organization supporting the notion that if the topographic organization of frequency responses, termed tonotopy or cochleotopy, is aligned along the longest anatomical line segment in A1, as supported by some physiological studies, then it is unlikely that any other topography is mapped monotonically along the orthogonal axis. Thresholds of neuronal responses to sound intensity level represent a particular feature that may have a local, highly periodic topography and that is vital to the sensitivity of the auditory system. The neuronal representation of sound level in A1, particularly as it relates to encoding accuracy, contains a distribution of neurons with varying amounts of inhibition at high sound levels. Neurons with large amounts of this high-level inhibition are described as nonmonotonic or level-tuned. This dissertation presents evidence from single neuron recordings in A1 that neurons exhibiting greater high-level inhibition also exhibit lower neuronal thresholds and that lower thresholds in these nonmonotonic neurons are preserved even when much of the neuronal population is adapted for accurately encoding more intense sounds. Evidence presented in this dissertation also suggests that nonmonotonic neurons have transient responses to time-varying (dynamic) level stimuli that adapt more quickly in response to low-level sounds than those of monotonic neurons. Together, these results imply that under static, steady-state-dynamic and transient-dynamic sound level conditions, nonmonotonic neurons are specialized encoders of less intense sounds that allow the auditory system to maintain sensitivity under a variety of environmental conditions

    Top-down effects on early visual processing in humans: a predictive coding framework

    Get PDF
    An increasing number of human electroencephalography (EEG) studies examining the earliest component of the visual evoked potential, the so-called C1, have cast doubts on the previously prevalent notion that this component is impermeable to top-down effects. This article reviews the original studies that (i) described the C1, (ii) linked it to primary visual cortex (V1) activity, and (iii) suggested that its electrophysiological characteristics are exclusively determined by low-level stimulus attributes, particularly the spatial position of the stimulus within the visual field. We then describe conflicting evidence from animal studies and human neuroimaging experiments and provide an overview of recent EEG and magnetoencephalography (MEG) work showing that initial V1 activity in humans may be strongly modulated by higher-level cognitive factors. Finally, we formulate a theoretical framework for understanding top-down effects on early visual processing in terms of predictive coding
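    The predictive-coding logic the review appeals to can be summarized in a generic Rao-and-Ballard-style sketch: a higher area sends a top-down prediction to V1, and only the residual prediction error drives further processing. The weights, dimensions, and learning rate below are assumed for illustration and do not represent the authors' specific proposal.

```python
import numpy as np

# Generic predictive-coding sketch (Rao & Ballard style, not the authors'
# specific model): a higher area holds a representation r, sends the top-down
# prediction W @ r to an early visual stage, and the early prediction error
# (input minus prediction) drives updates of r until the error is explained away.
rng = np.random.default_rng(1)
W = rng.normal(size=(64, 8))                  # assumed top-down generative weights
x = W @ rng.normal(size=8) + 0.05 * rng.normal(size=64)   # noisy "V1" input
r = np.zeros(8)                               # higher-level representation

lr = 0.005
for _ in range(600):
    prediction = W @ r                        # top-down signal to the early stage
    error = x - prediction                    # early (C1-like) prediction-error activity
    r += lr * W.T @ error                     # higher area updates to explain the input

print("residual early error:", round(float(np.linalg.norm(x - W @ r)), 3))
```

    On this view, a strong or accurate top-down prediction leaves little early error activity, which is one way to interpret cognitive modulation of the earliest visual responses.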

    The cognitive neuroscience of visual working memory

    Get PDF
    Visual working memory allows us to temporarily maintain and manipulate visual information in order to solve a task. The study of the brain mechanisms underlying this function began more than half a century ago, with Scoville and Milner’s (1957) seminal discoveries with amnesic patients. This timely collection of papers brings together diverse perspectives on the cognitive neuroscience of visual working memory from multiple fields that have traditionally been fairly disjointed: human neuroimaging, electrophysiological, behavioural and animal lesion studies, investigating both the developing and the adult brain

    Positive emotion broadens attention focus through decreased position-specific spatial encoding in early visual cortex: evidence from ERPs

    Get PDF
    Recent evidence has suggested that attention selection processes can be modulated not only by stimulus-specific attributes or top-down expectations, but also by the actual mood state of the participant. In this study, we tested the prediction that the induction of positive mood can dynamically influence attention allocation and, in turn, modulate early stimulus sensory processing in primary visual cortex (V1). High-density visual event-related potentials (ERPs) were recorded while participants performed a demanding task at fixation and were presented with peripheral irrelevant visual textures, whose position was systematically varied in the upper visual field (close, medium, or far relative to fixation). Either a neutral or a positive mood was reliably induced and maintained throughout the experimental session. The ERP results showed that the earliest retinotopic component following stimulus onset (C1) strongly varied in topography as a function of the position of the peripheral distractor, in agreement with a near-far spatial gradient. However, this effect was altered for participants in a positive relative to a neutral mood. By contrast, positive mood did not modulate attention allocation for the central (task-relevant) stimuli, as reflected by the P300 component. We ran a control behavioral experiment confirming that positive emotion selectively impaired attention allocation to the peripheral distractors. These results suggest a mood-dependent tuning of position-specific encoding in V1 rapidly following stimulus onset. We discuss these results in relation to the dominant broaden-and-build theory

