46,803 research outputs found

    A Neural Model of First-order and Second-order Motion Perception and Magnocellular Dynamics

    Full text link
    A neural model of motion perception simulates psychophysical data concerning first-order and second-order motion stimuli, including the reversal of perceived motion direction with distance from the stimulus (I display), and data about directional judgments as a function of relative spatial phase or spatial and temporal frequency. Many other second-order motion percepts that have been ascribed to a second non-Fourier processing stream can also be explained in the model by interactions between ON and OFF cells within a single, neurobiologically interpreted magnocellular processing stream. Yet other percepts may be traced to interactions between form and motion processing streams, rather than to processing within multiple motion processing streams. The model hereby explains why monkeys with lesions of the parvocellular layers, but not the magnocellular layers, of the lateral geniculate nucleus (LGN) are capable of detecting the correct direction of second-order motion, why most cells in area MT are sensitive to both first-order and second-order motion, and why, after an APB injection selectively blocks retinal ON bipolar cells, cortical cells are sensitive only to the motion of a moving bright bar's trailing edge. Magnocellular LGN cells show relatively transient responses while parvocellular LGN cells show relatively sustained responses. Correspondingly, the model bases its directional estimates on the outputs of model ON and OFF transient cells that are organized in opponent circuits wherein antagonistic rebounds occur in response to stimulus offset. Center-surround interactions convert these ON and OFF outputs into responses of lightening and darkening cells that are sensitive both to direct inputs and to rebound responses in their receptive field centers and surrounds. The total pattern of activity increments and decrements is used by subsequent processing stages (spatially short-range filters, competitive interactions, spatially long-range filters, and directional grouping cells) to determine the perceived direction of motion.
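    The opponent rebound mechanism invoked here can be illustrated with a small simulation. The sketch below is a minimal construction of my own, not the paper's equations: a "gated dipole" opponent circuit in which habituating transmitter gates (z_on, z_off) cause an antagonistic rebound in the OFF channel when an input that excited the ON channel switches off. All parameter values are illustrative assumptions.

        # Minimal gated-dipole sketch: ON/OFF opponent channels with
        # habituating transmitter gates. Parameters are illustrative
        # assumptions, not values from the paper.
        import numpy as np

        def gated_dipole(stim, dt=0.01, arousal=1.0, A=5.0, B=1.0, eps=0.2):
            """ON/OFF opponent responses to a time-varying stimulus.

            stim    : 1-D array driving the ON channel
            arousal : tonic input shared by both channels
            A, B    : transmitter recovery rate and inactivation gain
            eps     : slow time scale of transmitter habituation
            """
            z_on = z_off = 1.0                    # transmitter gates start full
            on_out, off_out = [], []
            for s in stim:
                j_on, j_off = arousal + s, arousal
                # transmitter recovers toward 1 and habituates with gated flow
                z_on  += dt * eps * (A * (1.0 - z_on)  - B * j_on  * z_on)
                z_off += dt * eps * (A * (1.0 - z_off) - B * j_off * z_off)
                # opponent subtraction of the two gated signals
                on_out.append(max(0.0, j_on * z_on - j_off * z_off))
                off_out.append(max(0.0, j_off * z_off - j_on * z_on))
            return np.array(on_out), np.array(off_out)

        # Stimulus on for 300 steps, then off: the ON channel responds while
        # the stimulus is present; at offset the OFF channel rebounds because
        # its transmitter habituated less.
        stim = np.concatenate([np.ones(300), np.zeros(300)])
        on, off = gated_dipole(stim)
        print("peak ON response during stimulus:", round(on[:300].max(), 3))
        print("peak OFF rebound after offset  :", round(off[300:].max(), 3))

    Because the ON gate habituates more during stimulation, the fresher OFF gate wins the opponent subtraction at offset; responses of this kind are what the model's lightening and darkening cells read in their receptive field centers and surrounds.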

    Visual Aftereffect Of Texture Density Contingent On Color Of Frame

    Get PDF
    An aftereffect of perceived texture density contingent on the color of a surrounding region is reported. In a series of experiments, participants were adapted, with fixation, to stimuli in which the relative density of two achromatic texture regions was perfectly correlated with the color presented in a surrounding region. Following adaptation, the perceived relative density of the two regions was contingent on the color of the surrounding region or of the texture elements themselves. For example, if high density on the left was correlated with a blue surround during adaptation (and high density on the right with a yellow surround), then in order for the left and right textures to appear equal in the assessment phase, denser texture was required on the left in the presence of a blue surround (and denser texture on the right in the context of a yellow surround). Contingent aftereffects were found (1) with black-and-white scatter-dot textures, (2) with luminance-balanced textures, and (3) when the texture elements, rather than the surrounds, were colored during assessment. Effect size was decreased when the elements themselves were colored, but also when spatial subportions of the surround were used for the presentation of color. The effect may be mediated by retinal color spreading (Pöppel, 1986) and appears consistent with a local associative account of contingent aftereffects, such as Barlow's (1990) model of modifiable inhibition.
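    As a rough intuition for the local associative account mentioned at the end, here is a toy sketch of my own (made-up units and learning rate, not the paper's analysis or Barlow's actual equations) in which an inhibitory weight from a color-context unit onto a density unit grows with their correlated activity during adaptation, so that the color alone later suppresses perceived density:

        # Toy modifiable-inhibition sketch (hypothetical units and values):
        # correlated firing during adaptation strengthens inhibition from a
        # "blue surround" unit onto a "left-field density" unit.
        density_left = 1.0   # density-detector activity during adaptation
        blue = 1.0           # color-context activity during adaptation
        w_inhib, lr = 0.0, 0.002

        # Adaptation: high left density always co-occurs with a blue surround.
        for _ in range(100):
            w_inhib += lr * density_left * blue   # inhibition tracks correlation

        # Test: physically equal densities with a blue surround. The learned
        # inhibition lowers the left response, so denser texture is needed on
        # the left to null the percept -- a color-contingent aftereffect.
        perceived_left = max(0.0, 1.0 - w_inhib * blue)
        print("learned inhibitory weight:", round(w_inhib, 2))
        print("perceived left density under blue:", round(perceived_left, 2))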

    Neural Dynamics of Motion Perception: Direction Fields, Apertures, and Resonant Grouping

    Full text link
    A neural network model of global motion segmentation by visual cortex is described. Called the Motion Boundary Contour System (BCS), the model clarifies how ambiguous local movements on a complex moving shape are actively reorganized into a coherent global motion signal. Unlike many previous researchers, we analyse how a coherent motion signal is imparted to all regions of a moving figure, not only to regions at which unambiguous motion signals exist. The model hereby suggests a solution to the global aperture problem. The Motion BCS describes how preprocessing of motion signals by a Motion Oriented Contrast Filter (MOC Filter) is joined to long-range cooperative grouping mechanisms in a Motion Cooperative-Competitive Loop (MOCC Loop) to control phenomena such as motion capture. The Motion BCS is computed in parallel with the Static BCS of Grossberg and Mingolla (1985a, 1985b, 1987). Homologous properties of the Motion BCS and the Static BCS, specialized to process movement directions and static orientations, respectively, support a unified explanation of many data about static form perception and motion form perception that have heretofore been unexplained or treated separately. Predictions are made about microscopic computational differences between the parallel cortical streams V1 → MT and V1 → V2 → MT, notably the magnocellular thick stripe and parvocellular interstripe streams. It is shown how the Motion BCS can compute motion directions that may be synthesized from multiple orientations with opposite directions-of-contrast. Interactions of model simple cells, complex cells, hypercomplex cells, and bipole cells are described, with special emphasis given to new functional roles in direction disambiguation for endstopping at multiple processing stages and to the dynamic interplay of spatially short-range and long-range interactions. Air Force Office of Scientific Research (90-0175); Defense Advanced Research Projects Agency (90-0083); Office of Naval Research (N00014-91-J-4100)
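    The motion capture behavior that the MOCC Loop is said to control can be caricatured in a few lines. The sketch below is a schematic stand-in of my own, not the MOC Filter/MOCC Loop equations: interior cells along a diagonal bar report only the contour-normal direction (the aperture problem), line endings carry the true direction, and iterated long-range vector averaging lets the unambiguous end signals capture the interior. The confidence weights, neighborhood size, and iteration count are invented.

        # Schematic "motion capture" sketch (invented parameters): pinned,
        # high-confidence feature-tracking signals at line endings propagate
        # over ambiguous interior direction estimates via cooperative
        # neighborhood averaging of direction vectors.
        import numpy as np

        true_dir = np.deg2rad(0.0)      # object moves rightward
        normal_dir = np.deg2rad(45.0)   # interior cells see the contour normal

        n = 21
        dirs = np.full(n, normal_dir)   # ambiguous interior estimates
        dirs[0] = dirs[-1] = true_dir   # unambiguous line-ending signals
        conf = np.ones(n)
        conf[0] = conf[-1] = 5.0        # endings weighted as more certain

        kernel = np.ones(7) / 7.0       # +/- 3-cell cooperative neighborhood
        for _ in range(50):
            vx, vy = conf * np.cos(dirs), conf * np.sin(dirs)
            sx = np.convolve(vx, kernel, mode="same")
            sy = np.convolve(vy, kernel, mode="same")
            dirs = np.arctan2(sy, sx)            # direction of pooled vector
            dirs[0] = dirs[-1] = true_dir        # endings stay pinned

        print("central direction estimates (deg):",
              np.round(np.rad2deg(dirs[8:13]), 2))

    After grouping, the interior estimates have rotated from the 45° contour normal toward the true 0° direction, which is the sense in which a coherent signal is imparted to regions lacking unambiguous motion.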

    ARTSTREAM: A Neural Network Model of Auditory Scene Analysis and Source Segregation

    Full text link
    Multiple sound sources often contain harmonics that overlap and may be degraded by environmental noise. The auditory system is capable of teasing apart these sources into distinct mental objects, or streams. Such an "auditory scene analysis" enables the brain to solve the cocktail party problem. A neural network model of auditory scene analysis, called the ARTSTREAM model, is presented to propose how the brain accomplishes this feat. The model clarifies how the frequency components that correspond to a given acoustic source may be coherently grouped together into distinct streams based on pitch and spatial cues. The model also clarifies how multiple streams may be distinguished and separated by the brain. Streams are formed as spectral-pitch resonances that emerge through feedback interactions between frequency-specific spectral representations of a sound source and its pitch. First, the model transforms a sound into a spatial pattern of frequency-specific activation across a spectral stream layer. The sound has multiple parallel representations at this layer. A sound's spectral representation activates a bottom-up filter that is sensitive to harmonics of the sound's pitch. The filter activates a pitch category which, in turn, activates a top-down expectation that allows one voice or instrument to be tracked through a noisy multiple-source environment. Spectral components are suppressed if they do not match harmonics of the top-down expectation that is read out by the selected pitch, thereby allowing another stream to capture these components, as in the "old-plus-new heuristic" of Bregman. Multiple simultaneously occurring spectral-pitch resonances can hereby emerge. These resonance and matching mechanisms are specialized versions of Adaptive Resonance Theory, or ART, which clarifies how pitch representations can self-organize during learning of harmonic bottom-up filters and top-down expectations. The model also clarifies how spatial location cues can help to disambiguate two sources with similar spectral cues. Data are simulated from psychophysical grouping experiments, such as how a tone sweeping upwards in frequency creates a bounce percept by grouping with a downward sweeping tone due to proximity in frequency, even if noise replaces the tones at their intersection point. Illusory auditory percepts are also simulated, such as the auditory continuity illusion of a tone continuing through a noise burst even if the tone is not present during the noise, and the scale illusion of Deutsch whereby downward and upward scales presented alternately to the two ears are regrouped based on frequency proximity, leading to a bounce percept. Since related sorts of resonances have been used to quantitatively simulate psychophysical data about speech perception, the model strengthens the hypothesis that ART-like mechanisms are used at multiple levels of the auditory system. Proposals for developing the model to explain more complex streaming data are also provided. Air Force Office of Scientific Research (F49620-01-1-0397, F49620-92-J-0225); Office of Naval Research (N00014-01-1-0624); Advanced Research Projects Agency (N00014-92-J-4015); British Petroleum (89A-1204); National Science Foundation (IRI-90-00530); American Society of Engineering Education
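    A minimal sketch of the bottom-up/top-down pitch interaction may help fix ideas. It is not the model's resonance dynamics; it merely caricatures a harmonic sieve that votes for a pitch category and a top-down harmonic expectation that keeps matching spectral components while releasing the rest to seed a second stream. The frequencies, tolerance, and candidate pitch range are all invented.

        # Caricature of spectral-pitch grouping (invented numbers): a harmonic
        # sieve selects a pitch category; components matching its harmonics
        # form one stream, and mismatched components are released for another.
        import numpy as np

        freqs = np.array([100.0, 200.0, 300.0, 150.0, 450.0, 750.0])  # Hz
        # Candidate range starts above 80 Hz so a deep subharmonic cannot
        # trivially claim every component.
        candidates = np.arange(80.0, 400.0, 10.0)

        def harmonic_match(f0, freqs, tol=0.02):
            """Boolean mask: components lying near integer multiples of f0."""
            ratios = freqs / f0
            nearest = np.round(ratios).clip(min=1)
            return np.abs(ratios / nearest - 1.0) < tol

        votes = np.array([harmonic_match(f0, freqs).sum() for f0 in candidates])
        f0 = candidates[np.argmax(votes)]       # winning pitch category

        keep = harmonic_match(f0, freqs)        # top-down expectation match
        print("selected pitch (Hz):", f0)
        print("stream 1 components:", freqs[keep])
        print("released for a second stream:", freqs[~keep])

    Note that the 300 Hz component, a harmonic of both implied sources, is simply claimed by the winning pitch here; in the full model, competing spectral-pitch resonances determine how such shared components are allocated.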

    How Is a Moving Target Continuously Tracked Behind Occluding Cover?

    Full text link
    Office of Naval Research (N00014-95-1-0657, N00014-95-1-0409)

    Neural Dynamics of Phonetic Trading Relations for Variable-Rate CV Syllables

    Full text link
    The perception of CV syllables exhibits a trading relationship between voice onset time (VOT) of a consonant and duration of a vowel. Percepts of [ba] and [wa] can, for example, depend on the durations of the consonant and vowel segments, with an increase in the duration of the subsequent vowel switching the percept of the preceding consonant from [w] to [b]. A neural model, called PHONET, is proposed to account for these findings. In the model, C and V inputs are filtered by parallel auditory streams that respond preferentially to transient and sustained properties of the acoustic signal, as in vision. These streams are represented by working memories that adjust their processing rates to cope with variable acoustic input rates. More rapid transient inputs can cause greater activation of the transient stream which, in turn, can automatically gain-control the processing rate in the sustained stream. An invariant percept obtains when the relative activations of C and V representations in the two streams remain unchanged. The trading relation may be simulated as a result of how different experimental manipulations affect this ratio. It is suggested that the brain can use the duration of a subsequent vowel to make the [b]/[w] distinction because the speech code is a resonant event that emerges between working memory activation patterns and the nodes that categorize them. Advanced Research Projects Agency (90-0083); Air Force Office of Scientific Research (F49620-92-J-0225); Pacific Sierra Research Corporation (91-6075-2)
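    The rate-invariant ratio idea lends itself to a toy illustration. The sketch below uses invented numbers and a hypothetical decision criterion, not PHONET's equations: the transient stream's response falls with transition duration, the sustained stream supplies a vowel-based rate estimate, and the category depends on their ratio, so lengthening only the vowel flips [w] to [b] while uniformly slowing the whole syllable leaves the percept unchanged.

        # Toy trading-relation sketch (hypothetical function and criterion,
        # not PHONET's dynamics): the [b]/[w] decision depends on transition
        # speed relative to a vowel-derived rate estimate.
        def cv_percept(transition_ms, vowel_ms, criterion=4.0):
            transient = 100.0 / transition_ms   # abrupt onsets drive this stream
            rate_est  = 100.0 / vowel_ms        # shorter vowel -> faster rate
            ratio = transient / rate_est        # rate-normalized transition speed
            return ("[b]" if ratio > criterion else "[w]"), round(ratio, 1)

        print(cv_percept(40, 80))    # ('[w]', 2.0): transition slow for this rate
        print(cv_percept(40, 200))   # ('[b]', 5.0): longer vowel alone flips it
        print(cv_percept(80, 400))   # ('[b]', 5.0): doubling both preserves it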

    Infant cortex responds to other humans from shortly after birth

    Get PDF
    A significant feature of the adult human brain is its ability to selectively process information about conspecifics. Much debate has centred on whether this specialization is primarily a result of phylogenetic adaptation, or whether the brain acquires expertise in processing social stimuli as a result of its being born into an intensely social environment. Here we study the haemodynamic response in cortical areas of newborns (1–5 days old) while they passively viewed dynamic human or mechanical action videos. We observed activation selective to a dynamic face stimulus over bilateral posterior temporal cortex, but no activation in response to a moving human arm. This selective activation to the social stimulus correlated with age in hours over the first few days post partum. Thus, even very limited experience of face-to-face interaction with other humans may be sufficient to elicit social stimulus activation of relevant cortical regions.