2,433 research outputs found

    A Neuron Model with Variable Ion Concentrations

    Full text link
    Many neuron models exist, but the membrane voltage is usually their central feature. Interest in long-term potentiation (LTP) has recently surged because LTP is linked to learning, and LTP has been shown to be accompanied by an increase of the internal calcium concentration. Models with a variable calcium concentration have therefore been proposed. Since the calcium concentration is very low, its variation has a negligible effect on the membrane potential. In the present model all ion concentrations are variable, driven by the ionic currents and by ion pumps. It is shown that this significantly increases the complexity of neural processing, and thus variable ion concentrations cannot be ignored in neurons with high firing frequencies or with very long depolarizations. Air Force Office of Scientific Research (F49620-92-J-0225, F49620-92-J-0334)
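
    As an illustration of the bookkeeping such a model requires, the following is a minimal single-compartment sketch, not the paper's model: the Na+ and K+ reversal potentials are recomputed at every step from internal concentrations that are themselves updated by the ionic currents and a 3Na+/2K+ pump. All parameter values, the fixed leak-style conductances, and the crude sodium-dependent pump term are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's model): a single-compartment
# cell whose Na+ and K+ reversal potentials are recomputed from time-varying
# internal concentrations, which in turn are driven by the ionic currents
# and a 3Na+/2K+ pump.
import math

F = 96485.0          # Faraday constant, C/mol
RT_F = 26.7e-3       # RT/F near body temperature, volts
Cm = 1e-6            # membrane capacitance per area, F/cm^2 (assumed)
area = 1e-6          # membrane area, cm^2 (assumed)
vol = 1e-12          # intracellular volume, liters (assumed)

# fixed extracellular concentrations (mM) and fixed conductances (S/cm^2);
# a full model would add gating variables, Cl-, Ca2+, and volume changes
Na_out, K_out = 145.0, 5.0
g_Na, g_K = 0.05e-3, 1.0e-3
pump_max = 1e-6      # maximal pump current, A/cm^2 (assumed)

def nernst(c_out, c_in):
    """Reversal potential recomputed from the current concentrations."""
    return RT_F * math.log(c_out / c_in)

V, Na_in, K_in = -70e-3, 15.0, 140.0    # volts, mM, mM
dt, steps = 1e-5, 100_000               # 1 s of simulated time
for _ in range(steps):
    E_Na, E_K = nernst(Na_out, Na_in), nernst(K_out, K_in)
    I_Na = g_Na * (V - E_Na)                    # positive = outward current
    I_K = g_K * (V - E_K)
    I_pump = pump_max * Na_in / (Na_in + 10.0)  # crude Na-dependent 3Na/2K pump
    V -= dt * (I_Na + I_K + I_pump) / Cm
    # the same currents move ions, shifting the concentrations (and E_Na, E_K)
    to_mM = area / (F * vol) * 1e3              # converts A/cm^2 to mM/s
    Na_in += dt * (-I_Na - 3.0 * I_pump) * to_mM
    K_in += dt * (-I_K + 2.0 * I_pump) * to_mM
print(f"V = {V*1e3:.1f} mV, [Na+]_in = {Na_in:.1f} mM, [K+]_in = {K_in:.1f} mM")
```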

    Motion Aftereffects Due to Interocular Summation of Adaptation to Linear Motion

    Full text link
    The motion aftereffect (MAE) can be elicited by adapting observers to global motion before they view a display containing no global motion. Experiments by others have shown that if the left eye of an observer is adapted to motion going in one direction and the right eye to motion in the opposite direction, no MAE is reported during binocular testing. The present study investigated whether the absence of binocular adaptation occurred because the monocular motion signals cancelled each other during testing. Observers were adapted to different, but not quite opposite, directions of motion in the two eyes. Either both eyes, the left eye, or the right eye were tested. Observers reported the direction of perceived motion during the test. When they saw the test stimulus with both eyes, observers reported seeing motion in the direction opposite the vectorial sum of the adaptation directions. In the monocular test conditions observers reported MAE directions about halfway between their binocular report and the direction opposite the corresponding monocular adaptation direction, indicating that both monocular and binocular sites had adapted. A decomposition of the observed MAEs based on two strictly monocular and one binocular representation of motion adaptation can account for the data. Air Force Office of Scientific Research (F49620-92-J-0225, F49620-92-J-0334); Northeast Consortium for Engineering Education (NCEE A303-21-93); Office of Naval Research (N00014-91-J-4100, N00014-94-1-0597)
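
    To make the geometry of that decomposition concrete, the sketch below computes a binocular MAE opposite the vector sum of the two adaptation directions and monocular MAEs roughly halfway between the binocular direction and the direction opposite each eye's own adaptation, as described above. The adaptation angles and the simple unit-vector averaging rule are illustrative assumptions, not the study's fitted model.

```python
# Illustrative MAE geometry: binocular MAE opposes the vector sum of the two
# adaptation directions; each monocular MAE lies roughly halfway between the
# binocular MAE and the direction opposite that eye's own adaptation.
import math

def unit(deg):
    r = math.radians(deg)
    return (math.cos(r), math.sin(r))

def angle(v):
    return math.degrees(math.atan2(v[1], v[0])) % 360.0

def between(a_deg, b_deg):
    # direction halfway between two directions, via unit-vector averaging
    va, vb = unit(a_deg), unit(b_deg)
    return angle((va[0] + vb[0], va[1] + vb[1]))

# hypothetical adaptation directions for the two eyes (different, not opposite)
left_adapt, right_adapt = 70.0, 110.0

# binocular MAE: opposite the vector sum of the two adaptation directions
vsum = tuple(a + b for a, b in zip(unit(left_adapt), unit(right_adapt)))
binoc_mae = (angle(vsum) + 180.0) % 360.0

# monocular MAEs: halfway between the binocular MAE direction and the
# direction opposite that eye's own adaptation direction
left_mae = between(binoc_mae, (left_adapt + 180.0) % 360.0)
right_mae = between(binoc_mae, (right_adapt + 180.0) % 360.0)

print(f"binocular MAE: {binoc_mae:.0f} deg")
print(f"left-eye MAE:  {left_mae:.0f} deg, right-eye MAE: {right_mae:.0f} deg")
```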

    Binding of Object Representations by Synchronous Cortical Dynamics Explains Temporal Order and Spatial Pooling Data

    Full text link
    A key problem in cognitive science concerns how the brain binds together parts of an object into a coherent visual object representation. One difficulty that this binding process needs to overcome is that different parts of an object may be processed by the brain at different rates and may thus become desynchronized. Perceptual framing is a mechanism that resynchronizes cortical activities corresponding to the same retinal object. A neural network model based on cooperation between oscillators via feedback from a subsequent processing stage is presented that is able to rapidly resynchronize desynchronized featural activities. Model properties help to explain perceptual framing data, including psychophysical data about temporal order judgments. These cooperative model interactions also simulate data concerning the reduction of threshold contrast as a function of stimulus length. The model hereby provides a unified explanation of temporal order and threshold contrast data as manifestations of a cortical binding process that can rapidly resynchronize image parts which belong together in visual object representations. Air Force Office of Scientific Research (F49620-92-J-0225, F49620-92-J-0334, F49620-92-J-0499); Office of Naval Research (N00014-92-J-4015, N00014-91-J-4100)
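
    The following toy sketch illustrates resynchronization through feedback from a later stage using generic phase oscillators; it is not the paper's model equations, and the carrier frequency, coupling gain, and initial phase scatter are assumed values.

```python
# Toy sketch: phase oscillators standing in for desynchronized featural
# activities are pulled back into register by feedback from a later stage
# that pools their activity (a mean field).  All parameters are assumed.
import cmath, math, random

random.seed(0)
n = 8                                  # number of featural oscillators
omega = 2.0 * math.pi * 40.0           # shared carrier frequency (assumed 40 Hz)
K, dt, steps = 25.0, 1e-4, 2000        # coupling gain, time step, 0.2 s total
phases = [random.uniform(-1.5, 1.5) for _ in range(n)]   # desynchronized start

def mean_field(ph):
    # the later processing stage pools the oscillators; R measures coherence
    z = sum(cmath.exp(1j * p) for p in ph) / len(ph)
    return abs(z), cmath.phase(z)

print("initial coherence R = %.2f" % mean_field(phases)[0])
for _ in range(steps):
    R, psi = mean_field(phases)
    # each oscillator is nudged toward the pooled phase fed back to it
    phases = [p + dt * (omega + K * R * math.sin(psi - p)) for p in phases]
print("final coherence R   = %.2f" % mean_field(phases)[0])
```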

    Cortical Synchronization and Perceptual Framing

    Full text link
    How does the brain group together different parts of an object into a coherent visual object representation? Different parts of an object may be processed by the brain at different rates and may thus become desynchronized. Perceptual framing is a process that resynchronizes cortical activities corresponding to the same retinal object. A neural network model is presented that is able to rapidly resynchronize desynchronized neural activities. The model provides a link between perceptual and brain data. Model properties quantitatively simulate perceptual framing data, including psychophysical data about temporal order judgments and the reduction of threshold contrast as a function of stimulus length. Such a model has earlier been used to explain data about illusory contour formation, texture segregation, shape-from-shading, 3-D vision, and cortical receptive fields. The model hereby shows how many data may be understood as manifestations of a cortical grouping process that can rapidly resynchronize image parts which belong together in visual object representations. The model exhibits better synchronization in the presence of noise than without noise, a type of stochastic resonance, and synchronizes robustly when cells that represent different stimulus orientations compete. These properties arise when fast long-range cooperation and slow short-range competition interact via nonlinear feedback interactions with cells that obey shunting equations. Office of Naval Research (N00014-92-J-1309, N00014-95-1-0409, N00014-95-1-0657, N00014-92-J-4015); Air Force Office of Scientific Research (F49620-92-J-0334, F49620-92-J-0225)
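
    For reference, the generic shunting (membrane) equation obeyed by such model cells has the standard form
\[
\frac{dx_i}{dt} = -A\,x_i + (B - x_i)\,E_i - (x_i + C)\,I_i ,
\]
    where $A$ is a passive decay rate, $B$ and $-C$ bound the activity $x_i$, and $E_i$ and $I_i$ denote the total excitatory and inhibitory inputs to cell $i$. In the terms of this abstract, $E_i$ would carry the fast long-range cooperative feedback and $I_i$ the slow short-range competition; the model's specific interaction terms are not reproduced here.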

    Temporal Dynamics of Binocular Disparity Processing with Corticogeniculate Interactions

    Full text link
    A neural model of binocular vision is developed to simulate psychophysical and neurobiological data concerning the dynamics of binocular disparity processing. The model shows how feedforward and feedback interactions among LGN ON and OFF cells and cortical simple, complex, and hypercomplex cells can simulate binocular summation, the Pulfrich effect, and the fusion of delayed anticorrelated stereograms. Model retinal ON and OFF cells are linked by an opponent process capable of generating antagonistic rebounds from OFF cells after offset of an ON cell input. Spatially displaced ON and OFF cells excite simple cells. Opposite polarity simple cells compete before their half-wave rectified outputs excite complex cells. Complex cells binocularly match like-polarity simple cell outputs before pooling half-wave rectified signals from opposite polarities. Competitive feedback among complex cells leads to sharpening of disparity selectivity and normalizes cell activity. Slow inhibitory interneurons help to reset complex cells after input offset. The Pulfrich effect occurs because the delayed input from one eye fuses with the present input from the other eye to create a disparity. Binocular summation occurs for stimuli of brief duration or of low contrast because competitive normalization takes time and therefore cannot act on very brief or weak stimuli. At brief SOAs, anticorrelated stereograms can be fused because the rebound mechanism ensures that the present image to one eye can fuse with the afterimage from a previous image to the other eye. Corticogeniculate feedback embodies a matching process that enhances the speed and temporal accuracy of complex cell disparity tuning. Model mechanisms interact to control the stable development of sharp disparity tuning. Air Force Office of Scientific Research (F19620-92-J-0499, F49620-92-J-0334, F49620-92-J-0225); Office of Naval Research (N00014-95-1-0409, N00014-95-1-0657, N00014-92-J-1015, N00014-91-J-4100)
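
    Two of the operations named above, half-wave rectification of competing opposite-polarity simple-cell outputs and a normalization of complex-cell activity across disparity channels, are illustrated in the sketch below. The input values and the steady-state divisive form of the normalization are assumptions for illustration; the paper's circuit uses competitive feedback dynamics rather than this closed-form stand-in.

```python
# Illustrative sketch of half-wave rectification of competing opposite-polarity
# simple-cell outputs and a normalization of complex-cell activities across
# disparity channels.  Inputs and the divisive form are assumptions.

def halfwave(x):
    return max(x, 0.0)

def normalize(activities, sigma=0.1):
    # divisive normalization: bounds total activity and sharpens relative
    # differences, a steady-state stand-in for competitive feedback
    total = sum(activities) + sigma
    return [a / total for a in activities]

# hypothetical net inputs to simple cells of opposite contrast polarity,
# one pair per disparity channel
light_dark = [0.9, -0.2, 0.4]
dark_light = [-0.3, 0.7, 0.1]

# opposite-polarity simple cells compete (subtraction), then the half-wave
# rectified outputs of both polarities are pooled by complex cells
complex_inputs = [halfwave(a - b) + halfwave(b - a)
                  for a, b in zip(light_dark, dark_light)]
print("pooled complex-cell inputs:", complex_inputs)
print("after normalization:      ", normalize(complex_inputs))
```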

    Synchronized Neural Activities: A Mechanism for Perceptual Framing

    Full text link
    Stimulus-dependent variability in retinal and geniculate processing rates suggests that some later process puts parts corresponding to the same retinal image back into register. This resynchronization process is called perceptual framing. Here a neural network model of emergent boundary segmentation is used to show that synchronized cortical activities can subserve this role. Psychophysical results about the minimum delay between two visual stimuli that leads to the perception of temporal order can be explained and replicated with this model. Air Force Office of Scientific Research (F49620-92-J-0499, F49620-92-J-0225, F49620-92-J-0334); Office of Naval Research (N00014-92-J-4015, N00014-91-J-4100)

    UV Continuum, Physical Conditions and Filling Factor in Active Galactic Nuclei

    Full text link
    The narrow line region of active galaxies is formed by gas clouds surrounded by a more diluted gas. Standard one-dimensional photoionization models are usually used to model this region in order to reproduce the observed emission lines. Since the narrow line region is not homogeneous, two major types of models are used: (a) those assuming a homogeneous gas distribution and a filling factor less than unity to mimic the presence of the emitting clouds; (b) those based on a composition of single-cloud models combined in order to obtain the observed spectra. The first method is widely used but may lead to misleading conclusions, as shown in this paper. The second one is more appropriate, but requires a large number of observed lines in order to limit the number of single-cloud models used. After discussing the case of an extragalactic HII region, for which the ionizing radiation spectrum is better known, we show that 1-D models for the narrow line region with a filling factor less than unity do not properly mimic the clumpiness, but just simulate an overall lower density. Multi-cloud models lead to more reliable results. Both kinds of models are tested in this paper, using the emission-line spectra of two well-known Seyfert galaxies, NGC 4151 and NGC 1068. It is shown that ionizing radiation spectra with a blue bump cannot be excluded by multi-cloud models, although they are excluded by Alexander et al. (1999, 2000) using homogeneous models with a filling factor less than unity. Comment: 23 pages, 7 figures. Accepted for Publication in Ap
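
    A schematic way to state the filling-factor degeneracy discussed here, assuming the standard treatment of a filling factor $\epsilon$ in 1-D photoionization codes and without reproducing the paper's calculations, is that $\epsilon$ scales both the effective continuum opacity and the volume emissivity,
\[
d\tau_\nu \simeq \epsilon\, n\, \sigma_\nu\, dr , \qquad
dL_{\mathrm{line}} \propto \epsilon\, n^{2}\, j_{\mathrm{line}}(T)\, dr ,
\]
    so the transfer of the ionizing continuum through the slab behaves as in a homogeneous medium of density $\epsilon n$, while the locally high cloud density $n$ that sets the line emissivities is not recovered; in this sense a homogeneous model with $\epsilon < 1$ mimics an overall lower density rather than true clumpiness.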