    On the complex dynamics of intracellular ganglion cell light responses in the cat retina

    We recorded intracellular responses from cat retinal ganglion cells to sinusoidal flickering lights and compared the response dynamics to a theoretical model based on coupled nonlinear oscillators. Flicker responses for several different spot sizes were separated into a 'smooth' generator (G) potential and corresponding spike trains. We have previously shown that the G-potential reveals complex, stimulus-dependent, oscillatory behavior in response to sinusoidally flickering lights. Such behavior could be simulated by a modified van der Pol oscillator. In this paper, we extend the model to account for spike generation as well, by including extended Hodgkin-Huxley equations describing local membrane properties. We quantified spike responses by several parameters describing the mean and standard deviation of spike burst duration, timing (phase shift) of bursts, and the number of spikes in a burst. The dependence of these response parameters on stimulus frequency and spot size could be reproduced in great detail by coupling the van der Pol oscillator and Hodgkin-Huxley equations. The model mimics many experimentally observed response patterns, including non-phase-locked irregular oscillations. Our findings suggest that the information in the ganglion cell spike train reflects both intraretinal processing (simulated by the van der Pol oscillator) and local membrane properties described by Hodgkin-Huxley equations. The interplay between these complex processes can be simulated by changing the coupling coefficients between the two oscillators. Our simulations therefore show that irregularities in spike trains, which normally are considered to be noise, may be interpreted as complex oscillations that might carry information. Whitehall Foundation (S93-24
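    A minimal numerical sketch of the kind of coupled model described above, assuming standard Hodgkin-Huxley membrane parameters and placeholder values for the van der Pol parameters, the coupling coefficient k, and the sinusoidal flicker drive; it is intended only to illustrate how a driven van der Pol 'generator potential' can be fed into a spike-generating membrane, not to reproduce the authors' fits.

```python
# Sketch only: a sinusoidally driven van der Pol oscillator (standing in for the
# intraretinal G-potential) coupled into a standard Hodgkin-Huxley membrane.
# All parameter values are placeholders chosen to make the example run.
import numpy as np

def alpha_n(v): return 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
def beta_n(v):  return 0.125 * np.exp(-(v + 65.0) / 80.0)
def alpha_m(v): return 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
def beta_m(v):  return 4.0 * np.exp(-(v + 65.0) / 18.0)
def alpha_h(v): return 0.07 * np.exp(-(v + 65.0) / 20.0)
def beta_h(v):  return 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))

def simulate(f_stim=4.0, amp=1.0, mu=1.0, tau=20.0, k=8.0, T=2000.0, dt=0.01):
    """Euler integration; f_stim is the flicker frequency in Hz, T and dt in ms."""
    n_steps = int(T / dt)
    x, y = 0.1, 0.0                        # van der Pol state (x ~ G-potential)
    V, n, m, h = -65.0, 0.32, 0.05, 0.60   # Hodgkin-Huxley state
    Cm = 1.0                               # membrane capacitance, uF/cm^2
    gNa, gK, gL = 120.0, 36.0, 0.3         # conductances, mS/cm^2 (standard HH values)
    ENa, EK, EL = 50.0, -77.0, -54.4       # reversal potentials, mV
    Vs = np.empty(n_steps)
    for i in range(n_steps):
        t_s = i * dt * 1e-3                # current time in seconds (for the stimulus)
        # sinusoidally driven van der Pol oscillator, slowed by time constant tau (ms)
        drive = amp * np.sin(2.0 * np.pi * f_stim * t_s)
        dx = y / tau
        dy = (mu * (1.0 - x * x) * y - x + drive) / tau
        x += dt * dx
        y += dt * dy
        # HH membrane driven by the oscillator output through coupling coefficient k
        I = k * x
        dV = (I - gNa * m**3 * h * (V - ENa) - gK * n**4 * (V - EK) - gL * (V - EL)) / Cm
        V += dt * dV
        n += dt * (alpha_n(V) * (1.0 - n) - beta_n(V) * n)
        m += dt * (alpha_m(V) * (1.0 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1.0 - h) - beta_h(V) * h)
        Vs[i] = V
    return Vs

if __name__ == "__main__":
    trace = simulate()
    spikes = int(np.sum((trace[1:] >= 0.0) & (trace[:-1] < 0.0)))
    print("upward threshold crossings at 0 mV:", spikes)
```

    Varying the assumed coupling coefficient k and the flicker frequency f_stim changes how tightly the spike bursts lock to the oscillator cycle, which is the qualitative behaviour the abstract attributes to the interplay between the two oscillators.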

    The what and why of perceptual asymmetries in the visual domain

    Perceptual asymmetry is one of the most important characteristics of our visual functioning. We carefully reviewed the scientific literature in order to examine such asymmetries, separating them into two major categories: within-visual field asymmetries and between-visual field asymmetries. We explain these asymmetries in terms of perceptual aspects or tasks, the what of the asymmetries, and in terms of underlying mechanisms, the why of the asymmetries. The within-visual field asymmetries are fundamental to orientation, motion direction, and spatial frequency processing. Between-visual field asymmetries have been reported for a wide range of perceptual phenomena. Foveal dominance over the periphery, in particular, has been prominent for visual acuity, contrast sensitivity, and colour discrimination; this also holds true for object or face recognition and reading performance. Upper-lower visual field asymmetries in favour of the lower field have been demonstrated for temporal and contrast sensitivities, visual acuity, spatial resolution, orientation, hue, and motion processing. In contrast, upper field advantages have been seen in visual search, apparent size, and object recognition tasks. Left-right visual field asymmetries include left field dominance in spatial (e.g., orientation) processing and right field dominance in non-spatial (e.g., temporal) processing. The left field is also better at low spatial frequency or global and coordinate spatial processing, whereas the right field is better at high spatial frequency or local and categorical spatial processing. All these asymmetries have inborn neural/physiological origins, the primary why, but can also be susceptible to visual experience, the critical why, which promotes or blocks the asymmetries by altering neural functions.

    The Upper and Lower Visual Field of Man: Electrophysiological and Functional Differences

    Directional motion sensitivity under transparent motion conditions

    We measured directional sensitivity to a foreground pattern while an orthogonally directed background pattern was present under transparent motion conditions. For both the foreground and background pattern, the speed was varied between 0.5 and 28 deg sec-1. A multi-step paradigm was employed which results in a better estimation of the suppressive or facilitatory effects than previously applied single-step methods (e.g. measuring Dmin or Dmax). Moreover, our method gives insight into the interactions for a wide range of speeds and not just the extreme motion thresholds (the D-values). We found that high background speeds have an inhibitory effect on the detection of a range of high foreground speeds and low background speeds have an inhibitory effect on a range of low foreground speeds. Intermediate background pattern speeds inhibit the detection of both low and high foreground pattern speeds and do so in a systematic manner.

    Slow and fast visual motion channels have independent binocular-rivalry stages.

    We have previously reported a transparent motion after-effect indicating that the human visual system comprises separate slow and fast motion channels. Here, we report that the presentation of a fast motion in one eye and a slow motion in the other eye does not result in binocular rivalry but in a clear percept of transparent motion. We call this new visual phenomenon 'dichoptic motion transparency' (DMT). So far only the DMT phenomenon and the two motion after-effects (the 'classical' motion after-effect, seen after motion adaptation on a static test pattern, and the dynamic motion after-effect, seen on a dynamic-noise test pattern) appear to isolate the channels completely. The speed ranges of the slow and fast channels overlap strongly and are observer dependent. A model is presented that links after-effect durations of an observer to the probability of rivalry or DMT as a function of dichoptic velocity combinations. Model results support the assumption of two highly independent channels showing only within-channel rivalry, and no rivalry or after-effect interactions between the channels. The finding of two independent motion vision channels, each with a separate rivalry stage and a private line to conscious perception, might be helpful in visualizing or analysing pathways to consciousness.
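    An illustrative sketch of the two-channel idea above, not the quantitative model reported in the paper: speeds are assigned to a slow or a fast channel by assumed log-Gaussian tuning (all centres and widths are placeholders), and rivalry is predicted only when the two eyes' speeds drive the same channel, with DMT predicted otherwise.

```python
# Sketch only: encodes the qualitative claim that rivalry is within-channel,
# so same-channel dichoptic speeds give rivalry and different-channel speeds
# give dichoptic motion transparency (DMT). Tuning parameters are placeholders.
import numpy as np

def channel_activation(speed, centre, sigma=0.6):
    """Log-Gaussian speed tuning; centre and sigma are assumed values."""
    return np.exp(-0.5 * ((np.log(speed) - np.log(centre)) / sigma) ** 2)

def predict(v_left, v_right, slow_centre=1.0, fast_centre=10.0):
    """Return 'rivalry' if both eyes favour the same speed channel, else 'DMT'."""
    def dominant(v):
        slow = channel_activation(v, slow_centre)
        fast = channel_activation(v, fast_centre)
        return "slow" if slow >= fast else "fast"
    return "rivalry" if dominant(v_left) == dominant(v_right) else "DMT"

if __name__ == "__main__":
    print(predict(0.8, 1.5))    # both speeds drive the slow channel -> rivalry
    print(predict(0.8, 12.0))   # slow vs fast channel -> transparent motion (DMT)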

    The perceived direction of textured gratings and their motion aftereffects.

    The stimuli in these experiments are square-wave luminance gratings with an array of small random dots covering the high-luminance regions. Owing to the texture, the direction of these gratings, when seen through a circular aperture, is disambiguated because the visual system is provided with unambiguous motion energy. Thus, the direction of textured gratings can be varied independently of grating orientation. When subjects are required to judge the direction of textured gratings moving obliquely relative to their orientation, they can do so accurately (experiment 1). This is of interest because most studies of one-dimensional motion perception have involved (textureless) luminance-defined sine-wave or square-wave gratings, and the perceived direction of these gratings is constrained by the aperture problem to be orthogonal to their orientation. Thus, direction and orientation have often been confounded. Interestingly, when subjects are required to judge the direction of an obliquely moving textured grating during a period of adaptation and then the direction of the motion aftereffect (MAE) immediately following adaptation (experiments 2 and 3), these directions are not directly opposite each other. MAE directions were always more orthogonal to the orientation of the adapting grating than the corresponding direction judgments during adaptation (by as much as 25 degrees). These results are not readily explained by conventional MAE models and possible accounts are considered.
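    A small worked example, not taken from the paper, of the aperture-problem constraint mentioned above: for a textureless 1-D grating only the velocity component along the grating normal is available, so the perceived direction collapses to the direction orthogonal to the bars, whereas the texture dots supply unambiguous 2-D motion energy and allow oblique directions to be seen veridically.

```python
# Sketch of the aperture-problem projection for a textureless grating.
import numpy as np

def aperture_constrained_velocity(true_velocity, orientation_deg):
    """Keep only the component of a 2-D velocity along the grating normal."""
    theta = np.deg2rad(orientation_deg)                 # orientation of the bars
    normal = np.array([-np.sin(theta), np.cos(theta)])  # unit normal to the bars
    return np.dot(true_velocity, normal) * normal

if __name__ == "__main__":
    v_true = np.array([1.0, 1.0])   # texture-defined motion, 45 deg up and to the right
    v_seen = aperture_constrained_velocity(v_true, orientation_deg=0.0)  # horizontal bars
    print("true direction (deg):", np.degrees(np.arctan2(v_true[1], v_true[0])))
    print("aperture-constrained direction (deg):", np.degrees(np.arctan2(v_seen[1], v_seen[0])))
```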

    Monocular mechanisms determine plaid motion coherence

    Although the neural location of the plaid motion coherence process is not precisely known, the middle temporal (MT) cortical area has been proposed as a likely candidate. This claim rests largely on neurophysiological findings showing that, in response to plaid stimuli, a subgroup of cells in area MT responds to the pattern direction, whereas cells in area V1 respond only to the directions of the component gratings. In Experiment 1, we report that the coherent motion of a plaid pattern can be completely abolished following adaptation to a grating which moves in the plaid direction and has the same spatial period as the plaid features (the so-called 'blobs'). Interestingly, we find this phenomenon is monocular: monocular adaptation destroys plaid coherence in the exposed eye but leaves it unaffected in the other eye. Experiment 2 demonstrates that adaptation to a purely binocular (dichoptic) grating does not affect perceived plaid coherence. These data suggest several conclusions: (1) that the mechanism determining plaid coherence responds to the motion of plaid features, (2) that the coherence mechanism is monocular, and thus (3) that it is probably located at a relatively low level in the visual system, peripheral to the binocular mechanisms commonly presumed to underlie two-dimensional (2-D) motion perception. Experiment 3 examines the spatial tuning of the monocular coherence mechanism and our results suggest it is broadly tuned with a preference for lower spatial frequencies. In Experiment 4, we examine whether perceived plaid direction is determined by the motion of the grating components or the features. Our data strongly support a feature-based model.
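    As background to the component-versus-feature distinction above, here is a brief sketch (not from the paper) of how a plaid's pattern velocity relates to its component gratings: each 1-D component with unit normal n_i and normal speed s_i constrains the 2-D pattern velocity v through n_i · v = s_i, and solving the two constraints gives the intersection-of-constraints velocity, which for a rigid plaid coincides with the velocity of the features ('blobs').

```python
# Sketch: pattern (feature / intersection-of-constraints) velocity of a plaid
# computed from the orientations and normal speeds of its two component gratings.
import numpy as np

def ioc_velocity(orientations_deg, normal_speeds):
    """Solve n_i . v = s_i for the 2-D pattern velocity v (two component gratings)."""
    thetas = np.deg2rad(np.asarray(orientations_deg, dtype=float))
    normals = np.column_stack([-np.sin(thetas), np.cos(thetas)])  # unit normals to the bars
    return np.linalg.solve(normals, np.asarray(normal_speeds, dtype=float))

if __name__ == "__main__":
    # Two gratings oriented +/-60 deg from horizontal, each drifting at 1 deg/s along
    # its own normal: the plaid features ('blobs') move straight upward at 2 deg/s,
    # a direction and speed that neither component signals on its own.
    print("pattern (feature / IOC) velocity:", ioc_velocity([60.0, -60.0], [1.0, 1.0]))
```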