
    Neural Dynamics of Motion Processing and Speed Discrimination

    A neural network model of visual motion perception and speed discrimination is presented. The model shows how a distributed population code of speed tuning that realizes a size-speed correlation can be derived from the simplest mechanisms whereby activations of multiple spatially short-range filters of different size are transformed into speed-tuned cell responses. These mechanisms use transient cell responses to moving stimuli, output thresholds that covary with filter size, and competition. These mechanisms are proposed to occur in the V1→MT cortical processing stream. The model reproduces empirically derived speed discrimination curves and simulates data showing how visual speed perception and discrimination can be affected by stimulus contrast, duration, dot density and spatial frequency. Model motion mechanisms are analogous to mechanisms that have been used to model 3-D form and figure-ground perception. The model forms the front end of a larger motion processing system that has been used to simulate how global motion capture occurs, and how spatial attention is drawn to moving forms. It provides a computational foundation for an emerging neural theory of 3-D form and motion perception. Office of Naval Research (N00014-92-J-4015, N00014-91-J-4100, N00014-95-1-0657, N00014-95-1-0409, N00014-94-1-0597, N00014-95-1-0409); Air Force Office of Scientific Research (F49620-92-J-0499); National Science Foundation (IRI-90-00530)
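
    As a rough illustration of the scheme described above, the Python sketch below reads speed out of a bank of filters of different sizes using transient responses, output thresholds that grow with filter size, and competition implemented as divisive normalization. The filter sizes, the transient-response shape, and all parameter values are illustrative assumptions, not those of the published model.

        import numpy as np

        def transient_response(speed, size, dt=0.1):
            """Toy transient drive: largest when the stimulus crosses about one
            receptive field within the brief integration window dt."""
            x = speed * dt / size
            return speed * x * np.exp(1.0 - x)   # over sizes, peaks where speed * dt == size

        def population_code(speed, sizes, threshold_gain=0.05):
            r = np.array([transient_response(speed, s) for s in sizes])
            r = np.maximum(r - threshold_gain * np.array(sizes), 0.0)  # thresholds covary with size
            return r / (r.sum() + 1e-12)                               # competition via normalization

        sizes = [0.5, 1.0, 2.0, 4.0, 8.0]   # receptive-field sizes in deg (illustrative)
        for v in (1.0, 4.0, 16.0):          # stimulus speeds in deg/s (illustrative)
            print(f"{v:5.1f} deg/s:", np.round(population_code(v, sizes), 2))

    With these made-up values the peak of the normalized population response shifts from the smallest filters toward larger ones as speed increases, which is the size-speed correlation the model exploits.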

    Interocular suppression and contrast gain control in human vision

    The human visual system combines contrast information from the two eyes to produce a single cyclopean representation of the external world. This task requires both summation of congruent images and inhibition of incongruent images across the eyes. These processes were explored psychophysically using narrowband sinusoidal grating stimuli. Initial experiments focussed on binocular interactions within a single detecting mechanism, using contrast discrimination and contrast matching tasks. Consistent with previous findings, dichoptic presentation produced greater masking than monocular or binocular presentation. Four computational models were compared, two of which performed well on all data sets. Suppression between mechanisms was then investigated, using orthogonal and oblique stimuli. Two distinct suppressive pathways were identified, corresponding to monocular and dichoptic presentation. Both pathways act prior to binocular summation of signals, and differ in their strengths, tuning, and response to adaptation, consistent with recent single-cell findings in cat. Strikingly, the magnitude of dichoptic masking was found to be spatiotemporally scale invariant, whereas monocular masking was dependent on stimulus speed. Interocular suppression was further explored using a novel manipulation whereby stimuli were presented in dichoptic antiphase. Consistent with the predictions of a computational model, this produced weaker masking than in-phase presentation. This allowed the bandwidths of suppression to be measured without the complicating factor of additive combination of mask and test. Finally, contrast vision in strabismic amblyopia was investigated. Although amblyopes are generally believed to have impaired binocular vision, binocular summation was shown to be intact when stimuli were normalized for interocular sensitivity differences. An alternative account of amblyopia was developed, in which signals in the affected eye are subject to attenuation and additive noise prior to binocular combination.
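
    The suppression prior to binocular summation described here is commonly modelled as divisive interocular gain control. The sketch below shows one generic, assumed form of such a stage; the exponent, saturation constant, and suppression weight are placeholders, not the fitted models compared in this work.

        def monocular_stage(c_this_eye, c_other_eye, m=2.0, s=1.0, w=1.0):
            """One eye's excitation, divisively suppressed by contrast in both eyes.
            m, s, and w are placeholder values, not fitted parameters."""
            return c_this_eye ** m / (s + c_this_eye + w * c_other_eye)

        def binocular_sum(c_left, c_right):
            """Binocular combination of the two post-suppression signals."""
            return monocular_stage(c_left, c_right) + monocular_stage(c_right, c_left)

        # A fixed 0.2 (20%) contrast target in the left eye loses signal as a
        # right-eye mask grows, illustrating interocular suppression acting
        # before binocular combination.
        for mask in (0.0, 0.1, 0.2, 0.4, 0.8):
            print(f"mask {mask:.1f}: left-eye signal {monocular_stage(0.2, mask):.4f}")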

    Altered visual perception near the hands: a critical review of attentional and neurophysiological models

    Visual perception changes as a function of hand proximity. While various theoretical accounts have been offered for this alteration (attentional prioritisation, bimodal cell involvement, detailed evaluation, and magnocellular neuron input enhancement), the current literature lacks consensus on these mechanisms. The purpose of this review, therefore, is to critically evaluate the existing body of literature in light of these distinct theoretical accounts. We find that a growing number of results support the magnocellular (M-cell) enhancement account, and are difficult to reconcile with general attention-based explanations. Despite this key theoretical development in the field, there has been some ambiguity in the interpretations offered in recent papers, for example equating the existing attentional and M-cell based explanations, when in fact they make contrasting predictions. We therefore highlight the differential predictions arising from the distinct theoretical accounts. Importantly, however, we also offer novel perspectives that synthesise the role of attention and neurophysiological mechanisms in understanding altered visual perception near the hands. We envisage that this theoretical development will ensure that the field can progress from documenting behavioural differences to a consensus on the underlying visual and neurophysiological mechanisms. This research was supported by an Australian Research Council (ARC) Discovery Early Career Research Award (DE140101734) awarded to S.C.G., an ARC Discovery Project (DP110104553) awarded to M.E., and Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grants awarded to S.F. and J.P.

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task where there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it lends further weight to the argument that objects may be stored in, and retrieved from, a pre-attentional store during this task.

    Measurement of elementary movement vectors in human visual system

    http://tartu.ester.ee/record=b1001964~S1*es

    High-frequency neural oscillations and visual processing deficits in schizophrenia

    Visual information is fundamental to how we understand our environment, make predictions, and interact with others. Recent research has underscored the importance of visuo-perceptual dysfunctions for cognitive deficits and pathophysiological processes in schizophrenia. In the current paper, we review evidence for the relevance of high-frequency (beta/gamma) oscillations to visuo-perceptual dysfunctions in schizophrenia. In the first part of the paper, we examine the relationship between beta/gamma band oscillations and visual processing during normal brain functioning. We then summarize EEG/MEG studies that demonstrate reduced amplitude and synchrony of high-frequency activity during visual stimulation in schizophrenia. In the final part of the paper, we identify neurobiological correlates as well as offer perspectives for future research to stimulate further inquiry into the role of high-frequency oscillations in visual processing impairments in the disorder.

    Binocular summation revisited: beyond √2

    Our ability to detect faint images is better with two eyes than with one, but how great is this improvement? A meta-analysis of 65 studies published across more than five decades shows definitively that psychophysical binocular summation (the ratio of binocular to monocular contrast sensitivity) is significantly greater than the canonical value of √2. Several methodological factors were also found to affect summation estimates. Binocular summation was significantly affected by both the spatial and temporal frequency of the stimulus, and stimulus speed (the ratio of temporal to spatial frequency) systematically predicts summation levels, with slow speeds (high spatial and low temporal frequencies) producing the strongest summation. We furthermore show that empirical summation estimates are affected by the ratio of monocular sensitivities, which varies across individuals, and is abnormal in visual disorders such as amblyopia. A simple modeling framework is presented to interpret the results of summation experiments. In combination with the empirical results, this model suggests that there is no single value for binocular summation, but instead that summation ratios depend on methodological factors that influence the strength of a nonlinearity occurring early in the visual pathway, before binocular combination of signals. Best practice methodological guidelines are proposed for obtaining accurate estimates of neural summation in future studies, including those involving patient groups with impaired binocular vision.
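
    One common way to formalize this kind of framework is Minkowski summation of monocular signals after an early nonlinearity; the form and the exponent values below are assumptions used for illustration, not the paper's fitted model. With equal eyes the summation ratio is 2^(1/m), so an exponent of 2 gives the canonical √2, a weaker nonlinearity gives larger ratios, and unequal monocular sensitivities shrink the measured ratio.

        def binocular_sensitivity(s_left, s_right, m):
            """Minkowski combination of monocular sensitivities with exponent m."""
            return (s_left ** m + s_right ** m) ** (1.0 / m)

        def summation_ratio(s_left, s_right, m):
            """Conventional summation index: binocular over best monocular sensitivity."""
            return binocular_sensitivity(s_left, s_right, m) / max(s_left, s_right)

        # Equal eyes: the ratio is 2 ** (1 / m); m = 2 gives sqrt(2) = 1.41.
        for m in (1.0, 1.3, 2.0, 4.0):
            print(f"m = {m}: equal eyes -> {summation_ratio(1.0, 1.0, m):.2f}")

        # Unequal monocular sensitivities (e.g. amblyopia) reduce the measured ratio.
        for s_weak in (1.0, 0.7, 0.4, 0.1):
            print(f"weak-eye sensitivity {s_weak}: ratio {summation_ratio(1.0, s_weak, 1.3):.2f}")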

    Neural Representation of Vocalizations in Noise in the Primary Auditory Cortex of Marmoset Monkeys

    Robust auditory perception plays a pivotal role in processing behaviorally relevant sounds, particularly when there are auditory distractions from the environment. The neuronal coding enabling this ability, however, is still not well understood. In this study we recorded single-unit activity from the primary auditory cortex of alert common marmoset monkeys (Callithrix jacchus) while delivering conspecific vocalizations degraded by two different background noises: broadband white noise (WGN) and vocalization babble (Babble). Noise effects on single-unit neural representation of target vocalizations were quantified by measuring the response similarity elicited by natural vocalizations as a function of signal-to-noise ratio (SNR). Four consistent response classes (robust, balanced, insensitive, and brittle) were found under both noise conditions, with an average of about two-thirds of the neurons changing their response class when encountering different noises. These results indicate that the distortion induced by one particular masking background in single-unit responses is not necessarily predictable from that induced by another, which further suggests the low likelihood of a unique group of noise-invariant neurons across different background conditions in the primary auditory cortex. In addition, for a relatively large fraction of neurons, strong synchronized responses can be elicited by white noise alone, countering the conventional wisdom that white noise elicits relatively few temporally aligned spikes in higher auditory regions. The variable single-unit responses yet consistent population responses imply that the primate primary auditory cortex performs scene analysis predominantly at the population level. Next, by pooling all single units together, a pseudo-population analysis was implemented to gain more insight into how individual neurons work together to encode and discriminate vocalizations at various intensities and SNR levels. Population response variability over time was found to covary negatively with the stimulus-driven firing rate elicited by vocalizations at multiple intensities; a much weaker trend was observed for vocalizations in noise. By applying dimensionality reduction techniques to the pooled single-neuron responses, we were able to visualize the dynamics of neural ensemble responses to vocalizations in noise as trajectories in a low-dimensional space. The resulting trajectories showed a clear separation between neural responses to vocalizations and WGN, whereas trajectories of neural responses to vocalizations and Babble lay much closer to each other. Discrimination performance of neural populations, evaluated with neural response classifiers, revealed that a finer optimal temporal resolution and a longer time scale of temporal dynamics were needed for vocalizations in noise than for vocalizations at multiple intensities. Finally, within the whole population, a subpopulation of neurons yielded optimal discrimination performance. Together, for different background noises, the results in this dissertation provide evidence for heterogeneous responses at the individual-neuron level and for consistent response properties at the population level.
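
    The response-similarity measure described above might be computed along the lines of the sketch below, which correlates a unit's binned response (PSTH) to the clean vocalization with its response at each SNR. The simulated spike counts, the binning, the mixing rule, and the use of Pearson correlation are illustrative assumptions, not the dissertation's exact pipeline.

        import numpy as np

        rng = np.random.default_rng(0)

        def psth_similarity(clean_psth, noisy_psth):
            """Pearson correlation between clean- and noisy-condition PSTHs."""
            return np.corrcoef(clean_psth, noisy_psth)[0, 1]

        clean = rng.poisson(lam=5.0, size=50).astype(float)   # simulated clean-condition PSTH
        for snr_db in (20, 10, 0, -10):
            noise_weight = 1.0 / (1.0 + 10 ** (snr_db / 10))  # more noise weight at low SNR
            noisy = (1 - noise_weight) * clean + noise_weight * rng.poisson(5.0, 50)
            print(f"SNR {snr_db:+d} dB: similarity {psth_similarity(clean, noisy):.2f}")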

    Direction discrimination thresholds in binocular, monocular, and dichoptic viewing: motion opponency and contrast gain control

    We studied the binocular organization of motion opponency and its relationship to contrast gain control. Luminance contrast thresholds for discriminating direction of motion were measured for drifting Gabor patterns (target) presented on counterphase flickering Gabor patterns (pedestal). There were four presentation conditions: binocular, monocular, dichoptic, and half-binocular. For the half-binocular presentation, the target was presented to one eye while pedestals were presented to both eyes. In addition, to test for motion opponency, we studied two increment-plus-decrement conditions, in which the target increased contrast for one direction of movement but decreased it for the oppositely moving component of the pedestal. Threshold versus pedestal contrast functions showed a dipper shape, and there was a strong interaction between pedestal contrast and test condition. Binocular thresholds were lower than monocular thresholds, but only at low pedestal contrasts. Monocular and half-binocular thresholds were similar at low pedestal contrasts, but half-binocular thresholds became higher and closer to dichoptic thresholds as pedestal contrast increased. Adding the decremental target reduced thresholds by a factor of two or more (a strong sign of opponency), whether the decrement was in the same eye as the increment or in the opposite eye. We compared several computational models fitted to the data. Converging evidence from the present and previous studies (Gorea, Conway, & Blake, 2001) suggests that motion opponency is most likely to be monocular, occurring before direction-specific binocular summation and before divisive binocular gain control.
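
    A minimal sketch of the opponency logic is given below; the linear opponent stage and the contrast values are illustrative assumptions, not the fitted models compared in the paper. If each eye computes a direction-opponent signal before binocular summation, then a contrast decrement applied to the oppositely moving pedestal component adds to the signal just as an increment does, which is why the decremental target lowers thresholds.

        def opponent_signal(pref_energy, null_energy):
            """Direction-opponent signal within one eye (preferred minus opposite)."""
            return pref_energy - null_energy

        def binocular_opponent(pref_left, null_left, pref_right, null_right):
            """Opponency within each eye, then binocular summation of opponent signals."""
            return opponent_signal(pref_left, null_left) + opponent_signal(pref_right, null_right)

        pedestal = 0.2    # counterphase pedestal: equal contrast moving in both directions
        increment = 0.05  # target adds contrast to the preferred direction in the left eye

        inc_only = binocular_opponent(pedestal + increment, pedestal, pedestal, pedestal)
        inc_plus_dec = binocular_opponent(pedestal + increment, pedestal - increment,
                                          pedestal, pedestal)
        # The added decrement doubles the opponent signal relative to the increment alone.
        print(f"increment only: {inc_only:.2f}, increment plus decrement: {inc_plus_dec:.2f}")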

    Effect of contrast on the perception of direction of a moving pattern

    A series of experiments examining the effect of contrast on the perception of moving plaids was performed to test the hypothesis that the human visual system determines the direction of a moving plaid in a two-stage process: decomposition into component motion followed by application of the intersection-of-constraints rule. Although there is recent evidence that the first tenet of the hypothesis is correct, i.e., that plaid motion is initially decomposed into the motion of the individual grating components, the nature of the second-stage combination rule has not yet been established. It was found that when the gratings within the plaid are of different contrasts, the perceived direction is not predicted by the intersection-of-constraints rule. There is a strong (up to 20 deg) bias in the direction of the higher-contrast grating. A revised model, which incorporates a contrast-dependent weighting of perceived grating speed as observed for one-dimensional patterns, can quantitatively predict most of the results. The results are then discussed in the context of various models of human visual motion processing and of physiological responses of neurons in the primate visual system.
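
    The geometry of the two stages can be sketched as follows; the contrast-dependent speed-scaling function and the stimulus values are illustrative assumptions, not the paper's fitted model. The intersection-of-constraints (IOC) rule recovers the pattern velocity from the component constraints, and scaling each component's speed by a contrast-dependent factor before applying the rule biases the recovered direction toward the higher-contrast grating.

        import numpy as np

        def ioc_velocity(normal1, speed1, normal2, speed2):
            """Solve v . n1 = s1 and v . n2 = s2 for the pattern velocity v (IOC rule)."""
            A = np.vstack([normal1, normal2])
            return np.linalg.solve(A, np.array([speed1, speed2]))

        def perceived_speed(speed, contrast, c50=0.1):
            """Toy contrast-dependent speed scaling: lower contrast looks slower."""
            return speed * contrast / (contrast + c50)

        # Two gratings drifting at 1 deg/s along normals 45 deg either side of horizontal.
        n1 = np.array([np.cos(np.deg2rad(45)), np.sin(np.deg2rad(45))])
        n2 = np.array([np.cos(np.deg2rad(-45)), np.sin(np.deg2rad(-45))])

        veridical = ioc_velocity(n1, 1.0, n2, 1.0)
        biased = ioc_velocity(n1, perceived_speed(1.0, 0.40),   # high-contrast component
                              n2, perceived_speed(1.0, 0.05))   # low-contrast component
        for v in (veridical, biased):
            print(f"direction {np.degrees(np.arctan2(v[1], v[0])):6.1f} deg, "
                  f"speed {np.linalg.norm(v):.2f}")

    With these example values the veridical plaid direction is 0 deg, while the contrast-weighted version is pulled by about 22 deg toward the higher-contrast component's direction of motion, comparable to the biases reported above.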