
    Oscillatory, Computational, and Behavioral Evidence for Impaired GABAergic Inhibition in Schizophrenia

    The dysconnection hypothesis of schizophrenia (SZ) proposes that psychosis is best understood in terms of aberrant connectivity. Specifically, it suggests that dysconnectivity arises through aberrant synaptic modulation associated with deficits in GABAergic inhibition, excitation-inhibition balance, and disturbances of high-frequency oscillations. Using a computational model combined with a graded-difficulty visual orientation discrimination paradigm, we demonstrate that, in SZ, perceptual performance is determined by the balance of excitation-inhibition in superficial cortical layers. Twenty-eight individuals with a DSM-IV diagnosis of SZ, and 30 age- and gender-matched healthy controls, participated in a psychophysics orientation discrimination task, a visual grating magnetoencephalography (MEG) recording, and a magnetic resonance spectroscopy (MRS) scan for GABA. Using a neurophysiologically informed model, we quantified group differences in GABA, gamma measures, and the predictive validity of model parameters for orientation discrimination in the SZ group. MEG visual gamma frequency was reduced in SZ, with lower peak frequency associated with more severe negative symptoms. Orientation discrimination performance was impaired in SZ. Dynamic causal modeling of the MEG data showed that local synaptic connections were reduced in SZ and that local inhibition correlated negatively with the severity of negative symptoms. The effective connectivity between inhibitory interneurons and superficial pyramidal cells predicted orientation discrimination performance within the SZ group, consistent with graded, behaviorally relevant, disease-related changes in local GABAergic connections. Occipital GABA levels were significantly reduced in SZ but did not predict behavioral performance or oscillatory measures. These findings endorse the importance, and behavioral relevance, of GABAergic synaptic disconnection in schizophrenia that underwrites excitation-inhibition balance.
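The abstract's link between weakened inhibition and slower gamma can be illustrated with a minimal linear excitation-inhibition rate circuit. This is not the authors' dynamic causal model; the weights and time constants below are illustrative assumptions chosen only so that the damped oscillation lands in the gamma band.

```python
import numpy as np

def ei_oscillation_hz(w_ee=1.5, w_ei=2.0, w_ie=2.0, w_ii=0.5,
                      tau_e=4.0, tau_i=8.0):
    """Damped-oscillation frequency (Hz) of a linear E-I rate circuit:

        dE/dt = (-E + w_ee*E - w_ei*I) / tau_e
        dI/dt = (-I + w_ie*E - w_ii*I) / tau_i   (time constants in ms)

    The imaginary part of the Jacobian's eigenvalues gives the
    oscillation frequency in rad/ms.
    """
    J = np.array([[(w_ee - 1.0) / tau_e, -w_ei / tau_e],
                  [ w_ie / tau_i,        -(1.0 + w_ii) / tau_i]])
    lam = np.linalg.eigvals(J)
    assert np.all(lam.real < 0), "circuit should be stable (damped)"
    return abs(lam[0].imag) / (2.0 * np.pi) * 1000.0  # rad/ms -> Hz

healthy = ei_oscillation_hz(w_ei=2.0)   # gamma-band rhythm (~50 Hz)
reduced = ei_oscillation_hz(w_ei=1.5)   # weaker inhibition onto E cells
```

Weakening the inhibitory-to-excitatory coupling `w_ei` shrinks the imaginary part of the eigenvalues, i.e. the circuit rings at a lower frequency, qualitatively matching the reduced gamma peak frequency reported in SZ.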

    Perceived Surface Slant Is Systematically Biased in the Actively-Generated Optic Flow

    Humans make systematic errors in the 3D interpretation of the optic flow in both passive and active vision. These systematic distortions can be predicted by a biologically-inspired model which disregards self-motion information resulting from head movements (Caudek, Fantoni, & Domini 2011). Here, we tested two predictions of this model: (1) a plane that is stationary in an earth-fixed reference frame will be perceived as changing its slant if the movement of the observer's head causes a variation of the optic flow; (2) a surface that rotates in an earth-fixed reference frame will be perceived to be stationary if the surface rotation is appropriately yoked to the head movement so as to generate a variation of the surface slant but not of the optic flow. Both predictions were corroborated by two experiments in which observers judged the perceived slant of a random-dot planar surface during egomotion. We found qualitatively similar biases for monocular and binocular viewing of the simulated surfaces, although, in principle, the simultaneous presence of disparity and motion cues allows for a veridical recovery of surface slant.

    The reference frame for encoding and retention of motion depends on stimulus set size

    The goal of this study was to investigate the reference frames used in perceptual encoding and storage of visual motion information. In our experiments, observers viewed multiple moving objects and reported the direction of motion of a randomly selected item. Using a vector-decomposition technique, we computed performance during smooth pursuit with respect to a spatiotopic (nonretinotopic) and to a retinotopic component and compared them with performance during fixation, which served as the baseline. For the stimulus encoding stage, which precedes memory, we found that the reference frame depends on the stimulus set size. For a single moving target, the spatiotopic reference frame had the most significant contribution with some additional contribution from the retinotopic reference frame. When the number of items increased (Set Sizes 3 to 7), the spatiotopic reference frame was able to account for the performance. Finally, when the number of items became larger than 7, the distinction between reference frames vanished. We interpret this finding as a switch to a more abstract nonmetric encoding of motion direction. We found that the retinotopic reference frame was not used in memory. Taken together with other studies, our results suggest that, whereas a retinotopic reference frame may be employed for controlling eye movements, perception and memory use primarily nonretinotopic reference frames. Furthermore, the use of nonretinotopic reference frames appears to be capacity limited. In the case of complex stimuli, the visual system may use perceptual grouping in order to simplify the complexity of stimuli or resort to a nonmetric abstract coding of motion information.
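A simplified version of the vector-decomposition idea can be sketched as follows. The exact analysis pipeline is not specified in the abstract; the directions, the pursuit geometry, and the linear-solve framing below are illustrative assumptions. A reported motion direction is expressed as a weighted sum of the spatiotopic prediction (the world-fixed stimulus direction) and the retinotopic prediction (stimulus motion minus eye velocity), and the two weights estimate the contribution of each reference frame.

```python
import numpy as np

def decompose_report(report, world_dir, eye_vel):
    """Return weights (w_spatiotopic, w_retinotopic) such that
    report = w_s * world_dir_hat + w_r * retinal_dir_hat,
    where retinal motion = world motion - eye velocity during pursuit."""
    s_hat = world_dir / np.linalg.norm(world_dir)
    retinal = world_dir - eye_vel
    r_hat = retinal / np.linalg.norm(retinal)
    basis = np.column_stack([s_hat, r_hat])   # 2x2, columns are predictions
    return np.linalg.solve(basis, report)

world = np.array([1.0, 0.0])   # stimulus moves rightward on the screen
eye   = np.array([0.0, 1.0])   # upward smooth pursuit

# A report aligned with the world-fixed direction loads entirely on the
# spatiotopic component (w_s = 1, w_r = 0).
w_s, w_r = decompose_report(world, world, eye)
```

A report aligned with the retinal direction would instead load entirely on the retinotopic component, so the pair of weights summarizes which frame dominates performance.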

    Motion perception: behavior and neural substrate

    Visual motion perception is vital for survival. Single-unit recordings in primate primary visual cortex (V1) have revealed the existence of specialized motion sensing neurons; perceptual effects such as the motion after-effect demonstrate their importance for motion perception. Human psychophysical data on motion detection can be explained by a computational model of cortical motion sensors. Both psychophysical and physiological data reveal at least two classes of motion sensor capable of sensing motion in luminance-defined and texture-defined patterns, respectively. Psychophysical experiments also reveal that motion can be seen independently of motion sensor output, based on attentive tracking of visual features. Sensor outputs are inherently ambiguous, due to the problem of univariance in neural responses. In order to compute stimulus direction and speed, the visual system must compare the responses of many different sensors sensitive to different directions and speeds. Physiological data show that this computation occurs in the visual middle temporal (MT) area. Recent psychophysical studies indicate that information about spatial form may also play a role in motion computations. Adaptation studies show that the human visual system is selectively sensitive to large-scale optic flow patterns, and physiological studies indicate that cells in the middle superior temporal (MST) area derive this sensitivity from the combined responses of many MT cells. Extraretinal signals used to control eye movements are an important source of signals to cancel out the retinal motion responses generated by eye movements, though visual information also plays a role. A number of issues remain to be resolved at all levels of the motion-processing hierarchy. © 2010 John Wiley & Sons, Ltd
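The motion sensors described above can be caricatured by a Reichardt-style correlator, a standard textbook model (not necessarily the specific sensor model the review refers to; the stimulus and parameters below are illustrative assumptions). Each subunit multiplies a delayed signal from one location with an undelayed signal from a neighboring location; subtracting the mirror-image subunit yields an opponent output whose sign follows the drift direction.

```python
import numpy as np

def reichardt_response(omega, k=1.0, dx=0.5, dt=0.1, n=10000):
    """Mean opponent output of a two-point Reichardt correlator for a
    drifting sinusoid s(x, t) = sin(k*x - omega*t). Positive omega is
    rightward drift (from x1 toward x2)."""
    t = np.linspace(0.0, 200.0, n)
    x1, x2 = 0.0, dx
    s = lambda x, tt: np.sin(k * x - omega * tt)
    toward_x2 = s(x1, t - dt) * s(x2, t)   # correlates with rightward drift
    toward_x1 = s(x2, t - dt) * s(x1, t)   # correlates with leftward drift
    return float(np.mean(toward_x2 - toward_x1))

rightward = reichardt_response(omega=+2.0)  # positive output
leftward  = reichardt_response(omega=-2.0)  # negative output, same magnitude
```

For a drifting grating the expected output is sin(omega*dt)*sin(k*dx), which also shows the ambiguity noted in the review: a single correlator confounds direction, speed, and spatial structure, so many sensors must be compared to recover velocity.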

    Sub-pixel accuracy: psychophysical validation of an algorithm for fine positioning and movement of dots on visual displays

    Many visual experiments call for visual displays in which dots are plotted with very fine positional accuracy. Spatial hyperacuities and motion displacement thresholds can be as low as 5 sec arc. On computer graphics displays, small angular displacements of a pixel can be obtained only with long viewing distances, which impose a small field of view. To overcome this problem, we describe a method for positioning the centroid of a quadrel (a 2 × 2 block of pixels) with very high accuracy, equivalent to 0.4% of a pixel width. This enables dot displays to be plotted with high positional accuracy at short viewing distances with larger fields of view. We show psychophysically that hyperacuities can be measured with sub-pixel accuracy in quadrel displays. Motion displacement thresholds of 16 sec arc were measured in multiple-dot and single-dot displays even though the pixel spacing was 1.2 min arc. Quadrel displays may be especially useful in studies of optic flow and structure-from-motion, which demand a fairly large field of view along with fine positional accuracy.
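The centroid trick can be sketched as follows. The separable bilinear intensity scheme below is an illustrative reconstruction, not necessarily the paper's exact algorithm: a dot's total luminance is distributed over a 2 × 2 pixel block so that the luminance-weighted centroid lands at any desired fractional offset within a pixel.

```python
import numpy as np

def quadrel(dx, dy, luminance=1.0):
    """2x2 intensity block whose luminance-weighted centroid sits at
    (dx, dy), 0 <= dx, dy <= 1, measured in pixel widths from the
    top-left pixel. Bilinear weights keep total luminance constant."""
    wx = np.array([1.0 - dx, dx])        # horizontal split of luminance
    wy = np.array([1.0 - dy, dy])        # vertical split of luminance
    return luminance * np.outer(wy, wx)  # rows = y, columns = x

def centroid(block):
    """Luminance-weighted centroid (cx, cy) of an intensity block."""
    ys, xs = np.indices(block.shape)
    total = block.sum()
    return (block * xs).sum() / total, (block * ys).sum() / total

q = quadrel(0.3, 0.7)   # centroid lands at (0.3, 0.7), up to float rounding
```

Because the centroid moves continuously as the four intensities change, the dot's effective position can be stepped far more finely than the pixel grid, which is what makes sub-pixel displacement thresholds measurable.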

    Vergence effects on the perception of motion-in-depth

    When the eyes follow a target that is moving directly towards the head they make a vergence eye movement. Accurate perception of the target's motion requires adequate compensation for the movements of the eyes. The experiments in this paper address the issue of how well the visual system compensates for vergence eye movements when viewing moving targets. We show that there are small but consistent biases across observers: when the eyes follow a target that is moving in depth, it is typically perceived as slower than when the eyes are kept stationary. We also analysed the eye movements that were made by observers. We found that there are considerable differences between observers and between trials, but we did not find evidence that the gains and phase lags of the eye movements were related to psychophysical performance.