
    Integration of motion information during binocular rivalry

    When two moving gratings are superimposed in normal viewing they often combine to form a pattern that moves with a single direction of motion. Here, we investigated whether the same mechanism underlies pattern motion when drifting gratings are presented independently to the two eyes. We report that, with relatively large circular grating patches (4 deg), there are periods of monocular dominance in which one eye's orientation alone is perceived, usually moving orthogonal to the contours (component motion). But, during the transitions from one monocular view to the other, a fluid mosaic is perceived, consisting of contiguous patches, each containing contours of only one of the gratings. This entire mosaic often appears to move in a single direction (pattern motion), just as when two gratings are literally superimposed. Although this implies that motion signals from the perceptually suppressed grating continue to influence the perception of motion, an alternative possibility is that it reflects a strategy that involves integrating directional information from the contiguous single-grating patches. To distinguish between these possibilities, we performed a second experiment with very small grating stimuli that were about the same size as the contiguous single-grating patches in the mosaic (1-deg diameter). Despite the fact that the form of only one grating was perceived, we report that pattern motion was still perceived on about one third of trials. Moreover, a decrease in the occurrence of pattern motion was apparent when the contrast and spatial frequency of the gratings were made more different from each other. This phenomenon clearly demonstrates an independent binocular interaction for form and motion.
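
The pattern motion described above is conventionally modelled by the intersection-of-constraints (IOC) rule: each grating's motion only constrains the velocity component along its own normal, and the pattern velocity is the single 2D velocity consistent with both constraints. A minimal sketch, using illustrative angles and speeds rather than the study's actual stimuli:

```python
# Hedged sketch: intersection-of-constraints (IOC) solution for pattern
# motion from two drifting gratings. The angles and speeds below are
# illustrative values, not the stimulus parameters used in the study.
import numpy as np

def ioc_pattern_velocity(angles_deg, speeds):
    """Solve for the 2D pattern velocity v satisfying v . n_i = s_i,
    where n_i is the unit normal of grating i and s_i its drift speed."""
    n = np.array([[np.cos(np.radians(a)), np.sin(np.radians(a))]
                  for a in angles_deg])          # grating normals
    return np.linalg.solve(n, np.array(speeds))  # 2x2 linear system

# Two gratings whose normals point at +/-60 deg, each drifting at 1 deg/s:
v = ioc_pattern_velocity([60.0, -60.0], [1.0, 1.0])
# The IOC solution is purely horizontal and faster than either component.
print(v)
```

Here the two oblique component motions cancel vertically and combine into a single horizontal pattern motion, which is the coherent percept the abstract describes.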

    Stereoscopic Depth Perception during Binocular Rivalry

    When we view nearby objects, we generate appreciably different retinal images in each eye. Despite this, the visual system can combine these different images to generate a unified view that is distinct from the perception generated from either eye alone (stereopsis). However, there are occasions when the images in the two eyes are too disparate to fuse. Instead, they alternate in perceptual dominance, with the image from one eye being completely excluded from awareness (binocular rivalry). It has been thought that binocular rivalry is the default outcome when binocular fusion is not possible. However, other studies have reported that stereopsis and binocular rivalry can coexist. The aim of this study was to address whether a monocular stimulus that is reported to be suppressed from awareness can continue to contribute to the perception of stereoscopic depth. Our results showed that stereoscopic depth perception was still evident when incompatible monocular images differing in spatial frequency, orientation, spatial phase, or direction of motion engage in binocular rivalry. These results demonstrate a range of conditions in which binocular rivalry and stereopsis can coexist.

    Orientation-sensitivity to facial features explains the Thatcher illusion

    The Thatcher illusion provides a compelling example of the perceptual cost of face inversion. The Thatcher illusion is often thought to result from a disruption to the processing of spatial relations between face features. Here, we show the limitations of this account and instead demonstrate that the effect of inversion in the Thatcher illusion is better explained by a disruption to the processing of purely local facial features. Using a matching task, we found that participants were able to discriminate normal and Thatcherized versions of the same face when they were presented in an upright orientation, but not when the images were inverted. Next, we showed that the effect of inversion was also apparent when only the eye region or only the mouth region was visible. These results demonstrate that a key component of the Thatcher illusion is to be found in orientation-specific encoding of the expressive features (eyes and mouth) of the face.

    Idiosyncratic characteristics of saccadic eye movements when viewing different visual environments

    Eye position was recorded in different viewing conditions to assess whether the temporal and spatial characteristics of saccadic eye movements in different individuals are idiosyncratic. Our aim was to determine the degree to which oculomotor control is based on endogenous factors. A total of 15 naive subjects viewed five visual environments: (1) the absence of visual stimulation (i.e. a dark room); (2) a repetitive visual environment (i.e. simple textured patterns); (3) a complex natural scene; (4) a visual search task; and (5) reading text. Although differences in visual environment had significant effects on eye movements, idiosyncrasies were also apparent. For example, the mean fixation duration and size of an individual's saccadic eye movements when passively viewing a complex natural scene covaried significantly with those same parameters in the absence of visual stimulation and in a repetitive visual environment. In contrast, an individual's spatio-temporal characteristics of eye movements during active tasks such as reading text or visual search covaried together, but did not correlate with the pattern of eye movements detected when viewing a natural scene, simple patterns or in the dark. These idiosyncratic patterns of eye movements in normal viewing reveal an endogenous influence on oculomotor control. The independent covariance of eye movements during different visual tasks shows that saccadic eye movements during active tasks like reading or visual search differ from those engaged during the passive inspection of visual scenes.

    Ambiguity in the perception of moving stimuli is resolved in favour of the cardinal axes

    The aim of this study was to determine whether there is a link between the statistical properties of natural scenes and our perception of moving surfaces. Accordingly, we devised an ambiguous moving stimulus that could be perceived as moving in one of three directions of motion. The stimulus was a circular patch containing three square-wave drifting gratings. One grating was always either horizontal or vertical; the other two had component directions of drift at 120° to the first (and to each other), producing four possible stimulus geometries. These were presented in a pseudorandom sequence. In brief presentations, subjects always perceived two of the gratings to cohere and move as a pattern in one direction, and the third grating to move independently in the opposite direction (its component direction). Although there were three equally plausible axes (one cardinal and two oblique) along which the coherent and independent motions could occur, subjects routinely saw motion along one of the cardinal axes. Thus, the visual system preferentially combines the two oblique gratings to form a pattern that drifts in the opposite direction to the cardinal grating. It was only when the contrast of one of the oblique gratings was changed that an oblique axis of motion was perceived. This perceptual anisotropy can be related to a naturally occurring bias in the visual environment, notably the predominance of horizontal and vertical contours in our visual world.

    The transient response of global-mean precipitation to increasing carbon dioxide levels

    The transient response of global-mean precipitation to an increase in atmospheric carbon dioxide levels of 1% yr⁻¹ is investigated in 13 fully coupled atmosphere-ocean general circulation models (AOGCMs) and compared to a period of stabilization. During the period of stabilization, when carbon dioxide levels are held constant at twice their unperturbed level and the climate left to warm, precipitation increases at a rate of approximately 2.4% per unit of global-mean surface-air-temperature change in the AOGCMs. However, when carbon dioxide levels are increasing, precipitation increases at a smaller rate of approximately 1.5% per unit of global-mean surface-air-temperature change. This difference can be understood by decomposing the precipitation response into an increase from the response to the global surface-temperature increase (and the climate feedbacks it induces), and a fast atmospheric response to the carbon dioxide radiative forcing that acts to decrease precipitation. According to the multi-model mean, stabilizing atmospheric levels of carbon dioxide would lead to a greater rate of precipitation change per unit of global surface-temperature change.
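
The decomposition described above can be written as ΔP ≈ k·ΔT + fast, where k is the slow, temperature-mediated hydrological sensitivity (realized during stabilization) and "fast" is the rapid, negative adjustment to CO₂ radiative forcing. A worked arithmetic sketch, taking the 2.4 and 1.5 %/K figures from the abstract and an illustrative warming value (not a model result):

```python
# Hedged sketch of the precipitation decomposition: dP = k * dT + fast.
# The sensitivities come from the abstract; dT is an assumed value for
# illustration only.
k_slow = 2.4          # % precipitation change per K during stabilization
k_apparent = 1.5      # % per K while CO2 is still ramping up

dT = 2.0              # hypothetical global-mean warming (K) during the ramp
dP_total = k_apparent * dT                 # total change implied by the ramp
fast_adjustment = dP_total - k_slow * dT   # the fast CO2 term, for this dT
print(dP_total, fast_adjustment)
```

With these numbers the fast CO₂ adjustment is negative, which is exactly why the apparent per-kelvin sensitivity is smaller while carbon dioxide is still increasing than after it stabilizes.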

    A matrix isolation and computational study of molecular palladium fluorides: does PdF₆ exist?

    Palladium atoms generated by thermal evaporation and laser ablation were reacted with and trapped in F₂/Ar, F₂/Ne, and neat F₂ matrices. The products were characterized by electronic absorption and infrared spectroscopy, together with relativistic density functional theory calculations as well as coupled cluster calculations. Vibrational modes at 540 and 617 cm⁻¹ in argon matrices were assigned to molecular PdF and PdF₂, and a band at 692 cm⁻¹ was assigned to molecular PdF₄. A band at 624 cm⁻¹ can be assigned to either PdF₃ or PdF₆, with the former preferred from experimental considerations. Although calculations might support the latter assignment, our conclusion is that in these detailed experiments there is no convincing evidence for PdF₆.

    The emergence of view-symmetric neural responses to familiar and unfamiliar faces

    Successful recognition of familiar faces is thought to depend on the ability to integrate view-dependent representations of a face into a view-invariant representation. It has been proposed that a key intermediate step in achieving view invariance is the representation of symmetrical views. However, key unresolved questions remain, such as whether these representations are specific for naturally occurring changes in viewpoint and whether view-symmetric representations exist for familiar faces. To address these issues, we compared behavioural and neural responses to natural (canonical) and unnatural (non-canonical) rotations of the face. Similarity judgements revealed that symmetrical viewpoints were perceived to be more similar than non-symmetrical viewpoints for both canonical and non-canonical rotations. Next, we measured patterns of neural response from early to higher level regions of visual cortex. Early visual areas showed a view-dependent representation for natural or canonical rotations of the face, such that the similarity between patterns of response was related to the difference in rotation. View-symmetric patterns of neural response to canonically rotated faces emerged in higher visual areas, particularly in face-selective regions. The emergence of a view-symmetric representation from a view-dependent representation for canonical rotations of the face was also evident for familiar faces, suggesting that view-symmetry is an important intermediate step in generating view-invariant representations. Finally, we measured neural responses to unnatural or non-canonical rotations of the face. View-symmetric patterns of response were also found in face-selective regions. However, in contrast to natural or canonical rotations of the face, these view-symmetric responses did not arise from an initial view-dependent representation in early visual areas. This suggests differences in the way that view-symmetrical representations emerge with canonical or non-canonical rotations. The similarity in the neural response to canonical views of familiar and unfamiliar faces in the core face network suggests that the neural correlates of familiarity emerge at later stages of processing.
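
The contrast between view-dependent and view-symmetric representations can be sketched as a comparison of model representational-dissimilarity matrices (RDMs): a view-dependent model predicts dissimilarity proportional to the angular difference between views, while a view-symmetric model treats mirror views (e.g. -45° and +45°) as equivalent. The viewpoints and the use of Pearson correlation below are illustrative assumptions, not the study's exact analysis:

```python
# Hedged sketch: comparing view-dependent and view-symmetric model RDMs.
# Viewpoints are hypothetical; this is not the study's actual design.
import numpy as np

views = np.array([-90, -45, 0, 45, 90])            # hypothetical viewpoints (deg)
dep = np.abs(views[:, None] - views[None, :])      # view-dependent model RDM
sym = np.abs(np.abs(views)[:, None] - np.abs(views)[None, :])  # view-symmetric RDM

iu = np.triu_indices(len(views), k=1)              # unique view pairs

def rdm_corr(a, b):
    """Correlate the off-diagonal entries of two dissimilarity matrices."""
    return np.corrcoef(a[iu], b[iu])[0, 1]

# A perfectly view-symmetric "neural" RDM matches the symmetric model
# exactly but matches the view-dependent model far less well:
print(rdm_corr(sym, sym), rdm_corr(sym, dep))
```

Fitting both model RDMs to measured neural RDMs in each visual area is one standard way to quantify where along the hierarchy a view-symmetric representation emerges.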

    An evaluation of how connectopic mapping reveals visual field maps in V1

    Functional gradients, in which response properties change gradually across the cortical surface, have been proposed as a key organising principle of the brain. However, the presence of these gradients remains undetermined in many brain regions. Resting-state neuroimaging studies have suggested these gradients can be reconstructed from patterns of functional connectivity. Here we investigate the accuracy of these reconstructions and establish whether it is connectivity or the functional properties within a region that determine these "connectopic maps". Different manifold learning techniques were used to recover visual field maps while participants were at rest or engaged in natural viewing. We benchmarked these reconstructions against maps measured by traditional visual field mapping. We report an initial exploratory analysis of a publicly available naturalistic imaging dataset, followed by a preregistered replication using larger resting-state and naturalistic imaging datasets from the Human Connectome Project. Connectopic mapping accurately predicted visual field maps in primary visual cortex, with better predictions for eccentricity than polar angle maps. Non-linear manifold learning methods outperformed simpler linear embeddings. We also found more accurate predictions during natural viewing compared to resting-state. Varying the source of the connectivity estimates had minimal impact on the connectopic maps, suggesting the key factor is the functional topography within a brain region. The application of these standardised methods for connectopic mapping will allow the discovery of functional gradients across the brain. PROTOCOL REGISTRATION: The stage 1 protocol for this Registered Report was accepted in principle on 19 April 2022. The protocol, as accepted by the journal, can be found at https://doi.org/10.6084/m9.figshare.19771717
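
The core idea behind connectopic mapping is that a smooth spatial gradient can be recovered from a connectivity matrix by a non-linear embedding such as a Laplacian eigenmap (one family of manifold methods the study compares). A self-contained sketch on synthetic connectivity, which stands in for real fMRI data and is an assumption for illustration only:

```python
# Hedged sketch: recovering a "connectopic map" from connectivity via a
# Laplacian eigenmap. The synthetic connectivity matrix is an assumption
# for illustration; real analyses use vertex-wise fMRI connectivity.
import numpy as np

n = 50                                   # vertices along a cortical strip
pos = np.arange(n)                       # true (hidden) gradient position
# Connectivity falls off smoothly with distance along the gradient:
W = np.exp(-(pos[:, None] - pos[None, :]) ** 2 / (2 * 5.0 ** 2))

L = np.diag(W.sum(axis=1)) - W           # unnormalised graph Laplacian
vals, vecs = np.linalg.eigh(L)
grad = vecs[:, 1]                        # Fiedler vector: first connectopic map

# The recovered gradient should vary monotonically with the true position,
# so the rank correlation with `pos` should be close to (+/-)1:
ranks = np.argsort(np.argsort(grad))
order_match = abs(np.corrcoef(ranks, pos)[0, 1])
print(order_match)
```

Benchmarking such recovered gradients against retinotopic maps from traditional visual field mapping, as the study does, then amounts to comparing `grad` with an independently measured eccentricity or polar-angle map.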