    Neural correlates of motion-induced blindness in the human brain

    Motion-induced blindness (MIB) is a visual phenomenon in which highly salient visual targets spontaneously disappear from visual awareness (and subsequently reappear) when superimposed on a moving background of distracters. Such fluctuations in awareness of the targets, although they remain physically present, provide an ideal paradigm for studying the neural correlates of visual awareness. Existing behavioral data on MIB are consistent both with a role for structures early in visual processing and with involvement of high-level visual processes. To further investigate this issue, we used high-field functional MRI to measure signals in human low-level visual cortex and motion-sensitive area V5/MT while participants reported disappearance and reappearance of an MIB target. Surprisingly, perceptual invisibility of the target was coupled to an increase in activity in low-level visual cortex and area V5/MT compared with when the target was visible. This increase was largest in retinotopic regions representing the target location. One possibility is that our findings result from an active process of completion of the field of distracters that acts locally in the visual cortex, coupled to a more global process that facilitates invisibility in visual cortex generally. Our findings show that the earliest anatomical stages of human visual cortical processing are implicated in MIB, as with other forms of bistable perception.

    Dynamics of trimming the content of face representations for categorization in the brain

    To understand visual cognition, it is imperative to determine when, how and with what information the human brain categorizes the visual input. Visual categorization consistently involves at least an early and a late stage: the occipito-temporal N170 event-related potential, related to stimulus encoding, and the parietal P300, involved in perceptual decisions. Here we sought to understand how the brain globally transforms its representations of face categories from their early encoding to the later decision stage over the 400 ms time window encompassing the N170 and P300 brain events. We applied classification image techniques to the behavioral and electroencephalographic data of three observers who categorized seven facial expressions of emotion, and we report two main findings: (1) over the 400 ms time course, processing of facial features initially spreads bilaterally across the left and right occipito-temporal regions before dynamically converging onto the centro-parietal region; (2) concurrently, information processing gradually shifts from encoding common face features across all spatial scales (e.g. the eyes) to representing only the finer scales of the diagnostic features that are richer in useful information for behavior (e.g. the wide-open eyes in 'fear'; the detailed mouth in 'happy'). Our findings suggest that the brain refines its diagnostic representations of visual categories over the first 400 ms of processing by trimming a thorough encoding of features over the N170, leaving only the detailed information important for perceptual decisions over the P300.

    Subjective signal strength distinguishes reality from imagination

    Humans are voracious imaginers, with internal simulations supporting memory, planning and decision-making. Because the neural mechanisms supporting imagery overlap with those supporting perception, a foundational question is how reality and imagination are kept apart. One possibility is that the intention to imagine is used to identify and discount self-generated signals during imagery. Alternatively, because internally generated signals are generally weaker, sensory strength may be used to index reality. Traditional psychology experiments struggle to investigate this issue because subjects can rapidly learn that real stimuli are in play. Here, we combined one-trial-per-participant psychophysics with computational modelling and neuroimaging to show that imagined and perceived signals are in fact intermixed, with judgments of reality being determined by whether this intermixed signal is strong enough to cross a reality threshold. A consequence of this account is that when virtual or imagined signals are strong enough, they become subjectively indistinguishable from reality.