233 research outputs found

    Reading the mind's eye: Decoding category information during mental imagery

    Category information for visually presented objects can be read out from multi-voxel patterns of fMRI activity in ventral–temporal cortex. What is the nature and reliability of these patterns in the absence of any bottom–up visual input, for example, during visual imagery? Here, we first ask how well category information can be decoded for imagined objects and then compare the representations evoked during imagery and actual viewing. In an fMRI study, four object categories (food, tools, faces, buildings) were either visually presented to subjects or imagined by them. Using pattern classification techniques, we could reliably decode category information (including for non-special categories, i.e., food and tools) from ventral–temporal cortex in both conditions, but only during actual viewing from retinotopic areas. Interestingly, in temporal cortex, when the classifier was trained on the viewed condition and tested on the imagery condition, or vice versa, classification performance was comparable to that within the imagery condition. The above results held even when we did not use information in the specialized category-selective areas. Thus, the patterns of representation during imagery and actual viewing are in fact surprisingly similar to each other. Consistent with this observation, the maps of "diagnostic voxels" (i.e., the classifier weights) for the perception and imagery classifiers were more similar in ventral–temporal cortex than in retinotopic cortex. These results suggest that in the absence of any bottom–up input, cortical back projections can selectively re-activate specific patterns of neural activity.

    Assessing residual reasoning ability in overtly non-communicative patients using fMRI

    It is now well established that some patients who are diagnosed as being in a vegetative state or a minimally conscious state show reliable signs of volition that may only be detected by measuring neural responses. A pertinent question is whether these patients are also capable of logical thought. Here, we validate an fMRI paradigm that can detect the neural fingerprint of reasoning processes and, moreover, can confirm whether a participant derives logical answers. We demonstrate the efficacy of this approach in a physically non-communicative patient who had been shown to engage in mental imagery in response to simple auditory instructions. Our results demonstrate that this individual retains a remarkable capacity for higher cognition, engaging in the reasoning task and deducing logical answers. We suggest that this approach is suitable for detecting residual reasoning ability using neural responses and could readily be adapted to assess other aspects of cognition.

    Decoding fMRI events in sensorimotor motor network using sparse paradigm free mapping and activation likelihood estimates

    Most functional MRI (fMRI) studies map task-driven brain activity using a block or event-related paradigm. Sparse paradigm free mapping (SPFM) can detect the onset and spatial distribution of BOLD events in the brain without prior timing information, but relating the detected events to brain function remains a challenge. In this study, we developed a decoding method for SPFM using a coordinate-based meta-analysis method of activation likelihood estimation (ALE). We defined meta-maps of statistically significant ALE values that correspond to types of events and calculated a summation overlap between the normalized meta-maps and SPFM maps. As a proof of concept, this framework was applied to relate SPFM-detected events in the sensorimotor network (SMN) to six motor functions (left/right fingers, left/right toes, swallowing, and eye blinks). We validated the framework using simultaneous electromyography (EMG)–fMRI experiments and motor tasks with short and long durations and random interstimulus intervals. The decoding scores were considerably lower for eye movements relative to the other movement types tested. The average success rates for short and long motor events were 77 ± 13% and 74 ± 16%, respectively, excluding eye movements. We found good agreement between the decoding results and EMG for most events and subjects, with sensitivity ranging between 55% and 100%, excluding eye movements. The proposed method was then used to classify the movement types of spontaneous single-trial events in the SMN during resting state, which produced an average success rate of 22 ± 12%. Finally, this article discusses methodological implications and improvements to increase the decoding performance.

    Top-down Modulations in the Visual Form Pathway Revealed with Dynamic Causal Modeling

    Perception entails interactions between activated brain visual areas and the records of previous sensations, allowing for processes like figure–ground segregation and object recognition. The aim of this study was to characterize top-down effects that originate in the visual cortex and that are involved in the generation and perception of form. We performed a functional magnetic resonance imaging experiment in which subjects viewed 3 groups of stimuli comprising oriented lines with different levels of recognizable high-order structure (none, collinearity, and meaning). Our results showed that recognizable stimuli cause larger activations in anterior visual and frontal areas. In contrast, when stimuli are random or unrecognizable, activations are greater in posterior visual areas, following a hierarchical organization whereby areas V1/V2 were less active with "collinearity" and the middle occipital cortex was less active with "meaning." An effective connectivity analysis using dynamic causal modeling showed that high-order visual form engages higher visual areas that generate top-down signals from multiple levels of the visual hierarchy. These results are consistent with a model in which, if a stimulus has recognizable attributes such as collinearity and meaning, the areas specialized for processing these attributes send top-down messages to the lower levels to facilitate more efficient encoding of visual form.

    Shifting Attention within Memory Representations Involves Early Visual Areas

    Prior studies have shown that spatial attention modulates early visual cortex retinotopically, resulting in enhanced processing of external perceptual representations. However, it is not clear whether the same visual areas are modulated when attention is focused on, and shifted within, a working memory representation. In the current fMRI study, participants were asked to memorize an array containing four stimuli. After a delay, participants were presented with a verbal cue instructing them to actively maintain the location of one of the stimuli in working memory. Additionally, on a number of trials a second verbal cue instructed participants to switch attention to the location of another stimulus within the memorized representation. Results of the study showed that changes in the BOLD pattern closely followed the locus of attention within the working memory representation. A decrease in BOLD activity (V1–V3) was observed at ROIs coding a memory location when participants switched away from this location, whereas an increase was observed when participants switched towards this location. Continuous increased activity was obtained at the memorized location when participants did not switch. This study shows that shifting attention within memory representations activates the earliest parts of visual cortex (including V1) in a retinotopic fashion. We conclude that even in the absence of visual stimulation, early visual areas support shifting of attention within memorized representations, similar to when attention is shifted in the outside world. The relationship between visual working memory and visual mental imagery is discussed in light of the current findings.

    Age and distraction are determinants of performance on a novel visual search task in aged Beagle dogs

    Aging has been shown to disrupt performance on tasks that require intact visual search and discrimination abilities in human studies. The goal of the present study was to determine if canines show age-related decline in their ability to perform a novel simultaneous visual search task. Three groups of canines were included: a young group (N = 10; 3 to 4.5 years), an old group (N = 10; 8 to 9.5 years), and a senior group (N = 8; 11 to 15.3 years). Subjects were first tested for their ability to learn a simple two-choice discrimination task, followed by the visual search task. Attentional demands in the task were manipulated by varying the number of distracter items; dogs received an equal number of trials with either zero, one, two, or three distracters. Performance on the two-choice discrimination task varied with age, with senior canines making significantly more errors than the young. Performance accuracy on the visual search task also varied with age; senior animals were significantly impaired compared to both the young and old, and old canines were intermediate in performance between young and senior. Accuracy decreased significantly with added distracters in all age groups. These results suggest that aging impairs the ability of canines to discriminate between task-relevant and -irrelevant stimuli. This likely derives from impairments in cognitive domains such as visual memory and learning and selective attention.

    Closing the Mind's Eye: Incoming Luminance Signals Disrupt Visual Imagery

    Mental imagery has been associated with many cognitive functions, both high- and low-level. Despite recent scientific advances, the contextual and environmental conditions that most affect the mechanisms of visual imagery remain unclear. It has been previously shown that the greater the level of background luminance, the weaker the effect of imagery on subsequent perception. However, in these experiments it was unclear whether the luminance was affecting imagery generation or the storage of a memory trace. Here, we report that background luminance can attenuate both mental imagery generation and imagery storage during an unrelated cognitive task. However, imagery generation was more sensitive to the degree of luminance. In addition, we show that these findings were not due to differential dark adaptation. These results suggest that afferent visual signals can interfere with both the formation and priming-memory effects associated with visual imagery. It follows that background luminance may be a valuable tool for investigating imagery and its role in various cognitive and sensory processes.

    Attentional modulations of the early and later stages of the neural processing of visual completion

    The brain effortlessly recognizes objects even when the visual information belonging to an object is widely separated, as is well demonstrated by Kanizsa-type illusory contours (ICs), in which a contour is perceived despite the fragments of the contour being separated by gaps. Such large-range visual completion has long been thought to be preattentive, whereas its dependence on top-down influences remains unclear. Here, we report separate modulations by spatial attention and task relevance of the neural activities in response to the ICs. IC-sensitive event-related potentials that were localized to the lateral occipital cortex were modulated by spatial attention at an early processing stage (130–166 ms after stimulus onset) and modulated by task relevance at a later processing stage (234–290 ms). These results not only demonstrate top-down attentional influences on the neural processing of ICs but also elucidate the characteristics of the attentional modulations that occur in different phases of IC processing.

    Auditory Selective Attention to Speech Modulates Activity in the Visual Word Form Area

    Selective attention to speech versus nonspeech signals in complex auditory input could produce top-down modulation of cortical regions previously linked to perception of spoken, and even visual, words. To isolate such top-down attentional effects, we contrasted 2 equally challenging active listening tasks, performed on the same complex auditory stimuli (words overlaid with a series of 3 tones). Instructions required selectively attending to either the speech signals (in service of rhyme judgment) or the melodic signals (tone-triplet matching). Selective attention to speech, relative to attention to melody, was associated with blood oxygenation level-dependent (BOLD) increases during functional magnetic resonance imaging (fMRI) in left inferior frontal gyrus, temporal regions, and the visual word form area (VWFA). Further investigation of the activity in visual regions revealed overall deactivation relative to baseline rest for both attention conditions. Topographic analysis demonstrated that while attending to melody drove deactivation equivalently across all fusiform regions of interest examined, attending to speech produced a regionally specific modulation: deactivation of all fusiform regions except the VWFA. Results indicate that selective attention to speech can topographically tune extrastriate cortex, leading to increased activity in the VWFA relative to surrounding regions, in line with the well-established connectivity between areas related to spoken and visual word perception in skilled readers.

    The Sensory Consequences of Speaking: Parametric Neural Cancellation during Speech in Auditory Cortex

    When we speak, we provide ourselves with auditory speech input. Efficient monitoring of speech is often hypothesized to depend on matching the predicted sensory consequences from internal motor commands (forward model) with actual sensory feedback. In this paper we tested the forward model hypothesis using functional magnetic resonance imaging. We administered an overt picture naming task in which we parametrically reduced the quality of verbal feedback by noise masking. Presentation of the same auditory input in the absence of overt speech served as a listening control condition. Our results suggest that a match between predicted and actual sensory feedback results in inhibition or cancellation of auditory activity, because speaking with normal unmasked feedback reduced activity in the auditory cortex compared to the listening control conditions. Moreover, during self-generated speech, activation in auditory cortex increased as the feedback quality of the self-generated speech decreased. We conclude that during speaking, early auditory cortex is involved in matching external signals with an internally generated model or prediction of sensory consequences, the locus of which may reside in auditory or higher-order brain areas. Matching at early auditory cortex may provide a very sensitive monitoring mechanism that highlights speech production errors at very early levels of processing and may efficiently determine the self-agency of speech input.