The Language of Dreams: Application of Linguistics-Based Approaches for the Automated Analysis of Dream Experiences
The study of dreams represents a crucial intersection between philosophical, psychological, neuroscientific, and clinical interests. Importantly, one of the main sources of insight into dreaming activity is the (oral or written) report provided by dreamers upon awakening from their sleep. Classically, two main types of information are extracted from dream reports: structural information and semantic, content-related information. Extracted structural information is typically limited to a simple count of the words or sentences in a report. Content analysis, in turn, usually relies on quantitative scores assigned by two or more (blind) human operators using predefined coding systems. In this review, we show that methods borrowed from the field of linguistic analysis, such as graph analysis, dictionary-based content analysis, and distributional semantics approaches, can complement and, in many cases, replace classical measures and scales for the quantitative structural and semantic assessment of dream reports. Importantly, these methods allow the direct (operator-independent) extraction of quantitative information from language data, enabling a fully objective and reproducible analysis of conscious experiences occurring during human sleep. Moreover, these approaches can be partially or fully automated and may thus be easily applied to the analysis of large datasets.
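As a minimal sketch of the operator-independent measures named above (structural counts, graph analysis, and dictionary-based content analysis), consider the following Python fragment; the report text and the emotion word list are purely illustrative, and the graph measure shown (size of the largest strongly connected component over a word-adjacency graph) is one of several connectivity indices used in speech-graph approaches.

import re
import networkx as nx  # pip install networkx

report = "I was flying over a dark city. The city lights scared me."

# Structural measures: simple token and sentence counts.
tokens = re.findall(r"[a-z']+", report.lower())
n_sentences = len(re.findall(r"[.!?]+", report))

# Graph analysis: nodes are word types, edges link consecutive words;
# connectivity of the resulting graph indexes report structure.
g = nx.DiGraph()
g.add_edges_from(zip(tokens, tokens[1:]))
largest_scc = max(nx.strongly_connected_components(g), key=len)

# Dictionary-based content analysis: share of tokens matching a
# (hypothetical) emotion lexicon.
emotion_lexicon = {"scared", "happy", "afraid", "angry"}
emotion_share = sum(t in emotion_lexicon for t in tokens) / len(tokens)

print(f"words={len(tokens)} sentences={n_sentences} "
      f"largest_scc={len(largest_scc)} emotion_share={emotion_share:.2f}")

All three quantities are computed directly from the text, with no human rater in the loop, which is what makes such pipelines reproducible and scalable to large report collections.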
Foreground Enhancement and Background Suppression in Human Early Visual System During Passive Perception of Natural Images
One of the major challenges in visual neuroscience is foreground-background segmentation, a process thought to rely on computations in cortical modules as information progresses from V1 to V4. Data from nonhuman primates (Poort et al., 2016) showed that segmentation involves two distinct but associated processes: the enhancement of cortical activity associated with figure processing (i.e., foreground enhancement) and the suppression of ground-related cortical activity (i.e., background suppression). To characterize foreground-background segmentation of natural stimuli in humans, we parametrically modulated low-level properties of 334 images and of their behaviorally segmented counterparts. A model based on simple visual features was then adopted to describe the filtered and intact images and to evaluate their correspondence with fMRI activity in different visual areas (V1, V2, V3, V3A, V3B, V4, LOC). Results from representational similarity analysis (Kriegeskorte et al., 2008) showed that the correspondence between behaviorally segmented natural images and brain activity increases along the visual processing stream. We found evidence of foreground enhancement in all the tested visual regions, whereas background suppression occurred in V3B, V4 and LOC. Our results suggest that foreground-background segmentation is an automatic process that occurs during natural viewing and cannot be merely ascribed to differences in object size or location. Finally, neural images reconstructed from V4 and LOC fMRI activity revealed a preserved spatial resolution of foreground textures, indicating a richer representation of the salient part of natural images rather than a simplistic model of object shape.
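The core of representational similarity analysis is the comparison of two representational dissimilarity matrices (RDMs), one from a feature model and one from measured brain activity. A minimal sketch follows; the random arrays stand in for image features and per-image voxel responses (e.g., from V4), and the dimensions are illustrative.

import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_images, n_features, n_voxels = 334, 20, 500

model_features = rng.normal(size=(n_images, n_features))  # e.g., low-level visual features
brain_patterns = rng.normal(size=(n_images, n_voxels))    # e.g., V4 responses per image

# RDMs: correlation distance (1 - Pearson r) between all image pairs,
# returned in condensed (upper-triangle) form.
model_rdm = pdist(model_features, metric="correlation")
neural_rdm = pdist(brain_patterns, metric="correlation")

# RSA statistic: rank correlation between the two condensed RDMs.
rho, p = spearmanr(model_rdm, neural_rdm)
print(f"model-brain RDM correlation: rho={rho:.3f}, p={p:.3g}")

Repeating this comparison per region (V1 through LOC) yields the kind of region-by-region correspondence profile reported above.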
Grammatical classes in the brain: MVPA reveals the cortical signature of verbs, adjectives and nouns
This study identified selective functional brain correlates of the distinct grammatical categories of verb, adjective and noun within a left-lateralized language network. These results provide a robust indication of the neural underpinnings of nouns and the first evidence on the representation of adjectives as a grammatical category, thus also contributing to the study of conceptual combination processes involving noun+adjective combinations, associated with the left anterolateral temporal lobe. Moreover, these data confirm the most consistent neuroanatomical findings from previous studies on verb selectivity and provide new evidence on how grammatical category-specific information is represented in the brain when stimuli are controlled for crucial semantic features of verbs, as opposed to other word classes, and the effects of familiarity, imageability and concreteness are ruled out. In summary, this study expands current knowledge of how grammatical categories are captured in the brain by assessing the role of language-sensitive regions in representing word classes and by identifying the kinds of distinction that drive neural selectivity.
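The MVPA logic underlying such findings can be sketched as a cross-validated decoding problem: can a classifier tell noun, verb and adjective trials apart from voxel patterns alone? The snippet below is a generic illustration with random placeholder data, not the study's actual pipeline; trial counts, ROI size, and the choice of a linear SVM are assumptions.

import numpy as np
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials_per_class, n_voxels = 60, 300
X = rng.normal(size=(3 * n_trials_per_class, n_voxels))  # voxel patterns per trial
y = np.repeat(["noun", "verb", "adjective"], n_trials_per_class)

# Linear classifier with feature standardization, scored by
# stratified 5-fold cross-validation (chance level = 1/3).
clf = make_pipeline(StandardScaler(), LinearSVC())
scores = cross_val_score(clf, X, y, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.33)")

Above-chance accuracy within a region is then taken as evidence that the region carries grammatical-category information, provided confounds such as familiarity, imageability and concreteness are matched across classes, as emphasized above.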
How concepts are encoded in the human brain: A modality independent, category-based cortical organization of semantic knowledge
How conceptual knowledge is represented in the human brain remains to be determined. To address the differential role of low-level sensory-based and high-level abstract features in semantic processing, we combined behavioral measures of linguistic production with brain activity measured by functional magnetic resonance imaging in sighted and congenitally blind individuals while they performed a property-generation task with concrete nouns from eight categories, presented through visual and/or auditory modalities. Patterns of neural activity within a large semantic cortical network comprising parahippocampal, lateral occipital, temporo-parieto-occipital and inferior parietal cortices correlated with linguistic production and were independent of both the modality of stimulus presentation (visual or auditory) and the (lack of) visual experience. In contrast, selected modality-dependent differences were observed only when the analysis was limited to individual regions within the semantic cortical network. We conclude that conceptual knowledge in the human brain relies on a distributed, modality-independent cortical representation that integrates the partial category- and modality-specific information retained at a regional level.
Functional and spatial segregation within the inferior frontal and superior temporal cortices during listening, articulation imagery, and production of vowels
Classical models of language localize speech perception in the left superior temporal cortex and speech production in the inferior frontal cortex. Nonetheless, neuropsychological, structural and functional studies have questioned this subdivision, suggesting an interwoven organization of the speech function within these cortices. We tested whether sub-regions within frontal and temporal speech-related areas retain specific phonological representations during both perception and production. Using functional magnetic resonance imaging and multivoxel pattern analysis, we showed functional and spatial segregation across the left fronto-temporal cortex during listening, imagery and production of vowels. In accordance with classical models of language and evidence from functional studies, the inferior frontal and superior temporal cortices discriminated among perceived and produced vowels, respectively, while also engaging in the non-classical, alternative function (i.e., perception in the inferior frontal and production in the superior temporal cortex). Crucially, though, contiguous and non-overlapping sub-regions within these hubs performed either the classical or the non-classical function, the latter also representing non-linguistic sounds (i.e., pure tones). Extending previous results and in line with integration theories, our findings not only demonstrate that sensitivity to speech listening exists in production-related regions and vice versa, but also suggest that the nature of this interwoven organization is built upon low-level perception.
Modality-independent encoding of individual concepts in the left parietal cortex
The organization of semantic information in the brain has been mainly explored through category-based models, on the assumption that categories broadly reflect the organization of conceptual knowledge. However, the analysis of concepts as individual entities, rather than as items belonging to distinct superordinate categories, may represent a significant advance in our understanding of how conceptual knowledge is encoded in the human brain. Here, we studied the individual representation of thirty concrete nouns from six different categories, across different sensory modalities (i.e., auditory and visual) and groups (i.e., sighted and congenitally blind individuals), in a core hub of the semantic network, the left angular gyrus, and in its neighboring regions within the lateral parietal cortex. Four models based on either perceptual or semantic features at different levels of complexity (i.e., low- or high-level) were used to predict fMRI brain activity using representational similarity encoding analysis. When controlling for the superordinate component, high-level models based on semantic and shape information led to significant encoding accuracies in the intraparietal sulcus only. This region is involved in feature binding and in the combination of concepts across multiple sensory modalities, suggesting a role in the high-level representation of conceptual knowledge. Moreover, when the information regarding superordinate categories is retained, a large extent of parietal cortex is engaged. This result indicates the need to control for coarse-level categorical organization when studying higher-level processes related to the retrieval of semantic information.
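An encoding analysis of this kind predicts voxel responses to individual concepts from a feature model and scores the predictions on held-out concepts. The sketch below uses plain ridge regression as a generic stand-in (the study itself used representational similarity encoding), with random arrays in place of the semantic features and regional fMRI activity; all dimensions are illustrative.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n_concepts, n_features, n_voxels = 30, 15, 200
features = rng.normal(size=(n_concepts, n_features))  # e.g., high-level semantic model
activity = rng.normal(size=(n_concepts, n_voxels))    # e.g., intraparietal sulcus patterns

accs = []
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(features):
    model = Ridge(alpha=1.0).fit(features[train], activity[train])
    pred = model.predict(features[test])
    # Encoding accuracy: mean correlation between the predicted and
    # observed activity patterns of each held-out concept.
    r = [np.corrcoef(pred[i], activity[test][i])[0, 1] for i in range(len(test))]
    accs.append(np.mean(r))
print(f"mean encoding accuracy: {np.mean(accs):.3f}")

Controlling for the superordinate component, as described above, would amount to removing (or regressing out) the category-level structure from the feature model before fitting, so that any remaining accuracy reflects concept-specific information.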
Beyond motor scheme: a supramodal distributed representation in the action-observation network
The representation of actions within the action-observation network is thought to rely on a distributed functional organization. Furthermore, recent findings indicate that the action-observation network encodes not merely the observed motor act, but rather a representation that is independent of a specific sensory modality or sensory experience. In the present study, we wished to determine to what extent this distributed and 'more abstract' representation of action is truly supramodal, i.e., whether it shares a common code across sensory modalities. To this aim, a pattern recognition approach was employed to analyze neural responses in sighted and congenitally blind subjects during visual and/or auditory presentation of hand actions. Multivoxel pattern analysis (MVPA)-based classifiers discriminated action from non-action stimuli across sensory conditions (visual and auditory) and experimental groups (blind and sighted). Moreover, these classifiers labeled the patterns of neural responses evoked during actual motor execution as 'action'. Interestingly, discriminative information for the action/non-action classification was located in a bilateral, but left-prevalent, network that strongly overlaps with brain regions known to form the action-observation network and the human mirror system. The ability to identify action features with an MVPA-based classifier in both sighted and blind individuals, independently of the sensory modality conveying the stimuli, clearly supports the hypothesis of a supramodal, distributed functional representation of actions, mainly within the action-observation network.
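The decisive test for a supramodal code is cross-modal generalization: a classifier trained on one modality must classify trials from the other above chance. A minimal sketch of this logic, with random placeholder data standing in for voxel patterns from action-observation regions and assumed trial counts, might look as follows.

import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_per_cell, n_voxels = 40, 250
X_vis = rng.normal(size=(2 * n_per_cell, n_voxels))  # visual trials
X_aud = rng.normal(size=(2 * n_per_cell, n_voxels))  # auditory trials
y = np.repeat(["action", "non-action"], n_per_cell)  # same label layout per modality

# Train on visual trials, test on auditory trials: above-chance transfer
# accuracy indicates a shared (supramodal) neural code.
clf = make_pipeline(StandardScaler(), LinearSVC()).fit(X_vis, y)
print(f"visual -> auditory transfer accuracy: {clf.score(X_aud, y):.2f} (chance = 0.5)")

The same transfer logic extends to groups (train on sighted, test on blind) and to execution trials, mirroring the cross-condition classifications reported above.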