    Reading visually embodied meaning from the brain: Visually grounded computational models decode visual-object mental imagery induced by written text

    Embodiment theory predicts that mental imagery of object words recruits neural circuits involved in object perception. The degree of visual imagery present in routine thought, and how it is encoded in the brain, is largely unknown. We test whether fMRI activity patterns elicited by participants reading objects' names include embodied visual-object representations, and whether we can decode these representations using novel computational image-based semantic models. We first apply the image models in conjunction with text-based semantic models to test predictions of the visual specificity of semantic representations in different brain regions. Representational similarity analysis confirms that fMRI structure within ventral-temporal and lateral-occipital regions correlates most strongly with the image models; conversely, text models correlate better with posterior-parietal, lateral-temporal, and inferior-frontal regions. We use an unsupervised decoding algorithm that exploits commonalities in the representational similarity structure found within both the image-model and brain data sets to classify embodied visual representations with high accuracy (8/10), and then extend it to exploit model combinations to robustly decode different brain regions in parallel. By capturing latent visual-semantic structure, our models provide a route into analyzing neural representations derived from past perceptual experience rather than stimulus-driven brain activity. Our results also verify the benefit of combining multimodal data to model human-like semantic representations.
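
    A minimal sketch of the representational similarity analysis described above, assuming hypothetical brain_patterns and model_features arrays (one row per object word); the paper's actual models, preprocessing, and region selection are not reproduced here:

        import numpy as np
        from scipy.spatial.distance import pdist
        from scipy.stats import spearmanr

        # Hypothetical inputs: one row per object word.
        brain_patterns = np.random.rand(10, 500)   # e.g. 10 words x 500 voxels in a region
        model_features = np.random.rand(10, 300)   # e.g. 10 words x 300 image-model dimensions

        # First-order structure: representational dissimilarity matrices
        # (condensed upper triangles of pairwise correlation distances).
        brain_rdm = pdist(brain_patterns, metric="correlation")
        model_rdm = pdist(model_features, metric="correlation")

        # Second-order comparison: rank-correlate the two RDMs.
        rho, p = spearmanr(brain_rdm, model_rdm)
        print(f"RSA correlation: rho={rho:.3f}, p={p:.3g}")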

    Part-of-Speech Classification from Magnetoencephalography Data Using 1-Dimensional Convolutional Neural Network

    Neural decoding of speech and language refers to the extraction of information about the stimulus and the mental state of subjects from recordings of their brain activity while they perform linguistic tasks. Recent years have seen significant progress in the decoding of speech from cortical activity. This study instead focuses on decoding linguistic information. We present a deep parallel temporal convolutional neural network (1DCNN) trained on part-of-speech (PoS) classification from magnetoencephalography (MEG) data collected during natural language reading. The network is trained separately on data from each of 15 human subjects and yields above-chance accuracies on test data for all of them. PoS was targeted because it offers a clean linguistic benchmark that represents syntactic information and abstracts away from semantic or conceptual representations.
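
    The abstract does not specify the architecture in detail; the following is a minimal sketch of a temporal 1-D CNN over word-locked MEG epochs in PyTorch, with the channel count, epoch length, and tag-set size chosen purely for illustration:

        import torch
        import torch.nn as nn

        # Hypothetical values: 204 MEG sensors, 200 time samples per word epoch,
        # 8 PoS classes; the paper's actual data and architecture differ.
        N_CHANNELS, N_TIMES, N_CLASSES = 204, 200, 8

        class PoS1DCNN(nn.Module):
            """Temporal 1-D CNN: convolves over time, treating sensors as input channels."""
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv1d(N_CHANNELS, 64, kernel_size=7, padding=3),
                    nn.ReLU(),
                    nn.MaxPool1d(2),
                    nn.Conv1d(64, 32, kernel_size=5, padding=2),
                    nn.ReLU(),
                    nn.AdaptiveAvgPool1d(1),  # collapse the time axis
                )
                self.classifier = nn.Linear(32, N_CLASSES)

            def forward(self, x):  # x: (batch, channels, time)
                return self.classifier(self.features(x).squeeze(-1))

        model = PoS1DCNN()
        epochs = torch.randn(16, N_CHANNELS, N_TIMES)  # toy batch of word-locked epochs
        logits = model(epochs)  # (16, N_CLASSES); train with cross-entropy against PoS tags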

    Dependency Parsing with your Eyes: Dependency Structure Predicts Eye Regressions During Reading

    Backward saccades during reading have been hypothesized to be involved in structural reanalysis, or to be related to the level of text difficulty. We test the hypothesis that backward saccades are involved in online syntactic analysis. If this is the case, we expect that the saccades will coincide, at least partially, with the edges of the relations computed by a dependency parser. To test this, we analyzed a large eye-tracking dataset collected while 102 participants read three short narrative texts. Our results show a relation between backward saccades and the syntactic structure of sentences.
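
    A minimal sketch of the core comparison, assuming a hypothetical list of regressions given as (launch word index, landing word index) pairs, and using spaCy's dependency parser in place of whichever parser the authors used:

        import spacy  # model install: python -m spacy download en_core_web_sm

        nlp = spacy.load("en_core_web_sm")
        doc = nlp("The horse raced past the barn fell.")

        # Dependency arcs as unordered pairs of word indices.
        arcs = {frozenset((tok.i, tok.head.i)) for tok in doc if tok.i != tok.head.i}

        # Hypothetical eye-tracking regressions, not real data:
        regressions = [(6, 2), (5, 4)]

        for launch, land in regressions:
            hit = frozenset((launch, land)) in arcs
            print(f"regression {doc[launch].text} -> {doc[land].text}: "
                  f"{'coincides with a' if hit else 'no'} dependency edge")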

    Using stochastic language models (SLM) to map lexical, syntactic, and phonological information processing in the brain.

    Language comprehension involves the simultaneous processing of information at the phonological, syntactic, and lexical levels. We track these three distinct streams of information in the brain by using stochastic measures derived from computational language models to detect neural correlates of phoneme, part-of-speech, and word processing in an fMRI experiment. Probabilistic language models have proven to be useful tools for studying how language is processed as a sequence of symbols unfolding in time. Conditional probabilities between sequences of words are the basis of probabilistic measures such as surprisal and perplexity, which have been successfully used as predictors of several behavioural and neural correlates of sentence processing. Here we compute perplexity from sequences of words, from their parts of speech, and from their phonemic transcriptions. Brain activity time-locked to each word is regressed on the three model-derived measures. We observe that the brain keeps track of the statistical structure of lexical, syntactic, and phonological information in distinct areas.
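
    A toy illustration of the model-derived measures, using an add-one-smoothed bigram model over words; the study's stochastic language models and their training corpora are not reproduced here, and the same computation would apply to PoS-tag and phoneme sequences:

        import math
        from collections import Counter

        # Toy corpus for illustration only.
        corpus = "the dog chased the cat the cat ran".split()
        unigrams = Counter(corpus)
        bigrams = Counter(zip(corpus, corpus[1:]))

        def surprisal(prev, word):
            """-log2 P(word | prev) with add-one smoothing (illustrative only)."""
            v = len(unigrams)
            p = (bigrams[(prev, word)] + 1) / (unigrams[prev] + v)
            return -math.log2(p)

        sent = "the cat chased the dog".split()
        s = [surprisal(p, w) for p, w in zip(sent, sent[1:])]
        print("per-word surprisal (bits):", [f"{x:.2f}" for x in s])
        # Perplexity is 2 raised to the mean surprisal over the sequence.
        print("perplexity:", 2 ** (sum(s) / len(s)))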

    NBD (Narrative Brain Dataset)

    NBD is an fMRI dataset collected during the spoken presentation of short excerpts of three stories in Dutch. Written versions of the stimuli are annotated with part-of-speech (PoS) tags and phonemic transcriptions. In addition, the texts are accompanied by stochastic (perplexity and entropy) and semantic computational linguistic measures. The richness and unconstrained nature of the data allow the study of language processing in the brain in a more naturalistic setting than is common for fMRI studies.