
    Decoding Visual Percepts Induced by Word Reading with fMRI

    Word reading involves multiple cognitive processes. To infer which word is being visualized, the brain first processes the visual percept, deciphers the letters and bigrams, and activates candidate words based on context or prior expectations such as word frequency. In this contribution, we use supervised machine learning techniques to decode the first step of this processing stream using functional Magnetic Resonance Images (fMRI). We build a decoder that predicts the visual percept formed by four-letter words, allowing us to identify words that were not present in the training data. To do so, we cast the learning problem as multiple classification problems after describing words with multiple binary attributes. This work goes beyond the identification or reconstruction of single letters or simple geometrical shapes and addresses a challenging estimation problem: the prediction of multiple variables from a single observation, hence the problem of learning multiple predictors from correlated inputs.
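
    A minimal sketch of this multi-attribute decoding scheme, on synthetic stand-in data (the attribute coding, the estimator choice, and all array names are illustrative assumptions, not the authors' pipeline):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier

# Hypothetical stand-in data: one fMRI pattern (voxels) per word presentation,
# and a binary attribute code per word (e.g., letter-at-position indicators).
rng = np.random.default_rng(0)
X_train = rng.standard_normal((80, 500))          # trials x voxels
Y_train = rng.integers(0, 2, size=(80, 12))       # trials x binary attributes
X_test = rng.standard_normal((5, 500))            # trials for unseen words

# One binary classifier per attribute, all fit on the same correlated voxels.
decoder = MultiOutputClassifier(LogisticRegression(max_iter=1000))
decoder.fit(X_train, Y_train)
Y_pred = decoder.predict(X_test)                  # predicted attribute codes

# An unseen word can then be identified as the vocabulary entry whose
# attribute code has the smallest Hamming distance to the prediction.
vocab_codes = rng.integers(0, 2, size=(50, 12))   # codes for a toy 50-word vocabulary
best = np.argmin(np.abs(vocab_codes[None, :, :] - Y_pred[:, None, :]).sum(-1), axis=1)
print(best)
```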

    HRF estimation improves sensitivity of fMRI encoding and decoding models

    Extracting activation patterns from functional Magnetic Resonance Images (fMRI) datasets remains challenging in rapid-event designs due to the inherent delay of the blood oxygen level-dependent (BOLD) signal. The general linear model (GLM) makes it possible to estimate the activation from a design matrix and a fixed hemodynamic response function (HRF). However, the HRF is known to vary substantially between subjects and brain regions. In this paper, we propose a model for jointly estimating the hemodynamic response function (HRF) and the activation patterns via a low-rank representation of task effects. This model is based on the linearity assumption behind the GLM and can be computed using standard gradient-based solvers. We use the activation patterns computed by our model as input data for encoding and decoding studies and report performance improvement in both settings. (3rd International Workshop on Pattern Recognition in NeuroImaging, 2013.)
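
    The shared-HRF, rank-one idea can be illustrated with a toy alternating-least-squares fit on synthetic data; everything below (names, sizes, and the ALS solver itself) is an assumption for illustration rather than the paper's gradient-based implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_scans, n_cond, hrf_len = 200, 4, 20

# Lagged event matrices: D[t, l, k] is 1 if an event of condition k occurred
# l scans before time t, so that D[:, :, k] @ h is condition k's regressor.
events = rng.binomial(1, 0.05, size=(n_scans, n_cond))
D = np.zeros((n_scans, hrf_len, n_cond))
for k in range(n_cond):
    for lag in range(hrf_len):
        D[lag:, lag, k] = events[:n_scans - lag, k]

h_true = np.exp(-0.5 * ((np.arange(hrf_len) - 5.0) / 2.0) ** 2)   # toy HRF
beta_true = rng.standard_normal(n_cond)                            # activation pattern
y = np.einsum('tlk,l,k->t', D, h_true, beta_true) + 0.1 * rng.standard_normal(n_scans)

# The model y ~ D x (h outer beta) is linear in h for fixed beta and vice versa,
# so alternating least squares recovers both up to a shared scale.
h, beta = np.ones(hrf_len), np.ones(n_cond)
for _ in range(50):
    A_h = np.einsum('tlk,k->tl', D, beta)
    h, *_ = np.linalg.lstsq(A_h, y, rcond=None)
    h /= max(np.linalg.norm(h), 1e-12)            # fix the h/beta scale ambiguity
    A_beta = np.einsum('tlk,l->tk', D, h)
    beta, *_ = np.linalg.lstsq(A_beta, y, rcond=None)

print(np.corrcoef(h, h_true)[0, 1])               # recovered HRF vs. ground truth
```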

    The Compositional Nature of Verb and Argument Representations in the Human Brain

    How does the human brain represent simple compositions of objects, actors, and actions? We had subjects view action-sequence videos during neuroimaging (fMRI) sessions and identified lexical descriptions of those videos by decoding (SVM) the brain representations based only on their fMRI activation patterns. As a precursor to this result, we had demonstrated that we could reliably and with high probability decode action labels corresponding to one of six action videos (dig, walk, etc.), again while subjects viewed the action sequence during scanning (fMRI). This result was replicated at two different brain imaging sites with common protocols but different subjects, showing common brain areas, including areas known for episodic memory (PHG, MTL, high-level visual pathways, etc., i.e. the 'what' and 'where' systems, and TPJ, i.e. 'theory of mind'). Given these results, we were also able to successfully show a key aspect of language compositionality based on simultaneous decoding of object class and actor identity. Finally, combining these novel steps in 'brain reading' allowed us to accurately estimate brain representations supporting compositional decoding of a complex event composed of an actor, a verb, a direction, and an object.
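
    A typical SVM action-label decoder of the kind described here can be sketched with scikit-learn on synthetic stand-in data (the labels, sizes, and cross-validation setup are assumptions, not the study's protocol):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score, StratifiedKFold

# Hypothetical stand-in data: one activation pattern per video trial
# (trials x voxels) and one of six action labels ('dig', 'walk', ...).
rng = np.random.default_rng(0)
X = rng.standard_normal((120, 2000))            # trials x voxels
y = rng.integers(0, 6, size=120)                # six action classes

# Linear SVM decoder with voxel-wise standardization, evaluated by
# stratified cross-validation, as is common in MVPA studies of this kind.
decoder = make_pipeline(StandardScaler(), SVC(kernel='linear', C=1.0))
scores = cross_val_score(decoder, X, y, cv=StratifiedKFold(n_splits=5))
print(scores.mean())                            # chance is ~1/6 for six balanced classes
```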

    Lexical and audiovisual bases of perceptual adaptation in speech


    Reading visually embodied meaning from the brain: Visually grounded computational models decode visual-object mental imagery induced by written text

    Embodiment theory predicts that mental imagery of object words recruits neural circuits involved in object perception. The degree of visual imagery present in routine thought and how it is encoded in the brain are largely unknown. We test whether fMRI activity patterns elicited by participants reading objects' names include embodied visual-object representations, and whether we can decode the representations using novel computational image-based semantic models. We first apply the image models in conjunction with text-based semantic models to test predictions of visual specificity of semantic representations in different brain regions. Representational similarity analysis confirms that fMRI structure within ventral-temporal and lateral-occipital regions correlates most strongly with the image models and, conversely, text models correlate better with posterior-parietal/lateral-temporal/inferior-frontal regions. We use an unsupervised decoding algorithm that exploits commonalities in representational similarity structure found within both image model and brain data sets to classify embodied visual representations with high accuracy (8/10) and then extend it to exploit model combinations to robustly decode different brain regions in parallel. By capturing latent visual-semantic structure, our models provide a route into analyzing neural representations derived from past perceptual experience rather than stimulus-driven brain activity. Our results also verify the benefit of combining multimodal data to model human-like semantic representations.
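
    The representational similarity analysis step can be sketched as follows, with synthetic stand-in data; the distance metrics and array names are illustrative assumptions, not the authors' exact pipeline:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Hypothetical stand-in data: one fMRI pattern per object word in a region of
# interest, plus feature vectors for the same objects from an image-based and
# a text-based semantic model.
rng = np.random.default_rng(0)
n_objects = 10
brain = rng.standard_normal((n_objects, 300))        # objects x voxels
image_model = rng.standard_normal((n_objects, 128))  # objects x image features
text_model = rng.standard_normal((n_objects, 100))   # objects x text features

def rdm(patterns):
    """Representational dissimilarity matrix as a condensed vector (1 - correlation)."""
    return pdist(patterns, metric='correlation')

# Correlate the brain RDM with each model RDM; in RSA terms, the model whose
# similarity structure better matches the region is the one that explains it.
rho_image, _ = spearmanr(rdm(brain), rdm(image_model))
rho_text, _ = spearmanr(rdm(brain), rdm(text_model))
print(rho_image, rho_text)
```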

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task where there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4) = 2.565, p = 0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it lends further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.

    Decoding Brain Activity Associated with Literal and Metaphoric Sentence Comprehension Using Distributional Semantic Models

    Recent years have seen a growing interest within the natural language processing (NLP) community in evaluating the ability of semantic models to capture human meaning representation in the brain. Existing research has mainly focused on applying semantic models to decode brain activity patterns associated with the meaning of individual words, and, more recently, this approach has been extended to sentences and larger text fragments. Our work is the first to investigate metaphor processing in the brain in this context. We evaluate a range of semantic models (word embeddings, compositional, and visual models) in their ability to decode brain activity associated with reading of both literal and metaphoric sentences. Our results suggest that compositional models and word embeddings are able to capture differences in the processing of literal and metaphoric sentences, providing support for the idea that the literal meaning is not fully accessible during familiar metaphor comprehension.
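
    A common way to implement this kind of embedding-based decoding is a linear map from fMRI patterns to semantic vectors followed by nearest-neighbour matching; the sketch below uses synthetic stand-in data, and all names and parameters are assumptions rather than the paper's setup:

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical stand-in data: one fMRI pattern per sentence and one
# distributional embedding per sentence.
rng = np.random.default_rng(0)
X = rng.standard_normal((60, 1500))   # sentences x voxels
E = rng.standard_normal((60, 300))    # sentences x embedding dimensions

# Learn a ridge map from brain space to embedding space on training sentences,
# then identify each held-out sentence by cosine similarity in embedding space.
train, test = np.arange(50), np.arange(50, 60)
mapper = Ridge(alpha=10.0).fit(X[train], E[train])
E_pred = mapper.predict(X[test])

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

for i, pred in enumerate(E_pred):
    sims = [cosine(pred, E[j]) for j in test]
    print('sentence', test[i], 'decoded as', test[int(np.argmax(sims))])
```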

    The Constructive Nature of Color Vision and Its Neural Basis

    Our visual world is made up of colored surfaces. The color of a surface is physically determined by its reflectance, i.e., how much energy it reflects as a function of wavelength. Reflected light, however, provides only ambiguous information about the color of a surface, as it depends on the spectral properties of both the surface and the illumination. Despite the confounding effects of illumination on the reflected light, the visual system is remarkably good at inferring the reflectance of a surface, enabling observers to perceive surface colors as stable across illumination changes. This capacity of the visual system is called color constancy, and it highlights that color vision is a constructive process. The research presented here investigates the neural basis of some of the most relevant aspects of the constructive nature of human color vision using machine learning algorithms and functional neuroimaging. The experiments demonstrate that color-related prior knowledge influences neural signals already in the earliest cortical area of visual processing, area V1, whereas during object imagery, the color of the imagined objects shared neural representations with perceived color in human V4. A direct test of illumination-invariant surface color representation showed that neural coding in V1, as well as in a region anterior to human V4, was robust against illumination changes. In sum, the present research shows how different aspects of the constructive nature of color vision map to different regions in the ventral visual pathway.
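
    One standard way to test whether imagined and perceived color share a representation is cross-condition decoding; the sketch below, on synthetic stand-in data, is an illustrative assumption of such an analysis rather than the study's actual method:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical stand-in data: a classifier trained to discriminate perceived
# colors from region-of-interest patterns is tested on imagery trials;
# above-chance transfer is the usual evidence for a shared representation.
rng = np.random.default_rng(0)
X_perceived = rng.standard_normal((80, 400))   # trials x voxels, real color stimuli
y_perceived = rng.integers(0, 4, size=80)      # four color categories
X_imagery = rng.standard_normal((40, 400))     # imagery trials
y_imagery = rng.integers(0, 4, size=40)

clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=5000))
clf.fit(X_perceived, y_perceived)
print('cross-decoding accuracy:', clf.score(X_imagery, y_imagery))  # chance ~= 0.25
```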