
    Cortico-hippocampal activations for high entropy visual stimulus: an fMRI perspective

    We perceive the environment around us in order to act upon it. To obtain the desired outcome effectively, we not only need the incoming information to be processed efficiently, but we also need to know how reliable this information is. How this uncertainty is extracted from the visual input and how it is represented in the brain are still open questions. The hippocampus reacts to different measures of uncertainty. Because it is strongly connected to many cortical and subcortical regions, the hippocampus has the resources to communicate such information to other brain regions involved in visual processing and other cognitive processes. In this thesis, we investigate which aspects of uncertainty the hippocampus reacts to: is it the uncertainty in the ongoing recognition attempt of a temporally unfolding stimulus, or the low-level spatiotemporal entropy? To answer this question, we used dynamic visual stimuli with varying spatial and spatiotemporal entropy: well-structured virtual tunnel videos and the corresponding phase-scrambled videos with matching local luminance and contrast per frame. We also included pixel-scrambled videos with high spatial and spatiotemporal entropy in our stimulus set. Brain responses (fMRI images) were recorded while participants watched these videos and performed an engaging but cognitively independent task. Using the general linear model (GLM), we modeled the brain responses to the different video types and found that the early visual cortex and the hippocampus responded more strongly to videos with higher spatiotemporal entropy. Using independent component analysis, we further investigated which underlying networks were recruited in processing high-entropy visual information, and how these networks might influence each other. We found two cortico-hippocampal networks involved in processing our stimulus videos. While one of them represented a general primary visual processing network, the other was activated strongly by the high-entropy videos and deactivated by the well-structured virtual tunnel videos. We also found a hierarchy in the processing stream, with information flowing from less stimulus-specific to more stimulus-specific networks.
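    As a concrete illustration of the GLM step described in this abstract, the sketch below builds a toy design matrix for three video conditions and tests a high-entropy-versus-structured contrast on a single simulated voxel. The onset times, HRF parameterization, and contrast weights are illustrative assumptions, not the thesis's actual design.

```python
# Minimal GLM sketch for three video conditions; all specifics are assumed.
import numpy as np
from scipy.stats import gamma

TR, n_scans = 2.0, 300
t = np.arange(0, 32, TR)
# Canonical double-gamma HRF (a common default, not necessarily the one used).
hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)
hrf /= hrf.sum()

def regressor(onsets, duration=10.0):
    """Boxcar for one condition's video blocks, convolved with the HRF."""
    box = np.zeros(n_scans)
    for on in onsets:
        box[int(on / TR):int((on + duration) / TR)] = 1.0
    return np.convolve(box, hrf)[:n_scans]

# Hypothetical onset times (seconds) for each video type.
X = np.column_stack([
    regressor([20, 140, 260, 380]),   # structured tunnel videos
    regressor([60, 180, 300, 420]),   # phase-scrambled videos
    regressor([100, 220, 340, 460]),  # pixel-scrambled videos
    np.ones(n_scans),                 # intercept
])

y = np.random.randn(n_scans)          # stand-in for one voxel's time course
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Contrast: high-entropy (scrambled) conditions versus structured tunnel.
c = np.array([-1.0, 0.5, 0.5, 0.0])
print("contrast estimate:", c @ beta)
```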

    Lexical and audiovisual bases of perceptual adaptation in speech


    Using action understanding to understand the left inferior parietal cortex in the human brain

    Published in final edited form as: Brain Res. 2014 September 25; 1582: 64–76. doi:10.1016/j.brainres.2014.07.035.
    Humans have a sophisticated knowledge of the actions that can be performed with objects. In an fMRI study we tried to establish whether this depends on areas that are homologous with the inferior parietal cortex (area PFG) of macaque monkeys. Cells have been described in area PFG that discharge differentially depending on whether the observer sees an object being brought to the mouth or put in a container. In our study, observers saw videos in which the use of different objects was demonstrated in pantomime; after viewing each video, they had to pick the object that was appropriate to the pantomime. We found a cluster of activated voxels in parietal areas PFop and PFt, and this cluster was greater in the left hemisphere than in the right. We suggest a mechanism that could account for this asymmetry, relate our results to handedness, and suggest that they shed light on the human syndrome of apraxia. Finally, we suggest that during the evolution of the hominids, this same pantomime mechanism could have been used to ‘name’ or request objects.

    Automatic imitation of biomechanically possible and impossible actions: effects of priming movements versus goals

    Recent behavioral, neuroimaging, and neurophysiological research suggests a common representational code mediating the observation and execution of actions; yet the nature of this representational code is not well understood. The authors address this question by investigating (a) whether this observation–execution matching system (or mirror system) codes both the constituent movements of an action and its goal, and (b) how such sensitivity is influenced by top-down effects of instructions. The authors tested the automatic imitation of observed finger actions while manipulating whether the movements were biomechanically possible or impossible, holding the goal constant. When no mention was made of this difference (Experiment 1), possible and impossible actions elicited comparable automatic imitation, suggesting that the actions had been coded at the level of the goal. When attention was drawn to this difference (Experiment 2), however, only possible movements elicited automatic imitation. This sensitivity was specific to imitation, not affecting spatial stimulus–response compatibility (Experiment 3). These results suggest that automatic imitation is modulated by top-down influences, coding actions in terms of both movements and goals depending on the focus of attention.
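    For readers unfamiliar with how automatic imitation is quantified, the toy sketch below computes the standard effect measure, the mean response-time difference between incompatible and compatible trials, on fabricated data. The numbers and variable names are assumptions for illustration, not the authors' dataset.

```python
# Toy automatic-imitation effect: RT(incompatible) - RT(compatible).
import numpy as np

rng = np.random.default_rng(0)
rt_compatible   = rng.normal(420, 40, 200)   # ms; observed movement matches response
rt_incompatible = rng.normal(445, 40, 200)   # ms; observed movement conflicts

effect = rt_incompatible.mean() - rt_compatible.mean()
print(f"automatic imitation effect: {effect:.1f} ms")
```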

    Neural correlates of phonetic adaptation as induced by lexical and audiovisual context

    When speech perception is difficult, one way listeners adjust is by reconfiguring phoneme category boundaries, drawing on contextual information. Both lexical knowledge and lipreading cues are used in this way, but it remains unknown whether these two differing forms of perceptual learning are similar at a neural level. This study compared phoneme boundary adjustments driven by lexical or audiovisual cues, using ultra-high-field 7-T fMRI. During imaging, participants heard exposure stimuli and test stimuli. Exposure stimuli for lexical retuning were audio recordings of words, and those for audiovisual recalibration were audio–video recordings of lip movements during utterances of pseudowords. Test stimuli were ambiguous phonetic strings presented without context, and listeners reported which phoneme they heard. Reports reflected the phoneme biases of the preceding exposure blocks (e.g., more /p/ reports after /p/-biased exposure). Analysis of the corresponding brain responses indicated that both forms of cue use were associated with a network of activity across the temporal cortex, plus parietal, insula, and motor areas. Audiovisual recalibration also elicited significant occipital cortex activity despite the lack of visual stimuli. Activity levels in several ROIs covaried with the strength of audiovisual recalibration, with greater activity accompanying larger recalibration shifts. Similar activation patterns appeared for lexical retuning, but here no significant ROIs were identified. Audiovisual and lexical forms of perceptual learning thus induce largely similar brain response patterns. However, audiovisual recalibration involves additional visual cortex contributions, suggesting that previously acquired visual information (on lip movements) is retrieved and deployed to disambiguate auditory perception.
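    The behavioral recalibration effect described here is typically quantified as a shift of the phoneme category boundary. The sketch below fits logistic psychometric curves to hypothetical /p/-report proportions after each exposure type and compares the 50% points; the continuum steps and response proportions are invented for illustration and are not the study's data.

```python
# Quantifying a recalibration shift as a psychometric-boundary difference.
import numpy as np
from scipy.optimize import curve_fit

def psychometric(x, x0, k):
    """Logistic curve: probability of a /p/ report along the continuum."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

steps = np.linspace(0, 1, 7)  # ambiguous /p/-/t/ continuum (illustrative)
p_after_p_bias = np.array([.05, .15, .40, .70, .90, .97, .99])
p_after_t_bias = np.array([.02, .06, .20, .45, .75, .92, .98])

(b_p, _), _ = curve_fit(psychometric, steps, p_after_p_bias, p0=[0.5, 10])
(b_t, _), _ = curve_fit(psychometric, steps, p_after_t_bias, p0=[0.5, 10])

# A lower boundary after /p/-biased exposure means more of the continuum
# is heard as /p/, i.e. recalibration toward /p/.
print(f"boundary shift: {b_t - b_p:.3f} continuum units")
```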

    Hierarchical Reinforcement Learning in Behavior and the Brain

    Dissertation presented to obtain the Ph.D. degree in Biology, Neuroscience.
    Reinforcement learning (RL) has provided key insights into the neurobiology of learning and decision making. The pivotal finding is that the phasic activity of dopaminergic cells in the ventral tegmental area during learning conforms to a reward prediction error (RPE), as specified in the temporal-difference learning algorithm (TD). This has provided insights into conditioning, the distinction between habitual and goal-directed behavior, working memory, cognitive control and error monitoring. It has also advanced the understanding of cognitive deficits in Parkinson's disease, depression, ADHD and of personality traits such as impulsivity. (...)
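    The TD/RPE relationship referred to in this abstract can be made concrete in a few lines. The sketch below runs tabular TD(0) on a toy chain task; the quantity delta is the reward prediction error that phasic dopaminergic activity is reported to track. The environment and learning parameters are illustrative assumptions, not part of the dissertation.

```python
# Tabular TD(0) on a toy chain; delta is the reward prediction error (RPE).
import numpy as np

n_states, gamma_, alpha = 5, 0.9, 0.1
V = np.zeros(n_states)                      # state-value estimates

def td_update(s, r, s_next):
    """One TD(0) step; returns the RPE."""
    delta = r + gamma_ * V[s_next] - V[s]   # reward prediction error
    V[s] += alpha * delta
    return delta

# Simple chain task: reward only on reaching the terminal state.
for episode in range(200):
    for s in range(n_states - 1):
        r = 1.0 if s == n_states - 2 else 0.0
        td_update(s, r, s + 1)

# Values converge to gamma-discounted distance from reward.
print("learned values:", np.round(V, 2))
```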

    Challenges in Multimodal Data Fusion

    In various disciplines, information about the same phenomenon can be acquired from different types of detectors, under different conditions, at different observation times, in multiple experiments or subjects, etc. We use the term "modality" to denote each such type of acquisition framework. Due to the rich characteristics of natural phenomena, as well as of the environments in which they occur, it is rare that a single modality can provide complete knowledge of the phenomenon of interest. The increasing availability of several modalities at once introduces new degrees of freedom, which raise questions beyond those related to exploiting each modality separately. The aim of this paper is to evoke and promote various challenges in multimodal data fusion at the conceptual level, without focusing on any specific model, method or application.
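    Although the paper deliberately stays model-agnostic, one minimal concrete instance of the fusion problem may help fix ideas: below, canonical correlation analysis (CCA) recovers structure shared by two synthetic modalities observing the same latent phenomenon. CCA is only one of many possible fusion techniques and is not a method proposed by the paper; the data are fabricated.

```python
# Two synthetic "modalities" driven by one shared latent variable; CCA
# finds the maximally correlated projections of the two data sets.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(1)
latent = rng.normal(size=(500, 1))            # shared underlying phenomenon
mod_a = latent @ rng.normal(size=(1, 10)) + 0.5 * rng.normal(size=(500, 10))
mod_b = latent @ rng.normal(size=(1, 20)) + 0.5 * rng.normal(size=(500, 20))

cca = CCA(n_components=1)
a_proj, b_proj = cca.fit_transform(mod_a, mod_b)
r = np.corrcoef(a_proj[:, 0], b_proj[:, 0])[0, 1]
print(f"canonical correlation between modalities: {r:.2f}")
```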