
    Using the past to estimate sensory uncertainty

    To form a more reliable percept of the environment, the brain needs to estimate its own sensory uncertainty. Current theories of perceptual inference assume that the brain computes sensory uncertainty instantaneously and independently for each stimulus. We evaluated this assumption in four psychophysical experiments in which human observers localized auditory signals that were presented synchronously with spatially disparate visual signals. Critically, the visual noise changed dynamically over time, either continuously or with intermittent jumps. Our results show that observers integrate audiovisual inputs weighted by sensory uncertainty estimates that combine information from past and current signals, consistent with an optimal Bayesian learner that can be approximated by exponential discounting. Our results challenge leading models of perceptual inference in which sensory uncertainty estimates depend only on the current stimulus. They demonstrate that the brain capitalizes on the temporal dynamics of the external world and estimates sensory uncertainty by combining past experiences with new incoming sensory signals.
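    As a rough illustration of the computation described above, the sketch below fuses auditory and visual location estimates weighted by their inverse variances, with the visual variance estimated by exponentially discounting past noise samples. This is a minimal Python sketch, not the authors' model: the function names, the discounting constant tau, and the example values are assumptions for illustration only.

        import numpy as np

        def discounted_variance(samples, tau=5.0):
            # Estimate sensory variance by exponentially discounting past noise
            # samples (most recent last); tau is a hypothetical constant in trials.
            samples = np.asarray(samples, dtype=float)
            lags = np.arange(len(samples))[::-1]        # 0 for the current sample
            weights = np.exp(-lags / tau)
            weights /= weights.sum()
            mean = np.sum(weights * samples)
            return np.sum(weights * (samples - mean) ** 2)

        def fuse_audiovisual(x_a, var_a, x_v, var_v):
            # Reliability-weighted (inverse-variance) fusion of the two estimates.
            w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)
            return w_a * x_a + (1.0 - w_a) * x_v

        # Hypothetical example: recent visual noise samples (degrees), then fusion.
        var_v = discounted_variance([1.2, 0.8, 1.5, 2.9, 3.1])
        print(fuse_audiovisual(x_a=10.0, var_a=4.0, x_v=12.0, var_v=var_v))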

    Investigating human audio-visual object perception with a combination of hypothesis-generating and hypothesis-testing fMRI analysis tools

    Primate multisensory object perception involves distributed brain regions. To investigate the network character of these regions of the human brain, we applied data-driven group spatial independent component analysis (ICA) to a functional magnetic resonance imaging (fMRI) data set acquired during a passive audio-visual (AV) experiment with common object stimuli. We labeled three group-level independent component (IC) maps as auditory (A), visual (V), and AV, based on their spatial layouts and activation time courses. The overlap between these IC maps served as the definition of a distributed network of multisensory candidate regions, including superior temporal, ventral occipito-temporal, posterior parietal and prefrontal regions. During an independent second fMRI experiment, we explicitly tested their involvement in AV integration. Activations in nine out of these twelve regions met the max-criterion (A < AV > V) for multisensory integration. Comparison of this approach with a general linear model-based region-of-interest definition revealed its complementary value for multisensory neuroimaging. In conclusion, we estimated functional networks of uni- and multisensory functional connectivity from one dataset and validated their functional roles in an independent dataset. These findings demonstrate the particular value of ICA for multisensory neuroimaging research and of using independent datasets to test hypotheses generated from a data-driven analysis.
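    The max-criterion mentioned above (the audiovisual response exceeding both unisensory responses) can be checked mechanically once per-region response estimates are available. The sketch below is a minimal Python illustration; the array names and example values are hypothetical, and a real analysis would apply a statistical test rather than a raw comparison.

        import numpy as np

        def max_criterion(beta_a, beta_v, beta_av):
            # True where the audiovisual response exceeds both unisensory
            # responses (A < AV > V); each array holds one estimate per region.
            beta_a, beta_v, beta_av = map(np.asarray, (beta_a, beta_v, beta_av))
            return (beta_av > beta_a) & (beta_av > beta_v)

        # Hypothetical response estimates for three candidate regions.
        print(max_criterion([0.4, 0.9, 0.2], [0.5, 0.3, 0.6], [0.8, 0.7, 0.9]))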

    Feasibility of Azacitidine Added to Standard Chemotherapy in Older Patients with Acute Myeloid Leukemia — A Randomised SAL Pilot Study

    Introduction: Older patients with acute myeloid leukemia (AML) experience short survival despite intensive chemotherapy. Azacitidine has promising activity in patients with low proliferating AML. The aim of the dose-finding part of this trial was to evaluate the feasibility and safety of azacitidine combined with a cytarabine- and daunorubicin-based chemotherapy in older patients with AML. Trial Design: Prospective, randomised, open, phase II trial with parallel group design and fixed sample size. Patients and Methods: Patients aged 61 years or older with untreated acute myeloid leukemia, a leukocyte count of <20,000/μl at the time of study entry, and adequate organ function were eligible. Patients were randomised to receive azacitidine at either 37.5 (dose level 1) or 75 mg/m² (dose level 2) for five days before each cycle of induction (7+3 cytarabine plus daunorubicin) and consolidation (intermediate-dose cytarabine) therapy. Dose-limiting toxicity was the primary endpoint. Results: Six patients were randomised into each dose level and were evaluable for analysis. No dose-limiting toxicity occurred at either dose level. Nine serious adverse events occurred in five patients (three in the 37.5 mg arm, two in the 75 mg arm), with two fatal outcomes. Two patients at the 37.5 mg/m² dose level and four patients at the 75 mg/m² level achieved a complete remission after induction therapy. Median overall survival was 266 days and median event-free survival 215 days after a median follow-up of 616 days. Conclusions: The combination of azacitidine 75 mg/m² with standard induction therapy is feasible in older patients with AML and was selected as an investigational arm in the randomised controlled part of this phase II study, which is currently halted due to increased cardiac toxicity observed in the experimental arm. Trial Registration: This trial is registered at clinicaltrials.gov (identifier: NCT00915252).

    Alterations in functional connectivity for language in prematurely born adolescents

    Recent data suggest recovery of language systems but persistent structural abnormalities in the prematurely born. We tested the hypothesis that subjects who were born prematurely develop alternative networks for processing language. Subjects who were born prematurely (n = 22; 600–1250 g birth weight), without neonatal brain injury on neonatal cranial ultrasound, and 26 term control subjects were examined with a functional magnetic resonance imaging (fMRI) semantic association task, the Wechsler Intelligence Scale for Children-III (WISC-III) and the Clinical Evaluation of Language Fundamentals (CELF). In-magnet task accuracy and response times were calculated, and the fMRI data were evaluated for the effect of group on blood oxygen level dependent (BOLD) activation, the correlation between task accuracy and activation, and the functional connectivity between regions activating to the task. Although there were differences in verbal IQ and CELF scores between the preterm (PT) and term control groups, there were no significant differences in either accuracy or response time for the in-magnet task. Both groups activated classic semantic processing areas, including the left superior and middle temporal gyri and inferior frontal gyrus, and there was no significant difference in activation patterns between groups. Clear differences between the groups were observed in the correlation between task accuracy and activation to task at P < 0.01, corrected for multiple comparisons. The left inferior frontal gyrus correlated with accuracy only for term controls, and left sensorimotor areas correlated with accuracy only for PT subjects. The left middle temporal gyri correlated with task accuracy for both groups. Connectivity analyses at P < 0.001 revealed the importance of a circuit between the left middle temporal gyri and inferior frontal gyrus for both groups. In addition, the PT subjects evidenced greater connectivity between traditional language areas and sensorimotor areas but significantly fewer correlated areas within the frontal lobes when compared to term controls. We conclude that at 12 years of age, children born prematurely and children born at term had no difference in performance on a simple lexical semantic processing task and activated similar areas. Connectivity analyses, however, suggested that PT subjects rely upon different neural pathways for lexical semantic processing when compared to term controls. Plasticity in network connections may provide the substrate for improving language skills in the prematurely born.
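    Functional connectivity analyses of the kind described above are often computed as pairwise correlations between region-of-interest time courses. The sketch below is a minimal Python illustration under that assumption; the number of regions, the time course length, and the random data are purely hypothetical and not taken from the study.

        import numpy as np

        def roi_connectivity(timecourses):
            # Pairwise Pearson correlations between ROI time courses;
            # timecourses has shape (n_rois, n_timepoints).
            return np.corrcoef(timecourses)

        # Hypothetical data: 4 ROIs, 200 fMRI volumes.
        rng = np.random.default_rng(0)
        print(roi_connectivity(rng.standard_normal((4, 200))).round(2))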

    Spatio-Temporal Dynamics of Human Intention Understanding in Temporo-Parietal Cortex: A Combined EEG/fMRI Repetition Suppression Paradigm

    Inferring the intentions of other people from their actions recruits an inferior fronto-parietal action observation network as well as a putative social network that includes the posterior superior temporal sulcus (STS). However, the functional dynamics within and among these networks remain unclear. Here we used functional magnetic resonance imaging (fMRI) and high-density electroencephalography (EEG), with a repetition suppression (RS) design, to assess the spatio-temporal dynamics of decoding intentions. Suppression of fMRI activity upon repetition of the same intention was observed in the inferior frontal lobe, anterior intraparietal sulcus (aIPS), and right STS. EEG global field power was reduced with repeated intentions in an early (starting at 60 ms) and a later (∼330 ms) period after the onset of a hand-on-object encounter. Source localization during these two intervals involved right STS and aIPS regions highly consistent with the RS effects observed with fMRI. These results reveal the dynamic involvement of temporal and parietal networks at multiple stages of intention decoding, without a strict segregation of intention decoding between these networks.
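    Global field power, reported above, is commonly computed as the spatial standard deviation across electrodes at each time point. The sketch below assumes an average-referenced channels-by-time array and illustrates only that definition, not the authors' pipeline; the example data are hypothetical.

        import numpy as np

        def global_field_power(eeg):
            # eeg has shape (n_channels, n_timepoints); re-reference to the
            # average and take the spatial standard deviation at each time point.
            eeg = eeg - eeg.mean(axis=0, keepdims=True)
            return np.sqrt((eeg ** 2).mean(axis=0))

        # Hypothetical example: 64 channels, 500 samples -> one value per sample.
        rng = np.random.default_rng(1)
        print(global_field_power(rng.standard_normal((64, 500))).shape)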

    Spatial Language Processing in the Blind: Evidence for a Supramodal Representation and Cortical Reorganization

    Neuropsychological and imaging studies have shown that the left supramarginal gyrus (SMG) is specifically involved in processing spatial terms (e.g. above, left of), which locate places and objects in the world. The current fMRI study focused on the nature and specificity of representing spatial language in the left SMG by combining behavioral and neuronal activation data in blind and sighted individuals. Data from the blind provide an elegant way to test the supramodal representation hypothesis, i.e. that abstract codes represent spatial relations, yielding no activation differences between blind and sighted. Indeed, the left SMG was activated during spatial language processing in both blind and sighted individuals, implying a supramodal representation of spatial and other dimensional relations which does not require visual experience to develop. However, in the absence of vision, functional reorganization of the visual cortex is known to take place. An important consideration with respect to our finding is the amount of functional reorganization during language processing in our blind participants. Therefore, the participants also performed a verb generation task. We observed that occipital areas were activated during covert language generation only in the blind. Additionally, in the first task, functional reorganization was observed for processing language with a high linguistic load. As the visual cortex was not specifically active for spatial contents in the first task, and no reorganization was observed in the SMG, the latter finding further supports the notion that the left SMG is the main node for a supramodal representation of verbal spatial relations.

    It Takes Two–Skilled Recognition of Objects Engages Lateral Areas in Both Hemispheres

    Our object recognition abilities, a direct product of our experience with objects, are fine-tuned to perfection. Left temporal and lateral areas along the dorsal, action-related stream, as well as left infero-temporal areas along the ventral, object-related stream, are engaged in object recognition. Here we show that expertise modulates the activity of dorsal areas in the recognition of man-made objects with clearly specified functions. Expert chess players were faster than chess novices in identifying chess objects and their functional relations. The experts' advantage was domain-specific, as there were no differences between groups in a control task featuring geometrical shapes. The pattern of eye movements supported the notion that the experts' extensive knowledge about domain objects and their functions enabled superior recognition even when the experts were not directly fixating the objects of interest. Functional magnetic resonance imaging (fMRI) exclusively related areas along the dorsal stream to chess-specific object recognition. Besides the commonly involved left temporal and parietal lateral brain areas, we found that only in experts were homologous areas in the right hemisphere also engaged in chess-specific object recognition. Based on these results, we discuss whether skilled object recognition involves not only a more efficient version of the processes found in non-skilled recognition but also qualitatively different cognitive processes that engage additional brain areas.

    Dissociation between the Activity of the Right Middle Frontal Gyrus and the Middle Temporal Gyrus in Processing Semantic Priming

    The aim of this event-related functional magnetic resonance imaging (fMRI) study was to test whether the right middle frontal gyrus (MFG) and middle temporal gyrus (MTG) would show differential sensitivity to the effect of prime-target association strength on repetition priming. In the experimental condition (RP), the target occurred after repetitive presentation of the prime within an oddball design. In the control condition (CTR), the target followed a single presentation of the prime, with the same probability of the target as in RP. To manipulate semantic overlap between the prime and the target, both conditions (RP and CTR) employed either the onomatopoeia “oink” as the prime and the referent “pig” as the target (OP) or vice versa (PO), since semantic overlap was previously shown to be greater in OP. The results showed that the left MTG was sensitive to release of adaptation, while both the right MTG and MFG were sensitive to sequence regularity extraction and its verification. However, dissociated activity between OP and PO was revealed in RP only in the right MFG. Specifically, the target “pig” (OP) and the physically equivalent target in CTR elicited comparable deactivations, whereas the target “oink” (PO) elicited a less inhibited response in RP than in CTR. This interaction in the right MFG was explained by integrating these effects into a model of competition between perceptual and conceptual effects in priming.

    Seeing Emotion with Your Ears: Emotional Prosody Implicitly Guides Visual Attention to Faces

    Interpersonal communication involves the processing of multimodal emotional cues, particularly facial expressions (visual modality) and emotional speech prosody (auditory modality), which can interact during information processing. Here, we investigated whether the implicit processing of emotional prosody systematically influences gaze behavior towards facial expressions of emotion. We analyzed the eye movements of 31 participants as they scanned a visual array of four emotional faces portraying fear, anger, happiness, and neutrality, while listening to an emotionally inflected pseudo-utterance (“Someone migged the pazing”) uttered in a congruent or incongruent tone. Participants heard the emotional utterance during the first 1250 milliseconds of a five-second visual array and then performed an immediate recall decision about the face they had just seen. The frequency and duration of first saccades and of total looks in three temporal windows ([0–1250 ms], [1250–2500 ms], [2500–5000 ms]) were analyzed according to the emotional content of the faces and voices. Results showed that participants looked longer and more frequently at faces that matched the prosody in all three time windows (an emotion congruency effect), although this effect was often emotion-specific (with the greatest effects for fear). Effects of prosody on visual attention to faces persisted over time and could be detected long after the auditory information was no longer present. These data imply that emotional prosody is processed automatically during communication and that these cues play a critical role in how humans respond to related visual cues in the environment, such as facial expressions.
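    As a rough illustration of how looking behavior can be binned into the three temporal windows used above, the sketch below tallies the share of looking time spent on the emotion-congruent face per window. The fixation records and the congruency flag are hypothetical, and the actual analysis also distinguished first saccades from total looks.

        # Hypothetical fixation records: (onset_ms, duration_ms, on_congruent_face).
        fixations = [(150, 300, True), (900, 400, False), (1600, 700, True), (3100, 500, True)]
        windows = [(0, 1250), (1250, 2500), (2500, 5000)]

        for lo, hi in windows:
            in_win = [f for f in fixations if lo <= f[0] < hi]
            congruent = sum(dur for _, dur, cong in in_win if cong)
            total = sum(dur for _, dur, _ in in_win)
            share = congruent / total if total else float("nan")
            print(f"{lo}-{hi} ms: {share:.0%} of looking time on the congruent face")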

    Minor and Unsystematic Cortical Topographic Changes of Attention Correlates between Modalities

    In this study we analyzed the topography of induced cortical oscillations in 20 healthy individuals performing simple attention tasks. We were interested in qualitatively replicating our recent findings on the localization of attention-induced beta bands during a visual task [1], and in verifying whether significant topographic changes would follow a shift of attention to the auditory modality. We computed corrected latency averaging of each induced frequency band and modeled their generators by current density reconstruction with Lp-norm minimization. We quantified topographic similarity between conditions by an analysis of correlations, whereas significant inter-modality differences in attention correlates were illustrated in each individual case. We replicated the qualitative finding that the topography of attention-related activity is highly idiosyncratic to individuals, manifested both in the beta bands and in previously studied slow potential distributions [2]. Visual inspection of both scalp potentials and the distribution of cortical currents showed minor changes in attention-related bands with respect to modality, as compared to the theta and delta bands, which are known to be major contributors to sensory-related potentials. The quantitative results agreed with visual inspection, supporting the conclusion that attention-related activity does not change much between modalities, and that whatever individual changes do occur are not systematic in cortical localization across subjects. We discuss our results, combined with results from other studies that present individual data, with respect to the function of cortical association areas.
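    The analysis of correlations used above to quantify topographic similarity can be illustrated as a spatial correlation between two activity maps, one per attention condition. The sketch below is a minimal Python illustration; the function name, the map size, and the simulated maps are assumptions, not the authors' exact procedure.

        import numpy as np

        def topographic_similarity(map_a, map_b):
            # Spatial correlation between two activity maps (one value per
            # channel or source location) as an index of topographic similarity.
            return np.corrcoef(np.ravel(map_a), np.ravel(map_b))[0, 1]

        # Hypothetical maps for a visual and an auditory attention condition.
        rng = np.random.default_rng(2)
        visual_map = rng.standard_normal(64)
        auditory_map = visual_map + 0.3 * rng.standard_normal(64)
        print(round(topographic_similarity(visual_map, auditory_map), 2))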