
    An information theoretic characterisation of auditory encoding.

    The entropy metric derived from information theory provides a means to quantify the amount of information transmitted in acoustic streams like speech or music. By systematically varying the entropy of pitch sequences, we sought brain areas where neural activity and energetic demands increase as a function of entropy. Such a relationship is predicted to occur in an efficient encoding mechanism that uses less computational resource when less information is present in the signal: we specifically tested the hypothesis that such a relationship is present in the planum temporale (PT). In two convergent functional MRI studies, we demonstrated this relationship in PT for encoding, while furthermore showing that a distributed fronto-parietal network for retrieval of acoustic information is independent of entropy. The results establish PT as an efficient neural engine that demands less computational resource to encode redundant signals than those with high information content.
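    As an illustrative sketch only (not the authors' code), the Shannon entropy of a discrete pitch sequence can be computed from the relative frequency of each pitch; the function name and example sequences below are hypothetical:

    ```python
    from collections import Counter
    from math import log2

    def pitch_entropy(pitches):
        """Shannon entropy (in bits) of a sequence of discrete pitch labels."""
        counts = Counter(pitches)
        n = len(pitches)
        return -sum((c / n) * log2(c / n) for c in counts.values())

    # A fully redundant sequence carries no information ...
    low = pitch_entropy(["C4", "C4", "C4", "C4"])   # 0.0 bits
    # ... while four equiprobable pitches carry 2 bits per event
    high = pitch_entropy(["C4", "D4", "E4", "F4"])  # 2.0 bits
    ```

    Under the efficient-coding hypothesis tested in the abstract, the low-entropy sequence would be the one predicted to demand fewer encoding resources in PT.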

    Visual sensory cortices causally contribute to auditory word recognition following sensorimotor-enriched vocabulary training

    Funding: This work was funded by the German Research Foundation (grant KR 3735/3-1), a Max Planck Research Group to K.v.K., and an Erasmus Mundus Postdoctoral Fellowship to B.M. B.M. is also supported by the European Research Council Consolidator grant SENSOCOM 647051 to K.v.K.

    How can we learn foreign language vocabulary more easily?

    The authors would like to thank those who assisted in the translation of the articles in this Collection to make them more accessible to kids outside English-speaking countries, and the Jacobs Foundation for providing the funds necessary to translate the articles. For this article, they would especially like to thank Nienke van Atteveldt and Sabine Peters for the Dutch translation. This work was funded by the German Research Foundation grant KR 3735/3-1, a Schulbezogene Forschung grant from the Saxony Zentrum für Lehrerbildung und Schulforschung (ZLS), and an Erasmus Mundus Postdoctoral Fellowship in Auditory Cognitive Neuroscience. BM was also supported by the European Research Council Consolidator Grant SENSOCOM 647051 to KvK.

    Well-Being as Harmony

    In this paper, I sketch out a novel theory of well-being according to which well-being is constituted by harmony between mind and world. The notion of harmony I develop has three aspects. First, there is correspondence between mind and world, in the sense that events in the world match the content of our mental states. Second, there is positive orientation towards the world, meaning that we have pro-attitudes towards the world we find ourselves in. Third, there is fitting response to the world. Taken together, these three aspects make up an ideal of being attuned to, or at home in, the world. Such harmony between mind and world constitutes well-being. Its opposite – being disoriented, ill at ease in, or hostile to the world – makes a life go poorly. And, as we shall see, many of the things that intuitively contribute to well-being instantiate one or more of the three aspects of harmony.

    Cortical Plasticity of Audio–Visual Object Representations

    Several regions in human temporal and frontal cortex are known to integrate visual and auditory object features. The processing of audio–visual (AV) associations in these regions has been found to be modulated by object familiarity. The aim of the present study was to explore training-induced plasticity in human cortical AV integration. We used functional magnetic resonance imaging to analyze the neural correlates of AV integration for unfamiliar artificial object sounds and images in naïve subjects (PRE-training) and after a behavioral training session in which subjects acquired associations between some of these sounds and images (POST-training). In the PRE-training session, unfamiliar artificial object sounds and images were mainly integrated in right inferior frontal cortex (IFC). The POST-training results showed extended integration-related IFC activations bilaterally, and a recruitment of additional regions in bilateral superior temporal gyrus/sulcus and intraparietal sulcus. Furthermore, training-induced differential response patterns to mismatching compared with matching (i.e., associated) artificial AV stimuli were most pronounced in left IFC. These effects were accompanied by complementary training-induced congruency effects in right posterior middle temporal gyrus and fusiform gyrus. Together, these findings demonstrate that short-term cross-modal association learning was sufficient to induce plastic changes of both AV integration of object stimuli and mechanisms of AV congruency processing.

    Evidence for Training-Induced Plasticity in Multisensory Brain Structures: An MEG Study

    Multisensory learning and resulting neural brain plasticity have recently become a topic of renewed interest in human cognitive neuroscience. Music notation reading is an ideal stimulus to study multisensory learning, as it allows studying the integration of visual, auditory and sensorimotor information processing. The present study aimed at answering whether multisensory learning alters uni-sensory structures, interconnections of uni-sensory structures, or specific multisensory areas. In a short-term piano training procedure, musically naive subjects were trained to play tone sequences from visually presented patterns in a music notation-like system [Auditory-Visual-Somatosensory group (AVS)], while another group received audio-visual training only that involved viewing the patterns and attentively listening to the recordings of the AVS training sessions [Auditory-Visual group (AV)]. Training-related changes in cortical networks were assessed by pre- and post-training magnetoencephalographic (MEG) recordings of an auditory, a visual and an integrated audio-visual mismatch negativity (MMN). The two groups (AVS and AV) were differently affected by the training. The results suggest that multisensory training alters the function of multisensory structures, and not the uni-sensory ones along with their interconnections, and thus provide an answer to an important question presented by cognitive models of multisensory training.

    The Glasgow Voice Memory Test: Assessing the ability to memorize and recognize unfamiliar voices

    One thousand one hundred and twenty subjects, as well as a developmental phonagnosic subject (KH) along with age-matched controls, performed the Glasgow Voice Memory Test, which assesses the ability to encode and immediately recognize, through an old/new judgment, both unfamiliar voices (delivered as vowels, making language requirements minimal) and bell sounds. The inclusion of non-vocal stimuli allows the detection of significant dissociations between the two categories (vocal vs. non-vocal stimuli). The distributions of accuracy and sensitivity scores (d’) reflected a wide range of individual differences in voice recognition performance in the population. As expected, KH showed a dissociation between the recognition of voices and bell sounds, her performance being significantly poorer than matched controls for voices but not for bells. By providing normative data of a large sample and by testing a developmental phonagnosic subject, we demonstrated that the Glasgow Voice Memory Test, available online and accessible from all over the world, can be a valid screening tool (~5 min) for a preliminary detection of potential cases of phonagnosia and of “super recognizers” for voices.
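    As a hedged illustration of the sensitivity score mentioned in the abstract (not the test's own scoring code), d’ for an old/new judgment is the difference between the z-transformed hit rate and false-alarm rate; the function name and example rates below are hypothetical:

    ```python
    from statistics import NormalDist

    def d_prime(hit_rate, fa_rate):
        """Sensitivity index d' = z(hit rate) - z(false-alarm rate).

        In practice rates of exactly 0 or 1 must be adjusted (e.g. by a
        1/(2N) correction) before applying the inverse normal CDF.
        """
        z = NormalDist().inv_cdf
        return z(hit_rate) - z(fa_rate)

    # A listener who recognizes 85% of old voices with 20% false alarms
    d = d_prime(0.85, 0.20)  # positive d' indicates above-chance recognition
    ```

    Higher d’ corresponds to better voice recognition; values near zero would flag a potential phonagnosic profile, and unusually high values a potential “super recognizer”.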

    The Natural Statistics of Audiovisual Speech

    Humans, like other animals, are exposed to a continuous stream of signals, which are dynamic, multimodal, extended, and time-varying in nature. This complex input space must be transduced and sampled by our sensory systems and transmitted to the brain, where it can guide the selection of appropriate actions. To simplify this process, it has been suggested that the brain exploits statistical regularities in the stimulus space. Tests of this idea have largely been confined to unimodal signals and natural scenes. One important class of multisensory signals for which a quantitative input space characterization is unavailable is human speech. We do not understand what signals our brain has to actively piece together from an audiovisual speech stream to arrive at a percept versus what is already embedded in the signal structure of the stream itself. In essence, we do not have a clear understanding of the natural statistics of audiovisual speech. In the present study, we identified the following major statistical features of audiovisual speech. First, we observed robust correlations and close temporal correspondence between the area of the mouth opening and the acoustic envelope. Second, we found the strongest correlation between the area of the mouth opening and vocal tract resonances. Third, we observed that both the area of the mouth opening and the voice envelope are temporally modulated in the 2–7 Hz frequency range. Finally, we show that the timing of mouth movements relative to the onset of the voice is consistently between 100 and 300 ms. We interpret these data in the context of recent neural theories of speech which suggest that speech communication is a reciprocally coupled, multisensory event, whereby the outputs of the signaler are matched to the neural processes of the receiver.
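    The correlation between mouth-opening area and acoustic envelope reported in the abstract is, at its core, a correlation between two time series. As a minimal sketch (the function name and the short example traces are invented for illustration, not data from the study):

    ```python
    from math import sqrt

    def pearson_r(x, y):
        """Pearson correlation between two equal-length signals."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sqrt(sum((a - mx) ** 2 for a in x))
        sy = sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy)

    # Hypothetical mouth-area and acoustic-envelope traces (arbitrary units),
    # sampled at matched time points
    mouth = [0.1, 0.4, 0.9, 0.7, 0.3, 0.1]
    envelope = [0.0, 0.3, 0.8, 0.8, 0.2, 0.1]
    r = pearson_r(mouth, envelope)
    ```

    In the actual study this kind of coefficient would be computed over much longer recordings, and the reported 100–300 ms visual lead suggests the peak correlation occurs at a nonzero temporal lag between the two signals.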