11 research outputs found

    Effects of Unexpected Chords and of Performer's Expression on Brain Responses and Electrodermal Activity

    BACKGROUND: There is a lack of neuroscientific studies investigating music processing with naturalistic stimuli, and brain responses to real music are thus largely unknown.

    METHODOLOGY/PRINCIPAL FINDINGS: This study investigates event-related brain potentials (ERPs), skin conductance responses (SCRs) and heart rate (HR) elicited by unexpected chords of piano sonatas as they were originally arranged by composers, and as they were played by professional pianists. From the musical excerpts played by the pianists (with emotional expression), we also created versions without variations in tempo and loudness (without musical expression) to investigate effects of musical expression on ERPs and SCRs. Compared to expected chords, unexpected chords elicited an early right anterior negativity (ERAN, reflecting music-syntactic processing) and an N5 (reflecting processing of meaning information) in the ERPs, as well as clear changes in the SCRs (reflecting that unexpected chords also elicited emotional responses). The ERAN was not influenced by emotional expression, whereas N5 potentials elicited by chords in general (regardless of their chord function) differed between the expressive and the non-expressive condition.

    CONCLUSIONS/SIGNIFICANCE: These results show that the neural mechanisms of music-syntactic processing operate independently of the emotional qualities of a stimulus, justifying the use of stimuli without emotional expression to investigate the cognitive processing of musical structure. Moreover, the data indicate that musical expression affects the neural mechanisms underlying the processing of musical meaning. Our data are the first to reveal influences of musical performance on ERPs and SCRs, and to show physiological responses to unexpected chords in naturalistic music.
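
    As a rough illustration of how such an ERP difference effect can be quantified, here is a minimal sketch using MNE-Python. The epoch file name, event labels, electrode, and the 150-250 ms ERAN window are illustrative assumptions, not the authors' actual pipeline.

    ```python
    # Sketch: an ERP difference wave (unexpected minus expected chords), in the
    # spirit of the ERAN analysis described above. File name, event labels,
    # electrode, and time window are illustrative assumptions.
    import mne

    epochs = mne.read_epochs("chords-epo.fif")  # hypothetical epoched EEG data

    # Average trials per condition to obtain evoked responses
    evoked_unexpected = epochs["unexpected"].average()
    evoked_expected = epochs["expected"].average()

    # Difference wave: unexpected minus expected. An ERAN would appear as a
    # negativity over right-anterior electrodes ~150-250 ms after chord onset.
    diff = mne.combine_evoked([evoked_unexpected, evoked_expected], weights=[1, -1])

    # Mean amplitude in an assumed ERAN window at a right-frontal electrode
    eran = diff.copy().pick("F8").crop(tmin=0.15, tmax=0.25)
    print(f"Mean amplitude, 150-250 ms at F8: {eran.data.mean() * 1e6:.2f} µV")
    ```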

    Mechanisms of Voice Processing: Evidence from Autism Spectrum Disorder

    The correct perception of information carried by the voice is a key requirement for successful human communication. Hearing another person's voice provides information about who is speaking (voice identity), what is said (vocal speech) and the emotional state of a person (vocal emotion). Autism spectrum disorder (ASD) is associated with impaired voice identity and vocal emotion perception, while the perception of vocal speech is relatively intact. However, the underlying mechanisms of these voice perception impairments are unclear. For example, it is unclear at which processing stage voice perception difficulties occur, i.e. whether they are of an apperceptive or associative nature, or whether impairments in voice identity processing in ASD are associated with dysfunction of voice-sensitive brain regions. Within the scope of my dissertation, we systematically investigated voice perception and its impairments in adults with high-functioning ASD and typically developing matched controls (matched pairwise on age, gender, and intellectual abilities). In the first two studies, we characterised the behavioural and neuronal profile of voice identity recognition in ASD using two functional magnetic resonance imaging (fMRI) experiments and a comprehensive behavioural test battery. In the third study, we investigated the underlying behavioural mechanisms of impaired vocal emotion recognition in ASD. Our results inform models of human communication and advance our understanding of the basic mechanisms that might contribute to core symptoms of ASD, such as difficulties in communication. For example, our results converge to support the view that, in ASD, difficulties in perceiving and integrating lower-level sensory features (i.e. acoustic characteristics of the voice) might critically contribute to difficulties in higher-level social cognition (i.e. voice identity and vocal emotion recognition).
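
    A matched-pairs design like the one described can be approximated by greedy nearest-neighbour matching on the matching variables. The sketch below uses simulated age and IQ data; all names and values are hypothetical, and exact matching on gender is omitted for brevity.

    ```python
    # Sketch: greedy pairwise matching of controls to ASD participants on age
    # and IQ. Data and the z-scored distance metric are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    asd = rng.normal([30, 105], [8, 12], size=(17, 2))       # hypothetical [age, IQ]
    controls = rng.normal([31, 106], [8, 12], size=(40, 2))  # hypothetical control pool

    # z-score with pooled statistics so age and IQ weigh equally in the distance
    pooled = np.vstack([asd, controls])
    def z(x):
        return (x - pooled.mean(axis=0)) / pooled.std(axis=0)
    asd_z, ctrl_z = z(asd), z(controls)

    available = list(range(len(controls)))
    pairs = []
    for i, participant in enumerate(asd_z):
        # pick the closest still-unmatched control (greedy nearest neighbour)
        dists = [np.linalg.norm(participant - ctrl_z[j]) for j in available]
        best = available.pop(int(np.argmin(dists)))
        pairs.append((i, best))

    print(pairs[:5])
    ```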

    Temporal voice areas exist in autism spectrum disorder but are dysfunctional for voice identity recognition

    The ability to recognise the identity of others is a key requirement for successful communication. Brain regions that respond selectively to voices exist in humans from early infancy on. Currently, it is unclear whether dysfunction of these voice-sensitive regions can explain voice identity recognition impairments. Here, we used two independent functional magnetic resonance imaging studies to investigate voice processing in a population that has been reported to have no voice-sensitive regions: autism spectrum disorder (ASD). Our results refute the earlier report that individuals with ASD have no responses in voice-sensitive regions: passive listening to vocal, compared to non-vocal, sounds elicited typical responses in voice-sensitive regions in the high-functioning ASD group and controls. In contrast, the ASD group had a dysfunction in voice-sensitive regions during voice identity but not speech recognition in the right posterior superior temporal sulcus/gyrus (STS/STG), a region implicated in processing complex spectrotemporal voice features and unfamiliar voices. The right anterior STS/STG correlated with voice identity recognition performance in controls but not in the ASD group. The findings suggest that right STS/STG dysfunction is critical for explaining voice recognition impairments in high-functioning ASD and show that ASD is not characterised by a general lack of voice-sensitive responses.
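
    The brain-behaviour result reported here (anterior STS/STG response correlating with recognition performance in controls but not in the ASD group) is, at its core, a per-group correlation. A minimal sketch on simulated data, assuming a simple Pearson correlation; all values and names are hypothetical:

    ```python
    # Sketch: correlating ROI responses with behavioural performance per group,
    # analogous to the STS/STG brain-behaviour analysis described above.
    # All data are simulated; coupling is built in for "controls" only.
    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(1)
    n = 17
    roi_controls = rng.normal(0.5, 0.2, n)                      # ROI contrast estimates
    perf_controls = 0.6 * roi_controls + rng.normal(0, 0.1, n)  # recognition accuracy
    roi_asd = rng.normal(0.5, 0.2, n)
    perf_asd = rng.normal(0.3, 0.1, n)                          # no coupling, by construction

    for group, roi, perf in [("controls", roi_controls, perf_controls),
                             ("ASD", roi_asd, perf_asd)]:
        r, p = pearsonr(roi, perf)
        print(f"{group}: r = {r:.2f}, p = {p:.3f}")
    ```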

    Recognizing visual speech: Reduced responses in visual-movement regions, but not other speech regions in autism

    Speech information inherent in face movements is important for understanding what is said in face-to-face communication. Individuals with autism spectrum disorders (ASD) have difficulties in extracting speech information from face movements, a process called visual-speech recognition. Currently, it is unknown which dysfunctional brain regions or networks underlie the visual-speech recognition deficit in ASD.

    We conducted a functional magnetic resonance imaging (fMRI) study with concurrent eye tracking to investigate visual-speech recognition in adults diagnosed with high-functioning autism and pairwise-matched typically developing controls.

    Compared to the control group (n = 17), the ASD group (n = 17) showed a decreased blood oxygenation level dependent (BOLD) response during visual-speech recognition in the right visual area 5 (V5/MT) and the left temporal visual speech area (TVSA), brain regions implicated in visual-movement perception. The right V5/MT showed a positive correlation with visual-speech task performance in the ASD group, but not in the control group. Psychophysiological interaction (PPI) analysis revealed that functional connectivity between the left TVSA and the bilateral V5/MT, and between the right V5/MT and the left inferior frontal gyrus (IFG), was lower in the ASD than in the control group. In contrast, responses in other speech-motor regions and their connectivity were at the neurotypical level.

    Reduced responses and network connectivity of the visual-movement regions, in conjunction with intact speech-related mechanisms, indicate that perceptual mechanisms might be at the core of the visual-speech recognition deficit in ASD. Communication deficits in ASD might at least partly stem from atypical sensory processing rather than higher-order cognitive processing of socially relevant information.

    Keywords: High-functioning autism, Lip reading, Atypical perception, Motion, fMRI, Face
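
    The PPI analysis mentioned above tests whether task context modulates coupling between two regions via an interaction regressor. Below is a deliberately simplified sketch on simulated signals; it skips the deconvolution step used in standard SPM-style PPI, and all signals, names, and values are hypothetical.

    ```python
    # Sketch: a simplified psychophysiological interaction (PPI) regressor.
    # Real PPI pipelines (e.g. SPM) deconvolve the seed signal first; this toy
    # version skips that step. All signals are simulated.
    import numpy as np

    tr, n_vols = 2.0, 200
    task = np.zeros(n_vols)
    for start in (20, 80, 140):        # hypothetical task blocks
        task[start:start + 20] = 1.0

    # crude canonical-like HRF (difference of gammas), sampled at the TR
    t = np.arange(0, 30, tr)
    hrf = t**5 * np.exp(-t) - 0.35 * t**8 * np.exp(-t) / 1e3
    hrf /= hrf.max()

    task_conv = np.convolve(task, hrf)[:n_vols]
    rng = np.random.default_rng(2)
    seed = 0.8 * task_conv + rng.normal(0, 0.3, n_vols)  # simulated seed (e.g. TVSA)
    ppi = (task - task.mean()) * seed                    # interaction: centred task x seed

    # GLM: target-region signal regressed on task, seed, and PPI regressors
    target = 0.5 * ppi + rng.normal(0, 0.3, n_vols)      # simulated target (e.g. V5/MT)
    X = np.column_stack([np.ones(n_vols), task_conv, seed, ppi])
    betas, *_ = np.linalg.lstsq(X, target, rcond=None)
    print(f"PPI beta (task-dependent coupling): {betas[3]:.2f}")
    ```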

    Grand-average of brain electric responses to expressive and non-expressive chords (averaged across expected, unexpected, and very unexpected conditions).

    Expressive chords elicited a negative effect in the N100 range (maximal at central electrodes), and an N5 that was larger than the N5 elicited by non-expressive chords. The bottom insets show isopotential maps of the N1 and N5 effects (non-expressive subtracted from expressive chords).

    Grand-average of brain electric responses to expected, unexpected (original), and very unexpected chords (averaged across expressive and non-expressive conditions).

    Compared to expected chords, both unexpected and very unexpected chords elicited an ERAN and an N5. The insets in the two bottom panels show isopotential maps of the ERAN and the N5 effect (expected subtracted from [very] unexpected chords).

    Summary of valence, arousal, and surprise ratings (1 corresponded to most unpleasant, least arousing, and least surprising, and 9 to most pleasant, most arousing, and most surprising).

    A shows ratings (mean and SEM) averaged across all excerpts with expected chords only, with an unexpected (original) chord, and with a very unexpected chord, as well as ratings averaged across all expressive and all non-expressive excerpts. B shows ratings (mean and SEM) separately for each of the six experimental conditions.
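
    Summaries of this kind (mean and SEM per condition) reduce to a grouped aggregation. A minimal pandas sketch on hypothetical ratings; the column names and simulated values are illustrative assumptions:

    ```python
    # Sketch: mean and SEM of ratings per experimental condition, mirroring the
    # per-condition summary shown in the figure. Data are hypothetical.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(3)
    conditions = ["expected", "unexpected", "very_unexpected"]
    expression = ["expressive", "non-expressive"]
    rows = [
        {"condition": c, "expression": e, "surprise": rng.integers(1, 10)}
        for c in conditions for e in expression for _ in range(20)
    ]
    df = pd.DataFrame(rows)

    # Panel-B-style summary: one mean and SEM per condition x expression cell
    summary = df.groupby(["condition", "expression"])["surprise"].agg(["mean", "sem"])
    print(summary)
    ```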

    Skin conductance responses (SCRs).

    A: Grand-average of SCRs elicited by expected, unexpected (original), and very unexpected chords (averaged across expressive and non-expressive conditions). Compared to expected chords, unexpected and very unexpected chords elicited clear SCRs. Notably, the SCR elicited by very unexpected chords was larger than the SCR to unexpected (original) chords, showing that the magnitude of SCRs is related to the degree of harmonic expectancy violation. B: Grand-average of SCRs elicited by expressive and non-expressive chords (averaged across expected, unexpected, and very unexpected conditions). Compared to non-expressive chords, chords played with musical expression elicited a clear SCR.
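
    Event-related SCRs like these are commonly quantified as the baseline-corrected peak of the conductance trace within a post-stimulus latency window. A sketch on simulated data; the sampling rate, window bounds, and trace shape are assumptions, not the paper's parameters:

    ```python
    # Sketch: quantifying an event-related skin conductance response (SCR) as
    # the baseline-corrected peak in a post-stimulus window. Sampling rate,
    # window bounds, and the simulated trace are illustrative assumptions.
    import numpy as np

    fs = 50                      # Hz, hypothetical sampling rate
    t = np.arange(0, 8, 1 / fs)  # 8 s of data, stimulus at t = 0

    rng = np.random.default_rng(4)
    scr_shape = np.clip(t - 1.0, 0, None) * np.exp(-(t - 1.0) / 2.0)  # slow rise/decay
    trace = 0.4 * scr_shape / scr_shape.max() + rng.normal(0, 0.01, t.size)  # µS

    baseline = trace[t < 1.0].mean()    # pre-response baseline (0-1 s)
    window = (t >= 1.0) & (t <= 4.0)    # typical SCR latency window
    amplitude = trace[window].max() - baseline
    print(f"SCR amplitude: {amplitude:.3f} µS")
    ```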

    Examples of experimental stimuli.

    First, the original version of a piano sonata was played by a pianist. This original version contained an unexpected chord as arranged by the composer (see the middle panel in the lower right). After the recording, the MIDI file with the unexpected (original) chord was modified offline using MIDI software so that the unexpected chord became either an expected or a very unexpected chord (see the top and bottom panels). From each of these three versions, another version without musical expression was created by eliminating variations in tempo and key-stroke velocities (excerpts were modified offline using MIDI software). Thus, there were six versions of each piano sonata: versions with expected, unexpected, and very unexpected chords, each played with and without musical expression.
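
    The non-expressive versions described here amount to flattening two MIDI parameters: key-stroke velocity and the tempo map. A sketch of that operation with the mido library; the file names and constant values are assumptions, and the paper's actual MIDI software is not specified. Note that in a freely recorded performance, timing variation also lives in the note onset times themselves, so a full flattening would additionally re-quantise note timings.

    ```python
    # Sketch: removing musical expression from a MIDI performance by fixing all
    # note velocities and tempo-map events to constants, as described above.
    # File names and constants are illustrative; note timings are left untouched.
    import mido

    mid = mido.MidiFile("sonata_expressive.mid")  # hypothetical input

    for track in mid.tracks:
        for i, msg in enumerate(track):
            if msg.type == "note_on" and msg.velocity > 0:
                # constant key-stroke velocity -> no loudness variation
                track[i] = msg.copy(velocity=64)
            elif msg.type == "set_tempo":
                # constant tempo (500000 µs per beat = 120 bpm) -> flat tempo map
                track[i] = msg.copy(tempo=500000)

    mid.save("sonata_non_expressive.mid")
    ```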