
    Cracking the social code of speech prosody using reverse correlation

    Human listeners excel at forming high-level social representations about each other, even from the briefest of utterances. In particular, pitch is widely recognized as the auditory dimension that conveys most of the information about a speaker's traits, emotional states, and attitudes. While past research has primarily looked at the influence of mean pitch, almost nothing is known about how intonation patterns, i.e., finely tuned pitch trajectories around the mean, may determine social judgments in speech. Here, we introduce an experimental paradigm that combines state-of-the-art voice transformation algorithms with psychophysical reverse correlation and show that two of the most important dimensions of social judgments, a speaker's perceived dominance and trustworthiness, are driven by robust and distinguishing pitch trajectories in short utterances like the word "Hello," which remained remarkably stable whether male or female listeners judged male or female speakers. These findings reveal a unique communicative adaptation that enables listeners to infer social traits regardless of speakers' physical characteristics, such as sex and mean pitch. By characterizing how any given individual's mental representations may differ from this generic code, the method introduced here opens avenues to explore dysprosody and social-cognitive deficits in disorders such as autism spectrum disorder and schizophrenia. In addition, once derived experimentally, these prototypes can be applied to novel utterances, thus providing a principled way to modulate personality impressions in arbitrary speech signals.
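
    The reverse-correlation analysis behind this kind of paradigm can be sketched in a few lines: random pitch contours are imposed on the same utterance, and the listener's binary judgments are used to estimate the prototype trajectory. The sketch below is illustrative only, assuming a two-interval task and a first-order (classification-image) analysis; the trial counts, perturbation sizes, and simulated response rule are hypothetical, not the authors' implementation.

```python
import numpy as np

# Illustrative reverse-correlation sketch (not the authors' pipeline): each trial pairs
# two versions of the same word "Hello", each with a random pitch contour (in cents),
# and the listener picks the one that sounds, e.g., more dominant.
rng = np.random.default_rng(0)
n_trials, n_segments = 500, 6                                   # 6 pitch break-points per utterance
contours = rng.normal(0.0, 70.0, size=(n_trials, 2, n_segments))

# Hypothetical listener: prefers the version whose final segment falls lower in pitch.
chosen = np.argmin(contours[:, :, -1], axis=1)

# First-order kernel (classification image): mean chosen contour minus mean rejected contour.
picked = contours[np.arange(n_trials), chosen]
rejected = contours[np.arange(n_trials), 1 - chosen]
kernel = picked.mean(axis=0) - rejected.mean(axis=0)
print(np.round(kernel, 1))    # estimated pitch trajectory driving the "dominant" judgment
```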

    The gray matter volume of the amygdala is correlated with the perception of melodic intervals: a voxel-based morphometry study

    Music is not simply a series of organized pitches, rhythms, and timbres; it is also capable of evoking emotions. In the present study, voxel-based morphometry (VBM) was employed to explore the neural basis that may link music to emotion. To do this, we identified the neuroanatomical correlates of the ability to extract pitch interval size in a music segment (i.e., interval perception) in a large population of healthy young adults (N = 264). Behaviorally, we found that interval perception was correlated with daily emotional experiences, indicating an intrinsic link between music and emotion. Neurally, and as expected, we found that interval perception was positively correlated with the gray matter volume (GMV) of the bilateral temporal cortex. More importantly, a larger GMV of the bilateral amygdala was associated with better interval perception, suggesting that the amygdala, which is the neural substrate of emotional processing, is also involved in music processing. In sum, our study provides some of the first neuroanatomical evidence for the association between the amygdala and music, which contributes to our understanding of exactly how music evokes emotional responses.
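
    As a rough illustration of the kind of voxel-wise analysis VBM entails, the sketch below correlates gray-matter volume at each voxel with a behavioral score after regressing out nuisance covariates. The function and variable names (vbm_correlation, covariates such as age or total brain volume) are assumptions for illustration, not the study's actual pipeline.

```python
import numpy as np
from scipy import stats

def vbm_correlation(gmv, score, covariates):
    """Voxel-wise correlation sketch.
    gmv: (n_subjects, n_voxels) gray-matter volumes; score: (n_subjects,) behavioral scores;
    covariates: (n_subjects, k) nuisance regressors (e.g., age, sex, total brain volume)."""
    X = np.column_stack([np.ones(len(score)), covariates])
    residualize = lambda y: y - X @ np.linalg.lstsq(X, y, rcond=None)[0]  # remove covariate effects
    score_r = residualize(score)
    r = np.empty(gmv.shape[1])
    p = np.empty(gmv.shape[1])
    for v in range(gmv.shape[1]):
        r[v], p[v] = stats.pearsonr(residualize(gmv[:, v]), score_r)
    return r, p   # per-voxel partial correlations; multiple-comparison correction comes afterwards
```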

    The cognitive organization of music knowledge: a clinical analysis

    Despite much recent interest in the clinical neuroscience of music processing, the cognitive organization of music as a domain of non-verbal knowledge has been little studied. Here we addressed this issue systematically in two expert musicians with clinical diagnoses of semantic dementia and Alzheimer’s disease, in comparison with a control group of healthy expert musicians. In a series of neuropsychological experiments, we investigated associative knowledge of musical compositions (musical objects), musical emotions, musical instruments (musical sources), and music notation (musical symbols). These aspects of music knowledge were assessed in relation to musical perceptual abilities and extra-musical neuropsychological functions. The patient with semantic dementia showed relatively preserved recognition of musical compositions and musical symbols despite severely impaired recognition of musical emotions and musical instruments from sound. In contrast, the patient with Alzheimer’s disease showed impaired recognition of compositions, with somewhat better recognition of composer and musical era, and impaired comprehension of musical symbols, but normal recognition of musical emotions and musical instruments from sound. The findings suggest that music knowledge is fractionated, and superordinate musical knowledge is relatively more robust than knowledge of particular music. We propose that music constitutes a distinct domain of non-verbal knowledge but shares certain cognitive organizational features with other brain knowledge systems. Within the domain of music knowledge, dissociable cognitive mechanisms process knowledge derived from physical sources and the knowledge of abstract musical entities.

    Perceptual learning of pitch direction in congenital amusia: evidence from Chinese speakers

    Congenital amusia is a lifelong disorder of musical processing for which no effective treatments have been found. The present study aimed to treat amusics’ impairments in pitch direction identification through auditory training. Prior to training, 20 Chinese-speaking amusics and 20 matched controls were tested on the Montreal Battery of Evaluation of Amusia (MBEA) and two psychophysical pitch threshold tasks for identification of pitch direction in speech and music. Subsequently, 10 of the 20 amusics undertook 10 sessions of adaptive-tracking pitch direction training, while the remaining 10 received no training. After training, all amusics were re-tested on the pitch threshold tasks and on the three pitch-based MBEA subtests. Compared with untrained amusics, trained amusics demonstrated significantly improved thresholds for pitch direction identification in both speech and music, reaching the level of non-amusic control participants, although no significant difference was observed between trained and untrained amusics on the MBEA subtests. This provides the first clear positive evidence for improvement in pitch direction processing through auditory training in amusia. Further training studies are required to target different deficit areas in congenital amusia, so as to reveal which aspects of improvement will be most beneficial to the normal functioning of musical processing.
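
    "Adaptive tracking" here refers to staircase procedures that home in on a listener's threshold by making the pitch change smaller after correct responses and larger after errors. Below is a minimal two-down/one-up staircase sketch; the starting value, step ratio, reversal count, and the simulated listener are all assumptions for illustration, not the study's protocol.

```python
import random

def run_staircase(respond, start=4.0, step=1.5, floor=0.05, n_reversals=8):
    """respond(delta) -> True if the listener labels the pitch direction correctly
    for a pitch change of `delta` semitones. Returns an estimated threshold."""
    delta, going_down, reversals, streak = start, True, [], 0
    while len(reversals) < n_reversals:
        if respond(delta):
            streak += 1
            if streak == 2:                           # two correct in a row -> make it harder
                streak = 0
                if not going_down:
                    reversals.append(delta)           # direction change = reversal
                going_down, delta = True, max(delta / step, floor)
        else:                                         # one error -> make it easier
            streak = 0
            if going_down:
                reversals.append(delta)
            going_down, delta = False, delta * step
    return sum(reversals[-6:]) / len(reversals[-6:])  # threshold = mean of the last reversals

# Hypothetical listener who starts guessing once the change drops below ~1 semitone:
print(round(run_staircase(lambda d: random.random() < 0.5 + 0.5 * min(d, 1.0)), 2))
```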

    Neural basis of acquired amusia and its recovery after stroke

    Although acquired amusia is a relatively common disorder after stroke, its precise neuroanatomical basis is still unknown. To evaluate which brain regions form the neural substrate for acquired amusia and its recovery, we performed a voxel-based lesion-symptom mapping (VLSM) and morphometry (VBM) study with 77 human stroke subjects. Structural MRIs were acquired at acute and 6-month poststroke stages. Amusia and aphasia were behaviorally assessed at acute and 3-month poststroke stages using the Scale and Rhythm subtests of the Montreal Battery of Evaluation of Amusia (MBEA) and language tests. VLSM analyses indicated that amusia was associated with a lesion area comprising the superior temporal gyrus, Heschl's gyrus, insula, and striatum in the right hemisphere, clearly different from the lesion pattern associated with aphasia. Parametric analyses of MBEA Pitch and Rhythm scores showed extensive lesion overlap in the right striatum, as well as in the right Heschl's gyrus and superior temporal gyrus. Lesions associated with Rhythm scores extended more superiorly and posterolaterally. VBM analysis of volume changes from the acute to the 6-month stage showed a clear decrease in gray matter volume in the right superior and middle temporal gyri in nonrecovered amusic patients compared with nonamusic patients. This increased atrophy was more evident in anterior temporal areas in rhythm amusia and in posterior temporal and temporoparietal areas in pitch amusia. Overall, the results implicate right temporal and subcortical regions as the crucial neural substrate for acquired amusia and highlight the importance of different temporal lobe regions for the recovery of amusia after stroke.
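
    The core VLSM step can be illustrated briefly: at each voxel, patients whose lesion covers that voxel are compared with patients whose lesion spares it on the behavioral score of interest. The sketch below is a generic illustration under assumed inputs (binary lesion masks and MBEA-style scores), not the study's analysis code.

```python
import numpy as np
from scipy import stats

def vlsm(lesion_masks, scores, min_patients=5):
    """lesion_masks: (n_patients, n_voxels) binary lesion maps; scores: (n_patients,) behavior."""
    n_voxels = lesion_masks.shape[1]
    t = np.full(n_voxels, np.nan)
    p = np.full(n_voxels, np.nan)
    for v in range(n_voxels):
        hit = lesion_masks[:, v].astype(bool)
        if hit.sum() < min_patients or (~hit).sum() < min_patients:
            continue                                   # skip voxels lesioned in too few patients
        t[v], p[v] = stats.ttest_ind(scores[~hit], scores[hit])   # spared vs. lesioned patients
    return t, p      # statistical maps; thresholded with permutation/FDR correction downstream
```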

    Effects of culture on musical pitch perception.

    The strong association between music and speech has been supported by recent research focusing on musicians' superior abilities in second language learning and neural encoding of foreign speech sounds. However, evidence for a double association, that is, the influence of linguistic background on music pitch processing and its disorders, remains elusive. Because languages differ in their usage of elements (e.g., pitch) that are also essential for music, a unique opportunity for examining such language-to-music associations comes from a cross-cultural (linguistic) comparison of congenital amusia, a neurogenetic disorder affecting the music (pitch and rhythm) processing of about 5% of the Western population. In the present study, two populations (Hong Kong and Canada) were compared. One spoke a tone language in which differences in voice pitch correspond to differences in word meaning (in Hong Kong Cantonese, /si/ means 'teacher' and 'to try' when spoken in a high and a mid pitch pattern, respectively). Using the On-line Identification Test of Congenital Amusia, we found that Cantonese speakers as a group tend to show enhanced pitch perception ability compared to speakers of Canadian French and English (non-tone languages). This enhanced ability occurs in the absence of differences in rhythmic perception and persists even after relevant factors such as musical background and age are controlled for. Following a common definition of amusia (5% of the population), we found that Hong Kong pitch amusics also show enhanced pitch abilities relative to their Canadian counterparts. These findings not only provide critical evidence for a double association of music and speech, but also argue for the reconceptualization of communicative disorders within a cultural framework. Along with recent studies documenting cultural differences in visual perception, our auditory evidence challenges the common assumption of universality of basic mental processes and speaks to the domain generality of culture-to-perception influences.
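
    The "5% of the population" definition is an operational cutoff: scores in the lowest 5% of the reference distribution are classified as amusic before the groups are compared. A toy sketch with entirely hypothetical scores:

```python
import numpy as np

# Hypothetical pitch-subtest scores (out of 30) for two language groups; illustration only.
rng = np.random.default_rng(1)
cantonese = rng.normal(26, 3, size=200)
non_tone = rng.normal(24, 3, size=200)

# Bottom 5% of the pooled distribution is labeled "amusic", then the groups are compared.
cutoff = np.percentile(np.concatenate([cantonese, non_tone]), 5)
n_amusic_cantonese = int((cantonese < cutoff).sum())
n_amusic_non_tone = int((non_tone < cutoff).sum())
print(f"cutoff={cutoff:.1f}, amusics: Cantonese={n_amusic_cantonese}, non-tone={n_amusic_non_tone}")
```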

    Is there a tape recorder in your head? How the brain stores and retrieves musical melodies

    Music consists of strings of sound that vary over time. Technical devices, such as tape recorders, store musical melodies by transcribing event times of temporal sequences into consecutive locations on the storage medium. Playback occurs by reading out the stored information in the same sequence. However, it is unclear how the brain stores and retrieves auditory sequences. Neurons in the anterior lateral belt of auditory cortex are sensitive to the combination of sound features in time, but the integration time of these neurons is not sufficient to store longer sequences that stretch over several seconds, minutes or more. Functional imaging studies in humans provide evidence that music is stored instead within the auditory dorsal stream, including premotor and prefrontal areas. In monkeys, these areas are the substrate for learning of motor sequences. It appears, therefore, that the auditory dorsal stream transforms musical sequence information into motor sequence information and vice versa, realizing what are known as forward and inverse models. The basal ganglia and the cerebellum are involved in setting up the sensorimotor associations, translating timing information into spatial codes and back again.
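
    The tape-recorder contrast drawn in the passage can be made concrete with a toy example: a melody stored as consecutive (onset, pitch, duration) events and read back in the stored order, which is exactly the scheme the text argues the brain does not use. All values below are hypothetical.

```python
# Toy "tape recorder": events written to consecutive storage slots, replayed in order.
melody = [(0.0, "C4", 0.5), (0.5, "E4", 0.5), (1.0, "G4", 1.0)]   # (onset s, pitch, duration s)

def playback(events):
    for onset, pitch, duration in events:       # read out in the same sequence it was stored
        print(f"t={onset:.1f}s  play {pitch} for {duration:.1f}s")

playback(melody)
```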