
    Voice-sensitive regions in the dog and human brain are revealed by comparative fMRI

    During the approximately 18–32 thousand years of domestication [1], dogs and humans have shared a similar social environment [2]. Dog and human vocalizations are thus familiar and relevant to both species [3], although they belong to evolutionarily distant taxa, as their lineages split approximately 90–100 million years ago [4]. In this first comparative neuroimaging study of a nonprimate and a primate species, we made use of this special combination of shared environment and evolutionary distance. We presented dogs and humans with the same set of vocal and nonvocal stimuli to search for functionally analogous voice-sensitive cortical regions. We demonstrate that voice areas exist in dogs and that they show a similar pattern to anterior temporal voice areas in humans. Our findings also reveal that sensitivity to vocal emotional valence cues engages similarly located nonprimary auditory regions in dogs and humans. Although parallel evolution cannot be excluded, our findings suggest that voice areas may have a more ancient evolutionary origin than previously known.

    Humans assess the emotional content of conspecific and dog vocalizations on similar acoustical bases

    Compared to other mammals, humans are extremely vocal. Language greatly eases the expression of inner states, but there is an evolutionarily more conservative set of vocalizations, the non-verbal vocal bursts (or calls), that plays an important role in human emotion expression. Homologies can be drawn between some of these calls and mammalian vocalizations: based on its acoustics, for example, human laughter can be derived from the pleasure vocalizations of apes. Humans are also good at recognizing the emotional states of conspecifics based solely on these vocal bursts. Moreover, they can perform surprisingly well in assessing the inner states of other species. Several studies showed, for example, that human listeners can attribute probable inner states to dog barks. Dogs are a good source of emotion-expressing calls, thanks to their rich and variable repertoire and the fact that they have lived with humans for more than a thousand years. It is not known, however, whether human listeners use the same acoustic cues to assess emotional content in conspecific and non-conspecific vocalizations. Here we aimed to compare how human listeners rated the emotional content of dog and human non-verbal vocalizations, and to explore which acoustic parameters affected their responses. To test this, we compiled a pool of 100 dog and 100 human non-verbal vocalizations of various types from diverse social contexts, and designed an online survey in which every sound sample could be rated in parallel along two dimensions: emotional valence (ranging from negative to positive) and emotional intensity (ranging from not aroused to maximally aroused). The sound samples were presented in random order for each subject (N=39). We calculated the mean of the valence and intensity scores for each sound sample. We also measured the average length of bursts within each sample (call length), the fundamental frequency (f0) and the harmonics-to-noise ratio (HNR). Comparisons of these acoustic measures showed that, on average, dog vocalizations had shorter call lengths and were noisier, but their overall f0 did not differ from that of the human vocalizations. Valence ratings did not differ across species, but human vocalizations were rated as less intense. Importantly, linear regressions revealed similar relationships between acoustic features and emotional ratings for human and dog vocalizations. Call length had a significant effect on valence: both dog and human sounds with shorter call lengths were rated as more positive. F0, in contrast, mainly influenced the intensity scores: higher-pitched human and dog sounds alike were rated as more intense. We also found some species-specific relationships between acoustics and perceptual scores: dog vocalizations with a shorter call length or with a higher harmonics-to-noise ratio were rated as less intense. In sum, acoustic parameters affected humans’ emotional ratings independently of the source species of these vocal expressions, despite the acoustic and emotional differences between human and dog vocalizations. These findings suggest that humans utilize the same mental mechanisms for recognizing conspecific and heterospecific vocal emotions.
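
    A minimal sketch of the kind of analysis described above, assuming the per-sample acoustic measures and mean listener ratings are already available as arrays; the f0-extraction helper, the example file name and values, and the use of librosa/scipy are illustrative assumptions, not the authors' actual pipeline.

        import numpy as np
        import librosa
        from scipy import stats

        def mean_f0(path, fmin=60.0, fmax=1000.0):
            """Estimate the mean fundamental frequency (Hz) of one sound sample."""
            y, sr = librosa.load(path, sr=None)
            f0, voiced_flag, voiced_prob = librosa.pyin(y, fmin=fmin, fmax=fmax, sr=sr)
            return np.nanmean(f0)  # average over voiced frames only

        # Hypothetical per-sample measures and mean listener ratings for five stimuli
        # (in practice, f0_hz would come from calls such as mean_f0("dog_growl_01.wav")).
        call_length = np.array([0.21, 0.45, 0.80, 1.10, 0.35])   # mean burst length (s)
        f0_hz       = np.array([310., 250., 420., 180., 520.])   # mean fundamental frequency (Hz)
        valence     = np.array([4.1, 3.5, 2.9, 2.4, 3.8])        # negative .. positive
        intensity   = np.array([2.8, 3.0, 4.2, 2.5, 4.6])        # not aroused .. maximally aroused

        # Relationships reported above: shorter calls rated as more positive,
        # higher-pitched sounds rated as more intense.
        valence_fit   = stats.linregress(call_length, valence)
        intensity_fit = stats.linregress(f0_hz, intensity)
        print(f"valence ~ call length: slope={valence_fit.slope:.2f}, p={valence_fit.pvalue:.3f}")
        print(f"intensity ~ f0:        slope={intensity_fit.slope:.2f}, p={intensity_fit.pvalue:.3f}")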

    Acoustical basis of human emotion assessment of conspecific and dog vocalizations

    Human non-verbal vocal bursts are evolutionarily conservative emotional expressions. Humans can easily assess the inner states of conspecifics based on these calls. Moreover, they can attribute emotions to non-human animal vocalizations too. However, it has not yet been clarified whether the same acoustic cues are used to assess emotional content in conspecific and non-conspecific vocalizations. To test this, we compiled a pool of 100 dog and 100 human non-verbal vocalizations of various types from diverse social contexts, and designed an online survey in which every sample could be rated along emotional valence and intensity. We also measured, within each sample, the average length of calls, the fundamental frequency and the harmonics-to-noise ratio. While valence ratings did not differ across species, human vocalizations were rated as less intense. Linear regressions revealed that both shorter dog and human calls were rated as more positive. In contrast, subjects rated higher-pitched human and dog sounds as more intense. We also found that dog vocalizations with a shorter call length or with a higher HNR were rated as less intense. In conclusion, acoustic parameters affected humans’ emotional ratings independently of the source species of these vocalizations. These findings suggest that humans utilize the same mental mechanisms for recognizing conspecific and heterospecific vocal emotions.

    Humans rely on the same rules to assess emotional valence and intensity in conspecific and dog vocalizations

    Humans excel at assessing conspecific emotional valence and intensity based solely on non-verbal vocal bursts that are also common in other mammals. It is not known, however, whether human listeners rely on similar acoustic cues to assess emotional content in conspecific and heterospecific vocalizations, and which acoustic parameters affect their performance. Here, for the first time, we directly compared the emotional valence and intensity perception of dog and human non-verbal vocalizations. We revealed similar relationships between acoustic features and the emotional valence and intensity ratings of human and dog vocalizations: those with shorter call lengths were rated as more positive, whereas those with a higher pitch were rated as more intense. Our findings demonstrate that humans rate conspecific emotional vocalizations along basic acoustic rules, and that they apply similar rules when processing dog vocal expressions. This suggests that humans may utilize similar mental mechanisms for recognizing human and heterospecific vocal emotions.

    The acoustic bases of human voice identity processing in dogs

    Speech carries identity-diagnostic acoustic cues that help individuals recognize each other during vocal–social interactions. In humans, fundamental frequency, formant dispersion and harmonics-to-noise ratio serve as characteristics along which speakers can be reliably separated. The ability to infer a speaker’s identity is also adaptive for members of other species (like companion animals) for whom humans (as owners) are relevant. The acoustic bases of speaker recognition in non-humans are unknown. Here, we tested whether dogs can recognize their owner’s voice and whether they rely on the same acoustic parameters for such recognition as humans use to discriminate speakers. Stimuli were pre-recorded sentences spoken by the owner and control persons, played through loudspeakers placed behind two non-transparent screens (with each screen hiding a person). We investigated the association between the acoustic distance of the speakers (examined along several dimensions relevant to intraspecific voice identification) and the dogs’ behavior. Dogs chose their owner’s voice more often than the control persons’ voices, suggesting that they can identify it. Choosing success and time spent looking in the direction of the owner’s voice were positively associated, showing that looking time is an index of the ease of choice. The acoustic distance between speakers in mean fundamental frequency and jitter was positively associated with looking time, indicating that the shorter the acoustic distance between speakers with regard to these parameters, the harder the decision. Thus, dogs use these cues to discriminate their owner’s voice from unfamiliar voices. These findings reveal that dogs use some, but probably not all, of the acoustic parameters that humans use to identify speakers. Although dogs can detect fine changes in speech, their perceptual system may not be fully attuned to identity-diagnostic cues in the human voice.
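
    A minimal sketch of the association reported above, assuming per-trial acoustic distances between the owner’s and the control person’s voice (e.g. the absolute difference in mean f0 and in jitter) and the dogs’ looking times are available as arrays; the example values and the use of scipy are illustrative assumptions, not the study’s actual analysis.

        import numpy as np
        from scipy import stats

        # Hypothetical per-trial data: acoustic distance between the owner's and the
        # control speaker's voice in mean f0 (Hz) and in jitter (%), and the time the
        # dog spent looking in the direction of the owner's voice (s).
        f0_distance     = np.array([12.0, 35.0, 60.0, 8.0, 95.0, 40.0])
        jitter_distance = np.array([0.2, 0.6, 1.1, 0.1, 1.5, 0.8])
        looking_time    = np.array([1.5, 2.8, 4.0, 1.2, 5.5, 3.1])

        # The abstract reports a positive association: the larger the acoustic distance,
        # the longer the looking time (i.e. the easier the decision).
        r_f0, p_f0 = stats.pearsonr(f0_distance, looking_time)
        r_jit, p_jit = stats.pearsonr(jitter_distance, looking_time)
        print(f"f0 distance vs. looking time:     r={r_f0:.2f}, p={p_f0:.3f}")
        print(f"jitter distance vs. looking time: r={r_jit:.2f}, p={p_jit:.3f}")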

    Social relationship-dependent neural response to speech in dogs

    In humans, the social relationship with the speaker affects the neural processing of speech, as exemplified by children's auditory and reward responses to their mother's utterances. Family dogs show human-analogue attachment behavior towards their owner, and neuroimaging has revealed auditory cortex and reward center sensitivity to verbal praise in dog brains. Combining behavioral and non-invasive fMRI data, we investigated the effect of dogs' social relationship with the speaker on speech processing. Dogs listened to praising and neutral speech from their owners and from a control person. We found a positive correlation between dogs' behaviorally measured attachment scores towards their owners and the neural activity increase for the owner's voice in the caudate nucleus, and an activity increase in the secondary auditory caudal ectosylvian gyrus and the caudate nucleus for the owner's praise. By identifying social relationship-dependent neural reward responses, our study reveals similarities in the neural mechanisms modulated by infant-mother and dog-owner attachment.