33 research outputs found

    Voice cues are used in a similar way by blind and sighted adults when assessing women’s body size

    Humans’ ability to gauge another person’s body size from their voice alone may serve multiple functions ranging from threat assessment to speaker normalization. However, how this ability is acquired remains unknown. In two experiments we tested whether sighted, congenitally blind and late blind adults could accurately judge the relative heights of women from paired voice stimuli, and, importantly, whether errors in size estimation varied with task difficulty across groups. Both blind (n = 56) and sighted (n = 61) listeners correctly judged women’s relative heights on approximately 70% of low-difficulty trials, corroborating previous findings for judging men’s heights. However, accuracy dropped to chance levels for intermediate-difficulty trials and to 25% for high-difficulty trials, regardless of the listener’s sightedness, duration of vision loss, sex, or age. Thus, blind adults estimated women’s height with the same degree of accuracy, but also the same pattern of errors, as did sighted controls. Our findings provide further evidence that visual experience is not necessary for accurate body size estimation. Rather, both blind and sighted listeners appear to follow a general rule, mapping low auditory frequencies to largeness across a range of contexts. This sound-size mapping emerges without visual experience, and is likely very important for humans.

    Volitional exaggeration of body size through fundamental and formant frequency modulation in humans

    Several mammalian species scale their voice fundamental frequency (F0) and formant frequencies in competitive and mating contexts, reducing vocal tract and laryngeal allometry and thereby exaggerating apparent body size. Although humans’ rare capacity to volitionally modulate these same frequencies is thought to subserve articulated speech, the potential function of voice frequency modulation in human nonverbal communication remains largely unexplored. Here, the voices of 167 men and women from Canada, Cuba, and Poland were recorded in a baseline condition and while volitionally imitating a physically small and large body size. Modulation of F0, formant spacing (∆F), and apparent vocal tract length (VTL) were measured using Praat. Our results indicate that men and women spontaneously and systematically increased VTL and decreased F0 to imitate a large body size, and reduced VTL and increased F0 to imitate a small size. These voice modulations did not differ substantially across cultures, indicating potentially universal sound-size correspondences or anatomical and biomechanical constraints on voice modulation. In each culture, men generally modulated their voices (particularly formants) more than did women. This latter finding could help to explain the sexual dimorphism in F0 and formants that is currently unaccounted for by sexual dimorphism in human vocal anatomy and body size.
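
    A note on the VTL measure above: apparent vocal tract length is conventionally derived from formant spacing under a uniform-tube approximation (VTL = c / (2·∆F)). The sketch below illustrates that relationship only; it is not the authors' Praat procedure, and the formant values, speed-of-sound constant, and function names are illustrative assumptions.

        def estimate_delta_f(formants_hz):
            # Least-squares fit of F_i = dF * (2*i - 1) / 2, the resonance pattern
            # of a uniform tube closed at the glottis and open at the lips.
            weights = [(2 * i - 1) / 2.0 for i in range(1, len(formants_hz) + 1)]
            return sum(f * w for f, w in zip(formants_hz, weights)) / sum(w * w for w in weights)

        def apparent_vtl_cm(delta_f_hz, speed_of_sound_cm_s=35000.0):
            # VTL = c / (2 * dF) under the same uniform-tube assumption.
            return speed_of_sound_cm_s / (2.0 * delta_f_hz)

        # Hypothetical F1-F4 measurements (Hz) from one baseline recording.
        formants = [560.0, 1480.0, 2550.0, 3600.0]
        d_f = estimate_delta_f(formants)
        print(f"dF ~ {d_f:.0f} Hz, apparent VTL ~ {apparent_vtl_cm(d_f):.1f} cm")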

    Spontaneous Voice Gender Imitation Abilities in Adult Speakers

    Background: The frequency components of the human voice play a major role in signalling the gender of the speaker. A voice imitation study was conducted to investigate individuals’ ability to make behavioural adjustments to fundamental frequency (F0) and formants (Fi) in order to manipulate their expression of voice gender. Methodology/Principal Findings: Thirty-two native British-English adult speakers were asked to read different types of text (words, a sentence, a passage) out loud using their normal voice and then while sounding as ‘masculine’ and ‘feminine’ as possible. Overall, the results show that both men and women raised their F0 and Fi when feminising their voice, and lowered their F0 and Fi when masculinising their voice. Conclusions/Significance: These observations suggest that adult speakers are capable of spontaneous glottal and vocal tract length adjustments to express masculinity and femininity in their voice. These results point to a “gender code”, whereby speakers make conventionalized use of the existing sex dimorphism to vary the expression of their gender and gender-related attributes.

    Evolutionary Developmental Biology and Human Language Evolution: Constraints on Adaptation


    Crossmodal correspondences: A tutorial review


    The processing and perception of size information in speech sounds

    There is information in speech sounds about the length of the vocal tract; specifically, as a child grows, the resonators in the vocal tract grow and the formant frequencies of the vowels decrease. It has been hypothesized that the auditory system applies a scale transform to all sounds to segregate size information from resonator-shape information, and thereby enhance both size perception and speech recognition [Irino and Patterson, Speech Commun. 36, 181-203 (2002)]. This paper describes size discrimination and vowel recognition experiments designed to provide evidence for such an auditory scaling mechanism. Vowels were scaled to represent people with vocal tracts much longer and shorter than normal, and with pitches much higher and lower than normal. The results of the discrimination experiments show that listeners can make fine judgments about the relative size of speakers, and they can do so for vowels scaled well beyond the normal range. Similarly, the recognition experiments show good performance for vowels in the normal range and for vowels scaled well beyond the normal range of experience. Together, the experiments support the hypothesis that the auditory system automatically normalizes for the size information in communication sounds. © 2005 Acoustical Society of America.
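
    The vowels in these experiments were scaled to simulate speakers far outside the normal range. As a rough illustration of the underlying relationship (not the authors' resynthesis procedure), scaling the vocal tract by some factor shifts every resonance by the inverse factor, while glottal pulse rate can be varied independently; the function name and numeric values below are illustrative assumptions.

        def scale_vowel(formants_hz, f0_hz, vtl_ratio, gpr_ratio):
            # A vocal tract vtl_ratio times the reference length has resonances
            # scaled by 1 / vtl_ratio; glottal pulse rate (F0) scales independently.
            return [f / vtl_ratio for f in formants_hz], f0_hz * gpr_ratio

        ref_formants, ref_f0 = [700.0, 1220.0, 2600.0], 120.0  # /a/-like reference vowel
        # A speaker much smaller than normal: half the vocal tract length, double the pitch.
        small_formants, small_f0 = scale_vowel(ref_formants, ref_f0, vtl_ratio=0.5, gpr_ratio=2.0)
        print(small_formants, small_f0)  # formants doubled to 1400/2440/5200 Hz, F0 to 240 Hz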

    Vowel normalisation: Time-domain processing of the internal dynamics of speech

    Human listeners can identify vowels regardless of speaker size, although the sound waves for an adult and a child speaking the ‘same’ vowel would differ enormously. The differences are mainly due to differences in vocal tract length (VTL) and glottal pulse rate (GPR), which are both related to body size. Automatic speech recognition machines are notoriously bad at understanding children if they have been trained on the speech of an adult. In this paper, we propose that the auditory system adapts its analysis of speech sounds, dynamically and automatically, to the GPR and VTL of the speaker on a syllable-to-syllable basis. We illustrate how this rapid adaptation might be performed with the aid of a computational version of the auditory image model, and we propose that an auditory preprocessor of this form would improve the robustness of speech recognisers.
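
    The proposal above is a time-domain mechanism built on the auditory image model; the sketch below shows only the much simpler frequency-domain intuition behind such normalisation, namely that a common 1/VTL factor multiplies every resonance, so formant ratios are roughly speaker-size invariant. The values and function name are illustrative assumptions, not the paper's method.

        from math import prod

        def size_invariant_pattern(formants_hz):
            # Dividing each formant by the geometric mean of all formants cancels
            # the common 1/VTL scale factor, leaving a size-independent pattern.
            gmean = prod(formants_hz) ** (1.0 / len(formants_hz))
            return [f / gmean for f in formants_hz]

        adult = [700.0, 1220.0, 2600.0]       # /a/-like vowel, adult vocal tract
        child = [f * 1.4 for f in adult]      # same vowel, roughly 40% shorter vocal tract
        print(size_invariant_pattern(adult))
        print(size_invariant_pattern(child))  # identical ratios: speaker size removed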

    Neural Representation of Auditory Size in the Human Voice and in Sounds from Other Resonant Sources

    The size of a resonant source can be estimated from the acoustic-scale information in the sound [1-3]. Previous studies revealed that the posterior superior temporal gyrus (STG) responds to acoustic scale in human speech when it is controlled for spectral-envelope change (unpublished data). Here we investigate whether the STG activity is specific to the processing of acoustic scale in the human voice or whether it reflects a generic mechanism for the analysis of acoustic scale in resonant sources. In two functional magnetic resonance imaging (fMRI) experiments, we measured brain activity in response to changes in acoustic scale in different categories of resonant sound (human voice, animal call, and musical instrument). We show that STG is activated bilaterally for spectral-envelope changes in general; it responds to changes in category as well as acoustic scale. Activity in left posterior STG is specific to acoustic scale in human voices and is not responsive to acoustic scale in other resonant sources. In contrast, the anterior temporal lobe and intraparietal sulcus are activated by changes in acoustic scale across categories. The results imply that the human voice requires special processing of acoustic scale, whereas the anterior temporal lobe and intraparietal sulcus process auditory size information independent of source category. © 2007 Elsevier Ltd. All rights reserved.