
    Visualizing sound emission of elephant vocalizations: evidence for two rumble production types

    Recent comparative data reveal that formant frequencies are cues to body size in animals, due to a close relationship between formant frequency spacing, vocal tract length and overall body size. Accordingly, intriguing morphological adaptations that elongate the vocal tract in order to lower formants occur in several species, with the size exaggeration hypothesis being proposed to explain most of these observations. While the elephant trunk is strongly implicated in the low formants of elephant rumbles, it is unknown whether elephants emit these vocalizations exclusively through the trunk, or whether the mouth is also involved in rumble production. In this study we used a sound visualization method (an acoustic camera) to record rumbles of five captive African elephants during spatial separation and subsequent bonding situations. Our results showed that the female elephants in our analysis produced two distinct types of rumble vocalizations based on vocal path differences: a nasally- and an orally-emitted rumble. Interestingly, nasal rumbles predominated during contact calling, whereas oral rumbles were mainly produced in bonding situations. In addition, nasal and oral rumbles varied considerably in their acoustic structure. In particular, the values of the first two formants reflected the estimated lengths of the vocal paths, corresponding to a vocal tract length of around 2 meters for nasal and around 0.7 meters for oral rumbles. These results suggest that African elephants may switch vocal paths to actively vary vocal tract length (with considerable variation in formants) according to context, and they call for further research investigating the function of formant modulation in elephant vocalizations. Furthermore, by confirming the use of the elephant trunk in long-distance rumble production, our findings provide an explanation for the extremely low formants in these calls, and may also indicate that formant lowering functions to increase call propagation distances in this species.
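    The vocal tract lengths quoted above follow from the standard uniform-tube model, in which formant spacing is inversely proportional to tube length. A minimal sketch of that estimate; the speed-of-sound value and the example formant pairs are illustrative assumptions, not measurements from the study:

```python
SPEED_OF_SOUND = 350.0  # m/s in warm, humid air (assumed)

def vocal_tract_length(f1: float, f2: float) -> float:
    """Estimate vocal tract length (m) from the spacing of the first two
    formants, assuming a uniform tube closed at the glottal end:
    F_n = (2n - 1) * c / (4L)  =>  delta_F = F2 - F1 = c / (2L)."""
    delta_f = f2 - f1
    return SPEED_OF_SOUND / (2.0 * delta_f)

# Hypothetical formant values consistent with the reported tract lengths:
print(vocal_tract_length(44, 131))   # ~2.0 m -> nasal (trunk) rumble
print(vocal_tract_length(125, 375))  # ~0.7 m -> oral rumble
```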

    Reconstruction of Phonated Speech from Whispers Using Formant-Derived Plausible Pitch Modulation

    Whispering is a natural, unphonated, secondary aspect of speech communication for most people. However, it is the primary mechanism of communication for some speakers who have impaired voice production mechanisms, such as partial laryngectomees, as well as for those prescribed voice rest, which often follows surgery or damage to the larynx. Unlike most people, who choose when to whisper and when not to, these speakers may have little choice but to rely on whispers for much of their daily vocal interaction. Even though most speakers will whisper at times, and some speakers can only whisper, the majority of today’s computational speech technology systems assume or require phonated speech. This article considers conversion of whispers into natural-sounding phonated speech as a noninvasive prosthetic aid for people with voice impairments who can only whisper. As a by-product, the technique is also useful for unimpaired speakers who choose to whisper. Speech reconstruction systems can be classified into those requiring training and those that do not. Among the latter, a recent parametric reconstruction framework is explored and then enhanced through a refined estimation of plausible pitch from weighted formant differences. The improved reconstruction framework, with proposed formant-derived artificial pitch modulation, is validated through subjective and objective comparison tests alongside state-of-the-art alternatives.
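    The abstract does not give the algorithm, but the general idea of deriving an artificial pitch contour from weighted formant differences can be sketched as follows; the baseline F0, weights, and scaling here are illustrative assumptions, not the parameters of the published system:

```python
import numpy as np

def plausible_pitch(f1, f2, base_f0=120.0, w1=0.6, w2=0.4, scale=0.25):
    """Map per-frame formant tracks (Hz) to an artificial F0 contour.

    Frames whose formants sit above their running mean get a slightly
    raised pitch, and vice versa, yielding natural-sounding modulation."""
    f1 = np.asarray(f1, dtype=float)
    f2 = np.asarray(f2, dtype=float)
    # Weighted, normalized formant deviations drive the pitch excursion.
    dev = (w1 * (f1 - f1.mean()) / (f1.std() + 1e-9)
           + w2 * (f2 - f2.mean()) / (f2.std() + 1e-9))
    return base_f0 * (1.0 + scale * dev)
```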

    Consonant Context Effects on Vowel Sensorimotor Adaptation

    Speech sensorimotor adaptation is the short-term learning of modified articulator movements evoked through sensory-feedback perturbations. A common experimental method manipulates acoustic parameters, such as formant frequencies, using real-time resynthesis of the participant's speech to perturb auditory feedback. While some studies have examined phrases composed of vowels, diphthongs, and semivowels, the bulk of research on auditory feedback-driven sensorimotor adaptation has focused on vowels in neutral contexts (/hVd/). The current study investigates coarticulatory influences of adjacent consonants on sensorimotor adaptation. The purpose is to evaluate differences in the adaptation effects for vowels in consonant environments that vary by place and manner of articulation. In particular, we addressed the hypothesis that contexts with greater intra-articulator coarticulation and more static articulatory postures (alveolars and fricatives) offer greater resistance to vowel adaptation than contexts with primarily inter-articulator coarticulation and more dynamic articulatory patterns (bilabials and stops). Participants completed formant perturbation-driven vowel adaptation experiments for varying CVCs. Results from discrete formant measures at the vowel midpoint were generally consistent with the hypothesis. Analyses of more complete formant trajectories suggest that adaptation can also (or alternatively) influence formant onsets, offsets, and transitions, resulting in complex formant pattern changes that may reflect modifications to consonant articulation.
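    For context, formant perturbation experiments of this kind typically follow a baseline-ramp-hold-washout schedule in which a participant's F1 feedback is shifted in real time. A minimal sketch of such a schedule; the trial counts and the 30% upward F1 shift are assumptions, not this study's parameters:

```python
def f1_shift_factor(trial: int,
                    baseline: int = 20, ramp: int = 20,
                    hold: int = 60, max_shift: float = 0.30) -> float:
    """Return the multiplicative F1 perturbation applied on a given trial."""
    if trial < baseline:                   # unaltered feedback
        return 1.0
    if trial < baseline + ramp:            # gradual onset of the shift
        return 1.0 + max_shift * (trial - baseline) / ramp
    if trial < baseline + ramp + hold:     # full perturbation held
        return 1.0 + max_shift
    return 1.0                             # washout: feedback restored

# Perturbed feedback F1 for a hypothetical 700 Hz vowel over 130 trials:
perturbed_f1 = [700.0 * f1_shift_factor(t) for t in range(130)]
```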

    Reducing Audible Spectral Discontinuities

    In this paper, a common problem in diphone synthesis is discussed, viz., the occurrence of audible discontinuities at diphone boundaries. Informal observations show that spectral mismatch is most likely the cause of this phenomenon. We first set out to find an objective spectral measure for discontinuity. To this end, several spectral distance measures are related to the results of a listening experiment. Then, we studied the feasibility of extending the diphone database with context-sensitive diphones to reduce the occurrence of audible discontinuities. The number of additional diphones is limited by clustering consonant contexts that have a similar effect on the surrounding vowels, on the basis of the best-performing distance measure. A listening experiment showed that the addition of these context-sensitive diphones significantly reduces the number of audible discontinuities.
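    As one concrete instance of the kind of spectral distance measure compared here, the mismatch at a diphone join can be scored as the distance between log-magnitude spectra of the frames on either side of the boundary. This particular measure and the windowing are assumptions made for the sketch, not the paper's best-performing measure:

```python
import numpy as np

def boundary_distance(frame_a: np.ndarray, frame_b: np.ndarray) -> float:
    """Spectral mismatch between the last frame of the left diphone
    (frame_a) and the first frame of the right diphone (frame_b)."""
    win = np.hanning(len(frame_a))
    spec_a = np.log(np.abs(np.fft.rfft(frame_a * win)) + 1e-10)
    spec_b = np.log(np.abs(np.fft.rfft(frame_b * win)) + 1e-10)
    return float(np.linalg.norm(spec_a - spec_b))
```

    Joins scoring above some threshold on such a measure would be candidates for replacement with context-sensitive diphones.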

    Effect of formant frequency spacing on perceived gender in pre-pubertal children's voices

    Background: It is usually possible to identify the sex of a pre-pubertal child from their voice, despite the absence of sex differences in fundamental frequency at these ages. While it has been suggested that the overall spacing between formants (formant frequency spacing, ΔF) is a key component of the expression and perception of sex in children's voices, the effect of its continuous variation on sex and gender attribution has not yet been investigated.

    Methodology/Principal findings: In the present study we manipulated voice ΔF of eight-year-olds (two boys and two girls) along continua covering the observed variation of this parameter in pre-pubertal voices, and assessed the effect of this variation on adult ratings of speakers' sex and gender in two separate experiments. In the first experiment (sex identification), adults were asked to categorise the voice as either male or female. The resulting identification function exhibited a gradual slope from male to female voice categories. In the second experiment (gender rating), adults rated the voices on a continuum from “masculine boy” to “feminine girl”, gradually decreasing their masculinity ratings as ΔF increased.

    Conclusions/Significance: These results indicate that the role of ΔF in voice gender perception, which has been reported in adult voices, extends to pre-pubertal children's voices: variation in ΔF not only affects the perceived sex, but also the perceived masculinity or femininity of the speaker. We discuss the implications of these observations for the expression and perception of gender in children's voices, given the absence of anatomical dimorphism in overall vocal tract length before puberty.
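    For illustration, shifting ΔF along a continuum amounts to scaling all formants by a common factor while leaving fundamental frequency untouched, which is equivalent to changing apparent vocal tract length. The study used resynthesis software for the manipulation; the uniform scaling and example values below are simplifying assumptions:

```python
def rescale_delta_f(formants, source_df: float, target_df: float):
    """Scale a list of formant frequencies (Hz) so their average spacing
    moves from source_df to target_df (equivalent to lengthening or
    shortening the apparent vocal tract)."""
    factor = target_df / source_df
    return [f * factor for f in formants]

# Hypothetical child's formants nudged toward a lower, more 'masculine' ΔF:
print(rescale_delta_f([680, 1900, 3000, 4200], source_df=1200, target_df=1100))
```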

    Praat Tutorial_2: van Lieshout


    Classification of Malaysian vowels using formant based features

    Automatic speech recognition (ASR) has made great strides with the development of digital signal processing hardware and software, especially using English as the language of choice. Despite all these advances, machines cannot match the performance of their human counterparts in terms of accuracy and speed, especially in the case of speaker-independent speech recognition. In this paper, a new set of formant-based features is presented and evaluated on Malaysian spoken vowels. These features were classified and used to identify vowels recorded from 80 Malaysian speakers. A back-propagation neural network (BPNN) model was developed to classify the vowels. Six formant features were evaluated: the first three formant frequencies and the distances between each pair of them. Results showed that the overall vowel classification rates of the three formant combinations are comparable, but differ in terms of individual vowel classification.
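    The six-dimensional feature vector described above (three formants plus their pairwise distances) is straightforward to construct; the ordering below is an assumption, since the abstract does not fix a vector layout:

```python
def formant_features(f1: float, f2: float, f3: float) -> list[float]:
    """Build the 6-dimensional feature vector fed to the BPNN classifier:
    the first three formants and the distances between each pair."""
    return [f1, f2, f3, f2 - f1, f3 - f2, f3 - f1]

print(formant_features(300.0, 2200.0, 2900.0))  # e.g. an /i/-like vowel
```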

    Changes in the McGurk Effect Across Phonetic Contexts

    To investigate the process underlying audiovisual speech perception, the McGurk illusion was examined across a range of phonetic contexts. Two major changes were found. First, the frequency of illusory /g/ fusion percepts increased relative to the frequency of illusory /d/ fusion percepts as the vowel context was shifted from /i/ to /a/ to /u/. This trend could not be explained by biases present in perception of the unimodal visual stimuli. However, the change found in the McGurk fusion effect across vowel environments did correspond systematically with changes in second formant frequency patterns across contexts. Second, the order of consonants in illusory combination percepts was found to depend on syllable type. This may be due to differences occurring across syllable contexts in the time courses of inputs from the two modalities, as delaying the auditory track of a vowel-consonant stimulus resulted in a change in the order of consonants perceived. Taken together, these results suggest that the speech perception system either fuses audiovisual inputs into a visually compatible percept with a second formant pattern similar to that of the acoustic stimulus, or interleaves the information from different modalities, at a phonemic or subphonemic level, based on their relative arrival times. National Institutes of Health (R01 DC02852)

    Modeling the Liquid, Nasal, and Vowel Transitions of North American English Using Linear Predictive Filters and Line Spectral Frequency Interpolations for Use in a Speech Synthesis System

    A speech synthesis system with an original user interface is being developed. In contrast to most modern synthesizers, this system is not text-to-speech (TTS). Instead, it allows the user to control vowels, vowel transitions, and consonant sounds through a simple 2-D vowel pad and consonant buttons. In this system, a synthesized glottal waveform is passed through vowel filters to create vowel sounds. Several filters were calculated from recordings of vowels using linear predictive coding (LPC). The rest of the vowels in the North American English vowel space were found using interpolation techniques with line spectral frequencies (LSFs). The effectiveness and naturalness of the speech created from transitions between these filters were tested. In addition to the vowel filters, filters for nasal and liquid consonants were found using LPC analysis. Transition filters between these consonants and vowels were determined using LSFs. These transitions were tested as well.
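    A sketch of the LSF interpolation step described above: two vowel filters are blended in the LSF domain, where linear mixing preserves the ordering of the frequencies and hence filter stability, and the result is converted back to LPC coefficients for synthesis. This assumes an even filter order and LSFs (sorted, in (0, π)) already extracted from the recordings; the function names are illustrative, not from the paper:

```python
import numpy as np

def lsf2poly(lsf):
    """Rebuild LPC coefficients from line spectral frequencies via the
    sum (Q) and difference (P) polynomials: A(z) = (P(z) + Q(z)) / 2."""
    lsf = np.asarray(lsf, dtype=float)

    def expand(freqs, boundary_factor):
        poly = np.array(boundary_factor, dtype=float)
        for w in freqs:  # each LSF contributes 1 - 2cos(w)z^-1 + z^-2
            poly = np.convolve(poly, [1.0, -2.0 * np.cos(w), 1.0])
        return poly

    q = expand(lsf[0::2], [1.0, 1.0])    # Q(z) carries the root at z = -1
    p = expand(lsf[1::2], [1.0, -1.0])   # P(z) carries the root at z = +1
    return (0.5 * (p + q))[:-1]          # trailing coefficient cancels

def interpolate_vowel_filters(lsf_a, lsf_b, alpha):
    """Blend two vowel filters; LSFs stay ordered under linear mixing,
    so every intermediate filter is stable."""
    mixed = (1.0 - alpha) * np.asarray(lsf_a) + alpha * np.asarray(lsf_b)
    return lsf2poly(mixed)
```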