
    Cross-linguistic Influences on Sentence Accent Detection in Background Noise

    This paper investigates whether sentence accent detection in a non-native language depends on the (relative) similarity between the prosodic cues to accent in the non-native and the native language, and whether cross-linguistic differences in the use of local versus more widely distributed (i.e., non-local) cues to sentence accent lead to differential effects of background noise on sentence accent detection in a non-native language. We compared Dutch, Finnish, and French non-native listeners of English, whose cueing and use of prosodic prominence are progressively further removed from English, against native listeners on a phoneme monitoring task in a quiet condition and at different levels of noise. Overall phoneme detection performance was high for both the native and the non-native listeners, and deteriorated to the same extent in the presence of background noise. Crucially, the relative similarity between the prosodic cues to sentence accent of one's native language and those of a non-native language does not determine the ability to perceive and use sentence accent for speech perception in that non-native language. Moreover, proficiency in the non-native language is not a straightforward predictor of sentence accent perception performance, although high proficiency in a non-native language can seemingly overcome certain differences at the prosodic level between the native and non-native language. Instead, performance is determined by the extent to which listeners rely on local cues (English and Dutch) versus cues that are more widely distributed (Finnish and French), as distributed cues survive the presence of background noise better.

    Prosodic Representations of Prominence Classification Neural Networks and Autoencoders Using Bottleneck Features

    Prominence perception is known to correlate with a complex interplay of the acoustic features of energy, fundamental frequency, spectral tilt, and duration. The contribution and importance of each of these features in distinguishing between prominent and non-prominent units in speech is not always easy to determine, and, moreover, the prosodic representations that humans and automatic classifiers learn have been difficult to interpret. This work examines the acoustic prosodic representations that binary prominence classification neural networks and autoencoders learn for prominence. We investigate the complex features learned at different layers of the network as well as the 10-dimensional bottleneck features (BNFs), for the standard acoustic prosodic correlates of prominence separately and in combination. We analyze and visualize the BNFs obtained from the prominence classification neural networks as well as their network activations. The experiments are conducted on a corpus of Dutch continuous speech with manually annotated prominence labels. Our results show that the prosodic representations obtained from the BNFs and higher-dimensional non-BNFs provide good separation of the two prominence categories, with, however, different partitioning of the BNF space for the distinct features; the best overall separation is obtained for F0.
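The bottleneck-feature idea described above can be illustrated with a minimal autoencoder sketch. The network below is a hypothetical stand-in, not the paper's architecture: a single tanh encoder compresses synthetic 40-dimensional "prosodic" frames into a 10-dimensional bottleneck, a linear decoder reconstructs them, and the BNFs are simply the bottleneck activations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for frame-level prosodic features
# (energy, F0, spectral tilt, duration, ... expanded to 40 dims).
X = rng.normal(size=(200, 40))

d_in, d_bnf = X.shape[1], 10          # 10-dimensional bottleneck, as in the paper
W_enc = rng.normal(scale=0.1, size=(d_in, d_bnf))
W_dec = rng.normal(scale=0.1, size=(d_bnf, d_in))

def forward(X):
    bnf = np.tanh(X @ W_enc)          # bottleneck features (BNFs)
    return bnf, bnf @ W_dec           # (codes, reconstruction)

lr = 0.01
losses = []
for step in range(500):
    bnf, X_hat = forward(X)
    err = X_hat - X
    losses.append(float(np.mean(err ** 2)))
    # Backpropagation through the two layers (mean-squared reconstruction loss)
    g_dec = bnf.T @ err / len(X)
    g_bnf = (err @ W_dec.T) * (1 - bnf ** 2)   # tanh derivative
    g_enc = X.T @ g_bnf / len(X)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

bnf, _ = forward(X)
print(losses[0], losses[-1], bnf.shape)
```

After training, `bnf` is the compressed representation one would visualize or probe for class separation, as the paper does for prominent versus non-prominent units.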

    The listening talker: A review of human and algorithmic context-induced modifications of speech

    Speech output technology is finding widespread application, including in scenarios where intelligibility might be compromised - at least for some listeners - by adverse conditions. Unlike most current algorithms, talkers continually adapt their speech patterns in response to the immediate context of spoken communication, where the type of interlocutor and the environment are the dominant situational factors influencing speech production. Observations of talker behaviour can motivate the design of more robust speech output algorithms. Starting with a listener-oriented categorisation of possible goals for speech modification, this review article summarises the extensive set of behavioural findings related to human speech modification, identifies which factors appear to be beneficial, and goes on to examine previous computational attempts to improve intelligibility in noise. The review concludes by tabulating 46 speech modifications, many of which have yet to be perceptually or algorithmically evaluated. Consequently, the review provides a roadmap for future work in improving the robustness of speech output.

    Acoustic voice characteristics with and without wearing a facemask

    Facemasks are essential for healthcare workers, but the characteristics of the voice produced whilst wearing this personal protective equipment are not well understood. In the present study, we compared acoustic voice measures in recordings of sixteen adults producing standardised vocal tasks with and without wearing either a surgical mask or a KN95 mask. Data were analysed for mean spectral levels in the 0–1 kHz and 1–8 kHz regions, an energy ratio between the 0–1 and 1–8 kHz bands (LH1000), harmonics-to-noise ratio (HNR), smoothed cepstral peak prominence (CPPS), and vocal intensity. In connected speech there was significant attenuation of the mean spectral level in the 1–8 kHz region, with no significant change at 0–1 kHz. Mean spectral levels of the vowel did not change significantly in the mask-wearing conditions. LH1000 for connected speech significantly increased whilst wearing either a surgical mask or a KN95 mask, but no significant change in this measure was found for the vowel. HNR was higher in the mask-wearing conditions than in the no-mask condition. CPPS and vocal intensity did not change in the mask-wearing conditions. These findings imply an attenuation effect of wearing these types of masks on the voice spectrum, with the surgical mask showing less impact than the KN95.
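The LH1000 measure above is described as an energy ratio between the 0–1 kHz and 1–8 kHz bands. The sketch below shows one plausible formulation (the exact definition and sign convention are assumptions, not taken from the paper): with low-band energy in the numerator, high-frequency attenuation by a mask pushes LH1000 upward, consistent with the reported increase.

```python
import numpy as np

def lh1000(signal, sr):
    """Level difference between the 0-1 kHz and 1-8 kHz bands, in dB.

    Assumed formulation: a positive value means more energy below 1 kHz,
    so mask-induced high-frequency attenuation raises LH1000.
    """
    spec = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    low = spec[freqs < 1000].sum()
    high = spec[(freqs >= 1000) & (freqs < 8000)].sum()
    return 10 * np.log10(low / high)

# Toy check: a 500 Hz tone plus a 10x weaker 3 kHz tone.
# Power ratio is 100:1, so LH1000 should be about 20 dB.
sr = 16000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 500 * t) + 0.1 * np.sin(2 * np.pi * 3000 * t)
print(round(lh1000(x, sr), 1))  # → 20.0
```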

    Acoustic measurement of overall voice quality in sustained vowels and continuous speech

    Measurement of dysphonia severity involves auditory-perceptual evaluations and acoustic analyses of sound waves. A meta-analysis of the proportional associations between these two methods showed that many popular perturbation metrics and noise-to-harmonics and other ratios do not yield reasonable results. However, this meta-analysis demonstrated that the validity of specific autocorrelation- and cepstrum-based measures was much more convincing, and identified smoothed cepstral peak prominence as the most promising metric of dysphonia severity. Original research confirmed this inferiority of perturbation measures and superiority of cepstral indices in dysphonia measurement of laryngeal-vocal and tracheoesophageal voice samples. However, to be truly representative of daily voice use patterns, measurement of overall voice quality is ideally founded on the analysis of both sustained vowels and continuous speech. A customized method for including both sample types and calculating the multivariate Acoustic Voice Quality Index (AVQI) was constructed for this purpose. The original study of the AVQI revealed acceptable results in terms of initial concurrent validity, diagnostic precision, internal and external cross-validity, and responsiveness to change. It was thus concluded that the AVQI can track changes in dysphonia severity across the voice therapy process. There are many freely and commercially available computer programs and systems for acoustic metrics of dysphonia severity. We investigated agreements and differences between two commonly available programs (i.e., Praat and the Multi-Dimensional Voice Program) and systems. The results indicated that clinicians should not compare frequency perturbation data across systems and programs, nor amplitude perturbation data across systems. Finally, acoustic information can also be utilized as a biofeedback modality during voice exercises. Based on a systematic literature review, it was cautiously concluded that acoustic biofeedback can be a valuable tool in the treatment of phonatory disorders. When applied with caution, acoustic algorithms (particularly cepstrum-based measures and the AVQI) merit a special role in the assessment and/or treatment of dysphonia severity.
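Cepstral peak prominence, the metric singled out above, can be sketched in a few lines. The implementation below is a simplified, assumed formulation: the prominence of the cepstral peak in the plausible-F0 quefrency range above a regression-line baseline. Clinical CPPS, as computed by tools such as Praat, additionally smooths the cepstrum over time and quefrency.

```python
import numpy as np

def cpp(frame, sr, f0min=60, f0max=330):
    """Cepstral peak prominence (dB), simplified sketch.

    Height of the cepstral peak within the plausible-F0 quefrency
    window, measured above a linear-regression baseline.
    """
    log_spec = 20 * np.log10(np.abs(np.fft.fft(frame)) + 1e-12)
    ceps = np.real(np.fft.ifft(log_spec))          # real cepstrum (dB units)
    lo, hi = int(sr / f0max), int(sr / f0min)      # quefrency window, samples
    peak = lo + int(np.argmax(ceps[lo:hi]))
    slope, intercept = np.polyfit(np.arange(lo, hi), ceps[lo:hi], 1)
    baseline = slope * peak + intercept
    return ceps[peak] - baseline

# A harmonic-rich 150 Hz tone should yield a clearly higher CPP than noise,
# which is why CPP tracks dysphonia severity (breathy voices behave noise-like).
sr = 16000
t = np.arange(sr) / sr
voiced = sum(np.sin(2 * np.pi * 150 * k * t) / k for k in range(1, 21))
noise = np.random.default_rng(0).normal(size=sr)
print(cpp(voiced, sr) > cpp(noise, sr))  # → True
```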

    Cepstral and Perceptual Investigations in Female Teachers With Functionally Healthy Voice

    Purpose. The present study aimed at measuring the smoothed and non-smoothed cepstral peak prominence (CPPS and CPP) in teachers who considered themselves to have normal voice, although some of them had laryngeal pathology. The changes of CPP, CPPS, sound pressure level (SPL), and perceptual ratings with different voice tasks were investigated, and the influence of vocal pathology on these measures was studied. Method. Eighty-four Finnish female primary school teachers volunteered as participants. Laryngoscopically, 52.4% of these had laryngeal changes (39.3% mild, 13.1% disordered). Sound recordings were made of a comfortably sustained vowel, comfortable speech, and speech produced at the increased loudness level used during teaching. CPP, CPPS, and SPL values were extracted using Praat software for all three voice samples. The sound samples were also perceptually evaluated by five voice experts for overall voice quality (10-point scale from poor to excellent) and vocal firmness (10-point scale from breathy to pressed, with normal in the middle). Results. The CPP, CPPS, and SPL values were significantly higher for vowels than for comfortable speech and for loud speech compared to comfortable speech (P < 0.05). Conclusion. Neither the acoustic measures (CPP, CPPS, and SPL) nor the perceptual evaluations could clearly distinguish teachers with laryngeal changes from laryngeally healthy teachers. Considering that the subjects had no vocal complaints, the data can be considered representative of teachers with functionally healthy voice.

    Machine-learning applied to classify flow-induced sound parameters from simulated human voice

    Disorders of voice production have severe effects on the quality of life of the affected individuals. A simulation approach is used to investigate the cause-effect chain in voice production, covering typical characteristics of voice such as subglottal pressure and of functional voice disorders such as glottal closure insufficiency and left-right asymmetry. In total, 24 different voice configurations are simulated in a parameter study using a previously published hybrid aeroacoustic simulation model. Based on these 24 simulation configurations, selected acoustic parameters (HNR, CPP, ...) at simulation evaluation points are correlated with the configuration details to derive characteristic insights into the flow-induced sound generation of human phonation. Recently, several institutions have studied experimental data on flow and acoustic properties and correlated them with healthy and disordered voice signals. Building on this, the present study is a next step towards a detailed dataset definition: the dataset is small, but the relevant characteristics are precisely defined based on the existing simulation methodology of simVoice. The small datasets are studied by correlation analysis, and a Support Vector Machine classifier with an RBF kernel is used to classify the representations. Linear Discriminant Analysis (LDA) is used to visualize the dimensions of the individual studies. This allows correlations to be drawn and the most important features evaluated from the acoustic signals in front of the mouth to be determined. The glottal closure (GC) type is best discriminated based on CPP and boxplot visualizations. Furthermore, using the LDA-dimensionality-reduced feature space, subglottal pressure can be classified with 91.7% accuracy, independent of healthy or disordered voice simulation parameters.
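The classification pipeline described above (acoustic features, then a classifier, with an LDA view of the feature space) can be sketched on synthetic data. The snippet below substitutes a plain two-class Fisher LDA projection with a midpoint threshold for the paper's RBF-kernel SVM, and the "acoustic parameters" are random stand-ins, not simVoice outputs.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for acoustic parameters (CPP, HNR, ...) evaluated
# in front of the mouth, for two hypothetical subglottal-pressure classes.
n, d = 60, 5
X0 = rng.normal(loc=0.0, size=(n, d))   # class 0: lower pressure
X1 = rng.normal(loc=1.5, size=(n, d))   # class 1: higher pressure
X = np.vstack([X0, X1])
y = np.array([0] * n + [1] * n)

# Two-class Fisher LDA: the projection w maximises between-class scatter
# relative to within-class scatter, i.e. w = Sw^-1 (m1 - m0).
m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
Sw = np.cov(X0, rowvar=False) * (n - 1) + np.cov(X1, rowvar=False) * (n - 1)
w = np.linalg.solve(Sw, m1 - m0)

z = X @ w                                # 1-D projection, as used for visualization
threshold = ((X0 @ w).mean() + (X1 @ w).mean()) / 2
pred = (z > threshold).astype(int)       # midpoint-threshold classifier
acc = (pred == y).mean()
print(acc)
```

In the paper's setup an RBF-kernel SVM plays the role of the threshold classifier here; the LDA projection `z` is what one would plot to inspect class separation.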

    Stochastic suprasegmentals: relationships between redundancy, prosodic structure and care of articulation in spontaneous speech

    Within spontaneous speech there are wide variations in the articulation of the same word by the same speaker. This paper explores two related factors that influence variation in articulation: prosodic structure and redundancy. We argue that the constraint of producing robust communication while efficiently expending articulatory effort leads to an inverse relationship between language redundancy and care of articulation. This inverse relationship improves robustness by spreading information more evenly across the speech signal, leading to a smoother signal redundancy profile. We argue that prosodic prominence is a linguistic means of achieving smooth signal redundancy. Prosodic prominence increases care of articulation and coincides with unpredictable sections of speech; by doing so, it leads to smoother signal redundancy. Results confirm the strong relationship between prosodic prominence and care of articulation, as well as an inverse relationship between language redundancy and care of articulation.