57 research outputs found

    Models and Analysis of Vocal Emissions for Biomedical Applications

    The International Workshop on Models and Analysis of Vocal Emissions for Biomedical Applications (MAVEBA) came into being in 1999 from the strongly felt need to share know-how, objectives, and results between areas that had until then seemed quite distinct, such as bioengineering, medicine, and singing. MAVEBA deals with all aspects of the study of the human voice, with applications ranging from the newborn to the adult and elderly. Over the years the initial themes have grown and spread into other fields of research, such as occupational voice disorders, neurology, rehabilitation, and image and video analysis. MAVEBA takes place every two years in Firenze, Italy. This edition celebrates twenty-two years of uninterrupted and successful research in the field of voice analysis.

    On the applicability of models for outdoor sound (A)


    Effects of forensically-relevant facial concealment on acoustic and perceptual properties of consonants

    This thesis offers a thorough investigation into the effects of forensically-relevant facial concealment on speech acoustics and perception. Specifically, it explores the extent to which selected acoustic-phonetic and auditory-perceptual properties of consonants are affected when the talker is wearing ‘facewear’ while speaking. In this context, the term ‘facewear’ refers to the various types of face-concealing garments and headgear that are worn by people in common daily communication situations; for work and leisure, or as an expression of religious, social and cultural affiliation (e.g. surgical masks, motorcycle helmets, ski and cycling masks, or full-face veils such as the niqāb). It also denotes the face or head coverings that are typically used as deliberate (visual) disguises during the commission of crimes and in situations of public disorder (e.g. balaclavas, hooded sweatshirts, or scarves). The present research centres on the question: does facewear influence the way that consonants are produced, transmitted, and perceived? To examine the effects of facewear on the acoustic speech signal, various intensity, spectral, and temporal properties of spoken English consonants were measured. It was found that facewear can considerably alter the acoustic-phonetic characteristics of consonants. This was likely to be the result of both deliberate and involuntary changes to the talker’s speech productions, and of sound energy absorption by the facewear material. The perceptual consequences of the acoustic modifications to speech were assessed by way of a consonant identification study and a talker discrimination study. The results of these studies showed that auditory-only and auditory-visual consonant intelligibility, as well as the discrimination of unfamiliar talkers, may be greatly compromised when the observer’s judgements are based on ‘facewear speech’. 
The findings reported in this thesis contribute to our understanding of how auditory and visual information interact during natural speech processing. Furthermore, the results have important practical implications for legal cases in which speech produced through facewear is of pivotal importance. Forensic speech scientists are therefore advised to take the possible effects of facewear on speech into account when interpreting the outcome of their acoustic and auditory analyses of evidential speech recordings, and when evaluating the reliability of earwitness testimony.
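The intensity and spectral measures mentioned in this abstract can be illustrated with a minimal sketch. This is not the thesis's actual measurement pipeline; the signal, frequencies, and attenuation factor below are synthetic stand-ins chosen purely to show how facewear-induced level loss and spectral balance might be quantified.

```python
import numpy as np

def rms_intensity_db(signal):
    """Root-mean-square intensity in dB relative to unit amplitude."""
    rms = np.sqrt(np.mean(np.square(signal)))
    return 20.0 * np.log10(rms)

def spectral_centroid(signal, sample_rate):
    """Amplitude-weighted mean frequency of the magnitude spectrum (Hz)."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return float(np.sum(freqs * spectrum) / np.sum(spectrum))

sr = 16000
t = np.arange(sr // 10) / sr                    # a 100 ms segment
clean = 0.50 * np.sin(2 * np.pi * 4000 * t)     # stand-in for a fricative's energy
muffled = 0.25 * np.sin(2 * np.pi * 4000 * t)   # crude model of fabric absorption

level_drop = rms_intensity_db(clean) - rms_intensity_db(muffled)
centroid = spectral_centroid(clean, sr)
print(round(level_drop, 2), round(centroid))    # halving amplitude costs ~6.02 dB at ~4000 Hz
```

On real recordings these measures would be computed over hand-segmented consonant intervals, but the dB arithmetic is the same: a halving of amplitude corresponds to a 6 dB drop in level.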

    Models and analysis of vocal emissions for biomedical applications

    This book of Proceedings collects the papers presented at the 3rd International Workshop on Models and Analysis of Vocal Emissions for Biomedical Applications (MAVEBA 2003), held 10-12 December 2003 in Firenze, Italy. The workshop is organised every two years and aims to stimulate contact between specialists active in research and industrial development in the area of voice analysis for biomedical applications. The scope of the workshop includes all aspects of voice modelling and analysis, ranging from fundamental research to all kinds of biomedical applications and related established and advanced technologies.

    Understanding hearing aid sound quality for music-listening

    To improve speech intelligibility for individuals with hearing loss, hearing aids amplify speech using gains derived from evidence-based prescriptive methods, in addition to other advanced signal processing mechanisms. While the evidence supports the use of hearing aid signal processing for speech intelligibility, these signal processing adjustments can also be detrimental to hearing aid sound quality, with poor hearing aid sound quality cited as a barrier to device adoption. Poor sound quality is also of concern for music-listening, in which intelligibility is likely not a consideration. A series of electroacoustic and behavioural studies were conducted to study sound quality issues in hearing aids, with a focus on music. An objective sound quality metric was validated for real hearing aid fittings, enabling researchers to predict sound quality impacts of signal processing adjustments. Qualitative interviews with hearing aid user musicians revealed that users’ primary concern was understanding the conductor’s speech during rehearsals, with hearing aid music sound quality issues a secondary concern. However, reported sound quality issues were consistent with music-listening sound quality complaints in the literature. Therefore, follow-up experiments focused on sound quality issues. An examination of different manufacturers’ hearing aids revealed significant music sound quality preferences for some devices over others. Electroacoustic measurements on these devices revealed that bass content varied more between devices than levels in other spectral ranges or nonlinearity, and increased bass levels were most associated with improved sound quality ratings. In a sound quality optimization study, listeners increased the bass and reduced the treble relative to typically-prescribed gains, for both speech and music. However, adjustments were smaller in magnitude for speech compared to music because they were also associated with a decline in speech intelligibility. 
These findings encourage increasing the bass and reducing the treble to improve hearing aid music sound quality, but only to the degree that speech intelligibility is not compromised. Future research is needed on the prediction of hearing aid music quality, the provision of low-frequency gain in open-fit hearing aids, genre-specific adjustments, hearing aid compression and music, and direct-to-consumer technology.
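The electroacoustic comparison described above, where bass content varied more between devices than other spectral ranges, amounts to comparing band levels of recorded output. A minimal sketch, with a two-tone synthetic signal standing in for programme material and illustrative band edges (the thesis's actual analysis bands are not given here):

```python
import numpy as np

def band_level_db(signal, sample_rate, lo_hz, hi_hz):
    """Summed spectral power in the band [lo_hz, hi_hz), in dB."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    in_band = (freqs >= lo_hz) & (freqs < hi_hz)
    return 10.0 * np.log10(np.sum(power[in_band]))

sr = 16000
t = np.arange(sr) / sr  # 1 s of signal
# Two-tone stand-in for music: a bass component and a quieter treble component.
music = 1.0 * np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 4000 * t)

bass_minus_treble = band_level_db(music, sr, 20, 500) - band_level_db(music, sr, 2000, 8000)
print(round(bass_minus_treble, 2))  # the bass tone carries ~6 dB more power
```

Comparing such band-level differences across hearing aids recorded with the same stimulus is one simple way to quantify the bass variation the study reports.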

    Audio-Visual Speech Enhancement Based on Deep Learning


    Ultrasonic splitting of oil-in-water emulsions


    Speech communication strategies in older children: acoustic-phonetic and linguistic adaptations to a hearing-impaired peer

    This thesis examines the communication strategies used by both normally-hearing (NH) and hearing-impaired (HI) children when interacting with a peer with hearing loss, focusing on the acoustic-phonetic and linguistic properties of their speech. To elicit frequent repetitions of segmental contrasts in HI children’s spontaneous speech in interaction, a new task was developed using minimal pair keywords in a communicative game context. In addition, another referential communication task, the ‘spot the difference’ Diapix task (Van Engen et al., 2010), was used. Eighteen NH and eighteen HI children between 9 and 15 years of age performed the two tasks in pairs, once with a friend with normal hearing (NH-directed speech) and once with a friend with a hearing impairment (HI-directed speech). Task difficulty increased in interactions involving a HI interlocutor, implying a need for speaker-listener adaptations. Participants’ global acoustic-phonetic (articulation rate, F0 median and range, speech intensity and pausing), segmental (/p/-/b/, /s/-/ʃ/, and /i/-/ɪ/) and linguistic (phrase length, lexical frequency, lexical diversity and speech overlap) adaptations to a HI interlocutor were explored. Although HI speakers were found to differ from NH speakers in many aspects of their speech and language, the two groups used similar, mostly global and linguistic, strategies to adapt to the needs of their HI friend, and the HI children’s ability to adapt did not seem to be related to their own speech level. Only a subset of speakers was found to increase the discriminability of phonetic contrasts in speech, perhaps partly because speakers used segmental and linguistic strategies as alternative methods of adaptation. Both NH and HI speakers appeared to adjust the extent of their adaptations to the specific needs of their HI interlocutor, implying surprising sensitivity to listener needs. Implications for models of speech communication are discussed.
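Among the global measures listed above, F0 median and range are the most mechanical to extract. As an illustration only, here is a toy autocorrelation-based F0 tracker run on synthetic voiced frames; phonetic research would normally use a dedicated analysis tool, and the frame length, search limits, and pitch values below are all invented for the sketch.

```python
import numpy as np

def estimate_f0(frame, sample_rate, f0_min=75, f0_max=400):
    """Toy autocorrelation F0 estimate for a single voiced frame (Hz)."""
    frame = frame - np.mean(frame)
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo = int(sample_rate / f0_max)   # shortest plausible period (samples)
    hi = int(sample_rate / f0_min)   # longest plausible period (samples)
    best_lag = lo + int(np.argmax(ac[lo:hi]))
    return sample_rate / best_lag

sr = 16000
t = np.arange(int(0.04 * sr)) / sr  # 40 ms analysis frames
# Synthetic voiced frames drifting upwards in pitch.
track = [estimate_f0(np.sin(2 * np.pi * f * t), sr) for f in (180, 200, 220)]

f0_median = float(np.median(track))   # summary statistics of the kind
f0_range = max(track) - min(track)    # reported per speaking condition
print(round(f0_median, 1), round(f0_range, 1))
```

Comparing such per-condition summaries (NH-directed vs HI-directed speech) is the shape of analysis the abstract describes, though real tracks also require voicing detection and octave-error handling.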
