    Binaural scene analysis: localization, detection and recognition of speakers in complex acoustic scenes

    The human auditory system has the striking ability to robustly localize and recognize a specific target source in complex acoustic environments while ignoring interfering sources. Remarkably, this capability, referred to as auditory scene analysis, is achieved by analyzing only the waveforms reaching the two ears. Computers, however, cannot yet compete with the performance of the human auditory system, even when a binaural algorithm is confronted with a highly constrained version of auditory scene analysis, such as localizing a sound source in a reverberant environment or recognizing a speaker in the presence of interfering noise. In particular, the problem of focusing on an individual speech source in the presence of competing speakers, termed the cocktail party problem, has proven extremely challenging for computer algorithms.

    The primary objective of this thesis is the development of a binaural scene analyzer that is able to jointly localize, detect and recognize multiple speech sources in the presence of reverberation and interfering noise. The processing of the proposed system is divided into three main stages: localization, detection of speech sources, and recognition of speaker identities. The only information assumed to be known a priori is the number of target speech sources present in the acoustic mixture. Furthermore, this work aims to reduce the performance gap between humans and machines by improving the individual building blocks of the binaural scene analyzer.

    First, a binaural front-end inspired by auditory processing is designed to robustly determine the azimuth of multiple, simultaneously active sound sources in the presence of reverberation. The localization model builds on the supervised learning of azimuth-dependent binaural cues, namely interaural time and level differences. Multi-conditional training is performed to incorporate the uncertainty of these binaural cues caused by reverberation and the presence of competing sound sources. Second, a speech detection module that exploits the distinct spectral characteristics of speech and noise signals is developed to automatically select azimuthal positions that are likely to correspond to speech sources. Because this module links the localization stage to the recognition stage, the proposed binaural scene analyzer can selectively focus on a predefined number of speech sources at unknown spatial locations while ignoring interfering noise sources from other spatial directions. Third, the speaker identities of all detected speech sources are recognized in the final stage of the model. To reduce the impact of environmental noise on speaker recognition performance, a missing-data classifier is combined with the adaptation of speaker models using a universal background model, a combination that is particularly beneficial in nonstationary background noise.
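    The front-end just described hinges on two measurable cues, which a short sketch can make concrete. The following is a minimal illustration of extracting interaural time and level differences from a single stereo frame using a simple full-band cross-correlation; the frame length, sample rate, sign convention, and the idea of a downstream pre-trained azimuth classifier are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def binaural_cues(left, right, fs):
    """Estimate ITD (seconds) and ILD (dB) for one short stereo frame."""
    # ITD: lag of the cross-correlation peak; positive means the sound
    # reaches the left ear first (the right-ear signal lags).
    corr = np.correlate(right, left, mode="full")
    lag = np.argmax(corr) - (len(left) - 1)
    itd = lag / fs
    # ILD: energy ratio between the two ear signals, in decibels.
    eps = 1e-12
    ild = 10.0 * np.log10((np.sum(left**2) + eps) / (np.sum(right**2) + eps))
    return itd, ild

# Toy check: a noise burst whose right-ear copy lags by 8 samples.
fs = 16000
rng = np.random.default_rng(0)
left = rng.standard_normal(1024)
right = np.roll(left, 8)               # simulated interaural delay
print(binaural_cues(left, right, fs))  # ITD ~ 8/16000 s, ILD ~ 0 dB
```

    In the model described above, such cue pairs are computed per auditory frequency band and classified with models trained under multi-conditional (reverberant, multi-source) data, which is what yields robust azimuth estimates.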

    On enhancing model-based expectation maximization source separation in dynamic reverberant conditions using automatic Clifton effect

    Source separation algorithms based on spatial cues generally face two major problems: their performance degrades in reverberant environments, and they cannot differentiate closely located sources whose spatial cues are similar. The latter problem is amplified in highly reverberant environments, since reverberation distorts the spatial cues. In this paper, we propose a separation algorithm in which the distortions that reverberation introduces into a spatial-cue-based source separation algorithm, namely model-based expectation-maximization source separation and localization (MESSL), are minimized by using the precedence effect. The precedence effect acts as a gatekeeper that restricts the reverberation entering the separation system, improving its separation performance; this effect is automatically transformed into the Clifton effect to deal with dynamic acoustic conditions. The proposed algorithm shows improved performance over MESSL in all kinds of reverberant conditions, including closely located sources. On average, improvements of 22.55% in SDR (signal-to-distortion ratio) and 15% in PESQ (perceptual evaluation of speech quality) are observed when the Clifton effect is used to tackle dynamic reverberant conditions.

    This project is funded by the Higher Education Commission (HEC), Pakistan, under project no. 6330/KPK/NRPU/R&D/HEC/2016.

    Gul, S.; Khan, M. S.; Shah, S. W.; Lloret, J. (2020). On enhancing model-based expectation maximization source separation in dynamic reverberant conditions using automatic Clifton effect. International Journal of Communication Systems, 33(3):1-18. https://doi.org/10.1002/dac.4210
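    Since the paper's central idea is to let the precedence effect gate which observations reach MESSL, a small sketch helps make the gating concrete. The STFT settings, onset measure, and threshold below are illustrative assumptions, and MESSL's EM stage itself is not implemented; the sketch only shows one plausible way to keep onset-dominated time-frequency points, whose spatial cues are least corrupted by reverberation.

```python
import numpy as np
from scipy.signal import stft

def onset_gate(x, fs, thresh_db=3.0):
    """Boolean time-frequency mask that passes onset-dominated points."""
    _, _, X = stft(x, fs=fs, nperseg=512, noverlap=384)
    logmag = 20.0 * np.log10(np.abs(X) + 1e-12)
    # Onset strength: frame-to-frame rise of the log magnitude per frequency
    # bin; direct wavefronts produce sharp rises, reverberant tails do not.
    rise = np.diff(logmag, axis=1, prepend=logmag[:, :1])
    return rise > thresh_db

# Usage: estimate interaural cues only at gated points before the EM step.
fs = 16000
rng = np.random.default_rng(0)
x = rng.standard_normal(fs)      # 1 s of noise as a stand-in signal
mask = onset_gate(x, fs)
print(mask.shape, mask.mean())   # mask size and fraction of points kept
```

    Re-estimating such a gate as the acoustics change, rather than fixing it once, corresponds to the automatic transition from the precedence effect to the Clifton effect that the paper proposes for dynamic rooms.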

    Engineering data compendium. Human perception and performance. User's guide

    The concept underlying the Engineering Data Compendium was the product of a research and development program (the Integrated Perceptual Information for Designers project) aimed at facilitating the application of basic research findings in human performance to the design of military crew systems. The principal objective was to develop a workable strategy for: (1) identifying and distilling information of potential value to system design from the existing research literature, and (2) presenting this technical information in a form that would make it accessible, interpretable, and applicable for system designers. The present four volumes of the Engineering Data Compendium represent the first implementation of this strategy. This is the first volume, the User's Guide, containing a description of the program and instructions for its use.