
    On enhancing model-based expectation maximization source separation in dynamic reverberant conditions using automatic Clifton effect

    Source separation algorithms based on spatial cues generally face two major problems: their performance degrades in reverberant environments, and they cannot differentiate closely located sources because of the similarity of their spatial cues. The latter problem is amplified in highly reverberant environments, since reverberation distorts the spatial cues. In this paper, we propose a separation algorithm in which the distortions that reverberation inside an enclosure introduces into a spatial-cue-based source separation algorithm, namely model-based expectation-maximization source separation and localization (MESSL), are minimized by using the precedence effect. The precedence effect acts as a gatekeeper that restricts the reverberation entering the separation system, improving its separation performance, and it is automatically transformed into the Clifton effect to deal with dynamic acoustic conditions. The proposed algorithm shows improved performance over MESSL in all of the tested reverberant conditions, including closely located sources. On average, a 22.55% improvement in SDR (signal-to-distortion ratio) and a 15% improvement in PESQ (perceptual evaluation of speech quality) is observed when the Clifton effect is used to tackle dynamic reverberant conditions.

    This project is funded by the Higher Education Commission (HEC), Pakistan, under project no. 6330/KPK/NRPU/R&D/HEC/2016.

    Gul, S.; Khan, M. S.; Shah, S. W.; Lloret, J. (2020). On enhancing model-based expectation maximization source separation in dynamic reverberant conditions using automatic Clifton effect. International Journal of Communication Systems, 33(3):1-18. https://doi.org/10.1002/dac.4210
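    To make the gating idea concrete, below is a minimal Python sketch, under assumptions of my own rather than the authors' exact formulation, of a precedence-effect gate: time-frequency bins arriving shortly after an energy onset (the first wavefront) are trusted, while later, reverberation-dominated bins are down-weighted before any MESSL-style EM clustering of spatial cues. The function name, window length, hold time, and onset threshold are all illustrative.

    import numpy as np
    from scipy.signal import stft

    def precedence_weights(x, fs, win_ms=32.0, hold_frames=3, onset_ratio=2.0):
        """Return a (freq, time) mask in [0, 1] favouring onset-dominated bins.

        x           : mono mixture signal
        hold_frames : frames after an onset that remain fully trusted
        onset_ratio : energy jump vs. the previous frame that counts as an onset
        """
        nper = int(fs * win_ms / 1000)
        _, _, X = stft(x, fs=fs, nperseg=nper)
        mag = np.abs(X) + 1e-12
        # A large frame-to-frame energy jump per frequency bin marks the
        # direct-path wavefront, which the precedence effect says should
        # dominate localization.
        ratio = mag[:, 1:] / mag[:, :-1]
        onset = np.concatenate(
            [np.ones((mag.shape[0], 1), dtype=bool), ratio > onset_ratio], axis=1)
        w = np.zeros(mag.shape)
        hold = np.zeros(mag.shape[0], dtype=int)
        for t in range(mag.shape[1]):
            # Reset the hold counter at onsets; otherwise let the weight
            # decay toward 0 in reverberant tails.
            hold = np.where(onset[:, t], hold_frames, np.maximum(hold - 1, 0))
            w[:, t] = hold / hold_frames
        return w

    Such a mask would multiply the spectrogram (or the per-bin cue likelihoods) fed to the EM stage; adapting hold_frames and onset_ratio as the acoustics change is roughly the kind of switching the abstract's automatic Clifton effect performs.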

    Multimodal blind source separation with a circular microphone array and robust beamforming

    A novel multimodal (audio-visual) approach to the problem of blind source separation (BSS) is evaluated in room environments. The main challenges of BSS in realistic environments are that: 1) sources move along complex trajectories, and 2) the room impulse responses are long. For moving sources, the unmixing filters needed to separate the audio signals are difficult to calculate from the statistical information available in a limited number of audio samples; for physically stationary sources measured in rooms with long impulse responses, the performance of audio-only BSS methods is limited. The visual modality is therefore utilized to facilitate the separation. The movement of the sources is detected with a 3-D tracker based on a Markov chain Monte Carlo particle filter (MCMC-PF), and the direction-of-arrival information of the sources relative to the microphone array is estimated. A robust least squares frequency invariant data independent (RLSFIDI) beamformer is implemented to perform real-time speech enhancement, and the uncertainties in the source localization and direction-of-arrival information are controlled through a convex optimization approach in the beamformer design. A 16-element circular array configuration is used. Simulation studies based on objective and subjective measures confirm the advantage of beamforming-based processing over conventional BSS methods. © 2011 EURASIP
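    As a rough illustration of the data-independent design described above, here is a minimal Python sketch, with Tikhonov regularization standing in for the paper's convex robustness constraints, of least-squares weights for a 16-element uniform circular array: unit response toward the tracked source DOA and a null toward an interferer. The array radius, regularization value, and function names are assumptions, not taken from the paper.

    import numpy as np

    C = 343.0  # speed of sound in air, m/s

    def uca_steering(theta, f, n_mics=16, radius=0.1):
        """Far-field steering vector of a uniform circular array for azimuth theta (rad)."""
        phi = 2 * np.pi * np.arange(n_mics) / n_mics   # mic angles on the circle
        delay = radius * np.cos(theta - phi) / C       # path-length differences
        return np.exp(-2j * np.pi * f * delay)

    def ls_beamformer(look_doa, null_doas, f, reg=1e-2):
        """Least-squares weights: unit gain at look_doa, nulls at null_doas."""
        doas = [look_doa] + list(null_doas)
        A = np.stack([uca_steering(th, f) for th in doas])  # (n_doas, n_mics)
        d = np.zeros(len(doas), dtype=complex)
        d[0] = 1.0                                          # desired responses
        # Regularized normal equations (A^H A + reg*I) w = A^H d; the
        # regularizer limits white-noise gain, a simple proxy for the
        # paper's convex robustness constraints.
        AhA = A.conj().T @ A + reg * np.eye(A.shape[1])
        return np.linalg.solve(AhA, A.conj().T @ d)

    # e.g. target at 30 degrees, interferer at 120 degrees, 1 kHz bin
    w = ls_beamformer(np.deg2rad(30), [np.deg2rad(120)], f=1000.0)

    Repeating the solve over the STFT frequency grid with the same desired response approximates the frequency-invariant behaviour the RLSFIDI design targets; the paper's convex formulation additionally bounds the response error over a region of DOA uncertainty, which accommodates the tracker's localization error.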