556 research outputs found

    Differential fast fixed-point algorithms for underdetermined instantaneous and convolutive partial blind source separation

    This paper concerns underdetermined linear instantaneous and convolutive blind source separation (BSS), i.e., the case when the number of observed mixed signals is lower than the number of sources. We propose partial BSS methods, which separate supposedly nonstationary sources of interest while keeping residual components for the other, supposedly stationary, "noise" sources. These methods are based on the general differential BSS concept that we introduced previously. In the instantaneous case, the approach proposed in this paper is a differential extension of the FastICA method (which does not apply to underdetermined mixtures). In the convolutive case, we extend our recent time-domain fast fixed-point C-FICA algorithm to underdetermined mixtures. Both proposed approaches thus keep the attractive features of the FastICA and C-FICA methods. Our approaches are based on differential sphering processes, followed by the optimization of the differential nonnormalized kurtosis that we introduce in this paper. Experimental tests show that these differential algorithms are much more robust to noise sources than the standard FastICA and C-FICA algorithms.
    Comment: this paper describes our differential FastICA-like algorithms for linear instantaneous and convolutive underdetermined mixtures.
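For context, the sketch below shows the classical kurtosis-based fixed-point iteration that standard FastICA uses and that the differential variants above extend; the paper's differential sphering and differential nonnormalized kurtosis are not reproduced here. This is a minimal NumPy sketch, not the authors' algorithm; the function name and parameters are illustrative.

```python
import numpy as np

def fastica_kurtosis_unit(X, n_iter=100, tol=1e-8, seed=0):
    """One-unit FastICA with a kurtosis contrast.

    X : mixtures, shape (dims, samples). Returns an unmixing direction
    and the extracted source. Schematic: determined mixtures only.
    """
    rng = np.random.default_rng(seed)
    # Sphering (whitening): decorrelate the mixtures and set unit variance.
    X = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(X))
    Z = E @ np.diag(d ** -0.5) @ E.T @ X
    w = rng.standard_normal(Z.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        y = w @ Z
        # Kurtosis-based fixed-point update: w <- E[z (w^T z)^3] - 3 w.
        w_new = (Z * y ** 3).mean(axis=1) - 3.0 * w
        w_new /= np.linalg.norm(w_new)
        if 1.0 - abs(w_new @ w) < tol:
            return w_new, w_new @ Z
        w = w_new
    return w, w @ Z
```

The differential idea, per the abstract, replaces these time-averaged statistics with differences of statistics computed over two time windows, so that contributions from stationary "noise" sources cancel and only the nonstationary sources of interest drive the update.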

    New Negentropy Optimization Schemes for Blind Signal Extraction of Complex Valued Sources

    Blind signal extraction, an active topic in communication signal processing, aims to retrieve source signals through the optimization of contrast functions. Many contrasts based on higher-order statistics, such as kurtosis, are sensitive to outliers. Thus, to achieve robust results, nonlinear functions are used as contrasts to approximate the negentropy criterion, a classical measure of non-Gaussianity. However, existing methods generally have a high computational cost, which leads us to address the efficient optimization of the contrast function. More precisely, we design a novel “reference-based” contrast function built on negentropy approximations, and then propose a new family of algorithms (Alg.1 and Alg.2) to maximize it. Simulations confirm the convergence of our method to a separating solution, which is also analyzed theoretically. We also validate the theoretical complexity analysis showing that Alg.2 has a much lower computational cost than Alg.1 and than existing optimization methods based on the negentropy criterion. Finally, experiments on the separation of single-sideband signals illustrate that our method has good prospects for real-world applications.
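For reference, the classical negentropy-approximation fixed-point update that such extraction schemes build on looks as follows. This is a minimal sketch on real-valued, pre-whitened data with illustrative names, not the paper's reference-based Alg.1/Alg.2 (whose details the abstract does not give), and it omits the complex-valued setting the paper targets.

```python
import numpy as np

def extract_negentropy_unit(Z, n_iter=200, tol=1e-8, seed=0):
    """One-unit extraction maximizing a log-cosh negentropy approximation.

    Z : pre-whitened real mixtures, shape (dims, samples). Uses the
    classical fixed-point update
        w <- E[z g(w^T z)] - E[g'(w^T z)] w,   g = tanh,
    where g = tanh (the derivative of log cosh) yields a contrast that
    is markedly less outlier-sensitive than raw kurtosis.
    """
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(Z.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        y = w @ Z
        g = np.tanh(y)
        g_prime = 1.0 - g ** 2
        w_new = (Z * g).mean(axis=1) - g_prime.mean() * w
        w_new /= np.linalg.norm(w_new)
        if 1.0 - abs(w_new @ w) < tol:
            return w_new
        w = w_new
    return w
```

Each iteration above costs O(dims × samples), dominated by re-evaluating the nonlinearity on every sample; reducing this per-iteration cost is the efficiency problem the paper's reference-based contrast addresses.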

    An adaptive stereo basis method for convolutive blind audio source separation

    NOTICE: this is the author's version of a work that was accepted for publication in Neurocomputing. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms, may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Neurocomputing, 71(10-12), June 2008, DOI: neucom.2007.08.02

    Probabilistic Modeling Paradigms for Audio Source Separation

    This is the author's final version of the article, first published as: E. Vincent, M. G. Jafari, S. A. Abdallah, M. D. Plumbley, M. E. Davies. Probabilistic Modeling Paradigms for Audio Source Separation. In W. Wang (Ed.), Machine Audition: Principles, Algorithms and Systems, Chapter 7, pp. 162-185. IGI Global, 2011. ISBN 978-1-61520-919-4. DOI: 10.4018/978-1-61520-919-4.ch007
    Most sound scenes result from the superposition of several sources, which can be separately perceived and analyzed by human listeners. Source separation aims to provide machine listeners with similar skills by extracting the sounds of individual sources from a given scene. Existing separation systems operate either by emulating the human auditory system or by inferring the parameters of probabilistic sound models. In this chapter, the authors focus on the latter approach and provide a joint overview of established and recent models, including independent component analysis, local time-frequency models and spectral template-based models. They show that most models are instances of one of two general paradigms: linear modeling or variance modeling. They compare the merits of each paradigm and report objective performance figures. They conclude by discussing promising combinations of probabilistic priors and inference algorithms that could form the basis of future state-of-the-art systems.
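The two paradigms named in the abstract can be summarized schematically as follows; this is a hedged sketch in standard notation, with symbols chosen for illustration rather than taken from the chapter.

```latex
% Linear modeling: the observed channels are linear combinations of
% statistically independent source signals (e.g. time-domain ICA):
\[
\mathbf{x}(t) = \mathbf{A}\,\mathbf{s}(t),
\qquad s_1(t),\dots,s_J(t)\ \text{mutually independent.}
\]
% Variance modeling: time-frequency coefficients are zero-mean with
% structured, time-varying variances (a local Gaussian model):
\[
\mathbf{x}(n,f) \sim \mathcal{N}\!\Big(\mathbf{0},\ \textstyle\sum_{j=1}^{J} v_j(n,f)\,\mathbf{R}_j(f)\Big),
\]
% where v_j(n,f) >= 0 is the short-term spectral power of source j in
% frame n and bin f (possibly tied to spectral templates) and R_j(f)
% is its spatial covariance.
```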

    Convolutive ICA for Audio Signals


    A Unifying View on Blind Source Separation of Convolutive Mixtures based on Independent Component Analysis

    In many daily-life scenarios, acoustic sources recorded in an enclosure can only be observed together with other interfering sources. Hence, convolutive Blind Source Separation (BSS) is a central problem in audio signal processing. Methods based on Independent Component Analysis (ICA) are especially important in this field, as they require only a few weak assumptions and remain blind with respect to both the original source signals and the acoustic propagation path. Most currently used algorithms belong to one of three families: Frequency-Domain ICA (FD-ICA), Independent Vector Analysis (IVA), and TRIple-N Independent component analysis for CONvolutive mixtures (TRINICON). While the relations between ICA, FD-ICA and IVA are apparent from their construction, the relation to TRINICON is not yet well established. This paper fills that gap by providing an in-depth treatment of the common building blocks of these algorithms and their differences, and thus provides a common framework for all considered algorithms.
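To make the first of the three families concrete, here is a minimal FD-ICA sketch: STFT the mixtures, run an instantaneous complex ICA independently in each frequency bin, and inverse-STFT the result. This is a schematic sketch, not any of the surveyed algorithms; it deliberately omits the permutation and scaling alignment across bins that practical FD-ICA systems must solve, and all names and parameters are illustrative.

```python
import numpy as np
from scipy.signal import stft, istft

def fd_ica(x, fs, nperseg=1024, n_iter=200, lr=0.1):
    """Schematic frequency-domain ICA for a convolutive mixture.

    x : mixtures, shape (channels, samples). Returns separated signals,
    up to an unresolved permutation and scaling in every frequency bin.
    """
    f, t, X = stft(x, fs=fs, nperseg=nperseg)      # X: (ch, bins, frames)
    n_ch = X.shape[0]
    Y = np.empty_like(X)
    for k in range(X.shape[1]):
        Xk = X[:, k, :]                             # one bin: (ch, frames)
        W = np.eye(n_ch, dtype=complex)
        for _ in range(n_iter):
            Yk = W @ Xk
            # Score function for circular super-Gaussian sources: y / |y|.
            G = Yk / (np.abs(Yk) + 1e-9)
            # Natural-gradient update: W += lr * (I - E[g(y) y^H]) W.
            W += lr * (np.eye(n_ch) - (G @ Yk.conj().T) / Yk.shape[1]) @ W
        Y[:, k, :] = W @ Xk
    _, y = istft(Y, fs=fs, nperseg=nperseg)
    return y
```

The per-bin independence assumption is what creates the permutation problem; IVA removes it by modeling the dependency of each source across all bins jointly, while TRINICON instead operates on broadband time-domain blocks.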