385 research outputs found

    Non-negative mixtures

    This is the author's accepted pre-print of the article, first published as: M. D. Plumbley, A. Cichocki and R. Bro. Non-negative mixtures. In P. Comon and C. Jutten (Eds.), Handbook of Blind Source Separation: Independent Component Analysis and Applications, Chapter 13, pp. 515-547. Academic Press, Feb 2010. ISBN 978-0-12-374726-6. DOI: 10.1016/B978-0-12-374726-6.00018-7

    Underdetermined Separation of Speech Mixture Based on Sparse Bayesian Learning

    This paper describes a novel algorithm for the underdetermined speech separation problem based on compressed sensing, an emerging technique for efficient data reconstruction. The proposed algorithm consists of two steps. First, the unknown mixing matrix is estimated from the speech mixtures in the transform domain using the K-means clustering algorithm. In the second step, the speech sources are recovered by an auto-calibrating sparse Bayesian learning algorithm tailored to speech signals. Numerical experiments, including comparisons with other sparse representation approaches, demonstrate the achieved performance improvement.
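The first step above relies on the sparsity of speech in the transform domain: at time-frequency points where a single source dominates, the mixture vector points along one column of the mixing matrix, so clustering the normalised mixture scatter recovers those columns. A minimal sketch of this idea (hypothetical function name, energy threshold, and initialisation; not the authors' exact code):

```python
import numpy as np

def estimate_mixing_matrix(X, n_sources, n_iter=30):
    """Cluster normalised transform-domain mixture vectors X (channels x frames);
    the cluster centres approximate the mixing-matrix columns."""
    # keep only points with significant energy, where (by sparsity)
    # a single source is likely to dominate
    norms = np.linalg.norm(X, axis=0)
    V = X[:, norms > 0.1 * norms.max()]
    V = V / np.linalg.norm(V, axis=0)        # project onto the unit sphere
    # resolve the sign ambiguity: flip each point so its
    # largest-magnitude component is positive
    idx = np.argmax(np.abs(V), axis=0)
    V = V * np.sign(V[idx, np.arange(V.shape[1])])
    # farthest-point initialisation avoids duplicate centres
    C = V[:, [0]]
    for _ in range(n_sources - 1):
        sims = np.max(np.abs(C.T @ V), axis=0)
        C = np.hstack([C, V[:, [int(np.argmin(sims))]]])
    # plain k-means on the sphere with cosine-similarity assignments
    for _ in range(n_iter):
        labels = np.argmax(np.abs(C.T @ V), axis=0)
        for k in range(n_sources):
            if np.any(labels == k):
                c = V[:, labels == k].mean(axis=1)
                C[:, k] = c / np.linalg.norm(c)
    return C                                  # columns ~ mixing directions
```

The estimated columns are recovered only up to permutation and sign, which is the usual indeterminacy in blind source separation.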

    Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)

    The implicit objective of the biennial "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST) is to foster collaboration between international scientific teams by disseminating ideas through both specific oral/poster presentations and free discussions. For its second edition, the iTWIST workshop took place in the medieval and picturesque town of Namur in Belgium, from Wednesday August 27th to Friday August 29th, 2014. The workshop was conveniently located in "The Arsenal" building, within walking distance of both hotels and the town center. iTWIST'14 gathered about 70 international participants and featured 9 invited talks, 10 oral presentations, and 14 posters on the following themes, all related to the theory, application and generalization of the "sparsity paradigm": sparsity-driven data sensing and processing; union of low-dimensional subspaces; beyond linear and convex inverse problems; matrix/manifold/graph sensing/processing; blind inverse problems and dictionary learning; sparsity and computational neuroscience; information theory, geometry and randomness; complexity/accuracy tradeoffs in numerical methods; sparsity, what's next?; sparse machine learning and inference. Comment: 69 pages, 24 extended abstracts; iTWIST'14 website: http://sites.google.com/site/itwist1

    Source Separation in the Presence of Side-information

    The source separation problem involves the separation of unknown signals from their mixture. This problem is relevant in a wide range of applications, from audio signal processing, communications, and biomedical signal processing to art investigation, to name a few. There is a vast literature on this problem, based either on making strong assumptions about the source signals or on the availability of additional data. This thesis proposes new algorithms for source separation with side information, where one observes the linear superposition of two source signals plus two additional signals that are correlated with the mixed ones. The first algorithm is based on two ingredients: first, we learn a Gaussian mixture model (GMM) for the joint distribution of a source signal and the corresponding correlated side information signal; second, we separate the signals using standard, computationally efficient conditional mean estimators. We also put forth new recovery guarantees for this source separation algorithm. In particular, under the assumption that the signals can be perfectly described by a GMM, we characterize necessary and sufficient conditions for reliable source separation in the low-noise asymptotic regime as a function of the geometry of the underlying signals and their interaction. It is shown that if the subspaces spanned by the innovation components of the source signals with respect to the side information signals have zero intersection, then, provided that we observe a certain number of linear measurements from the mixture, we can reliably separate the sources; otherwise we cannot. The second algorithm is based on deep learning, where we introduce a novel self-supervised algorithm for the source separation problem. Source separation is intrinsically unsupervised, and the lack of training data makes it a difficult task for artificial intelligence to solve.
    The proposed framework takes advantage of the available data and delivers near-perfect separation results in real data scenarios. Our proposed frameworks, which provide new ways to incorporate side information to aid the solution of the source separation problem, are also employed in a real-world art investigation application involving the separation of mixtures of X-ray images. The simulation results showcase the superiority of our algorithm against other state-of-the-art algorithms.
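The conditional-mean estimator in the first algorithm has a closed form when the variables are jointly Gaussian. A single-component sketch (one Gaussian rather than a full GMM; function name and the stacking convention `[x1; x2; y1; y2]` are assumptions for illustration): given the mixture m = x1 + x2 and side information y1, y2, the estimate of x1 is the linear MMSE conditional mean.

```python
import numpy as np

def lmmse_separate(m, y1, y2, mu, Sigma, d):
    """Conditional-mean estimate of source x1 from the mixture m = x1 + x2
    and side information y1, y2, assuming all four d-dimensional signals
    are jointly Gaussian with mean mu and covariance Sigma, both given
    for the stacked vector [x1; x2; y1; y2]."""
    I = np.eye(d)
    Z = np.zeros((d, d))
    # observation operator: o = H @ [x1; x2; y1; y2] = [x1 + x2; y1; y2]
    H = np.block([[I, I, Z, Z],
                  [Z, Z, I, Z],
                  [Z, Z, Z, I]])
    E1 = np.block([[I, Z, Z, Z]])            # picks x1 out of the stack
    o = np.concatenate([m, y1, y2])
    mu_o = H @ mu
    S_oo = H @ Sigma @ H.T                   # covariance of the observation
    S_1o = E1 @ Sigma @ H.T                  # cross-covariance of x1 and o
    # E[x1 | o] for jointly Gaussian variables
    return E1 @ mu + S_1o @ np.linalg.solve(S_oo, o - mu_o)
```

In the full GMM case, the same formula is applied per mixture component and the results are combined with the posterior component weights given the observation.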

    One-stage blind source separation via a sparse autoencoder framework

    Blind source separation (BSS) is the process of recovering individual source transmissions from a received mixture of co-channel signals without a priori knowledge of the channel mixing matrix or the transmitted source signals. The received co-channel composite signal is considered to be captured across an antenna array or sensor network and is assumed to contain sparse transmissions, as users are active and inactive aperiodically over time. An unsupervised machine learning approach using an artificial feedforward neural network sparse autoencoder with one hidden layer is formulated for blindly recovering the channel matrix and source activity of co-channel transmissions. The BSS sparse autoencoder provides one-stage learning using the received signal data only, solving for the channel matrix and signal sources simultaneously. The recovered co-channel source signals are produced at the encoded output of the sparse autoencoder hidden layer. A complex-valued soft-threshold operator is used as the activation function at the hidden layer to preserve the ordered pairs of real and imaginary components. Once the weights of the sparse autoencoder are learned, the latent signals are recovered at the hidden layer without requiring any additional optimization steps. The generalization performance on future received data demonstrates the ability to recover signal transmissions on untrained data and to outperform the two-stage BSS process.
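The complex-valued soft-threshold activation mentioned above shrinks each entry's magnitude while leaving its phase untouched, which is what keeps the real and imaginary parts moving together as an ordered pair. A minimal sketch (function name and threshold parameter `lam` are illustrative, not the paper's exact notation):

```python
import numpy as np

def complex_soft_threshold(z, lam):
    """Shrink the magnitude of each complex entry by lam, preserving phase;
    entries with magnitude below lam are set exactly to zero, which is what
    promotes sparsity in the hidden-layer activations."""
    mag = np.abs(z)
    scale = np.maximum(mag - lam, 0.0) / np.maximum(mag, 1e-12)
    return z * scale
```

In the autoencoder described above, the hidden-layer (encoded) output would then be this operator applied to the linear encoder response, so small-magnitude latent coefficients vanish and only active sources survive.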