
    Geometrical Method Using Simplicial Cones for Overdetermined Nonnegative Blind Source Separation: Application to Real PET Images

    This paper presents a geometrical method for solving the overdetermined Nonnegative Blind Source Separation (N-BSS) problem. Considering each column of the mixed data as a point in the data space, we develop a Simplicial Cone Shrinking Algorithm for Unmixing Nonnegative Sources (SCSA-UNS). The proposed method estimates the mixing matrix and the sources by fitting a simplicial cone to the scatter plot of the mixed data. It requires only weak assumptions on the source distributions; in particular, independence of the different sources is not necessary. Simulations on synthetic data show that SCSA-UNS outperforms other existing geometrical methods in the noiseless case. An experiment on real Dynamic Positron Emission Tomography (PET) images illustrates the efficiency of the proposed method.
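
    The sketch below is not the SCSA-UNS algorithm itself, only a didactic illustration of the underlying geometric idea: the columns of X = A·S with nonnegative A and S lie inside the simplicial cone spanned by the columns of A, so under an assumed "pure sample" condition the mixing columns can be read off as extreme rays of the data cone. The toy data, dimensions and the greedy extreme-point selection are illustrative assumptions, not the paper's shrinking procedure.

```python
import numpy as np

# Toy illustration of the geometric idea behind simplicial-cone unmixing:
# columns of X = A @ S (A, S nonnegative) lie inside the cone spanned by the
# columns of A. Under the (assumed) condition that, for each source, at least
# one sample is "pure" (only that source active), the extreme rays of the data
# cone coincide with the columns of A. This is NOT SCSA-UNS, only a sketch.

rng = np.random.default_rng(0)
n_sources, n_obs, n_samples = 3, 5, 1000

S = rng.exponential(1.0, size=(n_sources, n_samples))      # nonnegative sources
S[:, :n_sources] = 10.0 * np.eye(n_sources)                # force "pure" samples
A = rng.uniform(0.1, 1.0, size=(n_obs, n_sources))         # nonnegative mixing
X = A @ S                                                   # mixed observations

# Normalize each data point by its l1 norm: extreme points of the projected
# cloud correspond to the extreme rays of the original cone.
U = X / X.sum(axis=0, keepdims=True)

# Greedy vertex search (not the shrinking procedure of SCSA-UNS): repeatedly
# pick the point with the largest residual after projection onto the linear
# span of the points already selected.
selected = [int(np.argmax(np.linalg.norm(U - U.mean(axis=1, keepdims=True), axis=0)))]
for _ in range(n_sources - 1):
    B = U[:, selected]
    proj = B @ np.linalg.lstsq(B, U, rcond=None)[0]
    selected.append(int(np.argmax(np.linalg.norm(U - proj, axis=0))))

A_hat = X[:, selected]                                      # estimated mixing columns (up to scale/permutation)
S_hat = np.linalg.lstsq(A_hat, X, rcond=None)[0].clip(min=0)
print("relative reconstruction error:",
      np.linalg.norm(X - A_hat @ S_hat) / np.linalg.norm(X))
```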

    Regularized Gradient Algorithm for Non-Negative Independent Component Analysis

    Independent Component Analysis (ICA) is a well-known technique for solving the blind source separation (BSS) problem. However, "classical" ICA algorithms are not well suited to non-negative sources. This paper proposes a gradient descent approach for solving the Non-Negative Independent Component Analysis (NNICA) problem. The original NNICA separation criterion contains the discontinuous sign function, whose minimization may lead to poor convergence (local minima), especially for sparse sources. Replacing the discontinuous function by a continuous one, tanh, we propose a more accurate regularized gradient algorithm called "Exact" Regularized Gradient (ERG) for NNICA. Experiments on synthetic data with different sparsity degrees illustrate the efficiency of the proposed method, and a comparison shows that the proposed ERG outperforms existing methods.
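
    The smoothing idea can be sketched as follows on whitened data: the sign function that appears in the exact gradient of the negative-part criterion is replaced by tanh(·/ε). This is a didactic approximation, not the paper's ERG implementation; the source model, step size and ε are assumptions of this example.

```python
import numpy as np

# Minimal sketch (not the paper's exact ERG algorithm): non-negative ICA by
# gradient descent on whitened data, minimizing
#   J(W) = E[ || min(W z, 0) ||^2 ]
# over rotations W. The exact gradient contains (1 - sign(Y)); here it is
# smoothed by tanh(Y / eps). Sources, step size and eps are illustrative.

rng = np.random.default_rng(1)
n, T = 2, 5000
S = rng.exponential(1.0, size=(n, T))        # nonnegative, unit-variance sources
A = rng.normal(size=(n, n))
X = A @ S                                     # observed mixtures

# Whitening by the inverse covariance square root; the mean is kept so the
# nonnegativity structure is preserved up to a rotation.
d, E = np.linalg.eigh(np.cov(X))
Z = (E @ np.diag(1.0 / np.sqrt(d)) @ E.T) @ X

W, eps, lr = np.eye(n), 0.1, 0.5
for _ in range(300):
    Y = W @ Z
    # exact gradient would use (1 - sign(Y)); tanh gives a smooth surrogate
    grad = (1.0 / T) * (Y * (1.0 - np.tanh(Y / eps))) @ Z.T
    W = W - lr * grad
    U, _, Vt = np.linalg.svd(W)               # project back onto rotations
    W = U @ Vt

print("residual negativity:", np.mean(np.minimum(W @ Z, 0.0) ** 2))
```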

    Perceptually Controlled Reshaping of Sound Histograms

    Many audio processing algorithms perform optimally for specific signal statistical distributions that may not be fulfilled by all signals. When the original signal is available, we propose to add an inaudible noise so that the distribution of the signal-plus-noise mixture is as close as possible to a given target distribution. The proposed generic algorithm (independent of the application) iteratively adds a low-power white noise to a flat-spectrum version of the signal, until either the target distribution or the noise audibility limit is reached. The latter is assessed through a frequency masking model. Two implementations of this sound reshaping are described, according to the level of the targeted transformation and the foreseen application: Histogram Global Reshaping (HGR) to change the global shape of the histogram, and Histogram Local Reshaping (HLR) to locally "chisel" the histogram while keeping the global shape unchanged. These two variants are illustrated by two applications where the inaudibility of the noise generated by the algorithm is required: "sparsification" for source separation, and low-pass filtering of the histogram for application of the quantization theorem, respectively. In both cases, the target histogram is reached or nearly reached, and the transformation is inaudible. The experiments show that source separation performs better with HGR and that HLR allows a better application of the quantization theorem.
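
    A minimal sketch of the iterative reshaping loop is given below, with two simplifications that are assumptions of this example only: the frequency masking model is replaced by a crude total-noise-power budget, and the spectral flattening step is omitted. The Laplacian signal, Gaussian target and Kolmogorov-Smirnov stopping test are likewise illustrative choices.

```python
import numpy as np
from scipy import stats

# Didactic sketch of the iterative histogram-reshaping loop: add low-power
# white noise until the signal-plus-noise histogram is close to a target
# distribution or a noise-power budget (a stand-in for the audibility model)
# is exhausted. Not the HGR/HLR algorithms of the paper.

rng = np.random.default_rng(2)
x = rng.laplace(scale=1.0, size=20000)             # "sparse" original signal
target_cdf = stats.norm(loc=0.0, scale=x.std()).cdf  # target: Gaussian histogram

y = x.copy()
noise_power, budget, step = 0.0, 0.05 * x.var(), 1e-3 * x.var()
while noise_power < budget:
    if stats.kstest(y, target_cdf).statistic < 0.01:   # close enough to target
        break
    y = y + rng.normal(scale=np.sqrt(step), size=y.size)  # low-power white noise
    noise_power += step

# As in the abstract, the loop stops either at the target or at the budget.
print("KS distance:", stats.kstest(y, target_cdf).statistic,
      "| added noise power:", noise_power)
```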

    Finite Precision and Misalignment in ADPCM (MICDA) Coding/Decoding

    The implementation of an ADPCM (MICDA) coding/decoding chain is presented in finite precision using floating-point arithmetic. It is shown that alignment of the decoder with the encoder is achieved under more restrictive conditions than those found in infinite precision. An alignment measure is introduced, showing that alignment depends on the binary word length of the signals and parameters involved, as well as on the type of input signal: the alignment degrades as the zeros of the predictor filter approach the unit circle.
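
    The effect can be illustrated with a simplified first-order prediction loop (not the actual MICDA/ADPCM codec): running the same reconstruction recursion in float64 and float32 shows alignment degrading as the predictor coefficient, i.e. the zero of the prediction-error filter, approaches the unit circle. The RMS difference used here as an "alignment measure" is a stand-in assumption for the measure defined in the paper.

```python
import numpy as np

# Illustrative sketch only: the same first-order reconstruction recursion
# x[k] = a * x[k-1] + residual[k] run in two precisions. As the coefficient a
# (the zero of the prediction-error filter 1 - a z^-1) approaches the unit
# circle, rounding errors accumulate and the two reconstructions drift apart.

def decode(residual, a, dtype):
    """Reconstruct x[k] = a * x[k-1] + residual[k] in the given precision."""
    x = np.zeros(len(residual), dtype=dtype)
    prev = dtype(0.0)
    for k, r in enumerate(residual):
        prev = dtype(a) * prev + dtype(r)
        x[k] = prev
    return x

rng = np.random.default_rng(3)
residual = rng.normal(scale=0.01, size=5000)

for a in (0.5, 0.99, 0.9999):                 # zero moving toward |z| = 1
    ref = decode(residual, a, np.float64)
    low = decode(residual, a, np.float32)
    misalignment = np.sqrt(np.mean((ref - low.astype(np.float64)) ** 2))
    print(f"a = {a:<7} misalignment (RMS) = {misalignment:.3e}")
```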

    Watermark-Driven Acoustic Echo Cancellation

    The performance of adaptive acoustic echo cancelers (AEC) is sensitive to the non-stationarity and correlation of speech signals. In this article, we explore a new approach based on an adaptive AEC driven by data hidden in speech, to enhance AEC robustness. We propose a two-stage AEC, where the first stage is a classical NLMS-based AEC driven by the far-end speech. In this signal we embed, in an extended conception of data hiding, an imperceptible white and stationary signal, i.e. a watermark. The goal of the second-stage AEC is to identify the misalignment of the first stage. It is driven by the watermark alone and takes advantage of its appropriate properties (stationarity and whiteness) to improve the robustness of the two-stage AEC to the non-stationarity and correlation of speech, thus reducing the overall system misadjustment. We test two kinds of implementations. In the first, referred to as A-WdAEC (Adaptive Watermark-driven AEC), the watermark is a white stationary Gaussian noise; driven by this signal, the second stage converges faster than the classical AEC and provides better steady-state performance. In the second, referred to as MLS-WdAEC, the watermark is built from maximum length sequences (MLS); the second stage then performs a block identification of the first-stage misalignment, given by the circular correlation between the watermark and a pre-processed version of the first-stage residual echo. The advantage of this implementation lies in its robustness against noise and under-modeling. Simulation results show the relevance of the "watermark-driven AEC" approach compared to the classical "error-driven AEC".
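
    To illustrate why a white, stationary driving signal helps, the sketch below runs a generic NLMS identification loop, not the two-stage A-WdAEC/MLS-WdAEC schemes of the paper, with a white excitation versus a correlated AR(1) excitation standing in for speech; the echo path, filter length and step size are illustrative assumptions.

```python
import numpy as np

# Generic NLMS echo-path identification sketch: the white excitation plays the
# role of the watermark, the AR(1) excitation mimics the correlation of speech.
# Lower (more negative) misalignment in dB means better identification.

def nlms(x, d, taps, mu=0.5, eps=1e-6):
    """Adapt an FIR filter w so that w * x approximates d (echo path identification)."""
    w = np.zeros(taps)
    for k in range(taps - 1, len(x)):
        u = x[k - taps + 1:k + 1][::-1]        # [x[k], x[k-1], ..., x[k-taps+1]]
        e = d[k] - w @ u                       # residual echo
        w += mu * e * u / (u @ u + eps)        # normalized LMS update
    return w

rng = np.random.default_rng(4)
h = rng.normal(size=64) * np.exp(-np.arange(64) / 16.0)   # toy echo path
N = 20000

white = rng.normal(size=N)                      # white, stationary excitation
innov = rng.normal(size=N)
corr = np.zeros(N)                              # correlated AR(1) excitation
for k in range(1, N):
    corr[k] = 0.95 * corr[k - 1] + 0.3 * innov[k]

for name, x in (("white", white), ("correlated", corr)):
    d = np.convolve(x, h)[:N]                   # excitation filtered by the echo path
    w = nlms(x, d, taps=64)
    print(f"{name:<10} misalignment: "
          f"{10 * np.log10(np.sum((h - w) ** 2) / np.sum(h ** 2)):.1f} dB")
```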