
    Hybrid sparse and low-rank time-frequency signal decomposition

    Get PDF
    We propose a new hybrid (or morphological) generative model that decomposes a signal into two (and possibly more) layers. Each layer is a linear combination of localised atoms from a time-frequency dictionary. One layer has a low-rank time-frequency structure while the other has a sparse structure. The time-frequency resolutions of the dictionaries describing each layer may be different. Our contribution builds on the recently introduced Low-Rank Time-Frequency Synthesis (LRTFS) model and proposes an iterative algorithm similar to the popular iterative shrinkage/thresholding algorithm. We illustrate the capacities of the proposed model and estimation procedure on a tonal + transient audio decomposition example. Index Terms— Low-rank time-frequency synthesis, sparse component analysis, hybrid/morphological decompositions, non-negative matrix factorisation
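
    As a hedged illustration of the iterative shrinkage/thresholding family the paper builds on (not the authors' algorithm, which operates on layered time-frequency dictionaries), below is a minimal ISTA sketch in NumPy for a single sparse layer over a generic dictionary A; the dictionary, lam and n_iter are placeholder assumptions.

        import numpy as np

        def soft_threshold(x, t):
            # Element-wise soft-thresholding: the proximal operator of the l1 norm.
            return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

        def ista(y, A, lam, n_iter=200):
            # Minimise 0.5 * ||y - A x||^2 + lam * ||x||_1 by iterative shrinkage/thresholding.
            L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the data-fit gradient
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                grad = A.T @ (A @ x - y)       # gradient of the quadratic data-fit term
                x = soft_threshold(x - grad / L, lam / L)
            return x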

    Sparse Gaussian Process Audio Source Separation Using Spectrum Priors in the Time-Domain

    Full text link
    Gaussian process (GP) audio source separation is a time-domain approach that circumvents the inherent phase approximation issue of spectrogram-based methods. Furthermore, through its kernel, GPs elegantly incorporate prior knowledge about the sources into the separation model. Despite these compelling advantages, the computational complexity of GP inference scales cubically with the number of audio samples. As a result, source separation GP models have been restricted to the analysis of short audio frames. We introduce an efficient application of GPs to time-domain audio source separation, without compromising performance. For this purpose, we used GP regression, together with spectral mixture kernels, and variational sparse GPs. We compared our method with LD-PSDTF (positive semi-definite tensor factorization), KL-NMF (Kullback-Leibler non-negative matrix factorization), and IS-NMF (Itakura-Saito NMF). Results show that the proposed method outperforms these techniques. Comment: Paper submitted to the 44th International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2019, to be held in Brighton, United Kingdom, between May 12 and May 17, 2019.
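
    For context, the spectral mixture kernel used by this approach (a Gaussian mixture over the spectral density) is simple to evaluate; the sketch below only illustrates the kernel, not the authors' variational sparse GP separation model, and the weights, means and scales are made-up values.

        import numpy as np

        def spectral_mixture_kernel(t1, t2, weights, means, scales):
            # k(tau) = sum_q w_q * exp(-2 * pi^2 * tau^2 * s_q^2) * cos(2 * pi * mu_q * tau)
            tau = t1[:, None] - t2[None, :]
            k = np.zeros_like(tau)
            for w, mu, s in zip(weights, means, scales):
                k += w * np.exp(-2.0 * np.pi**2 * tau**2 * s**2) * np.cos(2.0 * np.pi * mu * tau)
            return k

        # Toy covariance whose spectral peaks sit at the partials of a 220 Hz source.
        t = np.arange(0, 0.05, 1.0 / 16000.0)
        K = spectral_mixture_kernel(t, t, weights=[1.0, 0.5], means=[220.0, 440.0], scales=[5.0, 5.0])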

    Data-driven Signal Decomposition Approaches: A Comparative Analysis

    Full text link
    Signal decomposition (SD) approaches aim to decompose non-stationary signals into their constituent amplitude- and frequency-modulated components. This represents an important preprocessing step in many practical signal processing pipelines, providing useful knowledge and insight into the data and relevant underlying system(s) while also facilitating tasks such as noise or artefact removal and feature extraction. The popular SD methods are mostly data-driven, striving to obtain inherent well-behaved signal components without making many prior assumptions on the input data. These methods include empirical mode decomposition (EMD) and its variants, variational mode decomposition (VMD) and its variants, the synchrosqueezed transform (SST) and its variants, and sliding singular spectrum analysis (SSA). With the increasing popularity and utility of these methods in wide-ranging applications, it is imperative to gain a better understanding of how these algorithms operate, to evaluate their accuracy with and without noise in the input data, and to gauge their sensitivity to changes in algorithmic parameters. In this work, we achieve those tasks through extensive experiments involving carefully designed synthetic and real-life signals. Based on our experimental observations, we comment on the pros and cons of the considered SD algorithms and highlight best practices, in terms of parameter selection, for their successful operation. SD algorithms for both single- and multi-channel (multivariate) data fall within the scope of our work. For multivariate signals, we evaluate the performance of the popular algorithms in terms of fulfilling the mode-alignment property, especially in the presence of noise. Comment: Resubmission with changes in the reference list
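
    A sketch of the kind of synthetic benchmark such comparisons rest on: a noisy mixture of two amplitude- and frequency-modulated components with known ground truth (all constants below are arbitrary choices, not the paper's actual test signals).

        import numpy as np

        fs = 1000.0                              # assumed sampling rate (Hz)
        t = np.arange(0.0, 2.0, 1.0 / fs)

        # Two AM/FM components plus white noise.
        c1 = (1.0 + 0.3 * np.cos(2 * np.pi * 1.0 * t)) \
             * np.cos(2 * np.pi * (50.0 * t + 5.0 * np.sin(2 * np.pi * 0.5 * t)))
        c2 = 0.8 * np.cos(2 * np.pi * 120.0 * t)
        x = c1 + c2 + 0.1 * np.random.randn(t.size)

        # Knowing c1 and c2 exactly lets the error of any SD method
        # (EMD, VMD, SST, SSA, ...) be measured directly, e.g. via relative MSE.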

    A diagonal plus low-rank covariance model for computationally efficient source separation

    Get PDF
    This paper presents an accelerated version of positive semidefinite tensor factorization (PSDTF) for blind source separation. PSDTF works better than nonnegative matrix factorization (NMF) by dropping the arguable assumption that audio signals can be whitened in the frequency domain by the short-time Fourier transform (STFT). Indeed, this assumption only holds true in an ideal situation where each frame is infinitely long and the target signal is completely stationary in each frame. PSDTF thus deals with full covariance matrices over frequency bins instead of forcing them to be diagonal as in NMF. Although PSDTF significantly outperforms NMF in terms of separation performance, it suffers from a heavy computational cost due to the repeated inversion of big covariance matrices. To solve this problem, we propose an intermediate model based on diagonal plus low-rank covariance matrices and derive the expectation-maximization (EM) algorithm for efficiently updating the parameters of PSDTF. Experimental results showed that our method can dramatically reduce the complexity of PSDTF by several orders of magnitude without a significant decrease in separation performance. Index Terms— Blind source separation, nonnegative matrix factorization, positive semidefinite tensor factorization, low-rank approximation
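
    The speed-up hinges on covariances of the form Sigma = diag(d) + U U^T, which can be inverted without a full O(F^3) factorization; a minimal sketch of that step via the Woodbury identity (an illustration of the idea, not the authors' EM implementation):

        import numpy as np

        def diag_plus_lowrank_inverse(d, U):
            # Sigma^-1 = D^-1 - D^-1 U (I + U^T D^-1 U)^-1 U^T D^-1,  with D = diag(d),
            # costing O(F r^2) for an F x F covariance with a rank-r correction.
            Dinv_U = U / d[:, None]
            core = np.linalg.inv(np.eye(U.shape[1]) + U.T @ Dinv_U)
            return np.diag(1.0 / d) - Dinv_U @ core @ Dinv_U.T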

    Dissimilarity-based multiple instance classification and dictionary learning for bioacoustic signal recognition

    Get PDF
    In this thesis, two promising and actively researched fields from pattern recognition (PR) and digital signal processing (DSP) are studied, adapted and applied to the automated recognition of bioacoustic signals: (i) learning from weakly-labeled data, and (ii) dictionary-based decomposition. The document begins with an overview of the current methods and techniques applied to the automated recognition of bioacoustic signals, and an analysis of the impact of this technology at global and local scales. This is followed by a detailed description of my research on two approaches from the above-mentioned fields, multiple instance learning (MIL) and dictionary learning (DL), as solutions to particular challenges in bioacoustic data analysis. The most relevant contributions and findings of this thesis are the following: 1) the proposal of an unsupervised segmentation method for audio birdsong recordings that improves species classification, with the benefit of an easier implementation since no manual handling of recordings is required; 2) the confirmation that, in the analyzed audio datasets, appropriate dissimilarity measures are those which capture most of the overall differences between bags, such as the modified Hausdorff distance and the mean minimum distance; 3) the adoption of dissimilarity adaptation techniques for the enhancement of dissimilarity-based multiple instance classification, along with the potential further enhancement of the classification performance by building dissimilarity spaces and increasing training set sizes; 4) the proposal of a framework for solving MIL problems by using the one nearest neighbor (1-NN) classifier; 5) a novel convolutive DL method for learning a representative dictionary from a collection of multiple-bird audio recordings; 6) the successful application of this DL method to spectrogram denoising and species classification; and 7) an efficient online version of the DL method that outperforms other state-of-the-art batch and online methods in both computational cost and quality of the discovered patterns.
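
    The two bag-level dissimilarities highlighted above are straightforward to compute; a minimal NumPy sketch, assuming each bag is an array whose rows are instance feature vectors:

        import numpy as np

        def modified_hausdorff(A, B):
            # Modified Hausdorff distance: max of the two directed mean-minimum distances.
            D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
            return max(D.min(axis=1).mean(), D.min(axis=0).mean())

        def mean_min_distance(A, B):
            # Symmetrised mean minimum distance between the two bags.
            D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
            return 0.5 * (D.min(axis=1).mean() + D.min(axis=0).mean())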

    Low-Rank Time-Frequency Synthesis

    Get PDF
    Many single-channel signal decomposition techniques rely on a low-rank factorization of a time-frequency transform. In particular, nonnegative matrix factorization (NMF) of the spectrogram – the (power) magnitude of the short-time Fourier transform (STFT) – has been considered in many audio applications. In this setting, NMF with the Itakura-Saito divergence was shown to underlie a generative Gaussian composite model (GCM) of the STFT, a step forward from more empirical approaches based on ad-hoc transform and divergence specifications. Still, the GCM is not yet a generative model of the raw signal itself, but only of its STFT. The work presented in this paper fills in this ultimate gap by proposing a novel signal synthesis model with low-rank time-frequency structure. In particular, our new approach opens doors to multi-resolution representations that were not possible in the traditional NMF setting. We describe two expectation-maximization algorithms for estimation in the new model and report audio signal processing results with music decomposition and speech enhancement.
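
    For reference, a bare-bones version of the analysis-side baseline the paper starts from: multiplicative-update NMF of a power spectrogram under the Itakura-Saito divergence (the paper's own contribution, the synthesis model and its EM algorithms, is not shown); V, K and the iteration count are placeholders.

        import numpy as np

        def is_nmf(V, K, n_iter=100, eps=1e-12):
            # Itakura-Saito NMF of a power spectrogram V (frequencies x frames), V ~ W @ H.
            F, N = V.shape
            W = np.random.rand(F, K) + eps
            H = np.random.rand(K, N) + eps
            for _ in range(n_iter):
                Vh = W @ H + eps
                W *= ((V / Vh**2) @ H.T) / ((1.0 / Vh) @ H.T)
                Vh = W @ H + eps
                H *= (W.T @ (V / Vh**2)) / (W.T @ (1.0 / Vh))
            return W, H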

    Estimation with Low-Rank Time-Frequency Synthesis Models

    No full text
    Many state-of-the-art signal decomposition techniques rely on a low-rank factorization of a time-frequency (t-f) transform. In particular, nonnegative matrix factorization (NMF) of the spectrogram has been considered in many audio applications. This is an analysis approach in the sense that the factorization is applied to the squared magnitude of the analysis coefficients returned by the t-f transform. In this paper we instead propose a synthesis approach, where low-rankness is imposed on the synthesis coefficients of the data signal over a given t-f dictionary (such as a Gabor frame). As such, we offer a novel modeling paradigm that bridges t-f synthesis modeling and traditional analysis-based NMF approaches. The proposed generative model in turn allows the design of more sophisticated multi-layer representations that can efficiently capture diverse forms of structure. Additionally, the generative modeling makes it possible to exploit t-f low-rankness for compressive sensing. We present efficient iterative shrinkage algorithms to perform estimation in the proposed models and illustrate the capabilities of the new modeling paradigm on audio signal processing examples.
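
    A toy illustration of the synthesis viewpoint, assuming a crude Gabor-style dictionary and a rank-1 coefficient power pattern; this is a caricature of the modeling idea only, not the paper's estimation algorithms.

        import numpy as np

        def gabor_dictionary(n, win_len, hop, n_freq):
            # Columns are windowed complex exponentials (a rough Gabor-style frame).
            win = np.hanning(win_len)
            atoms = []
            for start in range(0, n - win_len + 1, hop):
                for f in range(n_freq):
                    atom = np.zeros(n, dtype=complex)
                    atom[start:start + win_len] = win * np.exp(2j * np.pi * f * np.arange(win_len) / win_len)
                    atoms.append(atom)
            return np.stack(atoms, axis=1)       # shape: n x (frames * n_freq)

        n_freq = 16
        Phi = gabor_dictionary(n=256, win_len=64, hop=32, n_freq=n_freq)
        n_frames = Phi.shape[1] // n_freq

        # Rank-1 nonnegative power pattern over (frame x frequency) with random phases,
        # reshaped into the synthesis coefficients: the signal is x = Phi @ alpha.
        power = np.outer(np.random.rand(n_frames), np.random.rand(n_freq))
        alpha = np.sqrt(power.ravel()) * np.exp(2j * np.pi * np.random.rand(Phi.shape[1]))
        x = np.real(Phi @ alpha)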