
    Contributions to the problem of blind source separation, with emphasis on the study of sparse signals

    Advisors: Romis Ribeiro de Faissol Attux, Ricardo Suyama. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação.
    Abstract: In this work, we studied the problem of Blind Source Separation (BSS), with emphasis on cases referred to as underdetermined, which occur when the number of sources is greater than the number of mixtures. The first contribution is a bound on the inversion error that is intrinsic to the problem when a linear structure is used to perform separation. The other contributions are related to the hypothesis that the source signals are sparse: i) a hybrid methodology that employs concepts based on signal independence and sparsity simultaneously to estimate both the mixing system and the number of sources present in mixtures with two sensors; ii) the use of optimization tools based on the modus operandi of the immune system to estimate the mixing system in problems that are inherently multimodal; and, finally, iii) a sparsity-based criterion for source separation, for which an optimization process based on the ℓ1 norm is derived. (Doctorate in Computer Engineering; degree: Doctor of Electrical Engineering.)
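
The ℓ1-norm criterion in (iii) reduces, in its simplest form, to recovering a sparse source vector from an underdetermined linear system by ℓ1 minimization. A minimal sketch, assuming the mixing matrix is already known (this is a generic illustration, not the thesis's actual algorithm; the function name and the toy mixing matrix are made up for the example):

```python
import numpy as np
from scipy.optimize import linprog

def l1_recover(A, x):
    """Recover a sparse source vector s from x = A @ s by l1 minimization.

    Solves min ||s||_1 subject to A s = x through the standard linear-program
    reformulation s = u - v with u, v >= 0, minimizing sum(u) + sum(v).
    """
    n = A.shape[1]
    c = np.ones(2 * n)                     # objective: sum of u and v entries
    A_eq = np.hstack([A, -A])              # enforce A (u - v) = x
    res = linprog(c, A_eq=A_eq, b_eq=x, bounds=[(0, None)] * (2 * n))
    u, v = res.x[:n], res.x[n:]
    return u - v

# Two sensors, three sources: only a sufficiently sparse source is recoverable.
A = np.array([[1.0, 0.5, 0.2],
              [0.3, 1.0, 0.8]])
s_true = np.array([0.0, 2.0, 0.0])         # sparse: a single active source
x = A @ s_true
s_hat = l1_recover(A, x)
```

The ℓ1 objective is linearized via the split s = u − v, so an off-the-shelf LP solver suffices; in this two-sensor, three-source example the sparse source happens to be the unique ℓ1 minimizer.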

    Sparse and spurious: dictionary learning with noise and outliers

    A popular approach within the signal processing and machine learning communities consists in modelling signals as sparse linear combinations of atoms selected from a learned dictionary. While this paradigm has led to numerous empirical successes in various fields ranging from image to audio processing, few theoretical arguments support this evidence. In particular, sparse coding, or sparse dictionary learning, relies on a non-convex procedure whose local minima have not been fully analyzed yet. In this paper, we consider a probabilistic model of sparse signals, and show that, with high probability, sparse coding admits a local minimum around the reference dictionary generating the signals. Our study takes into account the case of over-complete dictionaries, noisy signals, and possible outliers, thus extending previous work limited to noiseless settings and/or under-complete dictionaries. The analysis we conduct is non-asymptotic and makes it possible to understand how the key quantities of the problem, such as the coherence or the level of noise, can scale with respect to the dimension of the signals, the number of atoms, the sparsity and the number of observations.
    Comment: This is a substantially revised version of a first draft that appeared as a preprint titled "Local stability and robustness of sparse dictionary learning in the presence of noise", http://hal.inria.fr/hal-00737152. IEEE Transactions on Information Theory, Institute of Electrical and Electronics Engineers (IEEE), 2015, pp.2
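
The non-convex alternating procedure the paper analyzes can be sketched as a toy ℓ1-regularized dictionary learner (a generic sketch under simplified assumptions, not the authors' analysis setup; all names, regularization values, and problem sizes are illustrative):

```python
import numpy as np

def soft_threshold(z, t):
    # proximal operator of t * ||.||_1
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def dictionary_learning(X, n_atoms, lam=0.05, n_outer=50, n_ista=20, seed=0):
    """Toy alternating minimization of ||X - D C||_F^2 / 2 + lam * ||C||_1."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((X.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    C = np.zeros((n_atoms, X.shape[1]))
    for _ in range(n_outer):
        # sparse coding: a few ISTA steps with step size 1 / L
        L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the gradient
        for _ in range(n_ista):
            C = soft_threshold(C - D.T @ (D @ C - X) / L, lam / L)
        # dictionary update: least squares given C, then renormalize atoms
        D = X @ np.linalg.pinv(C)
        D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)
    return D, C

# Synthetic data: 2-sparse combinations of atoms from a reference dictionary.
rng = np.random.default_rng(1)
D_ref = rng.standard_normal((8, 5))
D_ref /= np.linalg.norm(D_ref, axis=0)
C_ref = np.zeros((5, 200))
for j in range(200):
    C_ref[rng.choice(5, 2, replace=False), j] = rng.standard_normal(2)
X = D_ref @ C_ref
D, C = dictionary_learning(X, n_atoms=5)
rel = np.linalg.norm(X - D @ C) / np.linalg.norm(X)
```

The paper's question is precisely about procedures of this shape: whether the objective has a local minimum near the reference pair (D_ref, C_ref) under noise and outliers.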

    Tensor Decompositions for Signal Processing Applications From Two-way to Multiway Component Analysis

    Full text link
    The widespread use of multi-sensor technology and the emergence of big datasets have highlighted the limitations of standard flat-view matrix models and the necessity to move towards more versatile data analysis tools. We show that higher-order tensors (i.e., multiway arrays) enable such a fundamental paradigm shift towards models that are essentially polynomial and whose uniqueness, unlike the matrix methods, is guaranteed under very mild and natural conditions. Benefiting from the power of multilinear algebra as their mathematical backbone, data analysis techniques using tensor decompositions are shown to have great flexibility in the choice of constraints that match data properties, and to find more general latent components in the data than matrix-based methods. A comprehensive introduction to tensor decompositions is provided from a signal processing perspective, starting from the algebraic foundations, via basic Canonical Polyadic and Tucker models, through to advanced cause-effect and multi-view data analysis schemes. We show that tensor decompositions enable natural generalizations of some commonly used signal processing paradigms, such as canonical correlation and subspace techniques, signal separation, linear regression, feature extraction and classification. We also cover computational aspects, and point out how ideas from compressed sensing and scientific computing may be used for addressing the otherwise unmanageable storage and manipulation problems associated with big datasets. The concepts are supported by illustrative real world case studies illuminating the benefits of the tensor framework, as efficient and promising tools for modern signal processing, data analysis and machine learning applications; these benefits also extend to vector/matrix data through tensorization.
    Keywords: ICA, NMF, CPD, Tucker decomposition, HOSVD, tensor networks, Tensor Train
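
The basic Canonical Polyadic model mentioned above factorizes a tensor as a sum of rank-one terms, T[i,j,k] = Σ_r A[i,r] B[j,r] C[k,r], and is commonly fitted by alternating least squares over the mode unfoldings. A minimal three-way sketch (textbook ALS, not the paper's algorithms; all names and sizes are illustrative):

```python
import numpy as np

def khatri_rao(B, C):
    """Column-wise Kronecker product of B (J x R) and C (K x R) -> (J*K x R)."""
    J, R = B.shape
    K = C.shape[0]
    return (B[:, None, :] * C[None, :, :]).reshape(J * K, R)

def cp_als(T, rank, n_iter=100, seed=0):
    """Rank-`rank` canonical polyadic decomposition of a 3-way tensor via ALS."""
    I, J, K = T.shape
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    T1 = T.reshape(I, J * K)                     # mode-1 unfolding
    T2 = T.transpose(1, 0, 2).reshape(J, I * K)  # mode-2 unfolding
    T3 = T.transpose(2, 0, 1).reshape(K, I * J)  # mode-3 unfolding
    for _ in range(n_iter):
        # each step is a linear least-squares problem in one factor
        A = T1 @ np.linalg.pinv(khatri_rao(B, C).T)
        B = T2 @ np.linalg.pinv(khatri_rao(A, C).T)
        C = T3 @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C

# Exact rank-3 tensor: ALS should drive the fit error to (near) zero.
rng = np.random.default_rng(42)
A0, B0, C0 = (rng.standard_normal((d, 3)) for d in (4, 5, 6))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(T, rank=3, n_iter=200)
T_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
rel = np.linalg.norm(T - T_hat) / np.linalg.norm(T)
```

The uniqueness property the abstract highlights is what makes the recovered factors meaningful: under mild conditions the rank-one terms are identifiable up to permutation and scaling, unlike the rotational ambiguity of matrix factorizations.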

    Dictionary-based Tensor Canonical Polyadic Decomposition

    To ensure interpretability of the sources extracted by tensor decomposition, we introduce in this paper a dictionary-based tensor canonical polyadic decomposition, which enforces one factor to belong exactly to a known dictionary. A new formulation of sparse coding is proposed which enables dictionary-based canonical polyadic decomposition of high-dimensional tensors. The benefits of using a dictionary in tensor decomposition models are explored both in terms of parameter identifiability and estimation accuracy. The performance of the proposed algorithms is evaluated on the decomposition of simulated data and the unmixing of hyperspectral images.
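
The constraint that one factor belongs exactly to a known dictionary can be illustrated by the projection step such methods need: snapping each estimated factor column to its best-matching atom. This is a simplified stand-in for the paper's formulation; the matched-correlation rule and all names are assumptions made for the example:

```python
import numpy as np

def project_to_dictionary(C_hat, D):
    """Snap each column of a factor estimate to its best-matching dictionary atom.

    Returns the projected factor (chosen atoms with fitted signed scales) and
    the selected atom indices.
    """
    Dn = D / np.linalg.norm(D, axis=0)       # unit-norm atoms
    corr = Dn.T @ C_hat                      # atom-vs-column correlations
    idx = np.argmax(np.abs(corr), axis=0)    # best atom per column
    scale = corr[idx, np.arange(C_hat.shape[1])]
    return Dn[:, idx] * scale, idx

# Noisy scaled copies of three atoms should be matched back to those atoms.
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 8))
idx_true = np.array([2, 5, 7])
scales = np.array([1.5, -2.0, 0.5])
Dn = D / np.linalg.norm(D, axis=0)
C_hat = Dn[:, idx_true] * scales + 0.01 * rng.standard_normal((20, 3))
C_proj, idx = project_to_dictionary(C_hat, D)
```

Constraining a factor this way is what buys interpretability: each recovered component is a physically meaningful atom (e.g. a known spectral signature in hyperspectral unmixing) rather than an arbitrary vector.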

    Exploitation of source nonstationarity in underdetermined blind source separation with advanced clustering techniques

    The problem of blind source separation (BSS) is investigated. Following the assumption that the time-frequency (TF) distributions of the input sources do not overlap, a quadratic TF representation is used to exploit the sparsity of the statistically nonstationary sources. However, separation performance is shown to be limited by the selection of a certain threshold in classifying the eigenvectors of the TF matrices drawn from the observation mixtures. Two methods are, therefore, proposed based on recently introduced advanced clustering techniques, namely Gap statistics and self-splitting competitive learning (SSCL), to mitigate the problem of eigenvector classification. The novel integration of these two approaches successfully overcomes the problem of artificial sources induced by insufficient knowledge of the threshold and enables automatic determination of the number of active sources over the observation. The separation performance is thereby greatly improved. Practical consequences of violating the TF orthogonality assumption in the current approach are also studied, which motivates the proposal of a new solution robust to violation of orthogonality. In this new method, the TF plane is partitioned into appropriate blocks and source separation is thereby carried out in a block-by-block manner. Numerical experiments with linear chirp signals and Gaussian minimum shift keying (GMSK) signals are included which support the improved performance of the proposed approaches.
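
The geometric idea underlying this family of methods is that, when sources rarely overlap, two-sensor observations concentrate along the mixing-matrix columns, so clustering the observation directions recovers the mixing system. A small direction-clustering sketch (a generic time-domain illustration, not the paper's quadratic-TF/SSCL method; all names and parameters are assumptions):

```python
import numpy as np

def estimate_mixing_directions(X, n_sources, n_iter=50):
    """Cluster 2-sensor observations into mixing-column directions.

    Assumes disjoint-support style sparsity: at most one source dominates each
    sample, so high-energy observations lie along the mixing columns.
    """
    norms = np.linalg.norm(X, axis=0)
    P = X[:, norms > 0.1 * norms.max()]      # keep high-energy samples only
    U = P / np.linalg.norm(P, axis=0)        # unit direction of each sample
    U[:, U[0] < 0] *= -1.0                   # fold antipodal directions together
    # deterministic farthest-point initialization of the cluster centers
    centers = U[:, [0]]
    while centers.shape[1] < n_sources:
        sim = np.max(np.abs(centers.T @ U), axis=0)
        centers = np.hstack([centers, U[:, [np.argmin(sim)]]])
    # spherical k-means on the folded directions
    for _ in range(n_iter):
        labels = np.argmax(np.abs(centers.T @ U), axis=0)
        for k in range(n_sources):
            pts = U[:, labels == k]
            if pts.size:
                m = pts.mean(axis=1)
                centers[:, k] = m / np.linalg.norm(m)
    return centers

# Three sources active in disjoint time windows, observed by two sensors.
rng = np.random.default_rng(3)
angles = np.deg2rad([10.0, 60.0, 120.0])
A = np.vstack([np.cos(angles), np.sin(angles)])   # 2 x 3 mixing matrix
S = np.zeros((3, 3000))
for k in range(3):
    S[k, 1000 * k:1000 * (k + 1)] = rng.standard_normal(1000)
X = A @ S
M = estimate_mixing_directions(X, n_sources=3)
```

The paper's contribution lies in the harder version of this step: choosing the number of clusters automatically (Gap statistics, SSCL) and coping with sources whose TF supports do overlap.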