70 research outputs found

    Joint Analysis of Multiple Datasets by Cross-Cumulant Tensor (Block) Diagonalization

    No full text
    In this paper, we propose approximate diagonalization of a cross-cumulant tensor as a means to achieve independent component analysis (ICA) in several linked datasets. This approach generalizes existing cumulant-based independent vector analysis (IVA). In certain scenarios, it yields uniqueness, identifiability and resilience to noise that exceed those in the literature. The proposed method can achieve blind identification of underdetermined mixtures where single-dataset cumulant-based methods that use the same order of statistics fall short. In addition, it is possible to analyse more than two datasets in a single tensor factorization. The proposed approach readily extends to independent subspace analysis (ISA) by tensor block-diagonalization. It can be used as-is or as an ingredient in various data fusion frameworks, using coupled decompositions. The core idea can be used to generalize existing ICA methods from one dataset to an ensemble.
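    As a minimal illustration of the core object, the sketch below builds the fourth-order cross-cumulant tensor of two toy linked datasets. All choices here are illustrative assumptions, not the paper's setup: the two datasets share the same independent non-Gaussian sources, and the mixing is the identity so that the tensor's (approximately) diagonal structure is visible directly.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N = 200_000
    # Two zero-mean, unit-variance, independent non-Gaussian sources (uniform).
    s = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), size=(N, 2))
    # Hypothetical linked datasets sharing the same sources; identity mixing
    # for simplicity (the paper treats general mixtures).
    x1, x2 = s, s

    C11 = x1.T @ x1 / N
    C22 = x2.T @ x2 / N
    C12 = x1.T @ x2 / N
    M = np.einsum('ti,tj,tk,tl->ijkl', x1, x1, x2, x2) / N
    # Fourth-order cross-cumulant of zero-mean variables:
    # cum(a,b,c,d) = E[abcd] - E[ab]E[cd] - E[ac]E[bd] - E[ad]E[bc]
    T = (M
         - np.einsum('ij,kl->ijkl', C11, C22)
         - np.einsum('ik,jl->ijkl', C12, C12)
         - np.einsum('il,jk->ijkl', C12, C12))

    print(T[0, 0, 0, 0])   # ~ excess kurtosis of a uniform source, about -1.2
    print(T[0, 1, 0, 1])   # cross entry, near 0 for independent sources
    ```

    In source coordinates the tensor is close to diagonal, which is what an approximate diagonalization of the cross-cumulant tensor exploits to recover the mixing.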

    Jacobi iterations for canonical dependence analysis

    No full text
    In this manuscript we study the advantages of Jacobi iterations for solving the problem of Canonical Dependence Analysis. Canonical Dependence Analysis can be seen as an extension of Canonical Correlation Analysis in which correlation measures are replaced by measures of higher-order statistical dependence. We show the benefits of choosing an algorithm that exploits the manifold structure on which the optimisation problem can be formulated, and contrast our results with the joint blind source separation algorithm that optimises the criterion in its ambient space. A major advantage of the proposed algorithm is its ability to identify a linear mixture when multiple observation sets are available containing variables that are linearly dependent between the sets, independent within the sets, and contaminated with non-Gaussian independent noise. Performance analysis reveals at least linear convergence speed as a function of the number of sweeps.
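    To make the Jacobi-sweep idea concrete, here is a minimal sketch of one building block: a sweep of closed-form Givens rotations (the classical Cardoso-Souloumiac rotation) that jointly diagonalizes a set of symmetric matrices. This is a generic illustration on synthetic commuting matrices, not the manuscript's manifold algorithm or its dependence criterion.

    ```python
    import numpy as np

    def jacobi_sweep(Ms, V):
        """One Jacobi sweep: for each index pair (i, j), apply the closed-form
        Givens rotation that jointly reduces the off-diagonal energy of all
        matrices in Ms, and accumulate the rotations in V."""
        n = Ms[0].shape[0]
        for i in range(n - 1):
            for j in range(i + 1, n):
                # 2x2 subproblem accumulated over all matrices.
                h = np.array([[M[i, i] - M[j, j], M[i, j] + M[j, i]] for M in Ms])
                G = h.T @ h
                _, U = np.linalg.eigh(G)
                x, y = U[:, -1]          # principal eigenvector of G
                if x < 0:
                    x, y = -x, -y
                r = np.hypot(x, y)       # (x/r, y/r) = (cos 2t, sin 2t)
                c = np.sqrt((x + r) / (2 * r))
                s = y / np.sqrt(2 * r * (x + r))
                R = np.eye(n)
                R[i, i] = R[j, j] = c
                R[i, j], R[j, i] = -s, s
                for k in range(len(Ms)):
                    Ms[k] = R.T @ Ms[k] @ R
                V = V @ R
        return Ms, V

    rng = np.random.default_rng(1)
    n, K = 4, 5
    # Exactly jointly diagonalizable targets: random diagonals conjugated
    # by a single orthogonal matrix Q.
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    Ms = [Q @ np.diag(rng.standard_normal(n)) @ Q.T for _ in range(K)]

    V = np.eye(n)
    for sweep in range(10):
        Ms, V = jacobi_sweep(Ms, V)

    # Remaining off-diagonal energy; near 0 after a few sweeps.
    off = sum(np.sum(M**2) - np.sum(np.diag(M)**2) for M in Ms)
    print(off)
    ```

    Each rotation acts only on a 2x2 subproblem, which is why such schemes converge quickly per sweep and respect the orthogonal-group structure of the problem.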

    Dimension Reduction for Time Series in a Blind Source Separation Context Using R

    Get PDF
    Multivariate time series observations are increasingly common in multiple fields of science, but the complex dependencies of such data often translate into intractable models with a large number of parameters. An alternative is to first reduce the dimension of the series and then model the resulting uncorrelated signals univariately, avoiding the need for any covariance parameters. A popular and effective framework for this is blind source separation. In this paper we review the dimension reduction tools for time series available in the R package tsBSS. These include methods for estimating the signal dimension of second-order stationary time series, dimension reduction techniques for stochastic volatility models and supervised dimension reduction tools for time series regression. Several examples are provided to illustrate the functionality of the package.
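    tsBSS itself is an R package; as a language-neutral illustration of the second-order idea it builds on, the Python sketch below performs an AMUSE-style separation (whiten, then diagonalize one symmetrized lagged autocovariance) on synthetic AR(1) sources. The mixing matrix and AR parameters are illustrative assumptions, not taken from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    T, p = 20_000, 3
    # Independent AR(1) sources with distinct lag-1 autocorrelations,
    # which is what makes them identifiable from second-order statistics.
    phis = np.array([0.9, 0.5, -0.7])
    s = np.zeros((T, p))
    e = rng.standard_normal((T, p))
    for t in range(1, T):
        s[t] = phis * s[t - 1] + e[t]
    A = rng.standard_normal((p, p))       # hypothetical mixing matrix
    x = s @ A.T                           # observed mixed series

    # AMUSE-style separation (tsBSS provides this and richer variants
    # such as SOBI): whiten, then rotate.
    x = x - x.mean(0)
    C0 = x.T @ x / T
    evals, E = np.linalg.eigh(C0)
    W = E @ np.diag(evals**-0.5) @ E.T    # whitening matrix
    z = x @ W.T
    C1 = z[1:].T @ z[:-1] / (T - 1)
    M = (C1 + C1.T) / 2                   # symmetrized lag-1 autocovariance
    _, U = np.linalg.eigh(M)
    y = z @ U                             # estimated sources (up to order/sign)

    # Each estimated component should match one true source up to sign.
    corr = np.corrcoef(y.T, s.T)[:p, p:]
    print(np.sort(np.abs(corr).max(axis=1)))   # all close to 1
    ```

    Dropping the components with negligible serial dependence after this step is one simple way to reduce dimension before univariate modelling, in the spirit of the signal-dimension estimation the package provides.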