Independent Process Analysis without A Priori Dimensional Information
Recently, several algorithms have been proposed for independent subspace
analysis where hidden variables are i.i.d. processes. We show that these
methods can be extended to certain AR, MA, ARMA and ARIMA tasks. Central to our paper is a cascade of algorithms that aims to solve these tasks without prior knowledge of the number or the dimensions of the hidden processes. Our claim is supported by numerical simulations. As a particular application, we search for subspaces of facial components.
Comment: 9 pages, 2 figures
Modeling sparse connectivity between underlying brain sources for EEG/MEG
We propose a novel technique to assess functional brain connectivity in
EEG/MEG signals. Our method, called Sparsely-Connected Sources Analysis (SCSA),
can overcome the problem of volume conduction by modeling neural data
innovatively with the following ingredients: (a) the EEG is assumed to be a
linear mixture of correlated sources following a multivariate autoregressive
(MVAR) model, (b) the demixing is estimated jointly with the source MVAR
parameters, (c) overfitting is avoided by using the Group Lasso penalty. This
approach allows us to extract the appropriate level of cross-talk between the
extracted sources, and in this manner we obtain a sparse data-driven model of
functional connectivity. We demonstrate the usefulness of SCSA with simulated
data, and compare it to a number of existing algorithms with excellent results.
Comment: 9 pages, 6 figures
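A minimal numerical sketch of ingredient (a), the MVAR source model, may make the setup concrete. The dimensions, coefficients, and channel count below are hypothetical, and the joint estimation of the demixing and the MVAR parameters under the Group Lasso penalty, which is the core of SCSA, is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions: 3 latent sources, MVAR order 2, 500 samples.
n_src, order, T = 3, 2, 500

# Sparse MVAR coefficients: only the source 0 -> source 1 interaction is
# nonzero, mimicking the sparse connectivity the Group Lasso penalty promotes.
A = np.zeros((order, n_src, n_src))
A[0] = np.diag([0.5, 0.4, 0.3])        # self-connections at lag 1
A[0, 1, 0] = 0.4                       # directed influence: source 0 -> source 1

# Simulate the MVAR source process s_t = sum_k A_k s_{t-1-k} + innovation.
s = np.zeros((T, n_src))
for t in range(order, T):
    s[t] = sum(A[k] @ s[t - 1 - k] for k in range(order)) + rng.normal(size=n_src)

# Ingredient (a): the EEG is a linear mixture of the correlated sources.
mixing = rng.normal(size=(8, n_src))   # 8 hypothetical EEG channels
eeg = s @ mixing.T                     # shape (T, 8)
print(eeg.shape)
```

Estimating the sparse coefficient blocks of A from the simulated EEG, rather than from the true sources, is exactly the problem volume conduction creates and SCSA addresses.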
Independent subspace analysis can cope with the 'curse of dimensionality'
We search for hidden independent components; in particular, we consider the independent subspace analysis (ISA) task. Earlier ISA procedures assume that the dimensions of the components are known. Here we present a method that enables the non-combinatorial estimation of the components. We make use of a decomposition principle called the ISA separation theorem. According to this theorem, the ISA task can be reduced to the independent component analysis (ICA) task, which assumes one-dimensional components, followed by a grouping procedure that collects the respective non-independent elements into independent groups. We show that non-combinatorial grouping is feasible by means of the non-linear f-correlation matrices between the estimated components.
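The grouping step can be illustrated with a toy sketch: compute ordinary correlations of a nonlinear image of the estimated one-dimensional components, then collect the connected components of the thresholded matrix. The stand-in "ICA outputs", the absolute-value nonlinearity, and the threshold below are hypothetical choices for illustration, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 20000

# Hypothetical stand-in for ICA outputs: components 0 and 1 share a common
# volatility (uncorrelated but dependent), components 2 and 3 are independent.
v = np.exp(rng.normal(size=T))                  # shared volatility factor
c01 = v[:, None] * rng.normal(size=(T, 2))      # one dependent 2-d subspace
c23 = rng.laplace(size=(T, 2))                  # two independent 1-d components
comps = np.column_stack([c01, c23])

# f-correlation: ordinary correlation of a nonlinear image of the components.
f = np.abs(comps)
C = np.abs(np.corrcoef(f, rowvar=False))
np.fill_diagonal(C, 0)

# Threshold, then collect connected components = estimated subspaces.
adj = C > 0.1
groups, unseen = [], set(range(comps.shape[1]))
while unseen:
    stack = [unseen.pop()]
    group = set(stack)
    while stack:
        i = stack.pop()
        for j in list(unseen):
            if adj[i, j]:
                unseen.discard(j); group.add(j); stack.append(j)
    groups.append(sorted(group))
print(groups)
```

Components 0 and 1 are uncorrelated in the raw data, so plain correlation would miss the subspace; the nonlinearity exposes their dependence, which is the point of the f-correlation construction.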
On Independent Component Analysis and Supervised Dimension Reduction for Time Series
The main goal of this thesis work has been to develop tools to recover hidden structures, latent variables, or latent subspaces for multivariate and dependent time series data. The secondary goal has been to implement computationally efficient algorithms for these methods in an R package.
In Blind Source Separation (BSS) the goal is to find uncorrelated latent sources by transforming the observed data in an appropriate way. In Independent Component Analysis (ICA) the latent sources are assumed to be independent. The well-known ICA methods FOBI and JADE are generalized to work with multivariate time series in which the latent components exhibit stochastic volatility. In such time series the volatility cannot be regarded as constant in time, as there are often periods of high and periods of low volatility. The new methods are called gFOBI and gJADE. Also SOBI, a classic method that works well when the volatility can be assumed constant, is given a variant called vSOBI that also works for time series with stochastic volatility.
In dimension reduction the idea is to transform the data into a new coordinate system, where the components are uncorrelated or even independent, and then keep only some of the transformed variables, in such a way that we do not lose too much of the important information in the data. The aforementioned BSS methods can be used in unsupervised dimension reduction, where all the variables or time series have the same role.
In supervised dimension reduction the relationship between a response and the predictor variables needs to be considered as well. Well-known supervised dimension reduction methods for independent and identically distributed data, SIR and SAVE, are generalized to work for time series data. The methods TSIR and TSAVE are introduced and shown to work well for time series, as they also use the information in the past values of the predictor time series. Also TSSH, a hybrid version of TSIR and TSAVE, is introduced. All the methods developed in this thesis have been implemented in the R package tsBSS.
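As background for gFOBI, the classic FOBI algorithm it generalizes can be sketched in a few lines: whiten the data, then eigendecompose the fourth-order moment matrix E[||z||^2 z z^T]. The toy sources and mixing below are hypothetical; the thesis's own implementations live in the R package tsBSS.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 50000

# Hypothetical demo: two independent sources with distinct kurtoses
# (FOBI needs distinct fourth moments to separate the components).
s = np.column_stack([rng.laplace(size=T),          # super-Gaussian
                     rng.uniform(-1, 1, size=T)])  # sub-Gaussian
x = s @ rng.normal(size=(2, 2)).T                  # mixed observations

# Classic FOBI: whiten, then eigendecompose E[||z||^2 z z^T].
x = x - x.mean(axis=0)
d, E = np.linalg.eigh(np.cov(x, rowvar=False))
white = E @ np.diag(d ** -0.5) @ E.T
z = x @ white.T
M = (z * (z ** 2).sum(axis=1, keepdims=True)).T @ z / T
_, U = np.linalg.eigh(M)
est = z @ U                    # estimated sources, up to sign/order/scale

# Each estimated source should match one true source up to sign and scale.
corr = np.abs(np.corrcoef(est.T, s.T)[:2, 2:])
print(corr.round(2))
```

The generalization gFOBI replaces these marginal fourth moments with statistics that respect the temporal dependence of the series, which is what makes it applicable under stochastic volatility.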
Fast and accurate methods of independent component analysis: A survey
This paper presents a survey of recent successful algorithms for blind separation of determined instantaneous linear mixtures of independent sources such as natural speech or biomedical signals. These algorithms rely on non-Gaussianity, nonstationarity, spectral diversity, or a combination of them. The performance of the algorithms is demonstrated on the separation of a linear instantaneous mixture of audio signals (music, speech) and on artifact removal in the electroencephalogram (EEG).
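One of the separation principles the survey lists, nonstationarity, admits a minimal sketch: whiten with the global covariance, then diagonalize the covariance of one segment of the recording. The two-segment variance profile and toy mixing below are hypothetical; practical algorithms jointly diagonalize covariances from many segments.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 40000

# Hypothetical nonstationarity demo: two Gaussian sources whose variances
# change between the first and the second half of the recording.
env = np.ones((T, 2))
env[: T // 2, 0] = 3.0
env[T // 2 :, 1] = 3.0
s = env * rng.normal(size=(T, 2))
x = s @ rng.normal(size=(2, 2)).T         # mixed observations

# Whiten with the global covariance, then diagonalize the covariance of the
# first half: the variance change makes the two matrices jointly informative.
x = x - x.mean(axis=0)
d, E = np.linalg.eigh(np.cov(x, rowvar=False))
z = (x @ E) / np.sqrt(d)                  # whitened data
_, U = np.linalg.eigh(np.cov(z[: T // 2], rowvar=False))
est = z @ U                               # separated sources, up to sign/order

corr = np.abs(np.corrcoef(est.T, s.T)[:2, 2:])
print(corr.round(2))
```

Note that both sources here are Gaussian, so non-Gaussianity-based methods would fail; it is the nonstationarity of the variances that carries the separation information.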
A coupled HMM for solving the permutation problem in frequency domain BSS
Permutation of the outputs at different frequency bins remains a major problem in convolutive blind source separation (BSS). In this work a coupled hidden Markov model (CHMM) effectively exploits the psychoacoustic characteristics of the signals to mitigate such permutation. A joint diagonalization algorithm for convolutive BSS, which incorporates a non-unitary penalty term within the cross-power-spectrum-based cost function in the frequency domain, has been used. The proposed CHMM system couples a number of conventional HMMs, equal to the number of outputs, by making the state transitions in each model dependent not only on its own previous state but also on some aspects of the states of the other models. Using this method the permutation effect has been substantially reduced, as demonstrated in a number of simulation studies.
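The coupled-HMM machinery is beyond a short sketch, but the permutation problem itself can be illustrated with a common envelope-correlation baseline: amplitude envelopes of the same source co-modulate across frequency bins, so each bin's outputs can be permuted to match a running reference envelope. Everything below (the synthetic envelopes, the noise level, the greedy matching) is a hypothetical illustration, not the CHMM method of the paper.

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(4)
frames, n_bins, n_src = 200, 30, 2

# Synthetic amplitude envelopes: each source's envelope co-modulates across
# frequency bins (speech-like), which envelope correlation can exploit.
base = np.abs(rng.normal(size=(frames, n_src))) + 0.1
env = base[None] * (1 + 0.2 * rng.normal(size=(n_bins, frames, n_src)))

# Scramble the source order independently per bin (the permutation problem).
true_perm = [rng.permutation(n_src) for _ in range(n_bins)]
scrambled = np.stack([env[f][:, true_perm[f]] for f in range(n_bins)])

# Greedy alignment: permute each bin to best match the running reference.
ref = scrambled[0].copy()
solved = [np.arange(n_src)]
for f in range(1, n_bins):
    best = max(permutations(range(n_src)),
               key=lambda p: sum(np.corrcoef(ref[:, i],
                                             scrambled[f][:, p[i]])[0, 1]
                                 for i in range(n_src)))
    solved.append(np.array(best))
    ref += scrambled[f][:, best]

# Success means the composed permutation is constant across all bins.
aligned = [true_perm[f][solved[f]] for f in range(n_bins)]
print(all((aligned[f] == aligned[0]).all() for f in range(n_bins)))
```

The CHMM replaces this greedy, pairwise matching with coupled state sequences, so that the alignment decision at each bin is informed by the joint temporal dynamics of all outputs rather than a single running average.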