4,932 research outputs found
Computing the Partial Correlation of ICA Models for Non-Gaussian Graph Signal Processing
[EN] Conventional partial correlation coefficients (PCC) are extended to the non-Gaussian case, in particular to independent component analysis (ICA) models of the observed multivariate samples. The usual methods that define the pairwise connections of a graph from the precision matrix are extended accordingly. The basic concept is to replace the implicit linear estimation of conventional PCC with a nonlinear estimate (the conditional mean) under the ICA model. This better removes the correlation between a given pair of nodes that is induced by the remaining nodes, so the specific connectivity weights can be better estimated. Some synthetic and real data examples
illustrate the approach in a graph signal processing context. This research was funded by the Spanish Administration and the European Union under grants TEC2014-58438-R and TEC2017-84743-P. Belda, J.; Vergara Domínguez, L.; Safont Armero, G.; Salazar Afanador, A. (2019). Computing the Partial Correlation of ICA Models for Non-Gaussian Graph Signal Processing. Entropy, 21(1), 1-16. https://doi.org/10.3390/e21010022
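For reference, the conventional (linear, Gaussian) partial correlation that the paper generalizes can be read directly off the precision matrix. A minimal NumPy sketch on toy data (the data and sizes are purely illustrative):

```python
import numpy as np

# Conventional partial correlation from the precision matrix -- the
# quantity the paper extends to non-Gaussian ICA models.
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 4))                 # samples x nodes (toy data)
precision = np.linalg.inv(np.cov(X, rowvar=False))  # inverse covariance
d = np.sqrt(np.diag(precision))
pcorr = -precision / np.outer(d, d)                 # rho_ij = -P_ij / sqrt(P_ii P_jj)
np.fill_diagonal(pcorr, 1.0)
```

With independent toy data the off-diagonal partial correlations should be near zero; structure in real data shows up as nonzero entries, which then define the graph's edge weights.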
Markov models for fMRI correlation structure: is brain functional connectivity small world, or decomposable into networks?
Correlations in the signal observed via functional Magnetic Resonance Imaging
(fMRI), are expected to reveal the interactions in the underlying neural
populations through hemodynamic response. In particular, they highlight
distributed sets of mutually correlated regions that correspond to brain
networks related to different cognitive functions. Yet graph-theoretical
studies of neural connections give a different picture: that of a highly
integrated system with small-world properties: local clustering but with short
pathways across the complete structure. We examine the conditional independence
properties of the fMRI signal, i.e. its Markov structure, to find realistic
assumptions on the connectivity structure that are required to explain the
observed functional connectivity. In particular we seek a decomposition of the
Markov structure into segregated functional networks using decomposable graphs:
a set of strongly-connected and partially overlapping cliques. We introduce a
new method to efficiently extract such cliques on a large, strongly-connected
graph. We compare methods learning different graph structures from functional
connectivity by testing the goodness of fit of the model they learn on new
data. We find that summarizing the structure as strongly-connected networks can
give a good description only for very large and overlapping networks. These
results highlight that Markov models are good tools to identify the structure
of brain connectivity from fMRI signals, but for this purpose they must reflect
the small-world properties of the underlying neural systems.
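A decomposable Markov structure corresponds to a chordal graph. The paper introduces its own clique-extraction method; purely as a generic illustration of the underlying property, chordality itself can be tested with the standard maximum-cardinality-search ordering (Tarjan and Yannakakis), sketched here over a plain dict-of-sets adjacency:

```python
def is_chordal(adj):
    """Chordality test via a maximum-cardinality-search (MCS) ordering.

    adj: dict mapping each node to the set of its neighbors.
    A graph is decomposable exactly when it is chordal.
    """
    weight = {v: 0 for v in adj}
    order, unnumbered = [], set(adj)
    while unnumbered:
        v = max(unnumbered, key=lambda u: weight[u])  # most numbered neighbors
        order.append(v)
        unnumbered.discard(v)
        for w in adj[v] & unnumbered:
            weight[w] += 1
    pos = {v: i for i, v in enumerate(order)}
    for v in order:
        earlier = {u for u in adj[v] if pos[u] < pos[v]}
        if earlier:
            u = max(earlier, key=lambda x: pos[x])    # latest earlier neighbor
            if not (earlier - {u}) <= adj[u]:         # zero fill-in condition
                return False
    return True

# A 4-cycle is not chordal; adding one chord makes it chordal.
cycle = {'a': {'b', 'd'}, 'b': {'a', 'c'}, 'c': {'b', 'd'}, 'd': {'a', 'c'}}
chorded = {'a': {'b', 'c', 'd'}, 'b': {'a', 'c'}, 'c': {'a', 'b', 'd'}, 'd': {'a', 'c'}}
```

This is textbook machinery, not the paper's algorithm; their contribution is efficiently extracting the overlapping cliques of such a graph at the scale of whole-brain connectivity.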
Learning and comparing functional connectomes across subjects
Functional connectomes capture brain interactions via synchronized
fluctuations in the functional magnetic resonance imaging signal. If measured
during rest, they map the intrinsic functional architecture of the brain. With
task-driven experiments they represent integration mechanisms between
specialized brain areas. Analyzing their variability across subjects and
conditions can reveal markers of brain pathologies and mechanisms underlying
cognition. Methods of estimating functional connectomes from the imaging signal
have undergone rapid developments and the literature is full of diverse
strategies for comparing them. This review aims to clarify links across
functional-connectivity methods as well as to expose different steps to perform
a group study of functional connectomes.
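A minimal sketch of the first step such a group study builds on, assuming region time series have already been extracted: a correlation connectome per subject, with the Fisher z-transform commonly applied before comparing connectomes across subjects. All sizes here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
ts = rng.standard_normal((200, 10))    # time points x brain regions (toy data)
conn = np.corrcoef(ts, rowvar=False)   # 10 x 10 functional connectome
# Fisher z-transform for group statistics; zero the diagonal first so
# arctanh(1) does not produce infinities.
z = np.arctanh(conn - np.eye(10))
```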
Disentangling causal webs in the brain using functional Magnetic Resonance Imaging: A review of current approaches
In the past two decades, functional Magnetic Resonance Imaging has been used
to relate neuronal network activity to cognitive processing and behaviour.
Recently this approach has been augmented by algorithms that allow us to infer
causal links between component populations of neuronal networks. Multiple
inference procedures have been proposed to approach this research question but
so far, each method has limitations when it comes to establishing whole-brain
connectivity patterns. In this work, we discuss eight ways to infer causality
in fMRI research: Bayesian Nets, Dynamical Causal Modelling, Granger Causality,
Likelihood Ratios, LiNGAM, Patel's Tau, Structural Equation Modelling, and
Transfer Entropy. We conclude by formulating recommendations for future
directions in this area.
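Of the eight methods, Granger causality admits a particularly compact form: y is said to Granger-cause x if including the past of y reduces the error of an autoregressive prediction of x. A minimal bivariate NumPy sketch (the helper name and the log-variance-ratio score are our illustrative choices; a real analysis would add lag selection and a significance test):

```python
import numpy as np

def granger_gain(x, y, lag=2):
    """Log variance ratio: how much the past of y improves prediction of x."""
    rows = range(lag, len(x))
    X_own  = np.array([x[t - lag:t] for t in rows])   # x's own past
    X_full = np.array([np.concatenate([x[t - lag:t], y[t - lag:t]])
                       for t in rows])                 # past of x and y
    target = x[lag:]
    r_own  = target - X_own  @ np.linalg.lstsq(X_own,  target, rcond=None)[0]
    r_full = target - X_full @ np.linalg.lstsq(X_full, target, rcond=None)[0]
    return float(np.log(np.var(r_own) / np.var(r_full)))  # > 0: y helps predict x
```

On data where x is driven by lagged y, `granger_gain(x, y)` comes out clearly larger than `granger_gain(y, x)`.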
Finding Exogenous Variables in Data with Many More Variables than Observations
Many statistical methods have been proposed to estimate causal models in
classical situations with fewer variables than observations (p<n, p: the number
of variables and n: the number of observations). However, modern datasets
including gene expression data need high-dimensional causal modeling in
challenging situations with orders of magnitude more variables than
observations (p>>n). In this paper, we propose a method to find exogenous
variables in a linear non-Gaussian causal model, which requires much smaller
sample sizes than conventional methods and works even when p>>n. The key idea
is to identify which variables are exogenous based on non-Gaussianity instead
of estimating the entire structure of the model. Exogenous variables work as
triggers that activate a causal chain in the model, and their identification
leads to more efficient experimental designs and better understanding of the
causal mechanism. We present experiments with artificial data and real-world
gene expression data to evaluate the method. Comment: A revised version of this was published in Proc. ICANN201
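The key idea can be illustrated in miniature: in a linear non-Gaussian causal model, an endogenous variable is a sum of disturbances and is therefore closer to Gaussian than an exogenous one. The moment-based score below is a crude stand-in of our own, not the paper's estimator:

```python
import numpy as np

def nongaussianity(v):
    """Crude non-Gaussianity score: |skewness| + |excess kurtosis|."""
    v = (v - v.mean()) / v.std()
    return abs(np.mean(v**3)) + abs(np.mean(v**4) - 3.0)

rng = np.random.default_rng(0)
n = 5000
e = rng.laplace(size=n)               # exogenous: strongly non-Gaussian
z = 0.5 * e + rng.standard_normal(n)  # endogenous: mixed, closer to Gaussian
scores = {'e': nongaussianity(e), 'z': nongaussianity(z)}
```

Ranking variables by such a score flags `e` as the exogenous candidate without estimating the full causal structure, which is why the approach tolerates much smaller sample sizes.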
Graph analysis of functional brain networks: practical issues in translational neuroscience
The brain can be regarded as a network: a connected system where nodes, or
units, represent different specialized regions and links, or connections,
represent communication pathways. From a functional perspective communication
is coded by temporal dependence between the activities of different brain
areas. In the last decade, the abstract representation of the brain as a graph
has made it possible to visualize functional brain networks and describe their
non-trivial topological properties in a compact and objective way. Nowadays,
the use of graph analysis in translational neuroscience has become essential to
quantify brain dysfunctions in terms of aberrant reconfiguration of functional
brain networks. Despite its evident impact, graph analysis of functional brain
networks is not a simple toolbox that can be blindly applied to brain signals.
On the one hand, it requires know-how of all the methodological steps of the
processing pipeline that manipulates the input brain signals and extracts the
functional network properties. On the other hand, knowledge of the neural
phenomenon under study is required to perform physiologically relevant analysis.
The aim of this review is to provide practical indications for making sense of
brain network analysis and counteracting counterproductive practices.
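As a toy illustration of why pipeline know-how matters: even the final step of turning a connectivity matrix into a graph involves choices (here an arbitrary cutoff of 0.3) that directly shape the network properties reported downstream:

```python
import numpy as np

rng = np.random.default_rng(0)
ts = rng.standard_normal((300, 20))     # time points x regions (toy data)
corr = np.corrcoef(ts, rowvar=False)    # functional connectivity matrix
# Threshold into a binary graph; the 0.3 cutoff is an illustrative choice,
# and different choices yield different topological properties.
adj = (np.abs(corr) > 0.3) & ~np.eye(20, dtype=bool)
degree = adj.sum(axis=1)                # node degree
density = adj.sum() / (20 * 19)         # fraction of possible directed pairs
```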
Tensor Decompositions for Signal Processing Applications: From Two-way to Multiway Component Analysis
The widespread use of multi-sensor technology and the emergence of big
datasets has highlighted the limitations of standard flat-view matrix models
and the necessity to move towards more versatile data analysis tools. We show
that higher-order tensors (i.e., multiway arrays) enable such a fundamental
paradigm shift towards models that are essentially polynomial and whose
uniqueness, unlike that of matrix methods, is guaranteed under very mild and natural
conditions. Benefiting from the power of multilinear algebra as their mathematical
backbone, data analysis techniques using tensor decompositions are shown to
have great flexibility in the choice of constraints that match data properties,
and to find more general latent components in the data than matrix-based
methods. A comprehensive introduction to tensor decompositions is provided from
a signal processing perspective, starting from the algebraic foundations, via
basic Canonical Polyadic and Tucker models, through to advanced cause-effect
and multi-view data analysis schemes. We show that tensor decompositions enable
natural generalizations of some commonly used signal processing paradigms, such
as canonical correlation and subspace techniques, signal separation, linear
regression, feature extraction and classification. We also cover computational
aspects, and point out how ideas from compressed sensing and scientific
computing may be used for addressing the otherwise unmanageable storage and
manipulation problems associated with big datasets. The concepts are supported
by illustrative real world case studies illuminating the benefits of the tensor
framework, as efficient and promising tools for modern signal processing, data
analysis and machine learning applications; these benefits also extend to
vector/matrix data through tensorization. Keywords: ICA, NMF, CPD, Tucker
decomposition, HOSVD, tensor networks, Tensor Train
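As a concrete instance of the Tucker model mentioned above, here is a minimal truncated HOSVD for a 3-way tensor in NumPy. The function names are ours; `mode_product` implements the standard mode-n tensor-matrix product:

```python
import numpy as np

def mode_product(T, M, mode):
    """Multiply tensor T by matrix M (shape out x in) along the given mode."""
    return np.moveaxis(np.tensordot(T, M, axes=(mode, 1)), -1, mode)

def hosvd(T, ranks):
    """Truncated higher-order SVD: Tucker core and factor matrices."""
    factors = []
    for mode, r in enumerate(ranks):
        # Mode-n unfolding: rows indexed by this mode, columns by the rest.
        unfolding = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U = np.linalg.svd(unfolding, full_matrices=False)[0][:, :r]
        factors.append(U)
    core = T
    for mode, U in enumerate(factors):
        core = mode_product(core, U.T, mode)  # project onto factor subspaces
    return core, factors
```

For a tensor of exact multilinear rank, truncating at those ranks reconstructs it exactly: `mode_product(core, U, mode)` applied over all modes recovers the original, which is a quick sanity check of the decomposition.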
MERLiN: Mixture Effect Recovery in Linear Networks
Causal inference concerns the identification of cause-effect relationships
between variables, e.g. establishing whether a stimulus affects activity in a
certain brain region. The observed variables themselves often do not constitute
meaningful causal variables, however, and linear combinations need to be
considered. In electroencephalographic studies, for example, one is not
interested in establishing cause-effect relationships between electrode signals
(the observed variables), but rather between cortical signals (the causal
variables) which can be recovered as linear combinations of electrode signals.
We introduce MERLiN (Mixture Effect Recovery in Linear Networks), a family of
causal inference algorithms that implement a novel means of constructing causal
variables from non-causal variables. We demonstrate through application to EEG
data how the basic MERLiN algorithm can be extended for application to
different (neuroimaging) data modalities. Given an observed linear mixture, the
algorithms can recover a causal variable that is a linear effect of another
given variable. That is, MERLiN allows us to recover a cortical signal that is
affected by activity in a certain brain region, while not being a direct effect
of the stimulus. The Python/Matlab implementation for all presented algorithms
is available on https://github.com/sweichwald/MERLi
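A toy stand-in for the setting MERLiN addresses, where the variable of interest is a linear combination of observed mixtures. Here we recover it by ordinary least squares against a known target signal; MERLiN's actual objective is causal and does not assume the target combination is known, so this only illustrates the mixture-recovery part of the problem:

```python
import numpy as np

rng = np.random.default_rng(2)
S = rng.standard_normal((3, 500))   # latent cortical signals (toy data)
A = rng.standard_normal((8, 3))     # unknown mixing into 8 electrodes
F = A @ S                           # observed electrode signals
# Least-squares weights over electrodes that extract the first cortical signal.
w = np.linalg.lstsq(F.T, S[0], rcond=None)[0]
recovered = w @ F                   # a linear combination of observations
```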