
    Identification of Dynamic Functional Brain Network States Through Tensor Decomposition

    With the advances in high-resolution neuroimaging, there has been a growing interest in the detection of functional brain connectivity. Complex network theory has been proposed as an attractive mathematical representation of functional brain networks. However, most current studies of functional brain networks have focused on the computation of graph-theoretic indices for static networks, i.e., long-time averages of connectivity networks. It is well known that functional connectivity is a dynamic process, and the construction and reorganization of the networks is key to understanding human cognition. Therefore, there is a growing need to track dynamic functional brain networks and identify time intervals over which the network is quasi-stationary. In this paper, we present a tensor decomposition-based method to identify temporally invariant 'network states' and find a common topographic representation for each state. The proposed methods are applied to electroencephalogram (EEG) data during the study of error-related negativity (ERN). Comment: 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
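
    As a rough illustration of the idea described above (not the authors' exact pipeline), the sketch below builds a channels x channels x time-windows connectivity tensor, applies a CP/PARAFAC decomposition with the tensorly library, and clusters the temporal loadings into discrete "network states". The tensor shape, rank, clustering step, and random placeholder data are all assumptions for demonstration only.

```python
# Minimal sketch: identify quasi-stationary "network states" from a
# time-varying connectivity tensor via CP (PARAFAC) decomposition, then
# cluster the temporal factor. Illustrative only; shapes and rank are assumed.
import numpy as np
from tensorly.decomposition import parafac
from sklearn.cluster import KMeans

# Hypothetical dynamic connectivity tensor: channels x channels x time windows
n_chan, n_win = 64, 120
conn = np.random.rand(n_chan, n_chan, n_win)      # placeholder data
conn = (conn + conn.transpose(1, 0, 2)) / 2       # symmetrize each window

rank = 3                                          # assumed number of components
weights, factors = parafac(conn, rank=rank, normalize_factors=True)
spatial_a, spatial_b, temporal = factors          # one factor matrix per mode

# Cluster time windows by their component loadings; each cluster plays the
# role of a quasi-stationary network state with a common topography.
states = KMeans(n_clusters=rank, n_init=10).fit_predict(temporal)
print("state label per time window:", states)
```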

    Dynamic Tensor Clustering

    Dynamic tensor data are becoming prevalent in numerous applications. Existing tensor clustering methods either fail to account for the dynamic nature of the data or are inapplicable to a general-order tensor. Also, there is often a gap between statistical guarantees and computational efficiency for existing tensor clustering solutions. In this article, we aim to bridge this gap by proposing a new dynamic tensor clustering method, which takes into account both sparsity and fusion structures, and enjoys strong statistical guarantees as well as high computational efficiency. Our proposal is based upon a new structured tensor factorization that encourages both sparsity and smoothness in parameters along the specified tensor modes. Computationally, we develop a highly efficient optimization algorithm that benefits from substantial dimension reduction. In theory, we first establish a non-asymptotic error bound for the estimator from the structured tensor factorization. Built upon this error bound, we then derive the rate of convergence of the estimated cluster centers, and show that the estimated clusters recover the true cluster structures with high probability. Moreover, our proposed method can be naturally extended to co-clustering of multiple modes of the tensor data. The efficacy of our approach is illustrated via simulations and a brain dynamic functional connectivity analysis from an Autism spectrum disorder study. Comment: Accepted at Journal of the American Statistical Association.
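
    To make the two structures highlighted in this abstract concrete, the toy code below applies generic stand-ins for them to a single factor vector: lasso-style soft-thresholding for sparsity and a simple neighbor-averaging smoother for fusion along a mode. These are assumed, illustrative operators, not the authors' structured factorization algorithm.

```python
# Toy illustration of sparsity and fusion (smoothness) penalties on a factor.
# Generic stand-ins only; the paper's actual algorithm differs.
import numpy as np

def soft_threshold(v, lam):
    """Lasso-style shrinkage: small entries are set exactly to zero (sparsity)."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def fuse_smooth(v, weight=0.5, n_iter=25):
    """Heuristic fusion step: pull adjacent entries (e.g. consecutive time
    points) toward each other so the factor varies smoothly along the mode."""
    v = v.astype(float).copy()
    for _ in range(n_iter):
        v[1:-1] = (1 - weight) * v[1:-1] + weight * 0.5 * (v[:-2] + v[2:])
    return v

# A noisy temporal factor: near-zero background with one smooth bump.
raw = np.array([0.03, -0.02, 0.9, 1.1, 1.05, 0.95, 0.04, -0.05])
print(fuse_smooth(soft_threshold(raw, lam=0.1)))
```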

    Tensor Analysis and Fusion of Multimodal Brain Images

    Current high-throughput data acquisition technologies probe dynamical systems with different imaging modalities, generating massive data sets at different spatial and temporal resolutions and posing challenging problems in multimodal data fusion. A case in point is the attempt to parse out the brain structures and networks that underpin human cognitive processes by analysis of different neuroimaging modalities (functional MRI, EEG, NIRS, etc.). We emphasize that the multimodal, multi-scale nature of neuroimaging data is well reflected by a multi-way (tensor) structure where the underlying processes can be summarized by a relatively small number of components or "atoms". We introduce Markov-Penrose diagrams, an integration of Bayesian DAG and tensor network notation, to analyze these models. These diagrams not only clarify matrix and tensor EEG and fMRI time/frequency analysis and inverse problems, but also help understand multimodal fusion via Multiway Partial Least Squares and Coupled Matrix-Tensor Factorization. We show here, for the first time, that Granger causal analysis of brain networks is a tensor regression problem, thus allowing the atomic decomposition of brain networks. Analysis of EEG and fMRI recordings shows the potential of the methods and suggests their use in other scientific domains. Comment: 23 pages, 15 figures, submitted to Proceedings of the IEEE.
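
    One of the fusion tools named above, Coupled Matrix-Tensor Factorization, can be sketched in a few lines: an EEG-like tensor (channels x frequencies x time) and an fMRI-like matrix (channels x voxels) are factorized jointly so that they share a factor matrix along the common channel mode. The plain alternating-least-squares scheme, the dimensions, and the random placeholder data below are assumptions for illustration, not the authors' implementation.

```python
# Minimal Coupled Matrix-Tensor Factorization (CMTF) sketch via ALS.
import numpy as np
from scipy.linalg import khatri_rao

def unfold(T, mode):
    """Mode-n matricization (remaining modes flattened in C order)."""
    return np.reshape(np.moveaxis(T, mode, 0), (T.shape[mode], -1))

rng = np.random.default_rng(0)
I, J, K, V, R = 32, 16, 40, 200, 3           # assumed dimensions and rank
X = rng.standard_normal((I, J, K))           # placeholder "EEG" tensor
Y = rng.standard_normal((I, V))              # placeholder "fMRI" matrix

A = rng.standard_normal((I, R))              # shared channel factors
B = rng.standard_normal((J, R))
C = rng.standard_normal((K, R))
D = rng.standard_normal((V, R))

for _ in range(50):                          # alternating least squares
    # Shared mode: fit A against both the tensor unfolding and the matrix.
    W = np.vstack([khatri_rao(B, C), D])     # (J*K + V) x R design
    Z = np.hstack([unfold(X, 0), Y])         # I x (J*K + V) targets
    A = np.linalg.lstsq(W, Z.T, rcond=None)[0].T
    # Tensor-only modes.
    B = np.linalg.lstsq(khatri_rao(A, C), unfold(X, 1).T, rcond=None)[0].T
    C = np.linalg.lstsq(khatri_rao(A, B), unfold(X, 2).T, rcond=None)[0].T
    # Matrix-only mode.
    D = np.linalg.lstsq(A, Y, rcond=None)[0].T

print("shared spatial factors:", A.shape)    # channels x components
```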

    Machine Learning for Fluid Mechanics

    The field of fluid mechanics is rapidly advancing, driven by unprecedented volumes of data from field measurements, experiments, and large-scale simulations at multiple spatiotemporal scales. Machine learning offers a wealth of techniques to extract information from data that could be translated into knowledge about the underlying fluid mechanics. Moreover, machine learning algorithms can augment domain knowledge and automate tasks related to flow control and optimization. This article presents an overview of the history, current developments, and emerging opportunities of machine learning for fluid mechanics. It outlines fundamental machine learning methodologies and discusses their uses for understanding, modeling, optimizing, and controlling fluid flows. The strengths and limitations of these methods are addressed from the perspective of scientific inquiry that considers data as an inherent part of modeling, experimentation, and simulation. Machine learning provides a powerful information processing framework that can enrich, and possibly even transform, current lines of fluid mechanics research and industrial applications. Comment: To appear in the Annual Reviews of Fluid Mechanics, 2020.

    Brain connectivity analysis: a short survey

    This short survey reviews recent literature on brain connectivity studies. It encompasses all forms of static and dynamic connectivity, whether anatomical, functional, or effective. The last decade has seen an ever-increasing number of studies devoted to deducing functional or effective connectivity, mostly from functional neuroimaging experiments. Resting-state conditions have become a dominant experimental paradigm, and a number of resting-state networks, among them the prominent default mode network, have been identified. Graphical models represent a convenient vehicle to formalize experimental findings and to closely and quantitatively characterize the various networks identified. Underlying these abstract concepts are anatomical networks, the so-called connectome, which can be investigated by functional imaging techniques as well. Future studies have to bridge the gap between anatomical neuronal connections and related functional or effective connectivities.

    Relationship Between Structure and Functional Connectivity Within the Default Mode Network

    We proposed a novel measure for conceptualizing dynamic functional network connectivity (FNC) in the human brain using flexibility of functional connectivity (fFC), which captures the variance of functional connectivity across time. In task-free fMRI scans (N = 122), this measure was demonstrated to correspond to the underlying structural connectivity (SC) within the default mode network (DMN), while static functional connectivity (sFC) did so to a relatively low degree. As SC likely does not develop to facilitate task-free brain function, but rather to integrate information during cognitive engagement, we argue that fFC can estimate the potential functional connectivity exhibited outside of the task-free setting to a greater degree than sFC, and is better suited for examining behavioral correlates of FNC. In support of this, we showed that SC-fFC coupling was related to intelligence levels, while SC-sFC coupling was not. Further, we found that the DMN existed in a functionally disconnected state during a large portion of the scan, raising questions about whether sFC is a meaningful quantifier of functional connectivity in the absence of a task, and calling into question its extrapolative power to real-world, cognitively engaging scenarios. Given that fFC is based on FNC variability across time rather than its average, it is largely unaffected by such disconnected periods.
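
    The two quantities contrasted in this abstract can be computed directly from a regional time series: static functional connectivity (sFC) as the correlation over the whole scan, and flexibility of functional connectivity (fFC) as the variance of sliding-window correlations across time. The sketch below shows one plausible reading of that definition; the window length, step size, and toy data are assumptions, not the authors' exact parameters.

```python
# Sketch: static FC (sFC) vs flexibility of FC (fFC) from a toy time series.
import numpy as np

def sliding_fc(ts, win=30, step=5):
    """Correlation matrix of the regions within each sliding window."""
    n_t, _ = ts.shape
    return np.array([np.corrcoef(ts[s:s + win].T)
                     for s in range(0, n_t - win + 1, step)])

rng = np.random.default_rng(1)
ts = rng.standard_normal((300, 10))      # placeholder: 300 TRs x 10 DMN regions

sfc = np.corrcoef(ts.T)                  # static FC: one matrix for the scan
windows = sliding_fc(ts)                 # (n_windows x 10 x 10) window-wise FC
ffc = windows.var(axis=0)                # fFC: variance of FC across windows

print("static FC, edge (0, 1):   ", round(sfc[0, 1], 3))
print("flexibility, edge (0, 1): ", round(ffc[0, 1], 3))
```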