
    A powerful and efficient multivariate approach for voxel-level connectome-wide association studies

    We describe an approach to multivariate analysis, termed structured kernel principal component regression (sKPCR), to identify associations in voxel-level connectomes using resting-state functional magnetic resonance imaging (rsfMRI) data. This powerful and computationally efficient multivariate method can identify voxel-phenotype associations based on the whole-brain connectivity pattern of voxels, and it can detect linear and non-linear signals in both volume-based and surface-based rsfMRI data. For each voxel, sKPCR first extracts low-dimensional signals from the spatially smoothed connectivities by structured kernel principal component analysis, and then tests the voxel-phenotype associations by an adaptive regression model. The method's power is derived from appropriately modelling the spatial structure of the data when performing dimension reduction, and then adaptively choosing an optimal dimension for association testing using the adaptive regression strategy. Simulations based on real connectome data have shown that sKPCR can accurately control the false-positive rate and that it is more powerful than many state-of-the-art approaches, such as the connectivity-wise generalized linear model (GLM) approach, multivariate distance matrix regression (MDMR), adaptive sum of powered score (aSPU) test, and least-square kernel machine (LSKM). Moreover, since sKPCR can reduce the computational cost of non-parametric permutation tests, its computation speed is much faster. To demonstrate the utility of sKPCR for real data analysis, we have also compared sKPCR with the above methods based on the identification of voxel-wise differences between schizophrenic patients and healthy controls in four independent rsfMRI datasets. The results showed that sKPCR had better between-sites reproducibility and a larger proportion of overlap with existing schizophrenia meta-analysis findings. Code for our approach can be downloaded from https://github.com/weikanggong/sKPCR. 
[Abstract copyright: Copyright © 2018 Elsevier Inc. All rights reserved.]
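The per-voxel pipeline described above can be illustrated with a toy sketch. This is a hypothetical simplification, not the authors' implementation: it uses a plain linear kernel in place of the structured (spatially informed) kernel, a fixed number of components in place of the adaptive choice, and ordinary least squares in place of the adaptive regression test; all data are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 50 subjects, 200 connectivity features for one voxel,
# with a phenotype driven by a low-dimensional signal.
n, p = 50, 200
signal = rng.normal(size=n)
X = np.outer(signal, rng.normal(size=p)) + rng.normal(size=(n, p))
y = signal + 0.5 * rng.normal(size=n)

# Step 1: kernel PCA via the Gram matrix. A linear kernel stands in
# here for the paper's structured kernel that encodes spatial smoothness.
Xc = X - X.mean(axis=0)
K = Xc @ Xc.T
eigvals, eigvecs = np.linalg.eigh(K)
order = np.argsort(eigvals)[::-1]
components = eigvecs[:, order[:5]]          # keep the top 5 components

# Step 2: test the voxel-phenotype association by regressing the
# phenotype on the extracted low-dimensional signals (plain OLS here;
# the paper adaptively selects the number of components instead).
yc = y - y.mean()
beta, *_ = np.linalg.lstsq(components, yc, rcond=None)
y_hat = components @ beta
r2 = 1 - np.sum((yc - y_hat) ** 2) / np.sum(yc ** 2)
print(f"R^2 of phenotype on top kernel PCs: {r2:.3f}")
```

Because the Gram matrix is only n-by-n, the eigendecomposition (and hence each permutation in a non-parametric test) stays cheap even when the number of connectivity features is large, which is where the claimed speed advantage comes from.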

    State-space model with deep learning for functional dynamics estimation in resting-state fMRI

Studies on resting-state functional Magnetic Resonance Imaging (rs-fMRI) have shown that different brain regions still actively interact with each other while a subject is at rest, and that such functional interaction is not stationary but changes over time. In this paper, we focus on the time-varying patterns of large-scale functional networks, i.e., the functional dynamics inherent in rs-fMRI, which is one of the emerging issues in network modelling. Specifically, we propose a novel methodological architecture that combines deep learning and state-space modelling, and apply it to rs-fMRI-based Mild Cognitive Impairment (MCI) diagnosis. We first devise a Deep Auto-Encoder (DAE) to discover hierarchical non-linear functional relations among regions, by which we transform the regional features into an embedding space whose bases are complex functional networks. Given the embedded functional features, we then use a Hidden Markov Model (HMM) to estimate the dynamic characteristics of the functional networks inherent in rs-fMRI via internal states, which are unobservable but can be inferred statistically from observations. By building a generative model with an HMM, we estimate the likelihood that the input rs-fMRI features belong to each status, i.e., MCI or normal healthy control, based on which we identify the clinical label of a testing subject. To validate the effectiveness of the proposed method, we performed experiments on two different datasets and compared our method with state-of-the-art methods in the literature. We also analyzed the functional networks learned by the DAE, estimated functional connectivities by decoding the hidden states of the HMM, and investigated the estimated functional connectivities by means of a graph-theoretic approach.
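The generative classification step — fit one HMM per clinical group, then label a test subject by which model assigns the higher likelihood — can be illustrated with a toy discrete HMM. All numbers below are invented, and a two-symbol alphabet stands in for the paper's continuous DAE embeddings; only the forward-algorithm likelihood comparison is the point.

```python
import numpy as np

def forward_loglik(pi, A, B, obs):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the forward algorithm and per-step scaling."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # predict, then weight by emission
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

# Two hypothetical 2-state HMMs standing in for models trained on the
# embedded features of MCI patients and healthy controls respectively.
pi = np.array([0.5, 0.5])
A_mci = np.array([[0.9, 0.1], [0.1, 0.9]])   # sticky state dynamics
A_hc  = np.array([[0.5, 0.5], [0.5, 0.5]])   # fast-switching dynamics
B = np.array([[0.8, 0.2], [0.2, 0.8]])       # shared emission model

obs = np.array([0, 0, 0, 0, 1, 1, 1, 1])     # a slowly switching sequence
ll_mci = forward_loglik(pi, A_mci, B, obs)
ll_hc = forward_loglik(pi, A_hc, B, obs)
label = "MCI" if ll_mci > ll_hc else "HC"
print(label, round(ll_mci, 3), round(ll_hc, 3))
```

The slowly switching sequence fits the sticky dynamics better, so the first model wins; in the paper the same maximum-likelihood decision rule is applied with HMMs trained on each diagnostic group.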

    Probabilistic Latent Factor Model for Collaborative Filtering with Bayesian Inference

The Latent Factor Model (LFM) is one of the most successful methods for Collaborative Filtering (CF) in recommender systems, in which both users and items are projected into a joint latent factor space. Based on matrix factorization, as commonly applied in pattern recognition, LFM models user-item interactions as inner products of the user and item factor vectors in that space, and can be solved efficiently by least-squares methods with optimal estimation. However, such optimal estimation methods are prone to overfitting due to the extreme sparsity of user-item interactions. In this paper, we propose a Bayesian treatment of LFM, named the Bayesian Latent Factor Model (BLFM). Based on observed user-item interactions, we build a probabilistic factor model in which regularization is introduced by placing prior constraints on the latent factors, and the likelihood function is established over the observations and parameters. We then draw samples of the latent factors from the posterior distribution with Variational Inference (VI) to predict expected values. We further extend BLFM to BLFMBias, which incorporates user-dependent and item-dependent biases into the model to enhance performance. Extensive experiments on a movie-rating dataset show the effectiveness of our proposed models in comparison with several strong baselines. (Comment: 8 pages, 5 figures, ICPR 2020 conference.)
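One minimal way to see how the Gaussian prior acts as a regularizer: MAP estimation under zero-mean Gaussian priors on the factors reduces to ridge-regularized alternating least squares. The sketch below uses that MAP shortcut rather than the paper's variational inference, on an invented toy ratings matrix; it is an illustration of the regularized factor model, not the BLFM algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy user-item ratings matrix; 0 marks an unobserved interaction.
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)
mask = R > 0
k, lam = 2, 0.1          # latent dimension; lam plays the role of prior precision

U = rng.normal(scale=0.1, size=(R.shape[0], k))   # user factors
V = rng.normal(scale=0.1, size=(R.shape[1], k))   # item factors

# Alternating ridge regressions: each update is the exact MAP solution
# for one factor matrix given the other, under a zero-mean Gaussian prior.
for _ in range(50):
    for i in range(R.shape[0]):
        Vi = V[mask[i]]
        U[i] = np.linalg.solve(Vi.T @ Vi + lam * np.eye(k), Vi.T @ R[i, mask[i]])
    for j in range(R.shape[1]):
        Uj = U[mask[:, j]]
        V[j] = np.linalg.solve(Uj.T @ Uj + lam * np.eye(k), Uj.T @ R[mask[:, j], j])

# Predicted rating = inner product of user and item factor vectors.
rmse = np.sqrt(np.mean((R[mask] - (U @ V.T)[mask]) ** 2))
print(f"training RMSE on observed ratings: {rmse:.3f}")
```

The `lam * np.eye(k)` term is where the prior enters: without it, the sparse observations make the normal equations ill-conditioned and the fit overfits, which is exactly the failure mode the Bayesian treatment addresses.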

    Learning Laplacian Matrix in Smooth Graph Signal Representations

The construction of a meaningful graph plays a crucial role in the success of many graph-based representations and algorithms for handling structured data, especially in the emerging field of graph signal processing. However, a meaningful graph is not always readily available from the data, nor easy to define depending on the application domain. In particular, it is often desirable in graph signal processing applications that a graph is chosen such that the data admit certain regularity or smoothness on the graph. In this paper, we address the problem of learning graph Laplacians, which is equivalent to learning graph topologies, such that the input data form graph signals with smooth variations on the resulting topology. To this end, we adopt a factor analysis model for the graph signals and impose a Gaussian probabilistic prior on the latent variables that control these signals. We show that the Gaussian prior leads to an efficient representation that favors the smoothness property of the graph signals. We then propose an algorithm for learning graphs that enforces such a property and is based on minimizing the variations of the signals on the learned graph. Experiments on both synthetic and real-world data demonstrate that the proposed graph learning framework can efficiently infer meaningful graph topologies from signal observations under the smoothness prior.
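The smoothness criterion underlying this framework, x^T L x = Σ_{(i,j)∈E} W_ij (x_i − x_j)², can be checked directly: a signal that varies gradually along a path graph scores as smooth on that graph's Laplacian but not on a mismatched graph with the same number of edges. The toy comparison below illustrates the objective being minimized, not the paper's learning algorithm (which optimizes over L itself).

```python
import numpy as np

def laplacian(W):
    """Combinatorial graph Laplacian L = D - W of a weighted adjacency matrix."""
    return np.diag(W.sum(axis=1)) - W

# A 4-node path graph 0-1-2-3, and a mismatched graph with the same
# number of edges but connections that ignore the path ordering.
W_path = np.array([[0, 1, 0, 0],
                   [1, 0, 1, 0],
                   [0, 1, 0, 1],
                   [0, 0, 1, 0]], dtype=float)
W_mis = np.array([[0, 0, 1, 1],
                  [0, 0, 0, 1],
                  [1, 0, 0, 0],
                  [1, 1, 0, 0]], dtype=float)

# A signal that varies smoothly along the path: adjacent path nodes
# take nearby values.
x = np.array([1.0, 1.1, 1.2, 1.3])

# Quadratic form x^T L x = sum over edges of W_ij * (x_i - x_j)^2.
s_path = x @ laplacian(W_path) @ x
s_mis = x @ laplacian(W_mis) @ x
print(f"smoothness on path graph: {s_path:.3f}, on mismatched graph: {s_mis:.3f}")
```

Minimizing this quadratic form over candidate Laplacians (with constraints to rule out the trivial empty graph) is what drives the learned topology toward one on which the observed signals are smooth.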