
    Analysis of Dynamic Brain Imaging Data

    Modern imaging techniques for probing brain function, including functional Magnetic Resonance Imaging, intrinsic and extrinsic contrast optical imaging, and magnetoencephalography, generate large data sets with complex content. In this paper we develop appropriate techniques for the analysis and visualization of such imaging data, in order to separate the signal from the noise and to characterize the signal. The techniques developed fall into the general category of multivariate time series analysis, and in particular we make extensive use of the multitaper framework of spectral analysis. We develop specific protocols for the analysis of fMRI, optical imaging and MEG data, and illustrate the techniques by applications to real data sets generated by these imaging modalities. In general, the analysis protocols involve two distinct stages: 'noise' characterization and suppression, and 'signal' characterization and visualization. An important general conclusion of our study is the utility of a frequency-based representation, with short, moving analysis windows to account for non-stationarity in the data. Of particular note are (a) the development of a decomposition technique ('space-frequency singular value decomposition') that is shown to be a useful means of characterizing the image data, and (b) the development of an algorithm, based on multitaper methods, for the removal of approximately periodic physiological artifacts arising from cardiac and respiratory sources.
    Comment: 40 pages; 26 figures with subparts, including 3 figures as .gif files. Originally submitted to the neuro-sys archive, which was never publicly announced (was 9804003
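
    The core multitaper idea behind these protocols is to average the spectra of several orthogonal DPSS-tapered copies of a short analysis window. Below is a minimal sketch of such an estimate for a single channel or voxel time series, using SciPy's DPSS tapers; the sampling rate, time-bandwidth product and taper count are illustrative assumptions, and the paper's space-frequency SVD and periodic-artifact removal steps are not reproduced.

```python
# Minimal multitaper power-spectrum estimate for one analysis window
# (illustrative parameters only, not the paper's settings).
import numpy as np
from scipy.signal.windows import dpss

def multitaper_psd(x, fs, nw=3.5, n_tapers=6):
    """Average the spectra of several DPSS-tapered copies of x."""
    n = len(x)
    tapers = dpss(n, nw, Kmax=n_tapers)                # (n_tapers, n)
    spectra = np.fft.rfft(tapers * (x - x.mean()), axis=1)
    psd = (np.abs(spectra) ** 2).mean(axis=0) / fs     # average over tapers
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, psd

# Example: a 2 Hz oscillation plus noise in a 5 s window sampled at 100 Hz
fs = 100.0
t = np.arange(0, 5, 1 / fs)
x = np.sin(2 * np.pi * 2 * t) + 0.5 * np.random.randn(t.size)
freqs, psd = multitaper_psd(x, fs)
print(freqs[np.argmax(psd)])                           # peak near 2 Hz
```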

    Smoothing dynamic positron emission tomography time courses using functional principal components

    A functional smoothing approach to the analysis of PET time course data is presented. By borrowing information across space and accounting for this pooling through the use of a nonparametric covariate adjustment, it is possible to smooth the PET time course data, thus reducing the noise. A new model for functional data analysis, the Multiplicative Nonparametric Random Effects Model, is introduced to more accurately account for the variation in the data. A locally adaptive bandwidth choice helps to determine the correct amount of smoothing at each time point. This preprocessing step to smooth the data then allows subsequent analysis by methods such as spectral analysis to be substantially improved in terms of their mean squared error
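
    The pooling idea can be sketched, under simplifying assumptions, as a truncated SVD of the voxel-by-frame matrix of time-activity curves: each curve is reconstructed from a few modes shared across space, which suppresses frame-to-frame noise. This is not the paper's Multiplicative Nonparametric Random Effects Model or its locally adaptive bandwidth selection; the component count and synthetic data below are placeholders.

```python
# PCA-style smoothing of PET time-activity curves by borrowing strength
# across voxels (a plain truncated SVD, for illustration only).
import numpy as np

def smooth_time_courses(tacs, n_components=3):
    """tacs: (n_voxels, n_frames) array of time-activity curves."""
    mean_curve = tacs.mean(axis=0)
    u, s, vt = np.linalg.svd(tacs - mean_curve, full_matrices=False)
    # Rebuild each curve from the leading functional modes only
    return u[:, :n_components] * s[:n_components] @ vt[:n_components] + mean_curve

rng = np.random.default_rng(0)
shape = np.exp(-np.linspace(0, 3, 40))               # shared kinetic shape, 40 frames
clean = np.outer(rng.uniform(0.5, 2.0, 500), shape)  # 500 voxel curves
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
smoothed = smooth_time_courses(noisy)
print(np.mean((smoothed - clean) ** 2) < np.mean((noisy - clean) ** 2))  # True: noise reduced
```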

    Statistical Physics and Representations in Real and Artificial Neural Networks

    This document presents the material of two lectures on statistical physics and neural representations, delivered by one of us (R.M.) at the Fundamental Problems in Statistical Physics XIV summer school in July 2017. In the first part, we consider the neural representations of space (maps) in the hippocampus. We introduce an extension of the Hopfield model, able to store multiple spatial maps as continuous, finite-dimensional attractors. The phase diagram and dynamical properties of the model are analyzed. We then show how spatial representations can be dynamically decoded using an effective Ising model capturing the correlation structure in the neural data, and compare applications to data obtained from hippocampal multi-electrode recordings and to data obtained by (sub)sampling our attractor model. In the second part, we focus on the problem of learning data representations in machine learning, in particular with artificial neural networks. We start by introducing data representations through some illustrations. We then analyze two important algorithms, Principal Component Analysis and Restricted Boltzmann Machines, with tools from statistical physics
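
    For concreteness, here is a minimal sketch of the standard Hopfield model that the lectures build on: binary patterns stored with the Hebbian rule and recalled by iterating zero-temperature sign dynamics. The multi-map continuous attractors, the effective Ising decoder and the RBM analysis are not reproduced; network size and pattern count are illustrative assumptions.

```python
# Standard Hopfield model: Hebbian storage and zero-temperature recall
# (illustrative sizes; not the lectures' multi-map extension).
import numpy as np

rng = np.random.default_rng(1)
n_units, n_patterns = 200, 5
patterns = rng.choice([-1, 1], size=(n_patterns, n_units))

# Hebbian couplings J_ij = (1/N) * sum_mu xi_i^mu xi_j^mu, no self-coupling
J = patterns.T @ patterns / n_units
np.fill_diagonal(J, 0.0)

# Start from a corrupted copy of pattern 0 and iterate sign(J s) updates
state = patterns[0].copy()
state[rng.choice(n_units, size=30, replace=False)] *= -1
for _ in range(10):
    state = np.where(J @ state >= 0, 1, -1)

print(state @ patterns[0] / n_units)   # overlap close to 1: pattern retrieved
```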

    Investigating microstructural variation in the human hippocampus using non-negative matrix factorization

    In this work we use non-negative matrix factorization to identify patterns of microstructural variance in the human hippocampus. We use high-resolution structural and diffusion magnetic resonance imaging data from the Human Connectome Project to query hippocampus microstructure on a multivariate, voxelwise basis. Application of non-negative matrix factorization identifies spatial components (clusters of voxels sharing similar covariance patterns) as well as subject weightings (individual variance across hippocampus microstructure). By assessing the stability of the spatial components as well as the accuracy of the factorization, we identified four distinct microstructural components. Furthermore, we quantified the benefit of combining microstructural metrics by demonstrating that three metrics together (T1-weighted/T2-weighted signal, mean diffusivity and fractional anisotropy) produced more stable spatial components than any metric assessed individually. Finally, we related individual subject weightings to demographic and behavioural measures using a partial least squares analysis, identifying interpretable relationships between hippocampus microstructure and these measures. Taken together, our work presents non-negative matrix factorization as a spatially specific analytical approach for neuroimaging studies and advocates the use of multiple metrics for data-driven component analyses
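
    A minimal sketch of the factorization step, assuming scikit-learn's NMF in place of the authors' implementation: a non-negative voxels-by-(subjects x metrics) matrix is decomposed into spatial components and subject weightings. The rescaling, matrix layout and the random stand-in data below are illustrative assumptions.

```python
# NMF of stacked microstructural metrics into spatial components (W)
# and subject weightings (H); synthetic stand-in data for illustration.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.preprocessing import minmax_scale

rng = np.random.default_rng(2)
n_voxels, n_subjects = 1000, 50
# Stand-ins for T1w/T2w, mean diffusivity and fractional anisotropy maps
metrics = [rng.random((n_voxels, n_subjects)) for _ in range(3)]
# Scale each subject's map to [0, 1] and stack metrics side by side
X = np.hstack([minmax_scale(m, axis=0) for m in metrics])  # (voxels, 3 * subjects)

model = NMF(n_components=4, init="nndsvd", max_iter=500, random_state=0)
W = model.fit_transform(X)   # (n_voxels, 4): spatial components
H = model.components_        # (4, 3 * n_subjects): weightings per subject and metric
print(W.shape, H.shape)
```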

    MENGA: a new comprehensive tool for the integration of neuroimaging data and the Allen human brain transcriptome atlas

    Brain-wide mRNA mappings offer great potential for neuroscience research, as they can provide information about system proteomics. In a previous work we correlated mRNA maps with the binding patterns of radioligands targeting specific molecular systems, imaged with positron emission tomography (PET) in unrelated control groups. This approach is potentially applicable to any imaging modality, as long as an efficient procedure for imaging-genomic matching is provided. In the original work we considered mRNA brain maps of the whole human genome derived from the Allen human brain database (ABA), and we performed the analysis with a specific region-based segmentation whose resolution was limited by the PET data parcellation. There we identified the need for a platform for imaging-genomic integration that is usable with any imaging modality and fully exploits the high-resolution mapping of the ABA dataset. In this work we present MENGA (Multimodal Environment for Neuroimaging and Genomic Analysis), a software platform that allows the investigation of the correlation patterns between neuroimaging data of any sort (both functional and structural) and mRNA gene expression profiles derived from the ABA database at high resolution. We applied MENGA to six different imaging datasets from three modalities (PET, single photon emission tomography and magnetic resonance imaging), targeting the dopamine and serotonin receptor systems and the myelin molecular structure. We further investigated imaging-genomic correlations in the case of mismatch between selected proteins and imaging targets
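
    The basic imaging-genomic matching step can be sketched as a regional correlation between an imaging measure and a gene's mRNA expression profile over a common parcellation. The example below uses a Spearman correlation on synthetic data for a hypothetical 90-region atlas; MENGA's handling of ABA samples, probes and genomic quality control is not reproduced.

```python
# Correlate a regional imaging measure with regional mRNA expression
# (synthetic data over a hypothetical 90-region parcellation).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n_regions = 90
mrna = rng.random(n_regions)                          # gene expression per region
imaging = 0.8 * mrna + 0.2 * rng.random(n_regions)    # e.g. regional binding potential

rho, pval = spearmanr(imaging, mrna)
print(f"Spearman rho = {rho:.2f}, p = {pval:.3g}")
```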

    Space-by-time non-negative matrix factorization for single-trial decoding of M/EEG activity

    We develop a novel methodology for the single-trial analysis of multichannel time-varying neuroimaging signals. We introduce the space-by-time M/EEG decomposition, based on Non-negative Matrix Factorization (NMF), which describes single-trial M/EEG signals using a set of non-negative spatial and temporal components that are linearly combined with signed scalar activation coefficients. We illustrate the effectiveness of the proposed approach on an EEG dataset recorded during the performance of a visual categorization task. Our method extracts three temporal and two spatial functional components, achieving a compact yet full representation of the underlying structure, which validates and succinctly summarizes results from previous studies. Furthermore, we introduce a decoding analysis that allows us to determine the distinct functional role of each component and to relate it to experimental conditions and task parameters. In particular, we demonstrate that the presented stimulus and the task difficulty of each trial can be reliably decoded using specific combinations of components from the identified space-by-time representation. When compared with a sliding-window linear discriminant algorithm, our approach yields more robust decoding performance across participants. Overall, our findings suggest that the proposed space-by-time decomposition is a meaningful low-dimensional representation that carries the relevant information of single-trial M/EEG signals
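
    A simplified sketch of the pipeline, under stated assumptions: non-negative temporal and spatial modules are estimated with scikit-learn's NMF (the original work uses a dedicated tri-factorization algorithm), per-trial signed activation coefficients are obtained by least squares, and a linear discriminant decoder is cross-validated on those coefficients. The rectification step, module counts and synthetic data are placeholders.

```python
# Space-by-time style decomposition and single-trial decoding (simplified sketch).
import numpy as np
from sklearn.decomposition import NMF
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_trials, n_times, n_chans = 120, 50, 32
data = rng.standard_normal((n_trials, n_times, n_chans))
labels = rng.integers(0, 2, n_trials)                  # e.g. stimulus category
data[labels == 1, :25, :16] += 0.5                     # inject a class-dependent pattern

pos = np.abs(data)                                     # crude non-negativity for NMF
W_tem = NMF(n_components=3, init="nndsvd", max_iter=400).fit_transform(
    pos.transpose(1, 0, 2).reshape(n_times, -1))       # temporal modules (n_times, 3)
W_spa = NMF(n_components=2, init="nndsvd", max_iter=400).fit_transform(
    pos.transpose(2, 0, 1).reshape(n_chans, -1))       # spatial modules (n_chans, 2)

# Per-trial signed coefficients A_s solving R_s ~ W_tem @ A_s @ W_spa.T
coeffs = np.array([np.linalg.pinv(W_tem) @ R @ np.linalg.pinv(W_spa).T
                   for R in data]).reshape(n_trials, -1)

print(cross_val_score(LinearDiscriminantAnalysis(), coeffs, labels, cv=5).mean())
```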

    Diffusion map for clustering fMRI spatial maps extracted by independent component analysis

    Functional magnetic resonance imaging (fMRI) produces data about activity inside the brain, from which spatial maps can be extracted by independent component analysis (ICA). A dataset contains n spatial maps, each with p voxels, and the number of voxels is very high compared to the number of analyzed spatial maps. Clustering of the spatial maps is usually based on correlation matrices. This usually works well, although such a similarity matrix can inherently explain only a certain amount of the total variance contained in the high-dimensional data, where n is relatively small but p is large. For such high-dimensional data, it is reasonable to perform dimensionality reduction before clustering. In this research, we used the recently developed diffusion map for dimensionality reduction in conjunction with spectral clustering. This research revealed that diffusion-map-based clustering worked as well as the more traditional methods, and produced more compact clusters when needed.
    Comment: 6 pages. 8 figures. Copyright (c) 2013 IEEE. Published at the 2013 IEEE International Workshop on Machine Learning for Signal Processing
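
    A minimal sketch of the diffusion-map step, with k-means on the diffusion coordinates standing in for the spectral clustering used in the paper: build a Gaussian affinity from correlation distances between the spatial maps, normalize it into a Markov matrix, and embed each map with the leading non-trivial eigenvectors scaled by their eigenvalues. The kernel width, number of diffusion coordinates and cluster count are illustrative assumptions.

```python
# Diffusion-map embedding of ICA spatial maps followed by k-means
# (illustrative parameters and synthetic maps).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
n_maps, n_voxels = 60, 5000
maps = rng.standard_normal((n_maps, n_voxels))
maps[:30] += 0.5 * rng.standard_normal(n_voxels)   # two loose groups sharing
maps[30:] += 0.5 * rng.standard_normal(n_voxels)   # different common components

# Gaussian affinity from correlation distances, row-normalized to a Markov matrix
dist = 1.0 - np.corrcoef(maps)
eps = np.median(dist) ** 2
P = np.exp(-dist ** 2 / eps)
P /= P.sum(axis=1, keepdims=True)

# Diffusion coordinates: leading non-trivial eigenvectors scaled by eigenvalues
evals, evecs = np.linalg.eig(P)
order = np.argsort(-evals.real)
coords = evecs.real[:, order[1:4]] * evals.real[order[1:4]]

print(KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coords))
```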