
    Extracting brain regions from rest fMRI with Total-Variation constrained dictionary learning

    Spontaneous brain activity reveals mechanisms of brain function and dysfunction. Its population-level statistical analysis based on functional images often relies on the definition of brain regions that must efficiently summarize the covariance structure between the multiple brain networks. In this paper, we extend a network-discovery approach, namely dictionary learning, to readily extract brain regions. To do so, we introduce a new tool drawing from clustering and linear decomposition methods by carefully crafting a penalty. Our approach automatically extracts regions from rest fMRI that better explain the data and are more stable across subjects than reference decomposition or clustering methods.
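    To make the decomposition setup concrete, here is a minimal sketch of the plain sparse dictionary-learning baseline that such a penalty extends, run on a toy voxels-by-time matrix with scikit-learn; it does not include the Total-Variation constraint described above, and all shapes and parameter values are invented for illustration.

```python
# Hedged sketch: plain sparse dictionary learning on a synthetic rest-fMRI
# matrix (voxels x time points); the sparse codes play the role of spatial maps.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.RandomState(0)
X = rng.randn(5000, 200)                # toy data: 5000 voxels x 200 time points

model = MiniBatchDictionaryLearning(
    n_components=20,                    # number of networks / dictionary atoms
    alpha=1.0,                          # sparsity penalty during learning
    batch_size=256,
    transform_algorithm="lasso_lars",   # l1-penalized codes at transform time
    transform_alpha=0.1,
    random_state=0,
)
spatial_maps = model.fit_transform(X)   # (5000, 20): sparse map per network
time_courses = model.components_        # (20, 200): one time course per atom
print(spatial_maps.shape, time_courses.shape)
```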

    Region segmentation for sparse decompositions: better brain parcellations from rest fMRI

    Functional Magnetic Resonance Images acquired during resting-state provide information about the functional organization of the brain through measuring correlations between brain areas. Independent component analysis is the reference approach to estimate spatial components from weakly structured data such as brain signal time courses; each of these components may be referred to as a brain network, and the whole set of components can be conceptualized as a brain functional atlas. Recently, new methods using a sparsity prior have emerged to deal with low signal-to-noise ratio data. However, even when using sophisticated priors, the results may not be very sparse and most often do not separate the spatial components into brain regions. This work presents post-processing techniques that automatically sparsify brain maps and properly separate regions using geometric operations, and compares these techniques according to faithfulness-to-data and stability metrics. In particular, among threshold-based approaches, hysteresis thresholding, and random walker segmentation, the latter significantly improves the stability of both dense and sparse models.
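    As a rough illustration of the two post-processing routes named above, the sketch below applies hysteresis thresholding and random-walker segmentation to a toy 2D component map with scikit-image and scipy; the synthetic map, the thresholds, and the beta value are arbitrary placeholders, not the settings used in the paper.

```python
# Hedged sketch: split a toy 2D "component map" into regions, first by
# hysteresis thresholding + connected components, then by random walker.
import numpy as np
from scipy import ndimage
from skimage.filters import apply_hysteresis_threshold
from skimage.segmentation import random_walker

rng = np.random.RandomState(0)
component_map = ndimage.gaussian_filter(rng.randn(64, 64), sigma=3)
low, high = 0.5 * component_map.max(), 0.8 * component_map.max()

# Route 1: hysteresis thresholding, then connected components as regions.
mask = apply_hysteresis_threshold(component_map, low=low, high=high)
regions_hyst, n_hyst = ndimage.label(mask)

# Route 2: random walker seeded with strong peaks (one label per peak)
# and clearly inactive pixels as background (label 1).
markers = np.zeros_like(component_map, dtype=int)
markers[component_map < 0] = 1
peaks, n_peaks = ndimage.label(component_map > high)
markers[peaks > 0] = peaks[peaks > 0] + 1
regions_rw = random_walker(component_map, markers, beta=130)

print(n_hyst, n_peaks, np.unique(regions_rw))
```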

    Formal Models of the Network Co-occurrence Underlying Mental Operations

    Systems neuroscience has identified a set of canonical large-scale networks in humans. These have predominantly been characterized by resting-state analyses of the task-unconstrained, mind-wandering brain. Their explicit relationship to defined task performance is largely unknown and remains challenging to establish. The present work contributes a multivariate statistical learning approach that can extract the major brain networks and quantify their configuration during various psychological tasks. The method is validated in two extensive datasets (n = 500 and n = 81) by model-based generation of synthetic activity maps from recombination of shared network topographies. To study a use case, we formally revisited the poorly understood difference between neural activity underlying idling versus goal-directed behavior. We demonstrate that task-specific neural activity patterns can be explained by plausible combinations of resting-state networks. The possibility of decomposing a mental task into the relative contributions of major brain networks, the "network co-occurrence architecture" of a given task, opens an alternative access to the neural substrates of human cognition.
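    A toy numpy sketch of the central idea, expressing a task activity map as a weighted combination of resting-state network maps and reading off each network's contribution; the synthetic arrays and the plain least-squares fit below are stand-ins for the authors' multivariate learning pipeline, not a reproduction of it.

```python
# Hedged sketch: recover the "network co-occurrence" weights of a synthetic
# task map built as a noisy mixture of resting-state network maps.
import numpy as np

rng = np.random.RandomState(0)
n_voxels, n_networks = 10000, 20
networks = rng.randn(n_voxels, n_networks)      # columns: network spatial maps
true_weights = rng.rand(n_networks)             # ground-truth contributions
task_map = networks @ true_weights + 0.1 * rng.randn(n_voxels)

weights, *_ = np.linalg.lstsq(networks, task_map, rcond=None)
print(np.abs(weights - true_weights).max())     # small error on this toy data
```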

    Cerebellar Functional Parcellation Using Sparse Dictionary Learning Clustering

    Frontiers in Neuroscience, 10:188. doi:10.3389/fnins.2016.00188. GUSTO (Growing up towards Healthy Outcomes)

    Apprentissage d'atlas fonctionnel du cerveau modélisant la variabilité inter-individuelle (Learning a functional brain atlas modeling inter-individual variability)

    Recent studies have shown that spontaneous resting-state brain activity unveils the intrinsic functional organization of the brain and complements the information brought by task-protocol studies. From these signals, we will build a functional atlas of the brain together with a model of across-subject variability. The novelty of our approach lies in integrating neuroscientific priors and inter-individual variability directly into a probabilistic model of resting-state activity. These models will be applied to large datasets. This variability, ignored until now, may lead to learning fuzzy atlases that are therefore limited in terms of resolution. This program raises both numerical and algorithmic challenges, owing to the volume of the data studied and the complexity of the modeling involved.

    Learning brain regions via large-scale online structured sparse dictionary-learning

    We propose a multivariate online dictionary-learning method for obtaining decompositions of brain images with structured and sparse components (aka atoms). Sparsity is to be understood in the usual sense: the dictionary atoms are constrained to contain mostly zeros. This is imposed via an ℓ1-norm constraint. By "structured", we mean that the atoms are piece-wise smooth and compact, thus making up blobs, as opposed to scattered patterns of activation. We propose to use a Sobolev (Laplacian) penalty to impose this type of structure. Combining the two penalties, we obtain decompositions that properly delineate brain structures from functional images. This non-trivially extends the online dictionary-learning work of Mairal et al. (2010), at the price of only a factor of 2 or 3 on the overall running time. Just like the Mairal et al. (2010) reference method, the online nature of our proposed algorithm allows it to scale to arbitrarily sized datasets. Experiments on brain data show that our proposed method extracts structured and denoised dictionaries that are more interpretable and better capture inter-subject variability in small-, medium-, and large-scale regimes alike, compared to state-of-the-art models.
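    As a rough illustration of how the two penalties combine, the sketch below evaluates an ℓ1 sparsity term plus a Sobolev (squared-gradient) smoothness term on a toy 3D atom; the gamma weight and the synthetic atom are placeholders, and this shows only the penalty evaluation, not the online learning algorithm itself.

```python
# Hedged sketch: the combined penalty evaluated on a toy 3D dictionary atom.
import numpy as np
from scipy import ndimage

rng = np.random.RandomState(0)
atom = ndimage.gaussian_filter(rng.randn(20, 20, 20), sigma=2)  # toy 3D atom

def structured_sparsity_penalty(v, gamma=1.0):
    """l1 sparsity term plus Sobolev smoothness term sum ||grad v||^2."""
    l1_term = np.abs(v).sum()
    grads = np.gradient(v)              # finite-difference spatial gradients
    sobolev_term = sum((g ** 2).sum() for g in grads)
    return l1_term + gamma * sobolev_term

print(structured_sparsity_penalty(atom))
```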