
    Learning to rank from medical imaging data

    Medical images can be used to predict a clinical score coding for the severity of a disease, a pain level or the complexity of a cognitive task. In all these cases, the predicted variable has a natural order. While a standard classifier discards this information, we would like to take it into account in order to improve prediction performance. A standard linear regression does model such information; however, the linearity assumption is unlikely to be satisfied when predicting from pixel intensities in an image. In this paper we address these modeling challenges with a supervised learning procedure in which the model aims to order, or rank, images. We use a linear model for its robustness in high dimension and its interpretability. We show on simulations and two fMRI datasets that this approach is able to predict the correct ordering on pairs of images, yielding higher prediction accuracy than standard regression and multiclass classification techniques.
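    The pairwise-ranking idea in this abstract can be sketched as follows. This is an illustrative toy, not the paper's exact method: for every pair of images with different scores, the feature difference becomes one training example labelled by the sign of the score difference, and a linear classifier on these differences learns a weight vector that orders the images. The data are synthetic; the quantile-based discretization is an assumption made for the demo.

    ```python
    import numpy as np
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)

    # Synthetic "images": 200 samples, 50 voxels, ordinal score in {0,1,2,3}.
    X = rng.normal(size=(200, 50))
    w_true = rng.normal(size=50)
    y = np.digitize(X @ w_true, np.quantile(X @ w_true, [0.25, 0.5, 0.75]))

    # Every ordered pair (i, j) with y_i != y_j yields one difference example.
    pairs = [(i, j) for i in range(len(y)) for j in range(len(y)) if y[i] != y[j]]
    idx = rng.choice(len(pairs), size=2000, replace=False)
    Xp = np.array([X[i] - X[j] for i, j in (pairs[k] for k in idx)])
    yp = np.array([np.sign(y[i] - y[j]) for i, j in (pairs[k] for k in idx)])

    # A linear SVM on the differences learns a ranking direction w.
    clf = LinearSVC(C=1.0).fit(Xp, yp)

    # Scoring images by X @ w predicts their ordering on held pairs.
    scores = X @ clf.coef_.ravel()
    correct = np.mean([(scores[i] > scores[j]) == (y[i] > y[j])
                       for i, j in pairs[:1000]])
    print(f"pairwise ordering accuracy: {correct:.2f}")
    ```

    Because the learned model is a single weight vector over voxels, it retains the interpretability the abstract points to: the weights can be mapped back onto the image.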

    Beyond brain reading: randomized sparsity and clustering to simultaneously predict and identify

    The prediction of behavioral covariates from functional MRI (fMRI) is known as brain reading. From a statistical standpoint, this challenge is a supervised learning task. The ability to predict cognitive states from new data gives a model selection criterion: prediction accuracy. While a good prediction score implies that some of the voxels used by the classifier are relevant, one cannot state that these voxels form the brain regions involved in the cognitive task. The best predictive model may have selected non-informative regions by chance, and neglected relevant regions that provide duplicate information. In this contribution, we address the support identification problem. The proposed approach relies on randomization techniques which have been proved to be consistent for support recovery. To account for the spatial correlations between voxels, our approach makes use of a spatially constrained hierarchical clustering algorithm. Results are provided on simulations and a visual experiment.
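    The randomization idea behind support recovery can be sketched with stability selection: fit a sparse linear model on many random subsamples and keep the features selected in a large fraction of runs. This is a minimal sketch under simulated data; the paper's spatially constrained Ward clustering step is omitted here, and the `alpha` and threshold values are arbitrary choices for the demo.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n, p, support = 100, 200, [0, 1, 2, 3, 4]   # 5 truly informative features

    X = rng.normal(size=(n, p))
    y = X[:, support].sum(axis=1) + 0.1 * rng.normal(size=n)

    n_runs, counts = 50, np.zeros(p)
    for _ in range(n_runs):
        sub = rng.choice(n, size=n // 2, replace=False)  # random half of the data
        coef = Lasso(alpha=0.1).fit(X[sub], y[sub]).coef_
        counts += coef != 0                              # tally selected features

    selection_freq = counts / n_runs
    recovered = np.flatnonzero(selection_freq > 0.8)     # stable features only
    print("recovered support:", recovered)
    ```

    A single Lasso fit may pick up spurious voxels; requiring a feature to survive most subsamples is what makes the randomized procedure consistent for support recovery.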

    Predicting Activation Across Individuals with Resting-State Functional Connectivity Based Multi-Atlas Label Fusion

    The alignment of brain imaging data for functional neuroimaging studies is challenging due to the discrepancy between correspondence of morphology and equivalence of functional role. In this paper we map functional activation areas across individuals with a multi-atlas label fusion algorithm in a functional space. We learn the manifold of resting-state fMRI signals in each individual and perform manifold alignment in an embedding space. We then transfer activation predictions from a source population to a target subject via multi-atlas label fusion. The cost function is derived from the aligned manifolds, so that the resulting correspondences are based on the similarity of intrinsic connectivity architecture. Experiments show that the resulting label fusion predicts activation evoked by various experiment conditions with higher accuracy than relying on morphological alignment. Interestingly, this gain is distributed heterogeneously across the cortex and across tasks. This offers insights into the relationship between intrinsic connectivity, morphology and task activation. Practically, the mechanism can serve as a prior, and provides an avenue to infer task-related activation in individuals for whom only resting data is available.
    Keywords: Functional Connectivity, Cortical Surface, Task Activation, Target Subject, Intrinsic Connectivity
    Funding: Congressionally Directed Medical Research Programs (U.S.) (Grant PT100120); Eunice Kennedy Shriver National Institute of Child Health and Human Development (U.S.) (R01HD067312); Neuroimaging Analysis Center (U.S.) (P41EB015902); Oesterreichische Nationalbank (14812); Oesterreichische Nationalbank (15929); Seventh Framework Programme (European Commission) (FP7 2012-PIEF-GA-33003)
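    The fusion step can be illustrated with a toy: each source subject proposes an activation label for a target location, and the proposals are combined with weights derived from connectivity similarity. Everything here is hypothetical and simplified: a plain correlation between resting-state profiles stands in for the paper's aligned-manifold cost, and the labels are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_sources, n_features = 5, 30

    # Resting-state connectivity profile of the target, and noisy copies
    # standing in for the (already aligned) source subjects.
    target_profile = rng.normal(size=n_features)
    source_profiles = target_profile + 0.5 * rng.normal(size=(n_sources, n_features))
    source_labels = np.array([1, 1, 0, 1, 0])   # activation proposed by each atlas

    # Similarity weights: correlation with the target's connectivity profile.
    weights = np.array([np.corrcoef(target_profile, s)[0, 1]
                        for s in source_profiles])
    weights = np.clip(weights, 0, None)          # ignore anti-correlated atlases
    weights /= weights.sum()

    fused = float(weights @ source_labels)       # soft activation estimate in [0, 1]
    label = int(fused > 0.5)
    ```

    The design choice the abstract emphasizes is exactly this weighting: atlases whose intrinsic connectivity resembles the target's contribute more to the predicted activation than atlases that merely match anatomically.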

    Distributed Neural Systems for Face Perception

    Face perception plays a central role in social communication and is, arguably, one of the most sophisticated visual perceptual skills in humans. Consequently, face perception has been the subject of intensive investigation and theorizing in both visual and social neuroscience. The organization of neural systems for face perception has stimulated intense debate. Much of this debate has focused on models that posit the existence of a module that is specialized for face perception versus models that propose that face perception is mediated by distributed processing. In our work, we have proposed that face perception is mediated by distributed systems, both in terms of the involvement of multiple brain areas and in terms of locally distributed population codes within these areas. Specifically, we proposed a model for the distributed neural system for face perception that has a Core System of visual extrastriate areas for visual analysis of faces and an Extended System that consists of additional neural systems that work in concert with the Core System to extract various types of information from faces. We also have shown that in visual extrastriate cortices, information that distinguishes faces from other categories of animate and inanimate objects is not restricted to regions that respond maximally to faces, i.e., the fusiform and occipital face areas.

    Neural response to the visual familiarity of faces

    Recognizing personally familiar faces is the result of a spatially distributed process that involves visual perceptual areas and areas that play an essential role in other cognitive and social functions, such as the anterior paracingulate cortex, the precuneus and the amygdala [M.I. Gobbini, E. Leibenluft, N. Santiago, J.V. Haxby, Social and emotional attachment in the neural representation of faces, Neuroimage 22 (2004) 1628–1635; M.I. Gobbini, J.V. Haxby, Neural systems for recognition of familiar faces, Neuropsychologia, in press; E. Leibenluft, M.I. Gobbini, T. Harrison, J.V. Haxby, Mothers’ neural activation in response to pictures of their, and other, children, Biol. Psychiatry 56 (2004) 225–232]. In order to isolate the role of visual familiarity in face recognition, we used fMRI to measure the response to faces characterized by experimentally induced visual familiarity that carried no biographical information or emotional content. The fMRI results showed a stronger response in the precuneus to the visually familiar faces, consistent with studies that implicate this region in the retrieval of information from long-term memory and imagery. Moreover, this finding supports the hypothesis of a key role of the precuneus in the acquisition of familiarity with faces [H. Kosaka, M. Omori, T. Iidaka, T. Murata, T. Shimoyama, T. Okada, N. Sadato, Y. Yonekura, Y. Wada, Neural substrates participating in acquisition of facial familiarity: an fMRI study, Neuroimage 20 (2003) 1734–1742]. By contrast, the visually familiar faces evoked a weaker response in the fusiform gyrus, which may reflect the development of a sparser encoding, or a reduced attentional load when processing stimuli that are familiar. The visually familiar faces also evoked a weaker response in the amygdala, supporting the proposed role of this structure in mediating the guarded attitude when meeting someone new.

    Common neural mechanisms for the evaluation of facial trustworthiness and emotional expressions as revealed by behavioral adaptation

    People rapidly and automatically evaluate faces along many social dimensions. Here, we focus on judgments of trustworthiness, which approximate basic valence evaluation of faces, and test whether these judgments are an overgeneralization of the perception of emotional expressions. We used a behavioral adaptation paradigm to investigate whether the previously noted perceptual similarities between trustworthiness and emotional expressions of anger and happiness extend to their underlying neural representations. We found that adapting to angry or happy facial expressions causes trustworthiness evaluations of subsequently rated neutral faces to increase or decrease, respectively. Further, we found no such modulation of trustworthiness evaluations after participants were adapted to fearful expressions, suggesting that this effect is specific to angry and happy expressions. We conclude that, in line with the overgeneralization hypothesis, a common neural system is engaged during the evaluation of facial trustworthiness and expressions of anger and happiness.

    Three virtues of similarity based multivariate pattern analysis: an example from the object vision pathway

    We present an fMRI investigation of object representation in the human ventral vision pathway highlighting three aspects of similarity analysis that make it especially useful for illuminating the representational content underlying neural activation patterns. First, similarity structures allow for an abstract depiction of representational content in a given brain region. This is demonstrated using hierarchical clustering and multidimensional scaling (MDS) of the dissimilarity matrices defined by our stimulus categories: female and male human faces, dog faces, monkey faces, chairs, shoes, and houses. For example, in ventral temporal (VT) cortex the similarity space was neatly divided into face and non-face regions. Within the face region of the MDS space, male and female human faces were closest to each other, and dog faces were closer to human faces than monkey faces. Within the non-face region of the abstract space, the smaller objects (shoes and chairs) were closer to each other than they were to houses. Second, similarity structures are independent of the data source. Dissimilarities among stimulus categories can be derived from behavioral measures, from stimulus models, or from neural activity patterns in different brain regions and different subjects. The similarity structures from these diverse sources all have the same dimensionality. This source independence allowed for the direct comparison of similarity structures across subjects (n = 16) and across three brain regions representing early, middle, and late stages of the object vision pathway. Finally, similarity structures can change shape in well-ordered ways as the source of the dissimilarities changes, helping to illuminate how representational content is transformed along a neural pathway. By comparing similarity spaces from three regions along the ventral visual pathway, we demonstrate how the similarity structure transforms from an organization based on low-level visual features, as reflected by patterns in early visual cortex, to a more categorical representation in late object vision cortex, with intermediate organization at the middle stage.
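    The pipeline described above (dissimilarity matrix, then hierarchical clustering and MDS) can be sketched on simulated data. The category names follow the abstract's stimulus set, but the activation patterns here are invented so that faces and non-faces share separate components; this is a demo of the analysis, not the study's data.

    ```python
    import numpy as np
    from scipy.spatial.distance import pdist, squareform
    from scipy.cluster.hierarchy import linkage, fcluster
    from sklearn.manifold import MDS

    rng = np.random.default_rng(0)
    categories = ["female face", "male face", "dog face", "monkey face",
                  "chair", "shoe", "house"]

    # Simulated mean patterns: faces share one component, objects another.
    face_axis, object_axis = rng.normal(size=100), rng.normal(size=100)
    patterns = np.array([
        face_axis + 0.3 * rng.normal(size=100) if "face" in c
        else object_axis + 0.3 * rng.normal(size=100)
        for c in categories
    ])

    # 7 x 7 representational dissimilarity matrix (1 - correlation).
    rdm = squareform(pdist(patterns, metric="correlation"))

    # Hierarchical clustering recovers the face / non-face split.
    clusters = fcluster(linkage(pdist(patterns, "correlation"), "average"),
                        t=2, criterion="maxclust")

    # 2-D MDS embedding of the same dissimilarity structure.
    coords = MDS(n_components=2, dissimilarity="precomputed",
                 random_state=0).fit_transform(rdm)
    ```

    The source-independence point in the abstract falls out of this layout: any 7 x 7 dissimilarity matrix, whether from behavior, a stimulus model, or another brain region, can be dropped into the same clustering and MDS steps and compared directly.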

    Graph-based inter-subject classification of local fMRI patterns

    Classification of medical images in multi-subject settings is a difficult challenge due to the variability that exists between individuals. Here we introduce a new graph-based framework designed to deal with the inter-subject functional variability present in fMRI data. A graphical model is constructed to encode the functional, geometric and structural properties of local activation patterns. We then design a specific graph kernel, allowing us to conduct SVM classification in graph space. Experiments conducted on an inter-subject classification task of patterns recorded in the auditory cortex show that it is the only approach among a wide range of tested methods to perform above chance level.
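    The kernel-SVM mechanics can be sketched with a much simpler stand-in kernel than the paper's. Here each "graph" is reduced to a normalized node-label histogram, compared with a histogram-intersection kernel, and the resulting Gram matrix is passed to an SVM with `kernel="precomputed"`. The graphs, labels, and bias values are invented for the demo; only the precomputed-kernel plumbing mirrors the approach.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    def random_graph(bias):
        # Node "labels" are quantized activation levels; `bias` separates
        # the two hypothetical experimental conditions.
        levels = np.clip(rng.normal(loc=bias, size=20), 0, 3).astype(int)
        hist = np.bincount(levels, minlength=4)
        return hist / hist.sum()            # normalized label histogram

    graphs = [random_graph(b) for b in [0.5] * 20 + [1.5] * 20]
    y = np.array([0] * 20 + [1] * 20)

    # Histogram-intersection kernel between graph label distributions
    # (a valid positive semi-definite kernel).
    K = np.array([[np.minimum(g, h).sum() for h in graphs] for g in graphs])

    clf = SVC(kernel="precomputed").fit(K, y)
    acc = clf.score(K, y)
    ```

    The appeal of this setup, as in the abstract, is that all graph-specific structure lives in the kernel: swapping in a richer graph kernel changes only how `K` is computed, not the classifier.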