
    Multiple Subject Learning for Inter-Subject Prediction

    Multi-voxel pattern analysis has become an important tool for neuroimaging data analysis because it makes it possible to predict a behavioral variable from imaging patterns. However, standard models do not take into account the differences that can exist between subjects, so they perform poorly on the inter-subject prediction task. We introduce a model called Multiple Subject Learning (MSL) that is designed to effectively combine the information provided by fMRI data from several subjects: in a first stage, a weighting of single-subject kernels is learned using multiple kernel learning to produce a classifier; a data-shuffling procedure then builds ensembles of such classifiers, which are combined by a majority vote. We show that MSL outperforms other models on the inter-subject prediction task, and we discuss the empirical behavior of this new model.
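    The two-stage procedure described above lends itself to a short sketch. The following is a simplified illustration under stated assumptions, not the authors' implementation: trials are assumed aligned across subjects so that each subject yields a kernel over the same examples, and the multiple-kernel-learning stage is replaced by a uniform kernel weighting for brevity.

    import numpy as np
    from sklearn.svm import SVC

    def subject_kernels(X_by_subject):
        # One linear kernel per subject; each X is (n_trials, n_voxels),
        # with trials assumed aligned across subjects.
        return [X @ X.T for X in X_by_subject]

    def msl_predict(kernels, y, train_idx, test_idx, n_members=11, seed=0):
        rng = np.random.default_rng(seed)
        weights = np.full(len(kernels), 1.0 / len(kernels))  # stand-in for learned MKL weights
        K = sum(w * Ks for w, Ks in zip(weights, kernels))
        votes = []
        for _ in range(n_members):
            # Data shuffling: each ensemble member trains on a random subsample.
            sub = rng.choice(train_idx, size=max(2, len(train_idx) // 2), replace=False)
            clf = SVC(kernel="precomputed").fit(K[np.ix_(sub, sub)], y[sub])
            votes.append(clf.predict(K[np.ix_(test_idx, sub)]))
        votes = np.asarray(votes, dtype=int)  # assumes integer class labels 0, 1, ...
        # Majority vote over ensemble members, one column per test trial.
        return np.apply_along_axis(lambda c: np.bincount(c).argmax(), 0, votes)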

    Quantitative predictions of cerebral arterial labeling employing neural network ensemble orchestrate precise investigation in brain frailty of cerebrovascular disease

    Master's thesis -- Seoul National University Graduate School, College of Natural Sciences, Interdisciplinary Program in Neuroscience, February 2023. Advisors: Kim Sang-Yun and Seo Woo-Geun (co-advisors).
    Identifying the cerebral arterial branches is essential for undertaking a computational approach to cerebrovascular imaging. However, the complexity and inter-individual differences involved in this process have not been thoroughly studied. We used machine learning to examine the anatomical profile of the cerebral arterial tree. The method is less sensitive to inter-subject and cohort-wise anatomical variations and exhibits robust performance over an unprecedented in-depth vessel range. We applied machine learning algorithms to disease-free healthy controls (n = 42), stroke patients with intracranial atherosclerosis (ICAS) (n = 46), and stroke patients mixed with the existing controls (n = 69). We trained on 70% and tested on 30% of each study cohort, incorporating spatial coordinates and geometric vessel feature vectors. Cerebral arterial images from magnetic resonance angiography were analyzed with a segmentation-stacking method. We precisely classified the cerebral arteries across the exhaustive scope of vessel components using advanced geometric characterization, a redefined conception of the vessel unit, and post-processing algorithms. We verified that the neural network ensemble, with multiple joint models as the combined predictor, classified all vessel component types independently of inter-subject variations in cerebral arterial anatomy. The validity of the model's categorization performance was tested on the control, ICAS, and control-blended stroke cohorts using the area under the receiver operating characteristic (ROC) curve and the precision-recall curve. Classification accuracy rarely fell outside the 90–99% range for each image, independent of cohort-dependent cerebrovascular structural variations. The classification ensemble achieved high overall areas under the ROC curve of 0.99–1.00 [0.97–1.00] on the test set across the study cohorts. Identifying an all-inclusive range of vessel components across controls, ICAS, and stroke patients, chunk-level prediction accuracies were: internal carotid arteries, 91–100%; middle cerebral arteries, 82–98%; anterior cerebral arteries, 88–100%; posterior cerebral arteries, 87–100%; and the collection of superior, anterior inferior, and posterior inferior cerebellar arteries, 90–99%. Applying a voting algorithm to the queued vessel classifications and anatomically post-processing the automatically classified results further strengthened quantitative prediction performance. We employed stochastic clustering and deep neural network ensembles. Machine intelligence-assisted prediction of vessel structure allowed us to personalize quantitative predictions of various types of cerebral arterial structures, contributing to precise and efficient decisions regarding cerebrovascular disease.
    Table of contents:
    CHAPTER 1. AUTOMATED IN-DEPTH CEREBRAL ARTERIAL LABELING USING CEREBROVASCULAR VASCULATURE REFRAMING AND DEEP NEURAL NETWORKS
    1.1. INTRODUCTION
    1.2.1. Study design and subjects
    1.2.2. Imaging preparation
    1.2.2.1. Magnetic resonance machine
    1.2.2.2. Magnetic resonance sequence
    1.2.2.3. Region growing
    1.2.2.4. Feature extraction
    1.2.3. Reframing hierarchical cerebrovasculature
    1.2.4. Classification method development
    1.2.4.1. Two-step modeling
    1.2.4.2. Validation
    1.2.4.3. Statistics
    1.2.4.4. Data availability
    1.3. RESULTS
    1.3.1. Subject characteristics
    1.3.2. Vascular component characteristics
    1.3.3. Testing the appropriateness of the reframed vascular structure
    1.3.4. Step 1 modeling: chunk
    1.3.5. Step 2 modeling: branch
    1.3.6. Vascular morphological features according to the vascular risk factors
    1.3.7. The profiles of geometric feature vectors weighted on deep neural networks
    1.4. DISCUSSION
    1.4.1. The role of neural networks in this study
    1.4.2. Paradigm-shifting vascular unit reframing
    1.4.3. Limitations and future directions
    1.5. CONCLUSIONS
    1.6. ACKNOWLEDGEMENTS
    1.7. FUNDING
    BIBLIOGRAPHY
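    To make the ensemble-and-voting scheme concrete, here is an illustrative sketch (not the thesis code; the label set, feature layout, and model sizes are hypothetical): several independently seeded networks classify each vessel chunk from its spatial and geometric feature vector, and a majority vote over the joint models gives the final arterial label.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def train_ensemble(X_train, y_train, n_models=5):
        # X_train: (n_chunks, n_features) geometric feature vectors;
        # y_train: arterial labels such as "ICA", "MCA", "ACA", "PCA".
        models = []
        for seed in range(n_models):
            m = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                              random_state=seed)
            models.append(m.fit(X_train, y_train))
        return models

    def vote_predict(models, X):
        # Majority vote across the joint models for each vessel chunk.
        preds = np.stack([m.predict(X) for m in models])  # (n_models, n_chunks)
        fused = []
        for column in preds.T:
            labels, counts = np.unique(column, return_counts=True)
            fused.append(labels[counts.argmax()])
        return np.array(fused)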

    BLOCK: Bilinear Superdiagonal Fusion for Visual Question Answering and Visual Relationship Detection

    Multimodal representation learning is gaining increasing interest within the deep learning community. While bilinear models provide an interesting framework for finding subtle combinations of modalities, their number of parameters grows quadratically with the input dimensions, making their practical implementation within classical deep learning pipelines challenging. In this paper, we introduce BLOCK, a new multimodal fusion based on the block-superdiagonal tensor decomposition. It leverages the notion of block-term ranks, which generalizes both the rank and the mode ranks of tensors, already used for multimodal fusion. This makes it possible to define new ways of optimizing the trade-off between the expressiveness and the complexity of the fusion model, and to represent very fine interactions between modalities while maintaining powerful mono-modal representations. We demonstrate the practical interest of our fusion model by using BLOCK for two challenging tasks: Visual Question Answering (VQA) and Visual Relationship Detection (VRD), for which we design end-to-end learnable architectures that represent the relevant interactions between modalities. Through extensive experiments, we show that BLOCK compares favorably with state-of-the-art multimodal fusion models on both VQA and VRD tasks. Our code is available at https://github.com/Cadene/block.bootstrap.pytorch
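    As a rough illustration of the block-superdiagonal idea (a toy NumPy sketch with made-up dimensions, not the released implementation linked above): each modality is projected and split into C chunks, each pair of chunks interacts through its own small bilinear tensor (the superdiagonal blocks), and the per-block outputs are concatenated.

    import numpy as np

    rng = np.random.default_rng(0)
    d1, d2, C, b1, b2, bo = 300, 200, 4, 20, 20, 16  # illustrative sizes

    W1 = 0.01 * rng.normal(size=(d1, C * b1))          # projection, modality 1
    W2 = 0.01 * rng.normal(size=(d2, C * b2))          # projection, modality 2
    blocks = 0.01 * rng.normal(size=(C, b1, b2, bo))   # one bilinear tensor per block

    def block_fusion(x, y):
        x_chunks = (x @ W1).reshape(C, b1)
        y_chunks = (y @ W2).reshape(C, b2)
        # z[c, k] = sum_ij x_chunks[c, i] * y_chunks[c, j] * blocks[c, i, j, k]
        z = np.einsum("ci,cj,cijk->ck", x_chunks, y_chunks, blocks)
        return z.reshape(-1)  # concatenated block outputs, length C * bo

    fused = block_fusion(rng.normal(size=d1), rng.normal(size=d2))
    print(fused.shape)  # (64,)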

    Learning and comparing functional connectomes across subjects

    Functional connectomes capture brain interactions via synchronized fluctuations in the functional magnetic resonance imaging signal. If measured during rest, they map the intrinsic functional architecture of the brain. With task-driven experiments, they represent integration mechanisms between specialized brain areas. Analyzing their variability across subjects and conditions can reveal markers of brain pathologies and of the mechanisms underlying cognition. Methods for estimating functional connectomes from the imaging signal have developed rapidly, and the literature offers a diverse range of strategies for comparing them. This review aims to clarify the links between functional-connectivity methods and to lay out the steps involved in performing a group study of functional connectomes.
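    The steps such a group study involves can be made concrete with a short sketch (a common baseline pipeline, given here as an assumption rather than as the review's prescription): estimate each subject's connectome as a correlation matrix, vectorize its unique edges, apply the Fisher z-transform, and test each edge across groups.

    import numpy as np
    from scipy import stats

    def connectome(ts):
        # Functional connectome as a region-by-region correlation matrix;
        # ts is an (n_timepoints, n_regions) array for one subject.
        return np.corrcoef(ts.T)

    def edges(C):
        # Keep the strictly lower triangle: the unique off-diagonal edges.
        return C[np.tril_indices_from(C, k=-1)]

    def group_edge_test(group_a, group_b):
        # Edge-wise two-sample t-test between two lists of time-series arrays,
        # after Fisher z-transforming the correlations.
        za = np.arctanh([edges(connectome(ts)) for ts in group_a])
        zb = np.arctanh([edges(connectome(ts)) for ts in group_b])
        return stats.ttest_ind(za, zb, axis=0)  # t and p value per edge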

    Real-Time Human Motion Capture with Multiple Depth Cameras

    Commonly used human motion capture systems require intrusive attachment of markers that are visually tracked with multiple cameras. In this work we present an efficient and inexpensive solution to markerless motion capture using only a few Kinect sensors. Unlike previous work on 3D pose estimation using a single depth camera, we relax constraints on the camera location and do not assume a cooperative user. We apply recent image segmentation techniques to depth images and use curriculum learning to train our system on purely synthetic data. Our method accurately localizes body parts without requiring an explicit shape model. The body joint locations are then recovered by combining evidence from multiple views in real time. We also introduce a dataset of ~6 million synthetic depth frames for pose estimation from multiple cameras and exceed state-of-the-art results on the Berkeley MHAD dataset.
    Comment: Accepted to Computer and Robot Vision 2016.
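    The multi-view combination step admits a small sketch. The fusion rule below (confidence-weighted averaging of per-camera 3D joint proposals in a shared world frame) is an illustrative assumption, not the paper's exact aggregation scheme.

    import numpy as np

    def fuse_views(proposals):
        # proposals: one (joints_xyz, confidence) pair per camera, where
        # joints_xyz is (n_joints, 3) in a common world frame and
        # confidence is (n_joints,) from the per-view body-part segmentation.
        xyz = np.stack([p for p, _ in proposals])      # (n_views, n_joints, 3)
        conf = np.stack([c for _, c in proposals])     # (n_views, n_joints)
        w = conf / conf.sum(axis=0, keepdims=True)     # normalize per joint
        return (xyz * w[..., None]).sum(axis=0)        # (n_joints, 3) fused joints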