
    Learning Discriminative Bayesian Networks from High-dimensional Continuous Neuroimaging Data

    Due to their causal semantics, Bayesian networks (BNs) have been widely employed to discover underlying data relationships in exploratory studies, such as brain research. Despite their success in modeling the probability distribution of variables, a BN is inherently a generative model, which is not necessarily discriminative. This may cause it to overlook subtle but critical network changes that are of investigative value across populations. In this paper, we propose to improve the discriminative power of BN models for continuous variables from two different perspectives, which leads to two general discriminative learning frameworks for Gaussian Bayesian networks (GBNs). In the first framework, we employ the Fisher kernel to bridge the generative models of GBNs and the discriminative classifiers of SVMs, and convert GBN parameter learning into Fisher kernel learning by minimizing a generalization error bound of SVMs. In the second framework, we employ the max-margin criterion and build it directly upon GBN models to explicitly optimize the classification performance of the GBNs. The advantages and disadvantages of the two frameworks are discussed and experimentally compared. Both demonstrate strong power in learning discriminative parameters of GBNs for neuroimaging-based brain network analysis, while maintaining reasonable representation capacity. The contributions of this paper also include a new Directed Acyclic Graph (DAG) constraint, with a theoretical guarantee, to ensure the graph validity of the GBN.
    Comment: 16 pages and 5 figures for the article (excluding appendix)
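To make the Fisher-kernel bridge concrete, here is a minimal sketch for a diagonal-covariance Gaussian rather than a full GBN with directed dependencies (that simplification is ours): the Fisher score of each sample with respect to the mean parameters feeds a linear kernel, which an SVM could then consume. All variable names are illustrative.

```python
import numpy as np

def fisher_score(x, mu, var):
    """Gradient of log N(x; mu, diag(var)) w.r.t. mu: (x - mu) / var."""
    return (x - mu) / var

def fisher_kernel(X, mu, var):
    """Linear kernel between the Fisher score vectors of all samples."""
    U = (X - mu) / var           # (n, d) matrix of Fisher scores
    return U @ U.T               # (n, n) Gram matrix, usable by a kernel SVM

# Toy data: two groups shifted along every dimension.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (5, 3)), rng.normal(2, 1, (5, 3))])
mu, var = X.mean(axis=0), X.var(axis=0)
K = fisher_kernel(X, mu, var)
```

The paper goes further by *learning* the generative parameters so that an SVM error bound on this kernel is minimized; the sketch only shows the generative-to-discriminative conversion step.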

    Multi-scale Transformer Network with Edge-aware Pre-training for Cross-Modality MR Image Synthesis

    Cross-modality magnetic resonance (MR) image synthesis can be used to generate missing modalities from given ones. Existing (supervised learning) methods often require a large number of paired multi-modal data to train an effective synthesis model. However, it is often challenging to obtain sufficient paired data for supervised training. In reality, we often have a small amount of paired data but a large amount of unpaired data. To take advantage of both paired and unpaired data, in this paper, we propose a Multi-scale Transformer Network (MT-Net) with edge-aware pre-training for cross-modality MR image synthesis. Specifically, an Edge-preserving Masked AutoEncoder (Edge-MAE) is first pre-trained in a self-supervised manner to simultaneously perform 1) image imputation for randomly masked patches in each image and 2) whole edge map estimation, which effectively learns both contextual and structural information. In addition, a novel patch-wise loss is proposed to enhance the performance of Edge-MAE by treating different masked patches differently according to the difficulty of their respective imputations. Building on this pre-training, in the subsequent fine-tuning stage, a Dual-scale Selective Fusion (DSF) module is designed (in our MT-Net) to synthesize missing-modality images by integrating multi-scale features extracted from the encoder of the pre-trained Edge-MAE. Further, this pre-trained encoder is also employed to extract high-level features from the synthesized image and the corresponding ground-truth image, which are required to be similar (consistent) during training. Experimental results show that our MT-Net achieves performance comparable to the competing methods even when using only 70% of all available paired data. Our code will be publicly available at https://github.com/lyhkevin/MT-Net.
    Comment: 13 pages, 15 figures
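A minimal numpy sketch of the two ingredients named above, random patch masking and a difficulty-weighted patch-wise loss. The weighting rule here (weight a patch by its own reconstruction error) is our stand-in; the paper's exact scheme, the edge-map branch, and the transformer itself are not reproduced.

```python
import numpy as np

def mask_patches(img, patch=4, ratio=0.5, rng=None):
    """Zero out a random fraction of non-overlapping patches; return the
    masked image and a boolean map of which patches were masked."""
    rng = rng if rng is not None else np.random.default_rng(0)
    h, w = img.shape
    gh, gw = h // patch, w // patch
    mask = rng.random((gh, gw)) < ratio
    out = img.copy()
    for i in range(gh):
        for j in range(gw):
            if mask[i, j]:
                out[i*patch:(i+1)*patch, j*patch:(j+1)*patch] = 0.0
    return out, mask

def patchwise_loss(pred, target, mask, patch=4):
    """Per-patch L1 error on masked patches, re-weighted so that harder
    patches (larger error -- our proxy for imputation difficulty)
    contribute more to the total."""
    errs = []
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j]:
                sl = np.s_[i*patch:(i+1)*patch, j*patch:(j+1)*patch]
                errs.append(np.abs(pred[sl] - target[sl]).mean())
    errs = np.array(errs)
    w = errs / (errs.sum() + 1e-8)   # difficulty weights sum to ~1
    return float((w * errs).sum())
```

A perfect reconstruction yields zero loss, while uniformly bad patches are penalized less than a few very hard ones, matching the stated intent of treating patches differently.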

    Sparse Multi-Response Tensor Regression for Alzheimer's Disease Study With Multivariate Clinical Assessments

    Alzheimer's disease (AD) is a progressive and irreversible neurodegenerative disorder that has recently seen a serious increase in the number of affected subjects. In the last decade, neuroimaging has been shown to be a useful tool for understanding AD and its prodromal stage, amnestic mild cognitive impairment (MCI). The majority of AD/MCI studies have focused on disease diagnosis, formulating the problem as classification with a binary outcome of AD/MCI versus healthy controls. Studies have recently emerged that associate image scans with continuous clinical scores, which are expected to contain richer information than a binary outcome. However, very few studies aim at modeling multiple clinical scores simultaneously, even though it is commonly believed that multivariate outcomes provide correlated and complementary information about the disease pathology. In this article, we propose a sparse multi-response tensor regression method to model multiple outcomes jointly as well as to model multiple voxels of an image jointly. The proposed method is particularly useful both for inferring clinical scores, and thus disease diagnosis, and for identifying brain subregions that are highly relevant to the disease outcomes. We conducted experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, and showed that the proposed method enhances performance and clearly outperforms the competing solutions.
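The joint-sparsity idea, one set of predictors shared across several clinical scores, can be sketched with scikit-learn's `MultiTaskLasso`, whose L2,1 penalty zeroes whole feature columns across all responses at once. This is a vector analogue only: the paper's tensor (voxel-structured) regression is not reproduced, and the toy data below is invented.

```python
import numpy as np
from sklearn.linear_model import MultiTaskLasso

# Toy stand-in: 100 "subjects", 10 imaging features, 3 clinical scores.
# Only the first 3 features truly drive all three scores.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
W = np.zeros((3, 10))
W[:, :3] = rng.normal(2.0, 0.5, (3, 3))
Y = X @ W.T + rng.normal(0, 0.1, (100, 3))

# The L2,1 penalty selects features jointly for all responses, mimicking
# the shared relevance of brain regions across multiple outcomes.
model = MultiTaskLasso(alpha=0.5).fit(X, Y)
zero_cols = np.flatnonzero(np.all(model.coef_ == 0, axis=0))
```

Features irrelevant to every score are dropped as whole columns, which is the multi-response counterpart of identifying disease-relevant subregions.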

    Contrastive Diffusion Model with Auxiliary Guidance for Coarse-to-Fine PET Reconstruction

    To obtain high-quality positron emission tomography (PET) scans while reducing radiation exposure to the human body, various approaches have been proposed to reconstruct standard-dose PET (SPET) images from low-dose PET (LPET) images. One widely adopted technique is generative adversarial networks (GANs), yet recently, diffusion probabilistic models (DPMs) have emerged as a compelling alternative due to their improved sample quality and higher log-likelihood scores compared to GANs. Despite this, DPMs suffer from two major drawbacks in real clinical settings, i.e., the computationally expensive sampling process and the insufficient preservation of correspondence between the conditioning LPET image and the reconstructed PET (RPET) image. To address these limitations, this paper presents a coarse-to-fine PET reconstruction framework that consists of a coarse prediction module (CPM) and an iterative refinement module (IRM). The CPM generates a coarse PET image via a deterministic process, and the IRM samples the residual iteratively. By delegating most of the computational overhead to the CPM, the overall sampling speed of our method can be significantly improved. Furthermore, two additional strategies, i.e., an auxiliary guidance strategy and a contrastive diffusion strategy, are proposed and integrated into the reconstruction process, which can enhance the correspondence between the LPET image and the RPET image, further improving clinical reliability. Extensive experiments on two human brain PET datasets demonstrate that our method outperforms state-of-the-art PET reconstruction methods. The source code is available at https://github.com/Show-han/PET-Reconstruction.
    Comment: Accepted and presented at MICCAI 2023. To be published in the proceedings
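The coarse-plus-residual split can be illustrated structurally: one cheap deterministic pass, then a short iterative loop that only corrects the residual. Both "networks" below are trivial placeholders (a smoothing filter and a fixed-step update), not the paper's trained CPM/IRM; the point is only how the computation is divided.

```python
import numpy as np

def coarse_predict(lpet):
    """CPM stand-in: a single deterministic pass (here, 1-D smoothing
    of each row in place of a learned coarse-prediction network)."""
    k = np.ones(3) / 3.0
    return np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"),
                               1, lpet)

def refine_residual(coarse, lpet, steps=4):
    """IRM stand-in: a few cheap iterations that estimate the residual
    between the coarse prediction and the target, conditioned on the
    LPET input. Each step is a placeholder for one sampling step of a
    learned residual diffusion model."""
    residual = np.zeros_like(coarse)
    for _ in range(steps):
        residual += 0.25 * (lpet - coarse - residual)
    return coarse + residual
```

Because the expensive work happens once in `coarse_predict` and the loop runs only a handful of steps, the overall cost stays close to a single forward pass, which is the speed argument the abstract makes.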

    Improving Sparsity and Modularity of High-Order Functional Connectivity Networks for MCI and ASD Identification

    High-order correlation has recently been proposed to model the brain functional connectivity network (FCN) for identifying neurological disorders, such as mild cognitive impairment (MCI) and autism spectrum disorder (ASD). In practice, the high-order FCN (HoFCN) can be derived from multiple low-order FCNs that are estimated separately in a series of sliding windows, and thus it in fact provides a way of integrating the dynamic information encoded in a sequence of low-order FCNs. However, the estimation of a low-order FCN may be unreliable because the limited number of volumes/samples in a sliding window can significantly reduce statistical power, which in turn affects the reliability of the resulting HoFCN. To address this issue, we propose to enhance the HoFCN based on a regularized learning framework. More specifically, we first calculate an initial HoFCN using a recently developed method based on maximum likelihood estimation. Then, we learn an optimal neighborhood network of the initially estimated HoFCN with sparsity and modularity priors as regularizers. Finally, based on the improved HoFCNs, we conduct experiments to identify MCI and ASD patients from their corresponding normal controls. Experimental results show that the proposed methods outperform the baseline methods, and the improved HoFCNs with the modularity prior consistently achieve the best performance.
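The low-order-to-high-order construction described above can be sketched directly: estimate a correlation matrix per sliding window, then correlate the temporal profiles of the ROI pairs. The maximum-likelihood initialization and the sparsity/modularity-regularized refinement are the paper's contributions and are not reproduced here.

```python
import numpy as np

def low_order_fcns(ts, win=30, step=5):
    """One low-order FCN (ROI-by-ROI correlation matrix) per sliding
    window over the time series `ts` of shape (timepoints, n_roi)."""
    n_t, n_roi = ts.shape
    mats = [np.corrcoef(ts[s:s + win].T)
            for s in range(0, n_t - win + 1, step)]
    return np.stack(mats)                      # (n_win, n_roi, n_roi)

def high_order_fcn(lo):
    """HoFCN: each node is an ROI pair; each edge is the correlation
    between two pairs' dynamic connectivity series across windows."""
    n_win, n_roi, _ = lo.shape
    iu = np.triu_indices(n_roi, k=1)
    series = lo[:, iu[0], iu[1]]               # (n_win, n_pairs)
    return np.corrcoef(series.T)               # (n_pairs, n_pairs)
```

With few volumes per window, each entry of `low_order_fcns` is a noisy estimate, which is exactly the reliability problem the regularized learning framework is designed to correct.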

    Brain atlas fusion from high-thickness diagnostic magnetic resonance images by learning-based super-resolution

    It is fundamentally important to fuse the brain atlas from magnetic resonance (MR) images for many imaging-based studies. Most existing works focus on fusing atlases from high-quality MR images. However, for low-quality diagnostic images (i.e., with high inter-slice thickness), the problem of atlas fusion has not yet been addressed. In this paper, we intend to fuse the brain atlas from the high-thickness diagnostic MR images that are prevalent in clinical routine. The main idea of our work is to extend conventional groupwise registration by incorporating a novel super-resolution strategy. The contribution of the proposed super-resolution framework is twofold. First, each high-thickness subject image is reconstructed to be isotropic by patch-based sparsity learning. Then, the reconstructed isotropic image is enhanced for better quality through a random-forest-based regression model. In this way, the images obtained by the super-resolution strategy can be fused together by applying the groupwise registration method to construct the required atlas. Our experiments show that the proposed framework can effectively solve the problem of atlas fusion from low-quality brain MR images.

    Multimodal classification of Alzheimer's disease and mild cognitive impairment

    Effective and accurate diagnosis of Alzheimer's disease (AD), as well as its prodromal stage (i.e., mild cognitive impairment (MCI)), has recently attracted increasing attention. So far, multiple biomarkers have been shown to be sensitive to the diagnosis of AD and MCI, e.g., structural MR imaging (MRI) for brain atrophy measurement, functional imaging (e.g., FDG-PET) for hypometabolism quantification, and cerebrospinal fluid (CSF) for quantification of specific proteins. However, most existing research focuses on only a single modality of biomarkers for the diagnosis of AD and MCI, although recent studies have shown that different biomarkers may provide complementary information. In this paper, we propose to combine three modalities of biomarkers, i.e., MRI, FDG-PET, and CSF biomarkers, to discriminate between AD (or MCI) and healthy controls, using a kernel combination method. Specifically, ADNI baseline MRI, FDG-PET, and CSF data from 51 AD patients, 99 MCI patients (including 43 MCI converters who had converted to AD within 18 months and 56 MCI non-converters who had not), and 52 healthy controls are used for development and validation of our proposed multimodal classification method. In particular, for each MR or FDG-PET image, 93 volumetric features are extracted from the 93 regions of interest (ROIs) automatically labeled by an atlas warping algorithm. For the CSF biomarkers, their original values are directly used as features. Then, a linear support vector machine (SVM) is adopted to evaluate the classification accuracy, using 10-fold cross-validation. As a result, for classifying AD from healthy controls, we achieve a classification accuracy of 93.2% (with a sensitivity of 93% and a specificity of 93.3%) when combining all three modalities of biomarkers, and only 86.5% when using even the best individual modality.
Similarly, for classifying MCI from healthy controls, we achieve a classification accuracy of 76.4% (with a sensitivity of 81.8% and a specificity of 66%) for our combined method, and only 72% when using even the best individual modality. Further analysis of the MCI sensitivity of our combined method indicates that 91.5% of MCI converters and 73.4% of MCI non-converters are correctly classified. Moreover, we also evaluate the classification performance when employing a feature selection method to select the most discriminative MR and FDG-PET features. Again, our combined method shows considerably better performance, compared to using an individual modality of biomarkers.
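The kernel combination step can be sketched as a weighted sum of per-modality Gram matrices fed to a precomputed-kernel SVM. The toy features, dimensions, and fixed weights below are invented for illustration; the paper's procedure for choosing the combination weights is not reproduced.

```python
import numpy as np
from sklearn.svm import SVC

def linear_kernel(X):
    """Gram matrix of a single modality's features."""
    return X @ X.T

def combine_kernels(kernels, weights):
    """Convex combination of per-modality Gram matrices."""
    return sum(w * K for w, K in zip(weights, kernels))

# Hypothetical two-class data: 93 MRI features, 93 PET features, 3 CSF values.
rng = np.random.default_rng(0)
y = np.array([0] * 20 + [1] * 20)
mri = rng.normal(y[:, None] * 1.5, 1.0, (40, 93))
pet = rng.normal(y[:, None] * 1.0, 1.0, (40, 93))
csf = rng.normal(y[:, None] * 0.5, 1.0, (40, 3))

K = combine_kernels([linear_kernel(m) for m in (mri, pet, csf)],
                    [0.5, 0.3, 0.2])          # weights set by hand here
clf = SVC(kernel="precomputed").fit(K, y)
acc = clf.score(K, y)                          # training accuracy only
```

Because each modality contributes through its own kernel, modalities with different feature counts and scales (93 image features vs. 3 CSF values) combine cleanly, which is the practical appeal of the kernel-combination approach.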

    Discriminant analysis of longitudinal cortical thickness changes in Alzheimer's disease using dynamic and network features

    Neuroimaging measures from magnetic resonance (MR) imaging, such as cortical thickness, have been playing an increasingly important role in the search for biomarkers of Alzheimer's disease (AD). Recent studies show that AD, mild cognitive impairment (MCI), and normal control (NC) subjects can be distinguished with relatively high accuracy using baseline cortical thickness. With the increasing availability of large longitudinal datasets, it also becomes possible to study longitudinal changes of cortical thickness and their correlation with the development of pathology in AD. In this study, the longitudinal cortical thickness changes of 152 subjects from four clinical groups (AD, NC, Progressive-MCI, and Stable-MCI), selected from the Alzheimer's Disease Neuroimaging Initiative (ADNI), are measured by our recently developed 4D (spatial+temporal) thickness measuring algorithm. It is found that the four clinical groups demonstrate very similar spatial distributions of gray matter (GM) loss on the cortex. To fully utilize the longitudinal information and better discriminate the subjects from the four groups, especially between Stable-MCI and Progressive-MCI, three categories of features are extracted for each subject: (1) static cortical thickness measures computed from the baseline and endline, (2) cortical thinning dynamics, such as the thinning speed (mm/year) and the thinning ratio (endline/baseline), and (3) network features computed from the brain network constructed from the correlations between the longitudinal thickness changes of different ROIs. By combining the complementary information provided by all three categories of features, two classifiers are trained, one to diagnose AD and one to predict conversion to AD in MCI subjects. In leave-one-out cross-validation, the proposed method distinguishes AD patients from NC with an accuracy of 96.1%, and detects 81.7% (AUC = 0.875) of the MCI converters 6 months ahead of their conversion to AD.
Also, by analyzing the brain network built from longitudinal cortical thickness changes, a significant decrease (P < 0.02) of the network clustering coefficient (associated with the development of AD pathology) is found in the Progressive-MCI group, which indicates degenerated wiring efficiency of the brain network due to AD. More interestingly, a decrease of the network clustering coefficient in the olfactory cortex region is also found in the AD patients, suggesting olfactory dysfunction. Although the smell identification test is not performed in ADNI, this finding is consistent with other AD-related olfactory studies.
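A minimal sketch of the network analysis described above, assuming a simple construction we chose ourselves: threshold the correlations between ROIs' longitudinal thickness-change series into a binary graph, then compute the average local clustering coefficient (the graph metric the study tracks).

```python
import numpy as np

def clustering_coefficient(adj):
    """Average local clustering coefficient of a binary undirected graph:
    for each node, the fraction of its neighbor pairs that are connected."""
    adj = adj.astype(int).copy()
    np.fill_diagonal(adj, 0)
    coeffs = []
    for i in range(adj.shape[0]):
        nbrs = np.flatnonzero(adj[i])
        k = len(nbrs)
        if k < 2:
            coeffs.append(0.0)
            continue
        links = adj[np.ix_(nbrs, nbrs)].sum() / 2
        coeffs.append(links / (k * (k - 1) / 2))
    return float(np.mean(coeffs))

def thickness_change_network(changes, thresh=0.5):
    """Binary network connecting ROIs whose longitudinal thickness-change
    series (rows of `changes`: timepoints x n_roi) correlate above
    `thresh` in absolute value; the threshold is an arbitrary choice."""
    c = np.corrcoef(changes.T)
    np.fill_diagonal(c, 0)
    return (np.abs(c) > thresh).astype(int)
```

A drop in this coefficient means an ROI's correlated neighbors are themselves less inter-correlated, which is the reduced local wiring efficiency the study reports in the Progressive-MCI group and the olfactory cortex.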