
    Multi-Level Canonical Correlation Analysis for Standard-Dose PET Image Estimation

    Positron emission tomography (PET) images are widely used in many clinical applications, such as tumor detection and brain disorder diagnosis. To obtain PET images of diagnostic quality, a sufficient amount of radioactive tracer has to be injected into a living body, which inevitably increases the risk of radiation exposure. On the other hand, if the tracer dose is considerably reduced, the quality of the resulting images is significantly degraded. It is therefore of great interest to estimate a standard-dose PET (S-PET) image from a low-dose one, reducing the risk of radiation exposure while preserving image quality. This may be achieved by mapping both standard-dose and low-dose PET data into a common space and then performing patch-based sparse representation. However, a one-size-fits-all common space built from all training patches is unlikely to be optimal for each target S-PET patch, which limits the estimation accuracy. In this paper, we propose a data-driven multi-level Canonical Correlation Analysis (mCCA) scheme to solve this problem. Specifically, a subset of the training data that is most useful for estimating a target S-PET patch is identified at each level and then used in the next level to update the common space and improve the estimation. Additionally, we use multi-modal magnetic resonance images to further improve the estimation with complementary information. Validations on phantom and real human brain datasets show that our method effectively estimates S-PET images and well preserves critical clinical quantification measures, such as the standard uptake value.
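    To make the common-space idea concrete, here is a minimal sketch (not the paper's mCCA pipeline) that fits a CCA model on paired low-dose/standard-dose patch features and estimates a target S-PET patch by averaging its nearest training neighbors in the shared space; the patch dimensionality, neighbor count, and synthetic data are all assumptions, and simple averaging stands in for the patch-based sparse representation.

```python
# Minimal sketch of CCA-based common-space patch estimation (illustrative only;
# not the paper's multi-level mCCA pipeline). Patch vectors are random stand-ins.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_train, dim = 500, 125                               # e.g. 5x5x5 patches flattened (assumed size)
X_low = rng.normal(size=(n_train, dim))               # low-dose training patches
X_std = X_low @ rng.normal(size=(dim, dim)) * 0.1 + rng.normal(size=(n_train, dim))  # paired S-PET patches

cca = CCA(n_components=20, max_iter=1000)
cca.fit(X_low, X_std)                                 # learn a common space from paired patches

def estimate_std_patch(low_patch, k=10):
    """Estimate a standard-dose patch from the k nearest training pairs in the common space."""
    z_query = cca.transform(low_patch[None, :])       # project the query low-dose patch
    z_train = cca.transform(X_low)                    # project the training low-dose patches
    nearest = np.argsort(np.linalg.norm(z_train - z_query, axis=1))[:k]
    return X_std[nearest].mean(axis=0)                # simple average instead of sparse coding

pred = estimate_std_patch(rng.normal(size=dim))
print(pred.shape)
```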

    A cross-scanner and cross-tracer deep learning method for the recovery of standard-dose imaging quality from low-dose PET

    PURPOSE: A critical bottleneck for the credibility of artificial intelligence (AI) is replicating its results across the diversity of clinical practice. We aimed to develop an AI that can be independently applied to recover high-quality imaging from low-dose scans on different scanners and tracers. METHODS: Brain [(18)F]FDG PET imaging of 237 patients scanned with one scanner was used for the development of the AI technology. The developed algorithm was then tested on [(18)F]FDG PET images of 45 patients scanned with three different scanners, [(18)F]FET PET images of 18 patients scanned with two different scanners, as well as [(18)F]Florbetapir images of 10 patients. A conditional generative adversarial network (GAN) was customized for cross-scanner and cross-tracer optimization. Three nuclear medicine physicians independently assessed the utility of the results in a clinical setting. RESULTS: The improvement achieved by AI recovery correlated significantly with the baseline image quality indicated by the structural similarity index measure (SSIM) (r = −0.71, p < 0.05) and the normalized dose acquisition (r = −0.60, p < 0.05). Our cross-scanner and cross-tracer AI methodology showed utility based on both physical and clinical image assessment (p < 0.05). CONCLUSION: Deep learning developed for extensible application on unknown scanners and tracers may improve the trustworthiness and clinical acceptability of AI-based dose reduction. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s00259-021-05644-1.
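    The reported relationship between baseline quality and recovery gain can be illustrated with the following sketch, which computes SSIM on synthetic image pairs and correlates baseline SSIM with per-case improvement; the data, noise levels, and placeholder "recovery" step are assumptions, and this is not the study's GAN or evaluation code.

```python
# Sketch: correlate baseline image quality (SSIM of low-dose vs. full-dose) with the
# improvement after recovery. Synthetic data; not the study's GAN or evaluation pipeline.
import numpy as np
from scipy.stats import pearsonr
from skimage.metrics import structural_similarity as ssim

rng = np.random.default_rng(1)
baseline_ssim, improvement = [], []
for _ in range(45):                                   # 45 test cases (as in the FDG test set)
    full = rng.random((64, 64))                       # stand-in standard-dose slice
    low = full + rng.normal(scale=rng.uniform(0.05, 0.3), size=full.shape)  # noisy low-dose
    recovered = 0.5 * (low + full)                    # placeholder for the AI-recovered image
    s_low = ssim(full, low, data_range=full.max() - full.min())
    s_rec = ssim(full, recovered, data_range=full.max() - full.min())
    baseline_ssim.append(s_low)
    improvement.append(s_rec - s_low)

r, p = pearsonr(baseline_ssim, improvement)
print(f"r = {r:.2f}, p = {p:.3g}")                    # a negative r mirrors the reported trend
```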

    Learning Algorithms for Fat Quantification and Tumor Characterization

    Obesity is one of the most prevalent health conditions. About 30% of the world's and over 70% of the United States' adult populations are either overweight or obese, causing an increased risk for cardiovascular diseases, diabetes, and certain types of cancer. Among all cancers, lung cancer is the leading cause of death, whereas pancreatic cancer has the poorest prognosis among all major cancers. Early diagnosis of these cancers can save lives. This dissertation contributes towards the development of computer-aided diagnosis tools to aid clinicians in establishing the quantitative relationship between obesity and cancers. With respect to obesity and metabolism, the first part of the dissertation focuses on the segmentation and quantification of white and brown adipose tissue. For cancer diagnosis, we perform analysis on two important cases: lung cancer and Intraductal Papillary Mucinous Neoplasm (IPMN), a precursor to pancreatic cancer. This dissertation proposes an automatic body-region detection method trained with only a single example. A new fat quantification approach is then proposed, based on geometric and appearance characteristics. For the segmentation of brown fat, a PET-guided CT co-segmentation method is presented. With different variants of Convolutional Neural Networks (CNNs), supervised learning strategies are proposed for the automatic diagnosis of lung nodules and IPMN. To address the unavailability of the large number of labeled examples required for training, unsupervised learning approaches for cancer diagnosis without explicit labeling are also proposed. We evaluate the proposed approaches (both supervised and unsupervised) on two different tumor diagnosis challenges, lung and pancreas, with 1018 CT and 171 MRI scans, respectively. The proposed segmentation, quantification, and diagnosis approaches explore the important adiposity-cancer association and help pave the way towards improved diagnostic decision making in routine clinical practice.
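    As a rough illustration of the kind of supervised CNN classifier mentioned above (a generic sketch, not one of the dissertation's architectures), the snippet below defines a small 3D convolutional network in PyTorch for binary lesion classification; the layer sizes and the 32x32x32 input patch shape are assumptions.

```python
# Minimal 3D CNN sketch for binary lesion classification (illustrative; not the
# dissertation's proposed networks). Assumes 32x32x32 CT patches as input.
import torch
import torch.nn as nn

class Small3DCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                            # x: (batch, 1, 32, 32, 32)
        return self.classifier(self.features(x).flatten(1))

model = Small3DCNN()
logits = model(torch.randn(4, 1, 32, 32, 32))        # dummy batch of CT patches
print(logits.shape)                                  # torch.Size([4, 2])
```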

    Identification of Causal Relationship between Amyloid-beta Accumulation and Alzheimer's Disease Progression via Counterfactual Inference

    Alzheimer's disease (AD) is a neurodegenerative disorder that begins with amyloidosis, followed by neuronal loss and deterioration in structure, function, and cognition. The accumulation of amyloid-beta in the brain, measured through 18F-florbetapir (AV45) positron emission tomography (PET) imaging, has been widely used for early diagnosis of AD. However, the relationship between amyloid-beta accumulation and AD pathophysiology remains unclear, and causal inference approaches are needed to uncover how amyloid-beta levels impact AD development. In this paper, we propose a graph varying coefficient neural network (GVCNet) for estimating the individual treatment effect with continuous treatment levels using a graph convolutional neural network. We highlight the potential of causal inference approaches, including GVCNet, for measuring the regional causal connections between amyloid-beta accumulation and AD pathophysiology, which may serve as a robust tool for early diagnosis and tailored care.
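    For readers unfamiliar with graph convolutions, the following sketch shows the basic propagation step underlying GCN-style models such as GVCNet; it is illustrative only, with a random region-connectivity graph and random per-region amyloid features standing in for real data, and it is not the paper's GVCNet.

```python
# Sketch of the basic graph-convolution step underlying GCN-style models such as GVCNet
# (illustrative only; not the paper's GVCNet). Nodes stand for brain regions with
# regional amyloid (AV45) features; the adjacency matrix is a placeholder assumption.
import numpy as np

rng = np.random.default_rng(2)
n_regions, n_features, n_hidden = 90, 4, 16          # assumed atlas size and feature width

A = (rng.random((n_regions, n_regions)) < 0.1).astype(float)
A = np.maximum(A, A.T)                               # symmetric region-connectivity graph
A_hat = A + np.eye(n_regions)                        # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt             # symmetric normalization

X = rng.normal(size=(n_regions, n_features))         # per-region features (e.g. AV45 uptake)
W = rng.normal(size=(n_features, n_hidden)) * 0.1

H = np.maximum(A_norm @ X @ W, 0.0)                  # one GCN layer: propagate, project, ReLU
print(H.shape)                                       # (90, 16) hidden region embeddings
```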

    Novel Statistical Learning Methods for Multi-Modality Heterogeneous Data Fusion in Health Care Applications

    With the development of computing and sensing technology, rich datasets have become available in many fields, such as health care, manufacturing, and transportation, to name a few. Often, these data come from multiple heterogeneous sources or modalities, a common phenomenon in health care systems. While multi-modality data fusion is a promising research area, health care applications pose several special challenges. (1) Integrating biological and statistical models is difficult. (2) It is commonplace that data from some modalities are not available for every patient due to cost, accessibility, and other reasons, resulting in a special missing-data structure in which entire modalities may be missing in "blocks"; training a predictive model on such a dataset poses a significant challenge to statistical learning. (3) Different modalities may contain different aspects of information about the response, which existing studies do not adequately address. My dissertation develops new statistical learning models to address each of these challenges, together with application case studies using real health care datasets, presented in Chapters 2, 3, and 4, respectively. Collectively, this dissertation is expected to provide a new set of statistical learning models, algorithms, and theory for multi-modality heterogeneous data fusion, driven by the unique challenges in this area. Applying these new methods to important medical problems using real-world datasets is expected to provide solutions to those problems and thereby contribute to the application domains.
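    The block-missing structure described above can be pictured with a small sketch: each patient either has an entire modality or lacks it as a block, rather than missing scattered entries. The modality names, feature widths, and missingness rate below are placeholder assumptions, and the sketch does not implement the dissertation's fusion models.

```python
# Sketch of the "block-missing" multi-modality structure described above (illustrative
# only; not the dissertation's fusion models). Each patient either has an entire
# modality or lacks it as a block, rather than missing scattered individual entries.
import numpy as np

rng = np.random.default_rng(3)
n_patients = 8
dims = {"MRI": 5, "PET": 3, "clinical": 4}           # assumed feature widths per modality

data, observed = {}, {}
for name, d in dims.items():
    data[name] = rng.normal(size=(n_patients, d))
    observed[name] = rng.random(n_patients) < 0.7    # which patients have this modality
    data[name][~observed[name]] = np.nan             # the whole block is missing

for i in range(n_patients):
    have = [m for m in dims if observed[m][i]]
    print(f"patient {i}: available modalities = {have}")
```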

    Longitudinal Morphometric Study of Genetic Influence of APOE e4 Genotype on Hippocampal Atrophy - An N=1925 Surface-based ADNI Study

    The apolipoprotein E (APOE) e4 genotype is the most prevalent known genetic risk factor for Alzheimer's disease (AD). In this paper, we examined the longitudinal effect of APOE e4 on hippocampal morphometry in the Alzheimer's Disease Neuroimaging Initiative (ADNI). In general, hippocampal atrophy is more likely to occur in AD patients who carry the APOE e4 allele than in noncarriers. Moreover, brain structure and function depend on APOE genotype not only in Alzheimer's disease patients but also in healthy elderly individuals, so APOE genotyping is considered critical in clinical trials of Alzheimer's disease. We studied a large sample of elderly participants using a new automated surface registration system based on surface conformal parameterization with holomorphic 1-forms and surface fluid registration. In this system, hippocampal surfaces were automatically segmented and reconstructed from MR images at multiple time points, including 6-month, 1-year, and 2-year follow-ups. High-order correspondences between pairs of hippocampal surfaces were established with a novel inverse-consistent surface fluid registration method. At each time point, using Hotelling's T^2 test, we found significant morphological deformation in APOE e4 carriers relative to noncarriers in the entire cohort as well as in the non-demented (pooled MCI and control) subjects, affecting the left hippocampus more than the right, and this effect was more pronounced in e4 homozygotes than heterozygotes.
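    The group comparison relies on Hotelling's T^2 statistic; a minimal two-sample implementation with the standard F approximation is sketched below on synthetic deformation features, purely for illustration rather than as the study's surface-registration pipeline.

```python
# Sketch of a two-sample Hotelling's T^2 test, the statistic used above to compare
# surface deformation features between APOE e4 carriers and noncarriers (illustrative;
# synthetic data, not the study's surface-registration pipeline).
import numpy as np
from scipy.stats import f

def hotelling_t2(x, y):
    """Two-sample Hotelling's T^2 with an F-distribution p-value."""
    n1, p = x.shape
    n2, _ = y.shape
    diff = x.mean(axis=0) - y.mean(axis=0)
    s_pooled = ((n1 - 1) * np.cov(x, rowvar=False) +
                (n2 - 1) * np.cov(y, rowvar=False)) / (n1 + n2 - 2)
    t2 = (n1 * n2) / (n1 + n2) * diff @ np.linalg.solve(s_pooled, diff)
    f_stat = (n1 + n2 - p - 1) / ((n1 + n2 - 2) * p) * t2
    p_val = f.sf(f_stat, p, n1 + n2 - p - 1)
    return t2, p_val

rng = np.random.default_rng(4)
carriers = rng.normal(0.3, 1.0, size=(40, 3))        # e.g. per-vertex deformation features
noncarriers = rng.normal(0.0, 1.0, size=(60, 3))
print(hotelling_t2(carriers, noncarriers))
```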

    First order algorithms in variational image processing

    Variational methods in imaging are nowadays developing towards a quite universal and flexible tool, allowing for highly successful approaches on tasks like denoising, deblurring, inpainting, segmentation, super-resolution, disparity, and optical flow estimation. The overall structure of such approaches is of the form $\mathcal{D}(Ku) + \alpha \mathcal{R}(u) \rightarrow \min_u$, where the functional $\mathcal{D}$ is a data fidelity term, also depending on some input data $f$ and measuring the deviation of $Ku$ from it, and $\mathcal{R}$ is a regularization functional. Moreover, $K$ is an (often linear) forward operator modeling the dependence of data on an underlying image, and $\alpha$ is a positive regularization parameter. While $\mathcal{D}$ is often smooth and (strictly) convex, current practice almost exclusively uses nonsmooth regularization functionals. The majority of successful techniques use nonsmooth and convex functionals like the total variation and generalizations thereof, or $\ell_1$-norms of coefficients arising from scalar products with some frame system. The efficient solution of such variational problems in imaging demands appropriate algorithms. Taking into account the specific structure as a sum of two very different terms to be minimized, splitting algorithms are a quite canonical choice. Consequently, this field has revived interest in techniques like operator splittings and augmented Lagrangians. Here we provide an overview of methods currently developed and recent results, as well as computational studies comparing different methods and illustrating their success in applications. Comment: 60 pages, 33 figures
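    A minimal instance of the splitting algorithms surveyed here is forward-backward splitting (ISTA) applied to $\mathcal{D}(Ku) = \tfrac{1}{2}\|Ku - f\|_2^2$ with $\mathcal{R}(u) = \|u\|_1$; the sketch below uses a random $K$, $f$, and $\alpha$ purely as placeholders.

```python
# Sketch of a first-order splitting (forward-backward / ISTA) for the model above with
# D(Ku) = 0.5*||Ku - f||^2 and R(u) = ||u||_1 (a minimal instance of the class of
# algorithms surveyed; K, f, and alpha are random placeholders).
import numpy as np

rng = np.random.default_rng(5)
m, n = 40, 100
K = rng.normal(size=(m, n))                          # linear forward operator
f = rng.normal(size=m)                               # observed data
alpha = 0.5                                          # regularization parameter
tau = 1.0 / np.linalg.norm(K, 2) ** 2                # step size <= 1/||K||^2 for convergence

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

u = np.zeros(n)
for _ in range(500):
    grad = K.T @ (K @ u - f)                         # forward (gradient) step on the smooth term
    u = soft_threshold(u - tau * grad, tau * alpha)  # backward (proximal) step on alpha*||u||_1

print("objective:", 0.5 * np.linalg.norm(K @ u - f) ** 2 + alpha * np.abs(u).sum())
```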