
    PND-Net: Physics based Non-local Dual-domain Network for Metal Artifact Reduction

    Metal artifacts caused by the presence of metallic implants severely degrade reconstructed computed tomography (CT) image quality, affecting clinical diagnosis and reducing the accuracy of organ delineation and dose calculation in radiotherapy. Recently, deep learning methods in the sinogram and image domains have been rapidly applied to the metal artifact reduction (MAR) task. Supervised dual-domain methods perform well on synthesized data, while unsupervised methods trained on unpaired data generalize better to clinical data. However, most existing methods aim to restore the corrupted sinogram within the metal trace, which essentially removes beam-hardening artifacts but ignores other components of metal artifacts, such as scatter, the non-linear partial volume effect and noise. In this paper, we mathematically derive a physical property of metal artifacts, verify it via Monte Carlo (MC) simulation, and propose a novel physics-based non-local dual-domain network (PND-Net) for MAR in CT imaging. Specifically, we design a novel non-local sinogram decomposition network (NSD-Net) to acquire the weighted artifact component, and propose an image restoration network (IR-Net) to reduce the residual and secondary artifacts in the image domain. To improve the generalization and robustness of our method on clinical CT images, we employ a trainable fusion network (F-Net) in the artifact synthesis path to achieve unpaired learning. Furthermore, we design an internal consistency loss to preserve the integrity of anatomical structures in the image domain, and introduce the linear interpolation sinogram as prior knowledge to guide sinogram decomposition. Extensive experiments on simulated and clinical data demonstrate that our method outperforms state-of-the-art MAR methods. Comment: 19 pages, 8 figures
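The linear interpolation sinogram used above as prior knowledge is the classic LI-MAR baseline: samples inside the metal trace are discarded and re-estimated from neighbouring detector readings along each projection row. A minimal NumPy sketch (the function name and array layout are illustrative, not taken from the paper):

```python
import numpy as np

def li_sinogram(sinogram, metal_trace):
    """Replace sinogram samples inside the metal trace by linear
    interpolation along each detector row (the classic LI-MAR prior).

    sinogram    : 2-D array, projections x detector bins
    metal_trace : boolean mask of the same shape, True inside the trace
    """
    corrected = sinogram.copy()
    cols = np.arange(sinogram.shape[1])
    for i, row in enumerate(corrected):
        mask = metal_trace[i]
        if mask.any() and not mask.all():
            # Interpolate corrupted bins from the uncorrupted neighbours.
            row[mask] = np.interp(cols[mask], cols[~mask], row[~mask])
    return corrected
```

On a sinogram whose rows vary linearly across the trace, the interpolation recovers the missing samples exactly; on real data it removes only the beam-hardening component, which is the limitation the paper's decomposition network addresses.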

    Learning Disentangled Representations in the Imaging Domain

    Disentangled representation learning has been proposed as an approach to learning general representations even in the absence of, or with limited, supervision. A good general representation can be fine-tuned for new target tasks using modest amounts of data, or used directly in unseen domains, achieving remarkable performance in the corresponding task. This alleviation of the data and annotation requirements offers tantalising prospects for applications in computer vision and healthcare. In this tutorial paper, we motivate the need for disentangled representations, present key theory, and detail practical building blocks and criteria for learning such representations. We discuss applications in medical imaging and computer vision, emphasising choices made in exemplar key works. We conclude by presenting remaining challenges and opportunities. Comment: Submitted. This paper follows a tutorial style but also surveys a considerable number of works (more than 200 citations).

    Pattern classification approaches for breast cancer identification via MRI: state-of-the-art and vision for the future

    Mining algorithms for Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI) of breast tissue are discussed. The algorithms are based on recent advances in multidimensional signal processing and aim to advance current state-of-the-art computer-aided detection and analysis of breast tumours when these are observed at various states of development. The topics discussed include image feature extraction, information fusion using radiomics, multi-parametric computer-aided classification and diagnosis using information fusion of tensorial datasets, as well as Clifford algebra based classification approaches and convolutional neural network deep learning methodologies. The discussion also extends to semi-supervised deep learning and self-supervised strategies, as well as generative adversarial networks and algorithms using generated confrontational learning approaches. In order to address the problem of weakly labelled tumour images, generative adversarial deep learning strategies are considered for the classification of different tumour types. The proposed data fusion approaches provide a novel Artificial Intelligence (AI) based framework for more robust image registration that can potentially advance the early identification of heterogeneous tumour types, even when the associated imaged organs are registered as separate entities embedded in more complex geometric spaces. Finally, the general structure of a high-dimensional medical imaging analysis platform that is based on multi-task detection and learning is proposed as a way forward. The proposed algorithm makes use of novel loss functions that form the building blocks for a generated confrontation learning methodology that can be used for tensorial DCE-MRI. Since some of the approaches discussed are also based on time-lapse imaging, conclusions on the rate of proliferation of the disease can be made possible.
The proposed framework can potentially reduce the costs associated with the interpretation of medical images by providing automated, faster and more consistent diagnosis

    Metabolic Imaging of Early Radiation-Induced Lung Injury Using Hyperpolarized 13C-Pyruvate in Rodent Lungs

    Lung cancer is the leading cause of cancer-related death. Radiation therapy is a prominent treatment method but can lead to adverse consequences. Radiation-Induced Lung Injury (RILI) is the primary adverse consequence that limits further radiation therapy and develops in 5-37% of treated patients. RILI proceeds in two distinct phases: a) early and reversible Radiation Pneumonitis (RP), and b) late and irreversible radiation fibrosis. Clinically, Dose Volume Histogram (DVH) parameters derived at the radiation therapy planning stage are used to determine the outcome and severity of RP, but have been demonstrated to possess very low predictive power. Computed Tomography (CT) is the most commonly used modality for imaging RP, but often detects only very late RP, leaving little room for intervention to halt progression to irreversible radiation fibrosis. Early detection of RP using imaging may allow for better interventional treatment and management of the disease and its associated symptoms. Improvements in Dynamic Nuclear Polarization (DNP) technology have led to advances in hyperpolarized carbon-13 magnetic resonance imaging (13C-MRI). In this thesis, we present an investigation of early detection of RP with 13C-MRI in an animal model using hyperpolarized 13C-pyruvate. A pilot study demonstrated the proof of concept along with qualitative histological confirmation. 13C-MRI and histology data were collected 2 weeks post irradiation of the whole thorax in rodents. In the subsequent study, regional and longitudinal 13C-MRI and quantitative histology data were analyzed to demonstrate the early organ-wide response of RP. These data were collected on days 5, 10, 15 and 25 post conformal irradiation of the right rodent lung.
    Finally, we demonstrate a novel approach to mapping pH using hyperpolarized 13C-bicarbonate with a spiral Iterative Decomposition of water and fat with Echo Asymmetry and Least squares estimation (IDEAL) pulse sequence. This approach is validated against Chemical Shift Imaging (CSI) pH measurements and standard pH measurements in phantoms containing hyperpolarized 13C-bicarbonate. pH mapping may play a role in the staging and therapeutic intervention of cancer.
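Bicarbonate-based pH mapping of the kind described above conventionally applies the Henderson-Hasselbalch equation voxel-wise to the measured 13C-bicarbonate and 13CO2 signals. A minimal sketch, assuming the commonly quoted apparent pKa of about 6.17 for the HCO3-/CO2 pair at body temperature (the function name and value are illustrative, not from the thesis):

```python
import numpy as np

PKA_BICARBONATE = 6.17  # assumed apparent pKa of HCO3-/CO2 at ~37 degrees C

def ph_map(hco3_signal, co2_signal, pka=PKA_BICARBONATE):
    """Voxel-wise pH from hyperpolarized 13C-bicarbonate and 13CO2 signals
    via the Henderson-Hasselbalch equation: pH = pKa + log10([HCO3-]/[CO2])."""
    ratio = np.asarray(hco3_signal, dtype=float) / np.asarray(co2_signal, dtype=float)
    return pka + np.log10(ratio)
```

For example, a voxel with a 10:1 bicarbonate-to-CO2 signal ratio maps to pH 7.17 under this assumed pKa, which is why the technique is sensitive to the mildly acidic extracellular pH of tumours.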

    AI in Medical Imaging Informatics: Current Challenges and Future Directions

    This paper reviews state-of-the-art research solutions across the spectrum of medical imaging informatics, discusses clinical translation, and provides future directions for advancing clinical practice. More specifically, it summarizes advances in medical imaging acquisition technologies for different modalities, highlighting the necessity for efficient medical data management strategies in the context of AI in big healthcare data analytics. It then provides a synopsis of contemporary and emerging algorithmic methods for disease classification and organ/tissue segmentation, focusing on AI and deep learning architectures that have already become the de facto approach. The clinical benefits of in-silico modelling advances linked with evolving 3D reconstruction and visualization applications are further documented. In conclusion, integrative analytics approaches driven by the associated research branches highlighted in this study promise to revolutionize imaging informatics as known today across the healthcare continuum, for both radiology and digital pathology applications. The latter is projected to enable informed, more accurate diagnosis, timely prognosis, and effective treatment planning, underpinning precision medicine.

    Pulmonary Image Segmentation and Registration Algorithms: Towards Regional Evaluation of Obstructive Lung Disease

    Pulmonary imaging, including pulmonary magnetic resonance imaging (MRI) and computed tomography (CT), provides a way to sensitively and regionally measure spatially heterogeneous lung structural-functional abnormalities. These unique imaging biomarkers offer the potential for better understanding pulmonary disease mechanisms, monitoring disease progression and response to therapy, and developing novel treatments for improved patient care. To generate these regional lung structure-function measurements and enable broad clinical applications of quantitative pulmonary MRI and CT biomarkers, as a first step, accurate, reproducible and rapid lung segmentation and registration methods are required. In this regard, we first developed a 1H MRI lung segmentation algorithm that employs complementary hyperpolarized 3He MRI functional information for improved lung segmentation. The 1H-3He MRI joint segmentation algorithm was formulated as a coupled continuous min-cut model and solved through convex relaxation, for which a dual coupled continuous max-flow model was proposed and a max-flow-based efficient numerical solver was developed. Experimental results on a clinical dataset of 25 chronic obstructive pulmonary disease (COPD) patients ranging in disease severity demonstrated that the algorithm provided rapid lung segmentation with high accuracy, reproducibility and diminished user interaction. We then developed a general 1H MRI left-right lung segmentation approach by exploring the left-to-right lung volume proportion prior. The challenging volume proportion-constrained multi-region segmentation problem was approximated through convex relaxation and equivalently represented by a max-flow model with bounded flow conservation conditions. This gave rise to a multiplier-based high performance numerical implementation based on convex optimization theories. 
    In 20 patients with mild-to-moderate and severe asthma, the approach demonstrated high agreement with manual segmentation, excellent reproducibility and computational efficiency. Finally, we developed a CT-3He MRI deformable registration approach that coupled the complementary CT-1H MRI registration. The joint registration problem was solved by exploring optical-flow techniques, primal-dual analyses and convex optimization theories. In a diverse group of patients with asthma and COPD, the registration approach demonstrated lower target registration error than single registration and provided fast regional lung structure-function measurements that were strongly correlated with a reference method. Collectively, these lung segmentation and registration algorithms demonstrated accuracy, reproducibility and workflow efficiency that may all be clinically acceptable. All of this is consistent with the need for broad and large-scale clinical applications of pulmonary MRI and CT.
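The coupled max-flow solvers described above are specialized; as an illustration of the underlying idea only, the sketch below solves a toy convex-relaxed two-region segmentation with a generic Chambolle-Pock primal-dual loop, minimizing a region cost plus total variation over a relaxed labelling u in [0, 1] and thresholding at 0.5. All names, parameters and the single-image setting are illustrative and are not the thesis's coupled 1H-3He implementation:

```python
import numpy as np

def grad(u):
    # Forward differences with Neumann boundary conditions.
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    # Negative adjoint of grad (backward differences).
    dx = np.zeros_like(px)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy = np.zeros_like(py)
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def segment(image, mu_fg, mu_bg, lam=1.0, n_iter=300):
    """Minimize <u, C_fg - C_bg> + lam * TV(u) over u in [0, 1]
    with a Chambolle-Pock primal-dual loop, then threshold."""
    f = (image - mu_fg) ** 2 - (image - mu_bg) ** 2   # region cost gap
    u = np.full(image.shape, 0.5)
    u_bar = u.copy()
    px = np.zeros_like(u); py = np.zeros_like(u)
    tau = sigma = 0.25                                # tau*sigma*||grad||^2 <= 1
    for _ in range(n_iter):
        # Dual ascent on p, then projection onto |p| <= lam.
        gx, gy = grad(u_bar)
        px += sigma * gx; py += sigma * gy
        norm = np.maximum(1.0, np.hypot(px, py) / lam)
        px /= norm; py /= norm
        # Primal descent on u with box constraint, then over-relaxation.
        u_old = u
        u = np.clip(u + tau * (div(px, py) - f), 0.0, 1.0)
        u_bar = 2 * u - u_old
    return u > 0.5
```

The thesis's contribution is to couple two such problems (1H and 3He, or CT and 1H) and add volume-proportion constraints, which changes the flow-conservation conditions but keeps this primal-dual structure.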

    Learning with Limited Labeled Data in Biomedical Domain by Disentanglement and Semi-Supervised Learning

    In this dissertation, we are interested in improving the generalization of deep neural networks for biomedical data (e.g., electrocardiogram signals, x-ray images, etc.). Although deep neural networks have attained state-of-the-art performance and, thus, deployment across a variety of domains, similar performance in the clinical setting remains challenging due to their inability to generalize across unseen data (e.g., a new patient cohort). We address this challenge of generalization in deep neural networks from two perspectives: 1) learning disentangled representations from the deep network, and 2) developing efficient semi-supervised learning (SSL) algorithms using the deep network. In the former, we are interested in designing specific architectures and objective functions to learn representations in which variations in the data are well separated, i.e., disentangled. In the latter, we are interested in designing regularizers that encourage the underlying neural function's behavior toward a common inductive bias to avoid over-fitting the function to small labeled data. Our end goal is to improve the generalization of the deep network for the diagnostic model in both of these approaches. For disentangled representations, this translates to appropriately learning latent representations from the data, capturing the observed input's underlying explanatory factors in an independent and interpretable way. With the data's explanatory factors well separated, such a disentangled latent space can then be useful for a large variety of tasks and domains within the data distribution, even with a small amount of labeled data, thus improving generalization. For efficient semi-supervised algorithms, this translates to utilizing a large volume of unlabelled data to assist learning from the limited labeled dataset, a situation commonly encountered in the biomedical domain.
    By drawing ideas from different areas within deep learning, such as representation learning (e.g., autoencoders), variational inference (e.g., variational autoencoders), Bayesian nonparametrics (e.g., the beta-Bernoulli process), learning theory (e.g., analytical learning theory) and function smoothing (e.g., Lipschitz smoothness), we propose several learning algorithms to improve generalization on the associated tasks. We test our algorithms on real-world clinical data and show that our approach yields significant improvements over existing methods. Moreover, we demonstrate the efficacy of the proposed models on benchmark and simulated data to understand different aspects of the proposed learning methods. We conclude by identifying some of the limitations of the proposed methods, areas of further improvement, and broader future directions for the successful adoption of AI models in the clinical environment.
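As a small illustration of the semi-supervised regularizers discussed, the sketch below trains a plain logistic regression with a consistency penalty that asks the classifier to give the same output for an unlabelled point and a randomly perturbed copy of it. This is a generic stand-in for the family of smoothness-based SSL regularizers, not any specific algorithm from the dissertation; all names and hyperparameters are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_ssl(Xl, yl, Xu, lam=1.0, eps=0.1, lr=0.1, n_iter=500, seed=0):
    """Logistic regression with a supervised cross-entropy loss on the small
    labelled set (Xl, yl) plus a consistency penalty ||f(x + eps*n) - f(x)||^2
    on the unlabelled set Xu, pushing the function toward local smoothness."""
    rng = np.random.default_rng(seed)
    w = np.zeros(Xl.shape[1])
    for _ in range(n_iter):
        # Supervised gradient (binary cross-entropy).
        g = Xl.T @ (sigmoid(Xl @ w) - yl) / len(yl)
        # Consistency gradient on unlabelled data under Gaussian perturbation.
        noise = eps * rng.standard_normal(Xu.shape)
        p, pn = sigmoid(Xu @ w), sigmoid((Xu + noise) @ w)
        diff = pn - p
        g += lam * ((Xu + noise).T @ (diff * pn * (1 - pn))
                    - Xu.T @ (diff * p * (1 - p))) / len(Xu)
        w -= lr * g
    return w
```

The consistency term uses no labels at all, which is how a large unlabelled pool can shape the decision function when only a handful of labelled examples are available.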

    Imaging Biomarkers of Pulmonary Structure and Function

    Asthma and chronic obstructive pulmonary disease (COPD) are characterized by airflow limitations resulting from airway obstruction and/or tissue destruction. The diagnosis and monitoring of these pulmonary diseases is primarily performed using spirometry, specifically the forced expiratory volume in one second (FEV1), which measures global airflow obstruction and provides no regional information about the different underlying disease pathologies. The limitations of spirometry and current therapies for lung disease patients have motivated the development of pulmonary imaging approaches, such as computed tomography (CT) and magnetic resonance imaging (MRI). Inhaled hyperpolarized noble gas MRI, specifically using helium-3 (3He) and xenon-129 (129Xe) gases, provides a way to quantify pulmonary ventilation by visualizing lung regions accessed by gas during a breath-hold, and alternatively, regions that are not accessed - coined "ventilation defects." Despite the strong foundation and many advantages hyperpolarized 3He MRI has to offer research and patient care, clinical translation has been inhibited in part by the cost and need for specialized equipment, including multinuclear-MR hardware and polarizers, and personnel. Accordingly, our objective was to develop and evaluate imaging biomarkers of pulmonary structure and function using MRI and CT without the use of exogenous contrast agents or specialized equipment. First, we developed and compared CT parametric response maps (PRM) with 3He MR ventilation images in measuring gas-trapping and emphysema in ex-smokers with and without COPD. We observed that in mild-moderate COPD, 3He MR ventilation abnormalities were related to PRM gas-trapping, whereas in severe COPD, ventilation abnormalities correlated with both PRM gas-trapping and PRM emphysema.
    We then developed and compared pulmonary ventilation abnormalities derived from Fourier decomposition of free-breathing proton (1H) MRI (FDMRI) with 3He MRI in subjects with COPD and bronchiectasis. This work demonstrated that FDMRI and 3He MRI ventilation defects were strongly related in COPD, but not in bronchiectasis subjects. In COPD only, FDMRI ventilation defects were spatially related to 3He MRI ventilation defects and emphysema. Based on the FDMRI biomarkers developed in patients with COPD and bronchiectasis, we then evaluated ventilation heterogeneity in patients with severe asthma, both pre- and post-salbutamol as well as post-methacholine challenge, using FDMRI and 3He MRI. FDMRI free-breathing ventilation abnormalities were correlated with, but underestimated, 3He MRI static ventilation defects. Finally, based on the previously developed free-breathing MRI approach, we developed a whole-lung free-breathing pulmonary 1H MRI technique to measure regional specific ventilation and evaluated both asthmatics and healthy volunteers. These measurements not only provided similar information to specific ventilation measured using plethysmography, but also information about regional ventilation defects that correlated with 3He MRI ventilation abnormalities. These results demonstrated that the whole-lung free-breathing 1H MRI biomarker of specific ventilation may reflect ventilation heterogeneity and/or gas-trapping in asthma. These important findings indicate that imaging biomarkers of pulmonary structure and function using MRI and CT have the potential to regionally reveal the different pathologies in COPD and asthma without the use of exogenous contrast agents. The development and validation of these clinically meaningful imaging biomarkers is critically required to accelerate the translation of pulmonary imaging from the research workbench to the clinical workflow, with the overall goal of improving patient outcomes.
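The CT parametric response map (PRM) referred to above classifies each co-registered inspiratory/expiratory voxel pair by density thresholds. A minimal sketch using the thresholds commonly reported in the PRM literature (-950 HU at inspiration, -856 HU at expiration); the function name and label encoding are illustrative:

```python
import numpy as np

def prm_classify(insp_hu, exp_hu, insp_thr=-950, exp_thr=-856):
    """Classify co-registered lung voxels into PRM categories:
    0 = normal, 1 = functional small-airway disease (gas-trapping),
    2 = emphysema, using commonly reported HU thresholds."""
    insp_hu = np.asarray(insp_hu)
    exp_hu = np.asarray(exp_hu)
    labels = np.zeros(insp_hu.shape, dtype=int)
    trapped = exp_hu < exp_thr                     # abnormally low density at expiration
    labels[trapped & (insp_hu >= insp_thr)] = 1    # gas-trapping without emphysema
    labels[trapped & (insp_hu < insp_thr)] = 2     # low density at both phases: emphysema
    return labels
```

For example, a voxel at -900 HU in both phases is labelled gas-trapping, while one at -970 HU inspiration and -900 HU expiration is labelled emphysema; counting each label over the lung mask yields the PRM gas-trapping and PRM emphysema percentages compared against 3He ventilation defects above.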

    Deep learning in medical imaging and radiation therapy

    Peer Reviewed
    https://deepblue.lib.umich.edu/bitstream/2027.42/146980/1/mp13264_am.pdf
    https://deepblue.lib.umich.edu/bitstream/2027.42/146980/2/mp13264.pd