    Automated Decision Support System for Traumatic Injuries

    With trauma being one of the leading causes of death in the U.S., automated decision support systems that can accurately detect traumatic injuries and predict their outcomes are crucial for preventing secondary injuries and guiding care management. My dissertation research incorporates machine learning and image processing techniques to extract knowledge from structured (e.g., electronic health records) and unstructured (e.g., computed tomography images) data to generate real-time, robust, quantitative trauma diagnosis and prognosis. This work addresses two challenges: 1) incorporating clinical domain knowledge into deep convolutional neural networks using classical image processing techniques and 2) using post-hoc explainers to align black-box predictive machine learning models with clinical domain knowledge. Addressing these challenges is necessary for developing trustworthy clinical decision-support systems that can be generalized across the healthcare system. Motivated by this goal, we introduce an explainable and expert-guided machine learning framework to predict the outcome of traumatic brain injury. We also propose image processing approaches to automatically assess trauma from computed tomography scans. This research comprises four projects. In the first project, we propose an explainable hierarchical machine learning framework to predict the long-term functional outcome of traumatic brain injury using information available in electronic health records. This information includes demographic data, baseline features, radiology reports, laboratory values, injury severity scores, and medical history. To build such a framework, we peer inside the black-box machine learning models to explain their rationale for each predicted risk score. Accordingly, additional layers of statistical inference and human expert validation are added to the model, which ensures the predicted risk score’s trustworthiness. We demonstrate that imposing statistical and domain knowledge “checks and balances” not only does not adversely affect the performance of the machine learning classifier but also makes it more reliable. In the second project, we introduce a framework for detecting and assessing the severity of brain subdural hematomas. First, the hematoma is segmented using a combination of hand-crafted and deep learning features. Next, we calculate the volume of the injured region to quantitatively assess its severity. We show that the combination of classical image processing and deep learning can outperform deep-learning-only methods to achieve improved average performance and robustness. In the third project, we develop a framework to identify and assess liver trauma by calculating the percentage of the liver parenchyma disrupted by trauma. First, liver parenchyma and trauma masks are segmented by employing a deep learning backbone. Next, these segmented regions are refined with respect to the domain knowledge about the location and intensity distribution of liver trauma. This framework accurately estimated the severity of liver parenchyma trauma. In the final project, we propose a kidney segmentation method for patients with blunt abdominal trauma. This model incorporates machine learning and active contour modeling to generate kidney masks on abdominal CT images. The output of this component can provide a region of interest for screening for kidney trauma in future studies.
Together, the four projects discussed in this thesis contribute to the diagnosis and prognosis of trauma across multiple body regions. They provide a quantitative assessment of trauma that is a more accurate measurement of the risk of adverse health outcomes than current qualitative and sometimes subjective clinical practice.
PhD, Bioinformatics, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/168065/1/negarf_1.pd
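
    The quantitative severity measures described above (hematoma volume from a segmented region, and the percentage of liver parenchyma disrupted by trauma) reduce to simple computations over binary voxel masks. The Python/NumPy sketch below illustrates the idea; it is not the dissertation's code, and the function names, mask conventions, and unit handling are illustrative assumptions.

        import numpy as np

        def lesion_volume_ml(mask, voxel_spacing_mm):
            # Volume of a segmented injury (e.g. a subdural hematoma) in millilitres,
            # assuming a binary voxel mask and the CT voxel spacing in mm (dz, dy, dx).
            voxel_volume_mm3 = float(np.prod(voxel_spacing_mm))         # mm^3 per voxel
            return mask.astype(bool).sum() * voxel_volume_mm3 / 1000.0  # 1 mL = 1000 mm^3

        def disrupted_fraction(organ_mask, trauma_mask):
            # Fraction of an organ (e.g. the liver parenchyma) disrupted by trauma,
            # taken as the overlap between the trauma mask and the organ mask.
            organ = organ_mask.astype(bool)
            injured = np.logical_and(organ, trauma_mask.astype(bool))
            return injured.sum() / max(organ.sum(), 1)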

    Reconstruction and validation of arterial geometries for computational fluid dynamics using multiple temporal frames of 4D Flow-MRI magnitude images

    Purpose: Segmentation and reconstruction of arterial blood vessels is a fundamental step in the translation of computational fluid dynamics (CFD) to clinical practice. Four-dimensional flow magnetic resonance imaging (4D Flow-MRI) can provide detailed information on blood flow, but processing this information to elucidate the underlying anatomical structures is challenging. In this study, we present a novel approach to create high-contrast anatomical images from retrospective 4D Flow-MRI data. Methods: For healthy and clinical cases, the 3D instantaneous velocities at multiple cardiac time steps were superimposed directly onto the 4D Flow-MRI magnitude images and combined into a single composite frame. This new Composite Phase-Contrast Magnetic Resonance Angiogram (CPC-MRA) resulted in enhanced and uniform contrast within the lumen. These images were subsequently segmented and reconstructed to generate 3D arterial models for CFD. Using the time-dependent, 3D incompressible Reynolds-averaged Navier–Stokes equations, the transient aortic haemodynamics was computed within a rigid wall model of patient geometries. Results: Validation of these models against the gold standard CT-based approach showed no statistically significant inter-modality difference regarding vessel radius or curvature (p > 0.05), and a similar Dice Similarity Coefficient and Hausdorff Distance. CFD-derived near-wall haemodynamics indicated a significant inter-modality difference (p < 0.05), though the absolute errors were small. When compared to the in vivo data, CFD-derived velocities were qualitatively similar. Conclusion: This proof-of-concept study demonstrated that functional 4D Flow-MRI information can be utilized to retrospectively generate anatomical information for CFD models in the absence of standard imaging datasets and intravenous contrast.
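
    The essence of the composite image construction is to weight each magnitude frame by the measured flow and collapse the cardiac time steps into one high-contrast volume. The NumPy sketch below is a simplified reading of that idea rather than the authors' implementation; in particular, the speed weighting and the maximum-over-time fusion are assumptions about how the frames might be combined.

        import numpy as np

        def composite_pc_mra(magnitude, velocity):
            # magnitude: (T, X, Y, Z) magnitude images across the cardiac cycle
            # velocity:  (T, 3, X, Y, Z) velocity components from the phase data
            speed = np.linalg.norm(velocity, axis=1)  # (T, X, Y, Z) voxel-wise speed
            weighted = magnitude * speed              # enhance voxels carrying coherent flow
            return weighted.max(axis=0)               # fuse time steps into one composite volume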

    Deep learning for tomographic reconstruction with limited data

    Tomography is a powerful technique to non-destructively determine the interior structure of an object. Usually, a series of projection images (e.g. X-ray images) is acquired from a range of different positions. From these projection images, a reconstruction of the object's interior is computed. Many advanced applications require fast acquisition, effectively limiting the number of projection images and imposing a level of noise on these images. These limitations result in artifacts (deficiencies) in the reconstructed images. Recently, deep neural networks have emerged as a powerful technique to remove these limited-data artifacts from reconstructed images, often outperforming conventional state-of-the-art techniques. To perform this task, the networks are typically trained on a dataset of paired low-quality and high-quality images of similar objects. This requirement for paired training data is a major obstacle to their use in many practical applications. In this thesis, we explore techniques to employ deep learning in advanced experiments where measuring additional objects is not possible. Financial support was provided by the Netherlands Organisation for Scientific Research (NWO), programme 639.073.506.
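
    The supervised setting described above, in which a network learns to map low-quality reconstructions to matching high-quality ones, can be summarised by the minimal PyTorch sketch below. It is a generic illustration of that paired training scheme rather than a method from the thesis; the small residual CNN and the mean-squared-error loss are assumptions chosen for brevity.

        import torch
        import torch.nn as nn

        class ArtifactRemovalCNN(nn.Module):
            # A small residual CNN that predicts a correction to an artifact-ridden reconstruction.
            def __init__(self, channels=32):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(channels, 1, 3, padding=1),
                )

            def forward(self, x):
                return x + self.net(x)

        def train_step(model, optimizer, low_quality, high_quality):
            # One supervised step on a paired batch: limited-data reconstruction vs. reference.
            optimizer.zero_grad()
            loss = nn.functional.mse_loss(model(low_quality), high_quality)
            loss.backward()
            optimizer.step()
            return loss.item()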

    The radiological investigation of musculoskeletal tumours : chairperson's introduction

    Infective/inflammatory disorders

    3D deformable registration of longitudinal abdominopelvic CT images using unsupervised deep learning.

    BACKGROUND AND OBJECTIVES: Deep learning is being increasingly used for deformable image registration and unsupervised approaches, in particular, have shown great potential. However, the registration of abdominopelvic Computed Tomography (CT) images remains challenging due to the larger displacements compared to those in brain or prostate Magnetic Resonance Imaging datasets that are typically considered as benchmarks. In this study, we investigate the use of the commonly used unsupervised deep learning framework VoxelMorph for the registration of a longitudinal abdominopelvic CT dataset acquired in patients with bone metastases from breast cancer. METHODS: As a pre-processing step, the abdominopelvic CT images were refined by automatically removing the CT table and all other extra-corporeal components. To improve the learning capabilities of the VoxelMorph framework when only a limited amount of training data is available, a novel incremental training strategy is proposed based on simulated deformations of consecutive CT images in the longitudinal dataset. This devised training strategy was compared against training on simulated deformations of a single CT volume. A widely used software toolbox for deformable image registration called NiftyReg was used as a benchmark. The evaluations were performed by calculating the Dice Similarity Coefficient (DSC) between manual vertebrae segmentations and the Structural Similarity Index (SSIM). RESULTS: The CT table removal procedure allowed both VoxelMorph and NiftyReg to achieve significantly better registration performance. In a 4-fold cross-validation scheme, the incremental training strategy resulted in better registration performance compared to training on a single volume, with a mean DSC of 0.929±0.037 and 0.883±0.033, and a mean SSIM of 0.984±0.009 and 0.969±0.007, respectively. Although our deformable image registration method did not outperform NiftyReg in terms of DSC (0.988±0.003) or SSIM (0.995±0.002), the registrations were approximately 300 times faster. CONCLUSIONS: This study showed the feasibility of deep learning based deformable registration of longitudinal abdominopelvic CT images via a novel incremental training strategy based on simulated deformations
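
    Both evaluation metrics used above are standard and straightforward to reproduce. The Python sketch below shows how the Dice Similarity Coefficient between a warped and a fixed vertebra label, and the Structural Similarity Index between the registered images, might be computed; it relies on NumPy and scikit-image rather than the authors' evaluation code, and the function names are illustrative.

        import numpy as np
        from skimage.metrics import structural_similarity

        def dice_similarity(mask_a, mask_b):
            # Dice Similarity Coefficient between two binary segmentations, e.g. a manual
            # vertebra label on the fixed image and the same label warped by the registration.
            a, b = mask_a.astype(bool), mask_b.astype(bool)
            denom = a.sum() + b.sum()
            return 2.0 * np.logical_and(a, b).sum() / denom if denom > 0 else 1.0

        def evaluate_registration(warped_label, fixed_label, warped_image, fixed_image):
            # Label overlap after registration plus intensity-based image similarity.
            dsc = dice_similarity(warped_label, fixed_label)
            ssim = structural_similarity(warped_image, fixed_image,
                                         data_range=float(fixed_image.max() - fixed_image.min()))
            return dsc, ssim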

    [18F]fluorination of biorelevant arylboronic acid pinacol ester scaffolds synthesized by convergence techniques

    Aim: The development of small molecules through convergent multicomponent reactions (MCR) has been boosted during the last decade due to the ability to synthesize, virtually without any side-products, numerous small drug-like molecules with several degrees of structural diversity.(1) The association of positron emission tomography (PET) labeling techniques in line with the “one-pot” development of biologically active compounds has the potential to become relevant not only for the evaluation and characterization of those MCR products through molecular imaging, but also to increase the library of radiotracers available. Therefore, since the [18F]fluorination of arylboronic acid pinacol ester derivatives tolerates electron-poor and electron-rich arenes and various functional groups,(2) the main goal of this research work was to achieve the 18F-radiolabeling of several different molecules synthesized through MCR. Materials and Methods: [18F]Fluorination of boronic acid pinacol esters was first extensively optimized using a benzaldehyde derivative in relation to the ideal amount of Cu(II) catalyst and precursor to be used, as well as the reaction solvent. Radiochemical conversion (RCC) yields were assessed by TLC-SG. The optimized radiolabeling conditions were subsequently applied to several structurally different MCR scaffolds comprising biologically relevant pharmacophores (e.g. β-lactam, morpholine, tetrazole, oxazole) that were synthesized to specifically contain a boronic acid pinacol ester group. Results: Radiolabeling with fluorine-18 was achieved with volumes (800 μl) and activities (≤ 2 GBq) compatible with most radiochemistry techniques and modules. In summary, an increase in the quantities of precursor or Cu(II) catalyst led to higher conversion yields. An optimal amount of precursor (0.06 mmol) and Cu(OTf)2(py)4 (0.04 mmol) was defined for further reactions, with DMA being a preferential solvent over DMF. RCC yields from 15% to 76%, depending on the scaffold, were reproducibly achieved. Interestingly, it was noticed that the structure of the scaffolds, beyond the arylboronic acid, exerts some influence on the final RCC, with electron-withdrawing groups in the para position apparently enhancing the radiolabeling yield. Conclusion: The developed method, with its high RCC and reproducibility, has the potential to be applied in line with MCR and could also be incorporated at a later stage of this convergent “one-pot” synthesis strategy. Further studies are currently ongoing to apply this radiolabeling concept to fluorine-containing approved drugs whose boronic acid pinacol ester precursors can be synthesized through MCR (e.g. atorvastatin).