
    Multiresolution spatiotemporal mechanical model of the heart as a prior to constrain the solution for 4D models of the heart.

    In several nuclear cardiac imaging applications (SPECT and PET), images are formed by reconstructing tomographic data with an iterative reconstruction algorithm that corrects for the physical factors involved in the imaging detection process and for cardiac and respiratory motion. The physical factors, which include attenuation, scatter, and spatially varying geometric response, are modeled as coefficients in the matrix of a system of linear equations, and the tomographic problem is solved by inverting this system matrix. This requires an iterative reconstruction algorithm built on the statistical model that best fits the data acquisition; the most appropriate model is based on a Poisson distribution. Using Bayes' theorem, an iterative reconstruction algorithm is designed to determine the constrained maximum a posteriori estimate of the reconstructed image, i.e., the estimate that maximizes the Bayesian posterior for the Poisson statistical model. The a priori distribution is formulated as the joint entropy (JE) measuring the similarity between the gated cardiac PET image and the cardiac MRI cine image represented by a finite-element (FE) mechanical model. The developed algorithm shows the potential of using an FE mechanical model of the heart derived from a cardiac MRI cine scan to constrain solutions of gated cardiac PET images.
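    As a concrete illustration of the joint-entropy similarity term used in the prior, the sketch below estimates JE from a joint intensity histogram of two co-registered volumes; the bin count and array names are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def joint_entropy(pet_img, mri_img, bins=64):
        # Joint entropy (in nats) of two co-registered images, estimated from
        # their joint intensity histogram. Lower JE indicates greater similarity
        # between the gated PET estimate and the MRI-derived model.
        hist, _, _ = np.histogram2d(pet_img.ravel(), mri_img.ravel(), bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]                      # drop empty bins before taking logs
        return float(-np.sum(p * np.log(p)))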

    Generative Invertible Networks (GIN): Pathophysiology-Interpretable Feature Mapping and Virtual Patient Generation

    Machine learning methods play increasingly important roles in pre-procedural planning for complex surgeries and interventions. Very often, however, historical records of emerging surgical techniques, such as transcatheter aortic valve replacement (TAVR), are scarce. In this paper, we address this challenge by proposing novel generative invertible networks (GIN) to select features and generate high-quality virtual patients that may serve as an additional data source for machine learning. Combining a convolutional neural network (CNN) and generative adversarial networks (GAN), GIN discovers the pathophysiologic meaning of the feature space. Moreover, a test of predicting the surgical outcome directly from the selected features achieves a high accuracy of 81.55%, which suggests that little pathophysiologic information is lost during feature selection. This demonstrates that GIN can generate virtual patients that are not only visually authentic but also pathophysiologically interpretable.
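    A minimal sketch of the kind of check described above, predicting the surgical outcome from the selected features with an off-the-shelf classifier; the feature matrix, outcome labels, and choice of classifier are placeholders, not the authors' setup.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 16))     # placeholder for GIN-selected features (patients x features)
    y = rng.integers(0, 2, size=120)   # placeholder binary surgical outcomes

    clf = LogisticRegression(max_iter=1000)
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
    print(f"cross-validated outcome-prediction accuracy: {acc:.3f}")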

    Anatomy-Based Transmission Factors for Technique Optimization in Portable Chest X-ray

    Currently, portable x-ray examinations do not employ automatic exposure control (AEC). To aid in the design of a size-specific technique chart, acrylic slabs of various thicknesses are often used to estimate x-ray transmission factors for patients of various body thicknesses. This approach, while simple, does not account for patient anatomy, tissue heterogeneity, or the attenuation properties of the human body. To better account for these factors, in this work we determined x-ray transmission factors using anatomically realistic computational patient models. A Monte Carlo program was developed to model a portable x-ray system, with detailed modeling of the x-ray spectrum, detector positioning, collimation, and source-to-detector distance. Simulations were performed using 18 computational patient models from the extended cardiac-torso (XCAT) family (9 males, 9 females; age range: 2-58 years; weight range: 12-117 kg). The transmission factor was calculated as the ratio of air kerma at the detector with and without a patient model, and it decreased exponentially with increasing patient thickness. For the range of patient thicknesses examined (12-28 cm), the transmission factor ranged from approximately 25% to 2.8% when the air kerma was averaged over the entire imaging field of view, and from approximately 25% to 5.2% when it represented the average signal from two discrete AEC cells. These exponential relationships can be used to optimize imaging techniques for patients of various body thicknesses and to aid in the design of clinical technique charts.
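    A short sketch of the transmission-factor definition and the exponential fit against patient thickness; the kerma ratios and thicknesses below are placeholder values chosen only to match the reported range, not the paper's data.

    import numpy as np

    def transmission_factor(kerma_with_patient, kerma_without_patient):
        # Ratio of detector air kerma with and without the patient model.
        return kerma_with_patient / kerma_without_patient

    thickness_cm = np.array([12.0, 16.0, 20.0, 24.0, 28.0])
    tf = np.array([0.25, 0.14, 0.08, 0.045, 0.028])   # illustrative values only

    # Fit tf(t) = a * exp(-mu * t) by linear regression on log(tf).
    slope, intercept = np.polyfit(thickness_cm, np.log(tf), 1)
    mu, a = -slope, np.exp(intercept)
    print(f"fitted transmission factor: tf(t) ~= {a:.3f} * exp(-{mu:.3f} * t)")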

    LROC Investigation of Three Strategies for Reducing the Impact of Respiratory Motion on the Detection of Solitary Pulmonary Nodules in SPECT

    The objective of this investigation was to determine the effectiveness of three motion-reducing strategies in diminishing the degrading impact of respiratory motion on the detection of small solitary pulmonary nodules (SPNs) in single-photon emission computed tomographic (SPECT) imaging, in comparison to a standard clinical acquisition and the ideal case of imaging in the absence of respiratory motion. To do this, nonuniform rational B-spline cardiac-torso (NCAT) phantoms based on human-volunteer CT studies were generated spanning the respiratory cycle for a normal background distribution of Tc-99m NeoTect. Similarly, spherical phantoms of 1.0-cm diameter were generated to model small SPNs for each of 150 uniquely located sites within the lungs, whose respiratory motion was based on the motion of normal structures in the volunteer CT studies. The SIMIND Monte Carlo program was used to produce SPECT projection data from these phantoms. Normal and single-lesion-containing SPECT projection sets with a clinically realistic Poisson noise level were created for the cases of 1) the end-expiration (EE) frame with all counts, 2) respiration-averaged motion with all counts, 3) one fourth of the 32 frames centered around EE (Quarter Binning), 4) one half of the 32 frames centered around EE (Half Binning), and 5) eight temporally binned frames spanning the respiratory cycle. Each set of combined projection data was reconstructed with RBI-EM with system spatial-resolution compensation (RC). Based on the known motion for each of the 150 lesions, the reconstructed volumes of the respiratory bins were shifted so as to superimpose the location of the SPN onto that in the first bin (Reconstruct and Shift). Five human observers performed localization receiver operating characteristic (LROC) studies of SPN detection. The observer results were analyzed for statistically significant differences in SPN detection accuracy among the three correction strategies, the standard acquisition, and the ideal case of the absence of respiratory motion. Our human-observer LROC study determined that the Quarter Binning and Half Binning strategies resulted in SPN detection accuracy statistically significantly below (P < 0.05) that of the standard clinical acquisition, whereas the Reconstruct and Shift strategy resulted in detection accuracy not statistically significantly different from that of the ideal case. This investigation demonstrates that tumor detection based on acquisitions using fewer than all of the available counts may result in poorer detection despite limiting the motion of the lesion, whereas the Reconstruct and Shift method yields tumor detection equivalent to ideal motion correction.
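    The Reconstruct and Shift step can be illustrated with a brief sketch: each respiratory-bin reconstruction is shifted by the known lesion displacement so the SPN locations coincide with the first bin, and the aligned volumes are combined. The interpolation order and the use of a simple mean are illustrative assumptions.

    import numpy as np
    from scipy.ndimage import shift

    def reconstruct_and_shift(recon_bins, lesion_offsets_vox):
        # recon_bins: list of 3D volumes, one reconstruction per respiratory bin.
        # lesion_offsets_vox: per-bin (z, y, x) displacement of the lesion relative
        # to bin 0, in voxels; known a priori in this simulation study.
        aligned = [shift(vol, -np.asarray(offset), order=1)
                   for vol, offset in zip(recon_bins, lesion_offsets_vox)]
        return np.mean(aligned, axis=0)   # combine the motion-aligned bins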

    Patient Specific Dosimetry Phantoms Using Multichannel LDDMM of the Whole Body

    This paper describes an automated procedure for creating detailed patient-specific pediatric dosimetry phantoms from a small set of segmented organs in a child's CT scan. The algorithm involves full-body mappings from an adult template to pediatric images using multichannel large deformation diffeomorphic metric mapping (MC-LDDMM). The parallel implementation and performance of MC-LDDMM for this application are studied here for a sample of 4 pediatric patients on 1 to 24 processors. 93.84% of the computation time is parallelized, and the efficiency of parallelization remains high until more than 8 processors are used. The algorithm was validated on a set of 24 male and 18 female pediatric patients and was found to be accurate typically to within 1-2 voxels (2-4 mm) and robust across this large and variable data set.
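    The reported drop in parallel efficiency beyond 8 processors is consistent with Amdahl's law given the stated 93.84% parallel fraction; a quick check, where that fraction is the only figure taken from the abstract:

    def amdahl_speedup(parallel_fraction, processors):
        # Expected speedup when only `parallel_fraction` of the work parallelizes.
        return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / processors)

    f = 0.9384   # fraction of MC-LDDMM computation time reported as parallelized
    for p in (1, 2, 4, 8, 16, 24):
        s = amdahl_speedup(f, p)
        print(f"{p:2d} processors: speedup {s:5.2f}, efficiency {s / p:.2f}")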

    Impact of Poultry Mortality Pits on Farm Groundwater Quality

    Proceedings of the 1999 Georgia Water Resources Conference, March 30 and 31, Athens, Georgia. Results of a 15-county survey revealed that intensive animal agriculture may impact shallow groundwater resources. The objectives of this study were to assess water quality on poultry farms and to determine whether there is a relationship between waste disposal practices and groundwater quality. Twenty poultry farms representing concentrated areas of commercial poultry production and four major soil provinces were evaluated using site assessments, questionnaires, electromagnetic (EM) survey readings, and chemical and microbiological analysis of domestic well water. Based upon the EM survey results, five farms were instrumented with lysimeters and test wells to determine possible nutrient and microbiological movement to groundwater. Site evaluations revealed that 10 of the 47 (21%) domestic wells did not have appropriate wellhead protection to prevent surface water contamination. Five of the 47 (11%) wells were located downslope of and/or within 100 ft of a nitrogen source other than pits and averaged nitrate-N (NO3-N) levels above background (3 ppm). Thirty-eight percent had elevated coliform levels, and 10.6% contained Salmonella in at least one sample during the sampling period. EM surveys and monitoring data indicated that nutrients migrate less than 100 ft laterally down gradient from the pits. Poultry mortality pits on the 20 farms did not appear to elevate nitrate levels above background, while groundwater nitrate-N levels were higher on farms containing uncovered litter stacks. Preliminary results indicate that uncovered litter stacks may have a greater impact on groundwater quality than poultry mortality pits. Additional testing on various soil types is needed.

    Simulating Cardiac Fluid Dynamics in the Human Heart

    Cardiac fluid dynamics fundamentally involves interactions between complex blood flows and the structural deformations of the muscular heart walls and the thin, flexible valve leaflets. There has been longstanding scientific, engineering, and medical interest in creating mathematical models of the heart that capture, explain, and predict these fluid-structure interactions. However, existing computational models that account for interactions among the blood, the actively contracting myocardium, and the cardiac valves are limited in their abilities to predict valve performance, resolve fine-scale flow features, or use realistic descriptions of tissue biomechanics. Here we introduce and benchmark a comprehensive mathematical model of cardiac fluid dynamics in the human heart. A unique feature of our model is that it incorporates biomechanically detailed descriptions of all major cardiac structures that are calibrated using tensile tests of human tissue specimens to reflect the heart's microstructure. Further, it is the first fluid-structure interaction model of the heart that provides anatomically and physiologically detailed representations of all four cardiac valves. We demonstrate that this integrative model generates physiologic dynamics, including realistic pressure-volume loops that automatically capture isovolumetric contraction and relaxation, and predicts fine-scale flow features. None of these outputs are prescribed; instead, they emerge from interactions within our comprehensive description of cardiac physiology. Such models can serve as tools for predicting the impacts of medical devices or clinical interventions. They also can serve as platforms for mechanistic studies of cardiac pathophysiology and dysfunction, including congenital defects, cardiomyopathies, and heart failure, that are difficult or impossible to perform in patients.
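    As a worked example of reading standard indices off the pressure-volume behavior mentioned above, the snippet below derives end-diastolic volume, end-systolic volume, stroke volume, and ejection fraction from a ventricular volume trace; the synthetic trace is a placeholder for simulated model output, not data from the paper.

    import numpy as np

    t = np.linspace(0.0, 0.8, 200)                                                # one cardiac cycle (s)
    lv_volume_ml = 130.0 - 60.0 * np.clip(np.sin(np.pi * t / 0.8), 0.0, None)     # toy volume trace

    edv = lv_volume_ml.max()                  # end-diastolic volume
    esv = lv_volume_ml.min()                  # end-systolic volume
    stroke_volume = edv - esv
    ejection_fraction = stroke_volume / edv   # EF = (EDV - ESV) / EDV
    print(f"EDV {edv:.0f} mL, ESV {esv:.0f} mL, SV {stroke_volume:.0f} mL, EF {ejection_fraction:.2f}")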

    XCAT-GAN for Synthesizing 3D Consistent Labeled Cardiac MR Images on Anatomically Variable XCAT Phantoms

    Generative adversarial networks (GANs) have provided promising data enrichment solutions by synthesizing high-fidelity images. However, generating large sets of labeled images with new anatomical variations remains unexplored. We propose a novel method for synthesizing cardiac magnetic resonance (CMR) images on a population of virtual subjects with large anatomical variation, introduced using the 4D eXtended Cardiac and Torso (XCAT) computerized human phantom. We investigate two conditional image synthesis approaches grounded in a semantically consistent mask-guided image generation technique: 4-class and 8-class XCAT-GANs. The 4-class technique relies only on the annotations of the heart, while the 8-class technique employs a predicted multi-tissue label map of the heart-surrounding organs and provides better guidance for the conditional image synthesis. For both techniques, we train the conditional XCAT-GAN with real images paired with corresponding labels and subsequently, at inference time, substitute the labels with the XCAT-derived ones. The trained network thus transfers the tissue-specific textures to the new label maps. By creating 33 virtual subjects of synthetic CMR images at the end-diastolic and end-systolic phases, we evaluate the usefulness of such data in the downstream cardiac cavity segmentation task under different augmentation strategies. Results demonstrate that even with only 20% of the real images (40 volumes) seen during training, segmentation performance is retained with the addition of synthetic CMR images. Moreover, the benefit of augmenting the real data with synthetic images is evident in a reduction of the Hausdorff distance of up to 28% and an increase in the Dice score of up to 5%, indicating a higher similarity to the ground truth in all dimensions.
    Comment: Accepted for MICCAI 202
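    The segmentation metrics quoted above can be computed as sketched here; the mask shapes and the use of foreground voxel sets for the Hausdorff distance are illustrative assumptions.

    import numpy as np
    from scipy.spatial.distance import directed_hausdorff

    def dice_score(mask_a, mask_b):
        # Dice overlap between two binary segmentation masks.
        intersection = np.logical_and(mask_a, mask_b).sum()
        return 2.0 * intersection / (mask_a.sum() + mask_b.sum())

    def hausdorff_distance(mask_a, mask_b):
        # Symmetric Hausdorff distance between the foreground voxel sets (in voxels).
        pts_a = np.argwhere(mask_a)
        pts_b = np.argwhere(mask_b)
        return max(directed_hausdorff(pts_a, pts_b)[0],
                   directed_hausdorff(pts_b, pts_a)[0])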

    GPU-based Low Dose CT Reconstruction via Edge-preserving Total Variation Regularization

    High radiation dose in CT scans increases the lifetime risk of cancer and has become a major clinical concern. Recently, iterative reconstruction algorithms with Total Variation (TV) regularization have been developed to reconstruct CT images from highly undersampled data acquired at low mAs levels in order to reduce the imaging dose. Nonetheless, TV regularization may lead to over-smoothed images and lost edge information. To solve this problem, in this work we develop an iterative CT reconstruction algorithm with edge-preserving TV regularization to reconstruct CT images from highly undersampled data obtained at low mAs levels. The CT image is reconstructed by minimizing an energy consisting of an edge-preserving TV norm and a data fidelity term posed by the x-ray projections. The edge-preserving TV term preferentially performs smoothing only on the non-edge part of the image in order to avoid over-smoothing, which is realized by introducing a penalty weight into the original total variation norm. Our iterative algorithm is implemented on GPU to improve its speed. We test our reconstruction algorithm on a digital NCAT phantom, a physical chest phantom, and a Catphan phantom. Reconstruction results from a conventional FBP algorithm and a TV regularization method without the edge-preserving penalty are also presented for comparison. The experimental results illustrate that both the TV-based algorithm and our edge-preserving TV algorithm outperform the conventional FBP algorithm in suppressing streaking artifacts and image noise in the low-dose context. Our edge-preserving algorithm is superior to the TV-based algorithm in that it preserves more fine-structure information and therefore maintains acceptable spatial resolution.
    Comment: 21 pages, 6 figures, 2 tables
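    A minimal sketch of an edge-weighted TV penalty of the kind described, where the weight suppresses the penalty at likely edges so smoothing acts mainly on non-edge regions; the specific weight function and the parameter delta are assumptions, not necessarily the paper's exact formulation.

    import numpy as np

    def edge_weighted_tv(u, delta=0.01):
        # Weighted total variation of a 2D image `u`: small weights where the
        # local gradient is large (likely edges), so those regions are penalized
        # (and hence smoothed) less. The full objective would add a data-fidelity
        # term posed by the measured x-ray projections.
        gx = np.diff(u, axis=1, append=u[:, -1:])     # forward differences, x
        gy = np.diff(u, axis=0, append=u[-1:, :])     # forward differences, y
        grad_mag = np.sqrt(gx ** 2 + gy ** 2)
        w = 1.0 / (1.0 + (grad_mag / delta) ** 2)     # edge-suppressing weight
        return float(np.sum(w * grad_mag))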