
    Four-dimensional tomographic reconstruction by time domain decomposition

    Since the beginnings of tomography, the requirement that the sample remain static during the acquisition of one tomographic rotation has gone unchallenged. We derived and successfully implemented a tomographic reconstruction method that relaxes this decades-old requirement of static samples. In the presented method, dynamic tomographic data sets are decomposed in the temporal domain using basis functions, with an L1 regularization technique in which the penalty is applied to the spatial and temporal derivatives. We implemented the iterative algorithm for solving the regularization problem on modern GPU systems to demonstrate its practical use.
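
    Below is a minimal, illustrative sketch of this idea (not the authors' GPU implementation): the dynamic image is written as u(x, t) = sum_k c_k(x)*phi_k(t) with a fixed temporal basis, and the coefficients are fitted by plain subgradient descent on a least-squares data term plus L1 penalties on the spatial and temporal finite differences. The forward operator here is a toy per-frame blur standing in for a real tomographic projector.

```python
import numpy as np

def temporal_basis(n_frames, n_basis):
    """Low-order cosine basis functions phi_k(t), shape (n_basis, n_frames)."""
    t = np.linspace(0.0, 1.0, n_frames)
    return np.stack([np.cos(np.pi * k * t) for k in range(n_basis)])

def forward(u):
    """Toy per-frame projector A (a 1D blur along x), standing in for the real operator."""
    return 0.25 * np.roll(u, -1, axis=0) + 0.5 * u + 0.25 * np.roll(u, 1, axis=0)

def reconstruct(data, n_basis=4, lam_s=0.05, lam_t=0.05, step=1e-3, iters=200):
    """Fit u(x, t) = sum_k c_k(x) phi_k(t) by subgradient descent on
    0.5*||A u - d||^2 + lam_s*||D_x u||_1 + lam_t*||D_t u||_1."""
    n_pixels, n_frames = data.shape
    phi = temporal_basis(n_frames, n_basis)
    c = np.zeros((n_pixels, n_basis))
    for _ in range(iters):
        u = c @ phi                                  # current dynamic image
        g = forward(forward(u) - data)               # gradient of the data term (A is symmetric here)
        dx = np.sign(np.diff(u, axis=0))             # subgradient of the spatial L1 penalty
        g[1:] += lam_s * dx
        g[:-1] -= lam_s * dx
        dt = np.sign(np.diff(u, axis=1))             # subgradient of the temporal L1 penalty
        g[:, 1:] += lam_t * dt
        g[:, :-1] -= lam_t * dt
        c -= step * (g @ phi.T)                      # pull the gradient back onto the coefficients
    return c @ phi

# Toy usage: a 1D "sample" whose feature drifts slowly over time.
truth = np.array([[1.0 if 20 <= (i + t // 8) < 40 else 0.0 for t in range(32)] for i in range(64)])
recon = reconstruct(forward(truth) + 0.01 * np.random.randn(64, 32))
```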

    Automatic Emphysema Detection using Weakly Labeled HRCT Lung Images

    A method is presented for automatically quantifying emphysema regions using high-resolution computed tomography (HRCT) scans of patients with chronic obstructive pulmonary disease (COPD) that does not require manually annotated scans for training. HRCT scans of controls and of COPD patients with diverse disease severity are acquired at two different centers. Textural features from co-occurrence matrices and Gaussian filter banks are used to characterize the lung parenchyma in the scans. Two robust versions of multiple instance learning (MIL) classifiers, miSVM and MILES, are investigated. The classifiers are trained with weak labels extracted from the forced expiratory volume in one second (FEV1) and the diffusing capacity of the lungs for carbon monoxide (DLCO). At test time, the classifiers output a patient label indicating overall COPD diagnosis and local labels indicating the presence of emphysema. The classifier performance is compared with manual annotations by two radiologists, a classical density-based method, and pulmonary function tests (PFTs). The miSVM classifier performed better than MILES on both patient and emphysema classification. The classifier correlates more strongly with the PFTs than the density-based method does, than the percentage of emphysema in the intersection of both radiologists' annotations, and than the percentage of emphysema annotated by one of the radiologists. The correlation between the classifier and the PFTs is outperformed only by the second radiologist. The method is therefore promising for facilitating the assessment of emphysema and reducing inter-observer variability. Comment: Accepted at PLoS ONE.
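
    As a rough illustration of the weakly supervised set-up (standard miSVM rather than the robust variant used in the paper; the bag/feature interface is an assumption), the loop below keeps instances of negative bags negative and re-estimates the instance labels inside positive bags until the assignment stabilizes, while forcing at least one positive instance per positive bag.

```python
import numpy as np
from sklearn.svm import LinearSVC

def misvm_train(bags, max_iters=20, C=1.0):
    """bags: list of (instance_features, bag_label) pairs, e.g. texture
    features per lung region with a weak patient-level COPD label (0/1)."""
    X = np.vstack([x for x, _ in bags])
    bag_idx = np.concatenate([[i] * len(x) for i, (x, _) in enumerate(bags)])
    bag_labels = np.array([y for _, y in bags])
    # start: every instance inherits its bag label
    y = np.concatenate([[y] * len(x) for x, y in bags]).astype(int)
    clf = LinearSVC(C=C)
    for _ in range(max_iters):
        clf.fit(X, y)
        scores = clf.decision_function(X)
        new_y = y.copy()
        for i in np.where(bag_labels == 1)[0]:
            members = np.where(bag_idx == i)[0]
            new_y[members] = (scores[members] > 0).astype(int)
            if new_y[members].sum() == 0:
                # constraint: every positive bag keeps at least one positive instance
                new_y[members[np.argmax(scores[members])]] = 1
        if np.array_equal(new_y, y):
            break
        y = new_y
    return clf
```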

    Machine learning approaches for lung cancer diagnosis.

    Medical imaging technology has developed enormously: it not only encompasses the techniques and processes for constructing visual representations of the interior of the body for medical analysis, revealing the internal structure of organs beneath the skin, but also provides a noninvasive means of diagnosing various diseases and suggesting efficient ways to treat them. As data from all aspects of our lives are collected and stored for analysis, medical images stand out as a particularly rich source: they contain huge amounts of data that physicians and radiologists cannot easily read, yet hold valuable information that can be mined to discover new knowledge. The design of computer-aided diagnostic (CAD) systems that can be approved for clinical practice and that aid radiologists in diagnosing and detecting potential abnormalities is therefore of great importance. This dissertation deals with the development of a CAD system for lung cancer diagnosis. Lung cancer is the second most common cancer in men (after prostate cancer) and in women (after breast cancer), and it is the leading cause of cancer death in both sexes in the USA. The number of lung cancer patients has increased dramatically worldwide, and early detection doubles a patient's chance of survival. Histological examination of biopsies is the gold standard for the final diagnosis of pulmonary nodules. Although resection of pulmonary nodules is the most reliable route to diagnosis, many other methods are used to avoid the risks associated with the surgical procedure. Lung nodules are approximately spherical regions of primarily high-density tissue visible in computed tomography (CT) images of the lung. A pulmonary nodule is the first indication that triggers the lung cancer diagnostic work-up. Lung nodules can be benign (normal subjects) or malignant (cancerous subjects). Large malignant nodules (generally defined as greater than 2 cm in diameter) can be easily detected with traditional CT scanning techniques, but the diagnostic options for small indeterminate nodules are limited by the difficulty of accessing small tumors. Additional diagnostic and imaging techniques that depend on the nodules' shape and appearance are therefore needed. The ultimate goal of this dissertation is to develop a fast, noninvasive diagnostic system that improves the accuracy of early lung cancer diagnosis, based on the well-known hypothesis that malignant nodules differ in shape and appearance from benign nodules because of their high growth rate. The proposed methodologies introduce new shape and appearance features that can distinguish between benign and malignant nodules. To achieve this goal, a CAD system is implemented and validated on different datasets. The system integrates two types of features, appearance features and shape features, to give a full description of the pulmonary nodule.
    For the appearance features, several texture descriptors are developed: the 3D histogram of oriented gradients, 3D spherical sector isosurface histogram of oriented gradients, 3D adjusted local binary pattern, 3D resolved-ambiguity local binary pattern, multi-view analytical local binary pattern, and Markov-Gibbs random field. Each descriptor characterizes the nodule texture and the degree of its signal homogeneity, which distinguishes benign from malignant nodules. For the shape features, the multi-view peripheral sum curvature scale space, spherical harmonics expansions, and a group of fundamental geometric features are used to describe the complexity of the nodule shape. Finally, a two-stage fusion of different combinations of these features is introduced: the first stage generates a primary estimate for every descriptor, and the second stage uses a single-layer autoencoder augmented with a softmax classifier to produce the final classification of the nodule. These combinations of descriptors are assembled into different frameworks that are evaluated on different datasets. The first is the Lung Image Database Consortium, a publicly available benchmark dataset for lung nodule detection and diagnosis. The second is locally acquired computed tomography data collected at the University of Louisville hospital under a research protocol approved by the Institutional Review Board at the University of Louisville (IRB number 10.0642). The frameworks achieved an accuracy of about 94%, demonstrating their promise as a valuable tool for the detection of lung cancer.
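
    The second fusion stage can be pictured with the following sketch (an interpretation of the description above, not the dissertation's code; the data shapes are assumed): each descriptor's first-stage output is a malignancy probability, the stacked probabilities are compressed by a single-hidden-layer autoencoder, and a softmax (multinomial logistic) layer produces the final benign/malignant label.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LogisticRegression

def fuse_and_classify(stage1_probs, labels, hidden=8):
    """stage1_probs: (n_nodules, n_descriptors) primary estimates in [0, 1];
    labels: (n_nodules,) benign/malignant ground truth."""
    # single-hidden-layer autoencoder: reconstruct the stacked probabilities
    ae = MLPRegressor(hidden_layer_sizes=(hidden,), activation="logistic",
                      max_iter=2000, random_state=0)
    ae.fit(stage1_probs, stage1_probs)
    # hidden representation = logistic(x @ W0 + b0), taken from the trained encoder half
    hidden_repr = 1.0 / (1.0 + np.exp(-(stage1_probs @ ae.coefs_[0] + ae.intercepts_[0])))
    # softmax (multinomial logistic) classifier on the compressed representation
    softmax = LogisticRegression(max_iter=1000)
    softmax.fit(hidden_repr, labels)
    return ae, softmax
```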

    LROC Investigation of Three Strategies for Reducing the Impact of Respiratory Motion on the Detection of Solitary Pulmonary Nodules in SPECT

    The objective of this investigation was to determine the effectiveness of three motion-reducing strategies in diminishing the degrading impact of respiratory motion on the detection of small solitary pulmonary nodules (SPNs) in single-photon emission computed tomographic (SPECT) imaging, in comparison to a standard clinical acquisition and the ideal case of imaging in the absence of respiratory motion. To do this, nonuniform rational B-spline cardiac-torso (NCAT) phantoms based on human-volunteer CT studies were generated spanning the respiratory cycle for a normal background distribution of Tc-99m NeoTect. Similarly, spherical phantoms of 1.0-cm diameter were generated to model small SPNs for each of 150 uniquely located sites within the lungs, whose respiratory motion was based on the motion of normal structures in the volunteer CT studies. The SIMIND Monte Carlo program was used to produce SPECT projection data from these. Normal and single-lesion-containing SPECT projection sets with a clinically realistic Poisson noise level were created for the cases of 1) the end-expiration (EE) frame with all counts, 2) respiration-averaged motion with all counts, 3) one fourth of the 32 frames centered around EE (Quarter Binning), 4) one half of the 32 frames centered around EE (Half Binning), and 5) eight temporally binned frames spanning the respiratory cycle. Each set of combined projection data was reconstructed with RBI-EM with system spatial-resolution compensation (RC). Based on the known motion for each of the 150 different lesions, the reconstructed volumes of the respiratory bins were shifted so as to superimpose the locations of the SPN onto that in the first bin (Reconstruct and Shift). Five human observers performed localization receiver operating characteristic (LROC) studies of SPN detection. The observer results were analyzed for statistically significant differences in SPN detection accuracy among the three correction strategies, the standard acquisition, and the ideal case of the absence of respiratory motion. Our human-observer LROC study determined that the Quarter Binning and Half Binning strategies resulted in SPN detection accuracy statistically significantly below (P < 0.05) that of the standard clinical acquisition, whereas the Reconstruct and Shift strategy resulted in a detection accuracy not statistically significantly different from that of the ideal case. This investigation demonstrates that tumor detection based on acquisitions that use fewer than all of the available counts may be poorer despite limiting the motion of the lesion. The Reconstruct and Shift method results in tumor detection that is equivalent to ideal motion correction.
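
    The Reconstruct and Shift idea can be sketched as follows (an assumed interface, not the study's code): each respiratory bin is reconstructed separately, then shifted by the known lesion displacement so the nodule lines up with its position in the first bin before the bins are combined.

```python
import numpy as np
from scipy.ndimage import shift

def reconstruct_and_shift(bin_volumes, lesion_positions):
    """bin_volumes: list of 3D arrays, one reconstruction per respiratory bin.
    lesion_positions: (n_bins, 3) lesion centroid (z, y, x) in each bin."""
    reference = np.asarray(lesion_positions[0], dtype=float)
    aligned = []
    for vol, pos in zip(bin_volumes, lesion_positions):
        displacement = reference - np.asarray(pos, dtype=float)
        # translate this bin so its lesion coincides with the first bin's lesion
        aligned.append(shift(vol, displacement, order=1, mode="nearest"))
    return np.mean(aligned, axis=0)   # combine all counts after alignment
```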

    A Deep Learning U-Net for Detecting and Segmenting Liver Tumors

    Visualization of liver tumors on simulation CT scans is challenging even with contrast enhancement, because the enhancement is sensitive to the timing of the CT acquisition. Image registration to magnetic resonance imaging (MRI) can be helpful for delineation, but differences in patient position, liver shape and volume, and the lack of anatomical landmarks between the two image sets make the task difficult. This study develops a U-Net based neural network for automated liver and tumor segmentation for radiotherapy treatment planning. Non-contrast, simulation-based abdominal axial CT scans of 52 patients with primary liver tumors were utilized. Preprocessing steps included HU windowing to isolate the liver from the scan, creating liver and tumor masks from the radiotherapy structure set (RTSTRUCT) DICOM file, and converting the images to PNG format. The RTSTRUCT file contained the ground-truth contours manually labelled by the physician for both liver and tumor. The image slices were split into 1400 for training and 600 for validation. Two fully convolutional neural networks with a U-Net architecture were used in this study: the first U-Net segments the liver, and the second U-Net segments the tumor from the liver segments produced by the first network. The Dice coefficient was 89.5% for liver segmentation and 44.4% for liver tumor segmentation. The results showed that the proposed algorithm performs well on liver segmentation and indicate areas for improvement in liver tumor segmentation.
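
    Two small helpers make the pipeline above concrete (illustrative only; the window limits are assumed, not taken from the study): HU windowing of the CT before the liver U-Net, and the Dice coefficient used to score both the liver and the tumor segmentations.

```python
import numpy as np

def hu_window(ct_hu, low=-100, high=200):
    """Clip a CT volume (in Hounsfield units) to a soft-tissue window and
    rescale to [0, 1] for the network input."""
    clipped = np.clip(ct_hu, low, high)
    return (clipped - low) / (high - low)

def dice_coefficient(pred_mask, gt_mask, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) on binary masks."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    return (2.0 * np.logical_and(pred, gt).sum() + eps) / (pred.sum() + gt.sum() + eps)

# Cascade at inference time (hypothetical model objects `liver_unet`, `tumor_unet`):
#   liver_mask = liver_unet.predict(hu_window(ct_slice))
#   tumor_mask = tumor_unet.predict(hu_window(ct_slice) * liver_mask)
```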

    Transfer learning for multicenter classification of chronic obstructive pulmonary disease

    Chronic obstructive pulmonary disease (COPD) is a lung disease which can be quantified using chest computed tomography (CT) scans. Recent studies have shown that COPD can be automatically diagnosed using weakly supervised learning of intensity and texture distributions. However, until now such classifiers have only been evaluated on scans from a single domain, and it is unclear whether they would generalize across domains, such as different scanners or scanning protocols. To address this problem, we investigate classification of COPD in a multi-center dataset with a total of 803 scans from three different centers and four different scanners, with heterogeneous subject distributions. Our method is based on Gaussian texture features and a weighted logistic classifier, which increases the weights of samples similar to the test data. We show that Gaussian texture features outperform intensity features previously used in multi-center classification tasks. We also show that a weighting strategy based on a classifier that is trained to discriminate between scans from different domains can further improve the results. To encourage further research into transfer learning methods for classification of COPD, upon acceptance of the paper we will release two feature datasets used in this study on http://bigr.nl/research/projects/copd. Comment: Accepted at the Journal of Biomedical and Health Informatics.
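
    The weighting strategy can be sketched with a standard importance-weighting recipe (not necessarily the paper's exact estimator): a logistic classifier is trained to tell training-domain from test-domain feature vectors, and the ratio p(test|x)/p(train|x) is used as a per-sample weight when fitting the final weighted logistic COPD classifier.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def domain_weights(X_train, X_test):
    """Weight each training sample by how 'test-like' a domain discriminator finds it."""
    X = np.vstack([X_train, X_test])
    d = np.concatenate([np.zeros(len(X_train)), np.ones(len(X_test))])
    disc = LogisticRegression(max_iter=1000).fit(X, d)
    p_test = disc.predict_proba(X_train)[:, 1]
    return p_test / np.clip(1.0 - p_test, 1e-6, None)   # ratio p(test|x) / p(train|x)

def weighted_copd_classifier(X_train, y_train, X_test):
    w = domain_weights(X_train, X_test)
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_train, y_train, sample_weight=w)   # samples similar to the test domain count more
    return clf
```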

    Feature-driven Volume Visualization of Medical Imaging Data

    Direct volume rendering (DVR) is a volume visualization technique that has proven to be a very powerful tool in many scientific visualization domains. Diagnostic medical imaging is one such domain, in which DVR provides new capabilities for the analysis of complex cases and improves the efficiency of image interpretation workflows. However, the full potential of DVR in the medical domain has not yet been realized. A major obstacle to better integration of DVR in the medical domain is the time-consuming process of optimizing the rendering parameters needed to generate diagnostically relevant visualizations in which the important features hidden in image volumes are clearly displayed, such as the shape and spatial localization of tumors, their relationship with adjacent structures, and temporal changes in the tumors. In current workflows, clinicians must manually specify the transfer function (TF), view-point (camera), clipping planes, and other visual parameters. Another obstacle to the adoption of DVR in the medical domain is the ever-increasing volume of imaging data. The advancement of image acquisition techniques has led to a rapid expansion in the size of the data, in the form of higher resolutions, temporal imaging to track treatment responses over time, and an increase in the number of imaging modalities used for a single procedure. The manual specification of rendering parameters under these circumstances is very challenging. This thesis proposes a set of innovative methods that visualize important features in multi-dimensional and multi-modality medical images by automatically or semi-automatically optimizing the rendering parameters. Our methods enable visualizations necessary for the diagnostic procedure in which a 2D slice of interest (SOI) can be augmented with 3D anatomical contextual information to provide accurate spatial localization of 2D features in the SOI; the rendering parameters are automatically computed to guarantee the visibility of 3D features; and changes in 3D features can be tracked in temporal data under the constraint of consistent contextual information. We also present a method for the efficient computation of visibility histograms (VHs) using adaptive binning, which allows our optimal DVR to be automated and visualized in real time. We evaluated our methods by producing visualizations for a variety of clinically relevant scenarios and imaging data sets, and we examined the computational performance of our methods for these scenarios.
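
    A visibility histogram for a single cast ray can be sketched as follows (illustrative, following the standard front-to-back compositing definition rather than the thesis' adaptive-binning implementation): the visibility of a sample is its opacity times the transmittance accumulated in front of it, and visibilities are summed per intensity bin.

```python
import numpy as np

def visibility_histogram(ray_values, opacity_tf, n_bins=64, value_range=(0.0, 1.0)):
    """ray_values: scalar samples along a ray, ordered front to back.
    opacity_tf: callable mapping a scalar value to an opacity in [0, 1]."""
    hist = np.zeros(n_bins)
    transmittance = 1.0
    lo, hi = value_range
    for v in ray_values:
        alpha = float(opacity_tf(v))
        visibility = alpha * transmittance          # how much this sample contributes to the image
        b = min(int((v - lo) / (hi - lo) * n_bins), n_bins - 1)
        hist[b] += visibility
        transmittance *= (1.0 - alpha)              # attenuate everything behind this sample
    return hist
```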

    Characterization and Compensation of Hysteretic Cardiac Respiratory Motion in Myocardial Perfusion Studies Through MRI Investigations

    Respiratory motion causes artifacts and blurring of cardiac structures in reconstructed images of SPECT and PET cardiac studies. Hysteresis in respiratory motion causes the organs to move along distinct paths during inspiration and expiration. Current respiratory motion correction methods use a signal generated by tracking the motion of the abdomen during respiration to bin list-mode data as a function of the magnitude of this respiratory signal; they thereby fail to account for hysteretic motion. The goal of this research was to demonstrate the effects of hysteretic respiratory motion and the importance of its correction for different medical imaging techniques, particularly SPECT and PET. This study describes a novel approach for detecting and correcting hysteresis in clinical SPECT and PET studies. From the combined use of MRI and a synchronized Visual Tracking System (VTS) in volunteers, we developed hysteretic modeling using the Bouc-Wen model with inputs from measurements of both chest and abdomen respiratory motion. With the MRI-determined heart motion as the ground truth in the volunteer studies, we determined that the Bouc-Wen model could match the behavior over a range of hysteretic cycles. The proposed approach was validated through phantom simulations and applied to clinical SPECT studies.
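
    For reference, the standard Bouc-Wen hysteresis model can be integrated with a simple Euler step, as sketched below (illustrative parameters only; the study fits the model to MRI-measured heart motion using chest and abdomen signals as inputs): dz/dt = A*dx/dt - beta*|dx/dt|*|z|^(n-1)*z - gamma*(dx/dt)*|z|^n.

```python
import numpy as np

def bouc_wen(x, dt, A=1.0, beta=0.5, gamma=0.5, n=1.0):
    """x: sampled input displacement (e.g. an abdomen respiratory signal);
    returns the hysteretic state z(t), which traces a loop rather than a line."""
    z = np.zeros_like(x, dtype=float)
    for k in range(1, len(x)):
        dx = (x[k] - x[k - 1]) / dt
        dz = A * dx - beta * abs(dx) * abs(z[k - 1]) ** (n - 1) * z[k - 1] \
             - gamma * dx * abs(z[k - 1]) ** n
        z[k] = z[k - 1] + dz * dt
    return z

# Example: a breathing-like input produces different paths on inhale vs. exhale.
t = np.linspace(0, 10, 1000)
abdomen = np.sin(2 * np.pi * 0.25 * t)
hysteretic_output = bouc_wen(abdomen, dt=t[1] - t[0])
```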