24 research outputs found

    Computational methods for the analysis of functional 4D-CT chest images.

    Medical imaging is an important emerging technology that has been used intensively in the last few decades for disease diagnosis and monitoring, as well as for assessing treatment effectiveness. Medical images carry a very large amount of valuable information, far more than radiologists and physicians can exploit unaided. The design of computer-aided diagnostic (CAD) systems, which can serve as assistive tools for the medical community, is therefore of great importance. This dissertation develops a complete CAD system for patients with lung cancer, which remains the leading cause of cancer-related death in the USA; in 2014, there were approximately 224,210 new cases of lung cancer and 159,260 related deaths. The process begins with the detection of lung cancer through the diagnosis of lung nodules (a manifestation of lung cancer). These nodules are approximately spherical regions of primarily high-density tissue that are visible in computed tomography (CT) images of the lung. The treatment of lung cancer is complex: nearly 70% of lung cancer patients require radiation therapy as part of their treatment, and radiation-induced lung injury is a limiting toxicity that may decrease cure rates and increase treatment morbidity and mortality. Finding ways to accurately detect lung injury at an early stage, and hence prevent it, would have significant positive consequences for lung cancer patients. The ultimate goal of this dissertation is to develop a clinically usable CAD system that improves the sensitivity and specificity of early detection of radiation-induced lung injury, based on the hypothesis that irradiated lung tissues may be affected and suffer a decrease in functionality as a side effect of radiation therapy. This hypothesis has been validated by demonstrating that automatic segmentation of the lung regions and registration of consecutive respiratory phases yield elasticity, ventilation, and texture features that provide discriminatory descriptors for early detection of radiation-induced lung injury. The proposed methodologies lead to novel indices for distinguishing normal/healthy and injured lung tissues in clinical decision-making. To achieve this goal, a CAD system for accurate detection of radiation-induced lung injury has been developed from three basic components: lung field segmentation, lung registration, and feature extraction with tissue classification. This dissertation starts with an exploration of the available medical imaging modalities to present the importance of medical imaging in today's clinical applications. Secondly, the methodologies, challenges, and limitations of recent CAD systems for lung cancer detection are covered. This is followed by an accurate segmentation methodology for the lung parenchyma, with a focus on pathological lungs, to extract the volume of interest (VOI) to be analyzed for potential lung injuries stemming from radiation therapy. After segmentation of the VOI, a lung registration framework is introduced to perform the crucial step of co-aligning the intra-patient scans. This step eliminates the effects of orientation differences, motion, breathing, heartbeats, and differences in scanning parameters, so that functionality features can be extracted accurately for the lung fields.
The developed registration framework also helps in the evaluation and gated control of radiotherapy through motion estimation analysis before and after the therapy dose. Finally, the radiation-induced lung injury detection framework is introduced, which combines the previous two medical image processing and analysis steps with feature estimation and classification. This framework estimates and combines both texture and functional features. The texture features are modeled using a novel 7th-order Markov-Gibbs random field (MGRF) model that accurately models the texture of healthy and injured lung tissues by simultaneously accounting for both vertical and horizontal relative dependencies between voxel-wise signals. The functionality features are calculated from the deformation fields, obtained from the 4D-CT lung registration, that map lung voxels between successive CT scans in the respiratory cycle. These functionality features describe the ventilation (air flow rate) of the lung tissues using the Jacobian of the deformation field, and the tissues' elasticity using the strain components calculated from the gradient of the deformation field. Finally, these features are combined in the classification model to detect the injured parts of the lung at an early stage, enabling earlier intervention.
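
    The ventilation and elasticity descriptors above follow directly from the registration output. As a minimal sketch of the underlying computation (assuming a dense voxel-wise displacement field stored as a NumPy array; the function and variable names are illustrative, not taken from the dissertation):

```python
import numpy as np

def ventilation_and_strain(u):
    """Estimate voxel-wise ventilation and elasticity descriptors from a
    displacement field u of shape (3, Z, Y, X) (components in voxel units).

    Ventilation surrogate: Jacobian determinant of the deformation x + u(x),
    i.e. the local volume-change ratio between respiratory phases.
    Elasticity surrogate: normal strain components from the gradient of u.
    """
    # grad[i][j] = d u_i / d x_j, each an array of shape (Z, Y, X)
    grad = [np.gradient(u[i]) for i in range(3)]

    # Deformation gradient F = I + grad(u), stored as a 3x3 field
    F = np.stack([
        np.stack([grad[i][j] + (1.0 if i == j else 0.0) for j in range(3)])
        for i in range(3)
    ])

    # Voxel-wise determinant of the 3x3 deformation gradient
    jac = (F[0, 0] * (F[1, 1] * F[2, 2] - F[1, 2] * F[2, 1])
         - F[0, 1] * (F[1, 0] * F[2, 2] - F[1, 2] * F[2, 0])
         + F[0, 2] * (F[1, 0] * F[2, 1] - F[1, 1] * F[2, 0]))

    # Normal (diagonal) strain components e_ii = d u_i / d x_i
    normal_strain = np.stack([grad[i][i] for i in range(3)])
    return jac, normal_strain
```

    A Jacobian determinant above 1 marks local expansion and below 1 local contraction over the respiratory phase; under the dissertation's hypothesis, injured tissue with reduced functionality would be expected to show a Jacobian closer to 1 and smaller strain magnitudes.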

    A non-invasive diagnostic system for early assessment of acute renal transplant rejection.

    Early diagnosis of acute renal transplant rejection (ARTR) is of immense importance for administering appropriate therapeutic treatment. Although the current diagnostic technique is based on renal biopsy, it is not preferred due to its invasiveness, recovery time (1-2 weeks), and potential for complications, e.g., bleeding and/or infection. In this thesis, a computer-aided diagnostic (CAD) system for early detection of ARTR from 4D (3D + b-value) diffusion-weighted (DW) MRI data is developed. The CAD process starts with 3D B-spline-based data alignment (to handle local deviations due to breathing and heartbeat) and kidney tissue segmentation with an evolving geometric (level-set-based) deformable model. The latter is guided by a voxel-wise stochastic speed function, which follows from a joint kidney-background Markov-Gibbs random field model accounting for an adaptive kidney shape prior and for ongoing visual kidney-background appearances. A cumulative empirical distribution of the apparent diffusion coefficient (ADC) at different b-values of the segmented DW-MRI is considered a discriminatory transplant-status feature. Finally, a classifier based on deep learning of a non-negative constrained stacked auto-encoder is employed to distinguish between rejected and non-rejected renal transplants. In "leave-one-subject-out" experiments on 53 subjects, 98% of the subjects were correctly classified (namely, 36 out of 37 rejected transplants and 16 out of 16 non-rejected ones). Additionally, a four-fold cross-validation experiment was performed, and an average accuracy of 96% was obtained. These experimental results hold promise for the proposed CAD system as a reliable non-invasive diagnostic tool.
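
    For context, the ADC feature used here conventionally follows from the mono-exponential diffusion model. A minimal sketch (assuming a baseline acquisition s0 at b = 0 and an acquisition sb at a given b-value; the names and the binning choice are illustrative, not the thesis's exact implementation):

```python
import numpy as np

def adc_map(s0, sb, b, eps=1e-6):
    """Voxel-wise apparent diffusion coefficient from the mono-exponential
    model S_b = S_0 * exp(-b * ADC), i.e. ADC = -ln(S_b / S_0) / b."""
    return -np.log((sb + eps) / (s0 + eps)) / b

def adc_cdf(adc, kidney_mask, bins=100):
    """Cumulative empirical distribution of ADC over segmented kidney voxels,
    usable as a transplant-status feature vector."""
    values = adc[kidney_mask > 0]
    hist, edges = np.histogram(values, bins=bins, density=True)
    return edges[1:], np.cumsum(hist * np.diff(edges))
```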

    Fast and robust hybrid framework for infant brain classification from structural MRI: a case study for early diagnosis of autism.

    The ultimate goal of this work is to develop a computer-aided diagnosis (CAD) system for early autism diagnosis from infant structural magnetic resonance imaging (MRI). The vital step toward this goal is accurate segmentation of the different brain structures: white matter, gray matter, and cerebrospinal fluid, which is the main focus of this thesis. The proposed brain classification approach consists of two major steps. First, the brain is extracted by integrating a stochastic model, which learns the visual appearance of the brain texture, with a geometric model, which preserves the brain geometry during the extraction process. Second, the brain tissues are segmented based on shape priors, built using a subset of co-aligned training images, which are adapted during the segmentation process using first- and second-order visual appearance features of infant MRIs. The accuracy of the presented segmentation approach has been tested on 300 infant subjects and evaluated blindly on 15 adult subjects. The experimental results were evaluated by the MICCAI MR Brain Image Segmentation (MRBrainS13) challenge organizers using three metrics: Dice coefficient, 95-percentile Hausdorff distance, and absolute volume difference. The proposed method was ranked first in terms of performance and speed.
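
    The three challenge metrics are standard and straightforward to reproduce. A minimal sketch using NumPy and SciPy (binary masks pred and gt; this is an illustrative implementation, not the challenge organizers' official code):

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def hausdorff95(pred, gt, spacing=(1.0, 1.0, 1.0)):
    """95th-percentile symmetric surface distance between two binary masks."""
    def surface(mask):
        m = mask.astype(bool)
        return m & ~binary_erosion(m)  # boundary voxels only
    sp, sg = surface(pred), surface(gt)
    d_to_sg = distance_transform_edt(~sg, sampling=spacing)
    d_to_sp = distance_transform_edt(~sp, sampling=spacing)
    return np.percentile(np.concatenate([d_to_sg[sp], d_to_sp[sg]]), 95)

def abs_volume_diff(pred, gt):
    """Absolute volume difference as a fraction of the ground-truth volume."""
    return abs(int(pred.sum()) - int(gt.sum())) / gt.sum()
```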

    Diffusion-weighted magnetic resonance imaging in diagnosing graft dysfunction: a non-invasive alternative to renal biopsy.

    The thesis is divided into four parts. The first part focuses on background information, including how the kidney functions, kidney diseases, and available kidney disease treatment strategies; in addition, it provides information on imaging instruments and how they can be used to diagnose renal graft dysfunction. The second part focuses on elucidating the parameters linked to highly accurate diagnosis of rejection. Four categories of parameters were tested: clinical biomarkers alone, individual mean apparent diffusion coefficients (ADC) at 11 different b-values, mean ADCs of certain groups of b-values, and a fusion of clinical biomarkers and all b-values. The most accurate model was found when the features at b=100 s/mm2 and b=700 s/mm2 were fused. The third part of this thesis focuses on a study that uses diffusion-weighted MRI to diagnose and differentiate two types of renal rejection; the system was found to correctly differentiate the two types of rejection with 98% accuracy. The last part of this thesis concludes the work that has been done and states possible trends and future avenues.
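
    The winning parameter combination described above amounts to feature-level concatenation before classification. As a minimal, purely illustrative sketch (the feature values, classifier choice, and cross-validation setup are placeholder assumptions, not the thesis's protocol):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 50  # hypothetical number of transplant subjects

# Placeholder per-subject features: mean ADC at b = 100 and b = 700 s/mm2
adc_b100 = rng.normal(2.0e-3, 2e-4, (n, 1))
adc_b700 = rng.normal(1.5e-3, 2e-4, (n, 1))
labels = rng.integers(0, 2, n)  # rejection vs. non-rejection (placeholder)

fused = np.hstack([adc_b100, adc_b700])  # fuse the two b-value features
clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, fused, labels, cv=5).mean())
```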

    A CAD system for early diagnosis of autism using different imaging modalities.

    The term “autism spectrum disorder” (ASD) refers to a collection of neuro-developmental disorders that affect linguistic, behavioral, and social skills. Autism has many symptoms, most prominently social impairment and repetitive behaviors. It is crucial to diagnose autism at an early stage for better assessment and investigation of this complex syndrome. There have been many efforts to diagnose ASD using different techniques, such as imaging modalities, genetic techniques, and behavior reports. Imaging modalities have been extensively exploited for ASD diagnosis, and one of the most successful is magnetic resonance imaging (MRI), which has shown particular promise for the early diagnosis of ASD-related abnormalities. Since their advent in the 1980s, MRI modalities have emerged as powerful means of non-invasive clinical diagnostics for various diseases and abnormalities, and MRI soon became one of the most promising non-invasive modalities for visualization and diagnostics of ASD-related abnormalities. Along with the main advantages of no exposure to radiation, high contrast, and high spatial resolution, recent advances in MRI modalities have notably increased diagnostic certainty. Multiple MRI modalities, such as structural MRI (sMRI), which examines anatomical changes, and functional MRI (fMRI), which examines brain activity by monitoring blood flow changes, have been employed to investigate facets of ASD in order to better understand this complex syndrome. This work aims at developing a new computer-aided diagnostic (CAD) system for autism diagnosis using different imaging modalities. It mainly relies on structural magnetic resonance images for extracting notable shape features from parts of the brain that previous neuropathological studies have shown to correlate with ASD. Shape features are extracted from both the cerebral cortex (Cx) and the cerebral white matter (CWM). Fusion of features from these two structures is conducted based on recent findings suggesting that Cx changes in autism are related to CWM abnormalities; moreover, fusing features from more than one structure increases the robustness of the CAD system. In addition, fMRI experiments are conducted and analyzed to find task-related areas of activation in the brains of autistic and typically developing individuals. All sMRI findings are fused with those of fMRI to better understand ASD in terms of both anatomy and functionality, and thus better classify the two groups. This is one aspect of the novelty of this CAD system: sMRI and fMRI studies are both applied to subjects of different ages to diagnose ASD. Building such a CAD system requires three main blocks. First, 3D brain segmentation is applied using a novel hybrid model that combines shape, intensity, and spatial information. Second, shape features are extracted from both Cx and CWM, and an fMRI reward experiment is conducted from which task-related areas of activation are identified. These features were extracted from local areas of the brain to provide an accurate analysis of ASD and correlate it with particular anatomical areas. Third and last, all the extracted features are fused using a deep-fusion classification network to perform classification and obtain the diagnosis report.
Fusing features from all modalities achieved a classification accuracy of 94.7%, which emphasizes the significance of combining structures/modalities for ASD diagnosis. To conclude, this work could pave the way toward a better understanding of the autism spectrum by finding local areas that correlate with the disease. The idea of personalized medicine is emphasized in this work, as the proposed CAD system holds promise to resolve autism endophenotypes and help clinicians deliver personalized treatment to individuals affected by this complex syndrome.
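
    The deep-fusion classification network in the third block can be pictured as parallel encoders, one per structure or modality, whose latent codes are concatenated before a joint decision layer. A minimal PyTorch sketch under that reading (layer sizes, feature dimensions, and names are illustrative assumptions, not the dissertation's architecture):

```python
import torch
import torch.nn as nn

class DeepFusionClassifier(nn.Module):
    """Fuses per-structure feature vectors (e.g., Cx shape, CWM shape,
    fMRI activation features) through separate encoders, then classifies."""
    def __init__(self, dims=(128, 128, 64), hidden=32):
        super().__init__()
        # One small encoder per input structure/modality
        self.encoders = nn.ModuleList(
            [nn.Sequential(nn.Linear(d, hidden), nn.ReLU()) for d in dims]
        )
        # Joint decision head over the concatenated latent codes
        self.head = nn.Sequential(
            nn.Linear(hidden * len(dims), hidden), nn.ReLU(),
            nn.Linear(hidden, 2),  # ASD vs. typically developing
        )

    def forward(self, features):
        latents = [enc(x) for enc, x in zip(self.encoders, features)]
        return self.head(torch.cat(latents, dim=1))

# Example: a batch of 4 subjects with three hypothetical feature vectors each
model = DeepFusionClassifier()
logits = model([torch.randn(4, 128), torch.randn(4, 128), torch.randn(4, 64)])
```

    Training such a network end to end lets the fusion layers weight Cx, CWM, and fMRI evidence jointly rather than classifying each structure separately.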

    CAD system for early diagnosis of diabetic retinopathy based on 3D extracted imaging markers.

    This dissertation makes significant contributions to the field of ophthalmology, addressing the segmentation of retinal layers and the diagnosis of diabetic retinopathy (DR). The first contribution is a novel 3D segmentation approach that leverages the patient-specific anatomy of retinal layers. This approach demonstrates superior accuracy in segmenting all retinal layers from a 3D retinal image compared to current state-of-the-art methods. It also offers enhanced speed, enabling potential clinical applications. The proposed segmentation approach holds great potential for supporting surgical planning and guidance in retinal procedures such as retinal detachment repair or macular hole closure. Surgeons can benefit from the accurate delineation of retinal layers, enabling a better understanding of the anatomical structure and more effective surgical interventions. Moreover, real-time guidance systems can be developed to assist surgeons during procedures, improving overall patient outcomes. The second contribution of this dissertation is a novel computer-aided diagnosis (CAD) system for precise identification of diabetic retinopathy. The CAD system utilizes 3D-OCT imaging and employs an innovative approach that extracts two distinct features: first-order reflectivity and 3D thickness. These features are then fused and used to train and test a neural network classifier. The proposed CAD system exhibits promising results, surpassing other machine learning and deep learning algorithms commonly employed in DR detection. This demonstrates the effectiveness of the comprehensive analysis approach employed by the CAD system, which considers both low-level and high-level data from the 3D retinal layers. The CAD system goes beyond conventional methods by optimizing backpropagated neural networks to integrate multiple levels of information effectively. By achieving superior performance, the proposed CAD system showcases its potential for accurately diagnosing DR and aiding in the prevention of vision loss. In conclusion, this dissertation presents novel approaches for the segmentation of retinal layers and the diagnosis of diabetic retinopathy. The proposed methods exhibit significant improvements in accuracy, speed, and performance compared to existing techniques, opening new avenues for clinical applications and advancements in the field of ophthalmology. By addressing future research directions, such as testing on larger datasets, exploring alternative algorithms, and incorporating user feedback, the proposed methods can be further refined and developed into robust, accurate, and clinically valuable tools for diagnosing and monitoring retinal diseases.
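
    The two fused feature families are simple to derive once the retinal layers are segmented. A minimal sketch of representative first-order reflectivity statistics and a 3D thickness map (assuming the OCT volume's depth is axis 0; the exact feature set and scaling used in the dissertation are not specified here):

```python
import numpy as np

def first_order_reflectivity(volume, layer_mask):
    """First-order statistics of OCT reflectivity within one retinal layer."""
    vals = volume[layer_mask > 0].astype(float)
    mu, sigma = vals.mean(), vals.std()
    return {
        "mean": mu,
        "std": sigma,
        "skewness": ((vals - mu) ** 3).mean() / sigma ** 3,
        "kurtosis": ((vals - mu) ** 4).mean() / sigma ** 4,
    }

def layer_thickness(layer_mask, axial_res_um):
    """3D thickness map: layer voxels per A-scan along the depth axis
    (assumed to be axis 0), scaled by the axial resolution in micrometers."""
    return layer_mask.sum(axis=0) * axial_res_um
```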

    Multiparametric Magnetic Resonance Imaging Artificial Intelligence Pipeline for Oropharyngeal Cancer Radiotherapy Treatment Guidance

    Oropharyngeal cancer (OPC) is a widespread disease and one of the few domestic cancers that are rising in incidence. Radiographic images are crucial for the assessment of OPC and aid in radiotherapy (RT) treatment. However, RT planning with conventional imaging approaches requires operator-dependent tumor segmentation, which is the primary source of treatment error. Further, OPC expresses differential tumor/node mid-RT response (rapid response) rates, resulting in significant differences between planned and delivered RT dose. Finally, clinical outcomes for OPC patients can also be variable, which warrants the investigation of prognostic models. Multiparametric MRI (mpMRI) techniques that incorporate simultaneous anatomical and functional information, coupled to artificial intelligence (AI) approaches, could improve clinical decision support for OPC by providing immediately actionable clinical rationale for adaptive RT planning. If tumors could be reproducibly segmented, rapid response classified, and prognosis reliably determined, overall patient outcomes would be optimized to improve the therapeutic index as a function of more risk-adapted RT volumes. Consequently, there is an unmet need for automated and reproducible imaging that can simultaneously segment tumors and provide predictive value for actionable RT adaptation. This dissertation primarily seeks to explore and optimize image processing, tumor segmentation, and patient outcomes in OPC through a combination of advanced imaging techniques and AI algorithms. In the first specific aim of this dissertation, we develop and evaluate mpMRI pre-processing techniques for use in downstream segmentation, response prediction, and outcome prediction pipelines. Various MRI intensity standardization and registration approaches were systematically compared and benchmarked. Moreover, synthetic image algorithms were developed to decrease MRI scan time in an effort to optimize our AI pipelines. We demonstrated that proper intensity standardization and image registration can improve mpMRI quality for use in AI algorithms, and developed a novel method to decrease mpMRI acquisition time. Subsequently, in the second specific aim of this dissertation, we investigated underlying questions regarding the implementation of RT-related auto-segmentation. First, we quantified interobserver variability for an unprecedentedly large number of observers for various radiotherapy structures in several disease sites (with a particular emphasis on OPC) using a novel crowdsourcing platform. We then trained an AI algorithm on a series of extant matched mpMRI datasets to segment OPC primary tumors. Moreover, we validated and compared our best model's performance to clinical expert observers. We demonstrated that AI-based mpMRI OPC tumor auto-segmentation offers decreased variability and comparable accuracy to clinical experts, and that certain mpMRI input channel combinations can further improve performance. Finally, in the third specific aim of this dissertation, we predicted OPC primary tumor mid-therapy (rapid) treatment response and prognostic outcomes. Using co-registered pre-therapy and mid-therapy primary tumor manual segmentations of OPC patients, we generated and characterized treatment-sensitive and treatment-resistant pre-RT sub-volumes. These sub-volumes were used to train an AI algorithm to predict individual voxel-wise treatment resistance.
Additionally, we developed an AI algorithm to predict OPC patient progression-free survival using pre-therapy imaging from an international data science competition (in which our entry ranked 1st place), and then translated these approaches to mpMRI data. We demonstrated that AI models can predict rapid response and prognostic outcomes using pre-therapy imaging, which could help guide treatment adaptation, though further work is needed. In summary, the completion of these aims facilitates the development of an image-guided, fully automated OPC clinical decision support tool. The resultant deliverables from this project will positively impact patients by enabling optimized therapeutic interventions in OPC. Future work should consider investigating additional imaging timepoints, imaging modalities, uncertainty quantification, perceptual and ethical considerations, and prospective studies for eventual clinical implementation. A dynamic version of this dissertation is publicly available and assigned a digital object identifier through Figshare (doi: 10.6084/m9.figshare.22141871).
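
    Of the pre-processing steps benchmarked in the first aim, intensity standardization is the simplest to illustrate. A minimal sketch of one common approach, z-score normalization over a foreground mask, given here as an example of the class of methods compared rather than the pipeline's chosen algorithm:

```python
import numpy as np

def zscore_standardize(image, mask=None):
    """Standardize MRI intensities to zero mean, unit variance.

    MRI intensities are not on an absolute scale (unlike CT), so some form
    of standardization is needed before pooling scans across patients and
    scanners in an AI model. Statistics are computed over the foreground
    mask when one is provided, otherwise over the whole volume.
    """
    voxels = image[mask > 0] if mask is not None else image.ravel()
    return (image - voxels.mean()) / voxels.std()
```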

    U-Net based deep convolutional neural network models for liver segmentation from CT scan images

    Liver segmentation is a critical task for the diagnosis, treatment, and follow-up of liver cancer. Computed tomography (CT) scans are the common medical image modality for this segmentation task. Liver segmentation is considered very hard for many reasons. Medical image data available to researchers are limited. Liver shape changes with the patient's position during the CT scan and varies from one patient to another depending on health conditions. The liver and other organs, for example the heart, stomach, and pancreas, share a similar gray-scale range in CT images. Surgical treatment of the liver is very critical because the liver contains a significant amount of blood and lies very close to critical organs such as the heart, lungs, and stomach, and to crucial blood vessels. Segmentation accuracy is therefore critical for defining the shape and position of the liver and its tumors, especially when treatment surgery is conducted using radio-frequency heating or cryoablation needles. In the literature, convolutional neural networks (CNNs) have achieved very high accuracy on liver segmentation, and the U-Net model is considered the state of the art for medical image segmentation. Many researchers have developed CNN models based on U-Net and on stacked U-Nets with or without bridged connections. However, CNN models need a significant number of labeled samples for training and validation, which are not commonly available in the case of liver CT images, and generating manually annotated masks for the training samples is time-consuming and requires the involvement of expert clinicians. Data augmentation has thus been widely used to boost the sample size for model training. Using rotation in steps of 15° together with horizontal and vertical flipping as augmentation techniques addressed the shortage of training samples. Rotation and flipping were chosen because, in real-life situations, most CT scans are recorded while the patient lies face down, or turned 45°, 60°, or 90° onto the right side, according to the location of the tumor. Nonetheless, this process introduced a new issue for liver segmentation: due to the rotation and flipping augmentations, the trained model detected part of the heart as liver when it appeared on the wrong side of the body. The first part of this research conducted an extensive experimental study of U-Net-based models, varying depth, width, and bridging and skip connections, in order to give recommendations for using U-Net-based models. Top-down and bottom-up approaches were used to construct variations of deeper models, whilst two, three, and four stacked U-Nets were applied to construct the wider U-Net models. The variation of the skip connections between two and three U-Nets is a key factor in the study. The proposed model used two bridged U-Nets with three extra skip connections between the U-Nets to overcome the flipping issue. A new loss function, based on minimizing the distance between the centers of mass of the predicted and ground-truth blobs, also enhanced the liver segmentation accuracy. Finally, the deep-supervision concept was integrated with the new loss functions, with the total loss calculated as the weighted sum of the loss functions over the deeply supervised outputs. This achieved a segmentation accuracy of up to 90%.
The proposed model of two bridged U-Nets with compound skip connections and a specific number of levels, layers, and filters, and a specific image size, increased the accuracy of liver segmentation to ~90%, whereas the original U-Net and bridged nets recorded a segmentation accuracy of ~85%. Although applying extra deeply supervised layers and a weighted compound of Dice-coefficient and centroid loss functions solved the flipping issue at ~93% accuracy, there is still room to improve the accuracy by applying image enhancement as a pre-processing stage.
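
    The compound loss described above pairs a region-overlap term with a geometric constraint. A minimal PyTorch sketch of one plausible reading, soft Dice plus the distance between the predicted and ground-truth centers of mass (the weighting and implementation details are illustrative assumptions):

```python
import torch

def soft_dice_loss(pred, target, eps=1e-6):
    """1 - soft Dice over a batch; pred holds probabilities, target binary masks."""
    dims = tuple(range(1, pred.ndim))
    inter = (pred * target).sum(dim=dims)
    denom = pred.sum(dim=dims) + target.sum(dim=dims)
    return 1.0 - ((2 * inter + eps) / (denom + eps)).mean()

def center_of_mass(m, eps=1e-6):
    """Batch center of mass of a (B, ...) non-negative map, in voxel coordinates."""
    dims = tuple(range(1, m.ndim))
    grids = torch.meshgrid(
        *[torch.arange(s, dtype=m.dtype) for s in m.shape[1:]], indexing="ij"
    )
    mass = m.sum(dim=dims) + eps
    return torch.stack([(m * g).sum(dim=dims) / mass for g in grids], dim=1)

def compound_loss(pred, target, w_dice=1.0, w_centroid=0.1):
    """Weighted Dice + centroid-distance loss; the centroid term penalizes
    blobs predicted on the wrong side of the body (the flipping issue)."""
    centroid_dist = torch.norm(
        center_of_mass(pred) - center_of_mass(target), dim=1
    ).mean()
    return w_dice * soft_dice_loss(pred, target) + w_centroid * centroid_dist
```

    With deep supervision, this compound loss would be evaluated at each supervised output and summed with per-level weights, matching the total-loss formulation described in the abstract.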
