
    Expert System with an Embedded Imaging Module for Diagnosing Lung Diseases

    Lung diseases are among the major causes of suffering and death in the world. Improved survival rates could be achieved if these diseases were detected at an early stage. Specialist doctors with the expertise and experience to interpret medical images and diagnose complex lung diseases are scarce. In this work, a rule-based expert system with an embedded imaging module is developed to assist general physicians in hospitals and clinics in diagnosing lung diseases whenever the services of specialist doctors are not available. The rule-based expert system contains a large knowledge base covering categories such as the patient's personal and medical history, clinical symptoms, clinical test results and radiological information. An imaging module is integrated into the expert system to enhance chest X-ray images. The goal of this module is to enhance the chest X-ray images so that they provide details comparable to more expensive modalities such as MRI and CT scans. A new algorithm, a modified morphological grayscale top-hat transform, is introduced to increase the visibility of lung nodules in chest X-rays. A fuzzy inference technique is used to predict the probability of malignancy of the nodules. The output generated by the expert system was compared with the diagnoses made by specialist doctors. The system produces results similar to the diagnoses made by the doctors and acceptable by clinical standards
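
    The abstract's modified top-hat transform is not specified in detail, but the standard white top-hat it builds on is simple to sketch: subtract a morphological opening (erosion followed by dilation) from the image, which flattens the slowly varying background and keeps small bright structures such as nodules. A minimal 1D Python illustration; the signal values and window size are illustrative, not from the paper:

```python
def erode(sig, k):
    # Grayscale erosion with a flat window of width k (truncated at borders).
    r = k // 2
    return [min(sig[max(0, i - r): i + r + 1]) for i in range(len(sig))]

def dilate(sig, k):
    # Grayscale dilation with the same flat window.
    r = k // 2
    return [max(sig[max(0, i - r): i + r + 1]) for i in range(len(sig))]

def white_top_hat(sig, k):
    # Opening = dilation of erosion; top-hat = signal minus its opening.
    opened = dilate(erode(sig, k), k)
    return [s - o for s, o in zip(sig, opened)]

# Slowly rising background with one narrow bright "nodule" at index 3.
signal = [10, 11, 12, 30, 13, 14, 15]
th = white_top_hat(signal, 5)  # the peak survives; the ramp is suppressed
```

    In 2D the same idea is applied with a disk-shaped structuring element sized just above the largest nodule of interest, so that nodules pass through while ribs and the diaphragm shadow are removed with the background.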

    Image Enhancement of Cancerous Tissue in Mammography Images

    This research presents a framework for enhancing and analyzing time-sequenced mammographic images for the detection of cancerous tissue, specifically designed to assist radiologists and physicians with the detection of breast cancer. By using computer-aided diagnosis (CAD) systems as a tool for detecting breast cancer in computed tomography (CT) mammography images, previous CT mammography images enhance the interpretation of the next series of images. The first stage of this dissertation applies image subtraction to images from the same patient over time. Image types are defined as temporal subtraction, dual-energy subtraction, and Digital Database for Screening Mammography (DDSM). Image enhancement begins by applying image registration and subtraction, using Matlab 2012a registration for temporal images and dual-energy subtraction for dual-energy images. DDSM images require no registration or subtraction, as they are used for baseline analysis. The image data come from three different sources, and all images had been annotated by radiologists for each image type using an image mask identifying malignant and benign regions. The second stage involved the examination of four different thresholding techniques. The amplitude thresholding method treats objects and backgrounds so that object and background pixels have grey levels grouped into two dominant and distinct modes; in these cases, it was possible to extract the objects from the background using a threshold that separates the modes. The local thresholding introduced posed no restrictions on region shape or size, because it maximized edge features by thresholding local regions separately. The overall histogram analysis showed the minima and maxima of the image and provided four feature types: mean, variance, skewness, and kurtosis. K-means clustering provided sequential splitting, initially performing dynamic splits.
    These dynamic splits were then further split into smaller, more variant regions until the regions of interest were isolated. Region-growing methods used recursive splitting to partition the image top-down by using the average brightness of a region. Each thresholding method was applied to each of the three image types. In the final stage, the training set and test set were derived by applying the four thresholding methods to each of the three image types. This was accomplished by running the Matlab 2012a grey-level co-occurrence matrix (GLCM) and utilizing 21 target feature types obtained from the Matlab texture-features function. An additional four feature types were obtained from the histogram-based feature types. These 25 feature types were applied to each of the two classifications, malignant and benign. WEKA 3.6.10 was used with the J48 classifier and 10-fold cross-validation to find the precision, recall, and F-measure values. The best results were obtained from two combinations: temporal subtraction with amplitude thresholding, and temporal subtraction with region-growing thresholding. To summarize, the researcher's contribution was to assess the effectiveness of various thresholding methods in the context of a three-stage approach, to help radiologists find cancerous tissue lesions in CT and MRI mammography images
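
    The amplitude-thresholding stage described above separates two dominant grey-level modes with a single global threshold. One standard way to pick that threshold, not necessarily the one used in the dissertation, is Otsu's method, which maximizes the between-class variance over all candidate thresholds. A stdlib-only Python sketch:

```python
def otsu_threshold(pixels, levels=256):
    """Global threshold maximizing between-class variance (Otsu's method).

    `pixels` is a flat list of integer grey levels in [0, levels)."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w_b = sum_b = 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_b += hist[t]                 # background weight (levels <= t)
        if w_b == 0:
            continue
        w_f = total - w_b              # foreground weight
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b              # background mean
        m_f = (sum_all - sum_b) / w_f  # foreground mean
        var = w_b * w_f * (m_b - m_f) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

    Pixels above the returned threshold are treated as object. Amplitude thresholding works best when the histogram really is bimodal, which is why the dissertation also examines local, histogram-based and region-based alternatives.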

    Lung Volume Calculation in Preclinical MicroCT: A Fast Geometrical Approach

    Lung; Preclinical imaging; Volume
    In this study, we present a time-efficient protocol for thoracic volume calculation as a proxy for total lung volume. We hypothesize that lung volume can be calculated indirectly from this thoracic volume. We compared the measured thoracic volume with manually segmented and automatically thresholded lung volumes, with manual segmentation as the gold standard. A linear regression formula was obtained and used to calculate the theoretical lung volume, which was then compared with the gold-standard volumes. In healthy animals, the average thoracic volume was 887.45 mm3, the manually delineated lung volume 554.33 mm3 and the thresholded aerated lung volume 495.38 mm3. The theoretical lung volume was 554.30 mm3. Finally, the protocol was applied to three animal models of lung pathology (lung metastasis, transgenic primary lung tumor and fungal infection). In confirmed pathologic animals, the thoracic volumes were 893.20, 860.12 and 1027.28 mm3. Manually delineated volumes were 640.58, 503.91 and 882.42 mm3, respectively. Thresholded lung volumes were 315.92, 408.72 and 236 mm3, respectively. The theoretical lung volumes were 635.28, 524.30 and 863.10 mm3. No significant differences were observed between the volumes, which confirmed the potential use of this protocol for lung volume calculation in pathologic models
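
    The theoretical lung volume above comes from a linear regression of manually segmented lung volume against measured thoracic volume. The fit itself is ordinary least squares; a stdlib-only sketch, where the paired volumes are made-up placeholders rather than the study's data:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical (thoracic, lung) volume pairs in mm^3 -- placeholders only.
thoracic = [850.0, 900.0, 950.0, 1000.0]
lung     = [530.0, 560.0, 595.0, 620.0]
slope, intercept = linear_fit(thoracic, lung)

def theoretical_lung_volume(thoracic_vol):
    # Apply the regression formula to a new thoracic measurement.
    return slope * thoracic_vol + intercept
```

    Once the slope and intercept are fixed on a calibration set, only the fast thoracic measurement is needed per animal, which is what makes the protocol time-efficient.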

    Computational methods for the analysis of functional 4D-CT chest images.

    Medical imaging is an important emerging technology that has been used intensively in the last few decades for disease diagnosis and monitoring, as well as for assessing treatment effectiveness. Medical images provide a very large amount of valuable information, too large to be fully exploited by radiologists and physicians. Therefore, the design of a computer-aided diagnostic (CAD) system, which can be used as an assistive tool for the medical community, is of great importance. This dissertation deals with the development of a complete CAD system for lung cancer patients; lung cancer remains the leading cause of cancer-related death in the USA. In 2014, there were approximately 224,210 new cases of lung cancer and 159,260 related deaths. The process begins with the detection of lung cancer through the diagnosis of lung nodules (a manifestation of lung cancer). These nodules are approximately spherical regions of primarily high-density tissue that are visible in computed tomography (CT) images of the lung. The treatment of these lung cancer nodules is complex; nearly 70% of lung cancer patients require radiation therapy as part of their treatment. Radiation-induced lung injury is a limiting toxicity that may decrease cure rates and increase treatment-related morbidity and mortality. Finding ways to accurately detect, at an early stage, and hence prevent lung injury will have significant positive consequences for lung cancer patients. The ultimate goal of this dissertation is to develop a clinically usable CAD system that can improve the sensitivity and specificity of early detection of radiation-induced lung injury, based on the hypothesis that irradiated lung tissues may be affected and suffer a decrease in functionality as a side effect of radiation therapy.
    This hypothesis has been validated by demonstrating that automatic segmentation of the lung regions and registration of consecutive respiratory phases, used to estimate elasticity, ventilation, and texture features, provide discriminatory descriptors for the early detection of radiation-induced lung injury. The proposed methodologies lead to novel indexes for distinguishing normal/healthy and injured lung tissues in clinical decision-making. To achieve this goal, a CAD system for accurate detection of radiation-induced lung injury has been developed around three basic components: lung field segmentation, lung registration, and feature extraction with tissue classification. The dissertation starts with an exploration of the available medical imaging modalities to present the importance of medical imaging in today's clinical applications. Secondly, the methodologies, challenges, and limitations of recent CAD systems for lung cancer detection are covered. This is followed by an accurate segmentation methodology for the lung parenchyma, with a focus on pathological lungs, to extract the volume of interest (VOI) to be analyzed for potential lung injuries stemming from the radiation therapy. After the segmentation of the VOI, a lung registration framework is introduced to perform a crucial step that ensures the co-alignment of the intra-patient scans. This step eliminates the effects of orientation differences, motion, breathing, heartbeats, and differences in scanning parameters, so that the functionality features of the lung fields can be extracted accurately. The developed registration framework also helps in the evaluation and gated control of the radiotherapy through motion estimation analysis before and after the therapy dose.
    Finally, the radiation-induced lung injury detection framework is introduced, which combines the previous two medical image processing and analysis steps with the feature estimation and classification step. This framework estimates and combines both texture and functional features. The texture features are modeled using a novel 7th-order Markov-Gibbs random field (MGRF) model that accurately models the texture of healthy and injured lung tissues by simultaneously accounting for both vertical and horizontal relative dependencies between voxel-wise signals. The functionality feature calculations are based on the deformation fields, obtained from the 4D-CT lung registration, that map lung voxels between successive CT scans in the respiratory cycle. These functionality features describe the ventilation (the air flow rate) of the lung tissues using the Jacobian of the deformation field, and the tissues' elasticity using the strain components calculated from the gradient of the deformation field. Finally, these features are combined in the classification model to detect the injured parts of the lung at an early stage, enabling earlier intervention
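
    The ventilation descriptor mentioned above is commonly derived from the Jacobian determinant of the registration's deformation field: a determinant above 1 indicates local expansion (air inflow), below 1 local compression. A minimal 2D sketch with central differences, assuming unit voxel spacing (the dissertation works on 3D 4D-CT fields, but the construction is the same):

```python
def jacobian_volume_change(ux, uy, y, x):
    """Local volume change det(I + grad u) - 1 at grid point (y, x).

    ux, uy are 2D lists holding the x- and y-displacement components."""
    dux_dx = (ux[y][x + 1] - ux[y][x - 1]) / 2.0
    dux_dy = (ux[y + 1][x] - ux[y - 1][x]) / 2.0
    duy_dx = (uy[y][x + 1] - uy[y][x - 1]) / 2.0
    duy_dy = (uy[y + 1][x] - uy[y - 1][x]) / 2.0
    det = (1.0 + dux_dx) * (1.0 + duy_dy) - dux_dy * duy_dx
    return det - 1.0  # > 0: local expansion (inhalation), < 0: compression

# Toy displacement field: uniform 10% expansion about the origin.
ux = [[0.1 * x for x in range(3)] for _ in range(3)]
uy = [[0.1 * y for _ in range(3)] for y in range(3)]
change = jacobian_volume_change(ux, uy, 1, 1)  # 1.1 * 1.1 - 1, about 0.21
```

    Strain-based elasticity descriptors use the same displacement gradients, combining them into symmetric strain components instead of a determinant.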


    A computer aided diagnosis system for lung nodules detection in postero anterior chest radiographs

    This thesis describes a computer-aided system for lung nodule detection. The fully automated method developed to search for nodules is composed of four steps: segmentation of the lung field, enhancement of the image, extraction of the candidate regions, and selection among them of the regions with the highest chance of being true positives. The segmentation, enhancement and candidate extraction steps are based on multi-scale analysis. The common assumption underlying their development is that the signal representing the details to be detected by each of them (lung borders or nodule regions) is a mixture of simpler signals belonging to different scales and levels of detail. The last step, candidate region classification, is the most complicated; its task is to discern, among a high number of candidate regions, the few true positives. To this aim, several features and different classifiers have been investigated. In Chapter 1 the segmentation algorithm is described; the algorithm has been tested on the images of two different databases, the JSRT and the Niguarda database, both described in the next section, for a total of 409 images. We compared our results with another method, described by Ginneken in [85] as the one with the best performance at the state of the art, which had been tested on the same images of the JSRT database. No errors were detected in the results obtained by our method, whereas the previously mentioned one produced 50 errors overall. The results obtained on the images of the Niguarda database also confirmed the efficacy of the system, allowing us to say that this is the best method presented so far in the literature; this claim is also supported by the fact that this is the only system tested on such a number of images, belonging to two different databases.
    Chapter 2 describes the multi-scale enhancement and extraction methods. The enhancement produces an image in which the "conspicuity" of nodules is increased, so that nodules of different sizes, located in parts of the lungs characterized by completely different anatomic noise, are more visible. Based on the same assumption, the candidate extraction procedure, described in the same chapter, employs a multi-scale method to detect all the nodules of different sizes. This step has also been compared with two methods ([8] and [1]) described in the literature and tested on the same images. Our implementation of the first ([8]) produced very poor results; the second obtained a sensitivity ratio (see Appendix C for its definition) of 86%. The considerably better performance of our method is shown by its much higher sensitivity ratio (97%) and its much lower number of false positives. The experiments aimed at the classification of the candidates are described in Chapter 3; a rule-based technique and two learning systems, the multi-layer perceptron (MLP) and the support vector machine (SVM), have been investigated. Their input is a set of 16 features. The rule-based system obtained the best performance: the cardinality of the candidate set is highly reduced without lowering the sensitivity of the system, since no true positive region is lost. This performance is much better than that of the system used by Ginneken and Schilham in [1], whose sensitivity is lower (77%) with a comparable number of false positives left. The drawback of a rule-based system is the need to set the thresholds used by the rules; since they are set experimentally, the system depends on the images used to develop it, so the performance on different databases might not be as good.
    The results of the MLPs and the SVMs are described in detail, and ROC analysis is reported for the experiments performed with the SVMs. Furthermore, the attempt to improve the classification performance led to further experiments employing SVMs trained with more complicated feature sets. The results, being no better than the previous ones, showed the need for a proper selection of the features. Future work will therefore focus on testing other sets of features, and on combinations obtained by means of proper feature selection techniques
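
    The rule-based classifier of Chapter 3 keeps a candidate region only if all of its features fall inside experimentally set intervals; its weakness, as noted above, is that those thresholds are tuned to the development images. The mechanism can be sketched as follows, where the feature names and intervals are purely illustrative and not the thesis's 16 features or actual thresholds:

```python
def rule_filter(candidates, rules):
    """Keep a candidate only if every feature lies in its allowed interval."""
    return [c for c in candidates
            if all(lo <= c[feat] <= hi for feat, (lo, hi) in rules.items())]

# Illustrative rules (hypothetical features and thresholds).
rules = {
    "area_mm2":       (20.0, 900.0),  # plausible nodule size range
    "circularity":    (0.5, 1.0),     # nodules are roughly round
    "mean_intensity": (0.3, 0.9),     # normalized brightness window
}

candidates = [
    {"area_mm2": 150.0, "circularity": 0.8, "mean_intensity": 0.6},  # kept
    {"area_mm2": 5.0,   "circularity": 0.9, "mean_intensity": 0.6},  # too small
]
kept = rule_filter(candidates, rules)
```

    Because every rule must hold simultaneously, a single out-of-range feature rejects a candidate; this is what makes the filter aggressive at reducing false positives but sensitive to how the intervals were tuned.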

    Coronary Artery Segmentation and Motion Modelling

    Conventional coronary artery bypass surgery requires an invasive sternotomy and the use of a cardiopulmonary bypass, which leads to a long recovery period and has high infectious potential. Totally endoscopic coronary artery bypass (TECAB) surgery based on image-guided robotic surgical approaches has been developed to allow clinicians to conduct the bypass surgery off-pump, with only three pinhole incisions in the chest cavity through which two robotic arms and one stereo endoscopic camera are inserted. However, the restricted field of view of the stereo endoscopic images leads to possible vessel misidentification and coronary artery mis-localization, resulting in 20-30% conversion rates from TECAB surgery to the conventional approach. We have constructed patient-specific 3D + time coronary artery and left ventricle motion models from preoperative 4D computed tomography angiography (CTA) scans. By temporally and spatially aligning this model with the intraoperative endoscopic views of the patient's beating heart, this work assists the surgeon in identifying and locating the correct coronaries during TECAB procedures, and thus has the prospect of reducing the conversion rate from TECAB to conventional coronary bypass procedures. This thesis mainly focuses on designing segmentation and motion tracking methods for the coronary arteries in order to build pre-operative patient-specific motion models. Various vessel centreline extraction and lumen segmentation algorithms are presented, including intensity-based approaches, a geometric model matching method and a morphology-based method. A probabilistic atlas of the coronary arteries is formed from a group of subjects to facilitate the vascular segmentation and registration procedures. A non-rigid registration framework based on a free-form deformation model and multi-level multi-channel large deformation diffeomorphic metric mapping is proposed to track the coronary motion. The methods are applied to 4D CTA images acquired from various groups of patients and quantitatively evaluated
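
    The free-form deformation model mentioned above represents motion with a sparse lattice of control points blended by cubic B-splines. A 1D stdlib-only sketch of how a displacement is evaluated between control points; the grid spacing and control values are illustrative, and the thesis applies the same construction in 3D plus time:

```python
def bspline_basis(u):
    """Cubic B-spline blending weights for local coordinate u in [0, 1)."""
    return [
        (1 - u) ** 3 / 6.0,
        (3 * u ** 3 - 6 * u ** 2 + 4) / 6.0,
        (-3 * u ** 3 + 3 * u ** 2 + 3 * u + 1) / 6.0,
        u ** 3 / 6.0,
    ]

def ffd_displacement(x, spacing, control):
    """Displacement at position x from a 1D control-point lattice.

    Requires 1 <= x // spacing <= len(control) - 3 so that the four
    neighbouring control points exist."""
    i = int(x // spacing)   # index of the lattice cell containing x
    u = x / spacing - i     # local coordinate inside that cell
    b = bspline_basis(u)
    return sum(b[k] * control[i + k - 1] for k in range(4))
```

    The four weights sum to one, so a lattice of identical control values reproduces that value everywhere, which is a quick sanity check; moving a single control point deforms only its local neighbourhood, the property that makes FFD attractive for modelling localized cardiac motion.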

    Image processing for plastic surgery planning

    This thesis presents image processing tools for plastic surgery planning. In particular, it presents a novel method that combines local and global context in a probabilistic relaxation framework to identify cephalometric landmarks used in maxillofacial plastic surgery. It also uses a method that exploits global and local symmetry to identify abnormalities in frontal CT images of the human body. The proposed methodologies are evaluated on clinical data supplied by collaborating plastic surgeons
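
    The symmetry idea can be illustrated in its simplest global form: mirror the image about its vertical midline and look at where the two halves disagree. A pure-Python sketch; the thesis combines global and local symmetry, and this shows only the global part:

```python
def asymmetry_map(img):
    """Per-pixel |value - mirrored value|; large entries flag asymmetry."""
    return [[abs(row[x] - row[len(row) - 1 - x]) for x in range(len(row))]
            for row in img]

# A perfectly symmetric row yields zeros; an asymmetric one does not.
sym  = asymmetry_map([[1, 2, 1]])[0]   # [0, 0, 0]
asym = asymmetry_map([[5, 0, 9]])[0]   # [4, 0, 4]
```

    In practice the midline itself must first be estimated (the body is rarely centred in the frame), and local symmetry measures are then used to localize which structure is responsible for a flagged difference.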