
    Extracting Lungs from CT Images using Fully Convolutional Networks

    Analysis of cancer and other pathological diseases, like the interstitial lung diseases (ILDs), is usually possible through Computed Tomography (CT) scans. To aid this, a preprocessing step of segmentation is performed to reduce the area to be analyzed, segmenting the lungs and removing unimportant regions. Generally, complex methods are developed to extract the lung region, often relying on hand-crafted feature extractors to enhance segmentation. With the popularity of deep learning techniques and their automated feature learning, we propose a lung segmentation approach using fully convolutional networks (FCNs) combined with fully connected conditional random fields (CRFs), as employed in many state-of-the-art segmentation works. Aiming to develop a generalized approach, the publicly available datasets from the University Hospitals of Geneva (HUG) and the VESSEL12 challenge were studied, including many healthy and pathological CT scans for evaluation. Experiments were conducted using each dataset individually, applying the model trained on one dataset to the other, and combining both datasets. Dice scores of 98.67% ± 0.94% for the HUG-ILD dataset and 99.19% ± 0.37% for the VESSEL12 dataset were achieved, outperforming prior works on the former and obtaining results similar to the state of the art on the latter, demonstrating the capability of deep learning approaches.
    Comment: Accepted for presentation at the International Joint Conference on Neural Networks (IJCNN) 201
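The Dice score used to evaluate the segmentations above compares two binary masks. The following is a minimal NumPy sketch of the metric (illustrative only, not the paper's implementation; the tiny masks are made up):

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Tiny illustrative masks (not real CT data)
pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 1, 0], [0, 0, 0]])
score = dice_score(pred, truth)  # 2*2 / (3+2) = 0.8
```

A score of 1.0 means perfect overlap; the ~0.99 values reported in the abstract indicate near-perfect agreement with the reference masks.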

    Computational methods for the analysis of functional 4D-CT chest images.

    Medical imaging is an important emerging technology that has been intensively used in the last few decades for disease diagnosis and monitoring as well as for the assessment of treatment effectiveness. Medical images provide a very large amount of valuable information, too much to be fully exploited by radiologists and physicians. Therefore, the design of computer-aided diagnostic (CAD) systems, which can be used as assistive tools for the medical community, is of great importance. This dissertation deals with the development of a complete CAD system for lung cancer patients; lung cancer remains the leading cause of cancer-related death in the USA. In 2014, there were approximately 224,210 new cases of lung cancer and 159,260 related deaths. The process begins with the detection of lung cancer through the diagnosis of lung nodules (a manifestation of lung cancer). These nodules are approximately spherical regions of primarily high-density tissue that are visible in computed tomography (CT) images of the lung. The treatment of these lung cancer nodules is complex: nearly 70% of lung cancer patients require radiation therapy as part of their treatment. Radiation-induced lung injury is a limiting toxicity that may decrease cure rates and increase morbidity and mortality. Accurately detecting lung injury at an early stage, and hence preventing it, would have significant positive consequences for lung cancer patients. The ultimate goal of this dissertation is to develop a clinically usable CAD system that can improve the sensitivity and specificity of early detection of radiation-induced lung injury, based on the hypothesis that irradiated lung tissues may be affected and suffer decreased functionality as a side effect of radiation therapy treatment.
This hypothesis has been validated by demonstrating that automatic segmentation of the lung regions and registration of consecutive respiratory phases can estimate elasticity, ventilation, and texture features that provide discriminatory descriptors for the early detection of radiation-induced lung injury. The proposed methodologies lead to novel indexes for distinguishing normal/healthy and injured lung tissues in clinical decision-making. To achieve this goal, a CAD system for accurate detection of radiation-induced lung injury has been developed around three basic components: lung field segmentation, lung registration, and feature extraction with tissue classification. This dissertation starts with an exploration of the available medical imaging modalities to present the importance of medical imaging in today's clinical applications. Secondly, the methodologies, challenges, and limitations of recent CAD systems for lung cancer detection are covered. This is followed by an accurate segmentation methodology for the lung parenchyma, with a focus on pathological lungs, to extract the volume of interest (VOI) to be analyzed for the potential existence of lung injuries stemming from the radiation therapy. After the segmentation of the VOI, a lung registration framework is introduced to perform a crucial step that ensures the co-alignment of the intra-patient scans. This step eliminates the effects of orientation differences, motion, breathing, heartbeats, and differences in scanning parameters, so that functionality features can be accurately extracted for the lung fields. The developed registration framework also helps in the evaluation and gated control of the radiotherapy through motion estimation analysis before and after the therapy dose.
Finally, the radiation-induced lung injury detection framework is introduced, which combines the previous two medical image processing and analysis steps with the feature estimation and classification step. This framework estimates and combines both texture and functional features. The texture features are modeled using a novel 7th-order Markov-Gibbs random field (MGRF) model that accurately models the texture of healthy and injured lung tissues by simultaneously accounting for both vertical and horizontal relative dependencies between voxel-wise signals. The functionality feature calculations are based on the deformation fields, obtained from the 4D-CT lung registration, that map lung voxels between successive CT scans in the respiratory cycle. These functionality features describe the ventilation (the air flow rate) of the lung tissues using the Jacobian of the deformation field, and the tissues' elasticity using the strain components calculated from the gradient of the deformation field. Finally, these features are combined in the classification model to detect the injured parts of the lung at an early stage, enabling earlier intervention.
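The ventilation measure described here, the Jacobian determinant of the deformation, can be sketched in a few lines. This is an illustrative NumPy version assuming a dense voxel-wise displacement field is already available from registration; it is not the dissertation's actual pipeline:

```python
import numpy as np

def jacobian_volume_change(u):
    """Voxel-wise fractional volume change from a 3D displacement field.

    u: array of shape (3, Z, Y, X) holding the z, y, x displacement
    components (in voxel units). Returns det(I + grad(u)) - 1, which
    is positive where tissue expands (inhalation) and negative where
    it contracts — a common surrogate for local ventilation.
    """
    # grads[i][j] = d u_i / d axis_j, each of shape (Z, Y, X)
    grads = np.array([np.gradient(u[i]) for i in range(3)])
    # Jacobian matrix of the transform x -> x + u(x): J = I + du_i/dx_j
    jac = np.eye(3)[:, :, None, None, None] + grads
    # Move the 3x3 matrix axes last so np.linalg.det broadcasts over voxels
    jac = np.moveaxis(jac, (0, 1), (-2, -1))
    return np.linalg.det(jac) - 1.0
```

For a uniform 10% expansion along each axis the result is 1.1³ − 1 ≈ 0.331 everywhere, matching the intuition that the Jacobian measures local volume gain. The strain-based elasticity features mentioned above would similarly be built from the `grads` tensor.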

    Automatic 3D pulmonary nodule detection in CT images: a survey

    This work presents a systematic review of techniques for the 3D automatic detection of pulmonary nodules in computed tomography (CT) images. Its main goals are to analyze the latest technology being used for the development of computational diagnostic tools to assist in the acquisition, storage and, mainly, processing and analysis of biomedical data. This work also identifies the progress made so far, evaluates the challenges to be overcome and provides an analysis of future prospects. As far as the authors know, this is the first review devoted exclusively to automated 3D techniques for the detection of pulmonary nodules from lung CT images, which makes this work of noteworthy value. The research covered works published in the Web of Science, PubMed, Science Direct and IEEE Xplore up to December 2014. Each work found that referred to automated 3D detection of pulmonary nodules was individually analyzed to identify its objective, methodology and results. Based on the analysis of the selected works, several studies were seen to be useful for the construction of medical diagnostic aid tools. However, certain aspects still require attention, such as increasing algorithm sensitivity, reducing the number of false positives, improving and optimizing the detection of different kinds of nodules with different sizes and shapes and, finally, the ability to integrate with Electronic Medical Record Systems and Picture Archiving and Communication Systems. Based on this analysis, we can say that further research is needed to improve current techniques and that new algorithms are needed to overcome the identified drawbacks.

    Multi-view convolutional recurrent neural networks for lung cancer nodule identification

    Screening via low-dose Computed Tomography (CT) has been shown to reduce lung cancer mortality rates by at least 20%. However, the assessment of large numbers of CT scans by radiologists is cost-intensive and potentially produces varying and inconsistent results across radiologists (and also for temporally-separated assessments by the same radiologist). To overcome these challenges, computer-aided diagnosis systems based on deep learning methods have proved effective in the automatic detection and classification of lung cancer. Latterly, interest has focused on the full utilization of the 3D information in CT scans using 3D-CNNs and related approaches. However, such approaches do not intrinsically correlate size and shape information between slices. In this work, an innovative approach, Multi-view Convolutional Recurrent Neural Networks (MV-CRecNet), is proposed that exploits shape, size and cross-slice variations while learning to identify lung cancer nodules from CT scans. The multiple views that are passed to the model ensure better generalization and the learning of robust features. We evaluate the proposed MV-CRecNet model on the reference Lung Image Database Consortium and Image Database Resource Initiative and Early Lung Cancer Action Program datasets; six evaluation metrics are applied to eleven comparison models for testing. Results demonstrate that the proposed methodology outperforms all of the models against all of the evaluation metrics.

    Computer Methods for Pulmonary Nodule Characterization from CT Images

    Computed tomography (CT) scans provide radiologists with a non-invasive method of imaging internal structures of the body. Although CT scans have enabled the earlier detection of suspicious nodules, these nodules are often small and difficult for radiologists to classify accurately. An automated system was developed to classify a pulmonary nodule based on image features extracted from a single CT scan. Several critical issues related to the performance evaluation of such systems were also examined. The image features considered in the system were statistics from the density distribution, shape, curvature, and boundary features. The shape and density features were computed through moment analysis of the segmented nodule. Local curvature was computed from a triangle-tessellated surface of the nodule; the statistics of the distribution of curvatures were used as features in the system. Finally, the boundary of the nodule was examined to quantify the transition region between the nodule and the lung parenchyma. This was accomplished by combining the grayscale information and the 3D model to measure the gradient on the surface of the nodule. These methods resulted in a total of 43 features. For comparison, 2D features were computed for the density and shape features, resulting in 26 features. Four feature classification schemes were evaluated: logistic regression, k-nearest-neighbors, distance-weighted nearest-neighbors, and support vector machines (SVM). These features and classifiers were validated on a large dataset of 259 nodules. The best performance, an area under the ROC curve (AUC) of 0.702, was achieved using 3D features and the logistic regression classifier. A major consideration when evaluating a nodule classification system is whether the system presents an improvement over a baseline performance. Since the majority of large nodules in many datasets are malignant, the impact of nodule size on the performance of the classification system was examined.
This was accomplished by comparing the performance of the system with feature sets that included size-dependent features to feature sets that excluded those features. The performance of size alone, estimated using a size-threshold classifier, was an AUC of 0.653. For the SVM classifier, removing size-dependent features reduced the performance from an AUC of 0.69 to 0.61. To approximate the performance that might be obtained on a dataset without a size bias, a subset of cases was selected where the benign and malignant nodules were of similar sizes. On this subset, size was not a very powerful feature, with an AUC of 0.507, and features that were not dependent on size performed better than size-dependent features for SVM, with an AUC of 0.63 compared to 0.52. While other methods have been proposed for performing nodule classification, this is the first study to comprehensively look at the performance impact from datasets whose nodules exhibit a bias in size.
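The AUC values above have a direct probabilistic reading via the Mann-Whitney formulation: the AUC is the probability that a randomly chosen malignant nodule scores higher than a randomly chosen benign one. A small NumPy sketch (the nodule sizes below are made up for illustration, not the study's data):

```python
import numpy as np

def auc(scores_benign, scores_malignant):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of (malignant, benign) pairs where the malignant
    score is higher, counting ties as 0.5."""
    b = np.asarray(scores_benign, dtype=float)
    m = np.asarray(scores_malignant, dtype=float)
    wins = (m[:, None] > b[None, :]).sum()
    ties = (m[:, None] == b[None, :]).sum()
    return (wins + 0.5 * ties) / (len(m) * len(b))

# A "size-threshold classifier" scores each nodule by its diameter;
# its AUC measures how much size alone separates the two classes.
benign_sizes = [4, 5, 6, 7]       # hypothetical diameters (mm)
malignant_sizes = [6, 8, 9, 10]   # hypothetical diameters (mm)
size_auc = auc(benign_sizes, malignant_sizes)
```

An AUC near 0.5, as found on the size-matched subset, means the score carries essentially no discriminative information; an AUC of 1.0 would mean perfect separation.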

    Nextmed: Automatic Imaging Segmentation, 3D Reconstruction, and 3D Model Visualization Platform Using Augmented and Virtual Reality

    The visualization of medical images with advanced techniques, such as augmented reality and virtual reality, represents a breakthrough for medical professionals. In contrast to more traditional visualization tools lacking 3D capabilities, these systems use all three available dimensions. To visualize medical images in 3D, the anatomical areas of interest must be segmented. Currently, manual segmentation, which is the most commonly used technique, and semi-automatic approaches can be time-consuming because a doctor is required, making segmentation for each individual case unfeasible. Using new technologies, such as computer vision and artificial intelligence for segmentation algorithms and augmented and virtual reality for visualization, we designed a complete platform to solve this problem and allow medical professionals to work more frequently with anatomical 3D models obtained from medical imaging. As a result, the Nextmed project, through its different software applications, permits the importation of Digital Imaging and Communications in Medicine (DICOM) images onto a secure cloud platform and the automatic segmentation of certain anatomical structures with new algorithms that improve upon current research results. A 3D mesh of the segmented structure is then automatically generated that can be printed in 3D or visualized using both augmented and virtual reality with the designed software systems. The Nextmed project is unique in that it covers the whole process from uploading DICOM images to automatic segmentation, 3D reconstruction, 3D visualization, and manipulation using augmented and virtual reality. There is much research on the application of augmented and virtual reality to 3D medical image visualization; however, these are not automated platforms. Although other anatomical structures can be studied, we focused on one case: a lung study.
Analyzing the application of the platform to more than 1000 DICOM images and studying the results with medical specialists, we concluded that the installation of this system in hospitals would provide a considerable improvement as a tool for medical image visualization.

    Pattern recognition methods applied to medical imaging: lung nodule detection in computed tomography images

    Lung cancer is one of the main public health issues in developed countries. The overall 5-year survival rate is only 10-16%, although the mortality rate among men in the United States has started to decrease by about 1.5% per year since 1991, and a similar trend for the male population has been observed in most European countries. By contrast, in the case of the female population, the survival rate is still decreasing, although a decline in the mortality of young women has been observed over the last decade. Approximately 70% of lung cancers are diagnosed at stages too advanced for the treatments to be effective. The five-year survival rate for early-stage lung cancers (stage I), which can reach 70%, is considerably higher than for cancers diagnosed at more advanced stages. Lung cancer most commonly manifests itself as non-calcified pulmonary nodules. CT has been shown to be the most sensitive imaging modality for the detection of small pulmonary nodules, particularly since the introduction of the multi-detector-row and helical CT technologies. Screening programs based on Low-Dose Computed Tomography (LDCT) may be regarded as a promising technique for detecting small, early-stage lung cancers. The efficacy of CT-based screening programs in reducing the mortality rate for lung cancer has not been fully demonstrated yet, and different and opposing opinions on this topic have been expressed by many experts. However, recent results obtained by the National Lung Screening Trial (NLST), involving 53,454 high-risk patients, show a 20% reduction in mortality when the screening program was carried out with helical CT rather than with a conventional chest X-ray. LDCT settings are currently recommended by the screening trial protocols. However, it is not trivial in this case to identify small pulmonary nodules, due to the noisier appearance of the images in low-dose CT with respect to standard-dose CT.
Moreover, thin slices are generally used in screening programs, thus originating datasets of about 300-400 slices per study. Depending on the screening trial protocol they joined, radiologists can be asked to identify even very small lung nodules, which is a very difficult and time-consuming task. Lung nodules are rather spherical objects, characterized by very low CT values and/or low contrast. Nodules may have CT values in the same range as those of blood vessels, airway walls and pleura, and may be strongly connected to them. It has been demonstrated that a large percentage of nodules (20-35%) is actually missed in screening diagnoses. To support radiologists in the identification of early-stage pathological objects, about one decade ago researchers started to develop CAD methods to be applied to CT examinations. Within this framework, two CAD sub-systems are proposed: CADI, devoted to the identification of small nodules embedded in the lung parenchyma, i.e. Internal Nodules (INs), and CADJP, devoted to the identification of nodules originating on the pleura surface, i.e. Juxta-Pleural Nodules (JPNs). As the training and validation sets may drastically influence the performance of a CAD system, the presented approaches have been trained, developed and tested on different datasets of CT scans (Lung Image Database Consortium (LIDC), ITALUNG-CT) and finally blindly validated on the ANODE09 dataset. The two CAD sub-systems are implemented in the ITK framework, an open-source C++ framework for segmentation and registration of medical images, and the rendering of the obtained results is achieved using VTK, a freely available software system for 3D computer graphics, image processing and visualization. The Support Vector Machines (SVMs) are implemented in SVMLight.
The two proposed approaches have been developed to detect solid nodules, since the number of Ground Glass Opacities (GGOs) contained in the available datasets has been considered too low. This thesis is structured as follows: in the first chapter the basic concepts of CT and lung anatomy are explained. The second chapter deals with CAD systems and their evaluation methods. In the third chapter the datasets used for this work are described. In chapter 4 the lung segmentation algorithm is explained in detail, and in chapters 5 and 6 the algorithms to detect internal and juxta-pleural candidates are discussed. In chapter 7 the reduction of false positive findings is explained. In chapter 8 the results of the training and validation sessions are shown. Finally, in the last chapter the conclusions are drawn.
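As a flavor of the lung segmentation step that this kind of pipeline begins with, a generic first pass thresholds the CT volume in Hounsfield units, since lung parenchyma is mostly air and far darker than surrounding tissue. This sketch is illustrative only and is not the thesis's actual algorithm, which would additionally remove air outside the body and refine the mask morphologically:

```python
import numpy as np

def candidate_lung_mask(ct_hu, lower=-1000.0, upper=-400.0):
    """First-pass lung-field candidate mask by Hounsfield-unit (HU)
    thresholding. Lung parenchyma sits roughly between -1000 and
    -500 HU, soft tissue near 0 HU, and bone above +300 HU, so a
    simple band-pass on intensity isolates air-like voxels. Real
    pipelines follow this with connected-component filtering (to drop
    air outside the body) and morphological closing (to re-include
    vessels and nodules inside the lung)."""
    ct_hu = np.asarray(ct_hu, dtype=float)
    return (ct_hu >= lower) & (ct_hu <= upper)

# Toy 2D "slice": soft tissue everywhere, a dark lung-like patch inside
ct = np.full((4, 4), 40.0)   # soft tissue, ~40 HU
ct[1:3, 1:3] = -800.0        # air-like region, ~-800 HU
mask = candidate_lung_mask(ct)
```

The threshold values here are typical textbook HU ranges, not the thesis's tuned parameters.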

    Deep Functional Mapping For Predicting Cancer Outcome

    The effective understanding of the biological behavior and prognosis of cancer subtypes is becoming very important in patient administration. Cancer is a diverse disorder for which significant medical progression and diagnosis of each subtype can be observed and characterized. Computer-aided diagnosis for early detection and diagnosis of many kinds of diseases has evolved in the last decade. In this research, we address challenges associated with multi-organ disease diagnosis and recommend numerous models for enhanced analysis. We concentrate on evaluating Magnetic Resonance Imaging (MRI), Computed Tomography (CT), and Positron Emission Tomography (PET) scans of the brain, lung, and breast to detect, segment, and classify types of cancer from biomedical images. Moreover, histopathological and genomic classification of cancer prognosis has been considered for multi-organ disease diagnosis and biomarker recommendation. We considered multi-modal, multi-class classification during this study. We propose implementing deep learning techniques based on Convolutional Neural Networks and Generative Adversarial Networks. In our proposed research we plan to demonstrate ways to increase the performance of disease diagnosis by focusing on a combined diagnosis of histology, image processing, and genomics. It has been observed that the combination of medical imaging and gene expression can handle cancer detection with a higher diagnostic rate than the individual modalities alone. This research puts forward a blockchain-based system that facilitates interpretations and enhancements pertaining to automated biomedical systems. In this scheme, secure sharing of biomedical images and gene expression data has been established.
To maintain secure sharing of biomedical content in a distributed system or among hospitals, a blockchain-based algorithm is considered that generates a secure sequence to identify a hash key. This adaptive feature enables the algorithm to use multiple data types and combine various biomedical images and text records. All data related to patients, including identity and pathological records, are encrypted using private-key cryptography based on a blockchain architecture to maintain data privacy and secure sharing of the biomedical content.
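The hash-key sequence idea can be illustrated with a minimal hash chain: each record's key covers the record content plus the previous key, so altering any earlier record invalidates every later key. This is a conceptual sketch using SHA-256 from the Python standard library, not the dissertation's actual protocol, and the record fields below are hypothetical:

```python
import hashlib
import json

def chain_records(records, genesis="0" * 64):
    """Minimal blockchain-style tamper-evident sequence: the hash key
    of record i depends on record i's content and on key i-1, so any
    modification to an earlier record changes all subsequent keys."""
    prev = genesis
    keys = []
    for rec in records:
        # Canonical serialization so the same record always hashes alike
        payload = json.dumps(rec, sort_keys=True) + prev
        prev = hashlib.sha256(payload.encode("utf-8")).hexdigest()
        keys.append(prev)
    return keys

# Hypothetical, de-identified record stubs (not real patient data)
records = [
    {"patient": "anon-1", "scan": "ct-001"},
    {"patient": "anon-1", "scan": "ct-002"},
]
keys = chain_records(records)
```

A real deployment would add the encryption, access control, and distributed consensus described above; the chain only provides tamper evidence.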