1,812 research outputs found

    Large cells cancer volumetry in chest computed tomography pulmonary images

    Lung cancer is the leading oncological cause of death in the world. Carcinomas represent between 90% and 95% of lung cancers; among them, non-small cell lung cancer is the most common type, and large cell carcinoma, the pathology on which this research focuses, is usually detected in computed tomography images of the thorax. These images suffer from three major problems: noise, artifacts, and low contrast. The volume of the large cell carcinoma is obtained from segmentations of the cancerous tumor generated, in a semi-automatic way, by a computational strategy that, in order to address the aforementioned problems, combines median and gradient magnitude filters with an unsupervised clustering technique to generate the large cell carcinoma morphology. The high correlation between the semi-automatic segmentations and the manual ones drawn up by a pulmonologist indicates the excellent performance of the proposed technique. The technique can be useful in the detection and monitoring of large cell carcinoma; by adopting this kind of computational strategy, medical specialists can establish the clinical or surgical actions needed to address this pulmonary pathology.
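The filtering-plus-clustering pipeline the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the filter sizes, the two-feature representation, and the small k-means loop (standing in for their unspecified unsupervised grouping technique) are all assumptions.

```python
# Sketch: median smoothing + gradient magnitude + unsupervised clustering
# on a 2-D CT slice. Parameters (size=5, sigma=1.0, k=3) are illustrative.
import numpy as np
from scipy import ndimage

def segment_slice(ct_slice, n_clusters=3, n_iter=10, seed=0):
    """Cluster a 2-D CT slice into intensity groups after filtering."""
    smoothed = ndimage.median_filter(ct_slice, size=5)            # noise reduction
    grad = ndimage.gaussian_gradient_magnitude(smoothed, sigma=1.0)  # edge strength
    feats = np.stack([smoothed.ravel(), grad.ravel()], axis=1).astype(float)
    # Tiny k-means (stand-in for the unspecified clustering technique).
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(len(feats), n_clusters, replace=False)]
    for _ in range(n_iter):
        d = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for k in range(n_clusters):
            if np.any(labels == k):
                centers[k] = feats[labels == k].mean(axis=0)
    return labels.reshape(ct_slice.shape)
```

The cluster whose member voxels match the tumor's intensity range would then be kept as the tumor mask, and stacking the per-slice masks yields the volume the study reports.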

    Computational methods for the analysis of functional 4D-CT chest images.

    Medical imaging is an important emerging technology that has been intensively used in the last few decades for disease diagnosis and monitoring as well as for the assessment of treatment effectiveness. Medical images provide a very large amount of valuable information, far too much to be fully exploited by radiologists and physicians. Therefore, the design of computer-aided diagnostic (CAD) systems, which can be used as assistive tools for the medical community, is of great importance. This dissertation deals with the development of a complete CAD system for lung cancer patients; lung cancer remains the leading cause of cancer-related death in the USA. In 2014, there were approximately 224,210 new cases of lung cancer and 159,260 related deaths. The process begins with the detection of lung cancer through the diagnosis of lung nodules (a manifestation of lung cancer). These nodules are approximately spherical regions of primarily high density tissue that are visible in computed tomography (CT) images of the lung. The treatment of these lung cancer nodules is complex: nearly 70% of lung cancer patients require radiation therapy as part of their treatment. Radiation-induced lung injury is a limiting toxicity that may decrease cure rates and increase treatment-related morbidity and mortality. Finding ways to accurately detect, and hence prevent, lung injury at an early stage would have significant positive consequences for lung cancer patients. The ultimate goal of this dissertation is to develop a clinically usable CAD system that can improve the sensitivity and specificity of early detection of radiation-induced lung injury, based on the hypothesis that irradiated lung tissues may be affected and suffer decreased functionality as a side effect of radiation therapy treatment.
This hypothesis has been validated by demonstrating that automatic segmentation of the lung regions and registration of consecutive respiratory phases can estimate elasticity, ventilation, and texture features that provide discriminatory descriptors for early detection of radiation-induced lung injury. The proposed methodologies lead to novel indexes for distinguishing normal/healthy and injured lung tissues in clinical decision-making. To achieve this goal, a CAD system for accurate detection of radiation-induced lung injury has been developed from three basic components: lung field segmentation, lung registration, and feature extraction with tissue classification. This dissertation starts with an exploration of the available medical imaging modalities to present the importance of medical imaging in today's clinical applications. Secondly, the methodologies, challenges, and limitations of recent CAD systems for lung cancer detection are covered. This is followed by an accurate segmentation methodology for the lung parenchyma, with a focus on pathological lungs, to extract the volume of interest (VOI) to be analyzed for the potential existence of lung injuries stemming from the radiation therapy. After segmentation of the VOI, a lung registration framework is introduced to perform the crucial step of co-aligning the intra-patient scans. This step eliminates the effects of orientation differences, motion, breathing, heartbeats, and differences in scanning parameters, so that the functionality features of the lung fields can be extracted accurately. The developed registration framework also helps in the evaluation and gated control of the radiotherapy through motion estimation analysis before and after the therapy dose.
Finally, radiation-induced lung injury detection is introduced, which combines the previous two medical image processing and analysis steps with the feature estimation and classification step. This framework estimates and combines both texture and functional features. The texture features are modeled using a novel 7th-order Markov Gibbs random field (MGRF) model that can accurately model the texture of healthy and injured lung tissues by simultaneously accounting for both vertical and horizontal relative dependencies between voxel-wise signals. The functionality feature calculations are based on the deformation fields, obtained from the 4D-CT lung registration, that map lung voxels between successive CT scans in the respiratory cycle. These functionality features describe the ventilation (the air flow rate) of the lung tissues using the Jacobian of the deformation field, and the tissues' elasticity using the strain components calculated from the gradient of the deformation field. Finally, these features are combined in the classification model to detect the injured parts of the lung at an early stage, enabling earlier intervention.
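The Jacobian-based ventilation measure mentioned above can be sketched concretely. This is a minimal illustration under stated assumptions: the displacement field layout `(3, Z, Y, X)` in voxel units is a convention chosen here, not the dissertation's; det(J) − 1 is then a standard surrogate for local volume change.

```python
# Sketch: voxel-wise Jacobian determinant of a deformation field,
# a standard ventilation surrogate. Field layout is an assumption.
import numpy as np

def jacobian_determinant(disp):
    """disp: displacement field of shape (3, Z, Y, X) in voxel units.
    Returns det(J) per voxel, where J = I + grad(u); det(J) - 1
    approximates the local volume change between respiratory phases."""
    # grads[i][j] = d u_i / d x_j, each of shape (Z, Y, X)
    grads = np.array([np.gradient(disp[i]) for i in range(3)])
    J = grads + np.eye(3)[:, :, None, None, None]   # add identity
    # move the 3x3 axes last so np.linalg.det runs voxel-wise
    J = np.moveaxis(J, (0, 1), (-2, -1))
    return np.linalg.det(J)
```

For a zero displacement field det(J) is 1 everywhere (no volume change); values above 1 indicate local expansion (inhalation), below 1 local compression.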

    The impact of pre- and post-image processing techniques on deep learning frameworks: A comprehensive review for digital pathology image analysis

    Recently, deep learning frameworks have rapidly become the main methodology for analyzing medical images. Due to their powerful learning ability and advantages in dealing with complex patterns, deep learning algorithms are ideal for image analysis challenges, particularly in the field of digital pathology. The variety of image analysis tasks in the context of deep learning includes classification (e.g., healthy vs. cancerous tissue), detection (e.g., lymphocyte and mitosis counting), and segmentation (e.g., nuclei and gland segmentation). The majority of recent machine learning methods in digital pathology have a pre- and/or post-processing stage that is integrated with a deep neural network. These stages, based on traditional image processing methods, are employed to make the subsequent classification, detection, or segmentation problem easier to solve. Several studies have shown how the integration of pre- and post-processing methods within a deep learning pipeline can further increase the model's performance when compared to the network by itself. The aim of this review is to provide an overview of the types of methods that are used within deep learning frameworks either to optimally prepare the input (pre-processing) or to improve the results of the network output (post-processing), focusing on digital pathology image analysis. Many of the techniques presented here, especially the post-processing methods, are not limited to digital pathology but can be extended to almost any image analysis field.
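Two representative stages of the kind the review surveys can be sketched as follows. These are generic examples, not methods from the review itself: intensity standardization as input preparation, and morphological cleanup of a network's probability map as output refinement; the threshold and minimum component size are illustrative assumptions.

```python
# Sketch: common pre-processing (input normalization) and
# post-processing (morphological cleanup) around a segmentation network.
import numpy as np
from scipy import ndimage

def preprocess(img):
    """Per-image intensity standardization, a common input preparation."""
    img = img.astype(float)
    return (img - img.mean()) / (img.std() + 1e-8)

def postprocess(prob_map, threshold=0.5, min_size=20):
    """Threshold a predicted probability map, then clean the mask:
    fill holes and drop connected components smaller than min_size."""
    mask = prob_map > threshold
    mask = ndimage.binary_fill_holes(mask)
    labeled, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labeled, range(1, n + 1))
    keep = np.isin(labeled, 1 + np.flatnonzero(sizes >= min_size))
    return keep
```

The network itself would sit between the two calls; as the review notes, such traditional stages often improve the end-to-end result over the network alone.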

    A Modular Approach to Lung Nodule Detection from Computed Tomography Images Using Artificial Neural Networks and Content Based Image Representation

    Lung cancer is one of the most lethal cancer types. Research in computer-aided detection (CAD) and diagnosis for lung cancer aims at providing effective tools to assist physicians in cancer diagnosis and treatment to save lives. In this dissertation, we focus on developing a CAD framework for automated lung cancer nodule detection from 3D lung computed tomography (CT) images. Nodule detection is a challenging task in which no machine intelligence has surpassed human capability to date. However, human recognition power is limited by vision capacity and may suffer from work overload and fatigue, whereas automated nodule detection systems can complement experts' efforts to achieve better detection performance. The proposed CAD framework encompasses several desirable properties: it mimics physicians by means of geometric multi-perspective analysis, it is computationally efficient, and, most importantly, it achieves high detection accuracy. As the central part of the framework, we develop a novel hierarchical modular decision engine implemented with artificial neural networks. One advantage of this decision engine is that it supports the combination of spatial-level and feature-level information analysis in an efficient way. Our methodology overcomes some of the limitations of current lung nodule detection techniques by combining geometric multi-perspective analysis with global and local feature analysis. The proposed modular decision engine design is flexible to modifications in the decision modules; the engine structure can adopt the modifications without having to re-design the entire system. The engine can easily accommodate multi-learning schemes and parallel implementation, so that each information type can be processed (in parallel) by the learning technique most adequate for it.
We have also developed a novel shape representation technique that is invariant under rigid-body transformation, and we derived new features based on this shape representation for nodule detection. We implemented a prototype nodule detection system as a demonstration of the proposed framework. Experiments have been conducted to assess the performance of the proposed methodologies using real-world lung CT data. Several performance measures for detection accuracy are used in the assessment. The results show that the decision engine is able to classify patterns efficiently with very good classification performance.
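The idea of a rigid-body-invariant shape representation can be illustrated with a simple example. The dissertation's actual representation is not specified in this abstract; the descriptor below, a normalized histogram of voxel distances from the shape centroid, is a stand-in chosen only because it shares the stated invariance: it is unchanged by translation and rotation.

```python
# Sketch: a descriptor invariant under rigid-body transformation,
# illustrating the property the dissertation's representation has.
# The specific descriptor (centroid-distance histogram) is an assumption.
import numpy as np

def centroid_distance_histogram(mask, bins=16):
    """mask: boolean 3-D array of the segmented shape.
    Returns a normalized histogram of voxel distances from the centroid;
    translating or rotating the shape leaves the histogram unchanged."""
    coords = np.argwhere(mask).astype(float)
    centroid = coords.mean(axis=0)
    dists = np.linalg.norm(coords - centroid, axis=1)
    hist, _ = np.histogram(dists, bins=bins, range=(0, dists.max() + 1e-8))
    return hist / hist.sum()
```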

    Pulmonary adenocarcinoma characterization using computed tomography images

    Lung cancer is one of the pathologies that most severely affects human health. In particular, the pathology called pulmonary adenocarcinoma represents 25% of all lung cancers. In this research, we propose a semiautomatic technique for the characterization of a tumor (adenocarcinoma type) present in a three-dimensional pulmonary computed tomography dataset. Following the basic scheme of digital image processing, first, a bank of smoothing filters and edge detectors is applied, providing adequate preprocessing of the dataset images. Then, clustering methods are used to obtain the tumor morphology. The relative percentage error and the accuracy rate were the metrics considered to determine the performance of the proposed technique. The values obtained from these metrics reflect an excellent correlation between the tumor morphology generated manually by a pulmonologist and that obtained by the proposed technique. In the clinical and surgical contexts, the characterization of the detected lung tumor is made in terms of the volume occupied by the tumor; this allows the monitoring of the disease as well as the activation of the respective protocols for its management.
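The volume-based characterization and the error metric named above are simple to state in code. This is an illustrative sketch: the voxel spacing values are assumptions, and the relative percentage error formula is the standard one, which the abstract does not spell out.

```python
# Sketch: tumor volume from a binary segmentation mask, and the
# relative percentage error against a manual reference segmentation.
import numpy as np

def tumor_volume_mm3(mask, spacing=(1.0, 0.7, 0.7)):
    """mask: boolean 3-D array; spacing: (z, y, x) voxel size in mm.
    Volume is simply voxel count times the volume of one voxel."""
    voxel_volume = float(np.prod(spacing))
    return mask.sum() * voxel_volume

def relative_percentage_error(estimated, reference):
    """Standard relative percentage error between the semiautomatic
    volume and the manual (reference) volume."""
    return 100.0 * abs(estimated - reference) / reference
```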

    Computer-aided detection of lung nodules: A review

    We present an in-depth review and analysis of salient methods for computer-aided detection of lung nodules. We evaluate the current methods for detecting lung nodules using literature searches with selection criteria based on validation dataset types, nodule sizes, numbers of cases, types of nodules, extracted features in traditional feature-based classifiers, sensitivity, and false positives (FP) per scan. Our review shows that current detection systems are often optimized for particular datasets and can detect only one or two types of nodules. We conclude that, in addition to achieving high sensitivity and a reduced FP rate per scan, strategies for detecting lung nodules must detect a variety of nodules with high precision to improve the performance of radiologists. To the best of our knowledge, ours is the first review of the effectiveness of feature extraction using traditional feature-based classifiers. Moreover, we discuss deep-learning methods in detail and conclude that features must be appropriately selected to improve the overall accuracy of the system. We present an analysis of current schemes and highlight constraints and future research areas.
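The two headline metrics the review compares across systems, sensitivity and false positives per scan, are computed as follows; the input counts here are illustrative, not figures from the review.

```python
# Sketch: the two standard CAD evaluation metrics the review compares.
def detection_metrics(true_positives, false_positives, total_nodules, n_scans):
    """Sensitivity = detected nodules / all annotated nodules;
    FP/scan = false detections averaged over the scan set."""
    sensitivity = true_positives / total_nodules
    fp_per_scan = false_positives / n_scans
    return sensitivity, fp_per_scan
```

Plotting sensitivity against FP/scan over a range of operating points gives the FROC-style trade-off curve commonly used to compare such systems.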

    Lung nodule modeling and detection for computerized image analysis of low dose CT imaging of the chest.

    From a computerized image analysis perspective, early diagnosis of lung cancer involves detection of suspicious nodules and their classification into different pathologies. The detection stage involves a detection approach, usually by template matching, and an authentication step to reduce false positives, usually conducted by a classifier of one form or another; statistical, fuzzy-logic, and support vector machine approaches have been tried. The classification stage matches, according to a particular approach, the characteristics (e.g., shape, texture, and spatial distribution) of the detected nodules to the common characteristics (again, shape, texture, and spatial distribution) of nodules with known pathologies (confirmed by biopsies). This thesis focuses on the first step, i.e., nodule detection. Specifically, the thesis addresses three issues: a) understanding the CT data of typical low dose CT (LDCT) scanning of the chest, and devising an image processing approach to reduce the inherent artifacts in the scans; b) devising an image segmentation approach to isolate the lung tissues from the rest of the chest and thoracic regions in the CT scans; and c) devising a nodule modeling methodology to enhance the detection rate and benefit the ultimate step in computerized image analysis of LDCT of the lungs, namely associating a pathology with the detected nodule. The methodology for reducing the noise artifacts is based on noise analysis and examination of typical LDCT scans that may be gathered in a repetitive fashion, since a reduction in resolution is inevitable to avoid excessive radiation. Two optimal filtering methods were tested on samples of the ELCAP screening data: the Wiener and the anisotropic diffusion filters. Preference is given to the anisotropic diffusion filter, which can be implemented on 7x7 blocks/windows of the CT data.
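The preferred filter can be sketched in its classic Perona-Malik form. This is a generic 2-D illustration under stated assumptions: the iteration count, conduction parameter kappa, and step size are not the thesis settings, and the thesis's 7x7 block-wise implementation is not reproduced here.

```python
# Sketch: Perona-Malik anisotropic diffusion, the noise-reduction filter
# the thesis prefers over Wiener filtering. All parameters illustrative.
import numpy as np

def anisotropic_diffusion(img, n_iter=10, kappa=30.0, step=0.2):
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # nearest-neighbour differences (north, south, east, west)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # edge-stopping conduction coefficients: diffusion is suppressed
        # across strong gradients, so edges are preserved while noise smooths
        cn = np.exp(-(dn / kappa) ** 2)
        cs = np.exp(-(ds / kappa) ** 2)
        ce = np.exp(-(de / kappa) ** 2)
        cw = np.exp(-(dw / kappa) ** 2)
        u += step * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```

Unlike a linear smoother, the conduction terms shrink near strong edges, which is why the filter suits low-dose CT: it suppresses quantum noise without blurring the lung boundaries needed for later segmentation.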
The methodology for lung segmentation is based on the inherent characteristics of the LDCT scans, which exhibit a distinct bi-modal gray-scale histogram. A linear model is used to describe the histogram (the joint probability density function of the lung and non-lung tissues) by a linear combination of weighted kernels. Gaussian kernels were chosen, and the classic Expectation-Maximization (EM) algorithm was employed to estimate the marginal probability densities of the lung and non-lung tissues and to select an optimal segmentation threshold. The segmentation is further enhanced using standard shape analysis based on mathematical morphology, which improves the continuity of the outer and inner borders of the lung tissues. This approach (a preliminary version of which appeared in [14]) is found to be adequate for lung segmentation as compared to more sophisticated approaches developed at the CVIP Lab (e.g., [15][16]) and elsewhere. The methodology developed for nodule modeling is based on understanding the physical characteristics of the nodules in LDCT scans, as identified by human experts. An empirical model is introduced for the probability density of the image intensity (or Hounsfield units) versus the radial distance measured from the centroid (center of mass) of typical nodules. This probability density showed that the nodule spatial support is within a circle/square of size 10 pixels, i.e., limited to 5 mm in length, which is within the range that radiologists specify to be of concern. This probability density is used to fill in the intensity (or Hounsfield units) of parametric nodule models. For these models (e.g., circles or semi-circles), given a certain radius, we calculate the intensity (or Hounsfield units) using an exponential expression for the radial distance, with parameters specified from the histogram of an ensemble of typical nodules.
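The two-kernel EM fit and threshold selection described above can be sketched directly. This is a minimal illustration, not the thesis code: the min/max initialization, iteration count, and density-crossing rule for the threshold are assumptions standing in for unstated details.

```python
# Sketch: classic EM fit of a two-Gaussian mixture to the bi-modal
# intensity histogram, then a threshold where the weighted densities
# cross between the two means. Initialization details are assumptions.
import numpy as np

def em_two_gaussians(x, n_iter=50):
    """Fit w1*N(m1,s1) + w2*N(m2,s2) to 1-D samples x via EM."""
    x = np.asarray(x, float)
    m = np.array([x.min(), x.max()])          # crude but serviceable init
    s = np.array([x.std(), x.std()]) + 1e-6
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each kernel for each sample
        pdf = w * np.exp(-0.5 * ((x[:, None] - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, standard deviations
        nk = r.sum(axis=0)
        w = nk / len(x)
        m = (r * x[:, None]).sum(axis=0) / nk
        s = np.sqrt((r * (x[:, None] - m) ** 2).sum(axis=0) / nk) + 1e-6
    return w, m, s

def em_threshold(x):
    """Segmentation threshold: where the two weighted densities cross
    between the estimated means (lung vs. non-lung intensities)."""
    w, m, s = em_two_gaussians(x)
    lo, hi = np.sort(m)
    grid = np.linspace(lo, hi, 1024)
    dens = w * np.exp(-0.5 * ((grid[:, None] - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
    i_lo = int(np.argmin(m))
    return float(grid[np.argmax(dens[:, i_lo] < dens[:, 1 - i_lo])])
```

Voxels below the returned threshold would be labeled lung (the darker mode), with the morphological cleanup described above applied afterward.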
This work is similar in spirit to the earlier work of Farag et al., 2004 and 2005 [18][19], except that the empirical density of the radial distance and the histogram of typical nodules provide a data-driven guide for estimating the intensity (or Hounsfield units) of the nodule models. We examined the sensitivity and specificity of parametric nodules in a template-matching framework for nodule detection. We show that false positives are an inevitable problem with typical machine learning methods of automatic lung nodule detection, which invites further effort and perhaps fresh thinking on automatic nodule detection. A new approach for nodule modeling is introduced in Chapter 5 of this thesis, which holds high promise for both the detection and the classification of nodules. Using the ELCAP study, we created an ensemble of four types of nodules and generated a nodule model for each type based on optimal data reduction methods. The resulting nodule model, for each type, has led to drastic improvements in the sensitivity and specificity of nodule detection. This approach may be used for classification as well. In conclusion, the methodologies in this thesis are based on understanding the LDCT scans and what is to be expected in terms of image quality. Noise reduction and image segmentation are standard. The thesis illustrates that proper nodule models are possible and that a computerized approach for image analysis to detect and classify lung nodules is indeed feasible. Extensions to the results in this thesis are immediate, and the CVIP Lab has devised plans to pursue subsequent steps using clinical data.
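The template-matching detection framework examined above can be sketched as follows. This is an illustrative 2-D version under stated assumptions: the circular template uses an exponential radial fall-off in the spirit of the empirical model, but its radius and decay constant are invented here, and the dense correlation loop is a naive stand-in for an optimized implementation.

```python
# Sketch: nodule detection by normalized cross-correlation with a
# parametric circular template. Radius and decay are illustrative.
import numpy as np

def circular_template(radius, decay=2.0):
    """Parametric nodule model: intensity decays exponentially with
    radial distance from the centre, zero outside the radius."""
    r = np.hypot(*np.mgrid[-radius:radius + 1, -radius:radius + 1])
    t = np.exp(-decay * r / radius)
    t[r > radius] = 0.0
    return t

def match_template(image, template):
    """Dense normalized cross-correlation map, same size as image.
    High values mark candidate nodule centres."""
    th, tw = template.shape
    pad = np.pad(image.astype(float), ((th // 2,), (tw // 2,)), mode="edge")
    t = template - template.mean()
    tn = np.sqrt((t ** 2).sum())
    out = np.zeros(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            w = pad[i:i + th, j:j + tw]
            w = w - w.mean()
            wn = np.sqrt((w ** 2).sum())
            out[i, j] = (w * t).sum() / (wn * tn + 1e-12)
    return out
```

Thresholding the correlation map yields the candidate detections; as the thesis observes, many of these survive as false positives, which is what motivates the authentication classifier and the improved nodule models of Chapter 5.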