
    Unsupervised CT lung image segmentation of a mycobacterium tuberculosis infection model

    Tuberculosis (TB) is an infectious disease caused by Mycobacterium tuberculosis that produces pulmonary damage. Radiological imaging is the preferred technique for assessing the longitudinal course of TB. Computer-assisted identification of biomarkers eases the work of the radiologist by providing a quantitative assessment of disease. Lung segmentation is the step that precedes biomarker extraction. In this study, we present an automatic procedure that enables robust segmentation of damaged lungs that have lesions attached to the parenchyma and are affected by respiratory movement artifacts in a Mycobacterium tuberculosis infection model. Its main steps are the extraction of the healthy lung tissue and the airway tree, followed by elimination of the fuzzy boundaries. Its performance was compared against segmentations obtained with (1) a semi-automatic tool and (2) an approach based on fuzzy connectedness. A consensus segmentation resulting from majority voting over three experts' annotations was taken as the ground truth. The proposed approach improves the overlap indicators (Dice similarity coefficient, 94% ± 4%) and the surface similarity coefficients (Hausdorff distance, 8.64 mm ± 7.36 mm) in the majority of the most difficult-to-segment slices. The results indicate that the refined lung segmentations could facilitate the extraction of meaningful quantitative data on disease burden.
The research leading to these results received funding from the Innovative Medicines Initiative (www.imi.europa.eu) Joint Undertaking under grant agreement no. 115337, whose resources comprise funding from the European Union's Seventh Framework Programme (FP7/2007–2013) and EFPIA companies' in-kind contributions. This work was partially funded by projects TEC2013-48552-C2-1-R, RTC-2015-3772-1, TEC2015-73064-EXP and TEC2016-78052-R from the Spanish Ministerio de Economía, Industria y Competitividad, by the TOPUS S2013/MIT-3024 project from the regional government of Madrid, and by the Department of Health, UK.
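The overlap and surface metrics reported above (Dice similarity coefficient and Hausdorff distance) can both be computed from a pair of binary masks. A minimal NumPy sketch; the mask names and toy shapes are illustrative and not taken from the paper:

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance (in voxel units) between the
    foreground point sets of two binary masks."""
    pa = np.argwhere(a).astype(float)
    pb = np.argwhere(b).astype(float)
    # Pairwise distances between all foreground voxels of a and b.
    d = np.sqrt(((pa[:, None, :] - pb[None, :, :]) ** 2).sum(-1))
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Toy 2D masks: two overlapping 6x6 squares shifted by one voxel.
pred = np.zeros((10, 10), dtype=bool)
truth = np.zeros((10, 10), dtype=bool)
pred[2:8, 2:8] = True
truth[3:9, 3:9] = True
```

The brute-force distance matrix is fine for small masks; for full CT volumes, distance transforms are the usual way to make the Hausdorff computation tractable.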

    Computational methods for the analysis of functional 4D-CT chest images.

    Medical imaging is an important technology that has been used intensively in recent decades for disease diagnosis and monitoring, as well as for assessing treatment effectiveness. Medical images provide a very large amount of valuable information, too much to be fully exploited by radiologists and physicians alone. The design of computer-aided diagnostic (CAD) systems, which can serve as assistive tools for the medical community, is therefore of great importance. This dissertation deals with the development of a complete CAD system for patients with lung cancer, which remains the leading cause of cancer-related death in the USA: in 2014, there were approximately 224,210 new cases of lung cancer and 159,260 related deaths. The process begins with the detection of lung cancer through the diagnosis of lung nodules (a manifestation of lung cancer). These nodules are approximately spherical regions of primarily high-density tissue that are visible in computed tomography (CT) images of the lung. The treatment of these nodules is complex; nearly 70% of lung cancer patients require radiation therapy as part of their treatment. Radiation-induced lung injury is a limiting toxicity that may decrease cure rates and increase treatment-related morbidity and mortality. Finding ways to accurately detect lung injury at an early stage, and hence prevent it, would have significant positive consequences for lung cancer patients. The ultimate goal of this dissertation is to develop a clinically usable CAD system that can improve the sensitivity and specificity of early detection of radiation-induced lung injury, based on the hypothesis that irradiated lung tissue may be affected and suffer decreased functionality as a side effect of radiation therapy.
These hypotheses were validated by demonstrating that automatic segmentation of the lung regions and registration of consecutive respiratory phases, used to estimate elasticity, ventilation, and texture features, provide discriminatory descriptors for the early detection of radiation-induced lung injury. The proposed methodologies lead to novel indexes for distinguishing normal/healthy and injured lung tissue in clinical decision-making. To achieve this goal, a CAD system for accurate detection of radiation-induced lung injury was developed around three basic components: lung field segmentation, lung registration, and feature extraction with tissue classification. The dissertation starts with an exploration of the available medical imaging modalities to present the importance of medical imaging in today's clinical applications. Secondly, the methodologies, challenges, and limitations of recent CAD systems for lung cancer detection are covered. This is followed by an accurate segmentation methodology for the lung parenchyma, with a focus on pathological lungs, to extract the volume of interest (VOI) to be analyzed for the potential existence of lung injuries stemming from radiation therapy. After segmentation of the VOI, a lung registration framework is introduced to perform the crucial step of co-aligning the intra-patient scans. This step eliminates the effects of orientation differences, motion, breathing, heartbeats, and differences in scanning parameters, so that the functional features of the lung fields can be extracted accurately. The developed registration framework also helps in the evaluation and gated control of radiotherapy through motion estimation analysis before and after the therapy dose.
Finally, the detection of radiation-induced lung injury is introduced, which combines the previous two medical image processing and analysis steps with a feature estimation and classification step. This framework estimates and combines both texture and functional features. The texture features are modeled using a novel 7th-order Markov-Gibbs random field (MGRF) model that can accurately model the texture of healthy and injured lung tissue by simultaneously accounting for both vertical and horizontal relative dependencies between voxel-wise signals. The functional features are computed from the deformation fields, obtained from the 4D-CT lung registration, that map lung voxels between successive CT scans in the respiratory cycle. These features describe the ventilation (air-flow rate) of the lung tissue using the Jacobian of the deformation field, and the tissue elasticity using the strain components calculated from the gradient of the deformation field. Finally, these features are combined in a classification model that detects the injured parts of the lung at an early stage and enables earlier intervention.
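The ventilation descriptor above rests on a standard construction: for a deformation phi(x) = x + u(x), the Jacobian determinant det(I + grad u) measures local volume change between respiratory phases. A hedged NumPy sketch; the field layout and the toy displacement are assumptions for illustration, not the dissertation's implementation:

```python
import numpy as np

def jacobian_determinant(disp):
    """Voxel-wise Jacobian determinant of a dense displacement field
    `disp` of shape (3, Z, Y, X). The deformation is phi(x) = x + u(x),
    so J = det(I + grad u); J > 1 marks local expansion (inhalation),
    J < 1 local compression."""
    grads = [np.gradient(disp[i]) for i in range(3)]  # du_i / dx_j
    J = np.empty(disp.shape[1:] + (3, 3))
    for i in range(3):
        for j in range(3):
            J[..., i, j] = grads[i][j] + (1.0 if i == j else 0.0)
    return np.linalg.det(J)

# Toy field: uniform 1% expansion along each axis -> det = 1.01**3.
z, y, x = np.meshgrid(np.arange(8), np.arange(8), np.arange(8),
                      indexing="ij")
disp = 0.01 * np.stack([z, y, x]).astype(float)
det = jacobian_determinant(disp)
```

The strain components mentioned in the abstract come from the same gradient: the symmetric part of grad u, (grad u + grad uᵀ)/2, which this sketch already computes term by term.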

    A deep learning approach to bone segmentation in CT scans

    This thesis proposes a deep learning approach to bone segmentation in abdominal CT scans. Segmentation is a common initial step in medical image analysis, often fundamental for computer-aided detection and diagnosis systems. The extraction of bones in CT scans is a challenging task which, if done manually by experts, is time consuming and which still lacks a broadly accepted automatic solution. The method presented is based on a convolutional neural network, inspired by the U-Net and trained end-to-end, that performs a semantic segmentation of the data. The training dataset is made up of 21 abdominal CT scans, each one containing between 403 and 994 2D transverse images. Those images are in full resolution, 512x512 voxels, and each voxel is classified by the network into one of the following classes: background, femoral bones, hips, sacrum, sternum, spine and ribs. The output is therefore a bone mask in which the bones are recognized and divided into six different classes. On the testing dataset, labeled by experts, the best model achieves an average Dice coefficient over all bone classes of 0.93. This work demonstrates, to the best of my knowledge for the first time, the feasibility of automatic bone segmentation and classification in CT scans using a convolutional neural network.
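The reported figure averages the Dice coefficient over the bone classes of a multi-class label map. A small sketch of how such a per-class average might be computed; the class indices and the toy label maps are hypothetical, not taken from the thesis:

```python
import numpy as np

# Hypothetical class indices: 0 = background, then six bone classes.
BONE_CLASSES = {1: "femoral bones", 2: "hips", 3: "sacrum",
                4: "sternum", 5: "spine", 6: "ribs"}

def mean_bone_dice(pred, truth):
    """Mean Dice over the bone classes of two integer label maps,
    ignoring the background class (label 0) and any class absent
    from both maps."""
    scores = []
    for c in BONE_CLASSES:
        p, t = pred == c, truth == c
        denom = p.sum() + t.sum()
        if denom:
            scores.append(2.0 * np.logical_and(p, t).sum() / denom)
    return float(np.mean(scores))

# Toy 2D label maps with two bone classes present.
pred = np.array([[1, 1, 2],
                 [0, 2, 2]])
truth = np.array([[1, 0, 2],
                  [0, 2, 2]])
```

Skipping classes absent from both maps avoids dividing by zero; whether absent classes should instead count as a perfect score is an evaluation-protocol choice the thesis would have to specify.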

    Automatic 3D extraction of pleural plaques and diffuse pleural thickening from lung MDCT images

    Pleural plaques (PPs) and diffuse pleural thickening (DPT) are very common asbestos-related pleural diseases (ARPD). They are currently identified non-invasively using medical imaging techniques. A fully automatic algorithm for 3D detection of calcified pleura in the diaphragmatic area and thickened pleura on the costal surfaces from multi-detector computed tomography (MDCT) images has been developed and tested. The algorithm for detecting diaphragmatic pleura includes estimating the top surface of the diaphragm in 3D and identifying those voxels at a certain vertical distance from the estimated diaphragm, and with intensities close to that of bone, as calcified pleura. The algorithm for detecting thickened pleura on the costal surfaces includes: estimation of the pleural costal surface in 3D, estimation of the centrelines of the ribs and costal cartilages and the surfaces they lie on, calculation of the mean distance between the two surfaces, and identification of any space between the two surfaces whose distance exceeds the mean distance as thickened pleura. The accuracy and performance of the proposed algorithm were tested on 20 MDCT datasets from patients diagnosed with existing PPs and/or DPT, and the results were compared against the ground truth provided by an experienced radiologist. Several metrics were employed, and the evaluations indicate high performance for both calcified pleura detection in the diaphragmatic area and thickened pleura detection on the costal surfaces. This work has made significant contributions to both medical image analysis and medicine. For the first time in medical image analysis, the approach uses other stable organs, such as the ribs and costal cartilage, besides the lungs themselves, for referencing and landmarking in 3D. It also estimates the fat thickness between the rib surface and the pleura (which is usually very thin) and excludes it from the detected areas when identifying the thickened pleura.
It also distinguishes calcified pleura attached to the rib(s), separates them in 3D, and detects calcified pleura on the diaphragmatic surfaces of the lungs. The key contribution to medicine is the effective detection of pleural thickening of any size and the recognition of any changes, however small. This could have a significant impact on managing patient risk.
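The costal-surface criterion described above, flagging any inter-surface gap that exceeds the mean distance as thickened pleura, can be sketched on depth maps. The depth-map representation, the array names, and the optional fat correction are simplifying assumptions, not the paper's exact implementation:

```python
import numpy as np

def thickened_regions(pleura_depth, rib_depth, fat_thickness=None):
    """Flag sample points on the costal wall where the gap between the
    rib/cartilage surface and the pleural surface exceeds the mean gap.
    Inputs are 2D depth maps in mm over the costal wall; all names
    here are illustrative."""
    gap = rib_depth - pleura_depth
    if fat_thickness is not None:
        # The paper excludes the thin fat layer between rib and pleura.
        gap = gap - fat_thickness
    return gap > gap.mean()

# Toy maps: one column has a conspicuously large gap.
pleura = np.zeros((2, 2))
ribs = np.array([[1.0, 1.0],
                 [1.0, 5.0]])
mask = thickened_regions(pleura, ribs)
```

In practice the two surfaces come from the 3D estimations the abstract describes, and the mean would be computed per patient over the whole costal wall.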

    Automated Distinct Bone Segmentation from Computed Tomography Images using Deep Learning

    Large-scale CT scans are frequently performed for forensic and diagnostic purposes, to plan and direct surgical procedures, and to track the development of bone-related diseases. This often involves radiologists who have to annotate bones manually or semi-automatically, which is a time-consuming task. Their annotation workload can be reduced by automated segmentation and detection of individual bones. This automation of distinct bone segmentation not only has the potential to accelerate current workflows but also opens up new possibilities for processing and presenting medical data for planning, navigation, and education. In this thesis, we explored the use of deep learning for automating the segmentation of all individual bones within an upper-body CT scan. To do so, we had to find a network architecture that provides a good trade-off between the problem's high computational demands and the accuracy of the results. After finding a baseline method and enlarging the dataset, we set out to eliminate the most prevalent types of error. To this end, we introduced a novel method called binary-prediction-enhanced multi-class (BEM) inference, which separates the task into two: distinguishing bone from non-bone is conducted separately from identifying the individual bones. Both predictions are then merged, which leads to superior results. Another type of error is tackled by our developed architecture, the Sneaky-Net, which receives additional inputs with larger fields of view but at a smaller resolution. We can thus sneak more extensive areas of the input into the network while keeping the growth in additional pixels in check. Overall, we present a deep-learning-based method, trained end-to-end, that reliably segments most of the over one hundred distinct bones present in upper-body CT scans quickly enough to be used in interactive software.
Our algorithm has been included in our group's virtual-reality medical image visualisation software, SpectoVR, with the plan to be used as one of the puzzle pieces in surgical planning and navigation, as well as in the education of future doctors.
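The BEM merging step can be illustrated with a simple gating rule: the binary bone/non-bone output decides where bone is, and the multi-class channels decide which bone it is. This is one plausible reading of the merge, sketched in NumPy, not the thesis' exact rule:

```python
import numpy as np

def bem_merge(multiclass_probs, bone_probs, threshold=0.5):
    """Binary-prediction-enhanced multi-class (BEM) inference, sketched
    as a gating rule. Inside the binary bone mask, the background
    channel is excluded from the argmax, so a confident bone/non-bone
    decision cannot be overridden by the multi-class head.
    multiclass_probs: (C, ...) softmax scores, channel 0 = background.
    bone_probs: (...) bone-vs-background scores."""
    bone_mask = bone_probs > threshold
    # Best *bone* class per voxel (channels 1..C-1).
    bone_labels = multiclass_probs[1:].argmax(axis=0) + 1
    return np.where(bone_mask, bone_labels, 0)

# Two voxels: the first is bone (best bone class 1), the second is not.
probs = np.array([[0.6, 0.2],   # background
                  [0.3, 0.3],   # bone class 1
                  [0.1, 0.5]])  # bone class 2
bone = np.array([0.7, 0.4])
labels = bem_merge(probs, bone)
```

Note how the first voxel receives a bone label even though its multi-class head favoured background, which is exactly the error mode this kind of merge is meant to fix.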

    Computer-Aided Assessment of Tuberculosis with Radiological Imaging: From rule-based methods to Deep Learning

    International Mention in the doctoral degree. Tuberculosis (TB) is an infectious disease caused by Mycobacterium tuberculosis (Mtb.) that produces pulmonary damage due to its airborne nature. This fact facilitates the fast spread of the disease, which, according to the World Health Organization (WHO), caused 1.2 million deaths and 9.9 million new cases in 2021. Traditionally, TB has been considered a binary disease (latent/active) due to the limited specificity of the traditional diagnostic tests. Such a simple model causes difficulties in the longitudinal assessment of pulmonary affectation needed for the development of novel drugs and for controlling the spread of the disease. Fortunately, X-ray computed tomography (CT) images enable capturing specific manifestations of TB that are undetectable using regular diagnostic tests, which suffer from limited specificity. In conventional workflows, expert radiologists inspect the CT images. However, this procedure is unfeasible for processing the thousands of volume images belonging to the different TB animal models and humans required for a suitable (pre-)clinical trial. To achieve suitable results, automation of the different image analysis processes is a must for quantifying TB. It is also advisable to measure the uncertainty associated with this process and to model causal relationships between the specific mechanisms that characterize each animal model and its level of damage. Thus, in this thesis, we introduce a set of novel methods based on state-of-the-art Artificial Intelligence (AI) and Computer Vision (CV). Initially, we present an algorithm for Pathological Lung Segmentation (PLS), traditionally considered a necessary step before biomarker extraction, employing an unsupervised rule-based model. This procedure allows robust segmentation in a Mtb.
infection model (Dice similarity coefficient, DSC, 94% ± 4%; Hausdorff distance, HD, 8.64 mm ± 7.36 mm) of damaged lungs with lesions attached to the parenchyma and affected by respiratory movement artefacts. Next, a Gaussian Mixture Model governed by an Expectation-Maximization (EM) algorithm is employed to automatically quantify the burden of Mtb. using biomarkers extracted from the segmented CT images. This approach achieves a strong correlation (R² ≈ 0.8) between our automatic method and manual extraction. Chapter 3 then introduces a model to automate the identification of TB lesions and the characterization of disease progression. To this aim, the method employs the Statistical Region Merging algorithm to detect lesions, which are subsequently characterized by texture features that feed a Random Forest (RF) estimator. The proposed procedure enables the selection of a simple but powerful model able to classify abnormal tissue. The latest works base their methodology on Deep Learning (DL). Chapter 4 extends the classification of TB lesions. Namely, we introduce a computational model to infer the TB manifestations present in each lung lobe of CT scans, employing the associated radiologist reports as ground truth instead of the classical manually delineated segmentation masks. The model adapts the three-dimensional architecture V-Net to a multitask classification context in which the loss function is weighted by homoscedastic uncertainty. Besides, the method employs Self-Normalizing Neural Networks (SNNs) for regularization. Our results are promising, with a Root Mean Square Error of 1.14 in the number of nodules and F1-scores above 0.85 for the most prevalent TB lesions (i.e., conglomerations, cavitations, consolidations, tree-in-bud patterns) when considering the whole lung. In Chapter 5, we present a DL model capable of extracting disentangled information from images of different animal models, as well as information on the mechanisms that generate the CT volumes.
The method provides the segmentation mask of axial slices from three animal models of different species employing a single trained architecture. It also infers the level of TB damage and generates counterfactual images. With this methodology, we thus offer an alternative that promotes generalization and explainable AI models. To sum up, the thesis presents a collection of valuable tools to automate the quantification of pathological lungs and, moreover, extends the methodology to provide more explainable results, which are vital for drug development purposes. Chapter 6 elaborates on these conclusions.
    Doctoral Programme in Multimedia and Communications, Universidad Carlos III de Madrid and Universidad Rey Juan Carlos. Committee -- Chair: María Jesús Ledesma Carbayo; Secretary: David Expósito Singh; Member: Clarisa Sánchez Gutiérre
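The Gaussian-mixture step described in this thesis fits intensity distributions with EM. A self-contained 1D sketch with two components and a deterministic min/max initialization; these are simplifications for illustration, and the thesis' actual model may differ:

```python
import numpy as np

def em_gmm_1d(x, n_iter=100):
    """EM for a two-component 1D Gaussian mixture.
    Returns component means, variances, and mixing weights."""
    mu = np.array([x.min(), x.max()], dtype=float)
    var = np.array([x.var(), x.var()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each sample.
        p = (pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var))
             / np.sqrt(2 * np.pi * var))
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate the component parameters.
        n = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n + 1e-6
        pi = n / len(x)
    return mu, var, pi

# Synthetic intensities drawn from two well-separated Gaussians.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 0.5, 200),
                    rng.normal(10.0, 0.5, 200)])
mu, var, pi = em_gmm_1d(x)
```

With well-separated modes (e.g. healthy parenchyma vs. denser damaged tissue) the recovered means land near the true cluster centres, which is the basis for the burden biomarkers the chapter extracts.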

    Deep Semantic Segmentation of Natural and Medical Images: A Review

    The semantic image segmentation task consists of classifying each pixel of an image into a class. This task is part of the concept of scene understanding, or better explaining the global context of an image. In the medical image analysis domain, image segmentation can be used for image-guided interventions, radiotherapy, or improved radiological diagnostics. In this review, we categorize the leading deep-learning-based medical and non-medical image segmentation solutions into six main groups: deep architectural, data-synthesis-based, loss-function-based, sequenced models, weakly supervised, and multi-task methods, and provide a comprehensive review of the contributions in each of these groups. Further, for each group we analyze its variants, discuss the limitations of the current approaches, and present potential future research directions for semantic image segmentation.
    Comment: 45 pages, 16 figures. Accepted for publication in Springer Artificial Intelligence Review

    A study of improved image segmentation using numerical models and graph theory, with application to lung images

    Thesis (Ph.D.) -- Graduate School, Seoul National University: Interdisciplinary Program in Bioengineering, College of Engineering, February 2016. Advisor: 김희찬.
    This dissertation presents a thoracic cavity segmentation algorithm and a method for decomposing the pulmonary arteries and veins from volumetric chest CT, and evaluates their performance. The main contribution of this research is the development of an automated algorithm for segmentation of clinically meaningful organs. Although there are several methods for improving organ segmentation accuracy, such as morphological methods based on thresholding or object selection based on connectivity information, our novel algorithm uses numerical algorithms and graph theory from the computer engineering field. This dissertation presents the new method through the following two studies and evaluates the results. The first study aimed at thoracic cavity segmentation. The thoracic cavity is the organ enclosed by the thoracic wall and the diaphragm surface. The thoracic wall has no clear boundary; moreover, since the diaphragm is a thin surface, parts of it may be missing in chest CT. In previous research, a method that finds the mediastinum on the 2D axial view was reported, and a thoracic wall extraction method and several diaphragm segmentation methods were also described independently, but a thoracic cavity volume segmentation method is proposed in this thesis for the first time. In terms of thoracic cavity volumetry, the mean ± SD volumetric overlap ratio (VOR), false positive ratio on VOR (FPRV), and false negative ratio on VOR (FNRV) of the proposed method were 98.17 ± 0.84%, 0.49 ± 0.23%, and 1.34 ± 0.83%, respectively. The proposed semi-automatic thoracic cavity segmentation method, which extracts multiple organs (namely, the ribs, thoracic wall, diaphragm, and heart), performed with high accuracy and may be useful for clinical purposes.
The second study proposed a method to decompose the pulmonary vessels into vessel subtrees for separation of the arteries and veins. The volume images of the separated arteries and veins could be used as simulation support data in lung cancer. Although a clinician can mentally trace the vessels and separate them into arteries and veins manually, an automatic separation method is preferable. In a previous semi-automatic method, root marking of 30 to 40 points was needed while tracing vessels in a 2D slice view, and this procedure took approximately an hour and a half. After optimization of the feature value set, the accuracy of the arterial and venous decomposition was 89.71 ± 3.76% in comparison with the gold standard. This framework could be clinically useful for studies on the effects of the pulmonary arteries and veins on lung diseases.
    Contents: Chapter 1, General Introduction (1.1 Image Informatics using Open Source; 1.2 History of the segmentation algorithm; 1.3 Goal of Thesis Work). Chapter 2, Thoracic cavity segmentation algorithm using multi-organ extraction and surface fitting in volumetric CT (2.1 Introduction; 2.2 Related Studies; 2.3 The Proposed Thoracic Cavity Segmentation Method; 2.4 Experimental Results; 2.5 Discussion; 2.6 Conclusion). Chapter 3, Semi-automatic decomposition method of pulmonary artery and vein using two-level minimum spanning tree constructions for non-enhanced volumetric CT (3.1 Introduction; 3.2 Related Studies; 3.3 Artery and Vein Decomposition; 3.4 An Efficient Decomposition Method; 3.5 Evaluation; 3.6 Discussion and Conclusion). References. Abstract in Korean.
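The minimum-spanning-tree idea behind the artery-vein decomposition can be illustrated on a toy vessel graph: build the MST, cut the heaviest edge on the path between an artery seed and a vein seed, and label each resulting component by its seed. This is a didactic single-level sketch, not the dissertation's two-level construction, and the graph is invented for illustration:

```python
from collections import defaultdict

def kruskal_mst(n, edges):
    """Minimum spanning tree via Kruskal with union-find.
    `edges` is a list of (weight, u, v) tuples over nodes 0..n-1."""
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path compression
            a = parent[a]
        return a
    mst = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            mst.append((w, u, v))
    return mst

def split_artery_vein(n, edges, artery_root, vein_root):
    """Build the MST of the vessel graph, cut the heaviest edge on the
    tree path between the two seeds, and label each node by its seed."""
    adj = defaultdict(list)
    for w, u, v in kruskal_mst(n, edges):
        adj[u].append((v, w))
        adj[v].append((u, w))
    # Recover the unique tree path between the seeds (iterative DFS).
    prev, stack = {artery_root: None}, [artery_root]
    while stack:
        u = stack.pop()
        for v, w in adj[u]:
            if v not in prev:
                prev[v] = (u, w)
                stack.append(v)
    path_edges, u = [], vein_root
    while prev[u] is not None:
        p, w = prev[u]
        path_edges.append((w, p, u))
        u = p
    w, a, b = max(path_edges)  # heaviest edge on the seed-to-seed path
    adj[a] = [e for e in adj[a] if e[0] != b]
    adj[b] = [e for e in adj[b] if e[0] != a]
    # Flood-fill the two remaining components from their seeds.
    labels = {}
    for seed, name in [(artery_root, "artery"), (vein_root, "vein")]:
        stack = [seed]
        while stack:
            u = stack.pop()
            if u not in labels:
                labels[u] = name
                stack.extend(v for v, _ in adj[u])
    return labels

# Toy vessel graph: two light subtrees joined by one heavy bridge edge.
edges = [(1, 0, 1), (1, 1, 2), (1, 3, 4), (1, 4, 5), (10, 2, 3)]
labels = split_artery_vein(6, edges, artery_root=0, vein_root=5)
```

In a real pipeline the nodes would be vessel-skeleton voxels and the edge weights would encode the optimized geometric features the abstract mentions; the point here is only the tree-cut mechanics.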