Unsupervised CT lung image segmentation of a mycobacterium tuberculosis infection model
Tuberculosis (TB) is an infectious disease caused by Mycobacterium tuberculosis that produces pulmonary damage. Radiological imaging is the preferred technique for the assessment of the longitudinal course of TB. Computer-assisted identification of biomarkers eases the work of the radiologist by providing a quantitative assessment of disease. Lung segmentation is the step before biomarker extraction. In this study, we present an automatic procedure that enables robust segmentation of damaged lungs that have lesions attached to the parenchyma and are affected by respiratory movement artifacts in a Mycobacterium tuberculosis infection model. Its main steps are the extraction of the healthy lung tissue and the airway tree, followed by elimination of the fuzzy boundaries. Its performance was compared against segmentations obtained using: (1) a semi-automatic tool and (2) an approach based on fuzzy connectedness. A consensus segmentation resulting from majority voting over three experts' annotations was taken as the ground truth. The proposed approach improves the overlap indicators (Dice similarity coefficient, 94% ± 4%) and the surface similarity coefficients (Hausdorff distance, 8.64 mm ± 7.36 mm) in the majority of the most difficult-to-segment slices. Results indicate that the refined lung segmentations generated could facilitate the extraction of meaningful quantitative data on disease burden.
The research leading to these results received funding from the Innovative Medicines Initiative (www.imi.europa.eu) Joint Undertaking under grant agreement no. 115337, whose resources comprise funding from the European Union’s Seventh Framework Programme (FP7/2007–2013) and EFPIA companies’ in-kind contribution. This work was partially funded by projects TEC2013-48552-C2-1-R, RTC-2015-3772-1, TEC2015-73064-EXP and TEC2016-78052-R from the Spanish Ministerio de Economía, Industria y Competitividad, the TOPUS S2013/MIT-3024 project from the regional government of Madrid and by the Department of Health, UK
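The evaluation protocol described above (a majority-vote consensus of expert annotations as ground truth, scored with the Dice similarity coefficient) can be sketched on binary masks. The function names and toy masks below are illustrative assumptions, not code from the study:

```python
import numpy as np

def majority_vote(masks):
    """Consensus ground truth: a voxel is foreground when more than
    half of the expert annotations mark it as foreground."""
    stacked = np.stack([m.astype(bool) for m in masks])
    return (stacked.sum(axis=0) > len(masks) / 2).astype(np.uint8)

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:          # both masks empty: perfect agreement
        return 1.0
    return 2.0 * np.logical_and(a, b).sum() / denom

# Three toy "expert" annotations of a 2x2 slice
m1 = np.array([[1, 1], [0, 0]])
m2 = np.array([[1, 0], [0, 0]])
m3 = np.array([[1, 1], [0, 1]])
consensus = majority_vote([m1, m2, m3])   # [[1, 1], [0, 0]]
print(dice(consensus, m1))                # 1.0
print(dice(consensus, m3))                # 0.8
```

The Hausdorff distance reported in the abstract is a surface-based complement to this overlap measure; it compares boundary points rather than voxel agreement.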
From Fully-Supervised, Single-Task to Scarcely-Supervised, Multi-Task Deep Learning for Medical Image Analysis
Image analysis based on machine learning has gained prominence with the advent of deep learning, particularly in medical imaging. To be effective in addressing challenging image analysis tasks, however, conventional deep neural networks require large corpora of annotated training data, which are unfortunately scarce in the medical domain, often rendering fully-supervised learning strategies ineffective.
This thesis devises a series of novel deep learning methods for use in a variety of medical image analysis applications, ranging from fully-supervised, single-task learning to scarcely-supervised, multi-task learning that makes efficient use of annotated training data. Specifically, its main contributions include (1) fully-supervised, single-task learning for the segmentation of pulmonary lobes from chest CT scans and the analysis of scoliosis from spine X-ray images; (2) supervised, single-task, domain-generalized pulmonary segmentation in chest X-ray images and retinal vasculature segmentation in fundoscopic images; (3) largely-unsupervised, multiple-task learning via deep generative modeling for the joint synthesis and classification of medical image data; and (4) partly-supervised, multiple-task learning for the combined segmentation and classification of chest and spine X-ray images.
Deep Learning of Unified Region, Edge, and Contour Models for Automated Image Segmentation
Image segmentation is a fundamental and challenging problem in computer
vision with applications spanning multiple areas, such as medical imaging,
remote sensing, and autonomous vehicles. Recently, convolutional neural
networks (CNNs) have gained traction in the design of automated segmentation
pipelines. Although CNN-based models are adept at learning abstract features
from raw image data, their performance is dependent on the availability and
size of suitable training datasets. Additionally, these models are often unable
to capture the details of object boundaries and generalize poorly to unseen
classes. In this thesis, we devise novel methodologies that address these
issues and establish robust representation learning frameworks for
fully-automatic semantic segmentation in medical imaging and mainstream
computer vision. In particular, our contributions include (1) state-of-the-art
2D and 3D image segmentation networks for computer vision and medical image
analysis, (2) an end-to-end trainable image segmentation framework that unifies
CNNs and active contour models with learnable parameters for fast and robust
object delineation, (3) a novel approach for disentangling edge and texture
processing in segmentation networks, and (4) a novel few-shot learning model in
both supervised and semi-supervised settings where synergies between
latent and image spaces are leveraged to learn to segment images given limited
training data. Comment: PhD dissertation, UCLA, 202
Development and validation of HRCT airway segmentation algorithms
Direct measurements of airway lumen and wall areas are potentially useful as a diagnostic tool and as an aid to understanding the pathophysiology underlying lung disease. Direct measurements can be made from images created by high-resolution computed tomography (HRCT) by using computer-based algorithms to segment airways, but current validation techniques cannot adequately establish the accuracy and precision of these algorithms. A detailed review of HRCT airway segmentation algorithms was undertaken, from which three candidate algorithm designs were developed. A custom Windows-based software program was implemented to facilitate multi-modality development and validation of the segmentation algorithms. The performance of the algorithms was examined in clinical HRCT images. A centre-likelihood (CL) ray-casting algorithm was found to be the most suitable algorithm due to its speed and reliability in semi-automatic segmentation and tracking of the airway wall. Several novel refinements were demonstrated to improve the CL algorithm’s robustness in HRCT lung data. The performance of the CL algorithm was then quantified in two-dimensional simulated data to optimise customisable parameters such as edge-detection method, interpolation and number of rays. Novel correction equations to counter the effects of volume averaging and airway orientation angle were derived and demonstrated in three-dimensional simulated data. The optimal CL algorithm was validated with HRCT data using a plastic phantom and a pig lung phantom matched to micro-CT. Accuracy was found to be improved compared to previous studies using similar methods. The volume-averaging correction was found to improve precision and accuracy in the plastic phantom but not in the pig lung phantom. When tested in a clinical setting, the results of the optimised CL algorithm were in agreement with other measures of lung function.
The thesis concludes that the relative contributions of confounders of airway measurement have been quantified in simulated data and that the CL algorithm’s performance has been validated in a plastic phantom as well as an animal model. This validation protocol has improved the accuracy and precision of measurements made using the CL algorithm.
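The ray-casting idea behind the CL algorithm can be sketched in a simplified 2D form: from a seed point inside the lumen, cast evenly spaced rays outwards and record where each ray meets the airway wall. The intensity-threshold edge detector, grid size, and function names below are assumptions for illustration; the thesis optimises the actual edge-detection method, interpolation and ray count, which this sketch does not attempt:

```python
import numpy as np

def cast_rays(image, centre, n_rays=64, max_len=40, threshold=0.5):
    """From a seed point inside the airway lumen, cast evenly spaced
    rays and record the step at which each ray first crosses an
    intensity threshold -- a stand-in for the edge-detection step of
    a centre-likelihood (CL) style algorithm."""
    cy, cx = centre
    radii = []
    for theta in np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False):
        dy, dx = np.sin(theta), np.cos(theta)
        for r in range(1, max_len):
            y = int(round(cy + r * dy))
            x = int(round(cx + r * dx))
            if not (0 <= y < image.shape[0] and 0 <= x < image.shape[1]):
                break                      # ray left the image
            if image[y, x] >= threshold:   # crossed into the bright wall
                radii.append(r)
                break
    return np.array(radii)

# Synthetic slice: dark circular lumen (radius 10) inside a bright wall
yy, xx = np.mgrid[:81, :81]
img = (((yy - 40) ** 2 + (xx - 40) ** 2) >= 100).astype(float)

radii = cast_rays(img, (40, 40))
lumen_area = np.pi * radii.mean() ** 2     # crude lumen-area estimate
```

In the full CL algorithm, the per-ray edge positions would feed a centre-likelihood update that re-estimates the lumen centre before lumen and wall areas are measured; the correction equations for volume averaging and orientation angle described above are beyond this sketch.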