19 research outputs found

    Performance of a deep learning-based lung nodule detection system as an alternative reader in a Chinese lung cancer screening program

    Objective: To evaluate the performance of a deep learning-based computer-aided detection (DL-CAD) system in a Chinese low-dose CT (LDCT) lung cancer screening program. Materials and methods: One hundred eighty individuals with a lung nodule on their baseline LDCT lung cancer screening scan were randomly mixed with screenees without nodules in a 1:1 ratio (total: 360 individuals). All scans were assessed by double reading and subsequently processed by an academic DL-CAD system. The findings of double reading and the DL-CAD system were then evaluated by two senior radiologists to derive the reference standard. Detection performance was evaluated by the free-response receiver operating characteristic (FROC) curve, sensitivity, and false-positive (FP) rate. The senior radiologists categorized nodules by diameter, type (solid, part-solid, non-solid) and Lung-RADS category. Results: The reference standard consisted of 262 nodules ≥ 4 mm in 196 individuals; 359 findings were considered false positives. The DL-CAD system achieved a sensitivity of 90.1% with 1.0 FP/scan for detection of lung nodules regardless of size or type, whereas double reading had a sensitivity of 76.0% with 0.04 FP/scan (P = 0.001). The sensitivity for detection of nodules 4 to 6 mm was significantly higher with DL-CAD than with double reading (86.3% vs. 58.9%, respectively; P = 0.001). Sixty-three nodules were identified only by the DL-CAD system, and 27 nodules only by double reading. The DL-CAD system reached performance similar to double reading for Lung-RADS 3 (94.3% vs. 90.0%, P = 0.549) and Lung-RADS 4 nodules (100.0% vs. 97.0%, P = 1.000), but showed a higher sensitivity for Lung-RADS 2 (86.2% vs. 65.4%, P < 0.001). Conclusions: The DL-CAD system can accurately detect pulmonary nodules on LDCT with an acceptable false-positive rate of 1 nodule per scan, and has higher detection performance than double reading. This DL-CAD system may assist radiologists in nodule detection in LDCT lung cancer screening.
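As a concrete illustration of the metrics above, sensitivity and the per-scan false-positive rate can be back-calculated from the reported figures. The raw counts below are inferred from the abstract's percentages for illustration; they are not taken from the paper itself:

```python
# Counts inferred from the abstract: 262 reference nodules at 90.1% sensitivity
# implies ~236 true positives; 1.0 FP/scan over 360 scans implies ~360 FPs.
tp, fn = 236, 262 - 236        # detected vs. missed reference nodules
fp, n_scans = 360, 360         # false-positive findings across all scans

sensitivity = tp / (tp + fn)   # fraction of reference nodules detected
fp_per_scan = fp / n_scans     # average false positives per scan

print(f"sensitivity = {sensitivity:.1%}, FP/scan = {fp_per_scan:.1f}")
```

Reporting both numbers together is what the FROC analysis formalizes: each operating point of the detector trades sensitivity against FP/scan.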

    Quantitative CT analysis in ILD and use of artificial intelligence on imaging of ILD

    Advances in computer technology over the past decade, particularly in medical image analysis, have permitted the identification, characterisation and quantitation of abnormalities that can be used to diagnose disease or determine disease severity. On CT imaging performed in patients with ILD, deep-learning algorithms now demonstrate performance comparable with trained observers in identifying a UIP pattern, which is associated with a poor prognosis in several fibrosing ILDs. Computer tools that quantify individual voxel-level CT features have also come of age and can predict mortality with greater power than visual CT analysis scores. As these tools become more established, they have the potential to improve the sensitivity with which minor degrees of disease progression are identified. Currently, pulmonary function tests (PFTs) are the gold-standard measure used to assess clinical deterioration. However, the variation associated with pulmonary function measurements may mask small but genuine functional decline, which in the future could be confirmed by computer tools. The current chapter describes the latest advances in quantitative CT analysis and deep learning as related to ILDs and suggests potential future directions for this rapidly advancing field.
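One simple family of voxel-level quantitative CT features is summary statistics of the attenuation histogram inside the segmented lung: the mean, skewness and kurtosis of the Hounsfield-unit (HU) distribution shift as fibrosis replaces aerated lung. A minimal numpy sketch, where the function name and feature choice are illustrative rather than any specific published tool:

```python
import numpy as np

def histogram_features(hu, mask):
    # Summary statistics of lung attenuation (HU) inside a lung mask;
    # histogram skewness and kurtosis are classic quantitative CT
    # biomarkers in fibrosing ILD.
    v = hu[mask.astype(bool)]
    m, s = v.mean(), v.std()
    z = (v - m) / s
    return {
        "mean_hu": float(m),
        "skewness": float(np.mean(z ** 3)),
        "kurtosis": float(np.mean(z ** 4) - 3.0),  # excess kurtosis
    }
```

In practice such features are computed per patient from a CT volume plus a lung segmentation, then fed to a survival or progression model.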

    Lung Pattern Analysis using Artificial Intelligence for the Diagnosis Support of Interstitial Lung Diseases

    Interstitial lung diseases (ILDs) are a group of more than 200 chronic lung disorders characterized by inflammation and scarring of the lung tissue that lead to respiratory failure. Although ILDs form a heterogeneous group of histologically distinct diseases, most exhibit similar clinical presentations, and their diagnosis often presents a dilemma. Early diagnosis is crucial for making treatment decisions, while misdiagnosis may lead to life-threatening complications. If a final diagnosis cannot be reached with the high-resolution computed tomography scan, additional invasive procedures are required (e.g. bronchoalveolar lavage, surgical biopsy). The aim of this PhD thesis was to investigate the components of a computational system that would assist radiologists with the diagnosis of ILDs while avoiding dangerous, expensive and time-consuming invasive biopsies. The appropriate interpretation of the available radiological data, combined with clinical and biochemical information, can provide a reliable diagnosis and improve the diagnostic accuracy of radiologists. In this thesis, we introduce two convolutional neural networks particularly designed for ILDs and a training scheme that employs knowledge transfer from the similar domain of general texture classification for performance enhancement. Moreover, we investigate the clinical relevance of breathing information for disease classification. The breathing information is quantified as a deformation field between inhale-exhale lung images using a novel 3D convolutional neural network for medical image registration. Finally, we design and evaluate the final end-to-end computational system for ILD classification using lung anatomy segmentation algorithms from the literature and the proposed ILD quantification neural networks. Deep learning approaches were investigated for all the aforementioned steps, and the results demonstrated their potential in analyzing lung images.
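The inhale-exhale deformation field mentioned above can be summarized voxel-wise by the Jacobian determinant of the estimated mapping, a standard way to read local volume change out of a registration result. A 2-D numpy sketch, illustrative rather than the thesis code:

```python
import numpy as np

def jacobian_determinant(disp):
    # disp: displacement field u with shape (2, H, W); the registration
    # maps x -> x + u(x). det J = 1 means no local volume change,
    # > 1 local expansion (inhale), < 1 local contraction (exhale).
    d0u0, d1u0 = np.gradient(disp[0])
    d0u1, d1u1 = np.gradient(disp[1])
    return (1.0 + d0u0) * (1.0 + d1u1) - d1u0 * d0u1
```

A map of these determinants over the lung gives a per-region "breathing" signal that can then feed a disease classifier.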

    Deep Learning in Medical Image Analysis

    Computer-assisted analysis for better interpretation of images has been a longstanding issue in the medical imaging field. On the image-understanding front, recent advances in machine learning, especially deep learning, have made a big leap toward helping identify, classify, and quantify patterns in medical images. Specifically, exploiting hierarchical feature representations learned solely from data, instead of handcrafted features designed mostly with domain-specific knowledge, lies at the core of these advances. In that way, deep learning is rapidly proving to be the state-of-the-art foundation, achieving enhanced performance in various medical applications. In this article, we introduce the fundamentals of deep learning methods and review their successes in image registration, detection of anatomical and cell structures, tissue segmentation, computer-aided disease diagnosis and prognosis, and so on. We conclude by raising research issues and suggesting future directions for further improvement.

    Early detection of lung cancer through nodule characterization by Deep Learning

    Lung cancer is one of the most frequent cancers in the world, with 1.8 million new cases reported in 2012, representing 12.9% of all new cancers worldwide, and accounting for 1.4 million deaths as of 2008. Early detection and classification of malignant and benign nodules on computed tomography (CT) scans may facilitate radiologists' tasks of nodule staging assessment and individual therapeutic planning. Moreover, if potentially malignant nodules are detected early, treatments may be less aggressive, possibly not even requiring chemotherapy or radiation therapy after surgery. This Bachelor's Thesis focuses on the exploration of existing methods and data sets for the automatic classification of lung nodules based on CT images. To this aim, we start by assembling, studying and analyzing state-of-the-art studies in lung nodule detection, characterization and classification. Furthermore, we report and contextualize state-of-the-art deep learning architectures suited for lung nodule classification. From the public datasets researched, we select a widely used, large data set of lung nodule CT scans and use it to fine-tune a state-of-the-art convolutional neural network. We compare this strategy with training a new, shallower neural network from scratch. Initial evaluation suggests that: (1) transfer learning performs poorly because of the domain gap between natural images and CT scans; (2) learning from scratch is unable to learn from a small number of samples. However, this first evaluation paves the road towards the design of better classification methods fed by better-annotated publicly available data sets. Overall, this project is a necessary first stage in a hot research topic.
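The transfer-learning strategy contrasted above, in its simplest form, freezes a pretrained backbone and fits only a new classification head on the fixed features. A toy numpy sketch of that last step, with random vectors standing in for frozen backbone features; everything here is illustrative, not the thesis pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-ins for frozen backbone features and binary nodule labels.
X = rng.normal(size=(200, 8))
w_true = rng.normal(size=8)
y = (X @ w_true > 0).astype(float)

# "Fine-tune" only a new linear head: logistic regression by gradient descent.
w, b, lr = np.zeros(8), 0.0, 0.5
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
    g = p - y                                 # gradient of log-loss wrt logits
    w -= lr * X.T @ g / len(y)
    b -= lr * g.mean()

accuracy = float((((X @ w + b) > 0) == (y > 0.5)).mean())
```

The thesis's negative result is consistent with this picture: when the frozen features come from natural images, they may simply not separate CT nodule classes, and no amount of head fine-tuning recovers that.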

    Deep learning in medical imaging and radiation therapy

    Peer reviewed. Full text: https://deepblue.lib.umich.edu/bitstream/2027.42/146980/1/mp13264_am.pdf ; https://deepblue.lib.umich.edu/bitstream/2027.42/146980/2/mp13264.pd

    Novel Deep Learning Models for Medical Imaging Analysis

    Deep learning is a sub-field of machine learning in which models are developed to imitate the workings of the human brain in processing data and creating patterns for decision making. This dissertation focuses on developing deep learning models for medical imaging analysis across different modalities and tasks, including detection, segmentation and classification. The imaging modalities studied include digital mammography (DM), magnetic resonance imaging (MRI), positron emission tomography (PET) and computed tomography (CT), for various medical applications. The first phase of the research develops a novel shallow-deep convolutional neural network (SD-CNN) model for improved breast cancer diagnosis. This model takes one type of medical image as input and synthesizes other modalities as additional feature sources; both the original and the synthetic image are used for feature generation. The proposed architecture is validated in the application of breast cancer diagnosis and shown to outperform competing models. Motivated by the success of the first phase, the second phase focuses on improving medical image synthesis with a more advanced deep learning architecture. A new architecture named deep residual inception encoder-decoder network (RIED-Net) is proposed. RIED-Net has the advantages of preserving pixel-level information and cross-modality feature transfer. Its applicability is validated in breast cancer diagnosis and Alzheimer's disease (AD) staging. Recognizing that medical imaging research often involves multiple interrelated tasks, namely detection, segmentation and classification, the third phase develops a multi-task deep learning model. Specifically, a feature-transfer-enabled multi-task deep learning model (FT-MTL-Net) is proposed to transfer high-resolution features from the segmentation task to the low-resolution feature-based classification task. The application of FT-MTL-Net to breast cancer detection, segmentation and classification using DM images is studied. As a continuing effort to explore transfer learning in deep models for medical applications, the last phase develops a deep learning model that transfers both features and knowledge from a pre-training age-prediction task to the new domain of predicting conversion from mild cognitive impairment (MCI) to AD. It is validated in the application of predicting MCI patients' conversion to AD with 3D MRI images.
    Doctoral Dissertation, Industrial Engineering, 201

    Computer-Aided Assessment of Tuberculosis with Radiological Imaging: From rule-based methods to Deep Learning

    International Mention in the doctoral degree.
    Tuberculosis (TB) is an infectious disease caused by Mycobacterium tuberculosis (Mtb.) that produces pulmonary damage due to its airborne nature. This fact facilitates the fast spreading of the disease, which, according to the World Health Organization (WHO), in 2021 caused 1.2 million deaths and 9.9 million new cases. Traditionally, TB has been considered a binary disease (latent/active) due to the limited specificity of the traditional diagnostic tests. Such a simple model causes difficulties in the longitudinal assessment of pulmonary affectation needed for the development of novel drugs and for controlling the spread of the disease. Fortunately, X-ray computed tomography (CT) images enable capturing specific manifestations of TB that are undetectable using regular diagnostic tests. In conventional workflows, expert radiologists inspect the CT images. However, this procedure is unfeasible for processing the thousands of volume images, belonging to the different TB animal models and to humans, required for a suitable (pre-)clinical trial. Automation of the different image-analysis processes is therefore a must to quantify TB. It is also advisable to measure the uncertainty associated with this process and to model causal relationships between the specific mechanisms that characterize each animal model and its level of damage. Thus, in this thesis, we introduce a set of novel methods based on state-of-the-art Artificial Intelligence (AI) and Computer Vision (CV). Initially, we present an algorithm for Pathological Lung Segmentation (PLS) employing an unsupervised rule-based model, traditionally considered a necessary step before biomarker extraction. This procedure allows robust segmentation in an Mtb. infection model (Dice Similarity Coefficient, DSC, 94% ± 4%; Hausdorff Distance, HD, 8.64 mm ± 7.36 mm) of damaged lungs with lesions attached to the parenchyma and affected by respiratory-movement artefacts. Next, a Gaussian Mixture Model fitted with an Expectation-Maximization (EM) algorithm is employed to automatically quantify the burden of Mtb. using biomarkers extracted from the segmented CT images. This approach achieves a strong correlation (R² ≈ 0.8) between our automatic method and manual extraction. Consequently, Chapter 3 introduces a model to automate the identification of TB lesions and the characterization of disease progression. To this aim, the method employs the Statistical Region Merging algorithm to detect lesions, which are subsequently characterized by texture features that feed a Random Forest (RF) estimator. The proposed procedure enables the selection of a simple but powerful model able to classify abnormal tissue. The latest works base their methodology on Deep Learning (DL). Chapter 4 extends the classification of TB lesions. Namely, we introduce a computational model to infer the TB manifestations present in each lung lobe of CT scans by employing the associated radiologist reports as ground truth, instead of the classical manually delimited segmentation masks. The model adapts a three-dimensional architecture, V-Net, to a multitask classification context in which the loss function is weighted by homoscedastic uncertainty. Besides, the method employs Self-Normalizing Neural Networks (SNNs) for regularization. Our results are promising, with a root mean square error of 1.14 in the number of nodules and F1-scores above 0.85 for the most prevalent TB lesions (i.e., conglomerations, cavitations, consolidations, trees in bud) when considering the whole lung. In Chapter 5, we present a DL model capable of extracting disentangled information from images of different animal models, as well as information on the mechanisms that generate the CT volumes. The method provides the segmentation mask of axial slices from three animal models of different species employing a single trained architecture. It also infers the level of TB damage and generates counterfactual images. With this methodology, we offer an alternative to promote generalization and explainable AI models. To sum up, the thesis presents a collection of valuable tools to automate the quantification of pathological lungs and, moreover, extends the methodology to provide more explainable results, which are vital for drug-development purposes. Chapter 6 elaborates on these conclusions.
    Doctoral Programme in Multimedia and Communications, Universidad Carlos III de Madrid and Universidad Rey Juan Carlos. President: María Jesús Ledesma Carbayo. Secretary: David Expósito Singh. Vocal: Clarisa Sánchez Gutiérre
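For reference, the Dice Similarity Coefficient quoted above (DSC 94% ± 4%) is the standard overlap measure between a predicted and a reference segmentation mask; a minimal numpy version:

```python
import numpy as np

def dice(pred, ref):
    # Dice Similarity Coefficient between two binary masks:
    # 2 |A ∩ B| / (|A| + |B|); 1.0 for perfect overlap, 0.0 for none.
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    return 2.0 * np.logical_and(pred, ref).sum() / denom if denom else 1.0
```

The Hausdorff Distance reported alongside it is complementary: DSC measures volumetric overlap, while HD measures the worst-case boundary disagreement.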