
    Lung nodule diagnosis and cancer histology classification from computed tomography data by convolutional neural networks: A survey

    Lung cancer is among the deadliest cancers. Besides lung nodule classification and diagnosis, developing non-invasive systems to classify lung cancer histological types/subtypes may help clinicians make targeted treatment decisions in a timely manner, with a positive impact on patients' comfort and survival rate. As convolutional neural networks have driven significant improvements in the accuracy of lung cancer diagnosis, with this survey we intend to: show the contribution of convolutional neural networks not only in identifying malignant lung nodules but also in classifying lung cancer histological types/subtypes directly from computed tomography data; point out the strengths and weaknesses of slice-based and scan-based approaches employing convolutional neural networks; and highlight the challenges and prospective solutions for successfully applying convolutional neural networks to such classification tasks. To this aim, we conducted a comprehensive analysis of relevant Scopus-indexed studies on lung nodule diagnosis and cancer histology classification up to January 2022, dividing the investigation into convolutional neural network-based approaches fed with planar or volumetric computed tomography data. Although the application of convolutional neural networks to lung nodule diagnosis and cancer histology classification is a valid strategy, some challenges remain, chiefly the lack of publicly accessible annotated data, together with limited reproducibility and clinical interpretability. We believe that this survey will be helpful for future studies on lung nodule diagnosis and cancer histology classification prior to lung biopsy by means of convolutional neural networks.
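    A minimal sketch (PyTorch, not taken from the survey itself) of the distinction drawn above between slice-based and scan-based approaches: the former consumes planar CT data with 2D convolutions, the latter volumetric CT data with 3D convolutions. All layer sizes are illustrative assumptions.

    # Slice-based vs. scan-based CNN inputs; sizes are illustrative only.
    import torch
    import torch.nn as nn

    slice_based = nn.Sequential(            # input: (N, 1, H, W) CT slices
        nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(16, 2),                   # e.g. benign vs malignant
    )

    scan_based = nn.Sequential(             # input: (N, 1, D, H, W) CT volumes
        nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool3d(2),
        nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        nn.Linear(16, 2),
    )

    print(slice_based(torch.randn(4, 1, 64, 64)).shape)      # torch.Size([4, 2])
    print(scan_based(torch.randn(4, 1, 32, 64, 64)).shape)   # torch.Size([4, 2])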

    Convolutional Neural Network based Malignancy Detection of Pulmonary Nodule on Computer Tomography

    Without performing a biopsy, which can cause physical damage to nerves and vessels, computed tomography (CT) is widely used to diagnose lung cancer due to its high sensitivity for pulmonary nodule detection. However, distinguishing malignant from benign pulmonary nodules is still not an easy task. As CT scans are mostly of relatively low resolution, it is not easy for radiologists to read the details of the scan images. In the past few years, the continuing rapid growth of CT scan analysis systems has generated a pressing need for advanced computational tools that extract useful features to assist radiologists in the reading process. Computer-aided detection (CAD) systems have been developed to reduce observational oversights by identifying the suspicious features that a radiologist looks for during case review. Most previous CAD systems rely on low-level, non-texture imaging features such as the intensity, shape, size, or volume of the pulmonary nodules. However, pulmonary nodules vary widely in shape and size, and benign and malignant patterns can be visually very similar, so relying on non-texture imaging features makes diagnosing the nodule type difficult. To overcome this problem, more recent CAD systems adopt supervised or unsupervised learning schemes to translate the content of the nodules into discriminative features, yielding high-level imaging features highly correlated with shape and texture. Convolutional neural networks (ConvNets), supervised methods related to deep learning, have improved rapidly in recent years. Due to their great success in computer vision tasks, they are also expected to be helpful in medical imaging. In this thesis, a CAD system based on a deep convolutional neural network (ConvNet) is designed and evaluated for malignant pulmonary nodules on computed tomography. The proposed ConvNet, the core component of the proposed CAD system, is trained on the LUNGx challenge database to classify benign and malignant pulmonary nodules on CT. The architecture of the proposed ConvNet consists of 3 convolutional layers with max-pooling operations and rectified linear unit (ReLU) activations, followed by 2 fully connected layers, and is carefully tailored for pulmonary nodule classification by considering the problems of over-fitting, receptive field, and imbalanced data. The proposed CAD system achieved a sensitivity of 0.896 and a specificity of 8.78 at the optimal cut-off point of the receiver operating characteristic (ROC) curve, with an area under the curve (AUC) of 0.920. The testing results showed that the proposed ConvNet achieves a 10% higher AUC than state-of-the-art work based on an unsupervised method. By integrating the proposed highly accurate ConvNet, the proposed CAD system also outperformed other state-of-the-art ConvNets explicitly designed for pulmonary nodule detection or classification.
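    A rough sketch of the architecture described above, assuming PyTorch and a 64x64 2D nodule patch: three convolutional blocks (ReLU + max pooling) followed by two fully connected layers, as in the abstract. Channel counts, kernel sizes, and the dropout used against over-fitting are assumptions, not the thesis's exact values.

    import torch
    import torch.nn as nn

    class NoduleConvNet(nn.Module):
        """Hypothetical layout of the described ConvNet: 3 conv layers (ReLU +
        max pooling) followed by 2 fully connected layers. Channel counts,
        kernel sizes, and the 64x64 patch size are assumptions."""
        def __init__(self, num_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
                nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 16 -> 8
            )
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Linear(128 * 8 * 8, 256), nn.ReLU(),
                nn.Dropout(0.5),            # one common way to curb over-fitting
                nn.Linear(256, num_classes),
            )

        def forward(self, x):               # x: (N, 1, 64, 64) nodule patch
            return self.classifier(self.features(x))

    logits = NoduleConvNet()(torch.randn(8, 1, 64, 64))   # -> shape (8, 2)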

    Segmentation and classification of lung nodules from thoracic CT scans: methods based on dictionary learning and deep convolutional neural networks.

    Lung cancer is a leading cause of cancer death in the world. Key to the survival of patients is early diagnosis. Studies have demonstrated that screening high-risk patients with low-dose computed tomography (CT) is invaluable for reducing morbidity and mortality. Computer-aided diagnosis (CADx) systems can assist radiologists and care providers in reading and analyzing lung CT images to segment, classify, and keep track of nodules for signs of cancer. In this thesis, we propose a CADx system for this purpose. To predict lung nodule malignancy, we propose a new deep learning framework that combines convolutional neural networks (CNN) and recurrent neural networks (RNN) to learn the best in-plane and inter-slice visual features for diagnostic nodule classification. Since a nodule's volumetric growth and shape variation over a period of time may reveal information regarding its malignancy, a separate dictionary learning based approach is proposed to segment the nodule's shape at two time points from two scans taken one year apart. The output of a CNN classifier trained to learn the visual appearance of malignant nodules is then combined with the derived measures of shape change and volumetric growth to assign a probability of malignancy to the nodule. Due to the limited number of available CT scans of benign and malignant nodules in the image database from the National Lung Screening Trial (NLST), we chose to initially train a deep neural network on the larger LUNA16 Challenge database, which was built for the purpose of eliminating false positives from detected nodules in thoracic CT scans. Discriminative features learned in this application were transferred to predict malignancy. The algorithm for segmenting nodule shapes in serial CT scans utilizes a sparse combination of training shapes (SCoTS). This algorithm captures a sparse representation of a shape in the input data through a linear span of previously delineated shapes in a training repository. The model updates the shape prior over level set iterations and captures variability in shapes by a sparse combination of the training data. The level set evolution is therefore driven by a data term as well as a term capturing valid prior shapes. During evolution, the influence of the shape prior is adjusted based on shape reconstruction, with the assigned weight determined from the degree of sparsity of the representation. The discriminative nature of sparse representation affords us the opportunity to compare nodules' variations at consecutive time points and to predict malignancy. Experimental validation of the proposed segmentation algorithm has been demonstrated on 542 3-D lung nodules from the LIDC-IDRI database, which includes radiologist-delineated nodule boundaries. The effectiveness of the proposed deep learning and dictionary learning architectures for malignancy prediction has been demonstrated on CT data from 370 biopsied subjects collected from the NLST database. Each subject in this database had at least two serial CT scans at two separate time points one year apart. The proposed RNN CAD system achieved an ROC area under the curve (AUC) of 0.87 when validated on CT data from nodules at the second sequential time point, and 0.83 based on the dictionary learning method; however, when nodule shape change and appearance were combined, the classifier performance improved to AUC = 0.89.
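    A hedged sketch (PyTorch) of the CNN + RNN idea described above: a small CNN encodes in-plane features for each slice and a GRU aggregates them across slices to capture inter-slice context. All layer sizes and the choice of a GRU are assumptions, not the thesis's exact design.

    import torch
    import torch.nn as nn

    class SliceCNN(nn.Module):
        """Per-slice feature extractor (in-plane features); sizes are illustrative."""
        def __init__(self, feat_dim=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, feat_dim),
            )

        def forward(self, x):
            return self.net(x)

    class CnnRnnClassifier(nn.Module):
        """A CNN encodes each slice, a GRU aggregates the slice sequence
        (inter-slice context), and a linear head scores malignancy."""
        def __init__(self, feat_dim=64, hidden=64):
            super().__init__()
            self.cnn = SliceCNN(feat_dim)
            self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, volume):                    # volume: (N, S, 1, H, W)
            n, s = volume.shape[:2]
            feats = self.cnn(volume.flatten(0, 1))    # (N*S, feat_dim)
            feats = feats.view(n, s, -1)
            _, h = self.rnn(feats)                    # h: (1, N, hidden)
            return torch.sigmoid(self.head(h[-1]))    # malignancy probability

    prob = CnnRnnClassifier()(torch.randn(2, 10, 1, 64, 64))   # -> shape (2, 1)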

    Deep learning for identifying Lung Diseases

    Growing health problems, such as lung diseases, especially in children and the elderly, require better diagnostic methods, such as computer-based solutions, and it is crucial to detect and treat these problems early. The purpose of this article is to design and implement a new computer vision-based algorithm for lung disease diagnosis that recognizes lung diseases better than previous models, in order to reduce lung-related health problems and costs. In addition, we have improved the accuracy of detecting five lung diseases, which helps doctors use computers to address this problem at an early stage.

    Framework for progressive segmentation of chest radiograph for efficient diagnosis of inert regions

    Segmentation is one of the most essential steps required to identify the inert object in a chest x-ray. A review of existing segmentation techniques for chest x-rays, as well as other vital organs, was performed. The main objective was to find out whether existing systems offer accuracy at the cost of recursive and complex operations. The proposed system contributes a framework that offers a good balance between computational performance and segmentation performance. Given an input chest x-ray, the system performs a progressive search for similar images on the basis of a similarity score with the queried image. A region-based shape descriptor is applied to extract features exclusively for identifying the lung region within the thoracic region, followed by contour adjustment. The final segmentation outcome shows accurate identification followed by segmentation of the apical and costophrenic regions of the lung. Comparative analysis proved that the proposed system offers better segmentation performance than existing systems.
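    A minimal sketch of the retrieval step described above, assuming Hu moments (via OpenCV) as a stand-in for the unspecified region-based shape descriptor and Euclidean distance as the similarity score; none of these specific choices come from the paper.

    import cv2
    import numpy as np

    def shape_descriptor(mask: np.ndarray) -> np.ndarray:
        """Region-based shape descriptor of a binary lung mask (log-scaled Hu moments)."""
        hu = cv2.HuMoments(cv2.moments(mask.astype(np.uint8))).flatten()
        return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

    def rank_similar(query_mask, reference_masks):
        """Return indices of reference masks sorted by descriptor distance (most similar first)."""
        q = shape_descriptor(query_mask)
        scores = [np.linalg.norm(q - shape_descriptor(m)) for m in reference_masks]
        return np.argsort(scores)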

    3D Lung Nodule Classification in Computed Tomography Images

    Lung cancer is the leading cause of cancer death worldwide. One of the reasons is the absence of symptoms at an early stage, which means that it is only discovered at a later stage, when treatment is more difficult [1]. Furthermore, the diagnosis, frequently made by reading computed tomography (CT) scans, is regularly subject to errors. One of the reasons is the variation among doctors' opinions regarding the diagnosis of the same nodule [2,3]. The use of computer-aided diagnosis (CADx) systems can be of great help with this problem by assisting doctors in diagnosis with a second opinion. Although their efficiency has already been proven [4], they often end up not being used because doctors cannot understand the "how and why" of CADx diagnostic results and ultimately do not trust the system [5]. To increase radiologists' confidence in the CADx system, it is proposed that, along with the results of malignancy prediction, there is also evidence that explains those malignancy results. There are some visible features in lung nodules that are correlated with malignancy. Since humans are able to visually identify these characteristics and correlate them with nodule malignancy, one way to present such evidence is to predict those characteristics. To obtain these predictions, it is proposed to use deep learning approaches; convolutional neural networks have been shown to outperform state-of-the-art results in medical image analysis [6]. To predict the characteristics and malignancy in the CADx system, the HSCNN architecture, a deep hierarchical semantic convolutional neural network proposed by Shen et al. [7], will be used (a rough sketch of this hierarchical design follows the reference list below). The Lung Image Database Consortium image collection (LIDC-IDRI) public dataset is frequently used as input for lung cancer CADx systems. The LIDC-IDRI consists of thoracic CT scans, with a large quantity and variability of data. For most of the nodules, this dataset has doctors' evaluations of 9 different characteristics. A recurrent problem in those evaluations is the subjectivity of the doctors' interpretation of what each characteristic is. For some characteristics, this can result in great divergence among evaluations of the same nodule, which makes the inclusion of those evaluations as input to CADx systems less useful than it could be. To reduce this subjectivity, the creation of a metric that makes the classification of characteristics more objective is proposed. For this, bibliographic and LIDC-IDRI dataset reviews are planned. Taking this new metric into account, validated afterwards by doctors from Hospital de São João, a reclassification of the LIDC-IDRI dataset will be made. This way it will be possible to use all the relevant characteristics as input. The principal objective of this dissertation is to develop a lung nodule CADx system methodology which promotes the confidence of specialists in its use. This will be done by classifying lung nodules according to characteristics relevant to diagnosis and malignancy. The reclassified LIDC-IDRI dataset will be used as input for the CADx system, and the architecture used for predicting the characteristics and malignancy results will be the HSCNN. To evaluate the classification, sensitivity, specificity, and the area under the receiver operating characteristic (ROC) curve will be used.
    The proposed solution may be used to improve a CADx system, LNDetector, currently in development by the Center for Biomedical Engineering Research (C-BER) group at INESC TEC, within which this work will be developed.
    [1] - S. Sone, M. Hasegawa, and S. Takashima. Growth rate of small lung cancers detected on mass CT screening. The British Journal of Radiology, pages 1252-1259.
    [2] - B. Zhao, Y. Tan, D. J. Bell, S. E. Marley, P. Guo, H. Mann, M. L. Scott, L. H. Schwartz, and D. C. Ghiorghiu. Exploring intra- and inter-reader variability in uni-dimensional, bi-dimensional, and volumetric measurements of solid tumors on CT scans reconstructed at different slice intervals. European Journal of Radiology 82, pages 959-968, 2013.
    [3] - H. T. Winer-Muram. The solitary pulmonary nodule. Radiology, 239, pages 39-49, 2006.
    [4] - P. Huang, S. Park, R. Yan, J. Lee, L. C. Chu, C. T. Lin, A. Hussien, J. Rathmell, B. Thomas, C. Chen, et al. Added value of computer-aided CT image features for early lung cancer diagnosis with small pulmonary nodules: a matched case-control study. Radiology 286, pages 286-295, 2017.
    [5] - W. Jorritsma, F. Cnossen, and P. van Ooijen. Improving the radiologist-CAD interaction: designing for appropriate trust. Clinical Radiology, 70, 10 2014.
    [6] - T. Brosch, Y. Yoo, D. Li, A. Traboulsee, and R. Tam. Modeling the variability in brain morphology and lesion distribution in multiple sclerosis by deep learning. Volume 17, 09 2014.
    [7] - S. Shen, S. X. Han, D. R. Aberle, A. A. T. Bui, and W. Hsu. An interpretable deep hierarchical semantic convolutional neural network for lung nodule malignancy classification. June 201
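    A hedged sketch (PyTorch) of the hierarchical idea behind HSCNN [7] as used in this work: a shared backbone feeds one head per semantic characteristic, and the malignancy head consumes both the backbone features and the semantic outputs. Layer sizes and the number of characteristics are assumptions, not the published architecture.

    import torch
    import torch.nn as nn

    class HierarchicalSemanticCNN(nn.Module):
        """Rough HSCNN-style sketch: shared 3D backbone, one head per semantic
        characteristic (e.g. margin, texture, ...), and a malignancy head that
        also sees the semantic predictions. Sizes are assumptions."""
        def __init__(self, num_characteristics=8):
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            )
            self.semantic_heads = nn.ModuleList(
                [nn.Linear(32, 1) for _ in range(num_characteristics)]
            )
            self.malignancy_head = nn.Linear(32 + num_characteristics, 1)

        def forward(self, x):                               # x: (N, 1, D, H, W)
            feats = self.backbone(x)
            semantics = torch.cat([torch.sigmoid(h(feats)) for h in self.semantic_heads], dim=1)
            malignancy = torch.sigmoid(self.malignancy_head(torch.cat([feats, semantics], dim=1)))
            return semantics, malignancy

    sem, mal = HierarchicalSemanticCNN()(torch.randn(2, 1, 32, 64, 64))  # (2, 8), (2, 1)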

    Optimization of neural networks for deep learning and applications to CT image segmentation

    During the last few years, AI development in deep learning has been moving so fast that even important researchers, politicians, and entrepreneurs are signing petitions to try to slow it down. The newest methods for natural language processing and image generation are achieving results so unbelievable that people are seriously starting to think they can be dangerous for society. In reality, they are not dangerous (at the moment), even if we have to admit we have reached a point where we no longer have control over the flux of data inside deep networks. It is impossible to open a modern deep neural network and interpret how it processes information and, in many cases, to explain how or why it gives back a particular result. One of the goals of this doctoral work has been to study the behavior of weights in convolutional neural networks and in transformers. We present a work that demonstrates how to invert 3x3 convolutions after training a neural network able to learn how to classify images, with the future aim of having precisely invertible convolutional neural networks. We demonstrate that a simple network can learn to classify images on an open-source dataset without loss in accuracy with respect to a non-invertible one, with the ability to reconstruct the original image without detectable error (on 8-bit images) in up to 20 convolutions stacked in a row. We present a thorough comparison between our method and the standard one. We tested the performance of the five most used transformers for image classification on an open-source dataset. By studying the embedded matrices, we have been able to provide two criteria that can help transformers learn with a training time reduction of up to 30% and with no impact on classification accuracy. The evolution of deep learning techniques is also touching the field of digital health. With tens of thousands of new start-ups and more than $1B of investments in the last year alone, this field is growing rapidly and promises to revolutionize healthcare. In this thesis, we present several neural networks for the segmentation of lungs, lung nodules, and areas affected by pneumonia induced by COVID-19 in chest CT scans. The architectures we used are all residual convolutional neural networks inspired by UNet and Inception. We customized them with novel loss functions and layers designed to achieve high performance on these particular applications. The errors on the surface of nodule segmentation masks do not exceed 1 mm in more than 99% of cases. Our algorithm for COVID-19 lesion detection has a specificity of 100% and an overall accuracy of 97.1%. In general, it surpasses the state of the art in all the considered statistics, using UNet as a benchmark. Combining these with other algorithms able to detect and predict lung cancer, the whole work was presented in a European innovation program and judged to be of high interest by worldwide experts. With this work, we set the basis for the future development of better AI tools in healthcare and for scientific investigation into the fundamentals of deep learning.
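    A hedged sketch (PyTorch) of the kind of building blocks mentioned above, a residual convolutional block of the sort used in UNet-style segmenters and a soft Dice loss for segmentation masks; these are illustrative assumptions, not the thesis's actual networks or its novel loss functions.

    import torch
    import torch.nn as nn

    class ResidualBlock(nn.Module):
        """Residual 3D convolutional block (two convs with a skip connection)."""
        def __init__(self, channels):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv3d(channels, channels, 3, padding=1), nn.BatchNorm3d(channels), nn.ReLU(),
                nn.Conv3d(channels, channels, 3, padding=1), nn.BatchNorm3d(channels),
            )
            self.act = nn.ReLU()

        def forward(self, x):
            return self.act(x + self.body(x))      # skip connection around two convs

    def dice_loss(pred, target, eps=1e-6):
        """Soft Dice loss, a common choice for lung/nodule segmentation masks."""
        pred, target = pred.flatten(1), target.flatten(1)
        inter = (pred * target).sum(dim=1)
        return 1 - ((2 * inter + eps) / (pred.sum(dim=1) + target.sum(dim=1) + eps)).mean()

    x = torch.randn(1, 8, 16, 32, 32)
    mask_pred = torch.sigmoid(ResidualBlock(8)(x)[:, :1])   # toy 1-channel prediction
    print(dice_loss(mask_pred, torch.randint(0, 2, (1, 1, 16, 32, 32)).float()))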