
    Lung Nodule Classification by the Combination of Fusion Classifier and Cascaded Convolutional Neural Networks

    Full text link
    Lung nodule classification is a class-imbalanced problem, as nodules are found with much lower frequency than non-nodules. In class-imbalanced problems, conventional classifiers tend to be overwhelmed by the majority class and to ignore the minority class. In our previous study, we showed that cascaded convolutional neural networks can classify nodule candidates precisely on a class-imbalanced nodule candidate data set. In this paper, we propose a Fusion classifier used in conjunction with the cascaded convolutional neural network models. To fuse the models, nodule probabilities are first computed with the convolutional neural network models; the Fusion classifier is then trained and tested on these nodule probabilities. In Free Receiver Operating Characteristic (FROC) curve analysis, the proposed method achieved sensitivities of 94.4% and 95.9% at 4 and 8 false positives per scan, respectively.
    Comment: Draft of ISBI 2018. arXiv admin note: text overlap with arXiv:1703.0031
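The two-stage scheme in this abstract (first compute per-model nodule probabilities, then train a classifier on those probabilities) is essentially a stacking ensemble. A minimal NumPy sketch, with synthetic imbalanced data and a logistic-regression fusion stage standing in for the paper's CNNs and Fusion classifier:

```python
import numpy as np

# Hedged sketch of probability-level fusion (stacking). The three "models",
# the data, and the logistic fusion stage are illustrative placeholders,
# not the paper's cascaded CNNs or its actual Fusion classifier.

rng = np.random.default_rng(0)

n = 200
labels = (rng.random(n) < 0.1).astype(float)          # ~10% nodules (imbalanced)
# Pretend per-model nodule probabilities: positives score high, negatives low.
probs = np.clip(labels[:, None] * 0.6 + rng.random((n, 3)) * 0.4, 0.0, 1.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train logistic-regression fusion weights by gradient descent on log-loss.
X = np.hstack([probs, np.ones((n, 1))])               # add bias column
w = np.zeros(X.shape[1])
for _ in range(2000):
    p = sigmoid(X @ w)
    w -= 0.1 * X.T @ (p - labels) / n                 # log-loss gradient step

fused = sigmoid(X @ w)                                # fused nodule probability
print(fused.shape)  # (200,)
```

After training, the fused probability ranks the synthetic nodules above the non-nodules, which is the behavior a FROC analysis would then quantify per operating point.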

    Development and application in clinical practice of Computer-aided Diagnosis systems for the early detection of lung cancer

    Get PDF
    Lung cancer is the main cause of cancer-related deaths in both Europe and the United States, largely because it is often diagnosed at late stages of the disease, when the survival rate is very low compared to the first, asymptomatic stage. Lung cancer screening with annual low-dose Computed Tomography (CT) reduces lung cancer 5-year mortality by about 20% compared to annual screening with chest radiography. However, detecting pulmonary nodules in low-dose chest CT scans is a very difficult task for radiologists because of the large number (300–500) of slices to be analyzed. To support radiologists, researchers have developed Computer-aided Detection (CAD) algorithms for the automated detection of pulmonary nodules in chest CT scans. Despite the proven benefit of these systems to radiologists' detection sensitivity, CADs have not yet spread into clinical practice. The main objective of this thesis is to investigate and tackle the issues underlying this inconsistency. In particular, in Chapter 2 we introduce M5L, a fully automated Web- and Cloud-based CAD for the automated detection of pulmonary nodules in chest CT scans. This system introduces a new paradigm in clinical practice by making CAD systems available without requiring radiologists to install any additional software or hardware, providing an innovative, cost-effective approach for clinical structures. In Chapter 3 we present our international challenge aiming at a large-scale validation of state-of-the-art CAD systems. We also show that combining different CAD systems reaches performance much higher than that of any stand-alone system developed so far. Our results open the possibility of introducing very high-performing CAD systems, which miss only a tiny fraction of clinically relevant nodules, into clinical practice. Finally, we tested the performance of M5L on clinical data sets.
    In Chapter 4 we present the results of its clinical validation, which demonstrate the positive impact of CAD as a second reader in the diagnosis of pulmonary metastases in oncological patients with extra-thoracic cancers. The proposed approaches have the potential to exploit at best the features of different, independently developed algorithms for any possible clinical application, setting up a collaborative environment for algorithm comparison, combination, clinical validation and, if all of the above are successful, clinical practice.

    Intermediate Fusion Approach for Pneumonia Classification on Imbalanced Multimodal Data

    Get PDF
    In medical practice, the primary diagnosis of diseases should be carried out quickly and, if possible, automatically. The processing of multimodal data in medicine has become a ubiquitous technique in the classification, prediction, and detection of diseases, and pneumonia is one of the most common lung diseases. In our study, we used chest X-ray images as the first modality and the results of a patient's laboratory studies as the second modality to detect pneumonia. The architecture of the multimodal deep learning model was based on intermediate fusion. The model was trained on balanced and imbalanced data, in which pneumonia was present in 50% and 9% of the total number of cases, respectively. For a more objective evaluation of the results, we compared our model's performance with several other open-source models on our data. The experiments demonstrate the high performance of the proposed two-modality model even with imbalanced classes (up to 96.6%), compared to single-modality models' results (up to 93.5%). We computed several integral estimates of the proposed model's performance to cover all aspects of the multimodal data and architecture features: accuracy, ROC AUC, PR AUC, F1 score, and the Matthews correlation coefficient. Using these metrics, we demonstrated that the proposed model can properly classify the disease. Experiments also showed that the performance of the model trained on imbalanced data was slightly higher than that of the other models considered.
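Intermediate fusion, as described in this abstract, encodes each modality separately and concatenates the latent features before a shared classification head. A minimal forward-pass sketch with placeholder layer sizes and random weights (the paper's actual architecture is not reproduced here):

```python
import numpy as np

# Hedged sketch of intermediate fusion: modality-specific encoders, feature
# concatenation, shared classifier head. All shapes and weights are toy
# illustrations, not the model from the study.

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy inputs: a flattened X-ray feature vector and a lab-results vector.
xray = rng.random((1, 64))      # image modality (e.g. a CNN backbone output)
labs = rng.random((1, 10))      # tabular modality (laboratory studies)

# Modality-specific encoders (one dense layer each).
W_img = rng.standard_normal((64, 16))
W_lab = rng.standard_normal((10, 16))
z_img, z_lab = relu(xray @ W_img), relu(labs @ W_lab)

# Intermediate fusion: concatenate latent features, then classify.
fused = np.concatenate([z_img, z_lab], axis=1)        # shape (1, 32)
W_out = rng.standard_normal((32, 1))
prob_pneumonia = sigmoid(fused @ W_out)[0, 0]         # scalar probability
print(fused.shape)  # (1, 32)
```

The fusion point is the design choice: "early" fusion would concatenate raw inputs, "late" fusion would average per-modality predictions, while intermediate fusion merges learned representations.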

    Multi-Modal Medical Imaging Analysis with Modern Neural Networks

    Get PDF
    Medical imaging is an important non-invasive tool for diagnostic and treatment purposes in medical practice. However, interpreting medical images is a time-consuming and challenging task. Computer-aided diagnosis (CAD) tools have been used in clinical practice to assist medical practitioners in medical imaging analysis since the 1990s. Most of the current generation of CADs are built on conventional computer vision techniques, such as manually defined feature descriptors. Deep convolutional neural networks (CNNs) provide robust end-to-end methods that can automatically learn feature representations, making CNNs a promising building block of next-generation CADs. However, applying CNNs to medical imaging analysis tasks is challenging. This dissertation addresses three major issues that obstruct the use of modern deep neural networks in medical image analysis: lack of domain knowledge in architecture design, lack of labeled data in model training, and lack of uncertainty estimation in deep neural networks. We evaluated the proposed methods on six large, clinically relevant datasets. The results show that the proposed methods can significantly improve deep neural network performance on medical imaging analysis tasks.

    A Systematic Survey of Classification Algorithms for Cancer Detection

    Get PDF
    Cancer is a fatal disease induced by a number of inherited defects and pathological changes. Malignant cells are dangerous abnormal growths that can develop in any part of the human body, posing a life-threatening risk. To establish what treatment options are available, cancer, also referred to as a tumor, should be detected early and precisely. Image classification for cancer diagnosis is a complex process influenced by a diverse set of parameters. In recent years, artificial vision frameworks have focused on image classification as a key problem. Most current approaches rely on hand-crafted features to represent an image in a specific manner, with learned classifiers such as random forests and decision trees used to reach a final judgment. The difficulty arises when there is a vast number of images to consider. Hence, in this paper we analyze, review, categorize, and discuss current breakthroughs in cancer detection utilizing machine learning techniques for image recognition and classification. We review machine learning approaches such as logistic regression (LR), Naïve Bayes (NB), K-nearest neighbors (KNN), decision trees (DT), and Support Vector Machines (SVM).
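One of the surveyed classifiers, K-nearest neighbors, is simple enough to sketch directly. This is an illustrative NumPy implementation on toy 2-D feature vectors; the data and the choice of k are placeholders, not from any study in the survey:

```python
import numpy as np

# Hedged sketch of KNN classification: majority vote among the k training
# points closest (in Euclidean distance) to the query point.

def knn_predict(X_train, y_train, x, k=3):
    """Return the majority class among the k nearest training points to x."""
    dists = np.linalg.norm(X_train - x, axis=1)       # distance to each point
    nearest = y_train[np.argsort(dists)[:k]]          # labels of k nearest
    return int(np.bincount(nearest).argmax())         # majority vote

# Toy "image feature" vectors: class 0 near the origin, class 1 near (1, 1).
X_train = np.array([[0.0, 0.0], [0.1, 0.2], [0.9, 1.0], [1.0, 0.8], [0.2, 0.1]])
y_train = np.array([0, 0, 1, 1, 0])

print(knn_predict(X_train, y_train, np.array([0.95, 0.9])))  # 1
print(knn_predict(X_train, y_train, np.array([0.05, 0.1])))  # 0
```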

    Deep Learning in Chest Radiography: From Report Labeling to Image Classification

    Get PDF
    Chest X-ray (CXR) is the most common examination performed by a radiologist. Through CXR, radiologists must correctly and immediately diagnose a patient's thorax to avoid the progression of life-threatening diseases. Certified radiologists are hard to find, and stress, fatigue, and lack of experience all affect the quality of an examination. As a result, a technique to aid radiologists in reading CXRs, and a tool to help bridge the gap for communities without adequate access to radiological services, would yield a huge advantage for patients and patient care. This thesis considers one essential task, CXR image classification, with Deep Learning (DL) technologies from three aspects: understanding the intersection of CXR interpretation and DL; extracting multiple image labels from radiology reports to facilitate the training of DL classifiers; and developing CXR classifiers using DL. First, we explain the core concepts and categorize the existing data and literature for researchers entering this field, for ease of reference. Using CXRs and DL for medical image diagnosis is a relatively recent field of study because large, publicly available CXR datasets have not existed for very long. Second, we contribute to labeling large datasets with multi-label image annotations extracted from CXR reports. We describe the development of a DL-based report labeler named CXRlabeler, focusing on inductive sequential transfer learning. Lastly, we explain the design of three novel Convolutional Neural Network (CNN) classifiers, i.e., MultiViewModel, Xclassifier, and CovidXrayNet, for binary, multi-label, and multi-class image classification, respectively. This dissertation showcases significant progress in the field of automated CXR interpretation using DL; all source code used is publicly available. It provides methods and insights that can be applied to other medical image interpretation tasks.
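The three classifier types named in this abstract differ chiefly in their output heads. A hedged sketch of that distinction (the logits are illustrative; the actual MultiViewModel, Xclassifier, and CovidXrayNet networks are not reproduced): multi-label heads apply an independent sigmoid per label, while a multi-class head applies a softmax over mutually exclusive classes.

```python
import numpy as np

# Illustrative output heads for the same logit vector. Binary and multi-label
# classification use independent sigmoids (each finding may co-occur with
# others); multi-class classification uses a softmax (exactly one class).

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())          # subtract max for numerical stability
    return e / e.sum()

logits = np.array([2.0, -1.0, 0.5])  # placeholder CNN outputs for 3 findings

multi_label = sigmoid(logits)        # each label decided independently
multi_class = softmax(logits)        # probabilities sum to 1, one class wins

print(multi_label.round(3))
print(multi_class.round(3))
```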

    Multiple Instance Learning: A Survey of Problem Characteristics and Applications

    Full text link
    Multiple instance learning (MIL) is a form of weakly supervised learning in which training instances are arranged in sets, called bags, and a label is provided for the entire bag. This formulation is gaining interest because it naturally fits various problems and makes it possible to leverage weakly labeled data. Consequently, it has been used in diverse application fields such as computer vision and document classification. However, learning from bags raises important challenges that are unique to MIL. This paper provides a comprehensive survey of the characteristics which define and differentiate the types of MIL problems. Until now, these problem characteristics have not been formally identified and described; as a result, the variations in performance of MIL algorithms from one data set to another are difficult to explain. In this paper, MIL problem characteristics are grouped into four broad categories: the composition of the bags, the types of data distribution, the ambiguity of instance labels, and the task to be performed. Methods specialized to address each category are reviewed. Then, the extent to which these characteristics manifest themselves in key MIL application areas is described. Finally, experiments are conducted to compare the performance of 16 state-of-the-art MIL methods on selected problem characteristics. This paper provides insight into how problem characteristics affect MIL algorithms, along with recommendations for future benchmarking and promising avenues for research.
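The standard MIL assumption underlying many of the surveyed methods is that a bag is positive if at least one of its instances is positive, so a bag-level score can be taken as the maximum over instance-level scores. A minimal sketch with illustrative scores:

```python
import numpy as np

# Hedged sketch of max-pooling aggregation under the standard MIL assumption:
# the bag score is the score of its most positive instance. Instance scores
# here are made up for illustration.

def bag_score(instance_scores):
    """Aggregate instance-level scores into one bag-level score (max pooling)."""
    return float(np.max(instance_scores))

positive_bag = np.array([0.10, 0.05, 0.92, 0.20])   # one strong instance
negative_bag = np.array([0.10, 0.20, 0.15])         # no positive instances

print(bag_score(positive_bag))  # 0.92
print(bag_score(negative_bag))  # 0.2
```

Other aggregation choices (mean pooling, attention weighting) relax this assumption, which is one axis along which the surveyed problem characteristics differ.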

    Tumor heterogeneity in PET-CT images

    Get PDF
    Unpublished thesis from the Universidad Complutense de Madrid, Facultad de Ciencias Físicas, Departamento de Estructura de la Materia, Física Térmica y Electrónica, defended 28/01/2021. Cancer is a leading cause of morbidity and mortality [1]. The most frequent cancers worldwide are non-small cell lung carcinoma (NSCLC) and breast cancer [2], and their management is a challenging task [3]. Tumor diagnosis is usually made through biopsy [4]. However, medical imaging also plays an important role in diagnosis, staging, response to treatment, and recurrence assessment [5]. Tumor heterogeneity is recognized to be involved in cancer treatment failure, with worse clinical outcomes for highly heterogeneous tumors [6,7]. It leads to the existence of tumor sub-regions with different biological behavior (some more aggressive and treatment-resistant than others) [8-10], characterized by different patterns of vascularization, vessel permeability, metabolism, cell proliferation, cell death, and other features that can be measured by modern medical imaging techniques, including positron emission tomography/computed tomography (PET/CT) [10-12]. Thus, assessing tumor heterogeneity through medical images could allow the prediction of therapy response and long-term outcomes in patients with cancer [13]. PET/CT has become essential in oncology [14,15] and is usually evaluated through semiquantitative metabolic parameters, such as the maximum/mean standard uptake value (SUVmax, SUVmean) or the metabolic tumor volume (MTV), which are valuable as prognostic image-based biomarkers in several tumors [16-17] but do not assess tumor heterogeneity. Likewise, fluorodeoxyglucose (18F-FDG) PET/CT is important for differentiating malignant from benign solitary pulmonary nodules (SPN), thus reducing the number of patients who undergo unnecessary surgical biopsies.
    Several publications have shown that some quantitative image features, extracted from medical images, are suitable for diagnosis, tumor staging, and the prognosis of treatment response and long-term evolution in cancer patients [18-20]. The process of extracting image features and relating them to clinical or biological variables is called "Radiomics" [9,20-24]. Radiomic parameters, such as textural features, have been related directly to tumor heterogeneity [25]. This thesis investigated the relationships of tumor heterogeneity, assessed by 18F-FDG-PET/CT texture analysis, with metabolic parameters and pathologic staging in patients with NSCLC, and explored the diagnostic performance of different metabolic, morphologic, and clinical criteria for classifying solitary pulmonary nodules (SPN) as malignant or benign. Furthermore, 18F-FDG-PET/CT radiomic features of patients with recurrent/metastatic breast cancer were used to construct predictive models of response to chemotherapy, based on an optimal combination of several feature selection and machine learning (ML) methods...
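The semiquantitative parameters named in this abstract (SUVmax, SUVmean, MTV) can be sketched on a toy SUV volume. The 40%-of-SUVmax segmentation threshold used below is a common convention in PET analysis, not necessarily the one used in the thesis, and the voxel size is illustrative:

```python
import numpy as np

# Hedged sketch: compute SUVmax, SUVmean, and metabolic tumor volume (MTV)
# from a toy standardized-uptake-value (SUV) volume, segmenting the tumor as
# all voxels above 40% of SUVmax. Values are illustrative.

suv = np.zeros((4, 4, 4))
suv[1:3, 1:3, 1:3] = [[[5.0, 4.0], [3.0, 1.0]],
                      [[4.5, 2.0], [1.5, 0.5]]]      # toy tumor uptake
voxel_volume_ml = 0.064                              # 4 mm isotropic voxels

suv_max = suv.max()
mask = suv >= 0.4 * suv_max                          # 40%-of-SUVmax region
suv_mean = suv[mask].mean()                          # mean SUV inside region
mtv_ml = mask.sum() * voxel_volume_ml                # metabolic tumor volume

print(suv_max, round(suv_mean, 3), round(mtv_ml, 3))  # 5.0 3.7 0.32
```

Texture (radiomic) features go beyond these statistics by quantifying the spatial arrangement of uptake within the same segmented region, which is how texture analysis captures heterogeneity.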

    Machine Learning/Deep Learning in Medical Image Processing

    Get PDF
    Many recent studies on medical image processing have involved the use of machine learning (ML) and deep learning (DL). This special issue, “Machine Learning/Deep Learning in Medical Image Processing”, has been launched to provide an opportunity for researchers in the area of medical image processing to highlight recent developments made in their fields with ML/DL. Seven excellent papers that cover a wide variety of medical/clinical aspects have been selected for this special issue.

    Advanced Computational Methods for Oncological Image Analysis

    Get PDF
    Cancer is the second most common cause of death worldwide and encompasses highly variable clinical and biological scenarios. Some of the current clinical challenges are (i) early diagnosis of the disease and (ii) precision medicine, which allows for treatments targeted to specific clinical cases. The ultimate goal is to optimize the clinical workflow by combining accurate diagnosis with the most suitable therapies. To this end, large-scale machine learning research can define associations among clinical, imaging, and multi-omics studies, making it possible to provide reliable diagnostic and prognostic biomarkers for precision oncology. Such reliable computer-assisted methods (i.e., artificial intelligence), together with clinicians’ unique knowledge, can be used to properly handle typical issues in evaluation/quantification procedures (i.e., operator dependence and time-consuming tasks). These technical advances can significantly improve result repeatability in disease diagnosis and guide toward appropriate cancer care. Indeed, the need to apply machine learning and computational intelligence techniques has steadily increased to effectively perform image processing operations—such as segmentation, co-registration, classification, and dimensionality reduction—and multi-omics data integration.