
    Lung nodule diagnosis and cancer histology classification from computed tomography data by convolutional neural networks: A survey

    Lung cancer is among the deadliest cancers. Besides lung nodule classification and diagnosis, developing non-invasive systems to classify lung cancer histological types/subtypes may help clinicians make timely, targeted treatment decisions, with a positive impact on patients' comfort and survival rate. As convolutional neural networks have driven significant improvements in the accuracy of lung cancer diagnosis, with this survey we intend to: show the contribution of convolutional neural networks not only in identifying malignant lung nodules but also in classifying lung cancer histological types/subtypes directly from computed tomography data; point out the strengths and weaknesses of slice-based and scan-based approaches employing convolutional neural networks; and highlight the challenges and prospective solutions to successfully applying convolutional neural networks to such classification tasks. To this aim, we conducted a comprehensive analysis of relevant Scopus-indexed studies on lung nodule diagnosis and cancer histology classification up to January 2022, dividing the investigation into convolutional neural network-based approaches fed with planar or volumetric computed tomography data. Although the application of convolutional neural networks to lung nodule diagnosis and cancer histology classification is a valid strategy, several challenges emerged, chiefly the lack of publicly accessible annotated data, together with limited reproducibility and clinical interpretability. We believe that this survey will be helpful for future studies on lung nodule diagnosis and cancer histology classification prior to lung biopsy by means of convolutional neural networks.
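The slice-based approaches contrasted in this survey typically classify each 2D CT slice independently and then aggregate the per-slice predictions into one scan-level decision. A minimal sketch of one common aggregation strategy, probability averaging (illustrative only; the function name, threshold, and inputs are assumptions, not taken from any surveyed method):

```python
def scan_level_prediction(slice_probs, threshold=0.5):
    """Aggregate per-slice malignancy probabilities into a single
    scan-level decision by averaging (a common slice-based strategy;
    scan-based 3D models instead consume the whole volume at once).

    slice_probs: list of floats in [0, 1], one per CT slice.
    Returns (mean_probability, is_malignant).
    """
    if not slice_probs:
        raise ValueError("need at least one slice probability")
    mean_prob = sum(slice_probs) / len(slice_probs)
    return mean_prob, mean_prob >= threshold

# Example: three slices, two of which suggest malignancy
prob, malignant = scan_level_prediction([0.9, 0.8, 0.2])
```

The weakness the survey notes for such slice-based pipelines is that aggregation discards inter-slice context, which volumetric (scan-based) networks retain at the cost of more data and memory.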

    Innovations in thoracic imaging: CT, radiomics, AI and x-ray velocimetry

    In recent years, pulmonary imaging has seen enormous progress, with the introduction, validation and implementation of new hardware and software. There is a general trend from mere visual evaluation of radiological images to quantification of abnormalities and biomarkers, and assessment of 'non-visual' markers that contribute to establishing diagnosis or prognosis. Important catalysts to these developments in thoracic imaging include new indications (like computed tomography [CT] lung cancer screening) and the COVID-19 pandemic. This review focuses on developments in CT, radiomics, artificial intelligence (AI) and x-ray velocimetry for imaging of the lungs. Recent developments in CT include the potential for ultra-low-dose CT imaging for lung nodules, and the advent of a new generation of CT systems based on photon-counting detector technology. Radiomics has demonstrated potential for predictive and prognostic tasks, particularly in lung cancer, previously not achievable by visual inspection by radiologists, by exploiting high-dimensional patterns (mostly texture-related) in medical imaging data. Deep learning technology has revolutionized the field of AI and, as a result, the performance of AI algorithms is approaching human performance for an increasing number of specific tasks. X-ray velocimetry integrates x-ray (fluoroscopic) imaging with unique image processing to produce quantitative four-dimensional measurement of lung tissue motion, and accurate calculations of lung ventilation.
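The high-dimensional texture patterns that radiomics exploits are commonly derived from a gray-level co-occurrence matrix (GLCM). A minimal, self-contained sketch of one such feature, GLCM contrast over horizontally adjacent pixels (illustrative only, not a production radiomics pipeline; real toolkits handle multiple offsets, quantization, and 3D neighborhoods):

```python
from collections import Counter

def glcm_contrast(image):
    """GLCM 'contrast' texture feature over horizontally adjacent
    pixel pairs: sum over (i, j) of P(i, j) * (i - j)**2, where P is
    the normalized co-occurrence probability of gray levels i and j.

    image: 2D list of integer gray levels.
    """
    pairs = Counter()
    for row in image:
        for a, b in zip(row, row[1:]):
            pairs[(a, b)] += 1
    total = sum(pairs.values())
    return sum(n / total * (i - j) ** 2 for (i, j), n in pairs.items())

# A tiny 3x3 patch with four gray levels (0-3)
patch = [[0, 0, 1],
         [1, 2, 2],
         [2, 3, 3]]
contrast = glcm_contrast(patch)  # higher = coarser local texture
```

Hundreds of such hand-crafted descriptors, computed inside a tumor mask, form the feature vector that radiomics models feed into predictive and prognostic classifiers.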

    Advanced machine learning methods for oncological image analysis

    Cancer is a major public health problem, accounting for an estimated 10 million deaths worldwide in 2020 alone. Rapid advances in the field of image acquisition and hardware development over the past three decades have resulted in the development of modern medical imaging modalities that can capture high-resolution anatomical, physiological, functional, and metabolic quantitative information from cancerous organs. Therefore, the applications of medical imaging have become increasingly crucial in the clinical routines of oncology, providing screening, diagnosis, treatment monitoring, and non- or minimally-invasive evaluation of disease prognosis. The essential need for medical images, however, has resulted in the acquisition of a tremendous number of imaging scans. Considering the growing role of medical imaging data on one side and the challenges of manually examining such an abundance of data on the other, the development of computerized tools to automatically or semi-automatically examine the image data has attracted considerable interest. Hence, a variety of machine learning tools have been developed for oncological image analysis, aiming to assist clinicians with repetitive tasks in their workflow. This thesis aims to contribute to the field of oncological image analysis by proposing new ways of quantifying tumor characteristics from medical image data. Specifically, this thesis consists of six studies, the first two of which focus on introducing novel methods for tumor segmentation. The last four studies aim to develop quantitative imaging biomarkers for cancer diagnosis and prognosis. The main objective of Study I is to develop a deep learning pipeline capable of capturing the appearance of lung pathologies, including lung tumors, and integrating this pipeline into the segmentation networks to improve segmentation accuracy.
The proposed pipeline was tested on several comprehensive datasets, and the numerical quantifications show the superiority of the proposed prior-aware DL framework compared to the state of the art. Study II aims to address a crucial challenge faced by supervised segmentation models: dependency on large-scale labeled datasets. In this study, an unsupervised segmentation approach is proposed based on the concept of image inpainting to segment lung and head-and-neck tumors in images from single and multiple modalities. The proposed autoinpainting pipeline shows great potential in synthesizing high-quality tumor-free images and outperforms a family of well-established unsupervised models in terms of segmentation accuracy. Studies III and IV aim to automatically discriminate benign from malignant pulmonary nodules by analyzing low-dose computed tomography (LDCT) scans. In Study III, a dual-pathway deep classification framework is proposed to simultaneously take into account the local intra-nodule heterogeneities and the global contextual information. Study IV seeks to compare the discriminative power of a series of carefully selected conventional radiomics methods, end-to-end Deep Learning (DL) models, and deep feature-based radiomics analysis on the same dataset. The numerical analyses show the potential of fusing the learned deep features with radiomic features for boosting the classification power. Study V focuses on the early assessment of lung tumor response to the applied treatments by proposing a novel feature set that can be interpreted physiologically. This feature set was employed to quantify the changes in the tumor characteristics from longitudinal PET-CT scans in order to predict the overall survival status of the patients two years after the last session of treatments.
The discriminative power of the introduced imaging biomarkers was compared against the conventional radiomics, and the quantitative evaluations verified the superiority of the proposed feature set. Whereas Study V focuses on a binary survival prediction task, Study VI addresses the prediction of survival rate in patients diagnosed with lung and head-and-neck cancer by investigating the potential of spherical convolutional neural networks and comparing their performance against other types of features, including radiomics. While comparable results were achieved in intra-dataset analyses, the proposed spherical-based features show more predictive power in inter-dataset analyses. In summary, the six studies incorporate different imaging modalities and a wide range of image processing and machine-learning techniques in the methods developed for the quantitative assessment of tumor characteristics, and contribute to the essential procedures of cancer diagnosis and prognosis.
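The feature fusion explored in Study IV can be pictured, at its simplest, as concatenating the hand-crafted radiomic vector with the learned deep-feature vector before a conventional classifier. A schematic sketch with made-up feature values (an assumed, minimal reading of "fusing deep features into radiomic features", not the thesis implementation):

```python
def fuse_features(radiomic, deep):
    """Fuse hand-crafted radiomic features with learned deep features
    by concatenation into one vector for a downstream classifier
    (e.g. a random forest). In practice each feature would first be
    standardized across the training set so neither group dominates.
    """
    return list(radiomic) + list(deep)

# Hypothetical values: 3 radiomic descriptors + 4 deep activations
fused = fuse_features([12.5, 0.8, 340.0], [0.1, -0.4, 0.7, 0.2])
```

The appeal of this late fusion is that the two feature families are complementary: radiomics encodes interpretable, predefined texture and shape statistics, while deep features capture patterns learned directly from the data.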

    DeTraC: Transfer Learning of Class Decomposed Medical Images in Convolutional Neural Networks

    Due to the high availability of large-scale annotated image datasets, paramount progress has been made in deep convolutional neural networks (CNNs) for image classification tasks. CNNs enable learning highly representative and hierarchical local image features directly from data. However, the availability of annotated data, especially in the medical imaging domain, remains the biggest challenge in the field. Transfer learning can provide a promising and effective solution by transferring knowledge from generic image recognition tasks to medical image classification. However, due to irregularities in the dataset distribution, transfer learning usually fails to provide a robust solution. Class decomposition simplifies the class boundaries of a dataset, making them easier to learn, and can consequently deal with such irregularities in the data distribution. Motivated by this challenging problem, the paper presents the Decompose, Transfer, and Compose (DeTraC) approach, a novel CNN architecture based on class decomposition to improve the performance of medical image classification with transfer learning. DeTraC enables learning at the subclass level, which can be more separable, with a prospect of faster convergence. We validated our proposed approach with three different cohorts of chest X-ray images, histological images of human colorectal cancer, and digital mammograms. We compared DeTraC with state-of-the-art CNN models to demonstrate its high performance in terms of accuracy, sensitivity, and specificity.
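The final "compose" step of a decompose-transfer-compose scheme maps subclass-level predictions back onto the original classes. One simple way to realize it is to sum the softmax probabilities of each class's subclasses; the sketch below is an assumed, minimal illustration (the subclass names, the two-way split, and the probabilities are all hypothetical, not taken from the DeTraC paper):

```python
def compose(subclass_probs, subclass_to_class):
    """Map subclass-level probabilities back to the original classes
    by summing the probabilities of each class's subclasses (one
    simple realization of the 'compose' step, applied after training
    on decomposed subclass labels)."""
    class_probs = {}
    for sub, p in subclass_probs.items():
        cls = subclass_to_class[sub]
        class_probs[cls] = class_probs.get(cls, 0.0) + p
    return class_probs

# Each original class was decomposed into two subclasses, e.g. by
# clustering its deep features; names here are illustrative.
mapping = {"normal_0": "normal", "normal_1": "normal",
           "lesion_0": "lesion", "lesion_1": "lesion"}
probs = compose({"normal_0": 0.1, "normal_1": 0.2,
                 "lesion_0": 0.4, "lesion_1": 0.3}, mapping)
```

Training on the finer subclass labels gives the network more separable decision boundaries, while the composition step restores the clinically meaningful class labels at inference time.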