Advanced machine learning methods for oncological image analysis
Cancer is a major public health problem, accounting for an estimated 10 million deaths worldwide in 2020 alone. Rapid advances in image acquisition and hardware development over the past three decades have produced modern medical imaging modalities that can capture high-resolution anatomical, physiological, functional, and metabolic quantitative information from cancerous organs. Medical imaging has therefore become increasingly crucial in routine oncology, supporting screening, diagnosis, treatment monitoring, and non- or minimally invasive evaluation of disease prognosis. This essential need for medical images, however, has resulted in the acquisition of a tremendous number of imaging scans. Considering the growing role of medical imaging data on the one hand and the challenge of manually examining such an abundance of data on the other, the development of computerized tools to automatically or semi-automatically examine image data has attracted considerable interest. Hence, a variety of machine learning tools have been developed for oncological image analysis, aiming to assist clinicians with repetitive tasks in their workflow.
This thesis aims to contribute to the field of oncological image analysis by proposing new ways of quantifying tumor characteristics from medical image data. Specifically, this thesis consists of six studies, the first two of which focus on introducing novel methods for tumor segmentation. The last four studies aim to develop quantitative imaging biomarkers for cancer diagnosis and prognosis.
The main objective of Study I is to develop a deep learning pipeline capable of capturing the appearance of lung pathologies, including lung tumors, and to integrate this pipeline into segmentation networks to improve segmentation accuracy. The proposed pipeline was tested on several comprehensive datasets, and the numerical quantifications show the superiority of the proposed prior-aware DL framework compared to the state of the art. Study II aims to address a crucial challenge faced by supervised segmentation models: dependency on large-scale labeled datasets. In this study, an unsupervised segmentation approach based on the concept of image inpainting is proposed to segment lung and head-neck tumors in images from single and multiple modalities. The proposed autoinpainting pipeline shows great potential in synthesizing high-quality tumor-free images and outperforms a family of well-established unsupervised models in terms of segmentation accuracy.
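The inpainting idea can be made concrete with a minimal sketch: reconstruct a "tumor-free" version of a suspicious region and segment the pixels where the reconstruction disagrees with the observed image. The toy inpainter below (a border-mean fill standing in for a trained generative network) and all function names and thresholds are illustrative assumptions, not the thesis' actual autoinpainting pipeline.

```python
import numpy as np

def _dilate(mask):
    # 4-neighbour binary dilation implemented with array shifts
    d = mask.copy()
    d[1:, :] |= mask[:-1, :]
    d[:-1, :] |= mask[1:, :]
    d[:, 1:] |= mask[:, :-1]
    d[:, :-1] |= mask[:, 1:]
    return d

def inpaint_patch(image, mask):
    """Toy 'inpainter': fill the masked patch (boolean mask) with the
    mean of its surrounding border pixels; a real pipeline would use a
    trained generative network instead."""
    out = image.copy()
    border = _dilate(mask) & ~mask
    out[mask] = image[border].mean()
    return out

def segment_by_inpainting(image, patch, threshold):
    """Inpaint a candidate region and flag pixels whose reconstruction
    error exceeds the threshold as tumor."""
    recon = inpaint_patch(image, patch)
    residual = np.abs(image - recon)
    return (residual > threshold) & patch
```

Healthy tissue is reconstructed faithfully and yields a small residual, while tumor pixels cannot be predicted from their surroundings and stand out, which is what makes the approach label-free.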
Studies III and IV aim to automatically discriminate benign from malignant pulmonary nodules by analyzing low-dose computed tomography (LDCT) scans. In Study III, a dual-pathway deep classification framework is proposed to simultaneously take into account local intra-nodule heterogeneities and global contextual information. Study IV compares the discriminative power of a series of carefully selected conventional radiomics methods, end-to-end deep learning (DL) models, and deep-feature-based radiomics analysis on the same dataset. The numerical analyses show the potential of fusing learned deep features with radiomic features to boost classification power.
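The fusion step described above is often implemented as simple early fusion: scale the handcrafted and deep feature blocks separately and concatenate them before a classifier. The feature counts, synthetic labels, and logistic-regression choice below are illustrative assumptions, not the studies' actual models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 200
# Hypothetical stand-ins: 4 handcrafted radiomic features per nodule and
# 8 features taken from a pretrained network's penultimate layer.
radiomic = rng.normal(size=(n, 4))
deep = rng.normal(size=(n, 8))
# Synthetic ground truth driven by one feature from each block
labels = (radiomic[:, 0] + deep[:, 0] > 0).astype(int)

# Early fusion: standardize each block, then concatenate
fused = np.hstack([StandardScaler().fit_transform(radiomic),
                   StandardScaler().fit_transform(deep)])
clf = LogisticRegression().fit(fused, labels)
acc = clf.score(fused, labels)
```

Standardizing each block before concatenation matters because handcrafted radiomic features and deep activations typically live on very different scales.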
Study V focuses on the early assessment of lung tumor response to treatment by proposing a novel, physiologically interpretable feature set. This feature set was employed to quantify changes in tumor characteristics from longitudinal PET-CT scans in order to predict the overall survival status of patients two years after the final treatment session. The discriminative power of the introduced imaging biomarkers was compared against conventional radiomics, and the quantitative evaluations verified the superiority of the proposed feature set. Whereas Study V focuses on a binary survival prediction task, Study VI addresses the prediction of survival rate in patients diagnosed with lung and head-neck cancer by investigating the potential of spherical convolutional neural networks and comparing their performance against other types of features, including radiomics. While comparable results were achieved in intra-dataset analyses, the proposed spherical features show more predictive power in inter-dataset analyses.
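Quantifying change between longitudinal scans is commonly done with "delta" features, i.e., the relative change of each descriptor between baseline and follow-up. The snippet below is only a generic analogue of that idea, since the thesis' physiological feature set is not described here in reproducible detail.

```python
import numpy as np

def delta_features(baseline, followup, eps=1e-8):
    """Relative change of each feature between two time points
    (a common 'delta-radiomics' construction; the epsilon guards
    against division by zero for features with baseline value 0)."""
    baseline = np.asarray(baseline, dtype=float)
    followup = np.asarray(followup, dtype=float)
    return (followup - baseline) / (np.abs(baseline) + eps)
```

The resulting vector of relative changes can then feed any downstream survival classifier in place of, or alongside, single-time-point features.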
In summary, the six studies incorporate different imaging modalities and a wide range of image-processing and machine-learning techniques into methods for the quantitative assessment of tumor characteristics, contributing to the essential procedures of cancer diagnosis and prognosis.
A radiomics-based decision support tool improves lung cancer diagnosis in combination with the Herder score in large lung nodules
Background: Large lung nodules (≥15 mm) have the highest risk of malignancy, and may exhibit important differences in phenotypic or clinical characteristics from their smaller counterparts. Existing risk models do not stratify large nodules well. We aimed to develop and validate an integrated segmentation and classification pipeline, incorporating deep learning and traditional radiomics, to classify large lung nodules according to cancer risk. Methods: 502 patients from five U.K. centres were recruited to the large-nodule arm of the retrospective LIBRA study between July 2020 and April 2022. 838 CT scans were used for model development, split into training and test sets (70% and 30%, respectively). An nnUNet model was trained to automate lung nodule segmentation. A radiomics signature was developed to classify nodules according to malignancy risk. Performance of the radiomics model, termed the large-nodule radiomics predictive vector (LN-RPV), was compared to three radiologists and the Brock and Herder scores. Findings: 499 patients had technically evaluable scans (mean age 69 ± 11 years, 257 men, 242 women). In the test set of 252 scans, the nnUNet achieved a Dice score of 0.86, and the LN-RPV achieved an AUC of 0.83 (95% CI 0.77–0.88) for malignancy classification. Performance was higher than the median radiologist (AUC 0.75 [95% CI 0.70–0.81], DeLong p = 0.03). LN-RPV was robust to auto-segmentation (ICC 0.94). For baseline solid nodules in the test set (117 patients), LN-RPV had an AUC of 0.87 (95% CI 0.80–0.93) compared to 0.67 (95% CI 0.55–0.76, DeLong p = 0.002) for the Brock score and 0.83 (95% CI 0.75–0.90, DeLong p = 0.4) for the Herder score. In the international external test set (n = 151), LN-RPV maintained an AUC of 0.75 (95% CI 0.63–0.85). 18 of 22 (82%) malignant nodules in the Herder 10–70% category in the test set were identified as high risk by the decision-support tool, and might have been referred for earlier intervention.
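The segmentation metric reported above can be made concrete with a minimal sketch: the Dice similarity coefficient measures overlap between a predicted mask and a reference mask. The function below is a standard textbook definition, not the LIBRA study's actual evaluation code.

```python
import numpy as np

def dice_score(pred, target, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|). Returns a value in [0, 1], where 1 means
    perfect overlap; eps avoids division by zero for two empty masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)
```

A Dice of 0.86, as reported for the nnUNet here, means the predicted and reference nodule masks share the large majority of their voxels.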
Interpretation: The model accurately segments and classifies large lung nodules, and may improve upon existing clinical models. Funding: This project represents independent research funded by: 1) Royal Marsden Partners Cancer Alliance, 2) the Royal Marsden Cancer Charity, 3) the National Institute for Health Research (NIHR) Biomedical Research Centre at the Royal Marsden NHS Foundation Trust and The Institute of Cancer Research, London, 4) the National Institute for Health Research (NIHR) Biomedical Research Centre at Imperial College London, and 5) Cancer Research UK (C309/A31316).
Deep Learning for Automated Medical Image Analysis
Medical imaging is an essential tool in many areas of medicine, used for both diagnosis and treatment. However, reading medical images and making diagnosis or treatment recommendations requires specially trained medical specialists. The current practice of reading medical images is labor-intensive, time-consuming, costly, and error-prone. A computer-aided system that can automatically make diagnosis and treatment recommendations would therefore be highly desirable. Recent advances in deep learning enable us to rethink how clinicians diagnose from medical images. In this thesis, we consider 1) mammograms for detecting breast cancer, the most frequently diagnosed solid cancer in U.S. women; 2) lung CT images for detecting lung cancer, the most frequently diagnosed malignant cancer; and 3) head and neck CT images for automated delineation of organs at risk in radiotherapy. First, we show how to employ an adversarial strategy to generate hard examples that improve mammogram mass segmentation. Second, we demonstrate how to use weakly labeled data for mammogram breast cancer diagnosis by efficiently designing a deep multi-instance learning model. Third, the thesis walks through the DeepLung system, which combines deep 3D ConvNets and gradient boosting machines (GBM) for automated lung nodule detection and classification. Fourth, we show how to use weakly labeled data to improve an existing lung nodule detection system by integrating deep learning with a probabilistic graphical model. Lastly, we demonstrate AnatomyNet, which is thousands of times faster and more accurate than previous methods for automated anatomy segmentation.
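In the weakly labeled mammogram setting, each image is a bag of patch instances with only an image-level label. The thesis' exact multi-instance design is not given in the abstract; one common aggregation rule, sketched here purely as an assumption, is noisy-OR pooling of per-patch malignancy scores.

```python
import numpy as np

def bag_probability(instance_probs):
    """Multi-instance pooling under the standard MIL assumption: a
    mammogram (bag) is malignant if at least one patch (instance) is.
    The noisy-OR rule computes 1 - P(no instance is positive)."""
    p = np.asarray(instance_probs, dtype=float)
    return 1.0 - np.prod(1.0 - p)
```

The appeal of this rule is that the per-patch scores are trained end-to-end from image-level labels alone, with no patch annotations required.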
Optimization of neural networks for deep learning and applications to CT image segmentation
During the last few years, progress in deep learning has been so rapid that even prominent researchers, politicians, and entrepreneurs are signing petitions to try to slow it down. The newest methods for natural language processing and image generation achieve results so striking that people are seriously starting to think they could be dangerous for society. In reality, they are not dangerous (at the moment), even if we have to admit we have reached a point where we no longer control the flow of data inside deep networks. It is impossible to open a modern deep neural network and interpret how it processes information or, in many cases, explain how or why it returns a particular result. One of the goals of this doctoral work has been to study the behavior of weights in convolutional neural networks and in transformers. We present a method for inverting 3x3 convolutions after training a neural network to classify images, with the future aim of building precisely invertible convolutional neural networks. We demonstrate that a simple network can learn to classify images on an open-source dataset without loss in accuracy with respect to a non-invertible one, while retaining the ability to reconstruct the original image without detectable error (on 8-bit images) through up to 20 convolutions stacked in a row. We present a thorough comparison between our method and the standard one. We also tested the performance of the five most used transformers for image classification on an open-source dataset. By studying the embedded matrices, we were able to provide two criteria that help transformers learn with a training-time reduction of up to 30% and no impact on classification accuracy.
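The abstract does not spell out the invertible-convolution construction, but a classical way to see when a convolution can be undone is under periodic boundary conditions: there, convolution is pointwise multiplication in the Fourier domain, so inversion is pointwise division, valid whenever the kernel's transfer function has no zeros. The sketch below is that textbook analogue, not the thesis' method.

```python
import numpy as np

def _pad_kernel(kernel, shape):
    # Embed the small kernel in a zero array matching the image size
    K = np.zeros(shape)
    kh, kw = kernel.shape
    K[:kh, :kw] = kernel
    return K

def circular_conv2d(image, kernel):
    """2-D convolution with periodic (circular) boundary, via the FFT."""
    K = _pad_kernel(kernel, image.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(K)))

def invert_circular_conv2d(output, kernel):
    """Recover the input by dividing in the Fourier domain; exact
    whenever the kernel's transfer function is nowhere zero (e.g. a
    diagonally dominant kernel whose centre weight exceeds the sum of
    the others)."""
    K = _pad_kernel(kernel, output.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(output) / np.fft.fft2(K)))
```

A center-dominant 3x3 kernel guarantees invertibility here, which mirrors the general point that only some convolutions admit exact inverses.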
The evolution of deep learning techniques is also touching the field of digital health. With tens of thousands of new start-ups and more than $1B of investment in the last year alone, this field is growing rapidly and promises to revolutionize healthcare. In this thesis, we present several neural networks for the segmentation of lungs, lung nodules, and areas affected by COVID-19-induced pneumonia in chest CT scans. The architectures we used are all residual convolutional neural networks inspired by UNet and Inception, customized with novel loss functions and layers designed to achieve high performance on these particular applications. The errors on the surface of the nodule segmentation masks do not exceed 1 mm in more than 99% of cases. Our algorithm for COVID-19 lesion detection has a specificity of 100% and an overall accuracy of 97.1%; in general, it surpasses the state of the art in all considered statistics, using UNet as a benchmark. Combined with other algorithms able to detect and predict lung cancer, the whole work was presented in a European innovation program and judged to be of high interest by worldwide experts.
With this work, we set the basis for the future development of better AI tools in healthcare and for scientific investigation into the fundamentals of deep learning.