Evaluation of cancer outcome assessment using MRI: A review of deep-learning methods
Accurate evaluation of tumor response to treatment is critical to allow personalized treatment regimens according to the predicted response and to support clinical trials investigating new therapeutic agents by providing them with an accurate response indicator. Recent advances in medical imaging, computer hardware, and machine-learning algorithms have resulted in the increased use of these tools in the field of medicine as a whole and specifically in cancer imaging for detection and characterization of malignant lesions, prognosis, and assessment of treatment response. Among the currently available imaging techniques, magnetic resonance imaging (MRI) plays an important role in the assessment of treatment response in many cancers, given its superior soft-tissue contrast and its ability to allow multiplanar imaging and functional evaluation. In recent years, deep learning (DL) has become an active area of research, paving the way for computer-assisted clinical and radiological decision support. DL can uncover associations between imaging features that cannot be identified by the naked eye and pertinent clinical outcomes. The aim of this review is to highlight the use of DL in the evaluation of tumor response assessed on MRI. In this review, we will first provide an overview of common DL architectures used in medical imaging research in general. Then, we will review the studies to date that have applied DL to magnetic resonance imaging for the task of treatment response assessment. Finally, we will discuss the challenges and opportunities of using DL within the clinical workflow.
Deep learning-based prediction of response to HER2-targeted neoadjuvant chemotherapy from pre-treatment dynamic breast MRI: A multi-institutional validation study
Predicting response to neoadjuvant therapy is a vexing challenge in breast cancer. In this study, we evaluate the ability of deep learning to predict response to HER2-targeted neoadjuvant chemotherapy (NAC) from dynamic contrast-enhanced (DCE) MRI acquired prior to treatment. In a retrospective study encompassing DCE-MRI data from a total of 157 HER2+ breast cancer patients from 5 institutions, we developed and validated a deep learning approach for predicting pathological complete response (pCR) to HER2-targeted NAC prior to treatment. 100 patients who received HER2-targeted neoadjuvant chemotherapy at a single institution were used to train (n=85) and tune (n=15) a convolutional neural network (CNN) to predict pCR. A multi-input CNN leveraging both pre-contrast and late post-contrast DCE-MRI acquisitions was identified to achieve optimal response prediction within the validation set (AUC=0.93). This model was then tested on two independent testing cohorts with pre-treatment DCE-MRI data. It achieved strong performance in a 28-patient testing set from a second institution (AUC=0.85, 95% CI 0.67-1.0, p=0.0008) and a 29-patient multicenter trial including data from 3 additional institutions (AUC=0.77, 95% CI 0.58-0.97, p=0.006). The deep learning-based response prediction model was found to exceed a multivariable model incorporating predictive clinical variables (AUC < 0.65 in testing cohorts) and a model of semi-quantitative DCE-MRI pharmacokinetic measurements (AUC < 0.60 in testing cohorts). The results presented in this work across multiple sites suggest that, with further validation, deep learning could provide an effective and reliable tool to guide targeted therapy in breast cancer, thus reducing overtreatment among HER2+ patients. (Braman and El Adoui contributed equally to this work.)
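Every comparison in the abstract above is stated in terms of the area under the ROC curve (AUC). As a minimal, dependency-free illustration (the labels and scores below are invented, not from the study), AUC can be computed via its Mann-Whitney formulation: the probability that a randomly chosen responder is scored above a randomly chosen non-responder, with ties counting half.

```python
# Dependency-free AUC via the Mann-Whitney formulation.
# Labels: 1 = pCR (responder), 0 = non-pCR. Scores: model outputs.
def auc(labels, scores):
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    # Count positive-vs-negative "wins"; ties count as half a win.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: predicted pCR probabilities for 6 hypothetical patients.
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.6, 0.3, 0.1]
print(auc(labels, scores))  # 8/9 ≈ 0.889
```

A perfect ranking gives AUC=1.0 and random scoring gives 0.5, which is why AUC < 0.65 for the clinical-variable model above indicates only weak discriminative power.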
Deep Learning in Breast Cancer Imaging: A Decade of Progress and Future Directions
Breast cancer has reached the highest incidence rate worldwide among all malignancies since 2020. Breast imaging plays a significant role in early diagnosis and intervention to improve the outcome of breast cancer patients. In the past decade, deep learning has shown remarkable progress in breast cancer imaging analysis, holding great promise in interpreting the rich information and complex context of breast imaging modalities. Considering the rapid improvement in deep learning technology and the increasing severity of breast cancer, it is critical to summarize past progress and identify future challenges to be addressed. In this paper, we provide an extensive survey of deep learning-based breast cancer imaging research over the past decade, covering studies on mammography, ultrasound, magnetic resonance imaging, and digital pathology images. The major deep learning methods, publicly available datasets, and applications in imaging-based screening, diagnosis, treatment response prediction, and prognosis are described in detail. Drawing on the findings of this survey, we present a comprehensive discussion of the challenges and potential avenues for future research in deep learning-based breast cancer imaging.
Emerging Techniques in Breast MRI
As indicated throughout this chapter, there is a constant effort to move to more sensitive, specific, and quantitative methods for characterizing breast tissue via magnetic resonance imaging (MRI). In the present chapter, we focus on six emerging techniques that seek to quantitatively interrogate the physiological and biochemical properties of the breast. At the physiological scale, we present an overview of ultrafast dynamic contrast-enhanced MRI and magnetic resonance elastography, which provide remarkable insights into the vascular and mechanical properties of tissue, respectively. Moving to the biochemical scale, magnetization transfer, chemical exchange saturation transfer, and spectroscopy (both “conventional” and hyperpolarized) methods all provide unique, noninvasive insights into tumor metabolism. Given the breadth and depth of information that can be obtained in a single MRI session, methods of data synthesis and interpretation must also be developed. Thus, we conclude the chapter with an introduction to two very different, though complementary, methods of data analysis: (1) radiomics and habitat imaging, and (2) mechanism-based mathematical modeling.
Analysis of interpretability methods applied to DCE-MRI of Breasts Images
Integrated master's thesis, Biomedical Engineering and Biophysics (Biomedical Signals and Images), 2022, Universidade de Lisboa, Faculdade de Ciências.
Breast cancer is a disease that affects a large number of women worldwide [1]. Physical examinations and mammography are the most effective ways of detecting lesions and nodules in the breast. However, these methods can prove inconclusive. One way to consolidate a breast cancer diagnosis is to perform supplementary tests, such as magnetic resonance imaging. The most common magnetic resonance examination for detecting breast cancer is DCE-MRI, an examination that acquires images after the injection of a contrast agent [2]. The diagnosis can also be consolidated by means of machine learning. Various machine learning methods have been helping technicians perform tasks such as tumor detection and segmentation. Although these methods are effective, the tasks they perform carry a high degree of responsibility, since they are directly related to the well-being of a human being. This creates the need to justify the results of these methods in order to increase trust in them. Techniques that attempt to explain the results of machine learning methods belong to the field of Explainable Artificial Intelligence [3].
This dissertation focuses on applying and analyzing state-of-the-art Explainable Artificial Intelligence methods to machine learning models. As these models were built on DCE-MR images of breasts, the methods applied to them aim to explain their results visually. One of the methods applied was SHAP (SHapley Additive exPlanations). This method can be applied to a variety of models and draws on Shapley values from game theory to explain the importance of image features to the model's results [4]. Another method applied was Local Interpretable Model-agnostic Explanations (LIME). This method creates perturbed images and tests them on the trained models. These perturbed images are weighted according to the degree of perturbation. When they are tested on the models, LIME determines which perturbations change the model's output and, consequently, finds the areas of the image that matter most to the model's classification [5]. The last method applied was Gradient-weighted Class Activation Mapping (Grad-CAM). This method can be applied to a range of models, being a generalization of the CAM method [6], but only to classification tasks. Grad-CAM uses class-specific gradient values and the feature maps extracted from convolutional layers to highlight class-discriminative areas of the image. These layers are important components that make up the body of the models. Beyond these methods, the convolutional matrices, called filters, used by the convolutional layers to produce their outputs were extracted and analyzed, in order to observe the patterns being filtered in these layers.
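The core Grad-CAM step described above can be illustrated with a hedged NumPy sketch. The arrays below are synthetic stand-ins for a real network's activations and gradients, not the dissertation's models; only the weighting-and-ReLU computation of the method is shown.

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM heatmap from one conv layer's outputs and class gradients.

    feature_maps, gradients: arrays of shape (channels, H, W).
    """
    # Global-average-pool the class-specific gradients into per-channel weights.
    weights = gradients.mean(axis=(1, 2))
    # Weighted sum of feature maps over the channel axis.
    cam = np.tensordot(weights, feature_maps, axes=1)
    # ReLU keeps only areas that positively support the target class.
    return np.maximum(cam, 0)

rng = np.random.default_rng(0)
fmaps = rng.random((8, 7, 7))            # synthetic conv-layer activations
grads = rng.standard_normal((8, 7, 7))   # synthetic class-score gradients
heatmap = grad_cam(fmaps, grads)
print(heatmap.shape)  # (7, 7): one coarse heatmap, later upsampled onto the image
```

In practice the heatmap is resized to the input image and overlaid to show the class-discriminative regions, as done in the dissertation's tumor localization figures.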
To apply these methods, several models had to be built and trained. Three models with the same structure were created to perform regression tasks. These models have an architecture consisting of three convolutional layers followed by a linear layer, a dropout layer, and another linear layer. One of the models aims to measure the tumor area in maximum intensity projections of the volumes. The other two models aim to measure the percentage of tumor shrinkage given two maximum intensity projections as input. The difference between these two models lies in the labels created for the inputs: one uses values computed from the difference between the tumor areas of the two maximum intensity projections, while the other uses tumor-area regression values provided by technicians. The performance of these models was evaluated by computing the Pearson and Spearman correlation coefficients. These coefficients are calculated from the covariance and the product of the standard deviations of two variables, and differ in that the Pearson coefficient only captures linear relationships, whereas the Spearman coefficient captures any monotonic relationship. The model that measured tumor area obtained Pearson and Spearman coefficients of 0.53 and 0.69, respectively. The model that measured the percentage of tumor shrinkage using computed labels had the best performance of the three, with Pearson and Spearman coefficients of 0.82 and 0.87, respectively. The last model failed to correctly predict the values provided by the technicians and was therefore discarded. Next, the filter visualization and SHAP methods were applied to the two remaining models. The filter visualization technique showed which parts of the image are being filtered in the convolutional layers, and certain patterns could be observed in these filters. The SHAP method highlighted areas of the breast that contributed to the models' predictions. As both tasks focus on computing something from the tumor area, we consider successful those SHAP images that highlight tumor areas. With this in mind, the images obtained with the SHAP method had a success rate of 57% for the model that measures tumor area and 69% for the model that measures the percentage of tumor shrinkage.
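The two evaluation metrics above can be made concrete with a small self-contained sketch (the toy data is invented): Spearman's coefficient is simply Pearson's coefficient computed on ranks, which is why it captures monotonic but non-linear relations that Pearson underrates.

```python
import numpy as np

def pearson(x, y):
    # Covariance divided by the product of standard deviations.
    return float(np.corrcoef(x, y)[0, 1])

def spearman(x, y):
    # Pearson on the rank-transformed data (ties ignored for simplicity).
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return pearson(rank(np.asarray(x)), rank(np.asarray(y)))

# Toy data: y is a monotone but non-linear function of x.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [1.0, 8.0, 27.0, 64.0, 125.0]  # x**3
print(round(pearson(x, y), 2))   # < 1: the relation is not linear
print(spearman(x, y))            # 1.0: the relation is perfectly monotone
```

On a model that systematically compresses large tumor areas, for example, Spearman can stay high while Pearson drops, which matches the gap between the 0.53 and 0.69 values reported above.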
Another model was built to classify pairs of maximum intensity projections according to the percentage of tumor-area shrinkage. Each pair was previously assigned to one of four classes, each corresponding to a 25% shrinkage increment; that is, the first class corresponds to a tumor shrinkage of 0% to 25%, while the last class corresponds to a shrinkage of 75% to 100%. This model has an architecture similar to that of ResNet18 [7]. Its performance was evaluated with a confusion matrix, which shows a 70% rate of correct predictions by the model. The three methods, SHAP, LIME, and Grad-CAM, were then applied to this model. Since the objective of this model is to classify images according to the percentage of tumor shrinkage, SHAP images that highlight tumor areas were again considered successful. On this basis, a success rate of 82% was observed in highlighting the tumor zone in the maximum intensity projections. The perturbations created for the LIME method correspond to square areas in the image. LIME creates images by randomly setting these areas to zero, and weights the perturbed images according to the level of perturbation they undergo. In this work, two different perturbations were created: the first consisted of 10-by-10-pixel squares and the second of 25-by-25-pixel squares. After perturbation, the images were fed back into the model and the differences in its predictions were learned by the LIME algorithm. Images created with the smaller perturbations had a higher success rate than those with the larger ones, highlighting perturbations in the tumor area with a certainty of 48%. Nevertheless, the images created with the 25-by-25-pixel perturbations gave the clearest results for locating the tumor, since the size of the perturbations allowed the whole tumor to be covered. Finally, the Grad-CAM method was applied to all the important convolutional layers of the model. This method was highly effective at locating class-discriminative areas: it located the tumor very easily when applied to the last convolutional layer, and the class-discriminative areas of the images could also be observed when it was applied to intermediate convolutional layers.
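The patch-based perturbation idea behind the LIME experiments above can be sketched with a minimal occlusion map. All names and the toy "model" below are illustrative stand-ins, not the dissertation's actual models: square patches are zeroed one at a time, and the drop in the model's score marks the patches the prediction depends on.

```python
import numpy as np

def occlusion_map(image, model, patch=5):
    """Score drop per zeroed square patch, as a coarse importance map."""
    base = model(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            perturbed = image.copy()
            perturbed[i:i + patch, j:j + patch] = 0.0  # zero one square patch
            heat[i // patch, j // patch] = base - model(perturbed)
    return heat

img = np.zeros((20, 20))
img[5:10, 5:10] = 1.0                     # toy "tumor" region
toy_model = lambda x: float(x.sum())      # score drops when the tumor is hidden
heat = occlusion_map(img, toy_model, patch=5)
best = np.unravel_index(heat.argmax(), heat.shape)  # the patch covering the tumor
```

This mirrors the trade-off reported above: patches much smaller than the tumor give noisier maps, while patches large enough to cover the whole tumor localize it more cleanly.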
In conclusion, the application of these techniques made it possible to explain part of the decisions made by the machine learning models in the analysis of DCE-MRI images of breast cancer.
Computer-aided diagnosis has seen exponential growth in medical imaging. Machine learning has
helped technicians in tasks such as tumor segmentation and tumor detection. Despite the growth in this
area, there is still a need to justify and fully understand the computer results, in order to increase the
trust of medical professionals in these computer tasks. By applying explainable methods to the machine
learning algorithms, we can extract information from techniques that are often considered black boxes.
This dissertation focuses on applying and analyzing state-of-the-art XAI (eXplainable Artificial
Intelligence) methods to machine learning models that handle DCE-MR (Dynamic Contrast-Enhanced
Magnetic Resonance) breast images. The methods used to justify the model’s decisions were SHAP
(SHapley Additive exPlanations) [4], LIME (Local Interpretable Model-agnostic Explanations) [5] and
Grad-CAM (Gradient-weighted Class Activation Mapping) [8], which correspond to three visual
explanation methods. SHAP uses Shapley Values from game theory to explain the importance of
features in the image to the model’s prediction. LIME is a method that uses weighted perturbed images
and tests them using the existing models. From the model's response to these perturbed images, the
algorithm can find which perturbations cause the model to change its prediction and, consequently, can
find the important areas in the image that lead to the model’s prediction. Grad-CAM is a visual
explanation method that can be applied to a variety of neural network architectures. It uses gradient
scores from a specific class and feature maps extracted from convolutional layers to highlight class-discriminative regions in the images.
Two neural network models were built to perform regression tasks such as measuring tumor area and
measuring tumor shrinkage. To justify the network’s results, filters were extracted from the network’s
convolutional layers and the SHAP method was applied. The filter visualization technique was able to
demonstrate which parts of the image are being convoluted by the layer’s filters while the SHAP method
highlighted the areas of the tumor that contributed most to the model’s predictions. The SHAP method
had a success rate of 57% at highlighting the correct area of the breast when applied to the neural network
which measured the tumor area, and a success rate of 69% when applied to the neural network which
measured the tumor shrinkage. Another model was created using a ResNet18 architecture. This
network had the task of classifying the breast images according to the shrinkage of the tumor and the
SHAP, LIME and Grad-CAM methods were applied to it. The SHAP method had a success rate of 82%.
The LIME method was applied two times by using perturbations of different sizes. The smaller sized
perturbations performed better, having a success rate of 48% at highlighting the tumor area, but the
larger sized perturbations had better results in terms of locating the entire tumor, because the area
covered was larger. Lastly, the Grad-CAM method excelled at locating the tumor in the breast when
applied to the last important convolutional layer in the network.
Patient-specific, mechanistic models of tumor growth incorporating artificial intelligence and big data
Despite the remarkable advances in cancer diagnosis, treatment, and management that have occurred over the past decade, malignant tumors remain a major public health problem. Further progress in combating cancer may be enabled by personalizing the delivery of therapies according to the predicted response for each individual patient. The design of personalized therapies requires patient-specific information integrated into an appropriate mathematical model of tumor response. A fundamental barrier to realizing this paradigm is the current lack of a rigorous, yet practical, mathematical theory of tumor initiation, development, invasion, and response to therapy. In this review, we begin by providing an overview of different approaches to modeling tumor growth and treatment, including mechanistic as well as data-driven models based on "big data" and artificial intelligence. Next, we present illustrative examples of mathematical models that demonstrate their utility, and discuss the limitations of stand-alone mechanistic and data-driven models. We further discuss the potential of mechanistic models for not only predicting, but also optimizing, response to therapy on a patient-specific basis. We then discuss current efforts and future possibilities to integrate mechanistic and data-driven models. We conclude by proposing five fundamental challenges that must be addressed to fully realize personalized care for cancer patients driven by computational models.
DEVELOPING NOVEL COMPUTER-AIDED DETECTION AND DIAGNOSIS SYSTEMS OF MEDICAL IMAGES
Reading medical images to detect and diagnose diseases is often difficult and subject to large inter-reader variability. To address this issue, developing computer-aided detection and diagnosis (CAD) schemes or systems for medical images has attracted broad research interest over the last several decades. Despite great effort and significant progress in previous studies, only limited CAD schemes have been used in clinical practice. Thus, developing new CAD schemes remains a hot research topic in the medical imaging informatics field. In this dissertation, I investigate the feasibility of developing several new, innovative CAD schemes for different application purposes. First, to predict breast tumor response to neoadjuvant chemotherapy and reduce unnecessary aggressive surgery, I developed two CAD schemes for breast magnetic resonance imaging (MRI) that generate quantitative image markers based on quantitative analysis of global kinetic features. Using the image marker computed from breast MRI acquired pre-chemotherapy, the CAD scheme can predict radiographic complete response (CR) of breast tumors to neoadjuvant chemotherapy, while using the imaging marker based on the fusion of kinetic and texture features extracted from breast MRI performed after neoadjuvant chemotherapy, the CAD scheme can better predict the pathologic complete response (pCR) of the patients. Second, to more accurately predict the prognosis of stroke patients, quantifying brain hemorrhage and ventricular cerebrospinal fluid depicted on brain CT images can play an important role. For this purpose, I developed a new interactive CAD tool to segment hemorrhage regions and extract a radiological imaging marker to quantitatively determine the severity of aneurysmal subarachnoid hemorrhage at presentation, correlate the estimate with various homeostatic/metabolic derangements, and predict clinical outcome.
Third, to improve the efficiency of primary antibody screening processes in new cancer drug development, I developed a CAD scheme to automatically identify non-negative tissue slides, which indicate reactive antibodies, in digital pathology images. Last, to improve the operational efficiency and reliability of storing digital pathology image data, I developed a CAD scheme using an optical character recognition algorithm to automatically extract metadata from tissue slide label images and reduce manual entry for slide tracking and archiving in tissue pathology laboratories.
In summary, in these studies, we developed and tested several innovative approaches to identify quantitative imaging markers with high discriminatory power. In all CAD schemes, graphical user interface-based visual aid tools were also developed and implemented. Study results demonstrated the feasibility of applying CAD technology in several new application fields, with the potential to assist radiologists, oncologists, and pathologists in improving the accuracy and consistency of disease diagnosis and prognosis assessment using medical images.
Functional Magnetic Resonance Imaging of Breast Cancer
This thesis examines the use of magnetic resonance imaging (MRI) techniques in the detection of breast cancer and the prediction of pathological complete response (pCR) to neoadjuvant chemotherapy (NACT).
This thesis compares the diagnostic performance of diffusion-weighted imaging (DWI) models in the breast using a systematic review and meta-analysis. Advanced diffusion models have been proposed that may improve the performance of standard DWI using the apparent diffusion coefficient (ADC) to discriminate between malignant and benign breast lesions. Pooling the results from 73 studies, comparable diagnostic accuracy is shown using the ADC and parameters from the intra-voxel incoherent motion (IVIM) and diffusion tensor imaging (DTI) models. This work highlights a lack of standardisation in DWI protocols and methodology. Conventional acquisition techniques used in DWI often suffer from image artefacts and low spatial resolution. A multi-shot DWI technique, multiplexed sensitivity encoding (MUSE), can improve the image quality of DWI. A MUSE protocol has been optimised through a series of phantom experiments and validated in 20 patients. Comparing MUSE to conventional DWI, statistically significant improvements are shown in distortion and blurring metrics and qualitative image quality metrics such as lesion conspicuity and diagnostic confidence, increasing the clinical utility of DWI.
This thesis investigates the use of dynamic contrast-enhanced MRI (DCE-MRI) in the detection of breast cancer and the prediction of pCR. Abbreviated MRI (ABB-MRI) protocols have gained increasing attention for the detection of breast cancer, acquiring a shortened version of a full diagnostic protocol (FDP-MRI) in a fraction of the time, reducing the cost of the examination. The diagnostic performance of abbreviated and full diagnostic protocols is systematically compared using a meta-analysis. Pooling 13 studies, equivalent diagnostic accuracy is shown for ABB-MRI in cohorts enriched with cancers, and lower but not significantly different diagnostic performance is shown in screening cohorts.
Higher-order imaging features derived from pre-treatment DCE-MRI could be used to predict pCR and inform decisions regarding targeted treatment, avoiding unnecessary toxicity. Using data from 152 patients undergoing NACT, radiomics features are extracted from baseline DCE-MRI and machine learning models trained to predict pCR with moderate accuracy. The stability of feature selection using logistic regression classification is demonstrated, and a comparison of models trained using features from different time points in the dynamic series demonstrates that a full dynamic series enables the most accurate prediction of pCR.
GE Healthcare funded PhD Studentship.
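The final modelling step described above, mapping per-patient feature vectors to a pCR prediction with logistic regression, can be sketched as follows. This is a hedged illustration, not the thesis's pipeline: the synthetic features stand in for real, already-standardised radiomics features, and the classifier is trained by plain gradient descent.

```python
import numpy as np

rng = np.random.default_rng(42)
# 152 hypothetical patients (matching the cohort size above), 4 toy features.
X = rng.standard_normal((152, 4))
w_true = np.array([2.0, -1.0, 0.0, 0.0])          # only 2 features are informative
y = (X @ w_true + 0.3 * rng.standard_normal(152) > 0).astype(float)

# Logistic regression via gradient descent on the average logistic loss.
w = np.zeros(4)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))            # predicted pCR probability
    w -= 0.1 * X.T @ (p - y) / len(y)             # gradient step

accuracy = float((((X @ w) > 0) == (y > 0.5)).mean())
print(round(accuracy, 2))  # high on this nearly separable toy data
```

With regularisation, the near-zero weights learned for uninformative features are what makes logistic regression a stable feature selector, consistent with the stability result reported above.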