Deep learning-based prediction of response to HER2-targeted neoadjuvant chemotherapy from pre-treatment dynamic breast MRI: A multi-institutional validation study
Predicting response to neoadjuvant therapy is a vexing challenge in breast
cancer. In this study, we evaluate the ability of deep learning to predict
response to HER2-targeted neoadjuvant chemotherapy (NAC) from pre-treatment
dynamic contrast-enhanced (DCE) MRI. In a
retrospective study encompassing DCE-MRI data from a total of 157 HER2+ breast
cancer patients from 5 institutions, we developed and validated a deep learning
approach for predicting pathological complete response (pCR) to HER2-targeted
NAC prior to treatment. One hundred patients who received HER2-targeted
neoadjuvant chemotherapy at a single institution were used to train (n=85) and tune (n=15)
a convolutional neural network (CNN) to predict pCR. A multi-input CNN
leveraging both pre-contrast and late post-contrast DCE-MRI acquisitions was
identified to achieve optimal response prediction within the validation set
(AUC=0.93). This model was then tested on two independent testing cohorts with
pre-treatment DCE-MRI data. It achieved strong performance in a 28-patient
testing set from a second institution (AUC=0.85, 95% CI 0.67-1.0, p=0.0008) and
a 29-patient multicenter trial including data from 3 additional institutions
(AUC=0.77, 95% CI 0.58-0.97, p=0.006). The deep learning-based response
prediction model exceeded both a multivariable model incorporating predictive
clinical variables (AUC < 0.65 in testing cohorts) and a model of
semi-quantitative DCE-MRI pharmacokinetic measurements (AUC < 0.60 in testing
cohorts). The results presented in this work across multiple sites suggest that,
with further validation, deep learning could provide an effective and reliable
tool to guide targeted therapy in breast cancer, thus reducing overtreatment
among HER2+ patients.
Comment: Braman and El Adoui contributed equally to this work. 33 pages, 3
figures in main text
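The AUC values reported above can be computed directly from model scores via the Mann-Whitney U statistic (the probability that a randomly chosen responder is scored above a randomly chosen non-responder). A minimal pure-Python sketch; the function name and toy scores are illustrative, not taken from the study:

```python
def roc_auc(scores, labels):
    """AUC via the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs where the positive case scores
    higher; ties count as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: scores for 3 responders (label 1) and 3 non-responders (label 0)
auc = roc_auc([0.9, 0.8, 0.4, 0.7, 0.3, 0.2], [1, 1, 1, 0, 0, 0])  # 8/9
```

This rank-based formulation is equivalent to the area under the empirical ROC curve and is the quantity behind each AUC figure quoted in the abstract.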
Evaluation of cancer outcome assessment using MRI: A review of deep-learning methods
Accurate evaluation of tumor response to treatment is critical to allow personalized treatment regimens according to the predicted response and to support clinical trials investigating new therapeutic agents by providing them with an accurate response indicator. Recent advances in medical imaging, computer hardware, and machine-learning algorithms have resulted in the increased use of these tools in the field of medicine as a whole and specifically in cancer imaging for detection and characterization of malignant lesions, prognosis, and assessment of treatment response. Among the currently available imaging techniques, magnetic resonance imaging (MRI) plays an important role in the evaluation of treatment assessment of many cancers, given its superior soft-tissue contrast and its ability to allow multiplanar imaging and functional evaluation. In recent years, deep learning (DL) has become an active area of research, paving the way for computer-assisted clinical and radiological decision support. DL can uncover associations between imaging features that cannot be visually identified by the naked eye and pertinent clinical outcomes. The aim of this review is to highlight the use of DL in the evaluation of tumor response assessed on MRI. In this review, we will first provide an overview of common DL architectures used in medical imaging research in general. Then, we will review the studies to date that have applied DL to MRI for the task of treatment response assessment. Finally, we will discuss the challenges and opportunities of using DL within the clinical workflow.
Deep Learning in Breast Cancer Imaging: A Decade of Progress and Future Directions
Since 2020, breast cancer has had the highest incidence rate among all
malignancies worldwide. Breast imaging plays a significant role in early
diagnosis and intervention to improve the outcome of breast cancer patients. In
the past decade, deep learning has shown remarkable progress in breast cancer
imaging analysis, holding great promise in interpreting the rich information
and complex context of breast imaging modalities. Considering the rapid
improvement in deep learning technology and the increasing severity of
breast cancer, it is critical to summarize past progress and identify future
challenges to be addressed. In this paper, we provide an extensive survey of
deep learning-based breast cancer imaging research, covering studies on
mammography, ultrasound, magnetic resonance imaging, and digital pathology images
over the past decade. The major deep learning methods, publicly available
datasets, and applications on imaging-based screening, diagnosis, treatment
response prediction, and prognosis are described in detail. Drawn from the
findings of this survey, we present a comprehensive discussion of the
challenges and potential avenues for future research in deep learning-based
breast cancer imaging.
Comment: Survey, 41 pages
Machine Learning Strategies to Analyze Quantitative Ultrasound Multi-Parametric Images for Prediction of Therapy Response in Breast Cancer Patients
In this thesis project, two novel machine learning strategies were investigated to predict tumor response to neoadjuvant chemotherapy (NAC) at pre-treatment using quantitative ultrasound (QUS) multi-parametric images. The ultrasound data for analytical development and evaluation of the methodologies investigated in this project were acquired from 181 patients diagnosed with locally advanced breast cancer (LABC) and planned for NAC followed by surgery. The QUS multi-parametric images were generated using spectral analyses on the raw ultrasound radiofrequency (RF) data acquired before starting the NAC. In the first machine learning approach investigated in this project, distinct intra-tumor regions were identified within the parametric maps using a hidden Markov random field (HMRF) and its expectation-maximization (EM) algorithm. Several hand-crafted features characterizing the tumor, intra-tumor regions, and the tumor margin were extracted from different parametric images. A multi-step feature selection procedure was applied to construct a QUS biomarker for response prediction. Evaluation results on an independent test set indicated that the developed biomarker using the characteristics of intra-tumor regions and tumor margin in conjunction with a decision tree model with adaptive boosting (AdaBoost) as the classifier could predict the treatment response of patients at pre-treatment with an accuracy of 85.4% and an area under the receiver operating characteristic (ROC) curve (AUC) of 0.89. In the second machine learning approach investigated in this project, two deep convolutional neural network (DCNN) architectures including the residual network (ResNet) and residual attention network (RAN) were explored for extracting optimal feature maps from the parametric images, with a fully connected network for response prediction. 
Results demonstrated that the developed model with the RAN architecture, extracting feature maps from the expanded parametric images of the tumor core and margin, had a superior performance, with an accuracy of 0.88 and an AUC of 0.86 on the independent test set. Also, survival analysis demonstrated a statistically significant difference between the survival curves of the two response cohorts identified at pre-treatment by both the conventional machine learning method and the deep learning model. The results of this study demonstrate the great promise of QUS multi-parametric imaging, integrated with unsupervised learning methods for identifying distinct breast cancer intra-tumor regions and with both traditional classification techniques and deep convolutional neural networks, for predicting tumor response to NAC prior to the start of treatment.
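The boosted decision-tree classifier of the first approach can be illustrated with a minimal pure-Python AdaBoost over one-feature decision stumps. Everything here (the toy data, function names, and single-feature stumps) is an illustrative sketch, not the thesis pipeline, which operated on multi-feature QUS inputs:

```python
import math

def train_adaboost(X, y, n_rounds=5):
    """AdaBoost with depth-1 decision stumps; labels y must be in {-1, +1}."""
    n = len(X)
    w = [1.0 / n] * n                      # uniform sample weights
    ensemble = []
    for _ in range(n_rounds):
        best = None
        # exhaustively search (feature, threshold, polarity) stumps
        for f in range(len(X[0])):
            for thr in sorted({x[f] for x in X}):
                for pol in (1, -1):
                    preds = [pol if x[f] >= thr else -pol for x in X]
                    err = sum(wi for wi, p, yi in zip(w, preds, y) if p != yi)
                    if best is None or err < best[0]:
                        best = (err, f, thr, pol, preds)
        err, f, thr, pol, preds = best
        # stump weight grows as weighted error shrinks
        alpha = 0.5 * math.log((1.0 - err) / max(err, 1e-12))
        # up-weight misclassified samples, then renormalize
        w = [wi * math.exp(-alpha * yi * p) for wi, yi, p in zip(w, y, preds)]
        total = sum(w)
        w = [wi / total for wi in w]
        ensemble.append((alpha, f, thr, pol))
    return ensemble

def ada_predict(ensemble, x):
    """Sign of the weighted vote of all stumps."""
    score = sum(a * (pol if x[f] >= thr else -pol)
                for a, f, thr, pol in ensemble)
    return 1 if score >= 0 else -1
```

In practice one would use a library implementation with full decision trees as weak learners, as in the thesis; the stump version above only shows the reweighting mechanics.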
Developing Novel Computer-Aided Detection and Diagnosis Systems of Medical Images
Reading medical images to detect and diagnose diseases is often difficult and has large inter-reader variability. To address this issue, developing computer-aided detection and diagnosis (CAD) schemes or systems for medical images has attracted broad research interest in the last several decades. Despite great effort and significant progress in previous studies, only limited CAD schemes have been used in clinical practice. Thus, developing new CAD schemes is still a hot research topic in the medical imaging informatics field. In this dissertation, I investigate the feasibility of developing several new innovative CAD schemes for different application purposes. First, to predict breast tumor response to neoadjuvant chemotherapy and reduce unnecessary aggressive surgery, I developed two CAD schemes of breast magnetic resonance imaging (MRI) to generate quantitative image markers based on quantitative analysis of global kinetic features. Using the image marker computed from breast MRI acquired pre-chemotherapy, the CAD scheme can predict radiographic complete response (CR) of breast tumors to neoadjuvant chemotherapy, while using the imaging marker based on the fusion of kinetic and texture features extracted from breast MRI performed after neoadjuvant chemotherapy, the CAD scheme can better predict the pathologic complete response (pCR) of the patients. Second, to more accurately predict the prognosis of stroke patients, quantifying brain hemorrhage and ventricular cerebrospinal fluid depicted on brain CT images can play an important role. For this purpose, I developed a new interactive CAD tool to segment hemorrhage regions and extract a radiological imaging marker to quantitatively determine the severity of aneurysmal subarachnoid hemorrhage at presentation, correlate the estimation with various homeostatic/metabolic derangements, and predict clinical outcome.
Third, to improve the efficiency of primary antibody screening processes in new cancer drug development, I developed a CAD scheme to automatically identify the non-negative tissue slides, which indicate reactive antibodies in digital pathology images. Last, to improve operation efficiency and reliability of storing digital pathology image data, I developed a CAD scheme using optical character recognition algorithm to automatically extract metadata from tissue slide label images and reduce manual entry for slide tracking and archiving in the tissue pathology laboratories.
In summary, in these studies, we developed and tested several innovative approaches to identify quantitative imaging markers with high discriminatory power. In all CAD schemes, graphical user interface-based visual aid tools were also developed and implemented. Study results demonstrated the feasibility of applying CAD technology to several new application fields, which has the potential to assist radiologists, oncologists and pathologists in improving the accuracy and consistency of disease diagnosis and prognosis assessment using medical images.
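Global kinetic features of the kind described above are typically derived from a lesion's signal-intensity time course across DCE-MRI time points. A minimal sketch; wash-in, washout and the signal enhancement ratio are standard DCE-MRI quantities, but the function name and time-point indices are illustrative choices, not taken from the dissertation:

```python
def kinetic_features(curve, i_pre=0, i_early=1, i_late=-1):
    """Simple enhancement kinetics from a DCE-MRI signal-intensity
    time course (pre-contrast, early post-contrast, late post-contrast)."""
    s0, s1, s2 = curve[i_pre], curve[i_early], curve[i_late]
    wash_in = (s1 - s0) / s0        # relative early enhancement
    wash_out = (s2 - s1) / s1       # late change: negative = washout
    ser = (s1 - s0) / (s2 - s0)     # signal enhancement ratio
    return {"wash_in": wash_in, "wash_out": wash_out, "ser": ser}

# Toy curve: baseline 100, early peak 180, late value 160 (washout pattern)
feats = kinetic_features([100.0, 180.0, 160.0])
```

Aggregating such per-voxel or per-lesion kinetics (e.g. their means or histograms over the tumor) yields the kind of global quantitative image marker the dissertation describes.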
Pattern classification approaches for breast cancer identification via MRI: state‐of‐the‐art and vision for the future
Mining algorithms for Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE‐MRI)
of breast tissue are discussed. The algorithms are based on recent advances in multidimensional
signal processing and aim to advance current state‐of‐the‐art computer‐aided detection
and analysis of breast tumours when these are observed at various states of development. The topics
discussed include image feature extraction, information fusion using radiomics, multi‐parametric
computer‐aided classification and diagnosis using information fusion of tensorial datasets as well
as Clifford algebra based classification approaches and convolutional neural network deep learning
methodologies. The discussion also extends to semi‐supervised and self‐supervised
deep learning strategies, as well as generative adversarial networks and related
adversarial learning algorithms. In order to address the problem of weakly labelled tumour
images, generative adversarial deep learning strategies are considered for the classification of
different tumour types. The proposed data fusion approaches provide a novel Artificial Intelligence
(AI) based framework for more robust image registration that can potentially advance the early
identification of heterogeneous tumour types, even when the associated imaged organs are
registered as separate entities embedded in more complex geometric spaces. Finally, the general
structure of a high‐dimensional medical imaging analysis platform that is based on multi‐task
detection and learning is proposed as a way forward. The proposed algorithm makes use of novel
loss functions that form the building blocks of an adversarial learning methodology
that can be used for tensorial DCE‐MRI. Since some of the approaches discussed are also based on
time‐lapse imaging, conclusions on the rate of proliferation of the disease can be made possible. The
proposed framework can potentially reduce the costs associated with the interpretation of medical
images by providing automated, faster and more consistent diagnosis.
Breast Cancer Analysis in DCE-MRI
Breast cancer is the most common tumour in women worldwide, with about 2 million new cases diagnosed each year (it is the second most common cancer overall). This disease represents about 12% of all new cancer cases and 25% of all cancers in women. Early detection of breast cancer is one of the key factors in determining the prognosis for women with malignant tumours. The standard diagnostic tool for the detection of breast cancer is x-ray mammography. The disadvantage of this method is its low specificity, especially in the case of radiographically dense breast tissue (typical of young women under forty), or in the presence of scars and implants within the breast.
Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) has demonstrated a great potential in the screening of high-risk women for breast cancer, in staging newly diagnosed patients and in assessing therapy effects.
However, due to the large amount of information, manual examination of DCE-MRI is error prone and can hardly be performed without the use of a Computer-Aided Detection and Diagnosis (CAD) system. Breast imaging analysis is made harder by the dynamical characteristics of soft tissues, since any patient movement (such as involuntary motion due to breathing) may affect the voxel-by-voxel dynamical analysis.
Breast DCE-MRI computer-aided analysis needs a pre-processing stage to identify breast parenchyma and reduce motion artefacts. Among the major issues in developing CAD for breast DCE-MRI is the detection and classification of lesions according to their aggressiveness. Moreover, it would be convenient to identify those subjects who are unlikely to respond to treatment, so that a modification may be applied as soon as possible, relieving them from potentially unnecessary or toxic treatments.
In this thesis, an automated CAD system is presented. The proposed CAD aims to support the radiologist in lesion detection, diagnosis and therapy assessment after a suitable pre-processing stage.
Segmentation of breast parenchyma has been addressed relying on fuzzy binary clustering, breast anatomical priors and morphological refinements. The breast mask extraction module combines three 2D Fuzzy C-Means clusterings (executed on the three projections: axial, coronal and transversal) with a geometrical characterization of breast anatomy. In particular, seven well-defined key-points have been considered in order to accurately segment breast parenchyma from air and chest-wall.
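The fuzzy binary clustering at the core of the mask extraction can be sketched with a minimal two-cluster Fuzzy C-Means on scalar intensities. This is a toy illustration of the algorithm only (1D values, fixed iteration count, fuzzifier m=2); the thesis applies it to 2D slices along three projections:

```python
def fuzzy_cmeans_1d(values, n_iter=50, m=2.0):
    """Two-cluster fuzzy c-means on scalar intensities.
    Returns cluster centres and per-sample membership vectors."""
    c = [min(values), max(values)]                  # initial centres
    u = []
    for _ in range(n_iter):
        u = []
        for x in values:
            d = [abs(x - ck) or 1e-12 for ck in c]  # guard zero distance
            # membership: inverse-distance weighting, exponent 2/(m-1)
            inv = [(1.0 / dk) ** (2.0 / (m - 1.0)) for dk in d]
            s = sum(inv)
            u.append([ik / s for ik in inv])
        # update each centre as the membership-weighted mean
        for k in range(2):
            num = sum((u[i][k] ** m) * x for i, x in enumerate(values))
            den = sum(u[i][k] ** m for i in range(len(values)))
            c[k] = num / den
    return c, u

# Toy intensities: a dark group (air) and a bright group (tissue)
centres, memberships = fuzzy_cmeans_1d([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
```

Thresholding the memberships then yields a binary mask, which the thesis refines with anatomical key-points and morphological operations.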
To diminish the effects of involuntary movement artefacts, it is usual to apply a motion correction to the DCE-MRI volumes before any data analysis. However, there is no evidence that a single Motion Correction Technique (MCT) can handle different deformations - small or large, rigid or non-rigid - and different patients or tissues. Therefore, it would be useful to develop a quality index (QI) to evaluate the performance of different MCTs. Existing QIs might not be adequate to deal with DCE-MRI data because of the intensity variation due to the contrast medium. Therefore, in developing a novel QI, the underlying idea is that once DCE-MRI data have been realigned using a specific MCT, the dynamic course of the signal intensity should be as close as possible to the currently accepted physiological models (e.g. Tofts-Kermode, Extended Tofts-Kermode, Hayton-Brady, Gamma Capillary Transit Time, etc.). The motion correction module ranks all the MCTs using the QI, selects the best MCT and applies the correction before further data analysis.
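The QI idea above amounts to ranking each MCT by how well the realigned voxel time course fits a smooth reference model. In the sketch below a least-squares straight line stands in for the pharmacokinetic models named in the text (Tofts-Kermode, etc.), and the MCT names and time courses are purely illustrative:

```python
def fit_residual(times, signal):
    """Sum of squared residuals of a least-squares line fit.
    A straight line is a toy stand-in for a pharmacokinetic model."""
    n = len(times)
    mt = sum(times) / n
    ms = sum(signal) / n
    slope = (sum((t - mt) * (s - ms) for t, s in zip(times, signal))
             / sum((t - mt) ** 2 for t in times))
    intercept = ms - slope * mt
    return sum((s - (intercept + slope * t)) ** 2
               for t, s in zip(times, signal))

def rank_mcts(corrected, times):
    """corrected: {mct_name: realigned time course}.
    Lower residual (closer to the model) = better quality index."""
    return sorted(corrected, key=lambda name: fit_residual(times, corrected[name]))

# Hypothetical outputs of two MCTs for one voxel over four time points
ranking = rank_mcts({"rigid": [0.0, 2.0, 1.0, 3.0],
                     "bspline": [0.0, 1.0, 2.0, 3.0]}, [0, 1, 2, 3])
```

A real implementation would fit the nonlinear pharmacokinetic models themselves and aggregate residuals over many voxels, but the ranking logic is the same.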
The proposed lesion detection module performs the segmentation of lesions in Regions of Interest (ROIs) by means of classification at a pixel level. It is based on a Support Vector Machine (SVM) trained with dynamic features, extracted from a suitably pre-selected area by using a pixel-based approach. The pre-selection mask strongly improves the final result.
The lesion classification module evaluates the malignancy of each ROI by means of 3D textural features. The Local Binary Patterns descriptor has been used in the Three Orthogonal Planes (LBP-TOP) configuration. A Random Forest has been used to achieve the final classification into benign or malignant lesions.
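LBP-TOP applies the basic Local Binary Pattern operator on each of the three orthogonal planes of a volume and concatenates the resulting histograms. The underlying 2D operator can be sketched as follows (function name and patches are illustrative):

```python
def lbp_code(patch):
    """Basic 8-neighbour LBP code of a 3x3 patch: each neighbour that is
    >= the centre contributes one bit, ordered clockwise from top-left."""
    c = patch[1][1]
    neighbours = [patch[0][0], patch[0][1], patch[0][2],
                  patch[1][2], patch[2][2], patch[2][1],
                  patch[2][0], patch[1][0]]
    return sum(1 << i for i, v in enumerate(neighbours) if v >= c)

# A flat patch: every neighbour equals the centre, so all 8 bits are set
flat = lbp_code([[5, 5, 5], [5, 5, 5], [5, 5, 5]])  # 255
```

Histogramming these codes over each of the XY, XT and YT planes and concatenating the three histograms gives the LBP-TOP texture descriptor fed to the Random Forest.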
The therapy assessment stage aims to predict primary tumour recurrence to support the physician in evaluating the effects and benefits of therapy. For each patient who has at least one malignant lesion, the recurrence of the disease has been evaluated by means of a multiple classifier system. A set of dynamic, textural, clinicopathologic and pharmacokinetic features has been used to assess the probability of recurrence for the lesions.
Finally, to improve the usability of the proposed work, we developed a tele-medicine framework that allows advanced remote analysis of medical images in a secure and versatile client-server environment, at a low cost. The benefits of using the proposed framework are presented in a real-case scenario in which OsiriX, a widespread medical image analysis package, is enabled to perform advanced remote image processing in a simple manner over a secure channel.
The proposed CAD system has been tested on real breast DCE-MRI data for the available protocols. The breast mask extraction stage shows a median segmentation accuracy of 98% (±0.49) and a median Dice similarity index of 93% (±1.48), with 100% coverage of neoplastic lesions. The motion correction module is able to rank the MCTs with a 74% accordance with a 'reference ranking'. Moreover, by using only 40% of the available volume, the computational load is reduced while the best MCT is still always selected. The automatic detection maximises the area of correctly detected lesions while minimising the number of false alarms, with an accuracy of 99%; the lesions are then diagnosed according to their stage with an accuracy of 85%. The therapy assessment module provides a forecast of tumour recurrence with an accuracy of 78% and an AUC of 0.79. Each module has been evaluated by a leave-one-patient-out approach, and results show a confidence level of 95% (p<0.05).
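The leave-one-patient-out protocol used for evaluation groups all samples of one patient into the test fold, so no patient contributes to both training and testing. A minimal sketch of the split generator (identifiers are illustrative):

```python
def leave_one_patient_out(patient_ids):
    """Yield (held_out, train_idx, test_idx): each fold holds out every
    sample belonging to a single patient."""
    for held_out in sorted(set(patient_ids)):
        train = [i for i, p in enumerate(patient_ids) if p != held_out]
        test = [i for i, p in enumerate(patient_ids) if p == held_out]
        yield held_out, train, test

# Five samples from three patients: two from A, one from B, two from C
folds = list(leave_one_patient_out(["A", "A", "B", "C", "C"]))
```

This patient-level grouping is what makes the reported accuracies an estimate of performance on unseen patients rather than on unseen slices of already-seen patients.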
Finally, the proposed remote architecture showed a very low transmission overhead, settling at about 2.5% for the widespread 10/100 Mbps links. Security has been achieved using client-server certificates and up-to-date standards.