Computer-Aided Detection and diagnosis for prostate cancer based on mono and multi-parametric MRI: A review
Prostate cancer is the second most frequently diagnosed cancer in men worldwide. In recent decades, new imaging techniques based on Magnetic Resonance Imaging (MRI) have been developed, improving diagnosis. In practice, diagnosis can be affected by multiple factors such as observer variability and the visibility and complexity of the lesions. In this regard, computer-aided detection and computer-aided diagnosis systems have been designed to help radiologists in their clinical practice. Research on computer-aided systems specifically focused on prostate cancer is a young field and has been an area of dynamic research for the last ten years. This survey aims to provide a comprehensive review of the state of the art over this period, focusing on the different stages composing the workflow of a computer-aided system. We also provide a comparison between studies and a discussion of potential avenues for future research. In addition, this paper presents a new public online dataset, made available to the research community with the aim of providing a common evaluation framework to overcome some of the current limitations identified in this survey.
Unsupervised and Weakly-Supervised Learning of Localized Texture Patterns of Lung Diseases on Computed Tomography
Computed tomography (CT) imaging enables in vivo assessment of lung parenchyma and of several lung diseases. CT scans are key, in particular, for the diagnosis of 1) chronic obstructive pulmonary disease (COPD), the fourth leading cause of death worldwide, which largely overlaps with pulmonary emphysema; and 2) lung cancer, the leading cause of cancer-related death, which manifests in its early stages with the presence of lung nodules.
Most lung CT image analysis methods to date have relied on supervised learning requiring manually annotated local regions of interest (ROIs), which are slow and labor-intensive to obtain. Machine learning models requiring little or no manual annotation are important for the sustainable development of computer-aided diagnosis (CAD) systems.
This thesis focused on exploiting CT scans for lung disease characterization via two learning strategies: 1) fully unsupervised learning on a very large amount of unannotated image patches to discover novel lung texture patterns for pulmonary emphysema; and 2) weakly-supervised learning to generate voxel-level localization of lung nodules from CT whole-slice labels.
In the first part of this thesis, we proposed an original unsupervised approach to learn emphysema-specific radiological texture patterns. We designed dedicated spatial and texture features and a two-stage learning strategy incorporating clustering and graph partitioning. Learning was performed on a cohort of 2,922 high-resolution full-lung CT scans with a high prevalence of smokers and COPD subjects. Experiments led to the discovery of 10 highly reproducible spatially-informed lung texture patterns and 6 quantitative emphysema subtypes (QES). The discovered QES were independently associated with distinct risks of symptoms, physiological changes, exacerbations and mortality. Genome-wide association studies identified loci associated with four subtypes.
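The two-stage strategy can be illustrated with a toy sketch: k-means clusters patch-level texture features, and a greedy centroid-merging step stands in for the graph-partitioning stage that consolidates clusters into a smaller set of patterns. This is a minimal numpy sketch under those assumptions; the thesis's actual features and partitioning algorithm are not reproduced, and all names and parameters are illustrative.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Stage 1: cluster texture-feature vectors (one row per lung patch)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each patch to its nearest centroid, then recompute centroids.
        labels = np.argmin(((X[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

def merge_clusters(centroids, n_groups):
    """Stage 2 (stand-in): greedily merge the closest centroids, a crude
    proxy for partitioning a centroid-affinity graph into final patterns."""
    groups = [[i] for i in range(len(centroids))]
    reps = list(centroids)
    while len(groups) > n_groups:
        d = np.full((len(reps), len(reps)), np.inf)
        for a in range(len(reps)):
            for b in range(a + 1, len(reps)):
                d[a, b] = np.linalg.norm(reps[a] - reps[b])
        a, b = np.unravel_index(np.argmin(d), d.shape)
        groups[a] += groups[b]          # merge cluster b into cluster a
        reps[a] = (reps[a] + reps[b]) / 2
        del groups[b], reps[b]
    return groups
```

The merge step mimics only the coarse effect of the second stage (consolidating many clusters into a few reproducible patterns), not the thesis's graph formulation.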
Then we designed a deep-learning approach, using unsupervised domain adaptation with adversarial training, to label the QES on cardiac CT scans, which included approximately 70% of the lung. Our proposed method accounted for the differences in CT image qualities, and enabled us to study the progression of QES on a cohort of 17,039 longitudinal cardiac and full-lung CT scans.
Overall, the discovered QES provide novel emphysema sub-phenotyping that may facilitate future study of emphysema development, understanding the stages of COPD and the design of personalized therapies.
In the second part of the thesis, we designed a deep-learning method for lung nodule detection with weak labels, using classification convolutional neural networks (CNNs) with skip-connections to generate high-quality discriminative class activation maps, and a novel candidate screening framework to reduce the number of false positives. Given that the vast majority of annotated nodules are benign, we further exploited a data augmentation framework with a generative adversarial network (GAN) to address data imbalance in lung cancer prediction. Our weakly-supervised lung nodule detection, evaluated on thousands of CT scans, achieved performance competitive with a fully-supervised method while requiring 100 times fewer annotations. Our data augmentation framework enabled synthesizing nodules with high fidelity in specified categories, and is beneficial for predicting nodule malignancy scores, thereby improving the accuracy and reliability of lung cancer screening.
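The class-activation-map idea underlying the weak localization can be sketched as follows: the classifier's weights for a class reweight the final convolutional feature maps into a coarse localization map, which a screening step then thresholds. This is a minimal numpy sketch of the generic CAM computation, not the thesis's skip-connection architecture; the threshold and minimum-area values are illustrative.

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """Weighted sum of the last conv layer's feature maps (C, H, W) by the
    classifier weights for one class -> coarse localization map in [0, 1]."""
    cam = np.tensordot(class_weights, feature_maps, axes=([0], [0]))
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()
    return cam

def screen_candidates(cam, threshold=0.5, min_area=2):
    """Toy candidate screening: keep the thresholded region only if it is
    large enough, a crude false-positive filter."""
    mask = cam >= threshold
    return mask if mask.sum() >= min_area else np.zeros_like(mask)
```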
Mammography
In this volume, the topics span a variety of content: the bases of mammography systems, optimization of screening mammography with reference to evidence-based research, new technologies of image acquisition and their surrounding systems, and case reports with reference to up-to-date multimodality images of breast cancer. Mammography lagged in the transition to digital imaging systems because of the high resolution necessary for diagnosis. However, in the past ten years, technical improvements have resolved these difficulties and enabled new diagnostic systems. We hope that the reader will learn the essentials of mammography and look forward to the new technologies. We want to express our sincere gratitude and appreciation to all the co-authors who have contributed their work to this volume.
Improving radiotherapy using image analysis and machine learning
With ever-increasing advancements in imaging, there is a growing abundance of images being acquired in the clinical environment. However, this increase in information can be a burden as well as a blessing, as significant amounts of time may be required to interpret the information contained in these images. Computer-assisted evaluation is one way in which better use could be made of these images. This thesis presents the combination of texture analysis of images acquired during the treatment of cancer with machine learning in order to improve radiotherapy.
The first application is the prediction of radiation-induced pneumonitis. In 13-37% of cases, lung cancer patients treated with radiotherapy develop radiation-induced lung disease, such as radiation-induced pneumonitis. Three-dimensional texture analysis, combined with patient-specific clinical parameters, was used to compute unique features. On radiotherapy planning CT data of 57 patients (14 symptomatic, 43 asymptomatic), a Support Vector Machine (SVM) obtained an area under the receiver operating characteristic curve (AUROC) of 0.873, with sensitivity, specificity and accuracy of 92%, 72% and 87% respectively. Furthermore, it was demonstrated that a Decision Tree classifier was capable of a similar level of performance using sub-regions of the lung volume.
The second application is prostate cancer identification.
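For reference, the AUROC, sensitivity and specificity figures reported for the pneumonitis classifier above are standard quantities computable from classifier scores. A minimal numpy sketch, using the rank (Mann-Whitney) formulation of AUROC; the inputs are illustrative, not the thesis's data:

```python
import numpy as np

def auroc(scores, labels):
    """AUROC as the probability that a random positive case scores
    above a random negative case (ties count half)."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def sens_spec(scores, labels, threshold):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP) at a cut-off."""
    pred = scores >= threshold
    tp = np.sum(pred & (labels == 1)); fn = np.sum(~pred & (labels == 1))
    tn = np.sum(~pred & (labels == 0)); fp = np.sum(pred & (labels == 0))
    return tp / (tp + fn), tn / (tn + fp)
```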
T2 MRI scans are used in the diagnosis of prostate cancer and in the identification of the primary cancer within the prostate gland. The manual identification of the cancer relies on the assessment of multiple scans and the integration of clinical information by a clinician. This requires considerable experience and time. As MRI becomes more integrated within the radiotherapy workflow, and as adaptive radiotherapy (where the treatment plan is modified based on multi-modality image information acquired during or between RT fractions) develops, it is timely to develop automatic segmentation techniques for reliably identifying cancerous regions. In this work a number of texture features were coupled with a supervised learning model for the automatic segmentation of the main cancerous focus in the prostate, the focal lesion. A mean AUROC of 0.713 was demonstrated with a 10-fold stratified cross-validation strategy on an aggregate data set. On a leave-one-case-out basis, a mean AUROC of 0.60 was achieved, which resulted in a mean Dice coefficient of 0.710. These results showed that it was possible to delineate the focal lesion in the majority (11) of the 14 cases used in the study.
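The Dice coefficient used above to assess the delineations is a simple overlap measure between two binary masks. A minimal sketch, assuming numpy arrays for the predicted and reference segmentations:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice overlap between a predicted lesion mask and the reference
    delineation: 2|A intersect B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / denom
```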
Curvelet-Based Texture Classification in Computerized Critical Gleason Grading of Prostate Cancer Histological Images
Classical multi-resolution image processing using wavelets provides an efficient analysis of image characteristics represented in terms of pixel-based singularities, such as the connected edge pixels of objects and texture elements given by pixel intensity statistics. The curvelet transform is a more recently developed approach based on curved singularities that provides a sparser representation for a variety of directional multi-resolution image processing tasks such as denoising and texture analysis. The objective of this research is to develop a multi-class classifier for the automated classification of Gleason patterns in prostate cancer histological images using curvelet-based texture analysis. This problem of computer-aided recognition of four pattern classes between Gleason Score 6 (primary Gleason grade 3 plus secondary Gleason grade 3) and Gleason Score 8 (both primary and secondary grades 4) is of critical importance, affecting treatment decisions and patients' quality of life. Multiple spatial samples within each histological image are examined through the curvelet transform. The significant curvelet coefficient at each location of an image patch is obtained by maximization over all curvelet orientations at that location, and represents the apparent curve-based singularity, such as a short edge segment, in the image structure. This sparser representation greatly reduces the redundancy in the original set of curvelet coefficients. Statistical texture features are then extracted from these curvelet coefficients at multiple scales. We have designed a 2-level 4-class classification scheme that attempts to mimic the human expert's decision process. It consists of two Gaussian-kernel support vector machines, one at each level, each incorporating a voting mechanism over the classifications of multiple windowed patches in an image to reach the final decision for the image.
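The per-location maximization over orientations can be sketched as follows, assuming the curvelet subbands of one scale are already available as a stacked numpy array (the curvelet transform itself is not reproduced here, and the feature set is a simplified stand-in for the statistical features described above):

```python
import numpy as np

def significant_coefficients(subbands):
    """Given one scale's curvelet subbands stacked as (orientations, H, W),
    keep at each location the coefficient with the largest magnitude across
    orientations: the 'significant' curved singularity at that location."""
    idx = np.abs(subbands).argmax(axis=0)
    h, w = np.indices(idx.shape)
    return subbands[idx, h, w]

def texture_features(coeffs):
    """Simple first-order statistics of the retained coefficients."""
    a = np.abs(coeffs).ravel()
    return {"mean": a.mean(), "std": a.std(), "energy": (a ** 2).mean()}
```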
At level 1, the support vector machine with voting is trained to distinguish Gleason grade 3 from grade 4, and thus Gleason score 6 from score 8, by a unanimous vote for one of the two classes; mixed votes falling inside the margin between decision boundaries are assigned to a third class for consideration at level 2. The level-2 support vector machine, with supplemental features, is trained to classify an image patch as Gleason grade 3+4 or 4+3; the majority decision over multiple patches consolidates the two-class discrimination within Gleason score 7, or otherwise assigns the image to an Indecision category. The developed tree classifier with voting over sampled image patches is distinct from traditional voting by multiple machines. With a database of TMA prostate histological images from the Urology/Pathology Laboratory of the Johns Hopkins Medical Center, the classifier using curvelet-based statistical texture features for recognition of the four critical Gleason scores was successfully trained and tested, achieving 97.91% overall 4-class validation accuracy and 95.83% testing accuracy. These results support the expectation that further testing and improvement could lead to a practical implementation.
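The two-level voting logic can be sketched in a few lines. The patch-level class labels here are illustrative stand-ins for the SVM outputs described above, not the actual decision rule with margins:

```python
from collections import Counter

def level1_decision(patch_votes):
    """Level 1: unanimous patch votes decide Gleason score 6 ('G3' votes)
    or score 8 ('G4' votes); any mixture is deferred to level 2."""
    votes = set(patch_votes)
    if votes == {"G3"}:
        return "score6"
    if votes == {"G4"}:
        return "score8"
    return "level2"

def level2_decision(patch_votes):
    """Level 2: majority over patches separates 3+4 from 4+3 within
    score 7; a tie falls into the Indecision category."""
    counts = Counter(patch_votes)
    (top, n), *rest = counts.most_common()
    if rest and rest[0][1] == n:
        return "indecision"
    return top
```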
Characterisation of Dynamic Process Systems by Use of Recurrence Texture Analysis
This thesis proposes a method to analyse the dynamic behaviour of process systems using sets of textural features extracted from distance matrices obtained from time series data. Algorithms based on grey level co-occurrence matrices, wavelet transforms, local binary patterns, textons, and pretrained convolutional neural networks (AlexNet and VGG16) were used to extract features. The method was demonstrated to effectively capture the dynamics of mineral process systems and could outperform competing approaches.
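The core pipeline, a distance matrix computed from a time series and treated as a greyscale image, followed by co-occurrence texture statistics, can be sketched minimally. This is one of several feature extractors mentioned above; the quantisation scheme and the two chosen statistics are illustrative:

```python
import numpy as np

def distance_matrix(series):
    """Pairwise |x_i - x_j| distances of a univariate time series,
    viewed as a greyscale 'recurrence' image."""
    x = np.asarray(series, dtype=float)
    return np.abs(x[:, None] - x[None, :])

def glcm_features(img, levels=8):
    """Grey level co-occurrence matrix over horizontal neighbours, plus two
    classic Haralick-style statistics of the quantised image."""
    # Quantise to [0, levels-1]; the epsilon guards the all-zero image.
    q = np.floor(img / (img.max() + 1e-12) * levels).astype(int)
    glcm = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[i, j] += 1
    p = glcm / glcm.sum()
    li, lj = np.indices(p.shape)
    contrast = (p * (li - lj) ** 2).sum()
    homogeneity = (p / (1.0 + np.abs(li - lj))).sum()
    return contrast, homogeneity
```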