6,563 research outputs found

    Deep Lesion Graphs in the Wild: Relationship Learning and Organization of Significant Radiology Image Findings in a Diverse Large-scale Lesion Database

    Radiologists routinely find and annotate significant abnormalities on a large number of radiology images in their daily work. Such abnormalities, or lesions, have been collected over the years and stored in hospitals' picture archiving and communication systems; however, they are largely unsorted and lack semantic annotations such as type and location. In this paper, we aim to organize and explore them by learning a deep feature representation for each lesion. A large-scale and comprehensive dataset, DeepLesion, is introduced for this task. DeepLesion contains bounding boxes and size measurements of over 32K lesions. To model their similarity relationships, we leverage multiple sources of supervision, including types, self-supervised location coordinates, and sizes. These require little manual annotation effort yet describe useful attributes of the lesions. A triplet network is then used to learn lesion embeddings, with a sequential sampling strategy to capture their hierarchical similarity structure. Experiments show promising qualitative and quantitative results on lesion retrieval, clustering, and classification. The learned embeddings can further be employed to build a lesion graph for various clinically useful applications; we propose algorithms for intra-patient lesion matching and missing-annotation mining, and experimental results validate their effectiveness.
    Comment: Accepted by CVPR 2018. DeepLesion URL added.
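    The hierarchical triplet idea above lends itself to a short PyTorch sketch. This is an illustrative toy, not the authors' released code: the backbone, the margin values, and the use of two nested positives as a stand-in for sequential sampling over type, location, and size attributes are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LesionEmbedder(nn.Module):
    """Toy CNN mapping a lesion patch to an L2-normalised embedding."""
    def __init__(self, dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, dim)

    def forward(self, x):
        z = self.fc(self.features(x).flatten(1))
        return F.normalize(z, dim=1)

def hierarchical_triplet_loss(anchor, near_pos, far_pos, neg, m1=0.2, m2=0.4):
    """Nested margins: a positive sharing more attributes with the anchor
    (e.g. type AND location) must embed closer than one sharing fewer,
    which in turn must embed closer than an unrelated lesion."""
    d = lambda a, b: (a - b).pow(2).sum(1)  # squared Euclidean distance
    loss = F.relu(d(anchor, near_pos) - d(anchor, far_pos) + m1) \
         + F.relu(d(anchor, far_pos) - d(anchor, neg) + m2)
    return loss.mean()

# Toy usage on random patches, just to show the shapes involved.
net = LesionEmbedder()
a, p1, p2, n = (net(torch.randn(8, 1, 64, 64)) for _ in range(4))
hierarchical_triplet_loss(a, p1, p2, n).backward()
```

    The two hinge terms mirror the hierarchy the abstract describes: embeddings fan out from the anchor in order of shared attributes.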

    Pattern classification approaches for breast cancer identification via MRI: state‐of‐the‐art and vision for the future

    Mining algorithms for Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE‐MRI) of breast tissue are discussed. The algorithms build on recent advances in multidimensional signal processing and aim to advance the current state‐of‐the‐art in computer‐aided detection and analysis of breast tumours observed at various stages of development. The topics discussed include image feature extraction, information fusion using radiomics, multi‐parametric computer‐aided classification and diagnosis using information fusion of tensorial datasets, as well as Clifford algebra based classification approaches and convolutional neural network deep learning methodologies. The discussion also extends to semi‐supervised and self‐supervised deep learning strategies, as well as generative adversarial networks and algorithms using generated confrontational learning approaches. To address the problem of weakly labelled tumour images, generative adversarial deep learning strategies are considered for the classification of different tumour types. The proposed data fusion approaches provide a novel Artificial Intelligence (AI) based framework for more robust image registration that can potentially advance the early identification of heterogeneous tumour types, even when the associated imaged organs are registered as separate entities embedded in more complex geometric spaces. Finally, the general structure of a high‐dimensional medical imaging analysis platform based on multi‐task detection and learning is proposed as a way forward. The proposed algorithm makes use of novel loss functions that form the building blocks of a generated confrontational learning methodology applicable to tensorial DCE‐MRI. Since some of the approaches discussed are also based on time‐lapse imaging, conclusions about the rate of proliferation of the disease become possible. The proposed framework can potentially reduce the costs associated with the interpretation of medical images by providing automated, faster and more consistent diagnosis.
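    One concrete reading of the GAN-based classification of weakly labelled images mentioned above is the standard semi-supervised GAN discriminator with K real tumour classes plus one "generated" class (Salimans et al., 2016), sketched below. The architecture, the choice K = 3, and the loss weighting are placeholders for illustration, not a method taken from the review.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

K = 3  # hypothetical number of tumour types

class Discriminator(nn.Module):
    """Classifier over K tumour classes plus a (K+1)-th 'generated' class."""
    def __init__(self, k=K):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 32, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, k + 1)  # logits; index k means "fake"

    def forward(self, x):
        return self.head(self.body(x))

def d_loss(disc, x_lab, y, x_unlab, x_fake):
    """Supervised cross-entropy on the few labelled scans, plus
    real-vs-fake terms on unlabelled and generator-produced images
    (the generator update is omitted here)."""
    sup = F.cross_entropy(disc(x_lab), y)
    p_fake = lambda x: F.softmax(disc(x), dim=1)[:, K]
    unsup = -torch.log(1 - p_fake(x_unlab) + 1e-8).mean() \
            - torch.log(p_fake(x_fake) + 1e-8).mean()
    return sup + unsup

# Toy usage with random 64x64 single-channel images.
D = Discriminator()
x_lab, y = torch.randn(4, 1, 64, 64), torch.randint(0, K, (4,))
x_unlab, x_fake = torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64)
print(d_loss(D, x_lab, y, x_unlab, x_fake))
```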

    Learning Algorithms for Fat Quantification and Tumor Characterization

    Obesity is one of the most prevalent health conditions. About 30% of the world's adult population, and over 70% of the United States', is either overweight or obese, which increases the risk for cardiovascular diseases, diabetes, and certain types of cancer. Among all cancers, lung cancer is the leading cause of death, whereas pancreatic cancer has the poorest prognosis of all major cancers. Early diagnosis of these cancers can save lives. This dissertation contributes towards the development of computer-aided diagnosis tools that aid clinicians in establishing the quantitative relationship between obesity and cancers. With respect to obesity and metabolism, the first part of the dissertation focuses on the segmentation and quantification of white and brown adipose tissue. For cancer diagnosis, we analyze two important cases: lung cancer and Intraductal Papillary Mucinous Neoplasm (IPMN), a precursor to pancreatic cancer. The dissertation proposes an automatic body-region detection method trained with only a single example, followed by a new fat quantification approach based on geometric and appearance characteristics. For the segmentation of brown fat, a PET-guided CT co-segmentation method is presented. With different variants of Convolutional Neural Networks (CNN), supervised learning strategies are proposed for the automatic diagnosis of lung nodules and IPMN. To address the unavailability of the large number of labeled examples required for training, unsupervised approaches to cancer diagnosis without explicit labeling are also proposed. We evaluate the proposed approaches (both supervised and unsupervised) on two different tumor diagnosis challenges: lung and pancreas, with 1018 CT and 171 MRI scans, respectively. The proposed segmentation, quantification, and diagnosis approaches explore the important adiposity-cancer association and help pave the way towards improved diagnostic decision making in routine clinical practice.
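    As a minimal illustration of intensity-based fat quantification (not the dissertation's method, which combines geometric and appearance cues), the snippet below counts body voxels inside the conventional CT fat window of roughly -190 to -30 Hounsfield units. The window bounds and the toy volume are assumptions.

```python
import numpy as np

def fat_fraction(ct_hu, body_mask, lo=-190, hi=-30):
    """Fraction of body voxels whose Hounsfield units fall in the
    commonly used adipose-tissue window [lo, hi]."""
    fat = (ct_hu >= lo) & (ct_hu <= hi) & body_mask
    return fat.sum() / max(int(body_mask.sum()), 1)

# Toy volume: 50^3 voxels of random HU inside an all-ones body mask.
vol = np.random.randint(-1000, 1000, size=(50, 50, 50))
print(f"fat fraction: {fat_fraction(vol, np.ones_like(vol, dtype=bool)):.3f}")
```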

    Automated Grading of Bladder Cancer using Deep Learning

    PhD thesis in Information Technology.
    Urothelial carcinoma is the most common type of bladder cancer and is among the cancer types with the highest recurrence rate and lifetime treatment cost per patient. Diagnosed patients are stratified into risk groups, mainly based on histological grade and stage. However, it is well known that correct grading of bladder cancer suffers from intra- and interobserver variability and inconsistent reproducibility between pathologists, potentially leading to under- or overtreatment of patients. The economic burden, unnecessary patient suffering, and additional load on the health care system illustrate the importance of developing new tools to aid pathologists. With the introduction of digital pathology, large amounts of data have become available in the form of digital histological whole-slide images (WSI). However, despite the massive amount of data, annotations are lacking. Another challenge is that tissue samples of urothelial carcinoma contain a mixture of damaged tissue, blood, stroma, muscle, and urothelium, of which mainly the urothelium is diagnostically relevant for grading.

    A method for tissue segmentation is investigated, with the aim of segmenting WSIs into six tissue classes: urothelium, stroma, muscle, damaged tissue, blood, and background. Several methods based on convolutional neural networks (CNN) for tile-wise classification are proposed. Both single-scale and multiscale models were explored to see whether including more magnification levels would improve performance. Different techniques, such as unsupervised learning, semi-supervised learning, and domain adaptation, are explored to mitigate the challenge of missing large quantities of annotated data. Since it is intractable to process an entire WSI at full resolution at once, tiles must be extracted from it. We propose a method to parameterize and automate the task of extracting tiles at different scales, with a region of interest (ROI) defined at one of the scales; the method is reproducible and easy to describe by reporting the parameters.

    Finally, a pipeline for automated diagnostic grading, called TRIgrade, is proposed. First, the tissue segmentation method is used to find the diagnostically relevant urothelium tissue. Then, the parameterized tile extraction method extracts tiles from the urothelium regions at three magnification levels from 300 WSIs. The extracted tiles form the training, validation, and test data used to train and test a diagnostic model. The final system outputs a segmented tissue image showing all tissue regions in the WSI, a WHO grade heatmap indicating low- and high-grade carcinoma regions, and a slide-level WHO grade prediction. The proposed TRIgrade pipeline correctly graded 45 of 50 WSIs, achieving an accuracy of 90%.
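    The parameterised tile extraction step can be sketched briefly. Assuming the OpenSlide Python bindings and a hypothetical slide path, the function below reads tiles at several magnification levels, all centred on the same level-0 coordinate, such as a point inside a urothelium ROI. The parameter values are not those of TRIgrade.

```python
import openslide  # pip install openslide-python

def extract_multiscale_tiles(wsi_path, cx, cy, tile=256, levels=(0, 1, 2)):
    """Return one tile per magnification level, each centred on the
    level-0 coordinate (cx, cy). read_region() takes the top-left corner
    in level-0 coordinates, so the offset is scaled by the downsample."""
    slide = openslide.OpenSlide(wsi_path)
    tiles = []
    for lvl in levels:
        ds = slide.level_downsamples[lvl]
        x0 = int(cx - tile * ds / 2)  # centre the tile at this level
        y0 = int(cy - tile * ds / 2)
        tiles.append(slide.read_region((x0, y0), lvl, (tile, tile)).convert("RGB"))
    slide.close()
    return tiles  # list of PIL RGB images, one per level

# Hypothetical usage; "slide.svs" and the ROI centre are placeholders.
# tiles = extract_multiscale_tiles("slide.svs", cx=40_000, cy=30_000)
```

    Because every tile is described by (cx, cy, tile size, levels), the extraction is reproducible simply by reporting these parameters, which is the point the thesis makes.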

    Unsupervised tissue segmentation via deep constrained Gaussian network

    Tissue segmentation is the mainstay of pathological examination, but manual delineation is unduly burdensome. To assist this time-consuming and subjective manual step, researchers have devised methods to automatically segment structures in pathological images. Recently, automated machine learning and deep learning based methods have come to dominate tissue segmentation research. However, most such approaches are supervised and developed using large numbers of training samples, for which pixel-wise annotations are expensive and sometimes impossible to obtain. This paper introduces a novel unsupervised learning paradigm that integrates an end-to-end deep mixture model with a constrained indicator to acquire accurate semantic tissue segmentation. The constraint aims to centralise the components of the deep mixture model during the calculation of the optimisation function; in so doing, the redundant or empty class issues common in current unsupervised learning methods can be greatly reduced. Validated on both public and in-house datasets, the proposed deep constrained Gaussian network achieves significantly better performance (Wilcoxon signed-rank test; average Dice scores of 0.737 and 0.735, respectively) on tissue segmentation, with improved stability and robustness, compared to other existing unsupervised segmentation approaches. Furthermore, the proposed method performs comparably (p-value > 0.05) to the fully supervised U-Net.
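    One way to render the constrained mixture idea concrete is a responsibility-weighted Gaussian mixture negative log-likelihood over pixel features, plus a balance penalty that discourages empty components, sketched below. The penalty is a stand-in for the paper's constrained indicator, whose exact form is not reproduced here; the dimensions and the weight alpha are assumptions.

```python
import torch
import torch.nn.functional as F

def constrained_gmm_loss(feats, logits, mu, log_var, alpha=0.1):
    """feats: (N, D) pixel features; logits: (N, K) network outputs;
    mu, log_var: (K, D) learnable diagonal-Gaussian parameters."""
    resp = F.softmax(logits, dim=1)                      # (N, K) responsibilities
    diff = feats.unsqueeze(1) - mu.unsqueeze(0)          # (N, K, D)
    # log-density of each pixel under each component (constants dropped)
    log_pdf = -0.5 * (diff.pow(2) / log_var.exp() + log_var).sum(-1)  # (N, K)
    nll = -(resp * log_pdf).sum(1).mean()
    # KL(mean responsibility || uniform): grows when components go empty
    usage = resp.mean(0)                                 # (K,)
    uniform = torch.full_like(usage, 1.0 / usage.numel())
    balance = (usage * (usage.clamp_min(1e-8) / uniform).log()).sum()
    return nll + alpha * balance

# Toy usage: 1024 pixels, 16-dim features, 6 components.
N, D, Kc = 1024, 16, 6
loss = constrained_gmm_loss(torch.randn(N, D), torch.randn(N, Kc),
                            torch.randn(Kc, D), torch.zeros(Kc, D))
print(loss)
```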