
    Using Feature Extraction From Deep Convolutional Neural Networks for Pathological Image Analysis and Its Visual Interpretability

    This dissertation presents a computer-aided diagnosis (CAD) system using deep learning approaches for lesion detection and classification on whole-slide images (WSIs) of breast cancer. The study demonstrates that the deep features learned by convolutional neural networks (CNNs), which are discriminative in classification, can provide comprehensive interpretability for the proposed CAD system when combined with domain knowledge in pathology. In the experiment, a total of 186 WSIs were collected and classified into three categories: Non-Carcinoma, Ductal Carcinoma in Situ (DCIS), and Invasive Ductal Carcinoma (IDC). Instead of conducting pixel-wise classification (segmentation) into three classes directly, the proposed system uses a hierarchical framework with a multi-view scheme that first performs lesion detection for region proposal at higher magnification and then classifies each detected lesion at lower magnification. A majority voting scheme was adopted to improve the error tolerance of the system in lesion-wise prediction. Over all 186 collected slides, the slide-wise prediction accuracy reaches 95.16% (177/186) in binary classification of carcinoma (malignant) versus non-carcinoma (benign), and the sensitivity for cases with carcinoma reaches 96.36% (106/110). In multi-class classification, the accuracy is 92.47% (172/186) when predicting Non-Carcinoma, DCIS, and IDC for each slide. Most importantly, the mechanism of the proposed CAD system is made interpretable from a pathological perspective. The experimental results show that the morphological characteristics and co-occurrence properties learned by the deep learning models for lesion detection and classification agree with clinical diagnostic rules. Accordingly, the pathological interpretability of the deep features not only enhances the reliability of the proposed CAD system, helping it gain acceptance from medical specialists, but also facilitates the development of deep learning frameworks for various tasks in pathology.
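    As an illustration of the lesion-wise majority voting and slide-level aggregation described above, the following minimal Python sketch shows one plausible implementation; the class names and the rule that any carcinoma lesion makes a slide malignant are illustrative assumptions rather than details taken from the dissertation.

```python
from collections import Counter

def lesion_label(patch_predictions):
    """Majority vote over the patch-level class predictions belonging to one lesion."""
    return Counter(patch_predictions).most_common(1)[0][0]

def slide_label(lesion_labels):
    """Slide-level call (assumed rule): carcinoma if any lesion is DCIS or IDC."""
    carcinoma = any(label in ("DCIS", "IDC") for label in lesion_labels)
    return "carcinoma" if carcinoma else "non-carcinoma"

patches_per_lesion = [["IDC", "IDC", "DCIS"], ["Non-Carcinoma", "Non-Carcinoma"]]
lesions = [lesion_label(p) for p in patches_per_lesion]  # ['IDC', 'Non-Carcinoma']
print(lesions, slide_label(lesions))                     # -> carcinoma
```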

    Understanding the Mechanism of Deep Learning Frameworks in Lesion Detection for Pathological Images with Breast Cancer

    With advances in scanning sensors and deep learning algorithms, computational pathology has drawn much attention in recent years and has started to play an important role in the clinical workflow. Computer-aided detection (CADe) systems have been developed to assist pathologists in slide assessment, increasing diagnostic efficiency and reducing misdetections. In this study, we conducted four experiments to demonstrate that the features learned by deep learning models are interpretable from a pathological perspective. In addition, classifiers such as the support vector machine (SVM) and random forests (RF) were used in the experiments to replace the fully connected layers and decompose the end-to-end framework, verifying the validity of the feature extraction performed in the convolutional layers. The experimental results reveal that the features learned by the convolutional layers act as morphological descriptors for specific cells or tissues, in agreement with diagnostic rules used in practice. Most of the properties learned by the deep learning models summarize detection rules that agree with those of experienced pathologists. The interpretability of deep features from a clinical viewpoint not only enhances the reliability of AI systems, enabling them to gain acceptance from medical experts, but also facilitates the development of deep learning frameworks for different tasks in pathological analytics.
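    As a minimal sketch of the decomposition described above, the snippet below extracts deep features from a CNN's convolutional layers and trains an SVM in place of the fully connected head; the ResNet-18 backbone, scikit-learn classifier, and input size are illustrative assumptions, not the configuration used in the study.

```python
import torch
import torchvision.models as models
from sklearn.svm import SVC

backbone = models.resnet18(weights=None)  # any CNN backbone would do here
backbone.fc = torch.nn.Identity()         # drop the fully connected head, keep conv features
backbone.eval()

@torch.no_grad()
def extract_features(images):             # images: (N, 3, H, W) tensor of patches
    return backbone(images).numpy()       # (N, 512) deep feature vectors

dummy_patches = torch.randn(2, 3, 224, 224)
print(extract_features(dummy_patches).shape)   # (2, 512)

# With real data, the SVM (or a random forest) replaces the FC classifier:
# clf = SVC(kernel="rbf").fit(extract_features(train_patches), train_labels)
# predictions = clf.predict(extract_features(test_patches))
```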

    Machine learning methods for histopathological image analysis

    The abundant accumulation of digital histopathological images has led to increased demand for their analysis, such as computer-aided diagnosis using machine learning techniques. However, digital pathological images and the related tasks raise several issues that must be considered. In this mini-review, we introduce the application of machine learning algorithms to digital pathological image analysis, address some problems specific to such analysis, and propose possible solutions. Comment: 23 pages, 4 figures.

    Deep Learning in Breast Cancer Imaging: A Decade of Progress and Future Directions

    Breast cancer has had the highest incidence rate worldwide among all malignancies since 2020. Breast imaging plays a significant role in early diagnosis and intervention to improve the outcomes of breast cancer patients. In the past decade, deep learning has shown remarkable progress in breast cancer imaging analysis, holding great promise for interpreting the rich information and complex context of breast imaging modalities. Considering the rapid improvement of deep learning technology and the increasing severity of breast cancer, it is critical to summarize past progress and identify the future challenges to be addressed. In this paper, we provide an extensive survey of deep learning-based breast cancer imaging research, covering studies on mammography, ultrasound, magnetic resonance imaging, and digital pathology images over the past decade. The major deep learning methods, publicly available datasets, and applications in imaging-based screening, diagnosis, treatment response prediction, and prognosis are described in detail. Drawing on the findings of this survey, we present a comprehensive discussion of the challenges and potential avenues for future research in deep learning-based breast cancer imaging. Comment: Survey, 41 pages.

    Explainable artificial intelligence (XAI) in deep learning-based medical image analysis

    With the increase in deep learning-based methods, the call for explainability of such methods grows, especially in high-stakes decision-making areas such as medical image analysis. This survey presents an overview of eXplainable Artificial Intelligence (XAI) used in deep learning-based medical image analysis. A framework of XAI criteria is introduced to classify deep learning-based medical image analysis methods. Papers on XAI techniques in medical image analysis are then surveyed and categorized according to the framework and to anatomical location. The paper concludes with an outlook on future opportunities for XAI in medical image analysis. Comment: Submitted for publication. Comments welcome by email to the first author.
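    For context, the snippet below sketches one of the simplest XAI techniques of the kind such surveys cover, a vanilla gradient saliency map; the ResNet-18 model and random input are placeholders, and the method is a generic example rather than one attributed to any specific surveyed paper.

```python
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()              # placeholder classifier
image = torch.randn(1, 3, 224, 224, requires_grad=True)   # stand-in for a medical image

scores = model(image)                                     # (1, 1000) class scores
scores[0, scores.argmax()].backward()                     # gradient of the top class w.r.t. the input

saliency = image.grad.abs().max(dim=1)[0]                 # (1, 224, 224) pixel-importance map
print(saliency.shape)
```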

    Research on the Design of Visual Attention Models for Salient Region Detection

    Visual attention is an important mechanism in the human visual system. When humans observe images and videos, they usually do not describe all of their contents; instead, they tend to talk about the semantically important regions and objects. The human eye is usually attracted to certain regions of interest rather than the entire scene. These regions of interest, which carry the main meaningful or semantic content, are called salient regions. Visual saliency detection refers to the use of intelligent algorithms to simulate the human visual attention mechanism, extracting both low-level features and high-level semantic information to localize the salient object regions in images and videos. The generated saliency map indicates the regions that are likely to attract human attention. As a fundamental problem in image processing and computer vision, visual saliency detection has been extensively studied to support practical tasks such as image and video compression, image retargeting, and object detection. The visual attention mechanisms adopted for saliency detection are generally divided into two categories: bottom-up models and top-down models. Bottom-up attention algorithms use low-level visual features such as colour and edges to locate salient objects, while top-down attention uses supervised learning to detect saliency. In recent years, more and more research has designed deep neural networks with attention mechanisms to improve the accuracy of saliency detection. The design of deep attention networks is inspired by human visual attention; the main goal is to enable the network to automatically capture the information that is critical to the target task, suppress irrelevant information, and shift attention from the whole scene to local regions. Attention modules have been developed for various domains such as saliency detection and semantic segmentation: the spatial attention module in a convolutional network generates a spatial attention map by exploiting the inter-spatial relationships of features, and the channel attention module produces an attention map by exploring the inter-channel relationships of features. These well-designed attention modules have proven effective in improving the accuracy of saliency detection.

    This paper investigates the visual attention mechanism for salient object detection and applies it to digital histopathology image analysis for the detection and classification of breast cancer metastases. As described below, the main research content comprises three parts.

    First, we studied the semantic attention mechanism and proposed a semantic attention approach to accurately localize salient objects in complex scenarios. The proposed semantic attention uses Faster-RCNN to capture high-level deep features and replaces the last layer of Faster-RCNN with an FC layer and a sigmoid function for visual saliency detection; it calculates the attention probabilities of proposals by comparing their feature distances to the possible salient object. The proposed method introduces a re-weighting mechanism to reduce the influence of complex backgrounds and a proposal selection mechanism to remove background noise and obtain objects with accurate shape and contour. The simulation results show that the semantic attention mechanism is robust to images with complex backgrounds owing to its use of high-level object concepts, and the algorithm achieved outstanding performance among salient object detection algorithms of the same period.

    Second, we designed a deep segmentation network (DSNet) for salient object prediction. We explored a Pyramidal Attentional ASPP (PA-ASPP) module that provides pixel-level attention. DSNet extracts multi-level features with a dilated ResNet-101, and the multi-scale contextual information is locally weighted with the proposed PA-ASPP. The pyramid feature aggregation encodes the multi-level features from three different scales; this feature fusion incorporates neighboring scales of context features more precisely to produce better pixel-level attention. We further use a scale-aware selection (SAS) module to locally weight multi-scale contextual features and capture the important contexts of ASPP for accurate and consistent dense prediction. The simulation results demonstrate that the proposed PA-ASPP is effective and generates more coherent results, and that with the SAS module the model can adaptively capture regions of different scales.

    Finally, building on the previous research on attention mechanisms, we proposed a novel Deep Regional Metastases Segmentation (DRMS) framework for the detection and classification of breast cancer metastases. A digitized whole-slide image has a very high resolution, usually gigapixels in size, yet the abnormal regions are often relatively small and most of the slide is normal tissue. Highly trained pathologists usually localize the regions of interest in the whole slide first and then examine the selected regions precisely; even so, the process is time-consuming and prone to missed diagnoses. Through observation and analysis, we believe that visual attention is well suited to digital pathology image analysis. The integrated framework for WSI analysis can capture the granularity and variability of WSIs and the rich information of multi-grained pathological images. We first utilize the proposed attention-based DSNet to detect regional metastases at the patch level, then adopt Density-Based Spatial Clustering of Applications with Noise (DBSCAN) to predict the whole metastases in individual slides, and finally determine patient-level pN-stages by aggregating the slide-level predictions. In combination, these techniques allow the framework to make better use of the multi-grained information in histological lymph node sections of whole-slide images. Experiments on large-scale clinical datasets (e.g., CAMELYON17) demonstrate that our method delivers advanced performance and provides consistent and accurate metastasis detection.
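    As an illustration of the clustering step in the DRMS framework, the sketch below groups patch-level metastasis predictions into candidate regions with scikit-learn's DBSCAN; the probability threshold, eps, min_samples, and grid-coordinate representation are assumptions for illustration rather than the settings used in the thesis.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def group_metastases(patch_xy, patch_probs, prob_thresh=0.5, eps=2.0, min_samples=3):
    """Cluster positive patches (grid coordinates) into candidate metastasis regions."""
    coords = patch_xy[patch_probs >= prob_thresh]
    if len(coords) == 0:
        return []
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(coords)
    return [coords[labels == k] for k in set(labels) if k != -1]  # label -1 is noise

# Example: a 3x3 block of positive patches plus one isolated (likely false-positive) patch
xy = np.array([[x, y] for x in range(3) for y in range(3)] + [[20, 20]])
probs = np.ones(len(xy))
regions = group_metastases(xy, probs)
print(len(regions))   # 1 region; the isolated patch is discarded as noise
```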

    Breast Cancer MRI Classification Based on Fractional Entropy Image Enhancement and Deep Feature Extraction

    Disease diagnosis with computer-aided methods has been extensively studied and applied to the diagnosis and monitoring of several chronic diseases. Early detection and risk assessment of breast diseases based on clinical data help doctors make an early diagnosis and monitor disease progression. The purpose of this study is to exploit convolutional neural networks (CNNs) to discriminate between pathological and healthy breast MRI scans. A fully automated and efficient deep feature extraction algorithm is proposed that exploits the spatial information obtained from both T2W-TSE and STIR MRI sequences to discriminate between pathological and healthy breast MRI scans. The breast MRI scans are preprocessed prior to the feature extraction step to enhance and preserve the fine details of the breast boundaries using the fractional integral entropy (FIE) algorithm, to reduce the effects of intensity variations between MRI slices, and to separate the right and left breast regions by exploiting symmetry information. The obtained features are classified using a long short-term memory (LSTM) neural network classifier, and the extracted features significantly improve the ability of the LSTM network to discriminate precisely between pathological and healthy cases. The maximum accuracy achieved in classifying the collected dataset, comprising 326 T2W-TSE images and 326 STIR images, is 98.77%. The experimental results demonstrate that the FIE enhancement method improves the performance of the CNN in classifying breast MRI scans. The proposed model appears to be efficient and might represent a useful diagnostic tool in the evaluation of breast MRI scans.
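    As a rough illustration of the feature-then-LSTM pipeline described above, the sketch below classifies a sequence of per-slice deep feature vectors with an LSTM; the 512-dimensional features, hidden size, and single sigmoid output are illustrative assumptions, not the configuration reported in the paper.

```python
import torch
import torch.nn as nn

class SliceSequenceClassifier(nn.Module):
    """LSTM over per-slice feature vectors -> probability that a scan is pathological."""
    def __init__(self, feat_dim=512, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                       # x: (batch, num_slices, feat_dim)
        _, (h_n, _) = self.lstm(x)              # h_n: (1, batch, hidden)
        return torch.sigmoid(self.head(h_n[-1]))

model = SliceSequenceClassifier()
features = torch.randn(4, 20, 512)              # 4 scans, 20 slices each, 512-d CNN features
print(model(features).shape)                    # torch.Size([4, 1])
```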