17 research outputs found

    Interpretable Deep Learning Model for Prostate Cancer Detection

    No full text
    Prostate cancer is the second leading cause of cancer death in American men, behind only lung cancer. Detecting prostate cancer early and accurately is key to preventing these deaths. Progress has been made in creating deep learning systems that can detect prostate cancer with a high degree of accuracy. However, a fundamental problem with these systems is that, while their performance can be exceptionally accurate, the classification outputs are not interpretable. This lack of interpretability significantly inhibits the adoption of these models in medical settings. We address the problem of interpretability of deep learning systems in the domain of prostate cancer detection. We develop a deep convolutional neural network based on the VGG16 architecture for the classification of prostate cancer lesions using T2-weighted magnetic resonance images. Our model achieves high performance, with an AUC of 0.86, a sensitivity of 0.88, and a specificity of 0.88. We use saliency maps for interpretation, calculating how much each individual pixel contributes to the overall class scores. We show the clusters of pixels that contribute most to the prediction, thus revealing the reasoning behind the classification. We then assess the caliber of the interpretation to demonstrate its exactness. This work demonstrates the potential of saliency maps to interpret the classifications of deep learning prostate cancer detection systems.
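    The pixel-attribution idea described above — scoring each pixel by how much it moves the class score — can be sketched as gradient saliency. A minimal illustration, using an analytic gradient through a toy linear scorer as a stand-in for backpropagation through the VGG16 network (all names and shapes here are illustrative assumptions, not the paper's implementation):

    ```python
    import numpy as np

    def saliency_map(x, W, b, target_class):
        """Gradient of the target class score w.r.t. each input pixel.

        For a linear scorer s = W @ x + b, the gradient of s[target_class]
        with respect to x is the corresponding weight row, so the saliency
        of pixel i is |W[target_class, i]| (gradient saliency computed
        analytically here, rather than by backprop through a deep net).
        """
        grad = W[target_class]          # d s_c / d x, one value per pixel
        side = int(np.sqrt(x.size))
        return np.abs(grad).reshape(side, side)

    rng = np.random.default_rng(0)
    x = rng.random(16)                  # a toy 4x4 "image", flattened
    W = rng.normal(size=(2, 16))        # 2-class linear scorer
    b = np.zeros(2)
    smap = saliency_map(x, W, b, target_class=1)
    print(smap.shape)                   # (4, 4): one saliency value per pixel
    ```

    High values in `smap` mark the pixels whose perturbation most changes the class score — the "clusters of pixels" the abstract highlights.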

    A Review of Explainable Deep Learning Cancer Detection Models in Medical Imaging

    No full text
    Deep learning has demonstrated remarkable accuracy in analyzing images for cancer detection tasks in recent years. The accuracy achieved rivals that of radiologists and is suitable for implementation in a clinical tool. However, a significant problem is that these models are black-box algorithms and are therefore intrinsically unexplainable. This creates a barrier to clinical implementation, due to the lack of trust and transparency characteristic of black-box algorithms. Additionally, recent regulations prevent the deployment of unexplainable models in clinical settings, which further demonstrates the need for explainability. To mitigate these concerns, recent studies have attempted to overcome these issues by modifying deep learning architectures or by providing post-hoc explanations. A review of the deep learning explanation literature focused on cancer detection using MR images is presented here. The gap between what clinicians deem explainable and what current methods provide is discussed, and suggestions for future work to close this gap are provided.

    Dynamic-GAN: Learning Spatial-Temporal Attention for Dynamic Object Removal in Feature Dense Environments

    No full text
    Robot navigation using simultaneous localization and mapping (SLAM) relies on landmarks to localize the robot within the environment. This localization may fail in dynamic scenarios, because moving elements of the environment can occlude critical landmarks; one example is a person walking in front of the robot and occluding the sensor data. To alleviate this problem, the work presented here uses an attention-based deep learning framework that converts robot camera frames containing dynamic content into static frames, making SLAM algorithms easier to apply. The vast majority of SLAM methods struggle when dynamic objects appear in the environment and occlude the area captured by the camera. Despite past attempts to deal with dynamic objects, reconstructing large occluded areas with complex backgrounds remains challenging. Our proposed Dynamic-GAN framework employs a generative adversarial network to remove dynamic objects from a scene and inpaint a static image free of dynamic objects. The novelty of our approach lies in using spatial-temporal attention to encourage the generative model to focus on areas of the image occluded by dynamic content, rather than weighting the whole image equally. Dynamic-GAN was evaluated both quantitatively and qualitatively, on benchmark datasets and on a mobile robot in indoor navigation environments. As people appeared dynamically in close proximity to the robot, results showed that large, feature-rich occluded areas can be accurately reconstructed in real time with our attention-based deep learning framework for dynamic object removal. Through experiments we demonstrated that our proposed algorithm performs about 25% better on average, under various circumstances, than the standard benchmark algorithms.
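    The core idea of weighting occluded regions more heavily than the rest of the frame can be sketched as an attention-weighted reconstruction loss. A minimal numpy illustration, assuming a fixed binary attention mask in place of the learned spatial-temporal attention (function and variable names are hypothetical, not from the paper):

    ```python
    import numpy as np

    def attention_weighted_l1(generated, target, attention):
        """L1 reconstruction loss reweighted by a spatial attention map.

        `attention` holds values in [0, 1], high where dynamic objects
        occluded the scene, so errors in occluded regions dominate the
        loss instead of being averaged away over the whole frame.
        (Illustrative stand-in: the real model learns this attention
        across frames rather than taking it as a fixed mask.)
        """
        weights = 1.0 + attention            # keep static regions nonzero
        return np.mean(weights * np.abs(generated - target))

    # Toy 4x4 frames: the generator erred inside the occluded patch.
    target = np.zeros((4, 4))
    generated = np.zeros((4, 4))
    generated[1:3, 1:3] = 0.5               # error in the occluded region
    attention = np.zeros((4, 4))
    attention[1:3, 1:3] = 1.0               # attention on the occlusion

    plain = np.mean(np.abs(generated - target))
    weighted = attention_weighted_l1(generated, target, attention)
    print(weighted > plain)                  # True: occluded errors weigh more
    ```

    The design point is simply that a uniform L1 loss dilutes inpainting errors over mostly-correct static background, while the attention weighting concentrates the gradient signal where reconstruction matters.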

    Why Are Explainable AI Methods for Prostate Lesion Detection Rated Poorly by Radiologists?

    No full text
    Deep learning offers significant advancements in the accuracy of prostate lesion identification and classification, underscoring its potential for clinical integration. However, the opacity of deep learning models presents interpretability challenges that are critical to their acceptance and utility in medical diagnosis and detection. While explanation methods have been proposed to demystify these models and enhance their clinical viability, the efficacy and acceptance of these methods in medical tasks are not well documented. This pilot study investigates the effectiveness of deep learning explanation methods in clinical settings and identifies the attributes that radiologists consider crucial for explainability, aiming to direct future enhancements. The study reveals that while explanation methods can improve clinical task performance by up to 20%, their perceived usefulness varies, with some methods rated poorly. Radiologists prefer explanation methods that are robust against noise, precise, and consistent. These preferences underscore the need to refine explanation methods to align with clinical expectations, emphasizing clarity, accuracy, and reliability. The findings highlight the importance of developing explanation methods that not only improve performance but are also tailored to the stringent requirements of clinical practice, thereby facilitating deeper trust and broader acceptance of deep learning in medical diagnostics.

    sj-pdf-1-cpx-10.1177_21677026221083284 – Supplemental material for Electrodermal Activity and Heart Rate Variability During Exposure Fear Scripts Predict Trait-Level and Momentary Social Anxiety and Eating-Disorder Symptoms in an Analogue Sample

    No full text
    Supplemental material, sj-pdf-1-cpx-10.1177_21677026221083284 for Electrodermal Activity and Heart Rate Variability During Exposure Fear Scripts Predict Trait-Level and Momentary Social Anxiety and Eating-Disorder Symptoms in an Analogue Sample by Caroline Christian, Elizabeth Cash, Dan A. Cohen, Christopher M. Trombley and Cheri A. Levinson in Clinical Psychological Science

    El Diario de Pontevedra : liberal newspaper: Year XXVIII, Number 8038 - 24 February 1911

    No full text
    Arthropods are the most diverse taxonomic group of terrestrial eukaryotes and are sensitive to physical alterations in their environment, such as those caused by forestry. With their enormous diversity and physical omnipresence, arthropods could be powerful indicators of the effects of disturbance following forestry. When arthropods have been used to measure the effects of disturbance, the total diversity of some groups is often found to increase following forestry. However, these findings are frequently derived using a coarse taxonomic grain (family or order) to accommodate various taxonomic impediments (including cryptic diversity and poorly resourced taxonomists). Our intent with this work was to determine the diversity of arthropods in and around Algonquin Park, and how this diversity was influenced by disturbance (in this case, forestry within the past 25 years). We used DNA barcode-derived diversity estimates (Barcode Index Number (BIN) richness) to avoid taxonomic impediments and as a source of genetic information with which we could conduct phylogenetic estimates of diversity (PD). Diversity patterns elucidated with PD are often, but not always, congruent with taxonomic estimates, and departures from these expectations can help clarify disturbance effects that are hidden from richness studies alone. We found that BIN richness and PD were greater in disturbed (forested) areas; however, when we controlled for the expected relationship between PD and BIN richness, we found that cut sites contained less PD than expected and that this diversity was more phylogenetically clustered than would be predicted by taxonomic richness. While disturbance may cause an evident increase in diversity, this diversity may not reflect the full evolutionary history of the assemblage within that area, and thus a subtle effect of disturbance can be found decades following forestry.
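    The PD-versus-richness contrast in this abstract comes from Faith's phylogenetic diversity: the total branch length of the tree connecting a set of sampled taxa. A minimal sketch on a toy tree, showing how two samples of equal richness can carry different amounts of evolutionary history (the tree and names are invented for illustration; a real analysis would use a dedicated package such as scikit-bio):

    ```python
    def faiths_pd(parents, branch_len, taxa):
        """Faith's PD: summed branch length of the subtree joining the
        sampled taxa to the root.

        `parents` maps each node to its parent (the root maps to None);
        `branch_len` gives the length of the branch above each node.
        """
        covered = set()
        for t in taxa:
            node = t
            while node is not None and node not in covered:
                covered.add(node)       # walk to the root, once per branch
                node = parents[node]
        return sum(branch_len.get(n, 0.0) for n in covered)

    # Toy tree: root -> (AB -> A, B) and root -> C.
    parents = {"A": "AB", "B": "AB", "AB": "root", "C": "root", "root": None}
    branch_len = {"A": 1.0, "B": 1.0, "AB": 1.0, "C": 3.0}

    clustered = faiths_pd(parents, branch_len, ["A", "B"])  # close relatives
    dispersed = faiths_pd(parents, branch_len, ["A", "C"])  # distant relatives
    print(clustered, dispersed)   # 3.0 5.0: equal richness, different PD
    ```

    This is the "phylogenetically clustered" effect the study reports: a cut site can match an undisturbed site in BIN richness while its assemblage, drawn from close relatives, spans less of the tree.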