UOLO - automatic object detection and segmentation in biomedical images
We propose UOLO, a novel framework for the simultaneous detection and
segmentation of structures of interest in medical images. UOLO consists of an
object segmentation module whose intermediate abstract representations are
processed and used as input for object detection. The resulting system is
optimized simultaneously for detecting a class of objects and segmenting an
optionally different class of structures. UOLO is trained on a set of bounding
boxes enclosing the objects to detect, as well as pixel-wise segmentation
information, when available. A new loss function is devised, taking into
account whether a reference segmentation is accessible for each training image,
in order to suitably backpropagate the error. We validate UOLO on the task of
simultaneous optic disc (OD) detection, fovea detection, and OD segmentation
from retinal images, achieving state-of-the-art performance on public datasets.
Comment: Published in DLMIA 2018. Licensed under the Creative Commons
CC-BY-NC-ND 4.0 license: http://creativecommons.org/licenses/by-nc-nd/4.0
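The availability-aware loss can be pictured as a detection term that always applies plus a segmentation term that is switched on per image. Below is a minimal PyTorch sketch under stated assumptions; the placeholder loss terms and the weight w_seg are illustrative, not UOLO's exact formulation (the paper pairs a YOLO-style detection loss with a U-Net segmentation loss):

import torch
import torch.nn.functional as F

def uolo_style_loss(det_pred, det_target, seg_pred, seg_target,
                    seg_available, w_seg=1.0):
    # Detection term, always backpropagated (placeholder: MSE on the
    # detector outputs; UOLO itself uses a YOLO-style detection loss).
    det = F.mse_loss(det_pred, det_target)
    # Segmentation term, restricted to the images whose reference
    # segmentation exists; seg_available is a per-image boolean mask.
    if seg_available.any():
        seg = F.binary_cross_entropy_with_logits(
            seg_pred[seg_available], seg_target[seg_available])
    else:
        seg = det.new_zeros(())
    return det + w_seg * seg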
Sanity Checks for Saliency Methods Explaining Object Detectors
Saliency methods are frequently used to explain Deep Neural Network-based models. Adebayo et al.'s work on evaluating saliency methods for classification models illustrates that certain explanation methods fail the model and data randomization tests. However, on extending these tests to various state-of-the-art object detectors, we show that the ability to explain a model depends more on the model itself than on the explanation method. We perform sanity checks for object detection and define new qualitative criteria to evaluate the saliency explanations, both for object classification and bounding box decisions, using Guided Backpropagation, Integrated Gradients, and their SmoothGrad versions, together with Faster R-CNN, SSD, and EfficientDet-D0, trained on COCO. In addition, the sensitivity of the explanation method to model parameters and data labels varies class-wise, motivating performing the sanity checks for each class. We find that EfficientDet-D0 is the most interpretable detector, independent of the saliency method, and that it passes the sanity checks with few problems.
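The model randomization test referred to above can be reproduced compactly: re-initialize one layer's parameters, recompute the saliency map, and measure how much the explanation changes. A minimal PyTorch sketch, assuming a saliency_fn that returns a NumPy attribution map (the function names and the Spearman rank-correlation metric are illustrative choices, not the paper's exact protocol):

import copy
import torch
from scipy.stats import spearmanr

def model_randomization_check(model, saliency_fn, image, layer_name):
    # Saliency map of the trained model.
    baseline = saliency_fn(model, image)
    # Re-initialize one layer's parameters with random values,
    # as in Adebayo et al.'s model randomization test.
    randomized = copy.deepcopy(model)
    layer = dict(randomized.named_modules())[layer_name]
    for p in layer.parameters():
        torch.nn.init.normal_(p)
    perturbed = saliency_fn(randomized, image)
    # A rank correlation near 1 means the explanation ignored the
    # parameter change, i.e. the method fails the sanity check.
    rho, _ = spearmanr(baseline.ravel(), perturbed.ravel())
    return rho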
DExT: Detector Explanation Toolkit
State-of-the-art object detectors are treated as black boxes due to their
highly non-linear internal computations. Even with unprecedented advancements
in detector performance, the inability to explain how their outputs are
generated limits their use in safety-critical applications. Previous work fails
to produce explanations for both bounding box and classification decisions, and
generally produces separate explanations for each detector. In this paper, we
propose an open-source Detector Explanation Toolkit (DExT) which implements the
proposed approach to generate a holistic explanation for all detector decisions
using certain gradient-based explanation methods. We suggest various
multi-object visualization methods to merge the explanations of multiple
objects detected in an image as well as the corresponding detections in a
single image. The quantitative evaluation shows that the Single Shot MultiBox
Detector (SSD) is explained more faithfully than other detectors,
regardless of the explanation method. Both quantitative and human-centric
evaluations identify that SmoothGrad with Guided Backpropagation (GBP) provides
the most trustworthy explanations among the selected methods across all detectors. We
expect that DExT will motivate practitioners to evaluate object detectors from
the interpretability perspective by explaining both bounding box and
classification decisions.
Comment: 24 pages, with appendix. 1st World Conference on eXplainable
Artificial Intelligence, camera-ready
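The combination the evaluations favor, SmoothGrad over Guided Backpropagation, is straightforward to sketch: average the attribution maps obtained from noisy copies of the input. A minimal sketch, assuming a hypothetical attribution_fn (e.g., a guided-backpropagation routine that returns a gradient map shaped like the image); this is not DExT's actual API:

import torch

def smoothgrad(attribution_fn, model, image, target,
               n_samples=25, noise_std=0.15):
    # Scale the noise to the input's dynamic range, as in the
    # SmoothGrad paper (noise_std is a fraction of that range).
    sigma = noise_std * (image.max() - image.min())
    total = torch.zeros_like(image)
    for _ in range(n_samples):
        noisy = image + sigma * torch.randn_like(image)
        # attribution_fn is any gradient-based explainer, e.g. GBP.
        total += attribution_fn(model, noisy, target)
    return total / n_samples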