    Measuring and improving the quality of visual explanations

    The ability to explain neural network decisions goes hand in hand with their safe deployment. Several methods have been proposed to highlight features important for a given network decision. However, there is no consensus on how to measure the effectiveness of these methods. We propose a new procedure for evaluating explanations. We use it to investigate visual explanations extracted from a range of possible sources in a neural network. We quantify the benefit of combining these sources and challenge a recent appeal for taking bias parameters into account. We support our conclusions with a general assessment of the impact of bias parameters in ImageNet classifiers.
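    To make "visual explanation" concrete, the sketch below shows one common source of such explanations: a plain input-gradient saliency map. The model choice, channel aggregation, and function name are illustrative assumptions for this sketch, not the paper's own procedure.

        # Minimal input-gradient saliency sketch (assumed setup, not the
        # paper's evaluated method). Requires torch and torchvision.
        import torch
        import torchvision.models as models

        model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        model.eval()

        def gradient_saliency(image: torch.Tensor, target_class: int) -> torch.Tensor:
            # `image`: normalized (1, 3, H, W) tensor; returns an (H, W) heat map.
            image = image.detach().clone().requires_grad_(True)
            score = model(image)[0, target_class]  # logit for the class of interest
            score.backward()                       # d(score)/d(pixels)
            # Aggregate absolute gradients over color channels into one map:
            # high values mark pixels whose change most affects the class score.
            return image.grad.abs().max(dim=1).values.squeeze(0)

    Explanation methods differ in which network quantities they read out (input gradients, intermediate activations, bias contributions), which is why a common evaluation procedure, as the abstract proposes, is needed to compare them.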