
    Learning to Explain: A Model-Agnostic Framework for Explaining Black Box Models

    We present Learning to Explain (LTX), a model-agnostic framework designed to provide post-hoc explanations for vision models. The LTX framework introduces an "explainer" model that generates explanation maps, highlighting the crucial regions that justify the predictions made by the model being explained. To train the explainer, we employ a two-stage process consisting of initial pretraining followed by per-instance finetuning. During both stages of training, we use a configuration in which the explained model's prediction for a masked input is compared with its original prediction for the unmasked input. This approach enables a novel counterfactual objective, which aims to anticipate the model's output using masked versions of the input image. Importantly, the LTX framework is not restricted to a specific model architecture and can provide explanations for both Transformer-based and convolutional models. Through our evaluations, we demonstrate that LTX significantly outperforms the current state-of-the-art in explainability across various metrics.
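    The counterfactual objective described above can be illustrated with a short sketch. The snippet below is a minimal, hedged reconstruction based only on this abstract: it compares the explained model's prediction on a masked input with its prediction on the unmasked input, plus a mask-sparsity term. All names (explainer, explained_model, the KL-divergence choice, and the lam weight) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a masking-based counterfactual objective (assumptions: PyTorch,
# softmax classifier, KL divergence as the prediction-matching term).
import torch
import torch.nn.functional as F

def counterfactual_loss(explained_model, explainer, image, lam=1.0):
    """Compare the prediction for a masked input with the original prediction."""
    with torch.no_grad():
        original_logits = explained_model(image)        # prediction on the unmasked input
        original_probs = F.softmax(original_logits, dim=-1)

    mask = explainer(image)                              # explanation map in [0, 1], shape (B, 1, H, W)
    masked_logits = explained_model(image * mask)        # prediction on the masked input

    # Counterfactual term: the masked input should reproduce the original prediction.
    pred_term = F.kl_div(F.log_softmax(masked_logits, dim=-1),
                         original_probs, reduction="batchmean")
    # Sparsity term: keep the explanation map compact.
    return pred_term + lam * mask.mean()
```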

    Deep Integrated Explanations

    This paper presents Deep Integrated Explanations (DIX) - a universal method for explaining vision models. DIX generates explanation maps by integrating information from the intermediate representations of the model, coupled with their corresponding gradients. Through an extensive array of both objective and subjective evaluations spanning diverse tasks, datasets, and model configurations, we showcase the efficacy of DIX in generating faithful and accurate explanation maps, while surpassing current state-of-the-art methods.
    Comment: CIKM 202
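    As a rough illustration of integrating intermediate representations with their gradients, the sketch below applies a path integral of gradients to an internal feature map (captured, e.g., with a forward hook) and couples it with the activations. The zero baseline, the step count, and the channel reduction are assumptions made for illustration; they are not claimed to be the exact DIX computation.

```python
# Sketch: integrate gradients of the target logit over an intermediate feature map
# and couple them with the activations (assumptions: PyTorch, zero baseline).
import torch

def integrated_feature_attribution(model_head, feature_map, target_class, steps=20):
    """model_head maps the intermediate feature map (B, C, H, W) to logits."""
    feature_map = feature_map.detach()
    baseline = torch.zeros_like(feature_map)
    total_grads = torch.zeros_like(feature_map)

    for alpha in torch.linspace(0.0, 1.0, steps):
        interp = (baseline + alpha * (feature_map - baseline)).detach().requires_grad_(True)
        score = model_head(interp)[:, target_class].sum()
        total_grads += torch.autograd.grad(score, interp)[0] / steps

    # Couple activations with averaged gradients, keep positive evidence,
    # and reduce over channels to obtain a spatial explanation map.
    attribution = (feature_map - baseline) * total_grads
    return attribution.clamp(min=0).sum(dim=1)           # shape (B, H, W)
```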

    Visual Explanations via Iterated Integrated Attributions

    We introduce Iterated Integrated Attributions (IIA) - a generic method for explaining the predictions of vision models. IIA employs iterative integration across the input image, the internal representations generated by the model, and their gradients, yielding precise and focused explanation maps. We demonstrate the effectiveness of IIA through comprehensive evaluations across various tasks, datasets, and network architectures. Our results show that IIA produces accurate explanation maps, outperforming other state-of-the-art explanation techniques.
    Comment: ICCV 202
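    The sketch below shows one plausible way to iterate an integration step over the input and its gradients. The nesting scheme, the re-weighting of the input between iterations, and the normalization are assumptions made for illustration and are not claimed to match the authors' exact formulation.

```python
# Sketch: repeat an integrated-gradients pass, re-weighting the input by the previous
# attribution (assumptions: PyTorch, zero baseline, per-pass max normalization).
import torch

def integrated_gradients(model, image, target_class, steps=20):
    """One standard integrated-gradients pass from a zero baseline."""
    baseline = torch.zeros_like(image)
    total = torch.zeros_like(image)
    for alpha in torch.linspace(0.0, 1.0, steps):
        interp = (baseline + alpha * (image - baseline)).detach().requires_grad_(True)
        score = model(interp)[:, target_class].sum()
        total += torch.autograd.grad(score, interp)[0] / steps
    return (image - baseline) * total

def iterated_attributions(model, image, target_class, iterations=3):
    """Iterate the integration, feeding each pass an input re-weighted by the previous map."""
    attribution = torch.ones_like(image)
    for _ in range(iterations):
        weighted = image * attribution
        attribution = integrated_gradients(model, weighted, target_class).abs()
        attribution = attribution / (attribution.amax() + 1e-8)  # normalize to [0, 1]
    return attribution.sum(dim=1)                                 # spatial map, shape (B, H, W)
```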

    Neuronal Mechanism for Compensation of Longitudinal Chromatic Aberration-Derived Algorithm

    The human visual system faces many challenges, among them the need to overcome the imperfections of its optics, which degrade the retinal image. One of the most dominant limitations is longitudinal chromatic aberration (LCA), which causes short wavelengths (blue light) to be focused in front of the retina, with consequent blurring of the chromatic retinal image. The perceived visual appearance, however, does not display such chromatic distortions. The intriguing question, therefore, is how the perceived appearance of a sharp and clear chromatic image is achieved despite the imperfections of the ocular optics. To address this issue, we propose a neural mechanism and computational model based on the unique properties of the S-cone pathway. The model suggests that the visual system overcomes LCA through two known properties of the S channel: (1) omitting the contribution of the S channel from the high-spatial-resolution pathway (which utilizes only the L and M channels), and (2) having large, coextensive receptive fields that correspond to the small bistratified cells. Here, we use computational simulations of our model on real images to show how integrating these two basic principles can provide significant compensation for LCA. Further support for the proposed neuronal mechanism is given by the ability of the model to predict an enigmatic visual phenomenon of large color shifts as part of the assimilation effect.
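    The two principles above can be mimicked with a very crude image-processing sketch: treat the red/green channels of an RGB image as a stand-in for the L/M cones and build the high-resolution signal from them alone, then represent the S (blue) contribution through a heavily blurred, coarse chromatic signal. The channel mapping, the luminance weights, and the blur width below are illustrative assumptions only, not the authors' simulation.

```python
# Crude illustration of the two S-channel principles on an RGB image
# (assumptions: R/G ~ L/M, B ~ S, Gaussian blur as a large receptive field).
import numpy as np
from scipy.ndimage import gaussian_filter

def compensate_lca(rgb, s_sigma=8.0):
    """rgb: float array in [0, 1], shape (H, W, 3)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]

    # (1) High-spatial-resolution (achromatic) pathway built from L and M only.
    luminance = 0.5 * (r + g)

    # (2) S-channel pathway with large, coextensive receptive fields:
    #     a coarse blue-yellow opponent signal.
    blue_yellow = gaussian_filter(b - luminance, sigma=s_sigma)

    # Recombine: sharp luminance structure carries the detail, while the coarse
    # chromatic signal restores the blue component without its optical blur.
    out = rgb.copy()
    out[..., 2] = np.clip(luminance + blue_yellow, 0.0, 1.0)
    return out
```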