
    Towards explainable face aging with Generative Adversarial Networks

    Generative Adversarial Networks (GAN) are being increasingly used to perform face aging due to their capabilities of automatically generating highly-realistic synthetic images by using an adversarial model often based on Convolutional Neural Networks (CNN). However, GANs currently represent black box models since it is not known how the CNNs store and process the information learned from data. In this paper, we propose the first method that deals with explaining GANs, by introducing a novel qualitative and quantitative analysis of the inner structure of the model. Similarly to analyzing the common genes in two DNA sequences, we analyze the common filters in two CNNs. We show that the GANs for face aging partially share their parameters with GANs trained for heterogeneous applications and that the aging transformation can be learned using general purpose image databases and a fine-tuning step. Results on public databases confirm the validity of our approach, also enabling future studies on similar models.
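    The filter-comparison idea can be illustrated concretely. The sketch below counts how many convolutional filters in one network have a near-duplicate in another, using cosine similarity of the flattened kernels; the similarity measure, the 0.9 threshold, and the randomly initialised layers are illustrative assumptions, not the paper's exact protocol.

```python
# Minimal sketch: estimating the fraction of "shared" convolutional filters
# between two CNNs via cosine similarity of their kernels. The measure and
# threshold are illustrative assumptions, not the paper's exact method.
import torch
import torch.nn.functional as F

def shared_filter_ratio(conv_a: torch.nn.Conv2d, conv_b: torch.nn.Conv2d,
                        threshold: float = 0.9) -> float:
    """Fraction of filters in conv_a with a near-duplicate in conv_b."""
    # Flatten each filter to (out_channels, in_channels * k * k).
    wa = conv_a.weight.detach().flatten(1)
    wb = conv_b.weight.detach().flatten(1)
    # Pairwise cosine similarity between every filter in A and every filter in B.
    sim = F.cosine_similarity(wa.unsqueeze(1), wb.unsqueeze(0), dim=-1)
    # A filter counts as "shared" if its best match in B exceeds the threshold.
    best_match, _ = sim.max(dim=1)
    return (best_match > threshold).float().mean().item()

# Toy example with two randomly initialised layers of the same shape
# (in practice these would come from two trained generators).
layer_a = torch.nn.Conv2d(64, 128, kernel_size=3)
layer_b = torch.nn.Conv2d(64, 128, kernel_size=3)
print(f"shared filters: {shared_filter_ratio(layer_a, layer_b):.2%}")
```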

    Explainable artificial intelligence (XAI) in deep learning-based medical image analysis

    With an increase in deep learning-based methods, the call for explainability of such methods grows, especially in high-stakes decision making areas such as medical image analysis. This survey presents an overview of eXplainable Artificial Intelligence (XAI) used in deep learning-based medical image analysis. A framework of XAI criteria is introduced to classify deep learning-based medical image analysis methods. Papers on XAI techniques in medical image analysis are then surveyed and categorized according to the framework and according to anatomical location. The paper concludes with an outlook of future opportunities for XAI in medical image analysis.

    Toward Transparent AI: A Survey on Interpreting the Inner Structures of Deep Neural Networks

    The last decade of machine learning has seen drastic increases in scale and capabilities. Deep neural networks (DNNs) are increasingly being deployed in the real world. However, they are difficult to analyze, raising concerns about using them without a rigorous understanding of how they function. Effective tools for interpreting them will be important for building more trustworthy AI by helping to identify problems, fix bugs, and improve basic understanding. In particular, "inner" interpretability techniques, which focus on explaining the internal components of DNNs, are well-suited for developing a mechanistic understanding, guiding manual modifications, and reverse engineering solutions. Much recent work has focused on DNN interpretability, and rapid progress has thus far made a thorough systematization of methods difficult. In this survey, we review over 300 works with a focus on inner interpretability tools. We introduce a taxonomy that classifies methods by what part of the network they help to explain (weights, neurons, subnetworks, or latent representations) and whether they are implemented during (intrinsic) or after (post hoc) training. To our knowledge, we are also the first to survey a number of connections between interpretability research and work in adversarial robustness, continual learning, modularity, network compression, and studying the human visual system. We discuss key challenges and argue that the status quo in interpretability research is largely unproductive. Finally, we highlight the importance of future work that emphasizes diagnostics, debugging, adversaries, and benchmarking in order to make interpretability tools more useful to engineers in practical applications.

    Explaining Deep Face Algorithms through Visualization: A Survey

    Although current deep models for face tasks surpass human performance on some benchmarks, we do not understand how they work. Thus, we cannot predict how they will react to novel inputs, resulting in catastrophic failures and unwanted biases in the algorithms. Explainable AI helps bridge the gap, but currently, there are very few visualization algorithms designed for faces. This work undertakes a first-of-its-kind meta-analysis of explainability algorithms in the face domain. We explore the nuances and caveats of adapting general-purpose visualization algorithms to the face domain, illustrated by computing visualizations on popular face models. We review existing face explainability works and reveal valuable insights into the structure and hierarchy of face networks. We also determine the design considerations for practical face visualizations accessible to AI practitioners by conducting a user study on the utility of various explainability algorithms.
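    As a concrete example of the kind of general-purpose visualization algorithm such a meta-analysis adapts to faces, the sketch below computes a Grad-CAM-style saliency map. Grad-CAM is a chosen illustration rather than any particular method from the survey, and a torchvision ResNet-18 stands in for a face network (an assumption); in practice the hooks would target a layer of the actual face model.

```python
# Hedged sketch: Grad-CAM-style saliency on a stand-in network.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None).eval()
target_layer = model.layer4  # last convolutional block

activations, gradients = {}, {}
target_layer.register_forward_hook(
    lambda m, i, o: activations.update(value=o))
target_layer.register_full_backward_hook(
    lambda m, gi, go: gradients.update(value=go[0]))

def grad_cam(image: torch.Tensor, class_idx: int) -> torch.Tensor:
    """Return an (H, W) heat map for `class_idx` on a 1x3xHxW input."""
    logits = model(image)
    model.zero_grad()
    logits[0, class_idx].backward()
    # Weight each activation channel by its average gradient, then ReLU.
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
    # Upsample the coarse map back to the input resolution and normalize.
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear",
                        align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze()

heatmap = grad_cam(torch.randn(1, 3, 224, 224), class_idx=0)
print(heatmap.shape)  # torch.Size([224, 224])
```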

    MaLP: Manipulation Localization Using a Proactive Scheme

    Advancements in the generation quality of various Generative Models (GMs) have made it necessary to not only perform binary manipulation detection but also localize the modified pixels in an image. However, prior works on manipulation localization, termed passive, exhibit poor generalization performance on unseen GMs and attribute modifications. To combat this issue, we propose a proactive scheme for manipulation localization, termed MaLP. We encrypt the real images by adding a learned template. If the image is manipulated by any GM, this added protection from the template not only aids binary detection but also helps in identifying the pixels modified by the GM. The template is learned by leveraging local and global-level features estimated by a two-branch architecture. We show that MaLP performs better than prior passive works. We also show the generalizability of MaLP by testing on 22 different GMs, providing a benchmark for future research on manipulation localization. Finally, we show that MaLP can be used as a discriminator for improving the generation quality of GMs. Our models and code are available at www.github.com/vishal3477/pro_loc.
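    The proactive idea can be sketched in a few lines, under heavy simplification: a learned template is added to real images before release, and pixels where the recovered template deviates from the known one are flagged as manipulated. The template strength, the direct subtraction used for recovery, and the threshold below are illustrative assumptions and do not reflect MaLP's actual two-branch architecture or training objective.

```python
# Highly simplified sketch of proactive manipulation localization:
# embed a template, then flag pixels where the template signal is disrupted.
import torch

H = W = 128
template = torch.nn.Parameter(0.03 * torch.randn(1, 3, H, W))  # learned in practice

def protect(image: torch.Tensor) -> torch.Tensor:
    """Embed the template into a real image (clipping to [0, 1] omitted for clarity)."""
    return image + template

def localize(protected: torch.Tensor, original: torch.Tensor,
             threshold: float = 0.02) -> torch.Tensor:
    """Flag pixels where the recovered template deviates from the known one."""
    recovered = protected - original            # idealised recovery, for illustration
    error = (recovered - template).abs().mean(dim=1, keepdim=True)
    return (error > threshold).float()          # 1 = likely manipulated pixel

# Toy example: a generative edit replaces a square patch of the protected image.
real = torch.rand(1, 3, H, W)
protected = protect(real)
tampered = protected.clone()
tampered[:, :, 32:64, 32:64] = torch.rand(1, 3, 32, 32)  # fake "GM" edit
mask = localize(tampered, real)
print(f"flagged pixels: {int(mask.sum())} / {H * W}")
```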

    Behavioral Use Licensing for Responsible AI

    Scientific research and development relies on the sharing of ideas and artifacts. With the growing reliance on artificial intelligence (AI) for many different applications, the sharing of code, data, and models is important to ensure the ability to replicate methods and the democratization of scientific knowledge. Many high-profile journals and conferences expect code to be submitted and released with papers. Furthermore, developers often want to release code and models to encourage development of technology that leverages their frameworks and services. However, AI algorithms are becoming increasingly powerful and generalized. Ultimately, the context in which an algorithm is applied can be far removed from that which the developers had intended. A number of organizations have expressed concerns about inappropriate or irresponsible use of AI and have proposed AI ethical guidelines and responsible AI initiatives. While such guidelines are useful and help shape policy, they are not easily enforceable. Governments have taken note of the risks associated with certain types of AI applications and have passed legislation. While such laws are enforceable, they require prolonged scientific and political deliberation. In this paper we advocate the use of licensing to enable legally enforceable behavioral use conditions on software and data. We argue that licenses serve as a useful tool for enforcement in situations where it is difficult or time-consuming to legislate AI usage. Furthermore, by using such licenses, AI developers provide a signal to the AI community, as well as governmental bodies, that they are taking responsibility for their technologies and are encouraging responsible use by downstream users.