Towards explainable face aging with Generative Adversarial Networks
Generative Adversarial Networks (GANs) are increasingly used to perform face aging due to their ability to automatically generate highly realistic synthetic images using an adversarial model often based on Convolutional Neural Networks (CNNs). However, GANs currently represent black-box models, since it is not known how the CNNs store and process the information learned from data. In this paper, we propose the first method that deals with explaining GANs, by introducing a novel qualitative and quantitative analysis of the inner structure of the model. Similarly to analyzing the common genes in two DNA sequences, we analyze the common filters in two CNNs. We show that GANs for face aging partially share their parameters with GANs trained for heterogeneous applications and that the aging transformation can be learned using general-purpose image databases and a fine-tuning step. Results on public databases confirm the validity of our approach, also enabling future studies on similar models.
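The paper's core analysis, comparing the filters two trained CNNs have in common, can be illustrated with a short sketch. Below is a minimal, hypothetical version of such a comparison (not the authors' exact procedure): each filter in one layer is matched to its most similar counterpart in another network by cosine similarity, and the fraction of matches above a threshold serves as a rough estimate of shared parameters. The function name, the threshold value, and the assumption of matching layer shapes are all ours.

# Minimal sketch (not the authors' exact procedure): estimate how many
# convolutional filters two trained CNNs share by matching each filter in
# one network's layer to its most similar counterpart in the other's.
import torch
import torch.nn.functional as F

def shared_filter_fraction(weights_a, weights_b, threshold=0.9):
    """weights_*: conv weight tensors of shape (out_ch, in_ch, k, k).
    Returns the fraction of filters in A whose best match in B exceeds
    the similarity threshold. The threshold value is an assumption."""
    a = F.normalize(weights_a.flatten(1), dim=1)   # (out_a, in_ch*k*k)
    b = F.normalize(weights_b.flatten(1), dim=1)   # (out_b, in_ch*k*k)
    sim = a @ b.t()                                # pairwise cosine similarities
    best = sim.max(dim=1).values                   # best match in B for each filter in A
    return (best > threshold).float().mean().item()

# Hypothetical usage: compare a layer of a face-aging GAN generator with the
# same layer of a generator trained for a heterogeneous application.
# net_a, net_b = ...  # two trained generators with matching architectures
# frac = shared_filter_fraction(net_a.conv1.weight, net_b.conv1.weight)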
Advancing Artificial Intelligence in Sensors, Signals, and Imaging Informatics.
Objective: To identify research works that exemplify recent developments in the field of sensors, signals, and imaging informatics.
Method: A broad literature search was conducted using PubMed and Web of Science, supplemented with individual papers nominated by section editors. A predefined query combining Medical Subject Heading (MeSH) terms and keywords was used to search both sources. Section editors then filtered the entire set of retrieved papers, with each paper reviewed and rated by two section editors on a three-point Likert scale from 0 (do not include) to 2 (should be included). Only papers with a combined score of 2 or above were considered.
Results: A search for papers was executed at the start of January 2019, resulting in a combined set of 1,459 records published in 2018 in 119 unique journals. Section editors jointly filtered the list of candidates down to 14 nominations. The 14 candidate best papers were then ranked by a group of eight external reviewers. Four papers, representing different international groups and journals, were selected as the best papers by consensus of the International Medical Informatics Association (IMIA) Yearbook editorial board.
Conclusions: The fields of sensors, signals, and imaging informatics have rapidly evolved with the application of novel artificial intelligence/machine learning techniques. Studies have been able to discover hidden patterns and integrate different types of data towards improving diagnostic accuracy and patient outcomes. However, the quality of papers varied widely, without clear reporting standards for these types of models. Nevertheless, a number of papers have demonstrated useful techniques to improve the generalizability, interpretability, and reproducibility of increasingly sophisticated models.
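As a trivial illustration of the selection rule described in the Method section (two editors, scores 0-2, combined score of 2 or above passes), assuming made-up paper IDs and scores:

# Toy illustration of the selection rule: each candidate paper is scored
# 0-2 by two section editors; only papers whose combined score is 2 or
# higher stay in the pool. Paper IDs and scores are made up.
scores = {
    "paper-001": (2, 1),
    "paper-002": (0, 1),
    "paper-003": (1, 1),
}
shortlist = [pid for pid, (s1, s2) in scores.items() if s1 + s2 >= 2]
print(shortlist)  # ['paper-001', 'paper-003']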
Explainable artificial intelligence (XAI) in deep learning-based medical image analysis
With an increase in deep learning-based methods, the call for explainability
of such methods grows, especially in high-stakes decision making areas such as
medical image analysis. This survey presents an overview of eXplainable
Artificial Intelligence (XAI) used in deep learning-based medical image
analysis. A framework of XAI criteria is introduced to classify deep
learning-based medical image analysis methods. Papers on XAI techniques in
medical image analysis are then surveyed and categorized according to the
framework and according to anatomical location. The paper concludes with an
outlook of future opportunities for XAI in medical image analysis.
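To make the idea of a classification framework concrete, here is a small, hypothetical sketch of how XAI methods might be catalogued along commonly used criteria (scope, model access, training stage) together with anatomical location; the axes and the example entries are illustrative assumptions, not the survey's exact framework.

# Hedged sketch: a tiny data structure for classifying XAI methods along
# commonly used criteria. The fields and entries are illustrative only.
from dataclasses import dataclass

@dataclass
class XAIMethod:
    name: str
    scope: str           # "local" (one prediction) or "global" (whole model)
    access: str          # "model-agnostic" or "model-specific"
    stage: str           # "intrinsic" (built in) or "post hoc" (after training)
    anatomical_use: str  # example anatomical location it has been applied to

catalog = [
    XAIMethod("Grad-CAM", "local", "model-specific", "post hoc", "chest X-ray"),
    XAIMethod("LIME", "local", "model-agnostic", "post hoc", "skin lesion"),
    XAIMethod("attention maps", "local", "model-specific", "intrinsic", "retina"),
]

# Group methods by one criterion, as a survey framework would.
by_stage = {}
for m in catalog:
    by_stage.setdefault(m.stage, []).append(m.name)
print(by_stage)  # {'post hoc': ['Grad-CAM', 'LIME'], 'intrinsic': ['attention maps']}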
Toward Transparent AI: A Survey on Interpreting the Inner Structures of Deep Neural Networks
The last decade of machine learning has seen drastic increases in scale and
capabilities. Deep neural networks (DNNs) are increasingly being deployed in
the real world. However, they are difficult to analyze, raising concerns about
using them without a rigorous understanding of how they function. Effective
tools for interpreting them will be important for building more trustworthy AI
by helping to identify problems, fix bugs, and improve basic understanding. In
particular, "inner" interpretability techniques, which focus on explaining the
internal components of DNNs, are well-suited for developing a mechanistic
understanding, guiding manual modifications, and reverse engineering solutions.
Much recent work has focused on DNN interpretability, and rapid progress has
thus far made a thorough systematization of methods difficult. In this survey,
we review over 300 works with a focus on inner interpretability tools. We
introduce a taxonomy that classifies methods by what part of the network they
help to explain (weights, neurons, subnetworks, or latent representations) and
whether they are implemented during (intrinsic) or after (post hoc) training.
To our knowledge, we are also the first to survey a number of connections
between interpretability research and work in adversarial robustness, continual
learning, modularity, network compression, and studying the human visual
system. We discuss key challenges and argue that the status quo in
interpretability research is largely unproductive. Finally, we highlight the
importance of future work that emphasizes diagnostics, debugging, adversaries,
and benchmarking in order to make interpretability tools more useful to
engineers in practical applications.
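As a concrete instance of a post hoc technique aimed at latent representations, one of the taxonomy's categories, here is a minimal linear-probe sketch: a linear classifier is fit on frozen hidden activations to test whether a concept is linearly decodable at a given layer. The model, layer, and data loader are placeholders.

# Minimal sketch of a representation-level, post hoc interpretability tool:
# a linear probe trained on frozen hidden activations.
import torch
from sklearn.linear_model import LogisticRegression

def collect_activations(model, layer, loader, device="cpu"):
    """Run the frozen model and record the chosen layer's activations."""
    feats, labels = [], []
    handle = layer.register_forward_hook(
        lambda mod, inp, out: feats.append(out.flatten(1).detach().cpu())
    )
    model.eval()
    with torch.no_grad():
        for x, y in loader:
            model(x.to(device))
            labels.append(y)
    handle.remove()
    return torch.cat(feats).numpy(), torch.cat(labels).numpy()

# probe_layer = model.layer3   # hypothetical layer of interest
# X, y = collect_activations(model, probe_layer, val_loader)
# probe = LogisticRegression(max_iter=1000).fit(X, y)
# print("probe accuracy:", probe.score(X, y))
# High probe accuracy suggests the concept is linearly decodable at this
# depth; it indicates correlation, not a causal role in the computation.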
Explaining Deep Face Algorithms through Visualization: A Survey
Although current deep models for face tasks surpass human performance on some
benchmarks, we do not understand how they work. Thus, we cannot predict how they
will react to novel inputs, resulting in catastrophic failures and unwanted
biases in the algorithms. Explainable AI helps bridge the gap, but currently,
there are very few visualization algorithms designed for faces. This work
undertakes a first-of-its-kind meta-analysis of explainability algorithms in
the face domain. We explore the nuances and caveats of adapting general-purpose
visualization algorithms to the face domain, illustrated by computing
visualizations on popular face models. We review existing face explainability
works and reveal valuable insights into the structure and hierarchy of face
networks. We also determine the design considerations for practical face
visualizations accessible to AI practitioners by conducting a user study on the
utility of various explainability algorithms.
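One of the simplest general-purpose visualizations that such a meta-analysis would adapt to faces is vanilla gradient saliency; a minimal sketch follows, with `face_model` and its output convention (identity logits) as assumptions.

# Hedged sketch: vanilla gradient saliency on a hypothetical face model.
# The gradient of a target identity logit with respect to the input
# highlights the pixels that most influence the prediction.
import torch

def gradient_saliency(face_model, image, target_idx):
    """image: (1, 3, H, W) float tensor; returns an (H, W) saliency map."""
    face_model.eval()
    image = image.clone().requires_grad_(True)
    score = face_model(image)[0, target_idx]   # logit for the target identity
    score.backward()
    # Max absolute gradient over color channels, a common reduction choice.
    return image.grad.abs().max(dim=1).values[0]

# saliency = gradient_saliency(face_model, aligned_face, target_idx=0)
# Because face inputs are usually tightly aligned, such maps are often read
# relative to facial landmarks (eyes, nose, mouth) rather than in isolation.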
MaLP: Manipulation Localization Using a Proactive Scheme
Advancements in the generation quality of various Generative Models (GMs) have
made it necessary to not only perform binary manipulation detection but also
localize the modified pixels in an image. However, prior works termed as
passive for manipulation localization exhibit poor generalization performance
over unseen GMs and attribute modifications. To combat this issue, we propose a
proactive scheme for manipulation localization, termed MaLP. We encrypt the
real images by adding a learned template. If the image is manipulated by any
GM, this added protection from the template not only aids binary detection but
also helps in identifying the pixels modified by the GM. The template is
learned by leveraging local and global-level features estimated by a two-branch
architecture. We show that MaLP performs better than prior passive works. We
also show the generalizability of MaLP by testing on 22 different GMs,
providing a benchmark for future research on manipulation localization.
Finally, we show that MaLP can be used as a discriminator for improving the
generation quality of GMs. Our models/codes are available at
www.github.com/vishal3477/pro_loc.
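The proactive idea can be sketched in a few lines, though what follows is a conceptual toy, not the MaLP architecture: a low-magnitude learned template is added to real images before release, and a localization branch later predicts which pixels a generative model has modified. All shapes and the tiny localizer are our assumptions.

# Conceptual sketch of proactive manipulation localization, not MaLP itself.
import torch
import torch.nn as nn

class ProactiveProtector(nn.Module):
    def __init__(self, h=128, w=128, strength=0.03):
        super().__init__()
        # Learned additive template, kept low-magnitude so the protected
        # image stays visually close to the original.
        self.template = nn.Parameter(torch.randn(1, 3, h, w) * 0.01)
        self.strength = strength
        # Tiny stand-in for a localization branch: maps an image to a
        # per-pixel probability that it was modified by a generative model.
        self.localizer = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )

    def protect(self, image):
        return (image + self.strength * self.template).clamp(0, 1)

    def localize(self, image):
        return self.localizer(image)  # (N, 1, H, W) manipulation map

# Training would push localize(protect(x)) toward all zeros for untouched
# images and toward the true modified-pixel mask for manipulated ones;
# binary detection can then be read off, e.g., the map's mean value.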
Behavioral Use Licensing for Responsible AI
Scientific research and development relies on the sharing of ideas and
artifacts. With the growing reliance on artificial intelligence (AI) for many
different applications, the sharing of code, data, and models is important to
ensure the ability to replicate methods and the democratization of scientific
knowledge. Many high-profile journals and conferences expect code to be
submitted and released with papers. Furthermore, developers often want to
release code and models to encourage development of technology that leverages
their frameworks and services. However, AI algorithms are becoming increasingly
powerful and generalized. Ultimately, the context in which an algorithm is
applied can be far removed from that which the developers had intended. A
number of organizations have expressed concerns about inappropriate or
irresponsible use of AI and have proposed AI ethical guidelines and responsible
AI initiatives. While such guidelines are useful and help shape policy, they
are not easily enforceable. Governments have taken note of the risks associated
with certain types of AI applications and have passed legislation. While these
are enforceable, they require prolonged scientific and political deliberation.
In this paper we advocate the use of licensing to enable legally enforceable
behavioral use conditions on software and data. We argue that licenses serve as
a useful tool for enforcement in situations where it is difficult or
time-consuming to legislate AI usage. Furthermore, by using such licenses, AI
developers provide a signal to the AI community, as well as governmental
bodies, that they are taking responsibility for their technologies and are
encouraging responsible use by downstream users.