Towards Accountable AI: Hybrid Human-Machine Analyses for Characterizing System Failure
As machine learning systems move from computer-science laboratories into the
open world, their accountability becomes a high priority problem.
Accountability requires deep understanding of system behavior and its failures.
Current evaluation methods such as single-score error metrics and confusion
matrices provide aggregate views of system performance that hide important
shortcomings. Understanding details about failures is important for identifying
pathways for refinement, for communicating the reliability of systems in
different settings, and for specifying appropriate human oversight and
engagement.
Characterization of failures and shortcomings is particularly complex for
systems composed of multiple machine learned components. For such systems,
existing evaluation methods have limited expressiveness in describing and
explaining the relationship among input content, the internal states of system
components, and final output quality. We present Pandora, a set of hybrid
human-machine methods and tools for describing and explaining system failures.
Pandora leverages both human and system-generated observations to summarize
conditions of system malfunction with respect to the input content and system
architecture. We share results of a case study with a machine learning pipeline
for image captioning that show how detailed performance views can be beneficial
for analysis and debugging.
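As a minimal illustration of the abstract's point about aggregate metrics (this is not Pandora's implementation; the records, tags, and numbers below are made up), the following Python sketch contrasts a single-score accuracy with the same predictions sliced by a human-annotated input-content condition. The failure concentrated in one condition is invisible in the aggregate number.

```python
# Illustrative sketch only: aggregate accuracy can hide failure modes that
# surface when outcomes are sliced by input-content conditions.
from collections import defaultdict

# Each record: (human-annotated input-content tag, was the output correct?)
# All values are invented for illustration.
records = [
    ("indoor scene", True), ("indoor scene", True), ("indoor scene", True),
    ("indoor scene", True), ("outdoor scene", True), ("outdoor scene", True),
    ("cluttered scene", False), ("cluttered scene", False),
    ("cluttered scene", True), ("cluttered scene", False),
]

# Single-score view: one aggregate number for the whole system.
overall = sum(ok for _, ok in records) / len(records)
print(f"aggregate accuracy: {overall:.2f}")

# Sliced view: group outcomes by input-content condition to expose
# where the pipeline actually breaks down.
by_tag = defaultdict(list)
for tag, ok in records:
    by_tag[tag].append(ok)

for tag, outcomes in sorted(by_tag.items()):
    acc = sum(outcomes) / len(outcomes)
    print(f"{tag:>16}: accuracy {acc:.2f} over {len(outcomes)} examples")
```

Here the aggregate score (0.70) looks acceptable, while the sliced view shows the system failing on most cluttered-scene inputs, which is the kind of condition-specific shortcoming the paper argues detailed performance views should surface.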
Toward Transparent AI: A Survey on Interpreting the Inner Structures of Deep Neural Networks
The last decade of machine learning has seen drastic increases in scale and
capabilities. Deep neural networks (DNNs) are increasingly being deployed in
the real world. However, they are difficult to analyze, raising concerns about
using them without a rigorous understanding of how they function. Effective
tools for interpreting them will be important for building more trustworthy AI
by helping to identify problems, fix bugs, and improve basic understanding. In
particular, "inner" interpretability techniques, which focus on explaining the
internal components of DNNs, are well-suited for developing a mechanistic
understanding, guiding manual modifications, and reverse engineering solutions.
Much recent work has focused on DNN interpretability, and rapid progress has
thus far made a thorough systematization of methods difficult. In this survey,
we review over 300 works with a focus on inner interpretability tools. We
introduce a taxonomy that classifies methods by what part of the network they
help to explain (weights, neurons, subnetworks, or latent representations) and
whether they are implemented during (intrinsic) or after (post hoc) training.
To our knowledge, we are also the first to survey a number of connections
between interpretability research and work in adversarial robustness, continual
learning, modularity, network compression, and the study of the human visual
system. We discuss key challenges and argue that the status quo in
interpretability research is largely unproductive. Finally, we highlight the
importance of future work that emphasizes diagnostics, debugging, adversaries,
and benchmarking in order to make interpretability tools more useful to
engineers in practical applications.
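To make one branch of the taxonomy concrete, here is a small sketch of a post-hoc, latent-representation technique: a linear probe trained on frozen activations to test whether a concept is linearly decodable. This is not the survey's code; the activations and concept labels are synthetic stand-ins for a real hidden layer.

```python
# Minimal probing sketch (post hoc, representation level), on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend these are 512-d activations of a hidden layer on 2,000 inputs,
# with a known binary concept label for each input (all synthetic).
n, d = 2000, 512
concept = rng.integers(0, 2, size=n)
direction = rng.normal(size=d)  # a hidden "concept direction" in the layer
activations = rng.normal(size=(n, d)) + np.outer(concept, direction) * 0.5

# Post hoc: the network stays frozen; only the linear probe is trained.
train_x, test_x, train_y, test_y = train_test_split(
    activations, concept, test_size=0.25, random_state=0
)
probe = LogisticRegression(max_iter=1000).fit(train_x, train_y)
print(f"probe accuracy: {probe.score(test_x, test_y):.2f}")

# High probe accuracy suggests the concept is linearly represented in the
# layer; near-chance accuracy suggests it is not (at least not linearly).
```

In the survey's terms, this method explains latent representations rather than weights, neurons, or subnetworks, and it is post hoc because it is applied to an already-trained network.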