The Quantum Frontier
The success of the abstract model of computation, in terms of bits, logical
operations, programming language constructs, and the like, makes it easy to
forget that computation is a physical process. Our cherished notions of
computation and information are grounded in classical mechanics, but the
physics underlying our world is quantum. In the early 1980s researchers began to
ask how computation would change if we adopted a quantum mechanical, instead of
a classical mechanical, view of computation. Slowly, a new picture of
computation arose, one that gave rise to a variety of faster algorithms, novel
cryptographic mechanisms, and alternative methods of communication. Small
quantum information processing devices have been built, and efforts are
underway to build larger ones. Even apart from the existence of these devices,
the quantum view on information processing has provided significant insight
into the nature of computation and information, and a deeper understanding of
the physics of our universe and its connections with computation.
We start by describing aspects of quantum mechanics that are at the heart of
a quantum view of information processing. We give our own idiosyncratic view of
a number of these topics in the hopes of correcting common misconceptions and
highlighting aspects that are often overlooked. A number of the phenomena
described were initially viewed as oddities of quantum mechanics. It was
quantum information processing, first quantum cryptography and then, more
dramatically, quantum computing, that turned the tables and showed that these
oddities could be put to practical effect. It is these applications we describe
next. We conclude with a section describing some of the many questions left for
future work, especially the mysteries surrounding where the power of quantum
information ultimately comes from.
Comment: Invited book chapter for Computation for Humanity - Information
Technology to Advance Society, to be published by CRC Press.
Rethinking AI Explainability and Plausibility
Setting proper evaluation objectives for explainable artificial intelligence
(XAI) is vital for making XAI algorithms follow human communication norms,
support human reasoning processes, and fulfill human needs for AI explanations.
In this article, we examine explanation plausibility, which is the most
pervasive human-grounded concept in XAI evaluation. Plausibility measures how
reasonable the machine explanation is compared to the human explanation.
Plausibility has been conventionally formulated as an important evaluation
objective for AI explainability tasks. We argue against this idea, and show how
optimizing and evaluating XAI for plausibility is sometimes harmful and always
ineffective for achieving model understandability, transparency, and
trustworthiness. Specifically, evaluating XAI algorithms for plausibility
regularizes the machine explanation to express exactly the same content as the
human explanation, which deviates from the fundamental motivation for humans to
explain: expressing similar or alternative reasoning trajectories while
conforming to understandable forms or language. Optimizing XAI for plausibility
regardless of the correctness of the model's decision also jeopardizes model
trustworthiness, since doing so breaks an important assumption in human-human
explanation, namely that plausible explanations typically imply correct
decisions; violating this assumption eventually leads to either undertrust in
or overtrust of AI models. Instead of being the end goal in XAI evaluation,
plausibility can serve as an intermediate computational proxy for the human
process of interpreting explanations to optimize the utility of XAI. We further
highlight the importance of explainability-specific evaluation objectives by
differentiating the AI explanation task from the object localization task.
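In practice, plausibility is often operationalized as the overlap between a machine saliency map and a human annotation of the relevant region. A minimal sketch of one such score, intersection-over-union between the thresholded machine explanation and a binary human mask (the function name and threshold are illustrative, not taken from the article):

```python
import numpy as np

def plausibility_iou(machine_saliency, human_mask, threshold=0.5):
    """Intersection-over-union between a thresholded machine saliency map
    and a binary human annotation mask -- one common way plausibility is
    scored in XAI evaluation (illustrative sketch, not the article's metric)."""
    machine_mask = machine_saliency >= threshold
    human_mask = human_mask.astype(bool)
    intersection = np.logical_and(machine_mask, human_mask).sum()
    union = np.logical_or(machine_mask, human_mask).sum()
    # Empty union (no salient region on either side) scores zero.
    return intersection / union if union > 0 else 0.0

machine = np.array([[0.9, 0.1],
                    [0.8, 0.2]])
human = np.array([[1, 0],
                  [1, 1]])
print(plausibility_iou(machine, human))
```

Note that a high score under such a metric rewards explanations that mimic the human annotation exactly, which is precisely the regularization effect the article argues against when plausibility is treated as the end goal rather than an intermediate proxy.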
Explainable and Advisable Learning for Self-driving Vehicles
Deep neural perception and control networks are likely to be a key component of self-driving vehicles. These models need to be explainable - they should provide easy-to-interpret rationales for their behavior - so that passengers, insurance companies, law enforcement, developers, etc., can understand what triggered a particular behavior. Explanations may be triggered by the neural controller, namely introspective explanations, or informed by the neural controller's output, namely rationalizations. Our work has focused on the challenge of generating introspective explanations of deep models for self-driving vehicles.
In Chapter 3, we begin by exploring the use of visual explanations. These explanations take the form of real-time highlighted regions of an image that causally influence the network's output (steering control). In the first stage, we use a visual attention model to train a convolutional network end-to-end from images to steering angle. The attention model highlights image regions that potentially influence the network's output. Some of these are true influences, but some are spurious. We then apply a causal filtering step to determine which input regions actually influence the output. This produces more succinct visual explanations and more accurately exposes the network's behavior.
In Chapter 4, we add an attention-based video-to-text model to produce textual explanations of model actions, e.g. "the car slows down because the road is wet". The attention maps of the controller and the explanation model are aligned so that explanations are grounded in the parts of the scene that mattered to the controller. We explore two approaches to attention alignment: strong and weak alignment. These explainable systems represent an externalization of tacit knowledge. The network's opaque reasoning is simplified to a situation-specific dependence on a visible object in the image. This makes them brittle and potentially unsafe in situations that do not match the training data.
In Chapter 5, we propose to address this issue by augmenting training data with natural language advice from a human. Advice includes guidance about what to do and where to attend. We present the first step toward advice-giving, where we train an end-to-end vehicle controller that accepts advice. The controller adapts the way it attends to the scene (visual attention) and the control (steering and speed). Further, in Chapter 6, we propose a new approach that learns vehicle control with the help of long-term (global) human advice. Specifically, our system learns to summarize its visual observations in natural language, predict an appropriate action response (e.g. "I see a pedestrian crossing, so I stop"), and predict the controls accordingly.
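The attention alignment described above can be sketched as a divergence penalty between the controller's attention map and the explanation model's attention map, so that explanations are pushed toward the regions the controller actually used. The following is an illustrative "weak alignment" style loss, assuming both maps are non-negative arrays over the same spatial grid; it is not the dissertation's exact formulation:

```python
import numpy as np

def attention_alignment_loss(controller_attn, explainer_attn, eps=1e-8):
    """KL divergence from the controller's attention distribution to the
    explanation model's, encouraging explanations grounded in the scene
    regions that mattered to the controller. Illustrative sketch only;
    the actual strong/weak alignment objectives may differ."""
    # Normalize each attention map into a probability distribution.
    p = controller_attn / (controller_attn.sum() + eps)
    q = explainer_attn / (explainer_attn.sum() + eps)
    # KL(p || q); eps guards against log(0) and division by zero.
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

controller = np.array([[1.0, 0.0]])   # controller attends to the left region
explainer = np.array([[0.5, 0.5]])    # explanation attends uniformly
print(attention_alignment_loss(controller, explainer))
```

Minimizing such a term during training pulls the explanation model's attention toward the controller's, which is the grounding property the chapter aims for.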