Visualizations for an Explainable Planning Agent
In this paper, we report on the visualization capabilities of an Explainable
AI Planning (XAIP) agent that can support human-in-the-loop decision making.
Imposing transparency and explainability requirements on such agents is
especially important in order to establish trust and common ground with the
end-to-end automated planning system. Visualizing the agent's internal
decision-making processes is a crucial step towards achieving this. This may
include externalizing the "brain" of the agent -- starting from its sensory
inputs, to progressively higher order decisions made by it in order to drive
its planning components. We also show how the planner can bootstrap on the
latest techniques in explainable planning to cast plan visualization as a plan
explanation problem, and thus provide concise model-based visualization of its
plans. We demonstrate these functionalities in the context of the automated
planning components of a smart assistant in an instrumented meeting space.
Comment: previously "Mr. Jones -- Towards a Proactive Smart Room Orchestrator" (appeared in the AAAI 2017 Fall Symposium on Human-Agent Groups).
A debate dashboard to enhance on-line knowledge sharing
Purpose – Web 2.0 technologies have radically changed the way knowledge is created, managed and shared, improving productivity and accelerating innovation processes for enterprises. These technologies allow enterprises to produce knowledge, leverage collective intelligence and build social capital on a scale that was unimaginable a few years ago. In this paper we focus on a particular kind of web-based collaborative platform, known as argument mapping tools, and discuss the main barriers to their adoption. The literature shows that argument mapping tools provide both large and small-to-medium enterprises with several advantages, yet their adoption remains low. We therefore explore new technological solutions to support the adoption of argument mapping tools. In particular, we propose the design of a Debate Dashboard that provides visual feedback to support online deliberation. This visual feedback aims to compensate for the loss of information caused by the mediation of the technology. The Debate Dashboard is composed of a set of suitable visualization tools, selected on the basis of a literature review.
Design/methodology/approach – We conducted a literature review of existing visualization tools. Building on this review, we selected thirty visualization tools and classified them according to the kind of feedback they provide. We identify three classes of feedback: Community feedback (an identikit of the users), Interaction feedback (how users interact) and Absorption feedback (the generated content and its organization). We distilled the Debate Dashboard features from the results of a literature review of Web 2.0 tools for data visualization, from which we selected six visualization tools. We treat these selected tools as a starting point: our aim is to improve them by adding further features and functions that make them more effective in providing feedback.
Originality/value – Our paper enriches the debate about computer-mediated conversation and visualization tools. We propose a Dashboard prototype that augments collaborative knowledge mapping tools by providing visual feedback on conversations. The Dashboard provides three different kinds of feedback at the same time: details of the participants in the conversation, the interaction processes, and the generated content. This increases the benefits and reduces the costs of using mapping tools. A further novelty is the integration of visualization tools with mapping tools: until now, such tools have been used only to visualize data contained in forums (such as Usenet or Slashdot), chats or email archives.
Practical implications – The Dashboard provides feedback about participants, interaction processes and generated content, thus supporting the adoption of mapping tools as technologies that foster knowledge sharing among remote workers, customers and suppliers.
The integration of the Debate Dashboard with common online argument mapping tools aims to enable the following advantages:
1. Reduction of misunderstanding;
2. Reduction of cognitive effort required to use argument mapping tools;
3. Improvement of the exploration and analysis of the maps - the Debate Dashboard feedback improves the usability of the object (the map), allowing users to join the conversation in the right place.
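The three feedback classes identified in the review could be modeled as a small taxonomy; the following sketch uses illustrative widget names that are assumptions, not examples taken from the paper:

```python
from dataclasses import dataclass
from enum import Enum

# The three feedback classes distinguished by the Debate Dashboard.
class Feedback(Enum):
    COMMUNITY = "identikit of users"
    INTERACTION = "how users interact"
    ABSORPTION = "generated content and its organization"

@dataclass
class Widget:
    """A visualization widget and the class of feedback it provides."""
    name: str
    feedback: Feedback

# Hypothetical widgets; names are placeholders for the six selected tools.
dashboard = [
    Widget("participant profiles", Feedback.COMMUNITY),
    Widget("reply network graph", Feedback.INTERACTION),
    Widget("topic map of contributions", Feedback.ABSORPTION),
]

# Group widgets by feedback class, mirroring the classification step.
by_class = {}
for w in dashboard:
    by_class.setdefault(w.feedback, []).append(w.name)
```

A classification like this makes the dashboard's coverage explicit: every feedback class should map to at least one widget.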
Explainable and Advisable Learning for Self-driving Vehicles
Deep neural perception and control networks are likely to be a key component of self-driving vehicles. These models need to be explainable - they should provide easy-to-interpret rationales for their behavior - so that passengers, insurance companies, law enforcement, developers, etc., can understand what triggered a particular behavior. Explanations may be triggered by the neural controller, namely introspective explanations, or informed by the neural controller's output, namely rationalizations. Our work has focused on the challenge of generating introspective explanations of deep models for self-driving vehicles. In Chapter 3, we begin by exploring the use of visual explanations. These explanations take the form of real-time highlighted regions of an image that causally influence the network's output (steering control). In the first stage, we use a visual attention model to train a convolutional network end-to-end from images to steering angle. The attention model highlights image regions that potentially influence the network's output. Some of these are true influences, but some are spurious. We then apply a causal filtering step to determine which input regions actually influence the output. This produces more succinct visual explanations and more accurately exposes the network's behavior. In Chapter 4, we add an attention-based video-to-text model to produce textual explanations of model actions, e.g. "the car slows down because the road is wet". The attention maps of the controller and the explanation model are aligned so that explanations are grounded in the parts of the scene that mattered to the controller. We explore two approaches to attention alignment: strong and weak alignment. These explainable systems represent an externalization of tacit knowledge. The network's opaque reasoning is simplified to a situation-specific dependence on a visible object in the image. This makes them brittle and potentially unsafe in situations that do not match training data.
In Chapter 5, we propose to address this issue by augmenting training data with natural language advice from a human. Advice includes guidance about what to do and where to attend. We present the first step toward advice-giving, where we train an end-to-end vehicle controller that accepts advice. The controller adapts the way it attends to the scene (visual attention) and the control (steering and speed). Further, in Chapter 6, we propose a new approach that learns vehicle control with the help of long-term (global) human advice. Specifically, our system learns to summarize its visual observations in natural language, predict an appropriate action response (e.g. "I see a pedestrian crossing, so I stop"), and predict the controls accordingly.
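As a rough illustration of the mechanism described above, the following sketch (plain NumPy, with random weights standing in for a trained convolutional backbone and controller - all names and shapes are assumptions, not the authors' architecture) computes a spatial attention map over a feature grid, a weighted steering prediction, and a simple ablation-based check in the spirit of the causal filtering step:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend_and_predict(features, w_attn, w_out):
    """features: (H, W, C) feature map; w_attn, w_out: (C,) weight vectors.
    Returns (steering_angle, attention_map)."""
    h, w, c = features.shape
    flat = features.reshape(h * w, c)
    # Attention logit per spatial cell, normalized over the grid.
    alpha = softmax(flat @ w_attn)           # (H*W,), sums to 1
    context = alpha @ flat                   # attention-weighted feature, (C,)
    steering = float(context @ w_out)        # scalar steering output
    return steering, alpha.reshape(h, w)

def causal_influence(features, w_attn, w_out, i, j):
    """Ablate cell (i, j) and measure the change in the output.
    A near-zero change suggests the attention there was spurious."""
    base, _ = attend_and_predict(features, w_attn, w_out)
    masked = features.copy()
    masked[i, j] = 0.0
    ablated, _ = attend_and_predict(masked, w_attn, w_out)
    return abs(base - ablated)

rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 4, 8))       # stand-in for conv features
w_a, w_o = rng.standard_normal(8), rng.standard_normal(8)
angle, attn = attend_and_predict(feats, w_a, w_o)
# attn sums to 1 over the 4x4 grid; high-weight cells mark candidate regions
```

In the actual systems such attention is learned jointly with the controller; the ablation here only hints at how causal filtering can separate true influences from spurious ones.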
Visualizing practical knowledge: The Haughton-Mars Project
To improve how we envision knowledge, we must improve our ability to see knowledge in everyday life. That is, visualization is concerned not only with displaying facts and theories, but also with finding ways to express and relate tacit understanding. Such knowledge, although often referred to as "common," is not necessarily shared and may be distributed socially in choreographies for working together - in the manner that a chef and a maître d'hôtel, who obviously possess very different skills, coordinate their work. Furthermore, non-verbal concepts cannot in principle be inventoried. Reifying practical knowledge is not a process of converting the implicit into the explicit, but of pointing to what we know, showing its manifestations in our everyday life. To this end, I illustrate the study and reification of practical knowledge by examining the activities of a scientific expedition in the Canadian Arctic - a group of scientists preparing for a mission to Mars.
Uncertainty in phylogenetic tree estimates
Estimating phylogenetic trees is an important problem in evolutionary
biology, environmental policy and medicine. Although trees are estimated, their
uncertainties are discarded by mathematicians working in tree space. Here we
explicitly model the multivariate uncertainty of tree estimates. We consider
both the cases where uncertainty information arises extrinsically (through
covariate information) and intrinsically (through the tree estimates
themselves). The importance of accounting for tree uncertainty in tree space is
demonstrated in two case studies. In the first instance, differences between
gene trees are small relative to their uncertainties, while in the second, the
differences are relatively large. Our main goal is visualization of tree
uncertainty, and we demonstrate advantages of our method with respect to
reproducibility, speed and preservation of topological differences compared to
visualization based on multidimensional scaling. The proposal highlights that
phylogenetic trees are estimated in an extremely high-dimensional space,
resulting in uncertainty information that cannot be discarded. Most
importantly, it is a method that allows biologists to diagnose whether
differences between gene trees are biologically meaningful, or due to
uncertainty in estimation.
Comment: final version accepted to the Journal of Computational and Graphical Statistics.
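For context, the multidimensional-scaling baseline the abstract compares against can be sketched in a few lines of NumPy; the distance matrix below is a synthetic placeholder, not real gene-tree data:

```python
import numpy as np

def classical_mds(D, k=2):
    """Embed a symmetric distance matrix D into k dimensions via classical MDS."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:k]           # top-k eigenpairs
    # Coordinates: eigenvectors scaled by sqrt of (non-negative) eigenvalues.
    return vecs[:, idx] * np.sqrt(np.clip(vals[idx], 0.0, None))

# Toy pairwise distances between four "trees": two tight clusters far apart.
D = np.array([[0, 1, 4, 4],
              [1, 0, 4, 4],
              [4, 4, 0, 1],
              [4, 4, 1, 0]], dtype=float)
coords = classical_mds(D)   # 2-D point per tree; uncertainty is discarded
```

Each tree becomes a single point, which is exactly the limitation the paper targets: the embedding preserves (approximate) distances but carries no uncertainty information about each estimate.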