Conversational Exploratory Search via Interactive Storytelling
Conversational interfaces are likely to become a more efficient, intuitive, and
engaging way of interacting with computers than today's text- or touch-based
interfaces. Current research efforts concerning conversational interfaces focus
primarily on question answering functionality, thereby neglecting support for
search activities beyond targeted information lookup. Users engage in
exploratory search when they are unfamiliar with the domain of their goal,
unsure about the ways to achieve their goals, or unsure about their goals in
the first place. Exploratory search is often supported by approaches from
information visualization. However, such approaches cannot be directly
translated to the setting of conversational search.
In this paper we investigate the affordances of interactive storytelling as a
tool to enable exploratory search within the framework of a conversational
interface. Interactive storytelling provides a way to navigate a document
collection at the pace and in the order a user prefers. In our vision, interactive
storytelling is to be coupled with a dialogue-based system that provides verbal
explanations and responsive design. We discuss the challenges and sketch the
research agenda required to bring this vision to life.
Comment: Accepted at ICTIR'17 Workshop on Search-Oriented Conversational AI (SCAI 2017)
Visualizations for an Explainable Planning Agent
In this paper, we report on the visualization capabilities of an Explainable
AI Planning (XAIP) agent that can support human-in-the-loop decision making.
Imposing transparency and explainability requirements on such agents is
especially important in order to establish trust and common ground with the
end-to-end automated planning system. Visualizing the agent's internal
decision-making processes is a crucial step towards achieving this. This may
include externalizing the "brain" of the agent -- from its sensory
inputs to the progressively higher-order decisions it makes in order to drive
its planning components. We also show how the planner can bootstrap on the
latest techniques in explainable planning to cast plan visualization as a plan
explanation problem, and thus provide concise model-based visualization of its
plans. We demonstrate these functionalities in the context of the automated
planning components of a smart assistant in an instrumented meeting space.
Comment: PREVIOUSLY Mr. Jones -- Towards a Proactive Smart Room Orchestrator (appeared in AAAI 2017 Fall Symposium on Human-Agent Groups)
Guidelines for annotating the LUNA corpus with frame information
This document defines the annotation workflow aimed at adding frame information to the LUNA corpus of conversational speech. In particular, it details both the corpus pre-processing steps and the annotation process proper, giving hints on how to choose the frame and frame element labels. In addition, descriptions of 20 new domain-specific and language-specific frames are reported. To our knowledge, this is the first attempt to adapt the frame paradigm to dialogs and, at the same time, to define new frames and frame elements for the specific domain of software/hardware assistance. The technical report is structured as follows: Section 2 gives an overview of the FrameNet project, while Section 3 introduces the LUNA project and the annotation framework for the Italian dialogs. Section 4 details the annotation workflow, including the format preparation of the dialog files and the annotation strategy. In Section 5 we discuss the main issues of annotating frame information in dialogs and describe how the standard annotation procedure was changed in order to address them. Finally, the 20 newly introduced frames are reported in Section 6.
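To make the idea of adding frame information to dialog turns concrete, the sketch below shows one possible way such annotations could be represented, loosely following FrameNet conventions (a frame evoked by a target lexical unit, plus its frame elements). The class and field names, the example frame, and the sample utterance are illustrative assumptions only and do not reflect the actual LUNA annotation format.

from dataclasses import dataclass, field
from typing import List

# Illustrative sketch only: not the actual LUNA annotation schema.
@dataclass
class FrameElement:
    name: str   # frame element label, e.g. "Object"
    span: str   # text span that fills the frame element

@dataclass
class FrameAnnotation:
    frame: str                              # frame label, e.g. "Being_operational"
    target: str                             # lexical unit evoking the frame
    elements: List[FrameElement] = field(default_factory=list)

@dataclass
class DialogTurn:
    speaker: str
    text: str
    frames: List[FrameAnnotation] = field(default_factory=list)

# Hypothetical example: one caller turn in a hardware-assistance dialog
turn = DialogTurn(
    speaker="caller",
    text="The printer does not work anymore.",
    frames=[FrameAnnotation(
        frame="Being_operational",
        target="work",
        elements=[FrameElement(name="Object", span="The printer")],
    )],
)

A dialog would then simply be a list of such turns, with the frame and frame element inventories drawn from FrameNet plus the newly defined domain-specific frames.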
Agents for educational games and simulations
This book consists mainly of revised papers that were presented at the Agents for Educational Games and Simulations (AEGS) workshop held on May 2, 2011, as part of the Autonomous Agents and MultiAgent Systems (AAMAS) conference in Taipei, Taiwan. The 12 full papers presented were carefully reviewed and selected from various submissions. The papers are organized in topical sections on middleware applications, dialogues and learning, adaptation and convergence, and agent applications.
uC: Ubiquitous Collaboration Platform for Multimodal Team Interaction Support
A human-centered computing platform that improves teamwork and transforms the “human-computer interaction experience” for distributed teams is presented. This Ubiquitous Collaboration, or uC (“you see”), platform's objective is to transform distributed teamwork (i.e., work occurring when teams of workers and learners are geographically dispersed and often interacting at different times). It achieves this goal through a multimodal team interaction interface realized through a reconfigurable open architecture. The approach taken is to integrate: (1) an intuitive speech- and video-centric multimodal interface to augment more conventional methods (e.g., mouse, stylus, and touch), (2) an open and reconfigurable architecture supporting information gathering, and (3) a machine-intelligent approach to the analysis and management of heterogeneous live and stored sensor data to support collaboration. The system will transform how teams of people interact with computers by drawing on both the virtual and physical environments.