Debated backpropagation
Dialogue has long been used in human society to explain seemingly opaque concepts. In this paper we focus on how to explain the training of neural networks in a way that entertains as well as informs. We present a multi-agent, argumentation-based dialogue system that generates human-understandable dialogue to explain backpropagation. The system incorporates a model of agent personality and introduces social elements between agents to produce characterful discussion. Natural language templates are used to render utterances in English.
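The template-based rendering the abstract mentions can be sketched minimally as follows; the dialogue moves, template strings, and the `render` helper are all illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch: rendering agent dialogue moves into English via
# natural-language templates. Move names and wording are assumptions.

TEMPLATES = {
    "challenge": "{speaker}: I disagree, {hearer}. Why should {claim} hold?",
    "support":   "{speaker}: Note that {claim} follows from the chain rule.",
}

def render(move: str, speaker: str, hearer: str, claim: str) -> str:
    """Fill the template for a dialogue move with the argument's content."""
    return TEMPLATES[move].format(speaker=speaker, hearer=hearer, claim=claim)

print(render("challenge", "Ada", "Bob", "the gradient w.r.t. W1"))
```

Separating dialogue moves (what an agent argues) from surface templates (how it is said) is what lets the same argumentative exchange be rendered with different agent personalities.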
Multi-User MultiWOZ: Task-Oriented Dialogues among Multiple Users
While most task-oriented dialogues assume conversations between the agent and
one user at a time, dialogue systems are increasingly expected to communicate
with multiple users simultaneously who make decisions collaboratively. To
facilitate development of such systems, we release the Multi-User MultiWOZ
dataset: task-oriented dialogues among two users and one agent. To collect this
dataset, each user utterance from MultiWOZ 2.2 was replaced with a small chat
between two users that is semantically and pragmatically consistent with the
original user utterance, thus resulting in the same dialogue state and system
response. These dialogues reflect interesting dynamics of collaborative
decision-making in task-oriented scenarios, e.g., social chatter and
deliberation. Supported by this data, we propose the novel task of multi-user
contextual query rewriting: to rewrite a task-oriented chat between two users
as a concise task-oriented query that retains only task-relevant information
and that is directly consumable by the dialogue system. We demonstrate that in
multi-user dialogues, using predicted rewrites substantially improves dialogue
state tracking without modifying existing dialogue systems that are trained for
single-user dialogues. Further, this method surpasses training a medium-sized
model directly on multi-user dialogues and generalizes to unseen domains.
Comment: To Appear in EMNLP-Findings 202
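The rewriting task described above can be illustrated with a toy input/output pair; the example chat, the gold rewrite, and the trivial concatenation baseline below are assumptions for illustration, not samples from the dataset.

```python
# Illustrative shape of multi-user contextual query rewriting: a short
# chat between two users becomes one concise, task-relevant query that a
# single-user dialogue system can consume directly.

chat = [
    ("User A", "Should we take the train or just book a taxi?"),
    ("User B", "Taxi is easier with the luggage, honestly."),
    ("User A", "Fine. Let's do that, leaving from the hotel at 9am."),
]

gold_rewrite = "Book a taxi from the hotel leaving at 9am."

def naive_baseline(turns):
    """Concatenate all turns -- keeps the social chatter and deliberation
    that the learned rewriter is supposed to strip out."""
    return " ".join(text for _, text in turns)

print(naive_baseline(chat))  # verbose, contains chatter
print(gold_rewrite)          # concise, task-relevant only
```

The contrast between the two printed strings is the point of the task: the rewrite preserves the dialogue-state-relevant facts (taxi, hotel, 9am) while dropping deliberation.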
A Virtual Conversational Agent for Teens with Autism: Experimental Results and Design Lessons
We present the design of an online social skills development interface for
teenagers with autism spectrum disorder (ASD). The interface is intended to
enable private conversation practice anywhere, anytime using a web-browser.
Users converse informally with a virtual agent, receiving feedback on nonverbal
cues in real-time, and summary feedback. The prototype was developed in
consultation with an expert UX designer, two psychologists, and a pediatrician.
Using the data from 47 individuals, feedback and dialogue generation were
automated using a hidden Markov model and a schema-driven dialogue manager
capable of handling multi-topic conversations. We conducted a study with nine
high-functioning ASD teenagers. Through a thematic analysis of post-experiment
interviews, we identified several key design considerations, notably: 1) Users
should be fully briefed at the outset about the purpose and limitations of the
system, to avoid unrealistic expectations. 2) An interface should incorporate
positive acknowledgment of behavior change. 3) Realistic appearance of a
virtual agent and responsiveness are important in engaging users. 4)
Conversation personalization, for instance prompting laconic users for more
input and reciprocal questions, would help the teenagers engage for longer
and would increase the system's utility.
Dialoguing DeLP-based agents
A multi-agent system is made up of multiple interacting autonomous agents. It can be viewed as a society in which each agent performs its activity, cooperating to achieve common goals or competing for them. Agents establish dialogues via some kind of agent-communication language, under some communication protocol. We think argumentation is suitable for modeling several kinds of dialogues in multi-agent systems. In this paper we define dialogues and persuasion dialogues between two agents that use Defeasible Logic Programs as their knowledge bases, together with an algorithm defining how such a dialogue may be engaged. We also indicate how an agent could use an opponent's information for its own benefit.
Track: Agents. Red de Universidades con Carreras en Informática (RedUNCI)
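A persuasion dialogue of the kind the abstract describes can be sketched as an alternating exchange of arguments and counterarguments; the attack relation and argument labels below are assumptions standing in for the Defeasible Logic Programs the paper actually uses.

```python
# Minimal sketch of a two-agent persuasion exchange: each move may defeat
# the currently standing argument, and the proponent wins if one of its
# own arguments is left standing. The attack table is illustrative only.

ATTACKS = {        # move -> the argument it defeats
    "B1": "A1",    # opponent's B1 attacks proponent's A1
    "A2": "B1",    # proponent's A2 attacks B1
}

def persuasion_dialogue(initial, moves):
    """Return True if the proponent's initial argument survives the exchange."""
    standing = initial                     # argument currently on top
    for move in moves:
        if ATTACKS.get(move) == standing:  # move defeats the standing argument
            standing = move
    return standing.startswith("A")        # proponent's arguments are A*

print(persuasion_dialogue("A1", ["B1", "A2"]))  # counter-counterargument wins
```

Real DeLP dialogues additionally check that each move is warranted by the agent's defeasible program; this sketch keeps only the turn-taking and defeat structure.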
Unleashing Infinite-Length Input Capacity for Large-scale Language Models with Self-Controlled Memory System
Large-scale Language Models (LLMs) are constrained by their inability to
process lengthy inputs. To address this limitation, we propose the
Self-Controlled Memory (SCM) system to unleash infinite-length input capacity
for large-scale language models. Our SCM system is composed of three key
modules: the language model agent, the memory stream, and the memory
controller. The language model agent iteratively processes ultra-long inputs
and stores all historical information in the memory stream. The memory
controller provides the agent with both long-term memory (archived memory) and
short-term memory (flash memory) to generate precise and coherent responses.
The controller determines which memories from archived memory should be
activated and how to incorporate them into the model input. Our SCM system can
be integrated with any LLMs to enable them to process ultra-long texts without
any modification or fine-tuning. Experimental results show that our SCM system
enables LLMs, which are not optimized for multi-turn dialogue, to achieve
multi-turn dialogue capabilities that are comparable to ChatGPT, and to
outperform ChatGPT in scenarios involving ultra-long document summarization or
long-term conversations. Additionally, we will supply a test set, covering
common long-text input scenarios, for evaluating the abilities of LLMs in
processing long documents (code: https://github.com/wbbeyourself/SCM4LLMs).
Comment: Working in progress
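The three-module decomposition described above can be sketched as follows; the module boundaries (agent, memory stream, memory controller) follow the abstract, but every class and method name, the keyword-overlap activation rule, and the stubbed model are assumptions for illustration.

```python
# Minimal sketch of the SCM loop: the agent iteratively processes input,
# the stream archives all history, and the controller activates relevant
# long-term memories plus recent short-term (flash) memory.

class MemoryStream:
    """Stores every processed turn as historical information."""
    def __init__(self):
        self.archive = []              # long-term (archived) memory

    def add(self, turn):
        self.archive.append(turn)

class MemoryController:
    """Decides which archived memories to activate for the next input."""
    def __init__(self, flash_size=2):
        self.flash_size = flash_size

    def build_context(self, stream, query):
        flash = stream.archive[-self.flash_size:]            # short-term memory
        older = stream.archive[:-self.flash_size] if flash else []
        activated = [m for m in older
                     if any(w in m for w in query.split())]  # naive activation
        return "\n".join(activated + flash + [query])

def agent_step(llm, stream, controller, user_input):
    """One iteration: build context, call the model, archive the turn."""
    context = controller.build_context(stream, user_input)
    reply = llm(context)
    stream.add(user_input)
    stream.add(reply)
    return reply

# Usage with a stub "model" that just acknowledges the last context line.
echo = lambda ctx: "ack: " + ctx.splitlines()[-1]
stream, ctrl = MemoryStream(), MemoryController()
print(agent_step(echo, stream, ctrl, "Summarize chapter one."))
```

Because the controller, not the model, selects what enters the prompt, the same loop works with any LLM unchanged, which is the integration property the abstract claims.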