Lilia, A Showcase for Fast Bootstrap of Conversation-Like Dialogues Based on a Goal-Oriented System
Recently, many works have proposed to cast human-machine interaction as a sentence-generation problem: neural network models can learn to generate a probable sentence based on the user's statement along with a partial view of the dialogue history. While appealing to some extent, these approaches require huge training sets of general-purpose data and lack a principled way to intertwine language generation with information retrieval from back-end resources to fuel the dialogue with up-to-date and precise knowledge. As a practical alternative, in this paper we present Lilia, a showcase for the fast bootstrap of conversation-like dialogues based on a goal-oriented system. First, the features of goal-oriented and conversational systems are compared; then a conversion process for the fast bootstrap of a new system is described, finalised with on-line training of the system's main components. Lilia is dedicated to a chitchat task in which speakers exchange viewpoints on a displayed image while collaboratively trying to infer its author's intention. Evaluations with user trials showed its efficiency in a realistic setup.
Reinforcement adaptation of an attention-based neural natural language generator for spoken dialogue systems
Following recent propositions to handle natural language generation in spoken dialogue systems with long short-term memory recurrent neural network models (Wen et al., 2016), we first investigate a variant thereof with the objective of better integrating the attention subnetwork. Our next objective is to propose and evaluate a framework to adapt the NLG module online through direct interactions with the users. The basic way to do so is to ask the user to utter an alternative sentence expressing a particular dialogue act; the system then has to decide between using an automatic transcription of it or asking for a manual one. For this decision, a reinforcement learning approach based on an adversarial bandit scheme is adopted. We show that by appropriately defining the rewards as a linear combination of the expected payoffs and the costs of acquiring the new data provided by the user, the system design can balance improving the system's performance towards a better match with the user's preferences against the burden associated with it. The actual benefits of this system are then assessed in a human evaluation, showing that the addition of more diverse utterances allows the system to produce sentences that are more satisfying for the user.
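To make the arbitration concrete, here is a minimal sketch of such a scheme, assuming Exp3 as the adversarial-bandit algorithm; the payoff and cost values and the linear-combination weights below are illustrative assumptions, not the paper's actual figures:

```python
import math
import random

class Exp3:
    """Adversarial bandit (Exp3) deciding, for each user-provided
    alternative utterance, whether to trust the automatic (ASR)
    transcription or to request a manual one."""

    def __init__(self, n_arms=2, gamma=0.1):
        self.gamma = gamma
        self.weights = [1.0] * n_arms

    def _probs(self):
        total = sum(self.weights)
        k = len(self.weights)
        return [(1 - self.gamma) * w / total + self.gamma / k
                for w in self.weights]

    def select(self):
        probs = self._probs()
        arm = random.choices(range(len(probs)), weights=probs)[0]
        return arm, probs

    def update(self, arm, reward, probs):
        # Importance-weighted reward estimate; reward assumed in [0, 1].
        x_hat = reward / probs[arm]
        self.weights[arm] *= math.exp(self.gamma * x_hat / len(self.weights))

# Illustrative reward: linear combination of the expected payoff of the
# new training sentence and the cost of acquiring it (assumed values).
ARMS = ("asr_transcription", "manual_transcription")
PAYOFF = {"asr_transcription": 0.6, "manual_transcription": 0.9}  # assumed
COST = {"asr_transcription": 0.1, "manual_transcription": 0.5}    # assumed
ALPHA, BETA = 1.0, 1.0  # combination weights, also assumed

bandit = Exp3(n_arms=len(ARMS))
for _ in range(1000):
    arm, probs = bandit.select()
    name = ARMS[arm]
    reward = ALPHA * PAYOFF[name] - BETA * COST[name]
    reward = min(max(reward, 0.0), 1.0)  # clip into [0, 1] for Exp3
    bandit.update(arm, reward, probs)
```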
Data-efficient methods for dialogue systems
Conversational User Interfaces (CUIs) have become ubiquitous in everyday life, in consumer-focused products like Siri and Alexa as well as in more business-oriented customer-support automation solutions. Deep learning underlies many recent breakthroughs in dialogue systems, but it requires very large amounts of training data, often annotated by experts, which dramatically increases the cost of deploying such systems in production setups and reduces their flexibility as software products. Trained on smaller data, these methods end up severely lacking robustness to various phenomena of spoken language (e.g. disfluencies) and to out-of-domain input, and often have too little generalisation power to transfer to other tasks and domains.
In this thesis, we address the above issues by introducing a series of methods for bootstrapping robust dialogue systems from minimal data. Firstly, we study two orthogonal approaches to dialogue, a linguistically informed model (DyLan) and a machine learning-based one (MemN2N), from the data-efficiency perspective, i.e. their potential to generalise from minimal data and their robustness to natural spontaneous input. We outline the steps to obtain data-efficient solutions with either approach and proceed with the neural models for the rest of the thesis.
We then introduce the core contributions of this thesis, two data-efficient models for dialogue response generation: the Dialogue Knowledge Transfer Network (DiKTNet), based on transferable latent dialogue representations, and the Generative-Retrieval Transformer (GRTr), which combines response-generation logic with a retrieval mechanism as the fallback. GRTr ranked first at the Dialog System Technology Challenge 8 Fast Domain Adaptation task.
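As an illustration of the generate-then-fall-back-to-retrieval pattern that GRTr's description suggests, here is a minimal sketch; the confidence scoring, the threshold, and the toy string-similarity retriever are assumptions for illustration, not the model's actual components:

```python
import difflib
from dataclasses import dataclass

@dataclass
class Candidate:
    text: str
    score: float  # confidence in [0, 1], higher is better

def generate(context):
    """Stand-in for the transformer generator. A real system would score
    with e.g. length-normalised log-likelihood; here the score is faked."""
    confidence = 0.4 if "unusual" in context else 0.8
    return Candidate("I can help with that.", confidence)

def retrieve(context, corpus):
    """Toy retriever: return the stored response whose context is most
    similar to the current one (string similarity as a placeholder)."""
    best = max(corpus, key=lambda pair: difflib.SequenceMatcher(
        None, context, pair[0]).ratio())
    return Candidate(best[1], 1.0)

def respond(context, corpus, threshold=0.5):
    """Prefer the generated reply; fall back to retrieval when the
    generator is not confident enough (the fallback pattern)."""
    gen = generate(context)
    return gen.text if gen.score >= threshold else retrieve(context, corpus).text

corpus = [("book me a table for two", "Sure, for what time?"),
          ("what unusual dishes do you serve", "Our chef's specials change daily.")]
print(respond("any unusual dishes tonight?", corpus))
```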
Next, we address the problem of training robust neural models from minimal data. Specifically, we look at robustness to disfluencies and propose a multitask LSTM-based model for domain-general disfluency detection. We then go on to explore robustness to anomalous, or out-of-domain (OOD), input. We address this problem by (1) presenting Turn Dropout, a data-augmentation technique that facilitates training for anomalous input using only in-domain data, and (2) introducing VHCN and AE-HCN, autoencoder-augmented models for efficient training with Turn Dropout based on the Hybrid Code Networks (HCN) model family.
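A minimal sketch of the Turn Dropout idea as described, replacing random user turns in in-domain dialogues with a placeholder so the model learns a fallback behaviour for anomalous input; the placeholder token, fallback response, and dropout rate are assumed for illustration:

```python
import random

OOD_TOKEN = "<unk_turn>"  # placeholder standing in for anomalous input
OOD_RESPONSE = "Sorry, I didn't get that. Could you rephrase?"

def turn_dropout(dialogue, rate=0.15, rng=random):
    """Data augmentation: with probability `rate`, replace a user turn
    with an OOD placeholder and retarget the system turn to a fallback
    response, using only in-domain dialogues as the source."""
    augmented = []
    for user_turn, system_turn in dialogue:
        if rng.random() < rate:
            augmented.append((OOD_TOKEN, OOD_RESPONSE))
        else:
            augmented.append((user_turn, system_turn))
    return augmented

dialogue = [("hi", "Hello! How can I help?"),
            ("book a table for two", "For what time?"),
            ("seven pm", "Done, see you at 7pm.")]
print(turn_dropout(dialogue, rate=0.3))
```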
With all the above work addressing goal-oriented dialogue, our final contribution in this thesis focuses on social dialogue, where the main objective is maintaining a natural, coherent, and engaging conversation for as long as possible. We introduce a neural model for response ranking in social conversation used in Alana, the 3rd-place winner in the Amazon Alexa Prize 2017 and 2018. For our model, we employ a novel technique of predicting the dialogue length as the main ranking objective. We show that this approach matches the performance of its counterpart based on the conventional, human-rating-based objective, and surpasses it given more raw dialogue transcripts, thus reducing the dependence on costly and cumbersome dialogue annotations.
EPSRC project BABBLE (grant EP/M01553X/1)
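A minimal sketch of ranking by the dialogue-length objective; the length predictor below is a hypothetical stand-in (the thesis uses a neural model) kept deliberately simple to show only the ranking logic:

```python
def predicted_remaining_turns(context, candidate):
    """Hypothetical stand-in for the neural length predictor: a real
    model would regress the number of remaining turns from the dialogue
    context and the candidate response. Here, word overlap plus length
    serves purely as an illustrative proxy."""
    overlap = len(set(context.lower().split()) & set(candidate.lower().split()))
    return overlap + 0.1 * len(candidate.split())

def rank_responses(context, candidates):
    """Rank candidates by the dialogue-length objective: prefer the
    response predicted to keep the conversation going longest."""
    return sorted(candidates,
                  key=lambda c: predicted_remaining_turns(context, c),
                  reverse=True)

context = "I watched a great sci-fi film last night"
candidates = ["Nice.",
              "Which sci-fi film was it? I love that genre.",
              "The weather is lovely today."]
print(rank_responses(context, candidates)[0])
```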
Dialogue Agents 101: A Beginner's Guide to Critical Ingredients for Designing Effective Conversational Systems
Sharing ideas through communication with peers is the primary mode of human interaction. Consequently, extensive research has been conducted in the area of conversational AI, leading to an increase in the availability and diversity of conversational tasks, datasets, and methods. However, with numerous tasks being explored simultaneously, the current landscape of conversational AI has become fragmented, so designing a well-thought-out dialogue agent can pose significant challenges for a practitioner. To highlight the critical ingredients a practitioner needs to design a dialogue agent from scratch, the current study provides a comprehensive overview of the primary characteristics of a dialogue agent, the supporting tasks, their corresponding open-domain datasets, and the methods used to benchmark these datasets. We observe that different methods have been used to tackle distinct dialogue tasks. However, building separate models for each task is costly and does not exploit the correlation among the several tasks of a dialogue agent. As a result, recent trends suggest a shift towards building unified foundation models. To this end, we propose UNIT, a UNified dIalogue dataseT constructed from conversations in existing datasets for different dialogue tasks, capturing the nuances of each of them. We also examine the evaluation strategies used to measure the performance of dialogue agents and highlight the scope for future research in the area of conversational AI.
A Survey on Semantic Processing Techniques
Semantic processing is a fundamental research domain in computational linguistics. In the era of powerful pre-trained language models and large language models, the advancement of research in this domain appears to be decelerating. However, the study of semantics is multi-dimensional in linguistics, and the research depth and breadth of computational semantic processing can be largely improved with new technologies. In this survey, we analyzed five semantic processing tasks, namely word sense disambiguation, anaphora resolution, named entity recognition, concept extraction, and subjectivity detection. We study relevant theoretical research in these fields, advanced methods, and downstream applications. We connect the surveyed tasks with downstream applications because this may inspire future scholars to fuse these low-level semantic processing tasks with high-level natural language processing tasks. The review of theoretical research may also inspire new tasks and technologies in the semantic processing domain. Finally, we compare the different semantic processing techniques and summarize their technical trends, application trends, and future directions. (Published in Information Fusion, Volume 101, 2024, 101988, ISSN 1566-2535.)
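To make the first of the surveyed tasks concrete, here is a small word-sense-disambiguation example using NLTK's implementation of the classic Lesk algorithm (a standard baseline technique, not necessarily one of the methods covered by the survey); it assumes the WordNet data can be downloaded:

```python
import nltk
from nltk.wsd import lesk

nltk.download("wordnet", quiet=True)  # one-time WordNet download

# Disambiguate "bank" in two contexts with the classic Lesk algorithm.
for sentence in ("I deposited cash at the bank",
                 "We had a picnic on the river bank"):
    sense = lesk(sentence.split(), "bank")
    if sense is not None:
        print(f"{sentence!r} -> {sense.name()}: {sense.definition()}")
```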
Continually improving grounded natural language understanding through human-robot dialog
As robots become ubiquitous in homes and workplaces such as hospitals and factories, they must be able to communicate with humans. Several kinds of knowledge are required to understand and respond to a human's natural language commands and questions. If a person asks an assistant robot to "take me to Alice's office", the robot must know that Alice is a person who owns some unique office, and that "take me" means it should navigate there. Similarly, if a person requests "bring me the heavy, green mug", the robot must have accurate mental models of the physical concepts heavy, green, and mug. To avoid forcing humans to use key phrases or words robots already know, this thesis focuses on helping robots understand new language constructs through interactions with humans and with the world around them. To understand a command in natural language, a robot must first convert that command to an internal representation that it can reason with. Semantic parsing is a method for performing this conversion, and the target representation is often a semantic form expressed as predicate logic with lambda calculus. Traditional semantic parsing relies on hand-crafted resources from a human expert: an ontology of concepts, a lexicon connecting language to those concepts, and training examples of language with abstract meanings. One thrust of this thesis is to perform semantic parsing with sparse initial data. We use the conversations between a robot and human users to induce pairs of natural language utterances and the target semantic forms a robot discovers through its questions, reducing the annotation effort of creating training examples for parsing. We use this data to build more dialog-capable robots in new domains with much less expert human effort (Thomason et al., 2015; Padmakumar et al., 2017). The meanings of many language concepts are bound to the physical world. Understanding object properties and categories such as heavy, green, and mug requires interacting with and perceiving the physical world. Embodied robots can use manipulation capabilities, such as pushing, picking up, and dropping objects, to gather sensory data about them. This data can be used to understand non-visual concepts like heavy and empty (e.g. "get the empty carton of milk from the fridge") and to assist with concepts that have both visual and non-visual expression (e.g. tall things look big and also exert force sooner than short things when pressed down on). A second thrust of this thesis focuses on strategies for learning these concepts using multi-modal sensory information. We use human-in-the-loop learning to get labels connecting concept words to actual objects in the environment (Thomason et al., 2016, 2017). We also explore ways to tease out polysemy and synonymy in concept words (Thomason and Mooney, 2017), such as light, which can refer to a weight or a color, the latter sense being synonymous with pale. Additionally, pushing, picking up, and dropping objects to gather sensory information is prohibitively time-consuming, so we investigate strategies for using linguistic information and human input to expedite exploration when learning a new concept (Thomason et al., 2018). Finally, we build an integrated agent with both parsing and perception capabilities that learns from conversations with users to improve both components over time.
We demonstrate that parser learning from conversations (Thomason et al., 2015) can be combined with multi-modal perception (Thomason et al., 2016), using predicate-object labels gathered through opportunistic active learning (Thomason et al., 2017) during those conversations, to improve performance for understanding natural language commands from humans. Human users also qualitatively rate this integrated learning agent as more usable after it has improved from conversation-based learning.
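To illustrate the kind of target representation involved, here is a toy pattern-based parser producing lambda-calculus-style logical forms for the two example commands; the patterns and predicate names are invented for illustration and merely stand in for the parser induced from dialogue in the thesis:

```python
import re

# Toy semantic parser: map a command to a lambda-calculus-style logical
# form, in the spirit of the target representations described above.
PATTERNS = [
    (re.compile(r"take me to (\w+)'s office"),
     lambda m: f"walk(the(λx. office(x) ∧ possesses({m.group(1)}, x)))"),
    (re.compile(r"bring me the (\w+), (\w+) (\w+)"),
     lambda m: f"bring(the(λx. {m.group(1)}(x) ∧ {m.group(2)}(x) ∧ {m.group(3)}(x)), me)"),
]

def parse(command):
    for pattern, build in PATTERNS:
        m = pattern.search(command.lower())
        if m:
            return build(m)
    return None  # unseen construct: the thesis learns these from dialogue

print(parse("Take me to Alice's office"))
# walk(the(λx. office(x) ∧ possesses(alice, x)))
print(parse("Bring me the heavy, green mug"))
# bring(the(λx. heavy(x) ∧ green(x) ∧ mug(x)), me)
```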