12 research outputs found

    Data-efficient methods for dialogue systems

    Conversational User Interfaces (CUIs) have become ubiquitous in everyday life, in consumer-focused products like Siri and Alexa as well as in business-oriented customer-support automation solutions. Deep learning underlies many recent breakthroughs in dialogue systems, but it requires very large amounts of training data, often annotated by experts, which dramatically increases the cost of deploying such systems in production setups and reduces their flexibility as software products. Trained on smaller data, these methods end up severely lacking robustness to various phenomena of spoken language (e.g. disfluencies) and to out-of-domain input, and often have too little generalisation power to other tasks and domains. In this thesis, we address the above issues by introducing a series of methods for bootstrapping robust dialogue systems from minimal data. Firstly, we study two orthogonal approaches to dialogue, a linguistically informed model (DyLan) and a machine-learning-based one (MemN2N), from the data-efficiency perspective, i.e. their potential to generalise from minimal data and their robustness to natural spontaneous input. We outline the steps to obtain data-efficient solutions with either approach and proceed with the neural models for the rest of the thesis. We then introduce the core contributions of this thesis, two data-efficient models for dialogue response generation: the Dialogue Knowledge Transfer Network (DiKTNet), based on transferable latent dialogue representations, and the Generative-Retrieval Transformer (GRTr), which combines response generation with a retrieval mechanism as a fallback. GRTr ranked first at the Dialog System Technology Challenge 8 Fast Domain Adaptation task. Next, we address the problem of training robust neural models from minimal data. Specifically, we look at robustness to disfluencies and propose a multitask LSTM-based model for domain-general disfluency detection.
We then go on to explore robustness to anomalous, or out-of-domain (OOD), input. We address this problem by (1) presenting Turn Dropout, a data-augmentation technique facilitating training for anomalous input using only in-domain data, and (2) introducing VHCN and AE-HCN, autoencoder-augmented models for efficient training with Turn Dropout based on the Hybrid Code Networks (HCN) model family. With all the above work addressing goal-oriented dialogue, our final contribution in this thesis focuses on social dialogue, where the main objective is maintaining a natural, coherent, and engaging conversation for as long as possible. We introduce a neural model for response ranking in social conversation used in Alana, the 3rd-place winner in the Amazon Alexa Prize 2017 and 2018. For our model, we employ a novel technique of predicting the dialogue length as the main ranking objective. We show that this approach matches the performance of its counterpart based on the conventional, human-rating-based objective, and surpasses it given more raw dialogue transcripts, thus reducing the dependence on costly and cumbersome dialogue annotations.
EPSRC project BABBLE (grant EP/M01553X/1)
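The Turn Dropout idea described in the abstract can be sketched as follows. This is a minimal illustration, not the thesis's actual implementation: during training, randomly selected turns in an in-domain dialogue are replaced by a placeholder utterance and relabelled as out-of-domain, so an OOD detector can be trained without any real OOD data. The placeholder token, label names, and dropout rate are all assumptions made for the example.

```python
import random

OOD_TOKEN = "<unk_turn>"   # placeholder utterance (name is an assumption)
OOD_LABEL = "ood"

def turn_dropout(dialogue, p=0.3, rng=random):
    """Return a copy of `dialogue` (a list of (utterance, label) pairs)
    with each turn independently replaced, with probability `p`, by an
    OOD placeholder. The original dialogue is left unchanged."""
    augmented = []
    for utterance, label in dialogue:
        if rng.random() < p:
            augmented.append((OOD_TOKEN, OOD_LABEL))
        else:
            augmented.append((utterance, label))
    return augmented

dialogue = [("book a table for two", "in_domain"),
            ("what's the weather like", "in_domain")]
random.seed(0)
print(turn_dropout(dialogue, p=0.8))
```

A model trained on such augmented dialogues sees "anomalous" turns at known positions and can learn to flag them, which is the gist of training an OOD-robust model from in-domain data alone.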

    A practical guide to conversation research: how to study what people say to each other

    Conversation—a verbal interaction between two or more people—is a complex, pervasive, and consequential human behavior. Conversations have been studied across many academic disciplines. However, advances in recording and analysis techniques over the last decade have allowed researchers to more directly and precisely examine conversations in natural contexts and at a larger scale than ever before, and these advances open new paths to understand humanity and the social world. Existing reviews of text analysis and conversation research have focused on text generated by a single author (e.g., product reviews, news articles, and public speeches) and thus leave open questions about the unique challenges presented by interactive conversation data (i.e., dialogue). In this article, we suggest approaches to overcome common challenges in the workflow of conversation science, including recording and transcribing conversations, structuring data (to merge turn-level and speaker-level data sets), extracting and aggregating linguistic features, estimating effects, and sharing data. This practical guide is meant to shed light on current best practices and empower more researchers to study conversations more directly—to expand the community of conversation scholars and contribute to a greater cumulative scientific understanding of the social world
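The data-structuring step the guide mentions (merging turn-level and speaker-level data sets) can be sketched in plain Python. This is an illustrative example with invented field names, not the article's workflow: each turn record carries a speaker id, and speaker-level attributes are joined onto it.

```python
# Turn-level records: one row per conversational turn (fields are assumptions).
turns = [
    {"conv_id": 1, "turn": 1, "speaker_id": "A", "text": "hi there"},
    {"conv_id": 1, "turn": 2, "speaker_id": "B", "text": "hello!"},
]

# Speaker-level records: one row per speaker, keyed by speaker id.
speakers = {
    "A": {"age": 31, "role": "interviewer"},
    "B": {"age": 27, "role": "participant"},
}

def merge_turns_with_speakers(turns, speakers):
    """Left-join turn records with speaker records on speaker_id, so each
    turn carries both what was said and who said it."""
    return [{**t, **speakers.get(t["speaker_id"], {})} for t in turns]

merged = merge_turns_with_speakers(turns, speakers)
print(merged[0])  # turn fields plus the speaker's age and role
```

In practice the same join is usually done with a data-frame library, but the structure is the same: turn-level features can then be aggregated up to the speaker or conversation level for analysis.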

    L’individualità del parlante nelle scienze fonetiche: applicazioni tecnologiche e forensi (Speaker individuality in the phonetic sciences: technological and forensic applications)


    Proceedings of the VIIth GSCP International Conference

    The 7th International Conference of the Gruppo di Studi sulla Comunicazione Parlata, dedicated to the memory of Claire Blanche-Benveniste, chose Speech and Corpora as its main theme. The wide international origin of the 235 authors, from 21 countries and 95 institutions, led to papers on many different languages. The 89 papers of this volume reflect the themes of the conference: spoken-corpora compilation and annotation, together with the related technological fields; the relation between prosody and pragmatics; speech pathologies; and various papers on phonetics, speech and linguistic analysis, pragmatics, and sociolinguistics. Many papers are also dedicated to speech and second-language studies. The online publication with FUP allows direct access to sound and video linked to the papers (when downloaded)

    Um, One Large Pizza. A Preliminary Study of Disfluency Modelling for Improving ASR

    A corpus of spontaneous telephone transactions between call centre operators of a pizza company and its customers is examined for disfluencies (fillers and speech repairs) with the aim of improving automatic speech recognition. From this, a subset of the customer orders is selected as a test set. An architecture is presented which allows filled pauses and repairs to be detected and corrected. A language repair module removes fillers and reparanda and transforms utterances containing them into fluent utterances. An experiment on filled pauses using this module and architecture is then described. A speech recognition grammar for recognising fluent speech is used to provide a baseline. This grammar is then enriched with filled pauses, based on their placement in relation to syntactic boundaries. Evaluation is done at the level of understanding, using a metric on feature structures. Initial results indicate that incorporating filled pauses at syntactic boundaries improves the recognition results for spontaneous continuous speech containing disfluencies
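The language-repair step described above (removing fillers and reparanda to turn a disfluent utterance into a fluent one) can be illustrated with a toy version. The filler list and the simple repetition-repair pattern here are assumptions for the sake of the example, not the paper's actual module or grammar:

```python
import re

# Filled pauses to strip (an assumed, non-exhaustive list).
FILLERS = r"\b(?:um|uh|er|erm)\b"

def repair(utterance):
    """Return an approximately fluent version of `utterance`."""
    # 1. remove filled pauses
    s = re.sub(FILLERS, "", utterance, flags=re.IGNORECASE)
    # 2. collapse immediate word repetitions, a crude stand-in for
    #    reparandum removal ("one one large" -> "one large")
    s = re.sub(r"\b(\w+)(\s+\1\b)+", r"\1", s, flags=re.IGNORECASE)
    # 3. tidy up whitespace left behind by the deletions
    return re.sub(r"\s+", " ", s).strip()

print(repair("um one one large pizza uh please"))  # → one large pizza please
```

A real system, as the abstract notes, instead enriches the recognition grammar with filled pauses at syntactic boundaries and detects repairs structurally; the point of the sketch is only the detect-and-delete pipeline shape.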

    Reduktion in natürlicher Sprache (Reduction in natural speech)

    Natural (conversational) speech, compared to canonical speech, is marked by a tremendous amount of variation that often leads to massive changes in pronunciation. Despite many attempts to explain and theorize the variability in conversational speech, its unique characteristics have not played a significant role in linguistic modeling. One of the reasons for variation in natural speech lies in a tendency of speakers to reduce speech, which may drastically alter the phonetic shape of words. Despite the massive loss of information due to reduction, listeners are often able to understand conversational speech even in the presence of background noise. This dissertation investigates two reduction processes, namely regressive place assimilation across word boundaries and massive reduction, and provides novel data from the analyses of speech corpora, combined with experimental results from perception studies, to reach a better understanding of how humans handle natural speech. The successes and failures of two models dealing with data from natural speech are presented: the FUL model (Featurally Underspecified Lexicon, Lahiri & Reetz, 2002) and X-MOD (an episodic model, Johnson, 1997). Based on different assumptions, the two models make different predictions for the two types of reduction processes under investigation. This dissertation explores the nature and dynamics of these processes in speech production and discusses their consequences for speech perception. More specifically, data from analyses of running speech are presented, investigating the amount of reduction that occurs in naturally spoken German. Concerning production, the corpus analysis of regressive place assimilation reveals that it is not an obligatory process. At the same time, a clear asymmetry emerges: with only very few exceptions, only [coronal] segments undergo assimilation; [labial] and [dorsal] segments usually do not.
Furthermore, there seem to be cases of complete neutralization where the underlying Place of Articulation feature has undergone complete assimilation to the Place of Articulation feature of the upcoming segment. Phonetic analyses further underpin these findings. Concerning deletions and massive reductions, the results clearly indicate that phonological rules in the classical generative tradition are not able to explain the reduction patterns attested in conversational speech. Overall, the analyses of deletion and massive reduction in natural speech did not exhibit clear-cut patterns. For a more in-depth examination of reduction factors, the case of final /t/ deletion is examined by means of a new corpus constructed for this purpose. The analysis of this corpus indicates that although phonological context plays an important role on the deletion of segments (i.e. /t/), this arises in the form of tendencies, not absolute conditions. This is true for other deletion processes, too. Concerning speech perception, a crucial part for both models under investigation (X-MOD and FUL) is how listeners handle reduced speech. Five experiments investigate the way reduced speech is perceived by human listeners. Results from two experiments show that regressive place assimilations can be treated as instances of complete neutralizations by German listeners. Concerning massively reduced words, the outcome of transcription and priming experiments suggest that such words are not acceptable candidates of the intended lexical items for listeners in the absence of their proper phrasal context. Overall, the abstractionist FUL-model is found to be superior in explaining the data. While at first sight, X-MOD deals with the production data more readily, FUL provides a better fit for the perception results. Another important finding concerns the role of phonology and phonetics in general. 
The results presented in this dissertation make a strong case for models, such as FUL, where phonology and phonetics operate at different levels of the mental lexicon, rather than being integrated into one. The findings suggest that phonetic variation is not part of the representation in the mental lexicon.