
    Testing Strategies For Bridging Time-To-Content In Spoken Dialogue Systems

    Get PDF
    Lopez Gambino MS, Zarrieß S, Schlangen D. Testing Strategies For Bridging Time-To-Content In Spoken Dialogue Systems. In: Proceedings of the Ninth International Workshop on Spoken Dialogue Systems Technology (IWSDS 2018). 2018

    A Differentiable Generative Adversarial Network for Open Domain Dialogue

    Get PDF
    Paper presented at IWSDS 2019: International Workshop on Spoken Dialogue Systems Technology, Siracusa, Italy, April 24-26, 2019. This work presents a novel methodology to train open-domain neural dialogue systems within the framework of Generative Adversarial Networks using gradient-based optimization methods. We avoid the non-differentiability inherent in text-generating networks by approximating the word vector corresponding to each generated token via a top-k softmax. We show that a weighted average of the word vectors of the most probable tokens, computed from the probabilities resulting from the top-k softmax, leads to a good approximation of the word vector of the generated token. Finally, we demonstrate through a human evaluation that training a neural dialogue system via adversarial learning with this method successfully discourages it from producing generic responses; instead, it tends to produce more informative and varied ones. This work has been partially funded by the Basque Government under grant PRE_2017_1_0357, by the University of the Basque Country UPV/EHU under grant PIF17/310, and by the H2020 RIA EMPATHIC (Grant No. 769872).
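The abstract's core trick, replacing a discrete token sample with a weighted average of the top-k tokens' word vectors, can be sketched in NumPy as below. This is an illustrative reconstruction, not the authors' code: the vocabulary, dimensions, and function name are invented, and a real implementation would live inside a differentiable framework so gradients flow through the softmax weights.

```python
import numpy as np

def topk_softmax_embedding(logits, embeddings, k=5):
    """Approximate the embedding of a generated token as a convex
    combination of the k most probable tokens' word vectors, weighted
    by a softmax restricted to those k logits."""
    topk_idx = np.argsort(logits)[-k:]        # indices of the k largest logits
    topk_logits = logits[topk_idx]
    weights = np.exp(topk_logits - topk_logits.max())
    weights /= weights.sum()                  # softmax over the top-k logits only
    return weights @ embeddings[topk_idx]     # weighted average of word vectors

# Toy vocabulary of 10 tokens with 4-dimensional embeddings.
rng = np.random.default_rng(0)
logits = rng.normal(size=10)
embeddings = rng.normal(size=(10, 4))
vec = topk_softmax_embedding(logits, embeddings, k=3)
print(vec.shape)  # (4,)
```

Because the output is a smooth function of the logits, the generator can receive gradients from the discriminator, which is exactly what sampling a hard token would prevent.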

    Towards Understanding Egyptian Arabic Dialogues

    Full text link
    Labelling a user's utterances to understand their intents, known as Dialogue Act (DA) classification, is a key component of the language-understanding layer in automatic dialogue systems. In this paper, we propose a novel approach to labelling users' utterances in Egyptian spontaneous dialogues and instant messages using a Machine Learning (ML) approach, without relying on any special lexicons, cues, or rules. Due to the lack of an Egyptian-dialect dialogue corpus, the system was evaluated on a multi-genre corpus of 4725 utterances across three domains, collected and annotated manually from Egyptian call centres. The system achieves an F1 score of 70.36% over all domains. Comment: arXiv admin note: substantial text overlap with arXiv:1505.0308
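As a minimal illustration of lexicon-free DA classification of the kind the abstract describes, the sketch below trains a tiny bag-of-words naive Bayes classifier from labelled utterances alone. The utterances, labels, and function names are all hypothetical toy data, not the paper's corpus or model.

```python
from collections import Counter, defaultdict
import math

# Hypothetical toy data: utterances labelled with dialogue acts.
train = [
    ("what is my balance", "Question"),
    ("how do i reset my password", "Question"),
    ("thank you very much", "Thanks"),
    ("thanks a lot", "Thanks"),
    ("close my account please", "Request"),
    ("please cancel the order", "Request"),
]

# Multinomial naive Bayes over bag-of-words: no lexicons or hand-written rules.
class_counts = Counter(label for _, label in train)
word_counts = defaultdict(Counter)
vocab = set()
for text, label in train:
    for w in text.split():
        word_counts[label][w] += 1
        vocab.add(w)

def classify(text):
    scores = {}
    for label in class_counts:
        total = sum(word_counts[label].values())
        score = math.log(class_counts[label] / len(train))   # class prior
        for w in text.split():
            # add-one smoothing over the shared vocabulary
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(classify("thank you"))        # Thanks
print(classify("please close it"))  # Request
```

Everything the classifier knows comes from the labelled examples themselves, which is the property the abstract emphasises for dialects lacking curated resources.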

    Active learning for dialogue act labelling

    Full text link
    Active learning is a useful technique that allows for a considerable reduction in the amount of data that must be manually labelled to reach good performance with a statistical model. To apply active learning to a particular task, we must first define an effective selection criterion that picks out the most informative samples at each iteration of the active learning process. This is still an open problem, which we address here for the task of dialogue annotation at the dialogue act level. We present two criteria, weighted number of hypotheses and entropy, which we apply to the Sample Selection Algorithm for the task of dialogue act labelling, and which yield appreciable improvements in our experiments. © 2011 Springer-Verlag. Work supported by the EC (FEDER/FSE) and the Spanish MEC/MICINN under the MIPRCV "Consolider Ingenio 2010" program (CSD2007-00018) and MITTRAL (TIN2009-14633-C03-01) projects, and the FPI scholarship (BES-2009-028965). Also supported by the Generalitat Valenciana under grant Prometeo/2009/014 and GV/2010/067. Ghigi, F.; Tamarit Ballester, V.; Martínez-Hinarejos, C.; Benedí Ruiz, JM. (2011). Active learning for dialogue act labelling. In: Lecture Notes in Computer Science, vol. 6669, pp. 652-659. Springer Verlag (Germany). https://doi.org/10.1007/978-3-642-21257-4_81
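The entropy criterion mentioned in the abstract can be sketched as follows: given a model's posterior over dialogue-act labels for each unlabelled sample, select for annotation the samples whose posterior has the highest Shannon entropy, i.e. those the model is least certain about. The data structure and function names here are illustrative assumptions, not the paper's implementation.

```python
import math

def entropy(probs):
    """Shannon entropy of a model's posterior over dialogue-act labels."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_most_informative(unlabelled, n):
    """Pick the n samples the current model is least certain about
    (highest posterior entropy) for manual annotation."""
    return sorted(unlabelled, key=lambda s: entropy(s["posterior"]), reverse=True)[:n]

# Hypothetical posteriors from a model over three dialogue acts.
pool = [
    {"id": "u1", "posterior": [0.98, 0.01, 0.01]},  # confident prediction
    {"id": "u2", "posterior": [0.40, 0.35, 0.25]},  # uncertain prediction
    {"id": "u3", "posterior": [0.70, 0.20, 0.10]},
]
picked = select_most_informative(pool, 1)
print(picked[0]["id"])  # u2
```

At each active-learning iteration the selected samples are labelled by a human, added to the training set, and the model is retrained before the next selection round.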

    Survey on Evaluation Methods for Dialogue Systems

    Get PDF
    In this paper, we survey the methods and concepts developed for the evaluation of dialogue systems. Evaluation is a crucial part of the development process. Often, dialogue systems are evaluated by means of human evaluations and questionnaires; however, this tends to be very cost- and time-intensive. Thus, much work has gone into finding methods that reduce the involvement of human labour. In this survey, we present the main concepts and methods, differentiating between the various classes of dialogue systems (task-oriented dialogue systems, conversational dialogue systems, and question-answering dialogue systems). We cover each class by introducing the main technologies developed for its dialogue systems and then presenting the evaluation methods for that class.

    The BURCHAK corpus: a Challenge Data Set for Interactive Learning of Visually Grounded Word Meanings

    Full text link
    We motivate and describe a new freely available human-human dialogue dataset for interactive learning of visually grounded word meanings through ostensive definition by a tutor to a learner. The data has been collected using a novel, character-by-character variant of the DiET chat tool (Healey et al., 2003; Mills and Healey, submitted) with a novel task, where a learner needs to learn invented visual attribute words (such as "burchak" for square) from a tutor. As such, the text-based interactions closely resemble face-to-face conversation and thus contain many of the linguistic phenomena encountered in natural, spontaneous dialogue. These include self- and other-correction, mid-sentence continuations, interruptions, overlaps, fillers, and hedges. We also present a generic n-gram framework for building user (i.e. tutor) simulations from this type of incremental data, which is freely available to researchers. We show that the simulations produce outputs similar to the original data (e.g. 78% turn-match similarity). Finally, we train and evaluate a Reinforcement Learning dialogue control agent for learning visually grounded word meanings, trained from the BURCHAK corpus. The learned policy shows performance comparable to a previously built rule-based system. Comment: 10 pages, The 6th Workshop on Vision and Language (VL'17)
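The n-gram user-simulation idea in the abstract can be illustrated with a minimal bigram sampler: count token successors in recorded tutor turns, then generate new turns by sampling from those counts. This is a generic sketch under invented toy data; the real BURCHAK framework operates on incremental, character-by-character data and is more elaborate.

```python
from collections import Counter, defaultdict
import random

# Hypothetical tutor turns from a BURCHAK-style corpus, tokenised per turn.
turns = [
    "this is a burchak".split(),
    "this is a red burchak".split(),
    "yes this is a burchak".split(),
]

# Bigram model over turns: successor frequencies for each token.
START, END = "<s>", "</s>"
bigrams = defaultdict(Counter)
for turn in turns:
    tokens = [START] + turn + [END]
    for a, b in zip(tokens, tokens[1:]):
        bigrams[a][b] += 1

def simulate_turn(rng, max_len=10):
    """Sample a tutor turn token-by-token from the bigram counts."""
    out, tok = [], START
    for _ in range(max_len):
        succs = bigrams[tok]
        tok = rng.choices(list(succs), weights=list(succs.values()))[0]
        if tok == END:
            break
        out.append(tok)
    return out

print(simulate_turn(random.Random(0)))
```

A simulator built this way can stand in for a human tutor when training a Reinforcement Learning agent, which is how the abstract's dialogue control policy is trained from corpus data.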