
    Modelling Users, Intentions, and Structure in Spoken Dialog

    We outline how utterances in dialogs can be interpreted using a partial first-order logic. We exploit the capability of this logic to talk about the truth status of formulae in order to define a notion of coherence between utterances, and explain how this coherence relation can serve for the construction of AND/OR trees that represent the segmentation of the dialog. In a BDI model we formalize basic assumptions about dialog and the cooperative behaviour of participants. These assumptions provide a basis for inferring speech acts from coherence relations between utterances and the attitudes of dialog participants. Speech acts prove to be useful for determining dialog segments, defined in terms of completing the expectations of dialog participants. Finally, we sketch how explicit segmentation signalled by cue phrases and performatives is covered by our dialog model. (Comment: 17 pages)
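The abstract's idea of dialog segments as AND/OR trees that close when participants' expectations are completed can be illustrated with a toy structure (a purely illustrative sketch; `Node` and `completed` are invented names, not the paper's partial-logic formalism):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    """A node in a toy AND/OR segmentation tree."""
    label: str
    kind: str = "AND"            # "AND": all children must complete; "OR": any one suffices
    children: List["Node"] = field(default_factory=list)
    done: bool = False           # completion flag for a leaf utterance

def completed(node: Node) -> bool:
    """A segment is complete once the expectations it opens are met."""
    if not node.children:
        return node.done
    if node.kind == "AND":
        return all(completed(c) for c in node.children)
    return any(completed(c) for c in node.children)
```

An OR node models alternative ways of satisfying the same expectation (e.g. a direct or an indirect answer), while an AND node requires every sub-segment to be closed.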

    HUMAN-ROBOT INTERACTION: LANGUAGE ACQUISITION WITH NEURAL NETWORK

    ABSTRACT The paper gives an overview of two language processing methods for human-robot interaction. Echo State Networks and stochastic-learning grammars are explored in order to understand how natural human language can be generated, and to assess the possibility of integrating these methods so that communication between robots, or between a robot and a human, becomes more natural in a dialogic syntactic language game. Integrating the methods could yield several benefits, such as improved communicative efficiency and more natural sentence production.
    ABSTRAK (translated from Indonesian): This paper describes two natural-language processing methods for human-robot interaction. Echo State Networks, an artificial neural network architecture based on the supervised-learning principle for recurrent neural networks, is explored together with stochastic-learning grammar, a probabilistic grammar framework, with the aim of understanding how natural human language is processed and whether the two methods can be integrated to make communication between robots, or between a robot and a human, more natural in a dialogic syntactic language game. Integrating the methods can offer several benefits, such as more efficient communication and more natural sentence construction during communication.
    How to cite: Fazrie, A.R. (2018). HUMAN-ROBOT INTERACTION: LANGUAGE ACQUISITION WITH NEURAL NETWORK. Jurnal Teknik Informatika, 11(1), 75-84. DOI: 10.15408/jti.v11i1.6093
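The Echo State Network mentioned above keeps a fixed random recurrent "reservoir" and trains only a linear readout. A minimal sketch on a toy next-value prediction task (illustrative only, not the paper's setup; the sizes, spectral radius, and ridge penalty are assumed values):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 100
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))   # fixed input weights
W = rng.uniform(-0.5, 0.5, (n_res, n_res))     # fixed recurrent weights
W *= 0.9 / max(abs(np.linalg.eigvals(W)))      # spectral radius < 1 (echo state property)

def run_reservoir(u):
    """Drive the reservoir with a 1-D input sequence; collect states."""
    x = np.zeros(n_res)
    states = []
    for t in range(len(u)):
        x = np.tanh(W_in @ u[t:t+1] + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the next value of a sine wave.
u = np.sin(np.linspace(0, 8 * np.pi, 400))
X = run_reservoir(u[:-1])
y = u[1:]
# Only the readout is trained, via ridge regression.
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
pred = X @ W_out
```

Because the recurrent weights stay fixed, training reduces to a single least-squares problem, which is what makes ESNs attractive for fast sequence learning.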

    Learning to Translate in Real-time with Neural Machine Translation

    Translating in real-time, a.k.a. simultaneous translation, outputs translation words before the input sentence ends, which is a challenging problem for conventional machine translation methods. We propose a neural machine translation (NMT) framework for simultaneous translation in which an agent learns to make decisions on when to translate from its interaction with a pre-trained NMT environment. To trade off quality and delay, we extensively explore various targets for delay and design a method for beam search applicable in the simultaneous MT setting. Experiments against state-of-the-art baselines on two language pairs demonstrate the efficacy of the proposed framework both quantitatively and qualitatively. (Comment: 10 pages, camera ready)
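The core decision problem above is when to READ another source token versus WRITE a target token. A simple fixed heuristic for that same decision, often used as a baseline, is a wait-k schedule, sketched here with an invented `translate_step` callback (the paper instead *learns* this policy with an agent):

```python
def wait_k_schedule(source_tokens, k, translate_step):
    """Wait for k source tokens, then alternate one WRITE per READ.
    translate_step(prefix, output) must return the next target token,
    or None once the translation of the full source is finished."""
    output, read = [], 0
    while True:
        if read < min(k + len(output), len(source_tokens)):
            read += 1                                  # READ a source token
        else:
            tok = translate_step(source_tokens[:read], output)
            if tok is None:                            # end of translation
                break
            output.append(tok)                         # WRITE a target token
    return output
```

The delay/quality trade-off is visible directly: larger k means more source context per emitted word but higher latency.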

    JBendge: An Object-Oriented System for Solving, Estimating and Selecting Nonlinear Dynamic Models

    We present an object-oriented software framework for specifying, solving, and estimating nonlinear dynamic stochastic general equilibrium (DSGE) models. The implemented solution methods for finding the unknown policy function are the standard linearization around the deterministic steady state, and a function iterator using a multivariate global Chebyshev polynomial approximation with the Smolyak operator to overcome the curse of dimensionality. The operator is also useful for numerical integration, and we use it for the integrals arising in rational expectations and in nonlinear state space filters. The estimation step is done by a parallel Metropolis-Hastings (MH) algorithm using a linear or nonlinear filter. Implemented are the Kalman, Extended Kalman, Particle, Smolyak Kalman, Smolyak Sum, and Smolyak Kalman Particle filters. The MH sampling step can be interactively monitored and controlled by sequence and statistics plots. The number of parallel threads can be adjusted to benefit from multiprocessor environments. JBendge is based on the framework JStatCom, which provides a standardized application interface. All tasks are supported by an elaborate multi-threaded graphical user interface (GUI) with project management and data handling facilities. Keywords: Dynamic Stochastic General Equilibrium (DSGE) Models, Bayesian Time Series Econometrics, Java, Software Development
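The estimation step rests on the random-walk Metropolis-Hastings update: propose a perturbed parameter vector and accept it with probability min(1, posterior ratio). A single-chain toy version (a generic sketch with an assumed Gaussian proposal, not JBendge's Java implementation):

```python
import numpy as np

def metropolis_hastings(log_post, theta0, n_draws, scale, seed=0):
    """Random-walk MH: propose theta' = theta + scale * N(0, I),
    accept with probability min(1, exp(log_post(theta') - log_post(theta)))."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    draws = []
    for _ in range(n_draws):
        prop = theta + scale * rng.standard_normal(theta.shape)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject in log space
            theta, lp = prop, lp_prop
        draws.append(theta.copy())
    return np.array(draws)

# Demo: sample a standard normal posterior.
draws = metropolis_hastings(lambda t: -0.5 * float(t @ t), np.array([0.0]), 20000, 1.0)
```

In a DSGE setting `log_post` would evaluate the likelihood through one of the listed filters (Kalman, particle, Smolyak variants) plus the prior; running several such chains in parallel threads is what the framework's parallel MH step does.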

    A Bi-Encoder LSTM Model for Learning Unstructured Dialogs

    Creating a data-driven model trained on a large dataset of unstructured dialogs is a crucial step in developing a retrieval-based chatbot system. This thesis presents a Long Short-Term Memory (LSTM) based recurrent neural network architecture that learns unstructured multi-turn dialogs, and reports results on the task of selecting the best response from a collection of given responses. The Ubuntu Dialog Corpus Version 2 (UDCv2) was used as the training corpus. Ryan et al. (2015) explored learning models such as TF-IDF (term frequency-inverse document frequency), a recurrent neural network (RNN), and a Dual Encoder (DE) based on an LSTM model suitable for learning from the Ubuntu Dialog Corpus Version 1 (UDCv1). We use the same architecture, but on UDCv2, as a benchmark, and introduce a new LSTM-based architecture called the Bi-Encoder LSTM model (BE) that achieves 0.8%, 1.0% and 0.3% higher accuracy for Recall@1, Recall@2 and Recall@5 respectively than the DE model. In contrast to the DE model, the proposed BE model has separate encodings for utterances and responses. The BE model also uses a different similarity measure for utterance-response matching than the benchmark model. We further explore the BE model through various experiments, and report results obtained with several similarity functions, model hyper-parameters, and word embeddings on the proposed architecture.
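Response selection in this setting is typically scored by matching separately encoded context and response vectors, and evaluated with Recall@k over a candidate set. A minimal sketch (illustrative only; `bilinear_score` and `recall_at_k` are invented helper names, and the BE model's actual similarity measure differs from this dual-encoder-style form):

```python
import numpy as np

def bilinear_score(c, r, M):
    """Dual-encoder-style match: sigmoid(c^T M r) for a context
    vector c and a response vector r produced by separate encoders."""
    return 1.0 / (1.0 + np.exp(-(c @ M @ r)))

def recall_at_k(scores, true_idx, k):
    """scores: (n_examples, n_candidates) matrix of match scores;
    true_idx: index of the correct response for each example.
    Returns the fraction of examples whose true response ranks in the top k."""
    top_k = np.argsort(-scores, axis=1)[:, :k]
    hits = [true_idx[i] in top_k[i] for i in range(len(scores))]
    return float(np.mean(hits))
```

Under this metric the reported gains (e.g. +0.8% Recall@1) mean the BE model ranks the true response first in a slightly larger fraction of test examples than the DE baseline.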