
    Personalized Dialogue Generation with Diversified Traits

    Endowing a dialogue system with particular personality traits is essential to deliver more human-like conversations. However, due to the challenge of embodying personality via language expression and the lack of large-scale persona-labeled dialogue data, this research problem is still far from well studied. In this paper, we investigate the problem of incorporating explicit personality traits into dialogue generation to deliver personalized dialogues. To this end, we first construct PersonalDialog, a large-scale multi-turn dialogue dataset containing various traits from a large number of speakers. The dataset consists of 20.83M sessions and 56.25M utterances from 8.47M speakers. Each utterance is associated with a speaker who is marked with traits such as Age, Gender, Location, and Interest Tags. Several anonymization schemes are designed to protect the privacy of each speaker. This large-scale dataset will facilitate not only the study of personalized dialogue generation, but also other research in sociolinguistics and social science. Second, to study how personality traits can be captured and addressed in dialogue generation, we propose persona-aware dialogue generation models within the sequence-to-sequence learning framework. Explicit personality traits (structured as key-value pairs) are embedded using a trait fusion module. During decoding, two techniques, namely persona-aware attention and persona-aware bias, are devised to capture and address trait-related information. Experiments demonstrate that our model is able to address proper traits in different contexts. Case studies also show interesting results for this challenging research problem.
    Comment: Please contact [zhengyinhe1 at 163 dot com] for the PersonalDialog dataset
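
    The following is a minimal PyTorch sketch of how structured traits might be embedded by a trait fusion module and injected as a persona-aware bias on the decoder's output distribution. The module names, dimensions, and the simple mean fusion are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch (not the authors' implementation): embedding key-value traits
# and adding a persona-aware bias to decoder logits in a seq2seq model.
import torch
import torch.nn as nn

class TraitFusion(nn.Module):
    """Embeds structured traits (e.g. Age, Gender, Location) and fuses them
    into a single persona vector. Trait vocabulary sizes are assumptions."""
    def __init__(self, trait_vocab_sizes, trait_dim=64):
        super().__init__()
        self.trait_embs = nn.ModuleList(
            [nn.Embedding(v, trait_dim) for v in trait_vocab_sizes])

    def forward(self, trait_ids):            # trait_ids: (batch, num_traits)
        vecs = [emb(trait_ids[:, i]) for i, emb in enumerate(self.trait_embs)]
        return torch.stack(vecs, dim=1).mean(dim=1)   # simple mean fusion

class PersonaAwareBias(nn.Module):
    """Maps the fused persona vector to a bias over the output vocabulary,
    added to the decoder logits at every decoding step."""
    def __init__(self, trait_dim, vocab_size):
        super().__init__()
        self.proj = nn.Linear(trait_dim, vocab_size)

    def forward(self, decoder_logits, persona_vec):
        # decoder_logits: (batch, vocab_size); persona_vec: (batch, trait_dim)
        return decoder_logits + self.proj(persona_vec)

# Usage with toy shapes
fusion = TraitFusion(trait_vocab_sizes=[100, 3, 400], trait_dim=64)
bias = PersonaAwareBias(trait_dim=64, vocab_size=32000)
traits = torch.randint(0, 3, (8, 3))          # three trait ids per speaker
persona = fusion(traits)                      # (8, 64)
logits = torch.randn(8, 32000)
biased_logits = bias(logits, persona)         # persona-adjusted distribution
```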

    Out-of-domain Detection for Natural Language Understanding in Dialog Systems

    Natural Language Understanding (NLU) is a vital component of dialogue systems, and its ability to detect Out-of-Domain (OOD) inputs is critical in practical applications, since accepting an OOD input that is unsupported by the current system may lead to catastrophic failure. However, most existing OOD detection methods rely heavily on manually labeled OOD samples and cannot take full advantage of unlabeled data. This limits the feasibility of these models in practical applications. In this paper, we propose a novel model to generate high-quality pseudo OOD samples that are akin to IN-Domain (IND) input utterances, and thereby improve the performance of OOD detection. To this end, an autoencoder is trained to map an input utterance into a latent code, and the codes of IND and OOD samples are trained to be indistinguishable by utilizing a generative adversarial network. To provide more supervision signals, an auxiliary classifier is introduced to regularize the generated OOD samples to have indistinguishable intent labels. Experiments show that the pseudo OOD samples generated by our model can be used to effectively improve OOD detection in NLU. We also demonstrate that the effectiveness of these pseudo OOD data can be further improved by efficiently utilizing unlabeled data.
    Comment: Accepted by TALS
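
    The sketch below illustrates, under simplifying assumptions, the two training signals described: an adversarial objective that makes IND and pseudo-OOD latent codes indistinguishable, and an auxiliary intent classifier pushed toward an indistinguishable (here, uniform) intent prediction on pseudo-OOD codes. The network shapes, loss forms, and the uniform-distribution regularizer are assumptions, not the paper's exact formulation.

```python
# Minimal sketch (assumptions, not the paper's exact losses): adversarial
# matching of IND and pseudo-OOD latent codes, plus an auxiliary intent
# classifier regularized toward a uniform prediction on pseudo-OOD codes.
import torch
import torch.nn as nn
import torch.nn.functional as F

latent_dim, num_intents, batch = 64, 10, 32

discriminator = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                              nn.Linear(128, 1))
intent_clf = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                           nn.Linear(128, num_intents))

def gan_losses(ind_codes, pseudo_ood_codes):
    """Standard non-saturating GAN losses on latent codes."""
    d_real = discriminator(ind_codes)
    d_fake = discriminator(pseudo_ood_codes)
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) +
              F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    g_loss = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    return d_loss, g_loss

def auxiliary_intent_loss(pseudo_ood_codes):
    """KL divergence between the classifier's prediction on pseudo-OOD codes
    and the uniform intent distribution."""
    log_probs = F.log_softmax(intent_clf(pseudo_ood_codes), dim=-1)
    uniform = torch.full_like(log_probs, 1.0 / num_intents)
    return F.kl_div(log_probs, uniform, reduction="batchmean")

# Toy codes standing in for encoder outputs of IND utterances and generated samples
ind_codes = torch.randn(batch, latent_dim)
pseudo_ood_codes = torch.randn(batch, latent_dim)
d_loss, g_loss = gan_losses(ind_codes, pseudo_ood_codes)
aux_loss = auxiliary_intent_loss(pseudo_ood_codes)
```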

    Modelling Pro-drop with the Rational Speech Acts Model


    Highly Efficient Knowledge Graph Embedding Learning with Orthogonal Procrustes Analysis

    Knowledge Graph Embeddings (KGEs) have been intensively explored in recent years due to their promise for a wide range of applications. However, existing studies focus on improving the final model performance without acknowledging the computational cost of the proposed approaches, in terms of execution time and environmental impact. This paper proposes a simple yet effective KGE framework which can reduce the training time and carbon footprint by orders of magnitude compared with state-of-the-art approaches, while producing competitive performance. We highlight three technical innovations: full batch learning via relational matrices, closed-form Orthogonal Procrustes Analysis for KGEs, and non-negative-sampling training. In addition, as the first KGE method whose entity embeddings also store full relation information, our trained models encode rich semantics and are highly interpretable. Comprehensive experiments and ablation studies involving 13 strong baselines and two standard datasets verify the effectiveness and efficiency of our algorithm.
    Comment: To appear at NAACL 2021
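
    For reference, the closed-form Orthogonal Procrustes solution mentioned above can be written in a few lines: given matrices A and B, the orthogonal R minimizing ||AR - B||_F is U V^T, where U S V^T is the SVD of A^T B. The sketch below shows only this building block with toy data, not the authors' full KGE framework; the shapes and the head/tail interpretation are illustrative assumptions.

```python
# Minimal sketch: the closed-form solution of the Orthogonal Procrustes problem,
# argmin_R ||A @ R - B||_F subject to R.T @ R = I.
import numpy as np

def orthogonal_procrustes(A, B):
    """Return the orthogonal matrix R that best maps rows of A onto rows of B."""
    # SVD of the cross-covariance matrix gives the optimal rotation.
    U, _, Vt = np.linalg.svd(A.T @ B)
    return U @ Vt

# Toy example: head-entity and tail-entity embeddings for one relation (assumed shapes)
rng = np.random.default_rng(0)
heads = rng.standard_normal((1000, 32))
true_R, _ = np.linalg.qr(rng.standard_normal((32, 32)))
tails = heads @ true_R                          # tails are an exact rotation of heads
R = orthogonal_procrustes(heads, tails)
print(np.allclose(R, true_R, atol=1e-6))        # recovers the relation matrix
```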

    DrivingBeacon : Driving Behaviour Change Support System Considering Mobile Use and Geo-information


    BASIC: A Comprehensive Model for SOx Formation Mechanism and Optimization in Municipal Solid Waste (MSW) Combustion

    Municipal solid waste (MSW) incineration is one of the main techniques currently used for waste-to-energy (WTE) conversion in China. Although the sulfur content in MSW is lower than that in coal, its emissions cannot be neglected due to environmental pollution, malodor, health problems, and global climate change. It is therefore particularly important to effectively predict and control sulfur pollutants. In this study, a comprehensive model was developed and coupled with the bulk accumulated solids incineration code (BASIC), a bed model of the full combustion process, to investigate the formation and transformation of sulfur in MSW incineration. The model includes submodels of the four stages of MSW combustion; governing equations for mass, momentum, and energy conservation; and various chemical reactions. Based on this model, the effects of different parameters on the formation of sulfur pollutants during incineration were studied under different operating conditions. The study finds that initial temperature, primary air volume, and material particle size have significant impacts on SOx formation, whereas pressure has a less significant effect. This article also considers H2S, COS, and CS2 formation under different conditions. An optimization study was performed to reduce SOx pollutants.
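
    As a rough illustration of the kind of parametric study described (not the BASIC code itself), the sketch below sweeps initial temperature, primary air volume, and particle size and records a predicted SOx value; `basic_simulate`, its formula, and the parameter values are hypothetical placeholders.

```python
# Illustrative only: a hypothetical parameter sweep over the operating
# conditions named in the abstract. `basic_simulate` is a placeholder
# standing in for a run of the bed model, not the actual BASIC solver.
from itertools import product

def basic_simulate(temperature_K, primary_air_m3_h, particle_size_mm):
    """Dummy SOx prediction so the sweep below is runnable."""
    return 0.01 * temperature_K + 0.001 * primary_air_m3_h - 0.5 * particle_size_mm

temperatures = [900, 1000, 1100]        # K (assumed values)
air_volumes = [4000, 5000, 6000]        # m3/h (assumed values)
particle_sizes = [10, 20, 40]           # mm (assumed values)

results = {
    (T, Q, d): basic_simulate(T, Q, d)
    for T, Q, d in product(temperatures, air_volumes, particle_sizes)
}
best = min(results, key=results.get)    # operating point with lowest predicted SOx
print(best, results[best])
```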

    Improving Variational Autoencoder for Text Modelling with Timestep-Wise Regularisation

    Accepted by COLING 2020, final camera-ready version. Preprint

    A Simple Algorithm for Online Decision Making

    Motivated by recent progress on online linear programming (OLP), we study the online decision making problem (ODMP) as a natural generalization of OLP. In ODMP, there exists a single decision maker who makes a series of decisions spread out over a total of T time stages. At each time stage, the decision maker makes a decision based on information obtained up to that point without seeing into the future. The task of the decision maker is to maximize the accumulated reward while overall meeting some predetermined m-dimensional long-term goal (linking) constraints. ODMP significantly broadens the modeling framework of OLP by allowing more general feasible regions (for local and goal constraints) potentially involving both discreteness and nonlinearity in each local decision making problem. We propose a Fenchel dual-based online algorithm for ODMP. At each time stage, the proposed algorithm requires solving a potentially nonconvex optimization problem over the local feasible set and a convex optimization problem over the goal set. Under the uniform random permutation model, we show that our algorithm achieves O(\sqrt{mT}) constraint violation deterministically in meeting the long-term goals, and O(\sqrt{m\log m}\sqrt{T}) competitive difference in expected reward with respect to the optimal offline decisions. We also extend our results to the grouped random permutation model.
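
    To make the dual-based structure concrete, here is a generic dual-price skeleton in the spirit of such online algorithms, sketched under simplifying assumptions rather than the paper's Fenchel dual method: the long-term goals are modelled as sum_t g_t(x_t) <= b, each local (possibly nonconvex, here finite) problem is solved greedily against current dual prices, and the prices follow a projected subgradient step.

```python
# Sketch of a dual-price online decision rule (illustrative assumptions only).
import numpy as np

def online_decision_making(rewards, consumptions, b, T, step=0.1):
    """rewards[t][k]: reward of action k at stage t;
    consumptions[t][k]: m-dim goal/resource usage of action k at stage t;
    b: m-dim long-term budget over all T stages."""
    m = len(b)
    p = np.zeros(m)                        # dual prices for the goal constraints
    total_reward, usage = 0.0, np.zeros(m)
    for t in range(T):
        # Local problem: price-adjusted argmax over the (finite) local feasible set.
        scores = [rewards[t][k] - p @ consumptions[t][k]
                  for k in range(len(rewards[t]))]
        k_star = int(np.argmax(scores))
        total_reward += rewards[t][k_star]
        usage += consumptions[t][k_star]
        # Projected subgradient update of the dual prices toward the goals.
        p = np.maximum(0.0, p + step * (consumptions[t][k_star] - b / T))
    return total_reward, usage, p

# Toy instance: T stages, 3 candidate actions per stage, m = 2 goal constraints
rng = np.random.default_rng(1)
T, m = 50, 2
rewards = rng.uniform(0, 1, size=(T, 3))
consumptions = rng.uniform(0, 1, size=(T, 3, m))
b = np.full(m, 0.5 * T)
print(online_decision_making(rewards, consumptions, b, T))
```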