
    Putting the Horse Before the Cart: A Generator-Evaluator Framework for Question Generation from Text

    Automatic question generation (QG) is a useful yet challenging task in NLP. Recent neural network-based approaches represent the state of the art in this task. In this work, we strengthen them significantly by adopting a holistic and novel generator-evaluator framework that directly optimizes objectives rewarding semantics and structure. The generator is a sequence-to-sequence model that incorporates the structure and semantics of the question being generated. It predicts an answer in the passage that the question can pivot on and, through copy and coverage mechanisms, attends to other contextually important (and possibly rare) keywords in the passage that the question must conform to, while avoiding redundant repetition. The evaluator model assigns a reward to each predicted question based on its conformity to the structure of ground-truth questions. We propose two novel QG-specific reward functions for text conformity and answer conformity of the generated question. The evaluator also employs structure-sensitive rewards based on evaluation measures such as BLEU, GLEU, and ROUGE-L, which are suitable for QG. In contrast, most previous work optimizes only the cross-entropy loss, which can induce inconsistencies between the training objective and the testing (evaluation) measures. Our evaluation shows that our approach significantly outperforms state-of-the-art systems on the widely used SQuAD benchmark under both automatic and human evaluation.
    Comment: 10 pages, The SIGNLL Conference on Computational Natural Language Learning (CoNLL 2019)
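    The evaluator's structure-sensitive rewards can be sketched with off-the-shelf metric implementations. The sketch below scores a generated question against a ground-truth question using BLEU and GLEU; the equal weighting, the function name, and the omission of ROUGE-L and the two QG-specific conformity rewards are simplifying assumptions, not the authors' exact formulation.

```python
# Minimal sketch of an evaluator-style reward: score a generated question
# by its structural conformity to the ground-truth question.
# Weights and function names are illustrative assumptions.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from nltk.translate.gleu_score import sentence_gleu

def evaluator_reward(generated: str, reference: str,
                     bleu_weight: float = 0.5,
                     gleu_weight: float = 0.5) -> float:
    """Reward a predicted question by its conformity to the reference."""
    hyp = generated.lower().split()
    refs = [reference.lower().split()]
    bleu = sentence_bleu(refs, hyp,
                         smoothing_function=SmoothingFunction().method1)
    gleu = sentence_gleu(refs, hyp)
    return bleu_weight * bleu + gleu_weight * gleu

# Higher reward signals closer structural conformity to the ground truth.
print(evaluator_reward("what did the generator predict ?",
                       "what does the generator predict ?"))
```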

    Controllable Neural Story Plot Generation via Reinforcement Learning

    Language-model-based approaches to story plot generation attempt to construct a plot by sampling from a language model (LM) to predict the next character, word, or sentence to add to the story. LM techniques lack the ability to receive guidance from the user toward a specific goal, resulting in stories without a clear sense of progression and lacking coherence. We present a reward-shaping technique that analyzes a story corpus and produces intermediate rewards that are backpropagated into a pre-trained LM in order to guide the model towards a given goal. Automated evaluations show our technique can create a model that generates story plots which consistently achieve a specified goal. Human-subject studies show that the generated stories have more plausible event ordering than baseline plot generation techniques.
    Comment: Published in IJCAI 2019
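    The core training idea can be illustrated with a REINFORCE-style policy-gradient sketch. Everything below is an assumption for illustration: a toy stand-in for the pre-trained LM and a placeholder `distance_to_goal` heuristic in place of the paper's corpus-derived intermediate rewards.

```python
# Minimal sketch of reward-shaped fine-tuning: sample a continuation from
# an LM, score each step with an intermediate reward, and apply a
# REINFORCE-style policy-gradient update toward the goal.
import torch

def distance_to_goal(token_id: int, goal_id: int, vocab: int) -> float:
    """Toy shaped reward: higher when the sampled token nears the goal.
    A stand-in assumption for the paper's corpus-derived rewards."""
    return 1.0 - abs(token_id - goal_id) / vocab

vocab, hidden, goal_id = 100, 32, 7
lm = torch.nn.Sequential(torch.nn.Embedding(vocab, hidden),
                         torch.nn.Linear(hidden, vocab))  # stand-in "pre-trained" LM
opt = torch.optim.Adam(lm.parameters(), lr=1e-3)

token = torch.tensor([0])
log_probs, rewards = [], []
for _ in range(10):  # sample a short event sequence
    logits = lm(token).squeeze(0)
    dist = torch.distributions.Categorical(logits=logits)
    token = dist.sample().unsqueeze(0)
    log_probs.append(dist.log_prob(token.squeeze()))
    rewards.append(distance_to_goal(token.item(), goal_id, vocab))

# Shaped rewards weight each sampled step; the gradient pushes the LM
# toward trajectories that approach the goal event.
loss = -(torch.stack(log_probs) * torch.tensor(rewards)).sum()
opt.zero_grad()
loss.backward()
opt.step()
```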

    Human Resource Management, Service Quality, and Economic Performance in Call Centers

    This paper examines the relationship between human resource practices, operational outcomes, and economic performance in call centers. The study draws on a sample of 64 call centers serving the mass market in a large telecommunications services company. Surveys of 1,243 employees in the 64 centers were aggregated to the call-center level and matched to archival data on service process quality (as measured by customer surveys), call handling time, revenues per call, and net revenues per call. Our path analysis shows that human resource practices emphasizing employee training, discretion, and rewards lead to higher service quality, higher revenues per call, and higher net revenues per call. In addition, service quality mediates the relationship between human resource practices and these economic outcomes. There is no significant relationship between HR practices and labor efficiency, as measured by call handling time, and labor efficiency is inversely related to revenue generation.
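    The mediation structure the path analysis tests (HR practices -> service quality -> revenue per call) can be sketched as two regressions. The synthetic data, coefficients, and variable names below are illustrative assumptions, not the study's data or estimates.

```python
# Minimal sketch of the mediation logic behind the path analysis:
# HR practices -> service quality -> revenue per call.
# Synthetic data; all names and effect sizes are assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 64  # one row per call center
hr = rng.normal(size=n)                       # HR practice index
quality = 0.6 * hr + rng.normal(size=n)       # service quality
revenue = 0.5 * quality + 0.1 * hr + rng.normal(size=n)

# Path a: HR practices -> service quality
a = sm.OLS(quality, sm.add_constant(hr)).fit().params[1]
# Paths b and c': revenue regressed on quality and HR jointly
model = sm.OLS(revenue, sm.add_constant(np.column_stack([quality, hr]))).fit()
b, c_prime = model.params[1], model.params[2]

# Mediation holds when the indirect effect a*b is substantial
# relative to the direct effect c'.
print(f"indirect effect (a*b): {a * b:.3f}, direct effect (c'): {c_prime:.3f}")
```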