
    Supporting Online Customer Feedback Management with Automatic Review Response Generation

    The growing number of online reviews plays a significant role in a business's image and performance. Businesses in the hospitality industry often lack the resources needed to organize and manage online customer feedback and are therefore likely to search for alternative ways to handle it. AI-based technologies may offer valuable solutions. However, there is currently little research on whether and how AI solutions can support the process of responding to online customer feedback in the hospitality industry. This paper presents and evaluates a concept for assisting customer feedback management with automatically generated responses to online reviews. Our solution contributes to ongoing investigations into text generation applications for supporting human authors and also proposes new approaches and potential business models for managing online customer feedback.

    Generating and Evaluating Tests for K-12 Students with Language Model Simulations: A Case Study on Sentence Reading Efficiency

    Developing an educational test can be expensive and time-consuming, as each item must be written by experts and then evaluated by collecting hundreds of student responses. Moreover, many tests require multiple distinct sets of questions administered throughout the school year to closely monitor students' progress, known as parallel tests. In this study, we focus on tests of silent sentence reading efficiency, used to assess students' reading ability over time. To generate high-quality parallel tests, we propose to fine-tune large language models (LLMs) to simulate how previous students would have responded to unseen items. With these simulated responses, we can estimate each item's difficulty and ambiguity. We first use GPT-4 to generate new test items following a list of expert-developed rules and then apply a fine-tuned LLM to filter the items based on criteria from psychological measurements. We also propose an optimal-transport-inspired technique for generating parallel tests and show the generated tests closely correspond to the original test's difficulty and reliability based on crowdworker responses. Our evaluation of a generated test with 234 students from grades 2 to 8 produces test scores highly correlated (r=0.93) to those of a standard test form written by human experts and evaluated across thousands of K-12 students. Comment: Accepted to EMNLP 2023 (Main)
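    The pipeline the abstract describes — simulated student responses yielding per-item difficulty estimates, then difficulty-matched item assignment for parallel forms — can be sketched minimally. All function names and data below are illustrative assumptions, not the paper's actual code; the greedy rank-by-rank pairing stands in for the paper's optimal-transport-inspired assignment.

    ```python
    # Hypothetical sketch of the two steps named in the abstract:
    # (1) estimate item difficulty from simulated 0/1 responses,
    # (2) pair two item pools into parallel forms by matched difficulty.

    def item_difficulty(responses):
        """Proportion of incorrect responses (higher = harder item)."""
        return 1 - sum(responses) / len(responses)

    def pair_parallel_forms(difficulties_a, difficulties_b):
        """Greedy stand-in for an optimal-transport-style assignment:
        sort both item pools by difficulty and pair them rank-by-rank,
        so each form gets items of comparable difficulty."""
        order_a = sorted(range(len(difficulties_a)), key=lambda i: difficulties_a[i])
        order_b = sorted(range(len(difficulties_b)), key=lambda j: difficulties_b[j])
        return list(zip(order_a, order_b))

    # Simulated responses (1 = correct) from four simulated students, three items
    sim = [[1, 1, 0, 1], [0, 0, 1, 0], [1, 0, 1, 1]]
    diffs = [item_difficulty(r) for r in sim]
    print(diffs)  # [0.25, 0.75, 0.25]
    ```

    In the paper, the simulated responses come from a fine-tuned LLM rather than real students; the point of the sketch is only that once responses exist, difficulty estimation and form matching are straightforward aggregations over them.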