    Comprehension and trust in crises: investigating the impact of machine translation and post-editing

    We conducted a survey to understand the impact of machine translation and post-editing awareness on comprehension of and trust in messages disseminated to prepare the public for a weather-related crisis, i.e. flooding. The translation direction was English–Italian. Sixty-one participants, all native Italian speakers with varying levels of English proficiency, answered our survey. Each participant read and evaluated between three and six crisis messages using ratings and open-ended questions on comprehensibility and trust. The messages were in English and Italian. All the Italian messages had been machine translated and post-edited; nevertheless, participants were told that only half had been post-edited, so that we could test the impact of post-editing awareness. We could not draw firm conclusions when comparing the scores for trust and comprehensibility assigned to the three types of messages (English, post-edits, and purported raw outputs). However, when scores were triangulated with open-ended answers, stronger patterns emerged, such as the impact of the translations' fluency on their comprehensibility and trustworthiness. We found correlations between comprehensibility and trustworthiness, and identified other factors influencing these aspects, such as the clarity and soundness of the messages. We conclude by outlining implications for crisis preparedness, limitations, and areas for future research.
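
    The abstract reports a correlation between comprehensibility and trustworthiness ratings. As a minimal sketch of that kind of analysis, the snippet below computes a Spearman rank correlation, a common choice for ordinal rating data; the rating values and the 1-5 scale are hypothetical placeholders, not the study's data.

        # Hypothetical per-message ratings on an assumed 1-5 scale.
        # Spearman's rho is used because the ratings are ordinal.
        # Requires scipy (pip install scipy).
        from scipy.stats import spearmanr

        comprehensibility = [5, 4, 4, 2, 5, 3, 1, 4]
        trustworthiness   = [4, 4, 5, 2, 5, 3, 2, 3]

        rho, p_value = spearmanr(comprehensibility, trustworthiness)
        print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")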

    RACE: Large-scale ReAding Comprehension Dataset From Examinations

    We present RACE, a new dataset for benchmark evaluation of reading comprehension methods. Collected from English exams for Chinese middle and high school students aged 12 to 18, RACE consists of nearly 28,000 passages and nearly 100,000 questions generated by human experts (English instructors), and covers a variety of topics carefully designed to evaluate students' ability in understanding and reasoning. In particular, the proportion of questions that require reasoning is much larger in RACE than in other benchmark datasets for reading comprehension, and there is a significant gap between the performance of state-of-the-art models (43%) and ceiling human performance (95%). We hope this new dataset can serve as a valuable resource for research and evaluation in machine comprehension. The dataset is freely available at http://www.cs.cmu.edu/~glai1/data/race/ and the code is available at https://github.com/qizhex/RACE_AR_baselines. (EMNLP 2017)
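
    For readers who want to inspect the data programmatically, here is a minimal loading sketch. It assumes each RACE example is stored as a JSON record with "article", "questions", "options", and "answers" fields, with answers given as letters "A" through "D"; this matches the commonly distributed files but is not stated in the abstract, and the paths in the usage comment are hypothetical.

        # Iterate over RACE examples under the assumed JSON layout described above.
        import json
        from pathlib import Path

        def iter_race_examples(split_dir):
            """Yield (article, question, options, answer_index) tuples."""
            for path in sorted(Path(split_dir).glob("*")):
                record = json.loads(path.read_text(encoding="utf-8"))
                for question, options, answer in zip(
                    record["questions"], record["options"], record["answers"]
                ):
                    # Answer keys are "A"-"D"; convert to a 0-based index.
                    yield record["article"], question, options, ord(answer) - ord("A")

        # Hypothetical usage:
        # for article, question, options, idx in iter_race_examples("RACE/train/high"):
        #     print(question, "->", options[idx])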