
    Using dialogue to learn math in the LeActiveMath project

    We describe a tutorial dialogue system under development that assists students in learning how to differentiate equations. The system uses deep natural language understanding and generation both to interpret students' utterances and to automatically generate a response that is mathematically correct and adapted pedagogically and linguistically to the local dialogue context. A domain reasoner provides the necessary knowledge about how students should approach math problems, as well as about the (in)correctness of their attempts, while a dialogue manager directs pedagogical strategies and keeps track of what needs to be done to keep the dialogue moving along.
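
    The abstract does not give implementation details, but the division of labour it describes (a domain reasoner that judges correctness, a dialogue manager that chooses a pedagogical response) can be illustrated with a minimal, purely hypothetical sketch; none of the names or rules below come from the LeActiveMath system itself.

```python
# Hypothetical sketch of the reasoner/manager split described above;
# all names, rules, and responses are invented for illustration.

def domain_reasoner(student_step: str) -> dict:
    """Judge a student's differentiation step.

    A real reasoner would parse the expression and apply calculus rules;
    here we only fake the interface: a correctness verdict plus,
    if the step is wrong, a hint about the expected rule.
    """
    expected = "d/dx x^2 = 2x"  # toy reference solution
    if student_step.replace(" ", "") == expected.replace(" ", ""):
        return {"correct": True, "hint": None}
    return {"correct": False,
            "hint": "Apply the power rule: d/dx x^n = n*x^(n-1)."}

def dialogue_manager(student_utterance: str) -> str:
    """Pick a pedagogically adapted response based on the reasoner's verdict."""
    verdict = domain_reasoner(student_utterance)
    if verdict["correct"]:
        return "Well done. What is the next step?"
    # A real manager would vary its strategy (hint, prompt, worked example)
    # depending on the dialogue history; this sketch always hints.
    return f"Not quite. {verdict['hint']}"

if __name__ == "__main__":
    print(dialogue_manager("d/dx x^2 = 2x"))
    print(dialogue_manager("d/dx x^2 = x"))
```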

    Towards predicting post-editing productivity

    Machine translation (MT) quality is generally measured via automatic metrics, producing scores that have no meaning for translators who are required to post-edit MT output or for project managers who have to plan and budget for translation projects. This paper investigates correlations between two such automatic metrics, general text matcher (GTM) and translation edit rate (TER), and post-editing productivity. For the purposes of this paper, productivity is measured via processing speed and cognitive measures of effort, using eye tracking as a tool. Processing speed, average fixation time and fixation count are found to correlate well with the scores for groups of segments. Segments with high GTM and TER scores require substantially less time and cognitive effort than medium- or low-scoring segments. Future research involving score thresholds and confidence estimation is suggested.
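
    The core analysis is a correlation between segment-level metric scores and productivity measures. As a minimal sketch of that kind of analysis, with invented numbers rather than the paper's data, one could compute Pearson's r between per-segment GTM scores and post-editing times:

```python
# Toy illustration of correlating an automatic MT metric with
# post-editing speed; the values below are invented, not the paper's data.
from scipy.stats import pearsonr

# Segment-level GTM scores (higher = closer to the reference translation) ...
gtm_scores = [0.85, 0.70, 0.55, 0.40, 0.25, 0.10]
# ... and observed post-editing time per segment, in seconds.
pe_seconds = [12.0, 18.5, 27.0, 35.5, 44.0, 60.5]

r, p_value = pearsonr(gtm_scores, pe_seconds)
print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")
# A strong negative r here would mean that high-scoring segments take
# less time to post-edit, mirroring the trend the abstract reports.
```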

    Survey of the State of the Art in Natural Language Generation: Core tasks, applications and evaluation

    This paper surveys the current state of the art in Natural Language Generation (NLG), defined as the task of generating text or speech from non-linguistic input. A survey of NLG is timely in view of the changes that the field has undergone over the past decade or so, especially in relation to new (usually data-driven) methods, as well as new applications of NLG technology. This survey therefore aims to (a) give an up-to-date synthesis of research on the core tasks in NLG and the architectures within which such tasks are organised; (b) highlight a number of relatively recent research topics that have arisen partly as a result of growing synergies between NLG and other areas of artificial intelligence; (c) draw attention to the challenges in NLG evaluation, relating them to similar challenges faced in other areas of Natural Language Processing, with an emphasis on different evaluation methods and the relationships between them. Comment: Published in the Journal of AI Research (JAIR), volume 61, pp. 75-170. 118 pages, 8 figures, 1 table.
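
    The "core tasks" the survey refers to are commonly decomposed into content selection, microplanning, and surface realisation. As a rough, toy illustration of that decomposition (with made-up data and rules, not drawn from the survey), a minimal data-to-text pipeline might look like this:

```python
# Toy data-to-text pipeline following the classic task decomposition
# (content selection -> microplanning -> surface realisation).
# All data, field names, and rules here are invented for illustration.

record = {"city": "Valletta", "temp_c": 29, "condition": "sunny"}

def select_content(rec):
    # Content determination: decide which fields are worth reporting.
    return [("condition", rec["condition"]), ("temperature", rec["temp_c"])]

def microplan(facts):
    # Microplanning: order the facts and decide whether to aggregate them.
    return {"aggregate": True, "facts": facts}

def realise(plan, rec):
    # Surface realisation: map the plan onto an English sentence.
    cond = dict(plan["facts"])["condition"]
    temp = dict(plan["facts"])["temperature"]
    return f"In {rec['city']} it is {cond}, with a temperature of {temp} °C."

print(realise(microplan(select_content(record)), record))
```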

    Students' drafting strategies and text quality

    The study reports an analysis of the drafts produced by two groups of students during an exam. Drafts were categorized as a function of some of their graphic features (e.g. their length) and of the different planning strategies used to produce them (e.g. note draft, organized draft, composed draft). Grades obtained by the students on their essays were related to the different categories of drafts. Results show that two thirds of both groups of students made some kind of draft. Drafts mostly consisted of note drafts or long composed drafts; very few were organized drafts. However, students who wrote these latter drafts obtained the best ratings. Drafting strategy was homogeneous for half of the students, who used a single category; the other half successively used two drafting modes, in which case they mostly associated writing with jotting down notes or with some marks of organization. Here again, students who organized their drafts, even partially, obtained the highest grades. Very few corrections were made to the long drafts, and they concerned the surface (spelling or lexicon), not the content or the plan. This research shows that only a limited number of students used an efficient drafting strategy (organized draft) even though such a strategy is generally associated with the highest ratings.

    Automatic case acquisition from texts for process-oriented case-based reasoning

    This paper introduces a method for the automatic acquisition of a rich case representation from free text for process-oriented case-based reasoning. Case engineering is among the most complicated and costly tasks in implementing a case-based reasoning system. This is especially so for process-oriented case-based reasoning, where more expressive case representations are generally used and, in our opinion, actually required for satisfactory case adaptation. In this context, the ability to acquire cases automatically from procedural texts is a major step forward in order to reason on processes. We therefore detail a methodology that makes case acquisition from processes described as free text possible, with special attention given to assembly instruction texts. This methodology extends the techniques we used to extract actions from cooking recipes. We argue that techniques taken from natural language processing are required for this task, and that they give satisfactory results. An evaluation based on our implemented prototype extracting workflows from recipe texts is provided. Comment: In press, publication expected in 201
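
    The acquisition method itself is detailed in the paper; purely as an illustration of the kind of NLP step involved, imperative verbs and their direct objects can be pulled out of procedural sentences with an off-the-shelf dependency parser. The sketch below uses spaCy and an output format of our own choosing, not the paper's case representation.

```python
# Illustrative extraction of (action, object) pairs from procedural text.
# This is not the paper's pipeline, only the general idea of turning
# instruction sentences into workflow steps.
# Requires the small English model: python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
text = "Whisk the eggs. Fold the flour into the mixture. Bake for 30 minutes."

steps = []
for sent in nlp(text).sents:
    root = sent.root  # in an imperative sentence the main verb is usually the root
    objs = [tok.text for tok in root.children if tok.dep_ == "dobj"]
    steps.append({"action": root.lemma_, "objects": objs})

for i, step in enumerate(steps, 1):
    print(i, step)
# e.g. 1 {'action': 'whisk', 'objects': ['eggs']}
```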

    Generating indicative-informative summaries with SumUM

    We present and evaluate SumUM, a text summarization system that takes a raw technical text as input and produces an indicative-informative summary. The indicative part of the summary identifies the topics of the document, and the informative part elaborates on some of these topics according to the reader's interest. SumUM motivates the topics, describes entities, and defines concepts. It is a first step for exploring the issue of dynamic summarization. This is accomplished through a process of shallow syntactic and semantic analysis, concept identification, and text regeneration. Our method was developed through the study of a corpus of abstracts written by professional abstractors. Relying on human judgment, we have evaluated indicativeness, informativeness, and text acceptability of the automatic summaries. The results thus far indicate good performance when compared with other summarization technologies.
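
    SumUM's actual pipeline involves shallow analysis, concept identification, and regeneration; as a much cruder stand-in that only illustrates the indicative/informative split, one could pick frequent topic terms as the indicative part and the sentences mentioning them as the informative part. Everything below, including the sample text, is invented.

```python
# Very rough stand-in for an indicative-informative split: frequent topic
# terms serve as the indicative part, and the sentences that mention them
# as the informative part. This is not SumUM's method, only an illustration.
from collections import Counter
import re

text = ("The prototype compresses sensor logs. Compression uses delta "
        "encoding. Delta encoding stores differences between readings. "
        "The prototype was tested on weather data.")

sentences = re.split(r"(?<=[.])\s+", text)
words = re.findall(r"[a-z]+", text.lower())
stop = {"the", "on", "was", "of", "uses", "between"}
topics = [w for w, _ in Counter(w for w in words if w not in stop).most_common(3)]

indicative = "Topics: " + ", ".join(topics)
informative = [s for s in sentences if any(t in s.lower() for t in topics)]

print(indicative)
print(" ".join(informative[:2]))  # elaborate on a couple of the topics
```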