
    The use of implicit evidence for relevance feedback in web retrieval

    In this paper we report on the application of two contrasting types of relevance feedback for web retrieval. We compare two systems: one using explicit relevance feedback (where searchers must explicitly mark documents as relevant) and one using implicit relevance feedback (where the system endeavours to estimate relevance by mining the searcher's interaction). The feedback is used to update the display according to the user's interaction. Our research focuses on the degree to which implicit evidence of document relevance can be substituted for explicit evidence. We examine the two variations in terms of both user opinion and search effectiveness.
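    The abstract does not specify how the implicit system scores relevance, but the idea of mining interaction evidence can be sketched as follows. The signals (dwell time, clicks, scrolling) and their weights below are hypothetical illustrations, not the paper's actual model:

    ```python
    # Hypothetical sketch: estimate document relevance from interaction
    # evidence and re-rank the result list. Signal names and weights are
    # assumptions for illustration only.

    def implicit_relevance(dwell_seconds, clicked, scrolled):
        """Combine interaction signals into an implicit relevance score."""
        score = 0.0
        if clicked:
            score += 0.5                           # a click is weak positive evidence
        if scrolled:
            score += 0.2                           # scrolling suggests engagement
        score += min(dwell_seconds, 30) / 30 * 0.3  # cap the dwell-time contribution
        return score

    def rerank(results, interactions):
        """Re-order results by implicit relevance; unseen documents score 0."""
        scored = [(implicit_relevance(*interactions.get(doc, (0, False, False))), doc)
                  for doc in results]
        return [doc for _, doc in sorted(scored, key=lambda pair: -pair[0])]

    docs = ["d1", "d2", "d3"]
    observed = {"d2": (25, True, True), "d3": (5, True, False)}
    print(rerank(docs, observed))  # ['d2', 'd3', 'd1']
    ```

    An explicit-feedback system would replace `implicit_relevance` with the searcher's own relevance marks; the re-ranking step stays the same.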

    Capturing Ambiguity in Crowdsourcing Frame Disambiguation

    FrameNet is a computational linguistics resource composed of semantic frames, high-level concepts that represent the meanings of words. In this paper, we present an approach to gather frame disambiguation annotations in sentences using a crowdsourcing approach with multiple workers per sentence to capture inter-annotator disagreement. We perform an experiment over a set of 433 sentences annotated with frames from the FrameNet corpus, and show that the aggregated crowd annotations achieve an F1 score greater than 0.67 as compared to expert linguists. We highlight cases where the crowd annotation was correct even though the expert disagreed, arguing for the need to have multiple annotators per sentence. Most importantly, we examine cases in which crowd workers could not agree, and demonstrate that these cases exhibit ambiguity, either in the sentence, frame, or the task itself, and argue that collapsing such cases to a single, discrete truth value (i.e. correct or incorrect) is inappropriate, creating arbitrary targets for machine learning. Comment: in publication at the sixth AAAI Conference on Human Computation and Crowdsourcing (HCOMP) 2018.
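    The evaluation described above can be illustrated with a minimal sketch. The aggregation rule here is a simple majority vote per sentence, which is an assumption (the paper's actual aggregation may differ), and the frame labels are invented examples:

    ```python
    # Sketch: aggregate per-sentence crowd frame labels by majority vote,
    # then compute F1 against expert labels. Aggregation rule and example
    # labels are illustrative assumptions, not the paper's exact method.
    from collections import Counter

    def majority_vote(labels):
        """Pick the most frequent frame label from multiple workers."""
        return Counter(labels).most_common(1)[0][0]

    def f1_against_expert(crowd, expert):
        """Micro F1 treating each sentence's frame choice as correct/incorrect."""
        tp = sum(1 for s in expert if crowd.get(s) == expert[s])
        fp = sum(1 for s in crowd if crowd[s] != expert.get(s))
        fn = sum(1 for s in expert if crowd.get(s) != expert[s])
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        return (2 * precision * recall / (precision + recall)
                if precision + recall else 0.0)

    workers = {"s1": ["Arriving", "Arriving", "Motion"],
               "s2": ["Motion", "Motion", "Motion"]}
    crowd = {s: majority_vote(v) for s, v in workers.items()}
    expert = {"s1": "Arriving", "s2": "Travel"}
    print(round(f1_against_expert(crowd, expert), 2))  # 0.5
    ```

    The paper's central point is that forcing a single majority label on sentences where workers split evenly (genuine ambiguity) makes this binary correct/incorrect target arbitrary.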

    Using wikis for online group projects: student and tutor perspectives

    This paper presents a study of the use of wikis to support online group projects in two courses at the UK Open University. The research aimed to investigate the effectiveness of a wiki in supporting (i) student collaboration and (ii) tutors’ marking of the students’ collaborative work. The paper uses the main factors previously identified by the technology acceptance model (TAM) as a starting point to examine and discuss the experiences of these two very different user groups: students and tutors. Data was gathered from students via a survey and from tutors via a range of methods. The findings suggest that, when used in tandem with an online forum, the wiki was a valuable tool for groups of students developing a shared resource. As previous studies using the TAM have shown, usefulness and ease of use were both important to students’ acceptance of the wiki. However, the use of a wiki in this context was less well received by tutors, because it led to an increase in their workload in assessing the quality of students’ collaborative processes. It was possible to reduce the tutor workload by introducing a greater degree of structure in the students’ tasks. We conclude that when introducing collaborative technologies to support assessed group projects, the perceptions and needs of both students and tutors should be carefully considered.

    Survey of the State of the Art in Natural Language Generation: Core tasks, applications and evaluation

    This paper surveys the current state of the art in Natural Language Generation (NLG), defined as the task of generating text or speech from non-linguistic input. A survey of NLG is timely in view of the changes that the field has undergone over the past decade or so, especially in relation to new (usually data-driven) methods, as well as new applications of NLG technology. This survey therefore aims to (a) give an up-to-date synthesis of research on the core tasks in NLG and the architectures in which such tasks are organised; (b) highlight a number of relatively recent research topics that have arisen partly as a result of growing synergies between NLG and other areas of artificial intelligence; (c) draw attention to the challenges in NLG evaluation, relating them to similar challenges faced in other areas of Natural Language Processing, with an emphasis on different evaluation methods and the relationships between them. Comment: Published in Journal of AI Research (JAIR), volume 61, pp 75-170. 118 pages, 8 figures, 1 table.