
    Much Ado About Time: Exhaustive Annotation of Temporal Data

    Large-scale annotated datasets allow AI systems to learn from and build upon the knowledge of the crowd. Many crowdsourcing techniques have been developed for collecting image annotations. These techniques often implicitly rely on the fact that a new input image takes a negligible amount of time to perceive. In contrast, we investigate and determine the most cost-effective way of obtaining high-quality multi-label annotations for temporal data such as videos. Watching even a short 30-second video clip requires a significant time investment from a crowd worker; thus, requesting multiple annotations following a single viewing is an important cost-saving strategy. But how many questions should we ask per video? We conclude that the optimal strategy is to ask as many questions as possible in a HIT (up to 52 binary questions after watching a 30-second video clip in our experiments). We demonstrate that while workers may not correctly answer all questions, the cost-benefit analysis nevertheless favors consensus from multiple such cheap-yet-imperfect iterations over more complex alternatives. Compared with a one-question-per-video baseline, our method achieves a 10 percentage point improvement in recall (76.7% ours versus 66.7% baseline) at comparable precision (83.8% ours versus 83.0% baseline) in about half the annotation time (3.8 minutes ours compared to 7.1 minutes baseline). We further demonstrate the effectiveness of our method by collecting multi-label annotations of 157 human activities on 1,815 videos. Comment: HCOMP 2016 Camera Ready
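
    A minimal sketch of the consensus step described above, assuming per-question majority voting over several cheap-yet-imperfect annotation passes; the function and variable names are illustrative, not the paper's implementation.

    ```python
    from collections import Counter

    def consensus_labels(worker_answers):
        """Majority-vote aggregation of cheap-yet-imperfect binary annotations.

        worker_answers: one dict per worker, mapping question_id -> bool
                        (the worker's answers after a single video viewing).
        Returns a dict mapping question_id -> consensus bool.
        """
        questions = {q for answers in worker_answers for q in answers}
        consensus = {}
        for q in questions:
            votes = [answers[q] for answers in worker_answers if q in answers]
            consensus[q] = Counter(votes).most_common(1)[0][0]  # ties broken arbitrarily
        return consensus

    # Illustrative use: three workers answer the same binary questions for one clip.
    workers = [
        {"q1": True,  "q2": False, "q3": True},
        {"q1": True,  "q2": True,  "q3": False},
        {"q1": False, "q2": False, "q3": True},
    ]
    print(consensus_labels(workers))  # e.g. {'q1': True, 'q2': False, 'q3': True}
    ```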

    Working out a common task: design and evaluation of user-intelligent system collaboration

    This paper describes the design and user evaluation of an intelligent user interface intended to mediate between users and an Adaptive Information Extraction (AIE) system. The design goal was to support synergistic, cooperative work. Laboratory tests showed the approach was efficient and effective; focus groups were run to assess its ease of use. Logs, user satisfaction questionnaires, and interviews were used to investigate the interaction experience. We found that users' attitude is mainly hierarchical, with the user wishing to control and check the system's initiatives. However, when confidence in the system's capabilities rises, a more cooperative interaction is adopted.

    Addictive links: The motivational value of adaptive link annotation

    Adaptive link annotation is a popular adaptive navigation support technology. Empirical studies of adaptive annotation in the educational context have demonstrated that it can help students acquire knowledge faster, improve learning outcomes, reduce navigational overhead, and encourage non-sequential navigation. In this paper, we present our exploration of a lesser-known effect of adaptive annotation: its ability to significantly increase students' motivation to work with non-mandatory educational content. We explored this effect and confirmed its significance in the context of two different adaptive hypermedia systems. The paper presents and discusses the results of our work.
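
    The abstract does not spell out the annotation mechanism itself; as a hypothetical illustration of adaptive link annotation, the sketch below marks each link according to a learner model, for example by whether the link target's prerequisites are mastered. The learner-model structure and category names are assumptions, not the systems' actual design.

    ```python
    def annotate_link(topic, learner_model, prerequisites):
        """Pick a hypothetical annotation category for a link to `topic`.

        learner_model: set of topics the student has already mastered.
        prerequisites: dict mapping topic -> set of prerequisite topics.
        """
        if topic in learner_model:
            return "done"          # already mastered
        if prerequisites.get(topic, set()) <= learner_model:
            return "recommended"   # ready to learn: all prerequisites mastered
        return "not-ready"         # some prerequisites still missing

    mastered = {"variables", "loops"}
    prereqs = {"recursion": {"functions"}, "functions": {"variables"}}
    print(annotate_link("functions", mastered, prereqs))  # recommended
    print(annotate_link("recursion", mastered, prereqs))  # not-ready
    ```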

    Semantic Tagging on Historical Maps

    Tags assigned by users to shared content can be ambiguous. As a possible solution, we propose semantic tagging as a collaborative process in which a user selects and associates Web resources drawn from a knowledge context. We applied this general technique in the specific context of online historical maps, allowing users to annotate and tag them. To study the effects of semantic tagging on tag production, the types and categories of obtained tags, and user task load, we conducted an in-lab within-subject experiment with 24 participants who annotated and tagged two distinct maps. We found that the semantic tagging implementation does not affect these parameters, while anchoring tagging relationships to well-defined concept definitions. Compared to label-based tagging, our technique also gathers positive and negative tagging relationships. We believe that our findings carry implications for designers who want to adopt semantic tagging in other contexts and systems on the Web. Comment: 10 pages
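
    As an illustrative sketch of the tag records such a process might produce, assuming each tag links a map to a Web resource from the knowledge context and carries a positive or negative relationship; the field names and the DBpedia URI below are assumptions, not the system's actual schema.

    ```python
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class SemanticTag:
        """One tagging relationship between a historical map and a Web resource."""
        map_id: str        # the annotated historical map
        resource_uri: str  # concept drawn from a knowledge context, e.g. a DBpedia URI
        positive: bool     # True: the concept applies; False: it explicitly does not
        user_id: str       # who created the relationship

    tag = SemanticTag(
        map_id="map-1854-vienna",
        resource_uri="http://dbpedia.org/resource/Vienna",
        positive=True,
        user_id="participant-07",
    )
    print(tag)
    ```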

    A dataset of continuous affect annotations and physiological signals for emotion analysis

    From a computational viewpoint, emotions continue to be intriguingly hard to understand. In research, direct, real-time inspection in realistic settings is not possible; discrete, indirect, post-hoc recordings are therefore the norm. As a result, proper emotion assessment remains a problematic issue. The Continuously Annotated Signals of Emotion (CASE) dataset provides a solution: it focuses on real-time continuous annotation of emotions, as experienced by the participants while watching various videos. For this purpose, a novel, intuitive joystick-based annotation interface was developed that allows simultaneous reporting of valence and arousal, dimensions that are otherwise often annotated independently. In parallel, eight high-quality, synchronized physiological recordings (1000 Hz, 16-bit ADC) were made of ECG, BVP, EMG (3x), GSR (or EDA), respiration, and skin temperature. The dataset consists of the physiological and annotation data from 30 participants, 15 male and 15 female, who watched several validated video stimuli. The validity of the emotion induction, as exemplified by the annotation and physiological data, is also presented. Comment: Dataset available at: https://rmc.dlr.de/download/CASE_dataset/CASE_dataset.zip
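
    A hedged sketch of one way a user of such a dataset might align the continuous annotation stream with the 1000 Hz physiological signals; the 20 Hz annotation rate and the array layout are assumptions for illustration, not the dataset's documented format.

    ```python
    import numpy as np

    SIGNAL_HZ = 1000      # physiological sampling rate stated in the abstract
    ANNOTATION_HZ = 20    # assumed joystick sampling rate (illustrative only)

    def upsample_annotation(annotation, signal_len):
        """Linearly interpolate a continuous annotation (e.g. valence) onto
        the 1000 Hz time base of a physiological signal."""
        t_annot = np.arange(len(annotation)) / ANNOTATION_HZ
        t_signal = np.arange(signal_len) / SIGNAL_HZ
        return np.interp(t_signal, t_annot, annotation)

    # Illustrative use: 10 s of ECG at 1000 Hz, valence sampled at 20 Hz.
    ecg = np.random.randn(10 * SIGNAL_HZ)
    valence = np.sin(np.linspace(0, np.pi, 10 * ANNOTATION_HZ))
    valence_1khz = upsample_annotation(valence, len(ecg))
    assert valence_1khz.shape == ecg.shape
    ```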

    The Impact of Concept Representation in Interactive Concept Validation (ICV)

    Large-scale ideation has developed as a promising new way of obtaining large numbers of highly diverse ideas for a given challenge. However, due to the scale of these challenges, algorithmic support based on a computational understanding of the ideas is a crucial component of these systems. One promising solution is the use of knowledge graphs to provide meaning. A significant obstacle lies in word-sense disambiguation, which cannot be solved by automatic approaches alone. In previous work, we introduced Interactive Concept Validation (ICV) as an approach that enables ideators to disambiguate the terms used in their ideas. To test the impact of different ways of representing concepts (should we show images of concepts, or only explanatory texts?), we conducted experiments comparing three representations. The results show that while the impact on ideation metrics was marginal, time and click effort was lowest in the images-only condition, while data quality was highest in the condition showing both images and text.
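
    The ICV interface itself is not detailed here; the sketch below illustrates one plausible shape of the validation step, in which candidate senses of an ambiguous term are drawn from a knowledge graph and shown under one of the three representation conditions. The prompt structure and field names are hypothetical.

    ```python
    def build_validation_prompt(term, candidate_senses, representation="both"):
        """Assemble a hypothetical ICV prompt for one ambiguous term.

        candidate_senses: list of dicts with 'id', 'gloss', and optional
                          'image_url', e.g. drawn from a knowledge graph.
        representation: 'images', 'text', or 'both' (the compared conditions).
        """
        prompt = {"term": term, "options": []}
        for sense in candidate_senses:
            option = {"id": sense["id"]}
            if representation in ("text", "both"):
                option["gloss"] = sense["gloss"]
            if representation in ("images", "both") and sense.get("image_url"):
                option["image"] = sense["image_url"]
            prompt["options"].append(option)
        return prompt

    senses = [
        {"id": "bank#1", "gloss": "a financial institution", "image_url": "bank.jpg"},
        {"id": "bank#2", "gloss": "sloping land beside a river"},
    ]
    print(build_validation_prompt("bank", senses, representation="images"))
    ```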

    Empirical Methodology for Crowdsourcing Ground Truth

    The process of gathering ground truth data through human annotation is a major bottleneck in the use of information extraction methods for populating the Semantic Web. Crowdsourcing-based approaches are gaining popularity as an attempt to solve the issues related to the volume of data and the lack of annotators. Typically, these practices use inter-annotator agreement as a measure of quality. However, in many domains, such as event detection, there is ambiguity in the data as well as a multitude of perspectives on the information examples. We present an empirically derived methodology for efficiently gathering ground truth data in a diverse set of use cases covering a variety of domains and annotation tasks. Central to our approach is the use of CrowdTruth metrics that capture inter-annotator disagreement. We show that measuring disagreement is essential for acquiring a high-quality ground truth. We achieve this by comparing the quality of the data aggregated with CrowdTruth metrics against majority vote, over a set of diverse crowdsourcing tasks: Medical Relation Extraction, Twitter Event Identification, News Event Extraction, and Sound Interpretation. We also show that an increased number of crowd workers leads to growth and stabilization in the quality of annotations, going against the usual practice of employing a small number of annotators. Comment: in publication at the Semantic Web Journal
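
    As a toy rendering of the disagreement idea (not the full CrowdTruth metric definitions), the sketch below scores each candidate annotation by the cosine between its indicator vector and the unit's aggregated worker vector, yielding a graded score where majority vote would give an all-or-nothing outcome.

    ```python
    import numpy as np

    def unit_annotation_scores(worker_vectors):
        """Simplified CrowdTruth-style unit-annotation scores.

        worker_vectors: 2-D array, one row per worker, one column per candidate
                        annotation; entry 1 if the worker selected that annotation.
        Returns one graded score per annotation: the cosine between that
        annotation's one-hot vector and the unit's aggregate vector.
        """
        v = np.asarray(worker_vectors, dtype=float)
        media_unit_vector = v.sum(axis=0)          # aggregate over workers
        norm = np.linalg.norm(media_unit_vector)
        return media_unit_vector / norm if norm else media_unit_vector

    # Five workers, three candidate relations for one sentence:
    votes = [
        [1, 0, 0],
        [1, 1, 0],
        [1, 0, 0],
        [0, 1, 0],
        [1, 0, 0],
    ]
    print(unit_annotation_scores(votes))  # ≈ [0.894, 0.447, 0.0]
    ```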