
    SemEval-2016 Task 13: Taxonomy Extraction Evaluation (TExEval-2)

    This paper describes the second edition of the shared task on Taxonomy Extraction Evaluation (TExEval-2), organised as part of SemEval 2016. The task is to extract hypernym-hyponym relations between terms in a given list of domain-specific terms and then to construct a domain taxonomy from these relations. TExEval-2 introduced a multilingual setting, covering four languages (English, Dutch, Italian and French) and domains as diverse as environment, food and science. A total of 62 runs submitted by 5 different teams were evaluated using structural measures, by comparison with gold-standard taxonomies, and by manual quality assessment of novel relations. Supported by Science Foundation Ireland (SFI) under Grant Number SFI/12/RC/2289 (INSIGHT).
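    The pipeline the abstract describes (collect hypernym-hyponym pairs, assemble them into a taxonomy, compare against a gold standard) can be sketched as follows. The example pairs and the edge precision/recall metric are illustrative assumptions, not the official TExEval-2 data or scorer.

    ```python
    # Hypothetical sketch: build a taxonomy from extracted hypernym pairs
    # and score it against a gold standard. All data here is made up.

    def build_taxonomy(pairs):
        """Build a hyponym -> set(hypernyms) mapping from (hyponym, hypernym) pairs."""
        taxonomy = {}
        for hypo, hyper in pairs:
            taxonomy.setdefault(hypo, set()).add(hyper)
        return taxonomy

    def edge_precision_recall(predicted, gold):
        """Compare two collections of (hyponym, hypernym) edges."""
        predicted, gold = set(predicted), set(gold)
        overlap = len(predicted & gold)
        precision = overlap / len(predicted) if predicted else 0.0
        recall = overlap / len(gold) if gold else 0.0
        return precision, recall

    predicted = [("apple", "fruit"), ("fruit", "food"), ("carrot", "fruit")]
    gold = [("apple", "fruit"), ("fruit", "food"),
            ("carrot", "vegetable"), ("vegetable", "food")]
    p, r = edge_precision_recall(predicted, gold)
    print(p, r)  # 2 of 3 predicted edges also appear in the gold taxonomy
    ```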

    Mind the Gap: From Desktop to App

    In this article we present a new mobile game, edugames4all MicrobeQuest!, that covers core learning objectives from the European curriculum on microbe transmission, food and hand hygiene, and responsible antibiotic use. The game is aimed at 9 to 12 year olds and is based on the desktop version of the edugames4all platform games. We discuss the challenges and lessons learned in transitioning from a desktop-based game to a mobile app. We also present the seamless evaluation obtained by integrating the assessment of the educational impact of the game into the game mechanics.

    Usability Evaluation in Virtual Environments: Classification and Comparison of Methods

    Virtual environments (VEs) are a relatively new type of human-computer interface in which users perceive and act in a three-dimensional world. The designers of such systems cannot rely solely on design guidelines for traditional two-dimensional interfaces, so usability evaluation is crucial for VEs. We present an overview of VE usability evaluation. First, we discuss some of the issues that differentiate VE usability evaluation from evaluation of traditional user interfaces such as GUIs. We also present a review of VE evaluation methods currently in use, and discuss a simple classification space for VE usability evaluation methods. This classification space provides a structured means for comparing evaluation methods according to three key characteristics: involvement of representative users, context of evaluation, and types of results produced. To illustrate these concepts, we compare two existing evaluation approaches: testbed evaluation [Bowman, Johnson, & Hodges, 1999], and sequential evaluation [Gabbard, Hix, & Swan, 1999]. We conclude by presenting novel ways to effectively link these two approaches to VE usability evaluation.

    Driving tasks and new information technologies


    Evaluation of two interaction techniques for visualization of dynamic graphs

    Several techniques for the visualization of dynamic graphs are based on different spatial arrangements of a temporal sequence of node-link diagrams. Many studies in the literature have investigated the importance of maintaining the user's mental map across this temporal sequence, but usually each layout is considered as a static graph drawing and the effect of user interaction is disregarded. We conducted a task-based controlled experiment to assess the effectiveness of two basic interaction techniques: the adjustment of layout stability and the highlighting of adjacent nodes and edges. We found that both interaction techniques generally increase accuracy, sometimes at the cost of longer completion times, and that highlighting outperforms the stability adjustment for many tasks, except the most complex ones. Comment: Appears in the Proceedings of the 24th International Symposium on Graph Drawing and Network Visualization (GD 2016).
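    The "layout stability" adjustment mentioned above can be thought of as blending each node's freshly computed layout position with its position in the previous time step. A minimal sketch, assuming a dictionary-based position format; this is an illustration of the general idea, not the authors' implementation:

    ```python
    # Hedged sketch of a layout-stability slider for dynamic graph drawing.
    # stability = 1.0 freezes nodes at their previous positions;
    # stability = 0.0 ignores history and uses the new layout as-is.

    def stabilize_layout(new_pos, prev_pos, stability):
        """Blend each node's new layout position with its previous position."""
        blended = {}
        for node, (x, y) in new_pos.items():
            if node in prev_pos:
                px, py = prev_pos[node]
                blended[node] = (stability * px + (1 - stability) * x,
                                 stability * py + (1 - stability) * y)
            else:
                blended[node] = (x, y)  # newly appearing nodes take the new layout
        return blended

    prev = {"a": (0.0, 0.0), "b": (1.0, 1.0)}
    new = {"a": (2.0, 0.0), "b": (1.0, 3.0), "c": (5.0, 5.0)}
    print(stabilize_layout(new, prev, 0.5))
    ```

    Exposing `stability` as a user-adjustable slider is one plausible way to realise the interaction technique the experiment studies.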

    Evaluation criteria of software visualization systems used for program comprehension

    Program understanding is usually very time- and effort-consuming. Traditionally, the code is inspected line by line by the user without any kind of help, but this becomes impossible for larger systems. Some software systems were created to automatically generate explanations, metrics, statistics and visualizations that describe the syntax and semantics of programs. Tools of this kind are called Program Comprehension Systems, and one of their most important features is software visualization. We feel it would be very useful to define criteria for evaluating visualization systems that are used for program comprehension. The main objective of this paper is to present a set of parameters to characterize Program Comprehension-Oriented Software Visualization Systems. We also propose new parameters to improve the current taxonomies in order to cover the visualization of the Problem Domain.

    Mapping Tasks to Interactions for Graph Exploration and Graph Editing on Interactive Surfaces

    Graph exploration and editing are still mostly considered independently, and existing systems are not designed for today's interactive surfaces such as smartphones, tablets, or tabletops. When developing a system for these modern devices that supports both graph exploration and graph editing, it is necessary to 1) identify which basic tasks need to be supported, 2) determine which interactions can be used, and 3) decide how to map these tasks to interactions. This technical report provides a list of basic interaction tasks for graph exploration and editing as the result of an extensive system review. Moreover, different interaction modalities of interactive surfaces are reviewed according to their interaction vocabulary, and further degrees of freedom that can be used to make interactions distinguishable are discussed. Beyond the scope of graph exploration and editing, we provide a generally applicable approach for finding and evaluating a mapping from tasks to interactions. Thus, this work acts as a guideline for developing a system for graph exploration and editing that is specifically designed for interactive surfaces. Comment: 21 pages, minor corrections (typos etc.).
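    A task-to-interaction mapping of the kind the report describes can be represented as a simple table, together with a check that every required task is covered and that no two tasks share a gesture (i.e. the interactions stay distinguishable). The task and gesture names below are assumptions for illustration, not the report's actual mapping:

    ```python
    # Illustrative task -> interaction table for a touch surface.
    # All names are hypothetical examples, not taken from the report.
    TASK_TO_INTERACTION = {
        "select node": "tap on node",
        "pan view": "one-finger drag on background",
        "zoom": "pinch",
        "create edge": "drag from node to node",
        "delete node": "long-press node, then tap delete",
    }

    def check_mapping(mapping, required_tasks):
        """Return (unmapped tasks, pairs of tasks that share an interaction)."""
        unmapped = [t for t in required_tasks if t not in mapping]
        seen = {}
        clashes = []
        for task, interaction in mapping.items():
            if interaction in seen:
                clashes.append((seen[interaction], task))
            seen[interaction] = task
        return unmapped, clashes

    unmapped, clashes = check_mapping(
        TASK_TO_INTERACTION, ["select node", "zoom", "rotate view"])
    print(unmapped, clashes)  # "rotate view" has no gesture assigned; no clashes
    ```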

    Human performance prediction in man-machine systems. Volume 1 - A technical review

    Tests and test techniques for human performance prediction in man-machine systems tasks.