    Double marking revisited

    In 2002, the Qualifications and Curriculum Authority (QCA) published the report of an independent panel of experts into maintaining standards at Advanced Level (A-Level). One of its recommendations was for: ‘limited experimental double marking of scripts in subjects such as English to determine whether the strategy would significantly reduce errors of measurement’ (p. 24). This recommendation provided the impetus for this paper, which reviews the all but forgotten literature on double marking and considers its relevance now.
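The statistical rationale for double marking can be illustrated with a small simulation. This is a minimal sketch, not taken from the paper: it assumes each marker's mark is the true score plus independent Gaussian error, in which case averaging two markers shrinks the error standard deviation by a factor of roughly √2. The function name and parameter values are illustrative.

```python
import random
import statistics

def simulate_marking(n_scripts=10_000, marker_sd=4.0, seed=0):
    """Simulate single vs. double (averaged) marking of exam scripts.

    Assumes each marker's mark is the true score plus independent
    Gaussian error, so averaging two independent marks should reduce
    the error standard deviation by about a factor of sqrt(2).
    """
    rng = random.Random(seed)
    single_errors, double_errors = [], []
    for _ in range(n_scripts):
        true_score = rng.uniform(30, 70)
        mark_a = true_score + rng.gauss(0, marker_sd)
        mark_b = true_score + rng.gauss(0, marker_sd)
        single_errors.append(mark_a - true_score)
        double_errors.append((mark_a + mark_b) / 2 - true_score)
    return statistics.stdev(single_errors), statistics.stdev(double_errors)

single_sd, double_sd = simulate_marking()
print(f"single-marker error SD: {single_sd:.2f}")
print(f"double-marker error SD: {double_sd:.2f}")  # roughly single_sd / sqrt(2)
```

Whether this idealised gain survives in practice, given correlated marker errors and marking cost, is exactly the question the reviewed literature addresses.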

    Predicting the understandability of OWL inferences

    In this paper, we describe a method for predicting the understandability level of inferences with OWL. Specifically, we present a model for measuring the understandability of a multiple-step inference based on the measurement of the understandability of individual inference steps. We also present an evaluation study which confirms that our model works relatively well for two-step inferences with OWL. This model has been applied in our research on generating accessible explanations for an entailment of OWL ontologies, to determine the most understandable inference among alternatives, from which the final explanation is generated.
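The general shape of such a model can be sketched in a few lines. This is a hypothetical illustration, not the paper's actual model: it assumes each inference step type carries an empirically derived "understanding probability", that steps are independent so the whole-proof score is the product of step scores, and it uses made-up step names and values.

```python
from math import prod

# Hypothetical per-step understanding probabilities; in the paper's
# approach these would come from empirical studies, and these names
# and numbers are invented for illustration.
STEP_UNDERSTANDABILITY = {
    "SubClassOf-transitivity": 0.95,
    "EquivalentClasses-unfolding": 0.80,
    "DisjointClasses-contradiction": 0.60,
}

def inference_understandability(steps):
    """Score a multi-step inference from its individual steps.

    Assumes steps are understood independently, so the probability of
    following the whole proof is the product of per-step probabilities.
    """
    return prod(STEP_UNDERSTANDABILITY[s] for s in steps)

def most_understandable(candidate_proofs):
    """Pick the candidate proof predicted to be easiest to follow,
    e.g. to choose which justification to verbalise as an explanation."""
    return max(candidate_proofs, key=inference_understandability)
```

Given alternative justifications for the same entailment, `most_understandable` mirrors the selection step the abstract describes: choosing the inference from which the final explanation is generated.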

    The usability of description logics: understanding the cognitive difficulties presented by description logics

    Description Logics have been extensively studied from the viewpoint of decidability and computational tractability. Less attention has been given to their usability and the cognitive difficulties they present, in particular for those who are not specialists in logic. This paper reports on a study into the difficulties associated with the most commonly used Description Logic features. Psychological theories are used to account for these difficulties. Whilst most of the features presented no difficulty to participants, the comprehension of some was affected by commonly occurring misconceptions. The paper proposes explanations and remedies for some of these difficulties. In addition, the time to confirm stated inferences was found to depend both on the maximum complexity of the relations involved and on the number of steps in the argument.

    The role of context and functionality in the interpretation of quantifiers

    The aim of the three experiments that are reported was to investigate the role of context, especially size and functionality, in the interpretation of quantifiers. The studies all used a task in which participants rated the appropriateness of quantifiers describing the number of balls in a bowl. The size of the balls was found to have an effect: identical numbers of balls were given different ratings depending on ball size. It was also found that quantifiers were rated as more appropriate when the balls were in their natural functional relationship with the bowl (i.e., contained within the bowl) than when the functional relationship was breached (i.e., the balls overflowed the bowl). Tilting the bowl had surprising effects in that it led to some quantifiers being rated as more appropriate. The results are interpreted as indicating that quantifiers carry little specific meaning in themselves but instead derive their meaning from the context in which they occur.

    Automatically assessing graph-based diagrams

    To date there has been very little work on the machine understanding of imprecise diagrams, such as diagrams drawn by students in response to assessment questions. Imprecise diagrams exhibit faults such as missing, extraneous and incorrectly formed elements, which make their semantics difficult to determine. While there have been successful attempts at assessing text (essays) automatically, little success with diagrams has been reported. In this paper, we explain an approach to the automatic interpretation of graph-based diagrams based on a five-stage framework. The paper describes our approach to automatically grading graph-based diagrams and reports on experiments into the automatic grading of student diagrams. The diagrams were produced under examination conditions, and the output of the automatic marker was compared with the original human marks across a large number of diagrams. The experiments show good agreement between the performance of the automatic marker and the human markers. The paper also describes how the automatic marking algorithm has been incorporated into a variety of software teaching and learning tools. One tool supports the human grading of entity-relationship diagrams (ERDs). Another tool is for student use during the revision of ERDs; it automatically marks student answers in real time and provides dynamically created feedback to help guide the student's progress.
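The core matching idea behind grading an imprecise graph-based diagram can be sketched as follows. This is a toy stand-in for, not a reproduction of, the paper's five-stage framework: it assumes a diagram is a set of node labels plus a set of edges between them, uses fuzzy string matching to tolerate student wording variations, and awards one mark per matched node or edge. All names and the threshold are illustrative.

```python
from difflib import SequenceMatcher

def label_similarity(a, b):
    """Fuzzy label match, tolerant of typos and case differences."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def grade_diagram(student, model, threshold=0.8):
    """Grade a student's graph-based diagram against a model answer.

    Each diagram is (nodes, edges): a set of labels and a set of
    (label, label) pairs. Returns (marks awarded, marks available).
    """
    s_nodes, s_edges = student
    m_nodes, m_edges = model
    # Map each model node to its best-matching student node, if any
    # match clears the similarity threshold.
    mapping = {}
    for m in m_nodes:
        best = max(s_nodes, key=lambda s: label_similarity(s, m), default=None)
        if best is not None and label_similarity(best, m) >= threshold:
            mapping[m] = best
    node_marks = len(mapping)
    # A model edge matches if both endpoints were mapped and the student
    # drew the corresponding edge (in either direction).
    edge_marks = sum(
        1 for a, b in m_edges
        if a in mapping and b in mapping
        and ((mapping[a], mapping[b]) in s_edges
             or (mapping[b], mapping[a]) in s_edges)
    )
    return node_marks + edge_marks, len(m_nodes) + len(m_edges)

model = ({"Customer", "Order", "Product"},
         {("Customer", "Order"), ("Order", "Product")})
student = ({"customer", "Orders", "Prodcut"},
           {("customer", "Orders")})
print(grade_diagram(student, model))
```

Tolerating misspelled labels and missing edges while still awarding partial credit is what distinguishes marking imprecise diagrams from exact graph comparison.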