
    Evidence-Based Dialogue Maps as a research tool to evaluate the quality of school pupils’ scientific argumentation

    This pilot study focuses on the potential of Evidence-based Dialogue Mapping as a participatory action research tool to investigate young teenagers’ scientific argumentation. Evidence-based Dialogue Mapping is a technique for graphically representing an argumentative dialogue through Questions, Ideas, Pros, Cons and Data. Our research objective is to better understand the use of Compendium, a Dialogue Mapping software tool, both (1) as a learning strategy to scaffold school pupils’ argumentation and (2) as a method to investigate the quality of their argumentative essays. The participants were a science teacher-researcher, a knowledge mapping researcher and 20 pupils, aged 12-13, in a summer science course for “gifted and talented” children in the UK. This study draws on multiple data sources: a discussion forum, the science teacher-researcher’s and pupils’ Dialogue Maps, pupil essays, and reflective comments about the uses of mapping for writing. Through qualitative analysis of two case studies, we examine the role of Evidence-based Dialogue Maps as a mediating tool in scientific reasoning: as conceptual bridges for linking and making knowledge intelligible; as support for the linearisation task of generating a coherent document outline; as a reflective aid to rethinking reasoning in response to teacher feedback; and as a visual language for making arguments tangible via cartographic conventions.
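    As a rough sketch of the kind of typed graph such a map implies (the node types come from the abstract above, but the fields, linking behaviour, and example content are illustrative assumptions, not Compendium's actual data model):

        # Hypothetical sketch of an Evidence-based Dialogue Map as a typed graph.
        from dataclasses import dataclass, field

        NODE_TYPES = {"Question", "Idea", "Pro", "Con", "Data"}

        @dataclass
        class Node:
            node_type: str
            text: str
            children: list = field(default_factory=list)

            def add(self, child):
                # e.g. Ideas answer Questions, Pros/Cons attach to Ideas,
                # and Data nodes supply evidence for a Pro or Con.
                assert child.node_type in NODE_TYPES
                self.children.append(child)
                return child

        q = Node("Question", "Should energy drinks be sold at school?")
        idea = q.add(Node("Idea", "No, stop selling them"))
        pro = idea.add(Node("Pro", "High caffeine intake disrupts concentration"))
        pro.add(Node("Data", "Pupil survey responses on afternoon attention"))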

    Modelling naturalistic argumentation in research literatures: representation and interaction design issues

    This paper characterises key weaknesses in the ability of current digital libraries to support scholarly inquiry and, as a way to address these, proposes computational services grounded in semiformal models of the naturalistic argumentation commonly found in research literatures. It is argued that a design priority is to balance formal expressiveness with usability, making it critical to co-evolve the modelling scheme with appropriate user interfaces for argument construction and analysis. We specify the requirements for an argument modelling scheme usable by untrained researchers and describe the resulting ontology, contrasting it with other domain modelling and semantic web approaches, before discussing passive and intelligent user interfaces designed to support analysts in the construction, navigation and analysis of scholarly argument structures in a Web-based environment.

    Modelling discourse in contested domains: A semiotic and cognitive framework

    This paper examines the representational requirements for interactive, collaborative systems intended to support sensemaking and argumentation over contested issues. We argue that a perspective supported by semiotic and cognitively oriented discourse analyses both offers theoretical insights and motivates representational requirements for the semantics of tools for contesting meaning. We introduce our semiotic approach, highlighting its implications for discourse representation, before describing a research system (ClaiMaker) designed to support the construction of scholarly argumentation by allowing analysts to publish and contest 'claims' about scientific contributions. We show how ClaiMaker's representational scheme is grounded in specific assumptions concerning the nature of explicit modelling and the evolution of meaning within a discourse community. These characteristics allow the system to represent scholarly discourse as a dynamic process, in the form of continuously evolving structures. A cognitively oriented discourse analysis then shows how the use of a small set of cognitive relational primitives in the underlying ontology opens possibilities for offering users advanced forms of computational service for analysing collectively constructed argumentation networks.
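    To make the idea of a small set of relational primitives concrete, here is a minimal sketch of claims linked into a queryable network (the primitive names and example claims are illustrative assumptions, not ClaiMaker's published ontology):

        # Claims as triples over a deliberately small relational vocabulary.
        PRIMITIVES = {"supports", "challenges", "uses", "extends"}

        triples = set()

        def publish_claim(source, relation, target):
            # Analysts publish (and may later contest) typed links between ideas.
            assert relation in PRIMITIVES
            triples.add((source, relation, target))

        publish_claim("Study A: maps aid recall", "supports", "Visual scaffolds help learning")
        publish_claim("Study B: no effect found", "challenges", "Visual scaffolds help learning")

        # Because the vocabulary is small, contested nodes are easy to query:
        contested = {t for (_, r, t) in triples if r == "challenges"}

    Keeping the relation set small is what makes network-wide analyses of this kind cheap to compute, which reflects the trade-off the paper describes between formal expressiveness and usability.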

    Contested Collective Intelligence: rationale, technologies, and a human-machine annotation study

    We propose the concept of Contested Collective Intelligence (CCI) as a distinctive subset of the broader Collective Intelligence design space. CCI is relevant to the many organizational contexts in which it is important to work with contested knowledge, for instance, due to different intellectual traditions, competing organizational objectives, information overload or ambiguous environmental signals. The CCI challenge is to design sociotechnical infrastructures to augment such organizational capability. Since documents are often the starting points for contested discourse, and discourse markers provide a powerful cue to the presence of claims, contrasting ideas and argumentation, discourse and rhetoric provide an annotation focus in our approach to CCI. Research in sensemaking, computer-supported discourse and rhetorical text analysis motivates a conceptual framework for the combined human and machine annotation of texts with this specific focus. This conception is explored through two tools: a social-semantic web application for human annotation and knowledge mapping (Cohere), plus the discourse analysis component in a textual analysis software tool (Xerox Incremental Parser: XIP). As a step towards an integrated platform, we report a case study in which a document corpus underwent independent human and machine analysis, providing quantitative and qualitative insight into their respective contributions. A promising finding is that significant contributions were signalled by authors via explicit rhetorical moves, which both human analysts and XIP could readily identify. Since working with contested knowledge is at the heart of CCI, the evidence that automatic detection of contrasting ideas in texts is possible through rhetorical discourse analysis is progress towards the effective use of automatic discourse analysis in the CCI framework.
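    As a toy illustration of the machine side of such annotation (a keyword-matching sketch only; the marker lists below are assumptions, and XIP performs full incremental parsing rather than simple pattern matching):

        import re

        # Rule-of-thumb spotter for rhetorical moves signalling claims or contrast.
        MARKERS = {
            "contrast": [r"\bhowever\b", r"\bin contrast\b", r"\bon the other hand\b"],
            "claim": [r"\bwe argue\b", r"\bwe propose\b", r"\bour results show\b"],
        }

        def annotate(sentence):
            # Return the rhetorical labels whose markers occur in the sentence.
            return [label for label, patterns in MARKERS.items()
                    if any(re.search(p, sentence, re.IGNORECASE) for p in patterns)]

        print(annotate("In contrast to prior work, we propose a new framework."))
        # -> ['contrast', 'claim']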

    Aligning the Goals of Learning Analytics with its Research Scholarship: An Open Peer Commentary Approach

    To promote cross-community dialogue on matters of significance within the field of learning analytics (LA), we as editors-in-chief of the Journal of Learning Analytics (JLA) have introduced a section for papers that are open to peer commentary. An invitation to submit proposals for commentaries on the paper was released, and 12 of these proposals were accepted. The 26 authors of the accepted commentaries are based in Europe, North America, and Australia. They range in experience from PhD students and early-career researchers to some of the longest-standing, most senior members of the learning analytics community. This paper brings those commentaries together, and we recommend reading it as a companion piece to the original paper by Motz et al. (2023), which also appears in this issue.