
    Analytic frameworks for assessing dialogic argumentation in online learning environments

    Over the last decade, researchers have developed sophisticated online learning environments to support students engaging in argumentation. This review first considers the range of functionalities incorporated within these online environments. The review then presents five categories of analytic frameworks focusing on (1) formal argumentation structure, (2) normative quality, (3) nature and function of contributions within the dialog, (4) epistemic nature of reasoning, and (5) patterns and trajectories of participant interaction. Example analytic frameworks from each category are presented in detail rich enough to illustrate their nature and structure. This rich detail is intended to facilitate researchers’ identification of possible frameworks to draw upon in developing or adopting analytic methods for their own work. Each framework is applied to a shared segment of student dialog to facilitate this illustration and comparison process. Synthetic discussions of each category consider the frameworks in light of the underlying theoretical perspectives on argumentation, pedagogical goals, and online environmental structures. Ultimately, the review underscores the diversity of perspectives represented in this research, the importance of clearly specifying theoretical and environmental commitments throughout the process of developing or adopting an analytic framework, and the role of analytic frameworks in the future development of online learning environments for argumentation.

    Responsible research and innovation in science education: insights from evaluating the impact of using digital media and arts-based methods on RRI values

    The European Commission policy approach of Responsible Research and Innovation (RRI) is gaining momentum in European research planning and development as a strategy to align scientific and technological progress with socially desirable and acceptable ends. One of the RRI agendas is science education, aiming to foster future generations' acquisition of the skills and values needed to engage in society responsibly. To this end, it is argued that RRI-based science education can benefit from more interdisciplinary methods, such as those based on arts and digital technologies. However, the impact of science education activities that use digital media and arts-based methods on RRI values remains underexplored. This article comparatively reviews previous evidence on the evaluation of these activities, from primary to higher education, to examine whether and how RRI-related learning outcomes are evaluated and how these activities affect students' learning. Forty academic publications were selected and their content analysed according to five RRI values: creative and critical thinking, engagement, inclusiveness, gender equality, and integration of ethical issues. When the impact of digital and arts-based methods in science education activities is evaluated, creative and critical thinking, engagement, and, in part, inclusiveness are the RRI values most often addressed. In contrast, gender equality and ethics integration are neglected. Digital-based methods seem to focus more on students' questioning and inquiry skills, whereas those using arts often examine imagination, curiosity, and autonomy. Differences in evaluation focus between studies on digital media and those on arts partly explain differences in their impact on RRI values, but they also result in undocumented outcomes and undermine the methods' potential. Further developments in interdisciplinary approaches to science education following the RRI policy agenda should reinforce both the design of the activities and the procedural aspects of the evaluation research.

    Slave to the Algorithm? Why a 'Right to an Explanation' Is Probably Not the Remedy You Are Looking For

    Algorithms, particularly machine learning (ML) algorithms, are increasingly important to individuals’ lives, but have caused a range of concerns revolving mainly around unfairness, discrimination and opacity. Transparency in the form of a “right to an explanation” has emerged as a compellingly attractive remedy, since it intuitively promises to open the algorithmic “black box” to promote challenge, redress, and hopefully heightened accountability. Amidst the general furore over algorithmic bias that we describe, any remedy in a storm has looked attractive. However, we argue that a right to an explanation in the EU General Data Protection Regulation (GDPR) is unlikely to present a complete remedy to algorithmic harms, particularly in some of the core “algorithmic war stories” that have shaped recent attitudes in this domain. Firstly, the law is restrictive, unclear, or even paradoxical concerning when any explanation-related right can be triggered. Secondly, even navigating this, the legal conception of explanations as “meaningful information about the logic of processing” may not be satisfied by the kind of ML “explanations” computer scientists have developed, partially in response. ML explanations are restricted by the type of explanation sought, the dimensionality of the domain, and the type of user seeking an explanation. However, “subject-centric explanations” (SCEs), which focus on particular regions of a model around a query, show promise for interactive exploration, as do explanation systems based on learning a model from outside rather than taking it apart (pedagogical versus decompositional explanations), in dodging developers' worries of intellectual property or trade secrets disclosure. Based on our analysis, we fear that the search for a “right to an explanation” in the GDPR may be at best distracting, and at worst nurture a new kind of “transparency fallacy.” But all is not lost. We argue that other parts of the GDPR related (i) to the right to erasure (the “right to be forgotten”) and the right to data portability, and (ii) to privacy by design, Data Protection Impact Assessments, and certification and privacy seals, may hold the seeds we can use to make algorithms more responsible, explicable, and human-centered.
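The pedagogical, subject-centric style of explanation the abstract mentions can be illustrated with a minimal sketch: probe a black-box model only through its outputs, sample perturbations around a single query, and fit a proximity-weighted linear surrogate. Everything below (the stand-in model, sampling scale, sample count) is a hypothetical illustration, not the authors' method.

```python
import numpy as np

# A stand-in "black box": any callable mapping feature rows to scores.
# (Hypothetical; a real system would wrap a trained ML model.)
def black_box(X):
    return 3.0 * X[:, 0] - 2.0 * X[:, 1] ** 2 + 0.5 * X[:, 1]

def local_surrogate(model, query, n_samples=500, scale=0.1, seed=0):
    """Fit a proximity-weighted linear model around `query`.

    This is a pedagogical, subject-centric explanation: the model is
    learned from outside via queries, never taken apart.
    """
    rng = np.random.default_rng(seed)
    X = query + rng.normal(0.0, scale, size=(n_samples, query.size))
    y = model(X)
    # Proximity weights: perturbations closest to the query matter most.
    w = np.exp(-np.sum((X - query) ** 2, axis=1) / (2 * scale ** 2))
    Xb = np.hstack([X, np.ones((n_samples, 1))])  # intercept column
    sw = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(Xb * sw, y * sw.ravel(), rcond=None)
    return coef[:-1], coef[-1]  # local feature weights, intercept

query = np.array([1.0, 1.0])
weights, intercept = local_surrogate(black_box, query)
# Near (1, 1) the local gradient is roughly (3, -3.5), so `weights`
# reads as "feature 0 pushes the score up, feature 1 pushes it down here".
```

The surrogate is only valid in the sampled region, which is exactly the SCE trade-off: a faithful local account of one decision rather than a global description of the model.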

    Fostering reflection in the training of speech-receptive action

    This article discusses the training of communicative skills by fostering reflection on one's own speech-receptive action, and the opportunities for using computer-supported learning environments for this purpose. Most frameworks for the training of communicative behavior focus on fostering observable speech-productive action (i.e. speaking); the individual cognitive processes underlying speech-receptive action (hearing and understanding utterances) are often neglected. This neglect is usually justified by the difficulty of observing speech-receptive action in a communicative situation and by how time-consuming it is to foster the individual processes involved. Computer-supported learning environments employed as cognitive tools can help to foster speech-receptive action. Seven success factors for the integration of software into the training of soft skills have been derived from empirical research. The computer-supported learning environment CaiMan©, based on these ideas, is presented. One central learning principle in this learning environment, reflection on one's own communicative action, is discussed from different perspectives. The article concludes with two empirical studies examining opportunities to support reflection.

    Quantifying critical thinking: Development and validation of the Physics Lab Inventory of Critical thinking (PLIC)

    Introductory physics lab instruction is undergoing a transformation, with increasing emphasis on developing experimentation and critical thinking skills. These changes present a need for standardized assessment instruments to determine the degree to which students develop these skills through instructional labs. In this article, we present the development and validation of the Physics Lab Inventory of Critical thinking (PLIC). We define critical thinking as the ability to use data and evidence to decide what to trust and what to do. The PLIC is a 10-question, closed-response assessment that probes student critical thinking skills in the context of physics experimentation. Using interviews and data from 5584 students at 29 institutions, we demonstrate, through qualitative and quantitative means, the validity and reliability of the instrument at measuring student critical thinking skills. This establishes a valuable new assessment instrument for instructional labs. Comment: 16 pages, 4 figures
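The abstract does not say which reliability statistics were used for the PLIC. A conventional internal-consistency measure for a closed-response instrument is Cronbach's alpha, sketched below on a small, entirely hypothetical response matrix.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (n_respondents, n_items) score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores)),
    using sample variances (ddof=1).
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 6 students x 4 items, each scored 0-3.
demo = np.array([
    [3, 2, 3, 2],
    [1, 1, 0, 1],
    [2, 2, 2, 3],
    [0, 1, 1, 0],
    [3, 3, 2, 3],
    [2, 1, 2, 2],
])
alpha = cronbach_alpha(demo)  # high here, since items track each other
```

Values near 1 indicate that items covary strongly; a validation study like the PLIC's would pair a statistic of this kind with interview-based qualitative evidence.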

    Analyzing collaborative learning processes automatically

    In this article we describe the emerging area of text classification research focused on the problem of collaborative learning process analysis, both from a broad perspective and more specifically in terms of a publicly available tool set called TagHelper tools. Analyzing the variety of pedagogically valuable facets of learners’ interactions is a time-consuming and effortful process. Improving automated analyses of such highly valued processes of collaborative learning by adapting and applying recent text classification technologies would make it a less arduous task to obtain insights from corpus data. This endeavor also holds the potential for enabling substantially improved on-line instruction, both by providing teachers and facilitators with reports about the groups they are moderating and by triggering context-sensitive collaborative learning support on an as-needed basis. In this article, we report on an interdisciplinary research project, which has been investigating the effectiveness of applying text classification technology to a large CSCL corpus that has been analyzed by human coders using a theory-based multidimensional coding scheme. We report promising results and include an in-depth discussion of important issues such as reliability, validity, and efficiency that should be considered when deciding on the appropriateness of adopting a new technology such as TagHelper tools. One major technical contribution of this work is a demonstration that an important piece of the work towards making text classification technology effective for this purpose is designing and building linguistic pattern detectors, otherwise known as features, that can be extracted reliably from texts and that have high predictive power for the categories of discourse actions that the CSCL community is interested in.
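TagHelper's actual feature set and classifiers are not detailed in the abstract. As a minimal illustration of the underlying idea (assigning coded categories to discourse turns from word features), here is a small multinomial Naive Bayes sketch; the labeled turns and category names are hypothetical, not from the project's coding scheme.

```python
from collections import Counter, defaultdict
import math

def train_nb(examples):
    """Train multinomial Naive Bayes over bag-of-words features.
    `examples` is a list of (text, label) pairs."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    vocab = set()
    for text, label in examples:
        tokens = text.lower().split()
        word_counts[label].update(tokens)
        label_counts[label] += 1
        vocab.update(tokens)
    return word_counts, label_counts, vocab

def classify(text, model):
    """Pick the label maximizing log P(label) + sum log P(token|label)."""
    word_counts, label_counts, vocab = model
    total = sum(label_counts.values())
    best, best_lp = None, -math.inf
    for label in label_counts:
        lp = math.log(label_counts[label] / total)
        n = sum(word_counts[label].values())
        for tok in text.lower().split():
            # Laplace smoothing over the shared vocabulary.
            lp += math.log((word_counts[label][tok] + 1) / (n + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Hypothetical coded turns; a real corpus would use the project's
# theory-based multidimensional coding scheme and far more data.
turns = [
    ("i disagree because the data show otherwise", "challenge"),
    ("that claim is wrong because the evidence contradicts it", "challenge"),
    ("good point i agree with your reasoning", "agreement"),
    ("yes i agree that makes sense", "agreement"),
]
model = train_nb(turns)
label = classify("the data show you are wrong", model)  # → "challenge"
```

Real systems replace raw words with the engineered "linguistic pattern detectors" the abstract emphasizes, since surface words alone generalize poorly across discussion topics.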