
    Teachers' trust in AI-powered educational technology and a professional development program to improve it

    Evidence from various domains underlines the critical role that human factors, and especially trust, play in the adoption of technology by practitioners. In the case of Artificial Intelligence (AI) powered tools, the issue is even more complex due to practitioners' AI-specific misconceptions, myths, and fears (e.g., mass unemployment and privacy violations). In recent years, AI has been incorporated increasingly into K-12 education. However, little research has been conducted on the trust and attitudes of K-12 teachers towards the use and adoption of AI-powered Educational Technology (AI-EdTech). This paper sheds light on teachers' trust in AI-EdTech and presents effective professional development strategies to increase teachers' trust and willingness to apply AI-EdTech in their classrooms. Our experiments with K-12 science teachers were conducted around their interactions with a specific AI-powered assessment tool (termed AI-Grader), using both synthetic and real data. The results indicate that presenting teachers with explanations of (i) how AI makes decisions, particularly in comparison with human experts, and (ii) how AI can complement teachers and add to their strengths, rather than replace them, can reduce teachers' concerns and improve their trust in AI-EdTech. The contribution of this research is threefold. First, it emphasizes the importance of increasing teachers' theoretical and practical knowledge about AI in educational settings to gain their trust in AI-EdTech in K-12 education. Second, it presents a teacher professional development program (PDP), as well as a discourse analysis of the teachers who completed it. Third, based on the observed results, it offers clear suggestions for future PDPs aiming to improve teachers' trust in AI-EdTech.
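    The explanations of "how AI makes decisions, particularly in comparison with human experts" lend themselves to a simple quantitative framing. The sketch below is a hypothetical illustration of such a comparison, using invented grades and standard agreement measures; it is not the paper's AI-Grader or its actual analysis.

        # Toy illustration: how closely do hypothetical AI-assigned grades track
        # human experts? The grade data is invented, not from the paper.
        from sklearn.metrics import cohen_kappa_score

        ai_grades     = [3, 2, 4, 4, 1, 3, 2, 4]   # hypothetical AI-Grader scores
        expert_grades = [3, 2, 4, 3, 1, 3, 2, 4]   # hypothetical expert scores

        # Share of items where the AI and the expert gave the same grade.
        exact = sum(a == e for a, e in zip(ai_grades, expert_grades)) / len(ai_grades)
        # Chance-corrected agreement between the two raters.
        kappa = cohen_kappa_score(ai_grades, expert_grades)

        print(f"exact agreement: {exact:.0%}")
        print(f"Cohen's kappa:   {kappa:.2f}")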

    Confirmation bias and trust: Human factors that influence teachers' attitudes towards AI-based educational technology

    Evidence from various domains underlines the key role that human factors, and especially trust, play in the adoption of AI-based technology by professionals. As AI-based educational technology increasingly enters K-12 education, issues of trust are expected to influence the acceptance of such technology by educators as well, but little is known about this matter. In this work, we present the opinions and attitudes of science teachers who interacted with several types of AI-based technology for K-12. Among other things, our findings indicate that teachers are reluctant to accept AI-based recommendations that contradict their previous knowledge about their students, and that they expect AI to be absolutely correct even in situations where an absolute truth may not exist (e.g., grading open-ended questions). The purpose of this paper is to provide initial findings and start mapping the terrain of this aspect of teacher-AI interaction, which is critical for the wide and effective deployment of AIED technologies in K-12 education.

    Evaluating the Robustness of Learning Analytics Results Against Fake Learners

    Massive Open Online Courses (MOOCs) collect large amounts of rich data. A primary objective of Learning Analytics (LA) research is studying these data in order to improve the pedagogy of interactive learning environments. Most studies make the underlying assumption that the data represent truthful and honest learning activity. However, previous studies showed that MOOCs can have large cohorts of users who break this assumption and achieve high performance through behaviors such as Cheating Using Multiple Accounts or unauthorized collaboration; we therefore denote them fake learners. Because of their aberrant behavior, fake learners can bias the results of LA models. The goal of this study is to evaluate the robustness of LA results when the data contain a considerable number of fake learners. Our methodology follows the rationale of 'replication research': we challenge the results reported in a well-known, and one of the first, LA/pedagogic-efficacy MOOC papers by replicating its results with and without the fake learners (identified using machine learning algorithms). The results show that fake learners exhibit very different behavior compared to true learners. However, even though they are a significant portion of the student population (∼15%), their effect on the results is not dramatic (it does not change trends). We conclude that the LA study that we challenged was robust against fake learners. While these results carry an optimistic message on the trustworthiness of LA research, they rely on data from one MOOC. We believe that this issue should receive more attention within the LA research community, and it can explain some 'surprising' research results in MOOCs.
    Keywords: Learning Analytics, Educational Data Mining, MOOCs, Fake Learners, Reliability, IR
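    A minimal sketch of the replication rationale described above: compute the same LA estimate with and without the flagged fake learners and check whether the qualitative conclusion changes. The file name, the column names (is_fake, time_on_task, grade), and the correlation used as a stand-in for the replicated result are all assumptions for illustration, not the paper's actual features or models.

        # Sketch: re-run an LA estimate with and without flagged fake learners.
        # All names here (file, columns, measure) are assumed for illustration.
        import pandas as pd

        def efficacy_estimate(df: pd.DataFrame) -> float:
            # Stand-in for the replicated LA result: correlation between
            # time on interactive activities and final grade.
            return df["time_on_task"].corr(df["grade"])

        learners = pd.read_csv("mooc_learners.csv")        # assumed input file
        with_fake = efficacy_estimate(learners)
        without_fake = efficacy_estimate(learners[~learners["is_fake"]])

        print(f"with fake learners:    r = {with_fake:.3f}")
        print(f"without fake learners: r = {without_fake:.3f}")
        # The result is robust if both estimates support the same trend.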

    Scenario-Based Programming: Reducing the Cognitive Load, Fostering Abstract Thinking

    We examine how students work in scenario-based and object-oriented programming (OOP) languages, and qualitatively analyze the use of abstraction through the prism of the differences between the paradigms. The findings indicate that when working in a scenario-based language, programmers think on a higher level of abstraction than when working with OOP languages. This is explained by other findings, which suggest how the declarative, incremental nature of scenario-based programming facilitates separation of concerns, and how it supports a kind of programming that allows programmers to work with a less detailed mental model of the system they develop. The findings shed light on how declarative approaches can reduce the cognitive load involved in programming, and how scenario-based programming might solve some of the difficulties involved in the use of declarative languages. This is applicable to the design of learning materials, and to the design of programming languages and tools.
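    To make the declarative, incremental flavor concrete, here is a minimal sketch of scenario-based (behavioral) programming built from plain Python generators. It is a toy model under simplified semantics, not the scenario-based language studied in the paper. Each scenario yields events it requests, waits for, or blocks, and a scheduler fires any requested event that no scenario blocks; note that the interleaving rule is a separate scenario, added without modifying the other two.

        # Toy model of scenario-based programming (simplified semantics).
        # Each scenario yields what it requests, waits for, or blocks.
        def add_hot():
            for _ in range(3):
                yield {"request": {"HOT"}}              # add hot water three times

        def add_cold():
            for _ in range(3):
                yield {"request": {"COLD"}}             # add cold water three times

        def interleave():
            while True:
                yield {"wait": {"HOT"}, "block": {"COLD"}}  # after HOT, permit COLD
                yield {"wait": {"COLD"}, "block": {"HOT"}}  # after COLD, permit HOT

        def run(scenarios):
            # Start each generator and collect its first synchronization statement.
            stmts = {g: next(g) for g in (s() for s in scenarios)}
            while True:
                requested = set().union(*(s.get("request", set()) for s in stmts.values()))
                blocked = set().union(*(s.get("block", set()) for s in stmts.values()))
                enabled = requested - blocked
                if not enabled:
                    return
                event = sorted(enabled)[0]              # deterministic pick for the demo
                print(event)
                # Resume every scenario that requested or waited for the event.
                for g, s in list(stmts.items()):
                    if event in s.get("request", set()) | s.get("wait", set()):
                        try:
                            stmts[g] = next(g)
                        except StopIteration:
                            del stmts[g]

        run([add_hot, add_cold, interleave])            # prints HOT, COLD, HOT, COLD, ...

    The point mirrors the abstract's claim: each behavior is specified on its own, and composition happens through events rather than through a detailed shared model of the system.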

    Kinetic algorithms via self-adjusting computation

    Define a static algorithm as an algorithm that computes some combinatorial property of its input consisting of static, i.e., non-moving, objects. In this paper, we describe a technique for syntactically transforming static algorithms into kinetic algorithms, which compute properties of moving objects. The technique offers capabilities for composing kinetic algorithms, for integrating dynamic and kinetic changes, and for ensuring robustness even with fixed-precision floating-point arithmetic. To evaluate the effectiveness of the approach, we implement a library for performing the transformation, transform a number of algorithms, and give an experimental evaluation. The results show that the technique performs well in practice.
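    As a toy illustration of the kinetic idea (not the paper's transformation technique or library), consider the maximum of points moving linearly on a line, x_i(t) = a_i + b_i*t. The static maximum computation is redone only at certificate-failure times, i.e., when another point overtakes the current winner.

        # Toy kinetic maximum: rerun the static max only when the certificate
        # "the current winner is ahead" fails. Not the paper's technique.
        def failure_time(winner, other, now):
            # Earliest t > now at which `other` overtakes `winner`, or None.
            (a1, b1), (a2, b2) = winner, other
            if b2 <= b1:
                return None                      # never overtakes: not faster
            t = (a1 - a2) / (b2 - b1)            # solve a1 + b1*t == a2 + b2*t
            return t if t > now else None

        def kinetic_max(points, horizon):
            t = 0.0
            while t < horizon:
                # Static algorithm, run at time t (ties broken by velocity).
                winner = max(points, key=lambda p: (p[0] + p[1] * t, p[1]))
                events = [ft for p in points if p is not winner
                          if (ft := failure_time(winner, p, t)) is not None]
                nxt = min(events, default=horizon)
                print(f"[{t:.2f}, {min(nxt, horizon):.2f}): max is {winner}")
                t = nxt

        kinetic_max([(0.0, 1.0), (5.0, 0.0), (-3.0, 2.0)], horizon=10.0)

    The paper's approach differs in that the transformation is syntactic and change propagation re-executes only the affected parts of the computation, rather than rerunning the static algorithm from scratch as this toy does.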