5 research outputs found
Balancing Test Accuracy and Security in Computerized Adaptive Testing
Computerized adaptive testing (CAT) is a form of personalized testing that
accurately measures students' knowledge levels while reducing test length.
Bilevel optimization-based CAT (BOBCAT) is a recent framework that learns a
data-driven question selection algorithm to effectively reduce test length and
improve test accuracy. However, it suffers from high question exposure and test
overlap rates, which potentially affect test security. This paper introduces a
constrained version of BOBCAT (C-BOBCAT) to address these problems by changing
its optimization setup, enabling us to trade off test accuracy against question
exposure and test overlap rates. We show that C-BOBCAT is effective through
extensive experiments on two real-world adult testing datasets.
Comment: The 24th International Conference on Artificial Intelligence in Education (AIED 2023)
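Question exposure and test overlap are standard operational security metrics for adaptive tests. As an illustration only (not the paper's implementation), both can be computed from simulated administration logs, where each test is the set of items a student received:

```python
from itertools import combinations

def exposure_rates(tests, item_pool):
    """Fraction of administered tests in which each item from the pool appears."""
    n = len(tests)
    return {item: sum(item in t for t in tests) / n for item in item_pool}

def overlap_rate(tests):
    """Average proportion of items shared between every pair of equal-length tests."""
    pairs = list(combinations(tests, 2))
    if not pairs:
        return 0.0
    return sum(len(a & b) / len(a) for a, b in pairs) / len(pairs)

# Three simulated 3-item tests drawn from a hypothetical 6-item pool
logs = [{1, 2, 3}, {1, 2, 4}, {1, 5, 6}]
print(exposure_rates(logs, range(1, 7)))  # item 1 is over-exposed (rate 1.0)
print(overlap_rate(logs))
```

A selection algorithm that concentrates on a few highly informative items drives both numbers up, which is the security risk the constrained formulation trades accuracy against.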
Catalyzing Equity in STEM Teams: Harnessing Generative AI for Inclusion and Diversity
Collaboration is key to STEM, where multidisciplinary team research can solve
complex problems. However, inequality in STEM fields hinders their full
potential, due to persistent psychological barriers in underrepresented
students' experience. This paper documents teamwork in STEM and explores the
transformative potential of computational modeling and generative AI in
promoting STEM-team diversity and inclusion. Leveraging generative AI, this
paper outlines two primary areas for advancing diversity, equity, and
inclusion. First, formalizing collaboration assessment with inclusive analytics
can capture fine-grained learner behavior. Second, adaptive, personalized AI
systems can support diversity and inclusion in STEM teams. Four policy
recommendations highlight AI's capacity: formalized collaborative skill
assessment, inclusive analytics, funding for socio-cognitive research, and
human-AI teaming for inclusion training. Researchers, educators, and policymakers can build
an equitable STEM ecosystem. This roadmap advances AI-enhanced collaboration,
offering a vision for the future of STEM where diverse voices are actively
encouraged and heard within collaborative scientific endeavors.
Comment: 21 pages, 0 figures, to be published in Policy Insights from the Behavioral and Brain Sciences
A Trustworthy Automated Short-Answer Scoring System Using a New Dataset and Hybrid Transfer Learning Method
To measure the quality of student learning, teachers must conduct evaluations. One of the most efficient modes of evaluation is the short answer question. However, there can be inconsistencies in teacher-performed manual evaluations due to an excessive number of students, time demands, fatigue, etc. Consequently, teachers require a trustworthy system capable of autonomously and accurately evaluating student answers. Using hybrid transfer learning and a student answer dataset, we aim to create a reliable automated short answer scoring system called Hybrid Transfer Learning for Automated Short Answer Scoring (HTL-ASAS). HTL-ASAS combines multiple tokenizers from a pretrained model with bidirectional encoder representations from transformers (BERT). Based on our evaluation of the training model, we determined that HTL-ASAS has a higher evaluation accuracy than models used in previous studies. The accuracy of HTL-ASAS for datasets containing responses to questions pertaining to introductory information technology courses reaches 99.6%. With an accuracy close to one hundred percent, the developed model can undoubtedly serve as the foundation for a trustworthy ASAS system.
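The reported evaluation accuracy is the share of answers on which the automated score agrees with the teacher's label. A minimal sketch of that metric, using hypothetical 0/1 correctness labels rather than the HTL-ASAS code or dataset:

```python
def scoring_accuracy(predicted, reference):
    """Fraction of short answers where the automated score matches the teacher's label."""
    if len(predicted) != len(reference):
        raise ValueError("score lists must be the same length")
    return sum(p == r for p, r in zip(predicted, reference)) / len(reference)

# Hypothetical labels for ten student answers (1 = correct, 0 = incorrect)
auto = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1]
human = [1, 1, 0, 1, 0, 1, 1, 1, 1, 1]
print(scoring_accuracy(auto, human))  # 0.9
```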
"Die Maschinen werden zu einer einzigen Maschine" ("The Machines Become a Single Machine"): A Philosophy-of-Technology Reflection on "Computational Thinking", Artificial Intelligence, and Media Education
Technology-induced discourses on "learning" and "knowledge" in the second half of the last century and at the beginning of the 21st century are characterized by striking structural analogies, especially if one focuses on hegemonic imperatives and governmental discourse practices. Based on a pragmatic-systemic philosophy of technology, Christian Filk analyzes distinctive dynamics of algorithmics, robotics, and Artificial Intelligence (AI) and identifies well-known dilemmas of the human-machine contrast. Starting from the genuine hiatus between human and machine "learning" on the one hand, and between constructivism and behaviorism on the other, Christian Filk reflects on the relationship between computational thinking and media education. Finally, he explores important theoretical and practical problems and conflicting goals, especially with regard to Artificial Intelligence (in Education), which critical media education research must address as a priority.