
    UniCraft: Exploring the impact of asynchronous multiplayer game elements in gamification

    This paper describes the development and evaluation of UniCraft: a gamified mobile app designed to increase the engagement of undergraduate students with the content and delivery of their course. Gamification projects typically rely on extrinsic motivators, such as compulsory participation or real-world rewards, to encourage participants to engage. UniCraft incorporates an asynchronous multiplayer battle game that uses constructive competition to motivate students, without using motivational levers that may reduce intrinsic motivation. UniCraft's novel battle game combines Player vs Environment (Shafer, 2012) and Player Matching (Jennings, 2014) to ensure that students work together in similarly ranked small groups as a team against a shared enemy. A study examined students' long-term engagement with UniCraft within the context of a 12-week undergraduate programming course. The app was initially provided with the battle feature disabled, so that the effect on motivation and engagement could be studied when the feature was introduced during the intervention. Detailed interaction data recorded by the app were augmented with semi-structured interviews to provide a richer perspective on its effect at the individual and group level. The interaction data revealed convincing evidence for the increased motivational power of the battle feature, and this was supported by the interview data. Although no direct negative effects of competition were observed, interviews revealed that cheating was prevalent, which could in turn have unintended negative side-effects on motivation. Full results are presented and case studies are described for three of the participants, giving an insight into the different styles of interaction and motivation experienced by students in this study.
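The player-matching idea described in the abstract (similarly ranked students grouped into small teams against a shared enemy) can be sketched as a simple rank-sort-and-chunk routine. The names, rank values, and team size below are illustrative assumptions, not taken from the UniCraft implementation.

```python
# Sketch of rank-based player matching: sort players by rank and
# chunk them into small teams so that teammates have similar standing.
# Player names, ranks, and the team size of 3 are hypothetical.

def match_players(players, team_size=3):
    """Group (name, rank) pairs into teams of similarly ranked players."""
    ranked = sorted(players, key=lambda p: p[1], reverse=True)
    return [ranked[i:i + team_size] for i in range(0, len(ranked), team_size)]

players = [("ana", 120), ("ben", 95), ("cal", 310), ("dee", 305),
           ("eli", 150), ("fay", 90)]
teams = match_players(players)
for team in teams:
    print([name for name, _ in team])
```

Each resulting team can then face the same shared "enemy", so competition happens between comparably skilled groups rather than between individuals.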

    Improving graduation rates: Legitimate practices and gaming strategies

    Accountability pressures faced by teachers and leaders may lead well-intentioned educators to engage in strategic reporting and operational practices to increase test scores, graduation rates, and other indicators of student success. Such practices are referred to as gaming behaviors. School district personnel attending a Georgia educational conference (N=146) reported a significant prevalence of two such practices – purging data for students enrolled for a short period of time and fabricating withdrawal forms in case of audit. Exploratory factor analysis yielded three categories of strategies employed by school districts to improve reported graduation rates: a) practices that directly contradict the rules governing ethical reporting of data (Factor1); b) legitimate educational practices aiming to enhance student learning (Factor2); and c) possible gaming strategies aiming to exclude low performing students from the computation of graduation rates (Factor3). Latent profile analysis distinguished a) a group with average scores on all factors (N=120); and b) a group with significantly higher scores on Factor1 and Factor3 (N=26). The second group included a significantly larger proportion of individuals from districts with 5,000 – 10,000 students; districts of this size may have the expertise in-house to understand calculations and take strategic action with their data reporting practices.
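The two-stage analysis named in the abstract (exploratory factor analysis followed by latent profile analysis) can be sketched as below. The survey data are simulated, and a Gaussian mixture model stands in for latent profile analysis; the item count and factor structure are illustrative assumptions, with only the sample size (146) taken from the abstract.

```python
# Sketch of the two-stage analysis: exploratory factor analysis on
# survey items, then a latent-profile-style clustering of the factor
# scores. Data are simulated; a Gaussian mixture approximates LPA.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
n_respondents, n_items = 146, 9               # 146 matches the reported N
latent = rng.normal(size=(n_respondents, 3))  # three underlying factors
loadings = rng.normal(size=(3, n_items))
items = latent @ loadings + rng.normal(scale=0.5, size=(n_respondents, n_items))

fa = FactorAnalysis(n_components=3, random_state=0)
scores = fa.fit_transform(items)              # per-respondent factor scores

gmm = GaussianMixture(n_components=2, random_state=0).fit(scores)
profiles = gmm.predict(scores)                # two latent profiles
print(np.bincount(profiles))                  # profile group sizes
```

In the study itself, the two recovered profiles correspond to the "average scores" group and the higher-Factor1/Factor3 group.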

    Evaluating the Robustness of Learning Analytics Results Against Fake Learners

    Massive Open Online Courses (MOOCs) collect large amounts of rich data. A primary objective of Learning Analytics (LA) research is studying these data in order to improve the pedagogy of interactive learning environments. Most studies make the underlying assumption that the data represent truthful and honest learning activity. However, previous studies showed that MOOCs can have large cohorts of users who break this assumption and achieve high performance through behaviors such as Cheating Using Multiple Accounts or unauthorized collaboration; we therefore denote them fake learners. Because of their aberrant behavior, fake learners can bias the results of LA models. The goal of this study is to evaluate the robustness of LA results when the data contain a considerable number of fake learners. Our methodology follows the rationale of 'replication research'. We challenge the results reported in one of the first, and best-known, LA/pedagogic-efficacy MOOC papers by replicating its results with and without the fake learners (identified using machine learning algorithms). The results show that fake learners exhibit very different behavior compared to true learners. However, even though they are a significant portion of the student population (∼15%), their effect on the results is not dramatic (it does not change trends). We conclude that the LA study we challenged was robust against fake learners. While these results carry an optimistic message about the trustworthiness of LA research, they rely on data from one MOOC. We believe that this issue should receive more attention within the LA research community, and it may explain some 'surprising' research results in MOOCs. Keywords: Learning Analytics, Educational Data Mining, MOOCs, Fake Learners, Reliability, IR
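The replication rationale described above can be sketched as computing the same LA statistic twice: once on the full population and once with flagged fake learners removed, then comparing the trend. Everything below is synthetic and illustrative; the ~15% fake-learner share is the only figure taken from the abstract, and the activity-grade correlation stands in for whatever statistic the replicated study reported.

```python
# Sketch of a robustness check: compute a learning-analytics statistic
# (correlation between activity and grade) with and without flagged
# "fake learners", then compare. All data here are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
activity = rng.normal(50, 10, n)
grade = 0.6 * activity + rng.normal(0, 8, n)

# Flag ~15% of learners as "fake": high grades decoupled from activity.
fake = rng.random(n) < 0.15
grade[fake] = rng.normal(90, 3, fake.sum())

r_all = np.corrcoef(activity, grade)[0, 1]
r_true = np.corrcoef(activity[~fake], grade[~fake])[0, 1]
print(round(r_all, 2), round(r_true, 2))  # same sign => trend is robust
```

If both estimates keep the same sign and comparable magnitude, the original finding would count as robust against the fake-learner cohort, mirroring the paper's conclusion.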

    Detecting students who are conducting inquiry Without Thinking Fastidiously (WTF) in the Context of Microworld Learning Environments

    In recent years, there has been increased interest and research on identifying the various ways that students can deviate from expected or desired patterns while using educational software. This includes research on gaming the system, player transformation, haphazard inquiry, and failure to use key features of the learning system. Detection of these sorts of behaviors has helped researchers to better understand them, thus allowing software designers to develop interventions that can remediate them and/or reduce their negative impacts on student learning. This work addresses two types of student disengagement: carelessness and a behavior we term WTF ("Without Thinking Fastidiously") behavior. Carelessness is defined as not demonstrating a skill despite knowing it; we measured carelessness using a machine learned model. In WTF behavior, the student is interacting with the software, but their actions appear to have no relationship to the intended learning task. We discuss the detector development process, validate the detectors with human labels of the behavior, and discuss implications for understanding how and why students conduct inquiry without thinking fastidiously while learning in science inquiry microworlds. Following this work, we explore the relationship between student learner characteristics and the aforementioned disengaged behaviors, carelessness and WTF. Our goal was to develop a deeper understanding of which learner characteristics correlate with carelessness or WTF behavior. Our work examines three alternative methods for predicting carelessness and WTF behaviors from learner characteristics: simple correlations, k-means clustering, and decision tree rule learners.
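The three prediction methods named at the end of the abstract (simple correlations, k-means clustering, and decision tree rule learners) can be sketched side by side on synthetic data. The learner characteristics, the labeling rule, and all parameter choices below are illustrative assumptions, not the study's actual features or models.

```python
# Sketch of the three methods for relating learner characteristics to
# a disengaged behavior: correlation, k-means, and a decision tree.
# Features, labels, and parameters are synthetic and hypothetical.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
n = 200
prior_knowledge = rng.normal(0, 1, n)
goal_orientation = rng.normal(0, 1, n)
# Synthetic rule: low prior knowledge raises careless-behavior odds.
careless = (prior_knowledge + rng.normal(0, 0.5, n)) < -0.3

X = np.column_stack([prior_knowledge, goal_orientation])

# 1) simple correlation between a characteristic and the behavior
r = np.corrcoef(prior_knowledge, careless.astype(float))[0, 1]

# 2) k-means clustering of learner characteristics into two groups
clusters = KMeans(n_clusters=2, n_init=10, random_state=2).fit_predict(X)

# 3) shallow decision tree yielding interpretable prediction rules
tree = DecisionTreeClassifier(max_depth=2, random_state=2).fit(X, careless)
acc = tree.score(X, careless)
print(round(r, 2), round(acc, 2))
```

The shallow tree depth keeps the learned rules human-readable, which matches why rule learners are attractive for this kind of analysis.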

    AI-Enhanced Auto-Correction of Programming Exercises: How Effective is GPT-3.5?

    Timely formative feedback is considered one of the most important drivers of effective learning. Delivering timely and individualized feedback is particularly challenging in large classes in higher education. Recently, Large Language Models such as GPT-3 became available to the public and showed promising results on various tasks such as code generation and code explanation. This paper investigates the potential of AI in providing personalized code correction and generating feedback. Based on existing student submissions for two different real-world assignments, the correctness of the AI-aided e-assessment is investigated, as well as characteristics of the generated feedback such as fault localization, correctness of hints, and code style suggestions. The results show that 73% of the submissions were correctly identified as either correct or incorrect. In 59% of these cases, GPT-3.5 also successfully generated effective and high-quality feedback. However, GPT-3.5 also exhibited weaknesses in its evaluation, including localizing errors that were not the actual errors, or even hallucinating errors. Implications and potential new usage scenarios are discussed.
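The two reported rates compose: 73% of submissions received a correct verdict, and 59% of those also received effective feedback, so roughly 43% of all submissions got both a correct verdict and high-quality feedback. A minimal check of that arithmetic:

```python
# Composing the two reported rates: correct classification (73%) and,
# conditional on a correct verdict, effective feedback (59%).
p_correct_verdict = 0.73
p_good_feedback_given_correct = 0.59
p_both = p_correct_verdict * p_good_feedback_given_correct
print(round(p_both, 2))  # ≈ 0.43
```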

    Examining the Impact of Student-Generated Screencasts on Middle School Science Students’ Interactive Modeling Behaviors, Inquiry Learning, and Conceptual Development

    Student activities involving screencast production can serve as scaffolds to enhance inquiry behavior, heighten explanation development, and encourage the connection of conceptual ideas developed by eighth grade science students engaged in interactive computer modeling. Screencast recordings enabled students to simultaneously combine their narrative explanations with a visual record of their computer modeling activity. Students (n=210) generated numerous screencasts and written explanations during an online exploration regarding global climate change. The quasi-experimental design used in this study prompted student groups in four classrooms to screencast their final explanations concerning their modeling activity, while groups in the four control classrooms used a text entry tool to provide their explanations. Results indicated that student groups constructing screencast explanations spent 72% more time with the model (t=7.13, p<.001, d=2.23) and spoke an average of 131 words compared to the 44 written by control classroom groups (t=3.15, p=.002, d=0.99). Screencast groups were 42% more likely to describe their inquiry activity when prompted by two design components developed to measure on-task behavior (t=2.89, p=.003, d=0.90). Knowledge integration was also heightened, as 24% of the screencast groups provided scientifically normative ideas to support their explanations compared to less than 5% of the text entry groups.
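The statistics reported above pair each t-test with a Cohen's d effect size. A sketch of that style of analysis for two independent groups follows; the group sizes and data are synthetic stand-ins for the screencast and text-entry groups, with only the 131 vs 44 word counts echoing the abstract.

```python
# Sketch of the reported analysis style: Welch's t-test plus Cohen's d
# for two independent groups. Group data are synthetic stand-ins.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
screencast = rng.normal(131, 40, 35)   # e.g. words spoken per group
text_entry = rng.normal(44, 30, 35)    # e.g. words written per group

t, p = stats.ttest_ind(screencast, text_entry, equal_var=False)

def cohens_d(a, b):
    """Cohen's d using a pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                     / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled

print(round(t, 2), round(cohens_d(screencast, text_entry), 2))
```

Reporting d alongside t and p, as the abstract does, conveys the magnitude of the group difference independently of sample size.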

    Perceptions of subject difficulty and subject choices : are the two linked, and if so, how?
