95 research outputs found

    Student Modeling within a Computer Tutor for Mathematics: Using Bayesian Networks and Tabling Methods

    Intelligent tutoring systems rely on student modeling to understand student behavior. The results of student modeling can provide assessment of student knowledge, estimation of a student's current affective state (i.e., boredom, confusion, concentration, frustration, etc.), prediction of student performance, and suggestions for the next tutoring steps. This dissertation has three focuses. The first is better predicting student performance by adding more information, such as student identity and how much assistance students needed. The second is analyzing different performance measures and feature sets for modeling students' short-term and longer-term knowledge. The third is improving affect detectors by adding more features. In this dissertation I make contributions to the field of data mining as well as to educational research. I demonstrate novel Bayesian networks for student modeling and compare them with each other. This work contributes to educational research by broadening the task of analyzing student knowledge to include student knowledge retention, a more important and interesting question for researchers to examine. Additionally, I present a set of new, useful features and show how to use them effectively in real models. For instance, in Chapter 5, I show that the number of different days a student has worked on a skill is a highly predictive feature for knowledge retention. These features are a contribution less to data mining than to education research more broadly, and can be used by other educational researchers or tutoring systems.
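    A common Bayesian-network formulation of student modeling like the one this abstract describes is Bayesian Knowledge Tracing, where a hidden mastery variable is updated after each observed response. The sketch below is illustrative only; the parameter names and values are hypothetical and not taken from the dissertation.

```python
# Minimal Bayesian Knowledge Tracing (BKT) sketch: update P(known)
# for one skill after each observed response. Parameter values here
# (p_learn, p_guess, p_slip) are invented for illustration.

def bkt_update(p_known, correct, p_learn=0.1, p_guess=0.2, p_slip=0.1):
    """Bayes update of P(known) given one correct/incorrect response."""
    if correct:
        evidence = (p_known * (1 - p_slip)) / (
            p_known * (1 - p_slip) + (1 - p_known) * p_guess)
    else:
        evidence = (p_known * p_slip) / (
            p_known * p_slip + (1 - p_known) * (1 - p_guess))
    # Account for the chance of learning between opportunities.
    return evidence + (1 - evidence) * p_learn

p = 0.3  # prior probability of mastery
for response in [1, 1, 0, 1]:
    p = bkt_update(p, response)
print(round(p, 3))  # prints 0.896
```

    Extensions of this kind of model can condition on extra features such as student identity or the amount of assistance requested, which is the direction the dissertation pursues.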

    Can a computer adaptive assessment system determine, better than traditional methods, whether students know mathematics skills?

    Schools use commercial systems specifically for mathematics benchmarking and longitudinal assessment. However, these systems are expensive, and their results often fail to indicate a clear path for teachers to differentiate instruction based on students' individual strengths and weaknesses in specific skills. ASSISTments is a web-based Intelligent Tutoring System used by educators to drive real-time, formative assessment in their classrooms. The software is used primarily by mathematics teachers to deliver homework, classwork, and exams to their students. We have developed a computer adaptive test called PLACEments as an extension of ASSISTments to allow teachers to perform individual student assessment and, by extension, school-wide benchmarking. PLACEments uses a form of graph-based knowledge representation by which the exam results identify the specific mathematics skills that each student lacks. The system additionally provides differentiated practice determined by the students' performance on the adaptive test. In this project, we describe the design and implementation of PLACEments as a skill assessment method and evaluate it in comparison with a fixed-item benchmark.
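    The graph-based idea described above can be sketched as a traversal of a prerequisite graph: when a student misses an item, the test descends into that skill's prerequisites to locate the gap. The skill names and graph below are invented for illustration and do not come from PLACEments itself.

```python
# Hypothetical prerequisite-graph skill assessment sketch: a missed
# skill triggers testing of its prerequisites; a mastered skill prunes
# its entire prerequisite subtree. The graph is invented for example.

prerequisites = {
    "two-step equations": ["one-step equations"],
    "one-step equations": ["integer arithmetic"],
    "integer arithmetic": [],
}

def skills_to_remediate(skill, answered_correctly):
    """Return the set of skills needing practice, descending on misses."""
    if answered_correctly(skill):
        return set()  # mastered: no need to test prerequisites
    missing = {skill}
    for prereq in prerequisites[skill]:
        missing |= skills_to_remediate(prereq, answered_correctly)
    return missing

# Example: the student misses the equations items but knows arithmetic.
known = {"integer arithmetic"}
gaps = skills_to_remediate("two-step equations", lambda s: s in known)
print(sorted(gaps))  # prints ['one-step equations', 'two-step equations']
```

    The returned gap set is exactly what would drive the differentiated practice the abstract mentions: each identified skill maps to targeted assignments.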

    Student Modeling From Different Aspects

    With the wide usage of online tutoring systems, researchers have become interested in mining the data logged by these systems in order to better understand students. A variety of aspects of student learning have become the focus of studies, such as modeling students' mastery status and affective states. At the same time, the Randomized Controlled Trial (RCT), an unbiased method for gaining insight into education, has found its way into Intelligent Tutoring Systems. First, researchers are curious about which system settings work better. Second, a tutoring system used by many students and teachers provides an opportunity to build an RCT infrastructure underlying the system. Given the increasing interest in data mining and RCTs, this thesis focuses on these two aspects. In the first part, we focus on analyzing and mining data from ASSISTments, an online tutoring system run by a team at Worcester Polytechnic Institute. Through the data, we try to answer several questions about different aspects of student learning. The first question is what matters more to student modeling: skill information or student information. The second is whether it is necessary to model students' learning at different opportunity counts. The third concerns the benefits of using partial credit, rather than binary credit, as a measurement of students' learning in RCTs. The fourth focuses on the amount of time students spend wheel spinning in the tutoring system. The fifth studies the tradeoff between the mastery threshold and the time spent in the tutoring system. By answering these five questions, we both propose machine learning methodology that can be applied in educational data mining and present findings from analyzing and mining the data. In the second part, we focus on RCTs within ASSISTments. First, we look at a pilot study of reassessment and relearning, which suggested a better system setting to improve students' robust learning. Second, we propose building an infrastructure for learning within ASSISTments, which provides opportunities to improve the whole educational environment.

    A Foundation For Educational Research at Scale: Evolution and Application

    The complexities of how people learn have plagued researchers for centuries. A range of experimental and non-experimental methodologies have been used to isolate and implement positive interventions for students' cognitive, meta-cognitive, behavioral, and socio-emotional successes in learning. But the face of learning is changing in the digital age. The value of accrued knowledge, prized throughout the industrial age, is being overpowered by the value of curiosity and the ability to ask critical questions. Most students can access the largest free collection of human knowledge (and cat videos) with ease using their phones or laptops and omnipresent cellular and Wi-Fi networks. Viewing this new-age capacity for connection as an opportunity, educational stakeholders have delegated many traditional learning tasks to online environments. With this influx of online learning, student errors can be corrected with immediacy, student data is more prevalent and actionable, and teachers can intervene with efficiency and efficacy. As such, endeavors in educational data mining, learning analytics, and authentic educational research at scale have grown popular in recent years; fields afforded by the luxuries of technology and driven by the age-old goal of understanding how people learn. This dissertation explores the evolution and application of ASSISTments Research, an approach to authentic educational research at scale that leverages ASSISTments, a popular online learning platform, to better understand how people learn. Part I details the evolution and advocacy of two tools that form the research arm of ASSISTments: the ASSISTments TestBed and the Assessment of Learning Infrastructure (ALI). An NSF-funded Data Infrastructure Building Blocks grant (#1724889, $494,644, 2017-2020) outlines goals for the new age of ASSISTments Research as a result of lessons learned in recent years. 
Part II details a personal application of these research tools with a focus on the framework of Self-Determination Theory. The primary facets of this theory, thought to positively affect learning and intrinsic motivation, are investigated in depth through randomized controlled trials targeting Autonomy, Belonging, and Competence. Finally, a synthesis chapter highlights important connections between Parts I & II, offering lessons learned regarding ASSISTments Research and suggesting additional guidance for its future development, while broadly defining contributions to the Learning Sciences community.

    EDM 2011: 4th international conference on educational data mining : Eindhoven, July 6-8, 2011 : proceedings


    The Irish question: an investigation into Irish language self-efficacy beliefs in adults

    The vast majority of adults who have received their education in Ireland undertake compulsory Irish for around 13 years. However, over 60% of adults claim to have no Irish speaking ability (CSO, 2018). This study seeks to assess the relationship between Irish language self-efficacy beliefs and performance on an Irish language proficiency test. Self-efficacy represents a task-specific self-assessment of skills in a specific domain. Utilising a quasi-experimental, quantitative research design, an Irish language proficiency test and a suite of self-efficacy scales were created and administered via an online survey platform. 1,501 participants completed the full manipulation study. Based on results at phase 1, participants were automatically assigned to groups and an intervention was administered. Low performers were provided false, inflated results and efficacy-raising feedback. High performers were provided false, deflated results and efficacy-lowering feedback. A control group was presented with actual results. Phase 2 testing revealed that sources of self-efficacy could be manipulated to significantly affect Irish language performance, with low performers improving average performance by almost 30%. Self-efficacy ratings were significantly reduced in the high-performing group upon receiving the negative intervention. Self-efficacy revealed itself as a more robust predictor of performance than a single Irish skills-based question such as that employed in the Census of Population.

    Characterizing Productive Perseverance Using Sensor-Free Detectors of Student Knowledge, Behavior, and Affect

    Failure is a necessary step in the process of learning. For this reason, myriad research has been dedicated to the study of student perseverance in the presence of failure, leading to several commonly-cited theories and frameworks that characterize productive and unproductive representations of the construct of persistence. While researchers agree that it is important for students to persist when struggling to learn new material, there can be both positive and negative aspects of persistence. What is it, then, that separates productive from unproductive persistence? The purpose of this work is to address this question through the development, extension, and study of data-driven models of student affect, behavior, and knowledge. The increased adoption of computer-based learning platforms in real classrooms has led to unique opportunities to study student learning both at fine levels of granularity and longitudinally at scale. Prior work has leveraged machine learning methods, existing learning theory, and previous education research to explore various aspects of student learning. These include the development of sensor-free detectors that utilize only the student interaction data collected through such learning platforms. Building on this considerable body of prior research, this work employs state-of-the-art machine learning methods in conjunction with the large-scale granular data collected by computer-based learning platforms in alignment with three goals. First, this work focuses on the development of student models that study learning through advancements in student modeling and deep learning methodologies. Second, this dissertation explores the development of tools that incorporate such models to support teachers in taking action in real classrooms to promote productive approaches to learning. 
Finally, this work aims to close the loop by utilizing these detector models to better understand the underlying constructs being measured through their application and their connection to productive perseverance and commonly-observed learning outcomes.
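    A sensor-free detector of the kind this abstract describes predicts a behavior or affect label purely from interaction-log features. The sketch below illustrates the general shape with a simple logistic model; the feature names, weights, and threshold are invented for illustration, and real detectors are trained on human-labeled log data.

```python
# Illustrative sensor-free detector sketch: score one student action
# window for a behavior label (e.g., gaming the system) using only
# interaction-log features. Weights and features are hypothetical.
import math

def detect_behavior(features, weights, bias, threshold=0.5):
    """Logistic model over interaction features; returns (confidence, flag)."""
    z = bias + sum(w * features[name] for name, w in weights.items())
    confidence = 1 / (1 + math.exp(-z))
    return confidence, confidence >= threshold

weights = {"fast_actions": 1.8, "hint_requests": 1.2,
           "seconds_per_attempt": -0.05}
features = {"fast_actions": 3, "hint_requests": 2,
            "seconds_per_attempt": 4.0}
conf, flagged = detect_behavior(features, weights, bias=-4.0)
print(flagged, round(conf, 2))  # prints True 0.97
```

    Applying such detectors longitudinally over a student's log, as the dissertation proposes, is what connects moment-to-moment behavior estimates to longer-term constructs like productive perseverance.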