
    Learner Modeling for Integration Skills in Programming

    Mastery development requires not only acquiring component skills, but also practicing their integration into more complex skills. In learning programming, for example, a student first learns += and loops, then learns how to combine them into a loop that sums a sequence of numbers. The existence of integration skills has been supported by cognitive science research, yet it has rarely been considered in learner modeling, the key component for adaptive assistance in an intelligent tutoring system (ITS). Without modeling integration, an ITS's early assertions of mastery, made after only basic component-skill practice or practice in limited contexts, may merely indicate shallow learning. My dissertation introduces integration skills into learner modeling. To demonstrate this, I chose program comprehension, a task with a complex integrative nature. To provide grounds for skill modeling, I applied a Difficulty Factors Assessment (DFA) approach from cognitive science and identified integration skills, along with generalizable integration difficulty factors, in common basic programming patterns. I used the DFA data to inform the construction of the learner model, CKM-HI, which incorporates integration skills in a hierarchical structure within a Bayesian network (BN). Compared with other machine learning approaches, a BN naturally incorporates domain knowledge and maintains interpretable knowledge states for adaptation decisions. To address the limitations of prediction metrics for evaluating such multi-skill learner models, I proposed and applied a multifaceted evaluation framework. Data-driven evaluations on a real-world dataset show that CKM-HI is superior to two popular multi-skill learner models, CKM and WKT, in predictive performance, parameter plausibility, and expected instructional effectiveness. To evaluate its real-world impact, I built a program comprehension ITS driven by learner modeling; a classroom study deploying this system suggests that CKM-HI can lead to better learning than the CKM model. My dissertation is the first work to systematically demonstrate the value of integration-skill modeling, and it offers novel integration-level learner modeling and multifaceted evaluation approaches applicable to a broader context. Further, my work brings recent ITS infrastructure and techniques to programming education, and contributes an example of taking an interdisciplinary approach to ITS research.
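
    The abstract's own example makes the idea concrete. The Python sketch below shows the two component skills combined into the summation task, then a hand-rolled update of a small hierarchical skill model in the spirit of CKM-HI; the prior probabilities, the noisy-AND gating, and the slip/guess parameters are illustrative assumptions, not values from the dissertation.

        # Component skills: "augmented assignment (+=)" and "loops".
        # Integration skill: combining them into a loop that sums a sequence,
        # e.g.  total = 0
        #       for x in nums:
        #           total += x

        # Prior mastery probabilities for the component skills (assumed values).
        p_plus_eq = 0.7   # P(student has mastered +=)
        p_loop    = 0.6   # P(student has mastered loops)

        # Hierarchical layer: the integration skill depends on both components.
        # Assume a noisy-AND: even with both components mastered, integration
        # is mastered only with probability p_integrate_given_both.
        p_integrate_given_both = 0.5

        # Observation layer: slip and guess, as in knowledge-tracing models.
        p_slip, p_guess = 0.1, 0.2

        # P(integration mastered), marginalizing over independent components.
        p_int = p_plus_eq * p_loop * p_integrate_given_both

        # Predicted probability of answering a summation item correctly.
        p_correct = p_int * (1 - p_slip) + (1 - p_int) * p_guess

        # Bayesian update after observing a correct answer on the item.
        posterior_int = p_int * (1 - p_slip) / p_correct

        print(f"P(correct)               = {p_correct:.3f}")
        print(f"P(integration | correct) = {posterior_int:.3f}")

    The interpretable posterior over a named integration skill is exactly the kind of knowledge state a BN-based learner model can hand to the tutor's adaptation logic.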

    EDM 2011: 4th International Conference on Educational Data Mining, Eindhoven, July 6-8, 2011: Proceedings


    Psychometrics in Practice at RCEC

    A broad range of topics is dealt with in this volume: from combining psychometric generalizability theory and item response theory to ideas for an integrated formative use of data-driven decision making, assessment for learning, and diagnostic testing. A number of chapters pay attention to computerized (adaptive) and classification testing. Other chapters treat the quality of testing in a general sense, while for topics like maintaining standards or the testing of writing ability, the quality of testing is dealt with more specifically. All authors are connected to RCEC as researchers; each presents one of their current research topics and provides some insight into the focus of RCEC. The topics were selected and edited so that the book should be of special interest to educational researchers, psychometricians, and practitioners in educational assessment.
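
    As a concrete illustration of the adaptive-testing theme, here is a minimal sketch of the 2PL item response model driving item selection; the item parameters and the provisional ability estimate are invented for the example and are not drawn from the volume.

        from math import exp

        def p_correct(theta, a, b):
            """2PL probability of a correct response at ability theta."""
            return 1.0 / (1.0 + exp(-a * (theta - b)))

        def information(theta, a, b):
            """Fisher information of a 2PL item at ability theta."""
            p = p_correct(theta, a, b)
            return a * a * p * (1.0 - p)

        # Hypothetical item bank: (label, discrimination a, difficulty b).
        items = [("easy", 1.0, -1.0), ("medium", 1.5, 0.0), ("hard", 1.2, 1.5)]
        theta_hat = 0.3  # current provisional ability estimate

        # Adaptive selection: administer the most informative item next.
        best = max(items, key=lambda it: information(theta_hat, it[1], it[2]))
        print(f"next item: {best[0]}")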

    Dissecting Poisson-based prediction models in association football: A comprehensive look at methodology, assumptions, and accuracy using data from the main European Leagues (2011 – 2022)

    As access to broader and better data increases, data analytics, statistical modeling, and data science in general find ever-growing interest in sports analytics, including association football. It is no secret that both clubs and the sport's higher governing bodies implement data-driven strategies to gain insights and a competitive advantage in play. Recognizing the importance of the sport as a fan and from the point of view of an analyst, this work seeks to contribute to the current body of literature by offering a thorough investigation of one of the most elegant approaches to sports analytics in association football: the Poisson goal model. Based on the simple and intuitive idea that goals in football are rare discrete events that follow the Poisson distribution conditional on team performance, the concept has appealed to many researchers. At the same time, while simplistic at its core, its application to real-world data has been met with much discussion regarding underlying assumptions and methodology. Much of that discussion over the 40 years since the idea was formalized concerns assumptions such as the applicability of the Poisson distribution, score interdependence, overdispersion, and parameter stability. In the present work, we take a step back and reexamine the idea, methodology, and assumptions in light of the most recent data from Europe's major leagues. Furthermore, we examine novel concepts such as xG (expected goals). Overall, some changing dynamics are revealed, and some of the propositions made for the model no longer hold given recent developments in the sport.
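
    To make the core idea explicit: under independent Poisson goal counts, match-outcome probabilities follow from summing over the joint score-line grid. The sketch below uses assumed goal rates; a real model would estimate them from team attack and defence strengths, as in the literature the thesis reviews.

        from math import exp, factorial

        # Assumed expected-goal rates for the home and away sides.
        lam_home, lam_away = 1.6, 1.1

        def pois(k, lam):
            """Poisson probability of exactly k goals given mean lam."""
            return lam**k * exp(-lam) / factorial(k)

        # Joint score-line probabilities under the independence assumption,
        # truncated at 10 goals per side (tail mass beyond is negligible).
        MAX_GOALS = 10
        p_home = p_draw = p_away = 0.0
        for h in range(MAX_GOALS + 1):
            for a in range(MAX_GOALS + 1):
                p = pois(h, lam_home) * pois(a, lam_away)
                if h > a:
                    p_home += p
                elif h == a:
                    p_draw += p
                else:
                    p_away += p

        print(f"P(home win) = {p_home:.3f}, P(draw) = {p_draw:.3f}, "
              f"P(away win) = {p_away:.3f}")

    The assumptions the thesis revisits, notably score interdependence and overdispersion, are precisely deviations from this independence baseline.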

    Prediction Methods for Structured Data: Graphs, Orders, and Time Series (構造化データに対する予測手法:グラフ,順序,時系列)

    Kyoto University doctoral dissertation (new system, course doctorate): Doctor of Informatics, Kō No. 23439, Jōhaku No. 769, shelf mark 新制||情||131 (University Library). Graduate School of Informatics, Department of Intelligence Science and Technology, Kyoto University. Examination committee: Prof. Hisashi Kashima (chair), Prof. Akihiro Yamamoto, Prof. Tatsuya Akutsu. Qualified under Article 4, Paragraph 1 of the Degree Regulations.

    A framework for structuring prerequisite relations between concepts in educational textbooks

    We are experiencing an increasing availability of digital educational resources and a rise in self-regulated learning. In this scenario, the development of automatic strategies for organizing the knowledge embodied in educational resources has tremendous potential for building personalized learning paths and applications such as intelligent textbooks and recommender systems for learning materials. To this aim, a straightforward approach consists in enriching the educational materials with a concept graph, i.e., a knowledge structure where key concepts of the subject matter are represented as nodes and prerequisite dependencies among such concepts are explicitly represented as well. This thesis therefore focuses on prerequisite relations in textbooks and has two main research goals. The first goal is to define a methodology for systematically annotating prerequisite relations in textbooks, which serves both for analysing the prerequisite phenomenon and for evaluating and training automatic extraction methods. The second goal concerns the automatic extraction of prerequisite relations from textbooks. These two research goals guide the design of PRET, a comprehensive framework for supporting researchers working on this issue. The framework described in the present thesis allows researchers to conduct the following tasks: 1) manual annotation of educational texts, in order to create datasets to be used for machine learning algorithms or as gold standards for evaluation; 2) annotation analysis, for investigating inter-annotator agreement, graph metrics, and in-context linguistic features; 3) data visualization, for visually exploring datasets and gaining insights into the problem that may lead to improved algorithms; 4) automatic extraction of prerequisite relations. For the automatic extraction, we developed a method based on burst analysis of concepts in the textbook and evaluated it against the gold dataset of prerequisite-relation annotations, comparing it with other metrics for prerequisite-relation extraction.
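
    As a minimal illustration of how such a concept graph supports learning paths, the Python sketch below topologically sorts a tiny prerequisite graph; the concepts and edges are invented for the example and do not come from the PRET datasets.

        from graphlib import TopologicalSorter  # Python 3.9+

        # Hypothetical concept graph: each concept maps to the set of
        # concepts that are its prerequisites (i.e., must be learned first).
        prerequisites = {
            "variables": set(),
            "loops": {"variables"},
            "lists": {"variables"},
            "list comprehensions": {"loops", "lists"},
        }

        # Any topological order of the graph is one valid learning path.
        path = list(TopologicalSorter(prerequisites).static_order())
        print(" -> ".join(path))
        # e.g. variables -> loops -> lists -> list comprehensions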

    Enhancing Software Project Outcomes: Using Machine Learning and Open Source Data to Employ Software Project Performance Determinants

    Many factors can influence the ongoing management and execution of technology projects. Some of these elements are known a priori during the project planning phase. Others require real-time data gathering and analysis throughout the lifetime of a project. These real-time project data elements are often neglected, misclassified, or otherwise misinterpreted during the project execution phase, resulting in increased risk of delays, quality issues, and missed business opportunities. The overarching motivation for this research is to offer reliable improvements in software technology management and delivery. The primary purpose is to discover and analyze the impact, role, and level of influence of various project-related data on the ongoing management of technology projects. The study leverages open source data regarding software performance attributes, with the goal of tempering the subjectivity currently exercised by project managers (PMs) with quantifiable measures when assessing project execution progress. Modern-day PMs who manage software development projects are charged with an arduous task: they often obtain their inputs from technical leads, who tend to be significantly more technical than the PMs themselves. When assessing software projects, PMs perform their role subject to the limitations of their own capabilities and competencies, while contending with the stresses of the business environment, the policies and procedures dictated by their organizations, and resource constraints. The second purpose of this study is to propose methods by which conventional project assessment processes can be enhanced using quantitative methods that utilize real-time project execution data. Transferability of academic research to industry application is specifically addressed via a delivery framework that provides meaningful data to industry practitioners.
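
    As a hedged sketch of the quantitative direction described here, the following example fits a simple classifier to synthetic real-time execution signals to score delay risk; the feature names, data, and model choice are illustrative assumptions, not the dissertation's actual determinants or datasets.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n = 500
        # Synthetic real-time signals (hypothetical): open defect count,
        # commit-velocity drop, and requirements churn, each standardized.
        X = rng.normal(size=(n, 3))
        # Synthetic ground truth: delay risk driven by a noisy linear rule.
        logits = 1.2 * X[:, 0] + 0.8 * X[:, 1] + 0.5 * X[:, 2]
        y = (logits + rng.normal(scale=0.5, size=n) > 0).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        clf = LogisticRegression().fit(X_tr, y_tr)
        print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")

    A quantifiable risk score of this kind is one way real-time execution data could temper the subjective assessments the abstract describes.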