
    A data-assisted approach to supporting instructional interventions in technology enhanced learning environments

    The design of intelligent learning environments requires significant up-front resources and expertise. These environments generally maintain complex and comprehensive knowledge bases describing pedagogical approaches, learner traits, and content models. This has limited the influence of these technologies in higher education, which instead largely uses learning content management systems to deliver non-classroom instruction. This dissertation puts forth a data-assisted approach to embedding intelligence within learning environments. In this approach, instructional experts are provided with summaries of the activities of learners who interact with technology enhanced learning tools. These experts, who may include instructors, instructional designers, educational technologists, and others, use the data to gain insight into the activities of their learners. These insights lead experts to form instructional interventions that can enhance the learning experience. The novel aspect of this approach is that the actors in the intelligent learning environment are no longer just learners and software constructs, but also the educational experts who support the learning process. The kinds of insights and interventions that come from applying the data-assisted approach vary with the domain being taught, the epistemology and pedagogical techniques employed, and the particulars of the cohort being instructed. This dissertation describes three investigations using the data-assisted approach. The first demonstrates the effects of giving instructors novel sociogram-based visualizations of online asynchronous discourse. By making instructors aware of both their own discussion habits and those of their learners, the visualizations help instructors measure the effect of their teaching practice.
This enables them to change their activities in response to the social networks that form between their learners, allowing them to react to deficiencies in the learning environment. Through these visualizations it is demonstrated that instructors can effectively change their pedagogy based on data about their students’ interactions. The second investigation applies unsupervised machine learning to the viewing habits of learners using lecture capture facilities. By clustering learners into groups based on behaviour and correlating those groups with academic outcomes, a model of positive learning activity can be described. This is particularly useful for instructional designers who are evaluating the role of learning technologies in programs, as it contextualizes how technologies enable learner success. Through this investigation it is demonstrated that viewership data can assist designers in building higher-level models of learning for evaluating the use of specific tools in blended learning situations. Finally, the results of applying supervised machine learning to the indexing of lecture video are described. Usage data collected from software is increasingly used by software engineers to make technologies more customizable and adaptable. In this dissertation, it is demonstrated that supervised machine learning can provide human-like indexing of lecture videos that is more accurate than current techniques. Further, these indices can be customized for groups of learners, increasing the level of personalization in the learning environment. This investigation demonstrates that the data-assisted approach can also serve application developers who are building personalization features into intelligent learning environments.
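The second investigation's pipeline, clustering learners on lecture-viewing behaviour and then correlating clusters with academic outcomes, can be sketched roughly as follows. Everything here is an illustrative assumption, not the dissertation's actual data or method: the two features (videos watched, average session minutes), the toy learner records and grades, and the minimal two-cluster k-means.

```python
# Illustrative sketch: cluster learners on lecture-viewing features, then
# compare academic outcomes across clusters. Toy data and a minimal
# k-means; the real study's features and algorithm are not specified here.
import random

# Hypothetical per-learner features: (videos watched, avg minutes per session)
learners = {
    "a": (40, 25), "b": (38, 30), "c": (35, 28),   # frequent viewers
    "d": (5, 10),  "e": (8, 12),  "f": (3, 8),     # infrequent viewers
}
grades = {"a": 85, "b": 80, "c": 88, "d": 55, "e": 60, "f": 50}

def kmeans(points, k=2, iters=50, seed=0):
    """Plain k-means on tuples of numbers; returns the cluster centers."""
    random.seed(seed)
    centers = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # Recompute each center as the mean of its members (keep old if empty).
        centers = [tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers

def assign(p, centers):
    return min(range(len(centers)),
               key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))

centers = kmeans(list(learners.values()))
groups = {}
for name, feats in learners.items():
    groups.setdefault(assign(feats, centers), []).append(name)

# Correlate cluster membership with outcome: mean grade per cluster.
for cid, members in sorted(groups.items()):
    mean_grade = sum(grades[m] for m in members) / len(members)
    print(cid, sorted(members), round(mean_grade, 1))
```

On this toy data the frequent viewers end up in one cluster with a higher mean grade, which is the kind of "positive learning activity" profile the abstract describes handing to instructional designers.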
Through this work, it is shown that a data-assisted approach to supporting instructional interventions in technology enhanced learning environments is not only possible but can positively impact the teaching and learning process. By making the online activities of learners available to instructional experts, those experts can better understand and react to the patterns of use that develop, making for a more effective and personalized learning environment. This approach differs from traditional methods of building intelligent learning environments, which apply learning theories a priori to instructional design and do not leverage the in situ data collected about learners.
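The sociogram visualizations from the first investigation rest on a directed "who replies to whom" graph built from discussion posts. A minimal sketch, assuming a hypothetical post format and simple degree counts (the dissertation's actual data model is not specified here):

```python
# Illustrative sketch: build a directed reply graph from forum posts and
# summarize each participant's degree -- the kind of signal a sociogram
# visualization would surface to an instructor.
from collections import Counter

# Hypothetical posts: (author, author_replied_to or None for a new thread)
posts = [
    ("instructor", None),
    ("alice", "instructor"),
    ("bob", "alice"),
    ("instructor", "bob"),
    ("carol", "instructor"),
    ("alice", "carol"),
]

# Edge (src, dst) means "src replied to dst"; Counter keeps multiplicities.
edges = Counter((a, b) for a, b in posts if b is not None)
out_degree = Counter()
in_degree = Counter()
for (src, dst), n in edges.items():
    out_degree[src] += n
    in_degree[dst] += n

# An instructor-centric summary: how central is each participant?
for who in sorted(set(out_degree) | set(in_degree)):
    print(who, "replies sent:", out_degree[who], "replies received:", in_degree[who])
```

A high instructor in-degree relative to learner-to-learner edges would suggest discussion funnels through the instructor, one deficiency the visualizations could reveal.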

    Modeling second language learners' interlanguage and its variability: a computer-based dynamic assessment approach to distinguishing between errors and mistakes

    Despite a long history, interlanguage variability research remains contested, as most paradigms do not distinguish between competence and performance. While interlanguage performance has been shown to be variable, determining whether interlanguage competence is subject to random and/or systematic variation is complex, because a distinction between competence-dependent errors and performance-related mistakes must be established to properly represent interlanguage competence. This thesis proposes a dynamic assessment model grounded in sociocultural theory to distinguish between errors and mistakes in texts written by learners of French, and then investigates the extent to which interlanguage competence varies across time, text types, and students. The key outcomes include:
    1. An expanded model based on dynamic assessment principles to distinguish between errors and mistakes, which also provides the structure to create and observe learners’ zone of proximal development;
    2. A method to increase the accuracy of the part-of-speech tagging procedure, whose reliability correlates with the number of incorrect words in learners’ texts;
    3. A sociocultural insight into interlanguage variability research.
    Results demonstrate that interlanguage competence is as variable as performance. The main finding shows that knowledge over time is subject to not only systematic but also unsystematic variation.
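The error/mistake distinction under dynamic assessment can be sketched as a graduated-prompt procedure: mediation moves from implicit to explicit, a deviation the learner self-corrects under light mediation is treated as a performance mistake, and one that survives even the most explicit prompt as a competence error. The prompt scale, threshold, and example below are illustrative assumptions, not the thesis's actual model.

```python
# Illustrative sketch of graduated-prompt classification inspired by
# dynamic assessment. A deviation fixed under light mediation is a
# performance "mistake"; one that survives all prompts is a competence
# "error". The prompt scale and labels are assumed for illustration.
PROMPTS = [
    "pause at the sentence",          # most implicit mediation
    "point to the incorrect word",
    "name the grammatical category",
    "give the metalinguistic rule",
    "provide the correct form",       # most explicit mediation
]

def classify(deviation, prompts_needed):
    """prompts_needed: number of prompts given before self-correction,
    or None if the learner never produced the correct form."""
    if prompts_needed is None or prompts_needed >= len(PROMPTS):
        return (deviation, "error")      # competence-dependent
    return (deviation, "mistake")        # performance-related

print(classify("le maison", 1))     # → ('le maison', 'mistake')
print(classify("le maison", None))  # → ('le maison', 'error')
```

Logging the prompt level at which each correction occurs is also what lets the procedure trace a learner's zone of proximal development over time.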

    Assistance à la construction et à la comparaison de techniques de diagnostic des connaissances

    Building and comparing knowledge diagnostics is a challenge in the field of Technology Enhanced Learning (TEL) systems. A knowledge diagnostic aims to infer the knowledge mastered, or not, by a student in a given learning domain (such as high-school mathematics) from the student traces recorded by the TEL system. Knowledge diagnostics are widely used, but they depend strongly on the learning domain and are not well formalized. Thus there exists no method or tool to build, compare, and evaluate different diagnostics applied to a given learning domain. Similarly, using a diagnostic in two different domains usually implies re-implementing much of it from scratch. Yet the ability to compare and reuse knowledge diagnostics would reduce engineering costs, strengthen evaluation, and help knowledge diagnostic designers choose a diagnostic. We propose a method, reified in a first platform, to assist knowledge diagnostic designers in building and comparing knowledge diagnostics, based on a new formalization of the diagnostic and of student traces. To assist construction, we use a semi-automatic machine learning algorithm guided by an ontology of the traces and of the domain knowledge, defined by the designer. To assist comparison, we apply a set of comparison criteria (either statistical or specific to the field of TEL systems) to the results of each diagnostic on a given set of traces. The main contribution is that our method is generic over diagnostics, meaning that very different diagnostics can be built and compared, unlike previous work on this topic. We evaluated our work through three experiments. The first applied our method to three different domains and trace sets (geometry, reading, and surgery), building and comparing five different knowledge diagnostics in cross-validation.
The second experiment designed and implemented a new comparison criterion specific to TEL systems: the impact of the knowledge diagnostic on a pedagogical decision, namely the choice of the type of help to give a student. The last experiment designed and added a new diagnostic to our platform, in collaboration with an expert in didactics.
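The comparison step, applying shared criteria to each diagnostic's output on a common set of traces, can be sketched as follows. The two toy diagnostics, the trace format, and accuracy against expert labels as the sole criterion are all illustrative assumptions; the platform's actual formalization and criteria are much richer.

```python
# Illustrative sketch: run several (toy) knowledge diagnostics on the same
# student traces and score them with one shared criterion -- accuracy
# against hypothetical expert mastery labels.

# Hypothetical traces: each is (num_correct_attempts, num_attempts)
traces = [(9, 10), (2, 10), (5, 10), (7, 10)]
expert_labels = [True, False, False, True]  # expert judgment: skill mastered?

def threshold_diagnostic(trace):
    """Mastery if the success rate clears a fixed cutoff."""
    correct, total = trace
    return correct / total >= 0.6

def streak_diagnostic(trace):
    """Mastery only after many correct attempts in absolute terms."""
    correct, total = trace
    return correct >= 8

def accuracy(diagnostic, traces, labels):
    """Shared comparison criterion: agreement rate with expert labels."""
    hits = sum(diagnostic(t) == y for t, y in zip(traces, labels))
    return hits / len(traces)

for diag in (threshold_diagnostic, streak_diagnostic):
    print(diag.__name__, accuracy(diag, traces, expert_labels))
```

Because every diagnostic is reduced to the same trace-in, judgment-out interface, adding a criterion (statistical, or pedagogical like the help-choice impact above) means adding one scoring function, which is the genericity the abstract claims.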