    Analyzing collaborative learning processes automatically

    In this article we describe the emerging area of text classification research focused on the problem of collaborative learning process analysis, both from a broad perspective and more specifically in terms of a publicly available tool set called TagHelper tools. Analyzing the variety of pedagogically valuable facets of learners' interactions is a time-consuming and effortful process. Improving automated analyses of such highly valued processes of collaborative learning, by adapting and applying recent text classification technologies, would make it less arduous to obtain insights from corpus data. This endeavor also holds the potential to substantially improve online instruction, both by providing teachers and facilitators with reports about the groups they are moderating and by triggering context-sensitive collaborative learning support on an as-needed basis. In this article we report on an interdisciplinary research project that has been investigating the effectiveness of applying text classification technology to a large CSCL corpus analyzed by human coders using a theory-based multidimensional coding scheme. We report promising results and include an in-depth discussion of important issues, such as reliability, validity, and efficiency, that should be considered when deciding on the appropriateness of adopting a new technology such as TagHelper tools. One major technical contribution of this work is a demonstration that an important part of making text classification technology effective for this purpose is designing and building linguistic pattern detectors, otherwise known as features, that can be extracted reliably from texts and that have high predictive power for the categories of discourse actions the CSCL community is interested in.
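
    As an illustrative sketch of the feature-based classification described above, the following Python fragment combines surface n-grams with one hand-built linguistic pattern detector (a short list of challenge cue phrases) in a scikit-learn pipeline. The segments, codes, and cue list are hypothetical placeholders; TagHelper tools itself is a separate application and is not reproduced here.

        # Sketch: classify dialogue segments into discourse codes by combining
        # n-gram features with one hand-built linguistic pattern detector.
        # All data below is invented for illustration.
        import numpy as np
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import FeatureUnion, Pipeline
        from sklearn.preprocessing import FunctionTransformer

        segments = ["I think the force acts downward",
                    "why do you say that?",
                    "gravity pulls it down, so I agree",
                    "how do you know the force is constant?"]
        codes = ["claim", "challenge", "claim", "challenge"]

        # One simple "pattern detector": cue phrases that signal a challenge.
        challenge_cues = ["why", "how do you know", "are you sure"]

        def cue_features(texts):
            return np.array([[float(cue in t.lower()) for cue in challenge_cues]
                             for t in texts])

        features = FeatureUnion([
            ("ngrams", CountVectorizer(ngram_range=(1, 2))),
            ("cues", FunctionTransformer(cue_features)),
        ])
        clf = Pipeline([("features", features), ("model", LogisticRegression())])
        clf.fit(segments, codes)
        print(clf.predict(["but why would it act downward?"]))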

    Beyond A/B Testing: Sequential Randomization for Developing Interventions in Scaled Digital Learning Environments

    Randomized experiments ensure the robust causal inference that is critical to effective learning analytics research and practice. However, traditional randomized experiments, like A/B tests, are limited in large-scale digital learning environments. While traditional experiments can accurately compare two treatment options, they are less able to inform how to adapt interventions to continually meet learners' diverse needs. In this work, we introduce a trial design for developing adaptive interventions in scaled digital learning environments -- the sequential randomized trial (SRT). With the goal of improving the learner experience and developing interventions that benefit all learners at all times, SRTs inform how to sequence, time, and personalize interventions. In this paper, we provide an overview of SRTs and illustrate the advantages they hold over traditional experiments. We describe a novel SRT run in a large-scale data science MOOC. The trial results contextualize how learner engagement can be addressed through inclusive, culturally targeted reminder emails. We also provide practical advice for researchers who aim to run their own SRTs to develop adaptive interventions in scaled digital learning environments.
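
    The following Python sketch illustrates the core mechanic of an SRT under simplified assumptions: learners are randomized at a first decision point, and those who have not engaged are re-randomized among follow-up options. The condition names and the engagement check are hypothetical and do not reproduce the MOOC trial described in the paper.

        # Sketch of a two-stage sequential randomized trial (SRT): stage-1
        # randomization for everyone, re-randomization for non-responders.
        import random

        FIRST_STAGE = ["plain_reminder", "culturally_targeted_reminder"]
        SECOND_STAGE = ["second_email", "no_further_contact"]

        def run_srt(learners, engaged_after_stage1):
            """Assign each learner a sequence of interventions.

            engaged_after_stage1 simulates whether a learner engaged after
            the first intervention; in a real trial this is observed.
            """
            assignments = {}
            for learner in learners:
                first = random.choice(FIRST_STAGE)        # stage-1 randomization
                if engaged_after_stage1(learner, first):
                    assignments[learner] = (first, None)  # responder: stop here
                else:
                    second = random.choice(SECOND_STAGE)  # re-randomize
                    assignments[learner] = (first, second)
            return assignments

        demo = run_srt(["a", "b", "c"], lambda l, arm: random.random() < 0.3)
        print(demo)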

    A Methodology for Discovering how to Adaptively Personalize to Users using Experimental Comparisons

    We explain and provide examples of a formalism that supports the methodology of discovering how to adapt and personalize technology by combining randomized experiments with variables associated with user models. We characterize a formal relationship between the use of technology to conduct A/B experiments and the use of technology for adaptive personalization. The MOOClet formalism [11] captures the equivalence between experimentation and personalization in its conceptualization of modular components of a technology. This motivates a unified software design pattern that enables technology components that can be compared in an experiment to also be adapted based on contextual data, or personalized based on user characteristics. With the aid of a concrete use case, we illustrate the potential of the MOOClet formalism for a methodology that uses randomized experiments with alternative micro-designs to discover how to adapt technology based on user characteristics, and that then dynamically implements these personalized improvements in real time.
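
    A minimal Python sketch of the design pattern this equivalence motivates is given below: one component holds several versions of a piece of content, and a swappable policy decides which version a user sees, so replacing a uniform-random policy with a rule over user-model variables turns an experiment into personalization. The class and policy names are illustrative and are not the MOOClet implementation.

        # Sketch: one component, two interchangeable policies. Random choice
        # makes it an A/B experiment; a rule on user variables personalizes it.
        import random

        class Mooclet:
            def __init__(self, versions, policy):
                self.versions = versions
                self.policy = policy  # callable: (versions, user) -> version

            def serve(self, user):
                return self.policy(self.versions, user)

        def ab_policy(versions, user):
            return random.choice(versions)  # pure experimentation

        def personalize_policy(versions, user):
            # adapt on a (hypothetical) user-model variable
            return versions[0] if user.get("prior_knowledge") == "low" else versions[1]

        explanation = Mooclet(["step-by-step hint", "terse formula"], ab_policy)
        print(explanation.serve({"prior_knowledge": "low"}))
        explanation.policy = personalize_policy  # same component, now adaptive
        print(explanation.serve({"prior_knowledge": "low"}))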

    Supporting mediated peer-evaluation to grade answers to open-ended questions

    We show an approach to the semi-automatic grading of answers given by students to open-ended questions (open answers). We use both peer evaluation and teacher evaluation. A learner is modeled by her Knowledge and by the quality of her assessments (Judgment). The data generated by the peer and teacher evaluations, and by the learner models, are represented in a Bayesian network, in which the grades of the answers and the elements of the learner models are variables whose values follow a probability distribution. The initial state of the network is determined by the peer-assessment data. Then, each teacher's grading of an answer triggers evidence propagation in the network. The framework is implemented in a web-based system. We also present an experimental activity, set up to verify the effectiveness of the approach in terms of the correctness of system grading, the amount of teacher work required, and the correlation of the system's outputs with the teacher's grades and the students' final exam grades.
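
    The following toy Python example conveys the flavor of the evidence propagation step under drastic simplification: a single grader's Judgment is a hidden binary variable, and each observed agreement or disagreement with the teacher updates the belief about it via Bayes' rule. The paper's Bayesian network is far richer, and all probabilities below are invented for illustration.

        # Toy Bayes update: how teacher gradings shift the belief that a peer
        # grader has good Judgment. Numbers are made up for illustration.
        p_good = 0.5              # prior P(Judgment = good)
        p_agree_given_good = 0.8  # good judges usually match the teacher
        p_agree_given_poor = 0.3

        def update(p_good, agreed):
            """Posterior P(good) after one (dis)agreement with the teacher."""
            like_good = p_agree_given_good if agreed else 1 - p_agree_given_good
            like_poor = p_agree_given_poor if agreed else 1 - p_agree_given_poor
            num = like_good * p_good
            return num / (num + like_poor * (1 - p_good))

        for agreed in [True, True, False]:  # teacher grades three answers
            p_good = update(p_good, agreed)
            print(f"agreed={agreed}: P(good judgment) = {p_good:.2f}")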

    Where does good evidence come from?

    This paper started as a debate between the two authors. Both authors present a series of propositions about quality standards in education research. Cook's propositions, as might be expected, concern the importance of experimental trials for establishing the security of causal evidence, but they also include some important practical and acceptable alternatives, such as regression discontinuity analysis. Gorard's propositions, again as might be expected, tend to place experimental trials within a larger mixed-methods sequence of research activities, treating them as important but without giving them primacy. The paper concludes with a synthesis of these ideas, summarising the many areas of agreement and clarifying the few areas of disagreement. The latter include what proportion of available research funds should be devoted to trials, how urgent the need for more trials is, and whether the call for more truly mixed-methods work requires a major shift in the community.
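
    As a sketch of the regression discontinuity design mentioned among Cook's acceptable alternatives, the following Python fragment assigns a simulated treatment by a cutoff on a pretest score and reads the effect off as the jump between local linear fits at the cutoff. All data and numbers are simulated for illustration only.

        # Regression discontinuity sketch: treatment assigned below a cutoff,
        # effect estimated as the outcome jump at the cutoff. Simulated data.
        import numpy as np

        rng = np.random.default_rng(0)
        pretest = rng.uniform(0, 100, 2000)
        treated = pretest < 50  # e.g. tutoring offered below the cutoff
        outcome = 0.5 * pretest + 8 * treated + rng.normal(0, 5, 2000)

        # Local linear fit on each side of the cutoff, compared at the boundary.
        window = (pretest > 40) & (pretest < 60)
        left, right = window & treated, window & ~treated
        fit_l = np.polyfit(pretest[left], outcome[left], 1)
        fit_r = np.polyfit(pretest[right], outcome[right], 1)
        effect = np.polyval(fit_l, 50) - np.polyval(fit_r, 50)
        print(f"estimated effect at the cutoff: {effect:.1f} (true effect: 8)")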

    An experiment with ontology mapping using concept similarity

    This paper describes a system for automatically mapping between concepts in different ontologies. The motivation for the research stems from the Diogene project, in which the project's own ontology covering the ICT domain is mapped to external ontologies so that their associated content can automatically be included in the Diogene system. An approach based on measuring the similarity of concepts is introduced, in which standard Information Retrieval indexing techniques are applied to concept descriptions. A matrix representing the similarity of the concepts in the two ontologies is generated, and a mapping is performed based on two parameters: the domain coverage of the ontologies and their levels of granularity. Finally, some initial experimentation is presented, which suggests that our approach meets the project's unique set of requirements.
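
    A minimal Python sketch of the similarity-matrix step, assuming TF-IDF indexing and cosine similarity as the Information Retrieval machinery, is given below. The concept descriptions and the 0.2 acceptance threshold are invented for illustration; the full approach additionally weighs domain coverage and granularity.

        # Sketch: index concept descriptions with TF-IDF, build a pairwise
        # similarity matrix, and map each concept to its best match above a
        # threshold. Ontologies and threshold are illustrative.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        onto_a = {"Router": "device that forwards data packets between networks",
                  "Compiler": "program translating source code to machine code"}
        onto_b = {"Gateway": "network device that routes packets between networks",
                  "Interpreter": "program that executes source code directly"}

        docs = list(onto_a.values()) + list(onto_b.values())
        tfidf = TfidfVectorizer().fit_transform(docs)
        sim = cosine_similarity(tfidf[:len(onto_a)], tfidf[len(onto_a):])

        for i, name_a in enumerate(onto_a):
            j = sim[i].argmax()
            if sim[i, j] > 0.2:
                print(f"{name_a} -> {list(onto_b)[j]} (similarity {sim[i, j]:.2f})")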

    Parametric Surfaces for Augmented Architecture representation

    Augmented Reality (AR) is a growing communication channel that responds to the need to extend reality with additional information, offering easy and engaging access to digital data. AR for architectural representation allows simple interaction with 3D models, facilitating the spatial understanding of complex volumes and of the topological relationships between parts, and overcoming some limitations of Virtual Reality. Over the last decade, developments in the AR pipeline have brought significant advances in its technological and algorithmic aspects, while paying less attention to the generation of the 3D models themselves. For this reason, the article explores the construction of the basic geometries used to generate 3D models, highlighting the relationship between geometry and topology, which is fundamental for a consistent distribution of surface normals. Moreover, a critical evaluation of corrective workflows for existing 3D models is presented through the analysis of a complex architectural case study, the virtual model of Villa del Verginese, an emblematic example of the topological problems that emerged. The final aim of the paper is to refocus attention on 3D model construction, suggesting some "good practices" for preventing, minimizing, or correcting topological problems and extending the accessibility of AR for people engaged in architectural representation.
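
    One concrete check behind such good practices, sketched below in Python under simplified assumptions, is face-orientation consistency: in a consistently oriented triangle mesh, two faces sharing an edge must traverse it in opposite directions, otherwise their normals disagree. The two-triangle meshes are toy examples, not the Villa del Verginese model.

        # Sketch: detect inconsistently oriented faces in a triangle mesh.
        # If two faces contain the same *directed* edge, their normals disagree.

        def directed_edges(tri):
            a, b, c = tri
            return [(a, b), (b, c), (c, a)]

        def inconsistent_pairs(faces):
            """Return index pairs of faces sharing an identically directed edge."""
            seen, bad = {}, []
            for i, tri in enumerate(faces):
                for edge in directed_edges(tri):
                    if edge in seen:                 # same direction seen before
                        bad.append((seen[edge], i))  # -> flipped normal
                    seen[edge] = i
            return bad

        good = [(0, 1, 2), (2, 1, 3)]     # shared edge traversed both ways: OK
        flipped = [(0, 1, 2), (1, 2, 3)]  # shared edge traversed the same way
        print(inconsistent_pairs(good))      # []
        print(inconsistent_pairs(flipped))   # [(0, 1)]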

    Enhancing simulation education with intelligent tutoring systems

    The demand for education in the area of simulation is on the increase. This paper describes how education in the field of simulation can take advantage of intelligent tutoring to enhance the educational process. For this purpose, the paper gives an overview of the objectives and content of a comprehensive course in discrete-event simulation. The architecture of an intelligent tutoring system is presented, and it is discussed how these sophisticated learning aids offer individualised student guidance and support within a learning environment. The paper then introduces a prototype intelligent tutoring system, the simulation tutor, and suggests how the system might be developed to enhance education in simulation.
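
    A skeletal Python sketch of that classic architecture is given below: a domain model listing topics and prerequisites, a student model tracking estimated mastery, and a pedagogical rule that picks the next topic. The topics, update rule, and mastery threshold are illustrative assumptions, not the simulation tutor's actual design.

        # Skeleton of an intelligent tutoring system: domain model, student
        # model, and a pedagogical rule choosing what to teach next.
        DOMAIN = {  # topic -> prerequisite topics (illustrative)
            "event lists": [],
            "random variates": [],
            "queueing models": ["event lists", "random variates"],
        }

        class StudentModel:
            def __init__(self):
                self.mastery = {t: 0.0 for t in DOMAIN}

            def record(self, topic, score):
                # optimistic update: keep the best score observed so far
                self.mastery[topic] = max(self.mastery[topic], score)

        def next_topic(student, threshold=0.7):
            """Pick an unmastered topic whose prerequisites are all mastered."""
            for topic, prereqs in DOMAIN.items():
                if student.mastery[topic] < threshold and \
                   all(student.mastery[p] >= threshold for p in prereqs):
                    return topic
            return None  # curriculum complete

        s = StudentModel()
        s.record("event lists", 0.9)
        print(next_topic(s))  # -> "random variates" (prereq for queueing models)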