
    Students' language in computer-assisted tutoring of mathematical proofs

    Truth and proof are central to mathematics. Proving (or disproving) seemingly simple statements often turns out to be one of the hardest mathematical tasks. Yet doing proofs is rarely taught in the classroom. Studies on cognitive difficulties in learning to do proofs have shown that pupils and students not only often fail to understand or apply basic formal reasoning techniques and do not know how to use formal mathematical language but, at a far more fundamental level, also do not understand what it means to prove a statement, or do not see the purpose of proof at all. Since insight into the importance of proof, and the ability to do proofs, can only be learnt through practice, learning support through individualised tutoring is in demand. This volume presents part of an interdisciplinary project, set at the intersection of pedagogical science, artificial intelligence, and (computational) linguistics, which investigated the issues involved in providing computer-based tutoring of mathematical proofs through dialogue in natural language. The ultimate goal in this context, addressing the above-mentioned need for learning support, is to build intelligent automated tutoring systems for mathematical proofs. The research presented here focuses on the language that students use while interacting with such a system: its linguistic properties and its computational modelling. Contributions are made at three levels: first, an analysis of the language phenomena found in students' input to a (simulated) proof tutoring system, together with a quantitative assessment of the variety of students' verbalisations; second, a general computational processing strategy for informal mathematical language, along with methods of modelling prominent language phenomena; and third, an evaluation, based on the collected corpora, of the prospects for natural language as an input modality for proof tutoring systems.

    Interactive-Constructive-Active-Passive: The Relative Effectiveness of Differentiated Activities on Students' Learning

    From the instructional perspective, the scope of "active learning" in the literature is very broad and includes all sorts of classroom activities that engage students with the learning experience. However, classifying all classroom activities as a mode of "active learning" simply ignores the unique cognitive processes associated with each type of activity. The lack of an extensive framework and taxonomy regarding the relative effectiveness of these "active" activities makes it difficult to compare and contrast the value of conditions in different studies in terms of student learning. Recently, Chi (2009) proposed a framework of differentiated overt learning activities (DOLA) as active, constructive, and interactive, based on their underlying cognitive principles and their effectiveness on students' learning outcomes. The motivating question behind this framework is whether some types of engagement affect learning outcomes more than others. This work evaluated the effectiveness and applicability of the DOLA framework to learning activities for STEM classes. After classifying overt learning activities as active, constructive, or interactive, I then tested the ICAP hypothesis, which states that student learning is more effective in interactive activities than constructive activities, which are more effective than active activities, which are more effective than passive activities. I conducted two studies (Study 1 and Study 2) to determine how and to what degree differentiated activities affected students' learning outcomes. For both studies, I measured students' knowledge of materials science and engineering concepts. Results for Study 1 showed that students scored higher on all post-class quiz questions after participating in interactive and constructive activities than after the active activities. However, student scores on more difficult, inference questions suggested that interactive activities provided significantly deeper learning than either constructive or active activities. Results for Study 2 showed that students' learning, in terms of gain scores, increased systematically from passive to active to constructive to interactive, as predicted by ICAP, and all the increases, from condition to condition, were significant. Verbal analysis of the students' dialogue in the interactive condition indicated a strong correlation between the co-construction of knowledge and learning gains: when the statements and responses of each student build upon those of the other, both students benefit from the collaboration. Also, the linear combination of discourse moves was significantly related to the adjusted gain scores with a very high correlation coefficient. Specifically, elaborate-type discourse moves were positively correlated with learning outcomes, whereas accept-type moves were negatively correlated with learning outcomes. Analyses of authentic activities in a STEM classroom showed that they fit within the taxonomy of the DOLA framework. The results of the two studies provided evidence to support the predictions of the ICAP hypothesis.
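    Study 2's condition-to-condition comparison rests on gain scores. The abstract does not specify the formula, but a minimal sketch, assuming the common normalized-gain formulation and entirely invented pre/post percentages, illustrates the monotone passive < active < constructive < interactive ordering that ICAP predicts:

    ```python
    # Minimal sketch of gain-score computation, assuming the common
    # normalized-gain formulation; the study may have used raw or
    # otherwise adjusted gains instead. All numbers are invented.
    def normalized_gain(pre: float, post: float, max_score: float = 100.0) -> float:
        """Fraction of the possible improvement that was actually achieved."""
        return (post - pre) / (max_score - pre)

    # Hypothetical pre/post quiz percentages for the four ICAP conditions.
    conditions = [("passive", 40, 52), ("active", 40, 58),
                  ("constructive", 40, 67), ("interactive", 40, 76)]
    for name, pre, post in conditions:
        print(f"{name:>12}: g = {normalized_gain(pre, post):.2f}")
    ```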

    Deeper Understanding of Tutorial Dialogues and Student Assessment

    Bloom (1984) reported a two-standard-deviation improvement with human tutoring, which inspired many researchers to develop Intelligent Tutoring Systems (ITSs) that are as effective as human tutoring. However, recent studies suggest that the 2-sigma result was misleading and that current ITSs are as good as human tutors. Nevertheless, we can think of two standard deviations as the benchmark for the tutoring effectiveness of ideal expert tutors; in the case of ITSs, there is still the possibility that ITSs could be better than humans. One way to improve ITSs is to identify, understand, and then successfully implement effective tutorial strategies that lead to learning gains. Another step towards improving the effectiveness of ITSs is accurate assessment of student responses. However, evaluating student answers in tutorial dialogues is challenging: student answers often refer to entities in previous dialogue turns and in the problem description, and should therefore be evaluated with the dialogue context taken into account. Moreover, the system should explain which parts of the student answer are correct and which are incorrect. Such an explanation capability allows ITSs to provide targeted feedback that helps students reflect upon and correct their knowledge deficits. Furthermore, targeted feedback increases learners' engagement, enabling them to persist in solving the instructional task at hand on their own. In this dissertation, we describe our approach to discovering and understanding the effective tutorial strategies employed by effective human tutors while interacting with learners. We also present various approaches to automatically assessing students' contributions, using general methods that we developed for the semantic analysis of short texts. We explain our work using generic semantic similarity approaches to evaluate the semantic similarity between individual learner contributions and ideal answers provided by experts for the target instructional tasks. We also describe our method for assessing student performance based on tutorial dialogue context, accounting for linguistic phenomena such as ellipsis and pronouns. We then propose an approach to providing an explanatory capability for assessing student responses. Finally, we recommend a novel method based on concept maps for jointly evaluating and interpreting the correctness of student responses.
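    The dissertation's own models are not detailed in the abstract. As a rough sketch of what generic semantic-similarity scoring between a learner contribution and an expert's ideal answer can look like, the following compares the two with TF-IDF cosine similarity; the example answers and the 0.5 acceptance threshold are illustrative assumptions, not the dissertation's actual method:

    ```python
    # Sketch: score a student answer against an ideal answer via cosine
    # similarity over TF-IDF vectors. Texts and threshold are invented.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    ideal_answer = "The net force on the object is zero, so it moves at constant velocity."
    student_answer = "Since no net force acts on it, the object keeps a constant velocity."

    vectorizer = TfidfVectorizer()
    vectors = vectorizer.fit_transform([ideal_answer, student_answer])
    score = cosine_similarity(vectors[0], vectors[1])[0, 0]

    print(f"similarity = {score:.2f}")
    # A tutoring system might accept the answer above some tuned threshold.
    if score >= 0.5:
        print("Answer judged semantically close to the ideal answer.")
    ```

    Note that a purely lexical measure like this cannot resolve the ellipsis and pronoun phenomena the dissertation highlights; that is precisely why it argues for assessment that takes dialogue context into account.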

    How Domain Differences Impact the Mode Structure of Expert Tutoring Dialogue

    While human-to-human dialogue in tutoring sessions has received considerable attention in the last 25 years, there exists a paucity of work examining the pedagogical and motivational strategies of expert human tutors. An established trend in the tutorial dialogue community is to study tutorial dialogues in a very fine-grained manner, at the level of the speech act or dialogue move. The present work offers a coding scheme that takes larger, pedagogically distinct phases, referred to as “modes”, as the unit of analysis; these modes exist in expert tutoring and provide the context needed to understand patterns of dialogue moves. The eight modes identified by this coding scheme are the Introduction, Lecture, Modeling, Scaffolding, Fading, Highlighting, Off Topic, and Conclusion modes, and each mode was reliably identified at or above the .8 kappa level. After determining how often modes occur and the amount of dialogue devoted to them in expert tutoring sessions, differences between the domains of math and science were investigated. Significant differences between the domains were revealed using this larger-grained coding scheme, particularly in how Lecture and Scaffolding are used in expert tutoring. While these two modes tend to dominate most tutorial dialogue in this sample regardless of domain, the differences in their frequency and in the amount of dialogue devoted to each mode suggest diverse tutoring goals associated with each domain. Other subtle differences in mode distributions draw attention both to the complexities of expert tutoring and to the danger of generalizing tutorial structures across domains.
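    For readers unfamiliar with the ".8 kappa level" criterion: Cohen's kappa measures inter-coder agreement corrected for chance agreement, kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and p_e is the agreement expected by chance. A minimal sketch with invented mode labels from two hypothetical coders:

    ```python
    # Minimal sketch of the inter-rater reliability statistic cited above.
    # The two coders' mode labels are invented for illustration.
    from sklearn.metrics import cohen_kappa_score

    coder_1 = ["Lecture", "Scaffolding", "Scaffolding", "Modeling", "Lecture", "Fading"]
    coder_2 = ["Lecture", "Scaffolding", "Modeling", "Modeling", "Lecture", "Fading"]

    kappa = cohen_kappa_score(coder_1, coder_2)
    print(f"Cohen's kappa = {kappa:.2f}")  # values at or above .8 indicate strong agreement
    ```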

    Analyzing collaborative learning processes automatically

    In this article we describe the emerging area of text classification research focused on the problem of collaborative learning process analysis, both from a broad perspective and more specifically in terms of a publicly available tool set called TagHelper tools. Analyzing the variety of pedagogically valuable facets of learners’ interactions is a time-consuming and effortful process. Improving automated analyses of such highly valued processes of collaborative learning by adapting and applying recent text classification technologies would make it a less arduous task to obtain insights from corpus data. This endeavor also holds the potential to substantially improve on-line instruction, both by providing teachers and facilitators with reports about the groups they are moderating and by triggering context-sensitive collaborative learning support on an as-needed basis. In this article, we report on an interdisciplinary research project that has been investigating the effectiveness of applying text classification technology to a large CSCL corpus that had been analyzed by human coders using a theory-based multidimensional coding scheme. We report promising results and include an in-depth discussion of important issues, such as reliability, validity, and efficiency, that should be considered when deciding on the appropriateness of adopting a new technology such as TagHelper tools. One major technical contribution of this work is a demonstration that an important part of making text classification technology effective for this purpose is designing and building linguistic pattern detectors, otherwise known as features, that can be extracted reliably from texts and that have high predictive power for the categories of discourse actions the CSCL community is interested in.
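    TagHelper's internals are not specified in the abstract. A generic sketch of the kind of supervised pipeline it applies to coded dialogue segments is shown below; the training segments and category labels are invented, and the feature set here is a plain bag-of-words with bigrams rather than the hand-designed linguistic pattern detectors the article emphasizes:

    ```python
    # Generic sketch of classifying dialogue segments into coding-scheme
    # categories. Training data is invented; TagHelper additionally uses
    # hand-designed linguistic features beyond the n-grams used here.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    segments = [
        "I think the answer is B because the force doubles.",
        "Can you explain why you chose that?",
        "Yes, I agree with your reasoning.",
        "No, that's wrong, the mass stays the same.",
    ]
    labels = ["claim", "question", "agreement", "disagreement"]

    model = make_pipeline(CountVectorizer(ngram_range=(1, 2)),
                          LogisticRegression(max_iter=1000))
    model.fit(segments, labels)

    # Predicts one of the four codes; with a realistic corpus, per-category
    # reliability would be checked against human coders, as in the article.
    print(model.predict(["Why do you think the force doubles?"]))
    ```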

    Interventions to Regulate Confusion during Learning

    Confusion provides opportunities to learn at deeper levels. However, learners must put forth the necessary effort to resolve their confusion in order to convert this opportunity into actual learning gains. Learning occurs when learners engage in cognitive activities beneficial to learning (e.g., reflection, deliberation, problem solving) during the process of confusion resolution. Unfortunately, learners are not always able to resolve their confusion on their own; the inability to resolve confusion can be due to a lack of knowledge, motivation, or skills. The present dissertation explored methods to aid confusion resolution, and ultimately promote learning, through a multi-pronged approach. First, a survey revealed that learners prefer more information and feedback when confused, and that they prefer different interventions for confusion than for boredom and frustration. Second, expert human tutors were found to handle learner confusion most frequently by providing direct instruction, and to respond differently to learner confusion than to anxiety, frustration, and happiness. Finally, two experiments were conducted to test the effectiveness of pedagogical and motivational confusion-regulation interventions. Both types of interventions were investigated within a learning environment that experimentally induced confusion via the presentation of contradictory information by two animated agents (tutor and peer student agents). Results across both studies showed that learner effort during the confusion-regulation task impacted confusion resolution, and that learning occurred when the intervention gave learners the opportunity to stop, think, and deliberate about the concept being discussed. Implications for building more effective affect-sensitive learning environments are discussed.

    New measurement paradigms

    This collection of New Measurement Paradigms papers represents a snapshot of the variety of measurement methods in use, at the time of writing, across several projects funded by the National Science Foundation (US) through its REESE and DR K–12 programs. All of the projects are developing and testing intelligent learning environments that seek to carefully measure and promote student learning, and the purpose of this collection of papers is to describe and illustrate the use of several measurement methods employed to achieve this. The papers are deliberately short because they are designed to introduce the methods in use, not to serve as a textbook chapter on each method. The New Measurement Paradigms collection is designed to serve as a reference point for researchers who are working on projects that create e-learning environments in which there is a need to make judgments about students' levels of knowledge and skills, or for those who are interested in this area but have not yet delved into these methods.