17 research outputs found

    Combined inner and outer loop feedback in an intelligent tutoring system for statistics in higher education

    Intelligent tutoring systems (ITSs) can provide inner loop feedback about steps within tasks, and outer loop feedback about performance across multiple tasks. While research typically addresses these feedback types separately, many ITSs offer them simultaneously. This study evaluates the effects of providing combined inner and outer loop feedback on social sciences students' learning process and performance in a first-year university statistics course. In a 2 × 2 factorial design (elaborate vs. minimal inner loop feedback, and outer loop vs. no outer loop feedback) with 521 participants, the effects of both feedback types and their combination were assessed through multiple linear regression models. Results showed mixed effects, depending on students' prior knowledge and experience, and no overall effect on course performance. Students tended to use outer loop feedback less when they also received elaborate inner loop feedback. We therefore recommend introducing feedback types one at a time and offering each for a substantial period.
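
    As a rough sketch of how such a 2 × 2 factorial analysis could be run, the code below fits a multiple linear regression with an interaction term using statsmodels. The file and column names (inner_loop, outer_loop, prior_knowledge, course_score) are hypothetical illustrations, not the study's actual variables.

```python
# Minimal sketch of a 2 x 2 factorial regression with an interaction
# term. All file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("feedback_study.csv")  # hypothetical data file

# Dummy-coded factors: elaborate vs. minimal inner loop feedback and
# outer loop feedback present vs. absent; the interaction term tests
# the effect of combining both feedback types.
model = smf.ols(
    "course_score ~ C(inner_loop) * C(outer_loop) + prior_knowledge",
    data=df,
).fit()
print(model.summary())
```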

    Using student models to generate feedback in a university course on statistical sampling

    Due to the complexity of the topic and a lack of individual guidance, introductory statistics courses at university are often challenging. Automated feedback might help to address this issue. In this study, we explore the use of student models to provide such feedback. The research question is how student models can be used to generate feedback for university freshmen in an online course on statistical sampling. An online activity was designed and delivered to 40 Biology freshmen. Instruments for generating student models were designed, and student models were generated. Four students were interviewed about the generated models and about how these differed from their own estimations of their understanding. Results show that it is possible to generate individual feedback from student work in an online learning activity and suggest that discussing differences between students' own estimations and the generated student models can be a fruitful teaching strategy.
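
    The abstract does not specify how the student models were built; as a minimal illustration, a simple overlay student model can be derived by aggregating correctness per concept, as sketched below. The concept tags and response format are assumptions made for the example.

```python
# Illustrative sketch of a simple overlay student model: estimate
# per-concept mastery as the proportion of correct responses.
# Concept tags and response format are assumptions, not the study's.
from collections import defaultdict

def build_student_model(responses):
    """responses: list of (concept, is_correct) tuples for one student."""
    totals = defaultdict(int)
    correct = defaultdict(int)
    for concept, is_correct in responses:
        totals[concept] += 1
        correct[concept] += int(is_correct)
    return {c: correct[c] / totals[c] for c in totals}

# Example: one student's work in a sampling activity.
model = build_student_model([
    ("sampling_bias", True), ("sampling_bias", False),
    ("sample_size", True), ("sample_size", True),
])
print(model)  # {'sampling_bias': 0.5, 'sample_size': 1.0}
```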

    Intelligent feedback on hypothesis testing

    Hypothesis testing involves a complex stepwise procedure that is challenging for many students in introductory university statistics courses. In this paper we assess how feedback from an Intelligent Tutoring System can address the logic of hypothesis testing and whether such feedback contributes to first-year social sciences students' proficiency in carrying out hypothesis tests. The feedback design combined elements of the model-tracing and constraint-based modeling paradigms, addressing both the individual steps and the relations between them. To evaluate the feedback, students in an experimental group (N = 163) received the designed intelligent feedback in six hypothesis-testing construction tasks, while students in a control group (N = 151) received only stepwise verification feedback in these tasks. Results showed that students receiving intelligent feedback spent more time on the tasks, solved more tasks, and made fewer errors than students receiving only verification feedback. These positive results did not transfer to follow-up tasks, which might be a consequence of the isolated nature of those tasks. We conclude that the designed feedback may support students in learning to solve hypothesis-testing construction tasks independently and that it facilitates the creation of more such tasks.
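
    To make the combination of the two paradigms concrete, the sketch below verifies individual steps against an expected solution (model tracing) and checks a relation between steps via a constraint (constraint-based modeling). The steps and the single constraint shown are simplified illustrations, not the system's actual rules.

```python
# Simplified sketch combining model tracing (per-step verification)
# with constraint-based modeling (relations between steps).
# Step names and the constraint are illustrative only.

def trace_step(step_name, student_value, expected):
    """Model tracing: verify one step against the expected solution."""
    if student_value == expected[step_name]:
        return None
    return f"Step '{step_name}': expected {expected[step_name]!r}."

def check_constraints(solution):
    """Constraint-based check on relations between steps: rejecting H0
    requires the p-value to be below the significance level."""
    errors = []
    if solution["decision"] == "reject H0" and not (
        solution["p_value"] < solution["alpha"]
    ):
        errors.append("You rejected H0, but p is not below alpha.")
    return errors

expected = {"alpha": 0.05, "test": "one-sample t-test"}
solution = {"alpha": 0.05, "test": "one-sample t-test",
            "p_value": 0.12, "decision": "reject H0"}

feedback = [trace_step(k, solution[k], expected) for k in expected]
feedback += check_constraints(solution)
print([f for f in feedback if f])
```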

    DocentPraktijken in ICT-rijk wiskundeonderwijs [Teacher practices in ICT-rich mathematics education]

    Enhancing learning with inspectable student models: worth the effort?

    In electronic learning environments, information about a student's performance can be provided to the student in the form of an inspectable student model. While such models are relatively easy to implement, little is known about whether students use the feedback they provide and whether they benefit from it. In this study, the use of inspectable student models in an introductory university statistics course by 599 first-year social science students was monitored. Research questions focused on whether students sought feedback from the student models, which decisions for subsequent study steps they made, and how this feedback seeking and decision making related to results on their statistics exams. Results showed large variety among students in feedback-seeking and decision-making behavior. Lower student model scores seemed to encourage students to practice more on the same topic, and higher scores seemed to evoke the decision to move to a different topic. Viewing frequency and variety in decision making were positively related to exam results, even when controlling for the total time students worked. These findings imply that inspectable student models can be a valuable addition to electronic learning environments and suggest that more intensive use of them may contribute to learning.
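
    The reported pattern (lower scores prompting more practice on the same topic, higher scores prompting a switch) can be expressed as a simple heuristic. The sketch below is only an illustration of that pattern, with an invented mastery threshold; it is not a model the study derived.

```python
# Illustrative heuristic matching the observed pattern: low mastery
# scores suggest more practice on the same topic, high scores suggest
# moving on. The 0.8 threshold is an invented example value.

def suggest_next_step(student_model, current_topic, threshold=0.8):
    score = student_model.get(current_topic, 0.0)
    if score < threshold:
        return f"practice more on '{current_topic}'"
    return "move to a different topic"

print(suggest_next_step({"hypothesis_testing": 0.55}, "hypothesis_testing"))
# -> practice more on 'hypothesis_testing'
```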

    The Interplay between Inspectable Student Models and Didactics of Statistics

    Statistics is a challenging subject for many university students. In addition to dedicated methods from the didactics of statistics, adaptive educational technologies offer a promising approach to this challenge. Inspectable student models provide students with information about their mastery of the domain, thus triggering reflection and supporting the planning of subsequent study steps. In this article, we investigate whether insights from the didactics of statistics can be combined with inspectable student models and whether the two can reinforce each other. Five inspectable student models were implemented within five didactically grounded online statistics modules, which were offered to 160 Social Sciences students as part of their first-year university statistics course. The student models were evaluated using several methods: learning curve analysis and predictive validity analysis examined their quality from a technical point of view, while a questionnaire and a task analysis provided a didactical perspective. The results suggest that students appreciated the overall design, but the learning curve analysis revealed several weaknesses in the implemented domain structure. The task analysis revealed four underlying problems that help to explain these weaknesses. Addressing these problems improved both the predictive validity of the adjusted student models and the quality of the instructional modules themselves. These results provide insight into how inspectable student models and the didactics of statistics can augment each other in the design of rich instructional modules for statistics.
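
    Learning curve analysis commonly fits a power-law decrease of error rate over successive practice opportunities; a poor fit for a skill can flag weaknesses in the assumed domain structure. The sketch below uses scipy with invented data purely for illustration.

```python
# Minimal learning curve sketch: fit a power law, error = a * n^(-b),
# to error rates per practice opportunity. The data are invented.
import numpy as np
from scipy.optimize import curve_fit

def power_law(n, a, b):
    return a * n ** (-b)

opportunities = np.arange(1, 9)
error_rates = np.array([0.62, 0.48, 0.41, 0.35, 0.33, 0.30, 0.28, 0.27])

(a, b), _ = curve_fit(power_law, opportunities, error_rates, p0=(0.6, 0.3))
print(f"intercept a = {a:.2f}, learning rate b = {b:.2f}")
```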

    Digital resources inviting changes in mid-adopting teachers’ practices and orchestrations

    Digital resources offer opportunities to improve mathematics teaching and learning, but at the same time they may challenge teachers' existing practices. This process of changing teaching practices is demanding for teachers who are not familiar with digital resources. The issue, therefore, is which teaching practices such so-called 'mid-adopting' mathematics teachers develop in their teaching with digital resources, and which skills and knowledge they need for this. To address this question, a theoretical framework including notions of instrumental orchestration and the TPACK model for teachers' technological pedagogical content knowledge underpins the set-up of a project with twelve mathematics teachers, all novices in the field of integrating technology in teaching. Technology-rich teaching resources are provided, as well as support through face-to-face group meetings and virtual communication. Data include lesson observations and questionnaires. The results include a taxonomy of orchestrations, an inventory of the skills and knowledge needed, and an overview of the relationships between them. During the project, the teachers did change their orchestrations and acquired new skills. On a theoretical level, the articulation of the instrumental orchestration model and the TPACK model seems promising.
