Examples, Practice Problems, or Both? Effects on Motivation and Learning in Shorter and Longer Sequences
Research suggests that some sequences of examples and problems (i.e., EE, EP) are more effective (higher test performance) and efficient (attained with equal or less mental effort) than others (PP, sometimes also PE). Recent findings suggest this is due to motivational variables (i.e., self-efficacy), but these were not assessed during the training phase. Moreover, prior research used only short task sequences. Therefore, we investigated effects on motivational variables, effectiveness, and efficiency in a short (Experiment 1; 4 learning tasks; n = 157) and a longer task sequence (Experiment 2; 8 tasks; n = 105). With short sequences, all example conditions were more effective, efficient, and motivating than PP. With longer sequences, all example conditions were more motivating and efficient than PP, but only EE was more effective than PP. Moreover, EE was most efficient during training, regardless of sequence length. These results suggest that example-only study is more effective, more efficient, and more motivating than PP
Testing After Worked Example Study Does Not Enhance Delayed Problem-Solving Performance Compared to Restudy
Four experiments investigated whether the testing effect also applies to the acquisition of problem-solving skills from worked examples. Experiment 1 (n = 120) showed no beneficial effects of testing (consisting of isomorphic problem solving or example recall) on final test performance, which consisted of isomorphic problem solving, compared to continued study of isomorphic examples. Experiment 2 (n = 124) showed no beneficial effects of testing consisting of identical problem solving compared to restudying an identical example. Interestingly, participants who took both an immediate and a delayed final test outperformed those taking only a delayed test. This finding suggested that testing might become beneficial for retention, but only after a certain level of schema acquisition has taken place through restudying several examples. However, Experiment 2 had no control condition that restudied examples instead of taking the immediate test. Experiment 3 (n = 129) included such a restudy condition, and there was no evidence that testing after studying four examples was more effective for final delayed test performance than restudying, regardless of whether restudied/tested problems were isomorphic or identical. Experiment 4 (n = 75) used a design similar to that of Experiment 3 (i.e., testing/restudy after four examples), but with examples on a different topic and a different participant population. Again, no evidence of a testing effect was found. Thus, across four experiments, with different types of initial tests, different problem-solving domains, and different participant populations, we found no evidence that testing enhanced delayed test performance compared to restudy. These findings suggest that the testing effect might not apply to acquiring problem-solving skills from worked examples
Learning-by-Teaching Without Audience Presence or Interaction: When and Why Does it Work?
Teaching the contents of study materials by providing explanations to fellow students can be a beneficial instructional activity. A learning-by-teaching effect can also occur when students provide explanations to a real, remote, or even fictitious audience with which they cannot interact. It is unclear, however, which underlying mechanisms drive the effects of such non-interactive teaching and why several recent studies did not replicate this effect. This literature review aims to shed light on when and why learning by non-interactive teaching works. First, we review the empirical literature to comment on the different mechanisms that have been proposed to explain why learning by non-interactive teaching may be effective. Second, we discuss the available evidence regarding potential boundary conditions of the non-interactive teaching effect. We then synthesize the available empirical evidence on processes and boundary conditions into a preliminary theoretical model of when and why non-interactive teaching is effective. Finally, based on our model of learning by non-interactive teaching, we outline several promising directions for future research and recommendations for educational practice
Shifting online: 12 tips for online teaching derived from contemporary educational psychology research
Background: As a result of the COVID-19 pandemic, many teachers found themselves making a rapid and often challenging shift from in-person classroom teaching to teaching in an online environment. As teachers continue to learn about working in this new environment, research in cognitive and learning sciences, specifically findings from cognitive load theory and related areas, can provide meaningful strategies for teaching in this ‘new normal’. Objectives: This paper describes 12 tips derived from contemporary research in educational psychology, focusing particularly on empirically supported strategies that teachers may apply in their online classroom to ensure that learning is optimized. Implications for Practice: These strategies are generalizable across age groups and learning areas, and are categorized into one of two themes: approaches to optimize the design of online learning materials, and instructional strategies to support student learning. A discussion follows, outlining how teachers may apply these strategies in different contexts, with a brief overview of emerging efforts that aim to bridge cognitive load theory and self-regulated learning research
Comparing Mental Effort, Difficulty, and Confidence Appraisals in Problem-Solving: A Metacognitive Perspective
It is well established in educational research that metacognitive monitoring of performance assessed by self-reports, for instance, asking students to report their confidence in provided answers, is based on heuristic cues rather than on actual success in the task. Subjective self-reports are also used in educational research on cognitive load, where they refer to the perceived amount of mental effort invested in or difficulty of each task item. In the present study, we examined the potential underlying bases and the predictive value of mental effort and difficulty appraisals compared to confidence appraisals by applying metacognitive concepts and paradigms. In three experiments, participants faced verbal logic problems or one of two non-verbal reasoning tasks. In a between-participants design, each task item was followed by either mental effort, difficulty, or confidence appraisals. We examined the associations between the various appraisals, response time, and success rates. Consistently across all experiments, we found that mental effort and difficulty appraisals were associated more strongly than confidence with response time. Further, while all appraisals were highly predictive of solving success, the strength of this association was stronger for difficulty and confidence appraisals (which were similar) than for mental effort appraisals. We conclude that mental effort and difficulty appraisals are prone to misleading cues like other metacognitive judgments and are based on unique underlying processes. These findings challenge the accepted notion that mental effort appraisals can serve as reliable reflections of cognitive load
Observational Learning from Video Examples
Observational learning, that is, learning by watching others set a good example, is a natural way of learning that young children use spontaneously. Having to learn everything through one's own experience would not only be very time-consuming but often also dangerous. Fortunately, we can learn from the good example set by others. We call this observational learning from examples
Instructing students on effective sequences of examples and problems: Does self-regulated learning improve from knowing what works and why?
Nowadays, students often practice problem-solving skills in online learning environments with the help of examples and problems. This requires them to self-regulate their learning, and it is an open question how novices self-regulate their learning from examples and problems and whether they need support in doing so. The present study investigated (1) to what extent students' (novices') task selections align with instructional design principles and (2) whether informing them about these principles would improve their task selections, learning outcomes, and motivation. Higher education students (N = 150) learned a problem-solving procedure through fixed sequences of examples and problems (FS-condition) or through self-regulated learning (SRL). The SRL participants selected tasks from a database, varying in format, complexity, and cover story, either with (ISRL-condition) or without (SRL-condition) watching a video detailing the instructional design principles. Students' task-selection patterns in both SRL conditions largely corresponded to the principles, although tasks were built up in complexity more often in the ISRL-condition than in the SRL-condition. Moreover, there was still room for improvement in students' task selections after solving practice problems. The video instruction helped students to better apply certain principles, but did not enhance learning or motivation. Finally, there were no test performance or motivational differences among conditions. Although these findings might suggest it is relatively ‘safe’ to allow students to independently start learning new problem-solving tasks using examples and problems, caution is warranted: It is unclear whether these findings generalize to other student populations, as the students participating in this study had some experience with similar tasks or with learning from examples. Moreover, as there was still room for improvement in students' task selections, follow-up research should investigate how we can further improve self-regulated learning from examples and practice problems
Training task-selection skills: The effect of prompts and explicit instruction on transfer
For effective self-regulated learning with problem-solving tasks, students must accurately assess their performance and select a suitable next learning task. However, most students struggle with this. Recent research shows that self-assessment and task-selection skills can be trained through video modeling examples (SATS-training). Yet the limited research available suggests that students struggle to transfer trained task-selection skills to other problem-solving contexts. We investigated whether guidance in the form of prompts (stating that the task-selection procedure can be adapted and used) or explicit instruction (on how the procedure can be adapted) would improve task-selection accuracy on transfer tasks with this guidance available and on later, unguided transfer tasks. Explicit instruction significantly enhanced task-selection accuracy compared to prompts and a no-guidance control condition on guided transfer tasks, but not on unguided transfer tasks. Thus, it remains an open question how to durably improve transfer of task-selection skills in the absence of guidance