Putting the Testing Effect to the Test: Why and When Is Testing Effective for Learning in Secondary School?
Dirkx, K. J. H. (2014, 11 April). Putting the testing effect to the test. Why and when is testing effective for learning in secondary school. Unpublished doctoral dissertation. Heerlen: Open University of the Netherlands.
In this doctoral thesis, the testing effect is investigated among secondary school students. It comprises five different studies.
Funded by NWO, the Netherlands Organisation for Scientific Research. Project no. 451.07.00
Tussentijds toetsen leidt tot gerichter leren [Interim testing leads to more focused learning]
A short article presenting the main findings from the doctoral thesis "Putting the testing effect to the test. When and why is testing effective for learning in secondary school?"
Developing Rubrics to Assess Complex (Generic) Skills in the Classroom: How to Distinguish Skills’ Mastery Levels?
Many schools use analytic rubrics to (formatively) assess complex, generic, or transversal (21st century) skills, such as collaborating and presenting. Rubrics describe performance indicators at different levels of mastering a skill (e.g., novice, practiced, advanced, talented). However, the dimensions used to describe the mastery levels vary within and across rubrics and are in many cases inconsistent, not concise, and often trivial, which hampers the quality of rubrics used to learn and assess complex skills. In this study, we reviewed 600 rubrics available in three international databases (Rubistar, For All Rubrics, i-rubrics) and analyzed the dimensions found within 12 strictly selected rubrics currently used to distinguish mastery levels and describe performance indicators for the skill 'collaboration' at secondary schools. These dimensions were subsequently defined and categorized, resulting in 13 different dimensions, clustered in 6 categories, that are feasible for defining skills' mastery levels in rubrics. The identified dimensions can specifically support both teachers and researchers in constructing, reviewing, and investigating performance indicators for each mastery level of a complex skill. More generally, they can support analysis of the overall quality of analytic rubrics used to (formatively) assess complex skills.
The Effects of Different Testing Methods on Retention and Comprehension of Expository
Dirkx, K. J. H., Kester, L., & Kirschner, P. A. (2011). The Effects of Different Testing Methods on Retention and Comprehension of Expository. In N. Mansour (Ed.), Programme book JURE 2011. Junior Researchers of EARLI (p. 125). Exeter, UK: University of Exeter.
In the present research, the effects of different testing methods (i.e., free recall, concept mapping, and summarizing) versus restudy on retention and comprehension were investigated.
This research was financially supported by the Netherlands Organisation for Scientific Research (NWO 451.07.007).
The Testing Effect for Learning Principles and Procedures from Texts
The authors explored whether a testing effect occurs not only for retention of facts but also for application of principles and procedures. For that purpose, 38 high school students either repeatedly studied a text on probability calculations, or studied the text, took a test on its content, restudied the text, and finally took the test a second time. Results show that, compared with restudying, testing leads not only to better retention of facts but also to better application of the acquired knowledge (i.e., principles and procedures) in high school statistics. In other words, testing seems not only to benefit fact retention but also to positively affect deeper learning.
Learning from Questions During a Museum Visit
Dirkx, K. J. H., Kester, L., & Kirschner, P. A. (2013, 30 August). Learning from Questions During a Museum Visit. Paper presented at the meeting of the European Association for Research on Learning and Instruction, Munich, Germany.
This paper describes the results of a museum study involving worksheets.