Bloom’s dichotomous key: a new tool for evaluating the cognitive difficulty of assessments
One of the more widely used tools to both inform course design and measure expert-like skills is Bloom’s taxonomy of educational objectives for the cognitive domain (2, 13, 22). This tool divides assessment of cognitive skills into six different levels: knowledge/remember, comprehension/understand, application/apply, analysis/analyze, synthesis/create, and evaluation/evaluate (2, 6). The first two levels are generally considered to represent lower levels of mastery (lower-order cognitive skills) and the last three represent higher-order levels of mastery involving critical thinking (higher-order cognitive skills), with apply-level questions often bridging the gap between the two (e.g., Refs. 5, 8, 10, 11, 23, and 24). While Bloom’s taxonomy is widely used by science educators, learning and mastering the concepts of the cognitive domain to categorize educational materials into the six levels identified in Bloom’s taxonomy are not trivial tasks. As with any complex task, experts and novices differ in the key abilities needed to cue into and evaluate information (4, 7, 9). Across disciplines, novices are less adept at noticing salient features and meaningful patterns, recognizing the context of applicability of concepts, and using organized conceptual knowledge rather than superficial cues to guide their decisions. Newer users of Bloom’s taxonomy demonstrate similar difficulties as they work to gain expertise, leading to inconsistencies in Bloom’s ratings (1, 8, 15) (see BDK Development for examples).
Redesigning a course to help students achieve higher-order cognitive thinking skills: from goals and mechanics to student outcomes
Here we describe a 4-yr course reform and its outcomes. The upper-division neurophysiology course gradually transformed from a traditional lecture course in 2004 to a more student-centered course in 2008 through the addition of evidence-based active learning practices, such as deliberate problem-solving practice on homework and peer learning structures, both inside and outside of class. Because of the incremental nature of the reforms and the absence of pre-reform learning assessments, we needed a way to retrospectively assess the effectiveness of our efforts. To do this, we first looked at performance on 12 conserved exam questions. Students scored significantly higher post-reform both on questions requiring lower-level cognitive skills and on those requiring higher-level cognitive skills. Furthermore, student performance on conserved questions was higher post-reform in both the top and bottom quartiles of students, although lower-quartile student performance did not improve until after the first exam. To examine student learning more broadly, we also used Bloom's taxonomy to quantify a significant increase in the Bloom's level of exams, with students performing equally well post-reform on exams that had over twice as many questions at higher cognitive skill levels. Finally, we believe that four factors contributed critically to the success of the course reform: transformation efforts across multiple course components, alignment between formative and evaluative course materials, student buy-in to course instruction, and instructional support. This reform demonstrates both the effectiveness of incorporating student-centered, active learning into our course and the utility of using Bloom's level as a metric to assess course reform.