
    “I try my very best and then I send it to the wizards, who make up numbers”: Science students’ perceptions of (in)effective assessment and feedback practices

    Assessment and feedback are key concerns for tertiary students, as evidenced by university and national student experience surveys (QILT, 2020). While these large surveys convey general student sentiment, the literature recommends approaches other than surveys to deepen understanding of students' experiences in individual faculties, courses, etc. (Berk, 2018). This is particularly important when planning any changes to assessment practices. Following on from an initial study of students' assessment and feedback literacy (Wills et al., 2022), we present the second stage of our project, which aims to understand students' experiences and perceptions of assessment and feedback at the University of New South Wales. From a thematic analysis of semi-structured student interviews, we present several case studies of what science students consider to be effective assessment and feedback in their program. Some identified themes, such as linked assessments, worked answers, and annotated submissions, were found to be effective practices across the board. However, for other themes, such as the usefulness of formative assessment, rubrics, and positive feedback, students were not in agreement. Resoundingly, students condemned the lack of closure around final exams. These and other findings will be presented before student suggestions for improvement are discussed, as well as looking ahead to a future assessment co-design with students.
    Feedback on final exams: "about final exams, it, it's like a black box. You know, you answer and you might get, I don't know, 70%. But that means there's 30% you've got wrong and you still want to know why that is."
    Effectiveness of formative assessment: "I think that often, they're just one or two questions that are about a detail that was unimportant. And the lecture isn't... the lecture content isn't tested properly."
    REFERENCES
    Berk, R. (2018). Beyond Student Rating: Fourteen Other Sources of Evidence to Evaluate Teaching. In R. Ellis & E. Hounsell (Eds.), Handbook of quality assurance for university teaching (pp. 317–344). London: Routledge.
    QILT. (2020). Student Experience Survey. Social Research Centre. https://www.qilt.edu.au/surveys/student-experience-survey-(ses)#report
    Wills, S. S., Jackson, K., & Wijenayake, N. (2022). On the same page: Science students' assessment literacy. In D. Spagnoli & A. Yeung (Eds.), Proceedings of The Australian Conference on Science and Mathematics Education (p. 76). Perth, Western Australia.

    Research to Practice: Leveraging Concept Inventories in Statics Instruction

    There are many common challenges with classroom assessment, especially in first-year, large-enrollment courses, including managing high-quality assessment within time constraints and promoting effective study strategies. This paper presents two studies: 1) using the CATS instrument to validate multiple-choice-format exams for classroom assessment, and 2) using the CATS instrument as a measure of metacognitive growth over time. The first study focused on validation of instructor-generated multiple-choice exams because they are easier to administer, grade, and return for timely feedback, especially for large-enrollment classes. The limitation of multiple-choice exams, however, is that it is very difficult to construct questions that measure higher-order content knowledge beyond recalling facts. A correlational study was used to compare multiple-choice exam scores with relevant portions of the CATS assessment (taken within a week of one another). The results indicated a strong relationship between student performance on the CATS assessment and the instructor-generated exams, suggesting that both assessments were measuring similar content areas. The second study focused on metacognition, more specifically on students' ability to self-assess the extent of their own knowledge. In this study, students were asked to rate their confidence for each CATS item on a 1 (not at all confident) to 4 (very confident) Likert-type scale. With the 4-point scale, there was no neutral option; students were forced to indicate some degree of confidence or lack of confidence. A regression analysis was used to examine the relationship between performance and confidence for pre-, post-, and delayed-post assessments. Results suggested that students' self-knowledge of their performance improved over time.
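    As a rough illustration of the analyses described above, the sketch below computes a correlation between exam scores and concept-inventory scores, and a simple regression of confidence on performance. The data, variable names, and the choice of a Pearson correlation and ordinary least-squares fit are assumptions for illustration only; the paper does not specify its statistical procedures beyond "correlational study" and "regression analysis".

```python
# Illustrative sketch only: hypothetical data, not the study's dataset.
# Assumes the correlational study is a Pearson correlation and the
# confidence/performance relationship is an ordinary least-squares fit.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 60  # hypothetical class size

# Hypothetical per-student percent-correct scores on the two assessments.
mc_exam = rng.uniform(40, 100, size=n)            # instructor MC exam
cats = 0.8 * mc_exam + rng.normal(0, 8, size=n)   # CATS subscore

r, p = stats.pearsonr(mc_exam, cats)
print(f"MC exam vs. CATS: r = {r:.2f}, p = {p:.3g}")

# Hypothetical mean confidence rating (1-4 Likert) per student on CATS items.
confidence = 1 + 3 * (cats - cats.min()) / (cats.max() - cats.min())
confidence = confidence + rng.normal(0, 0.3, size=n)

fit = stats.linregress(cats, confidence)
print(f"Confidence ~ CATS performance: slope = {fit.slope:.3f}, "
      f"R^2 = {fit.rvalue ** 2:.2f}, p = {fit.pvalue:.3g}")
```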

    Development of the Exams Data Analysis Spreadsheet as a Tool To Help Instructors Conduct Customizable Analyses of Student ACS Exam Data

    The American Chemical Society Examinations Institute (ACS-EI) has recently developed the Exams Data Analysis Spreadsheet (EDAS) as a tool to help instructors conduct customizable analyses of their student data from ACS exams. The EDAS calculations allow instructors to analyze their students' performance at both the total-score and individual-item levels, while also providing national normative results that can be used for comparison. Additionally, instructors can analyze results based on subsets of items of their choosing or on items grouped by the "big ideas" from the Anchoring Concepts Content Map (ACCM). In order to evaluate the utility and usability of the EDAS for instructors, the EDAS went through trial testing with 10 chemistry instructors from across the country. The instructor feedback confirmed that the EDAS has multiple implications for classroom and departmental assessment, but some additional revisions were needed to increase its usability. This feedback was also used to create a video user guide that will help instructors through specific difficulties described during trial testing. Currently, an EDAS tool has been developed for the GC12F, GC10S, and GC13 exams.
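    The EDAS itself is a spreadsheet, but the kind of customizable, item-level comparison it supports can be sketched as follows. The item responses, national norms, and "big idea" groupings below are hypothetical stand-ins, not actual ACS exam data or the EDAS's internal calculations.

```python
# Illustrative sketch only: hypothetical responses and norms, not ACS exam data.
import numpy as np

# Rows = students, columns = items; 1 = correct, 0 = incorrect.
responses = np.array([
    [1, 0, 1, 1, 0, 1],
    [1, 1, 1, 0, 0, 1],
    [0, 1, 1, 1, 1, 1],
    [1, 1, 0, 1, 0, 0],
])

# Hypothetical national proportion-correct values for the same six items.
national_norm = np.array([0.70, 0.55, 0.80, 0.65, 0.40, 0.75])

# Hypothetical mapping of items to ACCM "big ideas".
big_idea = np.array(["Atoms", "Atoms", "Bonding", "Bonding", "Energy", "Energy"])

class_p = responses.mean(axis=0)   # per-item proportion correct for the class
gap = class_p - national_norm      # item-level difference vs. national norm

for i, g in enumerate(gap):
    print(f"Item {i + 1}: class {class_p[i]:.2f}, norm {national_norm[i]:.2f}, gap {g:+.2f}")

# Aggregate by "big idea" to mimic the ACCM-based subset analysis.
for idea in np.unique(big_idea):
    mask = big_idea == idea
    print(f"{idea}: class {class_p[mask].mean():.2f} vs. national {national_norm[mask].mean():.2f}")
```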

    Traditional vs non-traditional assessment activities as learning indicators of student learning : teachers' perceptions

    In online settings, some teachers express reservations about relying only on traditional assessments (e.g., tests, assignments, exams) as trustworthy instruments to evaluate students' understanding of the content accurately. A previous qualitative study revealed that the richness of online environments has allowed teachers to use both traditional assessments (anything contributing to the final grade) and non-traditional assessment activities (not factored into the final grade but useful for gauging student knowledge) to assess their students' learning status. This study aims to compare the perceived accuracy of both types of assessment activities as indicators of student learning. A total of 124 participants engaged in online teaching completed a self-report instrument. The results revealed a significant difference in teachers' perceptions of the accuracy of traditional assessment activities (M = 3.16, SD = .442) compared to non-traditional assessment activities (M = 3.05, SD = .521), t(122) = -2.64, p = .009, with a small effect size (eta = .02). No significant gender differences were observed in the perceptions of the accuracy of either type of assessment activity. The most commonly employed traditional assessment activities were "final exams" (85.5%) and "individual assignments" (83.9%). In comparison, the most common non-traditional assessment methods used to evaluate students' knowledge were "questions on previously taught content" (79.8%) and "asking students questions about current content during the lecture" (79%). A one-way analysis of variance revealed no significant differences in perceptions of the accuracy of traditional and non-traditional assessment activities among teachers with varying years of experience (up to 10 years, 11–15 years, and 16+ years). The findings suggest that certain non-traditional assessment activities can be as accurate as traditional assessment activities as indicators of learning. Moreover, activities not factored into the final grade are perceived to be effective learning indicators. This study has implications for academic institutions and educators interested in supplementing traditional approaches to assessing student learning with non-traditional methods.
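    For readers who want to run this style of comparison on their own survey data, the sketch below performs a paired-samples t-test with an eta-squared effect size on hypothetical ratings. The simulated means and standard deviations echo the figures above, but the test variant and effect-size formula are assumptions; the abstract does not spell out the exact procedure used.

```python
# Illustrative sketch only: hypothetical Likert-style ratings, not the
# study's data. Assumes a paired-samples t-test with eta-squared computed
# as t^2 / (t^2 + df).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 124  # hypothetical number of teachers

# Each teacher rates the perceived accuracy of both activity types (1-4 scale).
traditional = np.clip(rng.normal(3.16, 0.44, n), 1, 4)
non_traditional = np.clip(rng.normal(3.05, 0.52, n), 1, 4)

t_stat, p_val = stats.ttest_rel(non_traditional, traditional)
df = n - 1
eta_sq = t_stat ** 2 / (t_stat ** 2 + df)

print(f"t({df}) = {t_stat:.2f}, p = {p_val:.3f}, eta^2 = {eta_sq:.2f}")
```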

    The Effect of Curriculum-Based External Exit Exam Systems on Student Achievement

    [Excerpt] Two presidents, the National Governors Association, and numerous blue-ribbon panels have called for the development of state or national content standards for core subjects and examinations that assess student achievement of these standards. The Competitiveness Policy Council, for example, advocated that external assessments be given to individual students at the secondary level and that the results should be a major but not exclusive factor in qualifying for college and for better jobs at better wages. It is claimed that curriculum-based external exit exam systems (CBEEESs) based on explicit content standards will improve the teaching and learning of core subjects. What evidence is there for this claim? Outside the United States, such systems are the rule, not the exception. What impacts have such systems had on school policies, teaching, and student learning?

    High School Exit Examinations: When Do Learning Effects Generalize?

    This paper reviews international and domestic evidence on the effects of three types of high school exit exam systems: voluntary curriculum-based external exit exams, universal curriculum-based external exit exam systems (CBEEES), and minimum competency tests (MCTs) that must be passed to receive a regular high school diploma. The nations and provinces that use Universal CBEEES (and typically teacher grades as well) to signal student achievement have significantly higher achievement levels and smaller differentials by family background than otherwise comparable jurisdictions that base high-stakes decisions on voluntary college admissions tests and/or teacher grades. The introduction of Universal CBEEES in New York and North Carolina during the 1990s was associated with large increases in math achievement on NAEP tests. Research on MCTs and high school accountability tests is less conclusive because these systems are new and have only been implemented in one country. Cross-section studies using a comprehensive set of controls for family background have not found that students in MCT states score higher on audit tests like the NAEP, which carry no stakes for the test taker. The analysis reported in Table 1 tells us that the five states that introduced MCTs during the 1990s had significantly larger improvements on NAEP tests than states that made no change in their student accountability regime. The gains, however, are smaller than those for the two states introducing Universal CBEEES, New York and North Carolina. The most positive finding about MCTs is that students in MCT states earn significantly more during the first eight years after graduation than comparable students in other states, suggesting that MCTs improve employer perceptions of the quality of recent graduates of local high schools.

    Principles and practice of on-demand testing


    The Effect of National Standard and Curriculum-Based Exams on Achievement

    [Excerpt] Two presidents, the National Governors Association, and numerous blue-ribbon panels have called for the development of state or national content standards for core subjects and examinations that assess the achievement of these standards. The Competitiveness Policy Council, for example, advocates that external assessments be given to individual students at the secondary level and that the results should be a major but not exclusive factor in qualifying for college and for better jobs at better wages (1993, p. 30). It is claimed that curriculum-based external exit exam systems (CBEEEs) based on world-class content standards will improve the teaching and learning of core subjects. What evidence is there for this claim? Outside the United States, such systems are the rule, not the exception. What impacts have such systems had on school policies, teaching, and student learning?
