"I try my very best and then I send it to the wizards, who make up numbers": Science students' perceptions of (in)effective assessment and feedback practices
Assessment and feedback are key concerns for tertiary students, as evidenced by university and national student experience surveys (QILT, 2020). While these large surveys convey general student sentiment, the literature recommends approaches other than surveys to deepen understanding of students' experiences in individual faculties, courses, etc. (Berk, 2018). This is particularly important when planning any changes in assessment practices.
Following on from an initial study into students' assessment and feedback literacy (Wills et al., 2022), we present the second stage of our project, aiming to understand students' experiences and perceptions of assessment and feedback at the University of New South Wales.
From a thematic analysis of semi-structured student interviews, we present several case studies of what science students consider to be effective assessment and feedback in their program.
Some identified themes, such as linked assessments, worked answers, and annotated submissions, were found to be effective practices across the board. However, for other themes, such as the usefulness of formative assessment, rubrics, and positive feedback, students were not in agreement. Resoundingly, students condemned the lack of closure around final exams.
These and other findings are presented, followed by a discussion of student suggestions for improvement and a look toward future assessment co-design with students.
Feedback on final exams:
"…about final exams, it's like a black box. You know, you answer and you might get, I don't know, 70%. But that means there's 30% you've got wrong and you still want to know why that is…"
Effectiveness of formative assessment:
"I think that often, they're just one or two questions that are about a detail that was unimportant. And the lecture isn't… the lecture content isn't tested properly."
REFERENCES
Berk, R. A. (2018). Beyond student ratings: Fourteen other sources of evidence to evaluate teaching. In R. Ellis & E. Hogard (Eds.), Handbook of quality assurance for university teaching (pp. 317–344). London: Routledge.
QILT. (2020). Student Experience Survey. Social Research Centre. https://www.qilt.edu.au/surveys/student-experience-survey-(ses)#report
Wills, S., Jackson, K., & Wijenayake, N. (2022). On the same page: Science students' assessment literacy. In D. Spagnoli & A. Yeung (Eds.), Proceedings of The Australian Conference on Science and Mathematics Education (p. 76). Perth, Western Australia.
Research to Practice: Leveraging Concept Inventories in Statics Instruction
There are many common challenges with classroom assessment, especially in first-year, large-enrollment courses, including managing high-quality assessment within time constraints and promoting effective study strategies. This paper presents two studies: 1) using the CATS instrument to validate multiple-choice format exams for classroom assessment, and 2) using the CATS instrument as a measure of metacognitive growth over time. The first study focused on validation of instructor-generated multiple-choice exams because they are easier to administer, grade, and return for timely feedback, especially for large-enrollment classes. The limitation of multiple-choice exams, however, is that it is very difficult to construct questions that measure higher-order content knowledge beyond recalling facts. A correlational study was used to compare multiple-choice exam scores with relevant portions of the CATS assessment (taken within a week of one another). The results indicated a strong relationship between student performance on the CATS assessment and instructor-generated exams, which suggests that both assessments were measuring similar content areas. The second study focused on metacognition, more specifically on students' ability to self-assess the extent of their own knowledge. In this study, students were asked to rate their confidence for each CATS item on a 1 (not at all confident) to 4 (very confident) Likert-type scale. With the 4-point scale, no neutral option was provided; students were forced to indicate some degree of confidence or lack of confidence. A regression analysis was used to compare the relationship between performance and confidence for pre-, post-, and delayed-post assessments. Results suggested that students' self-knowledge of their performance improved over time.
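The correlational analysis described above pairs each student's instructor-written multiple-choice exam score with their score on the matching CATS portion and computes Pearson's r. A minimal sketch with synthetic scores (illustrative only, not the study's data):

```python
import math

# Synthetic paired scores: multiple-choice exam (%) and the relevant
# CATS portion (%) for the same six hypothetical students.
exam_scores = [62, 75, 80, 55, 90, 70]
cats_scores = [58, 72, 85, 50, 88, 66]

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r(exam_scores, cats_scores)
print(f"r = {r:.2f}")  # a strong positive r suggests the assessments overlap
```

A strong r here is consistent with the "measuring similar content areas" inference, though the abstract does not report the actual coefficient.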
Development of the Exams Data Analysis Spreadsheet as a Tool To Help Instructors Conduct Customizable Analyses of Student ACS Exam Data
The American Chemical Society Examinations Institute (ACS-EI) has recently developed the Exams Data Analysis Spreadsheet (EDAS) as a tool to help instructors conduct customizable analyses of their student data from ACS exams. The EDAS calculations allow instructors to analyze their students' performances at both the total score and individual item levels, while also providing national normative results that can be used for comparison. Additionally, instructors can analyze results based on subsets of items of their choosing or items based on the "big ideas" from the Anchoring Concepts Content Map (ACCM). In order to evaluate the utility and usability of the EDAS for instructors, the EDAS went through trial testing with 10 chemistry instructors from across the country. The instructor feedback confirmed that the EDAS has multiple implications for classroom and departmental assessment, but some additional revisions were needed to increase its usability. This feedback was also used to make a video user guide that will help instructors through specific difficulties described during trial testing. Currently, an EDAS tool has been developed for the GC12F, GC10S, and GC13 exams.
Online remote exams in higher education: distance learning students' views
As a result of the Covid-19 pandemic, universities had to restructure their assessment design, policies, and processes. It is clear that the experiment of having exams delivered in an online format has allowed institutions to question what the most appropriate format for the future is (St-Onge et al., 2022). The conversation around the design of online exams and the technology used aimed to ensure that student expectations were met while assessment standards were secured.
This study aimed to explore student views at a major distance-learning university in the UK about participating in online remote exams. The institution replaced the common pre-Covid practice of taking face-to-face exams at local centres appointed by the university with remote, open-book-style exams.
This work focuses on responses to:
(a) a closed-ended question on whether students had a positive, negative or mixed experience with online exams and
(b) an open-ended exploratory question inviting students to report their previous experiences with online exams, if any. Content analysis was used to make valid inferences from the survey respondents' open-ended responses, focusing on the meaning in context (Krippendorff, 2018).
The majority of respondents (83%) reported that they had never completed an online remote exam at home (or at work) as an alternative to their face-to-face exams. The rest (n = 107) completed online remote exams at home in one course (12%) or in more than one course (5%). This may have occurred due to the pandemic, or to accommodate students with disabilities or other circumstances as part of standard processes. The students who completed an exam at home mainly described their experience as positive (76%, n = 81). However, some students had a mixed (19%, n = 20) or negative (5%, n = 5) experience.
Findings from 107 student responses to an online survey on assessment pointed to positive and negative areas of experience with online exams. Exploring students' comments on positive experiences (n = 76), the area with the largest proportion of positive mentions (36%) was "exam duration and time", followed by "anxiety/pressure" (26%), "exams at home" (20%), "travel to exam centre" (16%), "real-life equivalent" (8%), and "invigilation" (4%). Exploring students' comments on negative experiences (n = 15), the area with the largest proportion of negative mentions (52%) was "equipment and technical issues", followed by "invigilation and rigorousness" (26%), "exam duration and time" (9%), "marks" (8%), and "distractions at home" (7%).
The evidence from this study suggests that while most survey respondents show a clear preference for online remote exams, there is no clear "winner", as different groups of students reveal barriers and challenges in adopting a different exam model. This study provides an agenda for universities with temporary and permanent distance-learning programmes to develop or improve the ways that students, or particular groups of students, are assessed, building on the areas students perceive positively.
References:
[1] Krippendorff, K. (2018). Content analysis: An introduction to its methodology. Sage Publications.
[2] St-Onge, C., Ouellet, K., Lakhal, S., Dubé, T., & Marceau, M. (2022). COVID-19 as the tipping point for integrating e-assessment in higher education practices. British Journal of Educational Technology, 53(2), 349–366.
Traditional vs non-traditional assessment activities as indicators of student learning: teachers' perceptions
In online settings, some teachers express reservations about relying only on traditional assessments (e.g., tests, assignments, exams) as trustworthy instruments to accurately evaluate students' understanding of the content. A previous qualitative study revealed that the richness of online environments has allowed teachers to use traditional assessments (anything contributing to the final grade) and non-traditional assessment-based activities (not factored into the final grade but useful in gauging student knowledge) to assess their students' learning status. This study aims to compare the perceived accuracy of both types of assessment activities as indicators of student learning. A total of 124 participants engaged in online teaching completed a self-report instrument. The results revealed a significant difference in teachers' perceptions of the accuracy of traditional assessment activities (M = 3.16, SD = .442) compared to non-traditional assessment activities (M = 3.05, SD = .521), t(122) = -2.64, p = .009, with a small effect size (eta = .02). No significant gender differences were observed in perceptions of the accuracy of either type of assessment activity. The most commonly employed traditional assessment activities were "final exams" (85.5%) and "individual assignments" (83.9%). In comparison, the most common non-traditional assessment methods used to evaluate students' knowledge were "questions on previously taught content" (79.8%) and "asking students questions about current content during the lecture" (79%). A one-way analysis of variance revealed no significant differences in perceptions of the accuracy of traditional and non-traditional assessment activities among teachers with varying years of experience (up to 10 years, 11–15 years, and 16+ years). The findings suggest that certain non-traditional assessment activities can be as accurate as traditional assessment activities.
Moreover, non-assessment-related activities are perceived to be effective learning indicators. This study has implications for academic institutions and educators interested in supplementing traditional approaches to assessing student learning with non-traditional methods.
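The comparison reported above (t(122) = -2.64, p = .009) is consistent with a paired-samples t-test on each teacher's two mean ratings. A minimal sketch of that computation with synthetic ratings (the numbers below are illustrative, not the study's data):

```python
import math
import statistics

# Synthetic perceived-accuracy ratings on the study's 4-point scale:
# each position is one hypothetical teacher's mean rating of the two
# types of assessment activity.
traditional = [3.2, 3.0, 3.4, 3.1, 3.3]
non_traditional = [3.0, 2.9, 3.2, 3.0, 3.1]

# Paired design: analyze the per-teacher differences.
diffs = [a - b for a, b in zip(traditional, non_traditional)]
n = len(diffs)
mean_d = statistics.mean(diffs)
sd_d = statistics.stdev(diffs)            # sample SD of the differences
t = mean_d / (sd_d / math.sqrt(n))        # paired t statistic, df = n - 1
eta_sq = t**2 / (t**2 + (n - 1))          # one common effect-size conversion

print(f"t({n - 1}) = {t:.2f}, eta^2 = {eta_sq:.2f}")
```

With only five synthetic pairs, the magnitudes are meaningless; the point is the structure of the test, not the values.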
The Effect of Curriculum-Based External Exit Exam Systems on Student Achievement
[Excerpt] Two presidents, the National Governors Association, and numerous blue-ribbon panels have called for the development of state or national content standards for core subjects and examinations that assess student achievement of these standards. The Competitiveness Policy Council, for example, advocated that external assessments be given to individual students at the secondary level and that the results should be a major, but not exclusive, factor in qualifying for college and for better jobs at better wages. It is claimed that curriculum-based external exit exam systems (CBEEESs) based on explicit content standards will improve the teaching and learning of core subjects. What evidence is there for this claim? Outside the United States, such systems are the rule, not the exception. What impacts have such systems had on school policies, teaching, and student learning?
High School Exit Examinations: When Do Learning Effects Generalize?
This paper reviews international and domestic evidence on the effects of three types of high school exit exam systems: voluntary curriculum-based external exit exams, universal curriculum-based external exit exam systems (Universal CBEEES), and minimum competency tests (MCTs) that must be passed to receive a regular high school diploma. The nations and provinces that use Universal CBEEES (and typically teacher grades as well) to signal student achievement have significantly higher achievement levels and smaller differentials by family background than otherwise comparable jurisdictions that base high-stakes decisions on voluntary college admissions tests and/or teacher grades. The introduction of Universal CBEEES in New York and North Carolina during the 1990s was associated with large increases in math achievement on NAEP tests. Research on MCTs and high school accountability tests is less conclusive because these systems are new and have been implemented in only one country. Cross-section studies using a comprehensive set of controls for family background have not found that students in MCT states score higher on audit tests like the NAEP that carry no stakes for the test taker. The analysis reported in Table 1 tells us that the five states that introduced MCTs during the 1990s had significantly larger improvements on NAEP tests than states that made no change in their student accountability regime. The gains, however, are smaller than those for the states introducing Universal CBEEES, New York and North Carolina. The most positive finding about MCTs is that students in MCT states earn significantly more during the first eight years after graduation than comparable students in other states, suggesting that MCTs improve employer perceptions of the quality of recent graduates of local high schools.
The Effect of National Standard and Curriculum-Based Exams on Achievement
[Excerpt] Two presidents, the National Governors Association, and numerous blue-ribbon panels have called for the development of state or national content standards for core subjects and examinations that assess the achievement of these standards. The Competitiveness Policy Council, for example, advocates that external assessments be given to individual students at the secondary level and that the results should be a major, but not exclusive, factor in qualifying for college and for better jobs at better wages (1993, p. 30). It is claimed that curriculum-based external exit exam systems (CBEEEs) based on world-class content standards will improve the teaching and learning of core subjects. What evidence is there for this claim? Outside the United States, such systems are the rule, not the exception. What impacts have such systems had on school policies, teaching, and student learning?