Video killed the multiple-choice quiz: capturing pharmacy students' literature searching skills using a screencast video assignment
Background: In a flipped, required first-year drug information course, students were taught the systematic approach to answering drug information questions, commonly utilized resources, and literature searching. As co-coordinator, a librarian taught three weeks of the course focused on mobile applications, development of literature searching skills, and practicing in PubMed. Course assignments were redesigned in 2019 based on assessment best practices and replaced the weekly multiple-choice quizzes used in prior iterations of the course.
Case Presentation: Following two weeks of literature searching instruction, students were assigned a drug information question that would serve as the impetus for the search they conducted. Students (n=66) had one week to practice and record a screencast video of their search in PubMed. Students narrated their videos with an explanation of the actions being performed and were assessed using a twenty-point rubric created by the course coordinator and librarian. The librarian also created general feedback videos for each question by recording screencasts while performing the literature searches and clarifying troublesome aspects for students. The librarian spent about twenty-four hours grading and six hours writing scripts, recording, and editing feedback videos.
Conclusion: Most students performed well on the assignment, and few experienced technical difficulties. Instructors will use this assignment and feedback method in the future. Screencast videos proved an innovative way to assess student knowledge and to provide feedback on literature searching assignments. This method is transferable to any medical education setting and could be used across all health professions to improve information literacy skills.
DIT Teaching Fellowships Reports 2013-2014
Supplemental online material: References of the articles analyzed in the scoping review
Supplemental online material of the paper Student-generated teaching materials: A scoping review mapping the research field (Ribosa & Duran, 2022), published in Education in the Knowledge Society, 23 (https://doi.org/10.14201/eks.27443). It contains the references of the articles analyzed in the scoping review.
Modes of feedback in ESL writing: Implications of shifting from text to screencast
For second language writing (SLW) instructors, decisions regarding technology-mediated feedback are particularly complex, as they must also navigate student language proficiency, which may vary across different areas such as reading or listening. Yet technology-mediated feedback remains an underexplored realm in SLW, especially with regard to how modes of technology affect feedback and how students interact with and understand it. With the expanding pervasiveness of video and increased access to screencasting (screen recording), SLW instructors have ever-growing access to video modes for feedback, yet little research to inform their choices. Further, with video potentially requiring substantial investment from institutions through hosting solutions, a research-informed perspective for adoption is advisable. However, few existing studies address SLW feedback given in the target language (common in ESL) or standalone (rather than supplemental) screencast feedback.
This dissertation begins to expand SLW feedback research and fill this void through three investigations of screencast (video) and text (MS Word comments) feedback in ESL writing. The first paper uses a crossover design to investigate student perceptions and use of screencast feedback over four assignments given to 12 students in an intermediate ESL writing class, through a combination of a series of surveys, a group interview, and screen-recorded observations of students working with the feedback. The second paper argues for appraisal, an outgrowth of systemic functional linguistics (SFL) focused on evaluative language and interpersonal meaning, as a framework for understanding interpersonal differences in modes of feedback through an analysis of 16 text and 16 video feedback files from Paper 1. Paper 3 applies a more intricate version of the appraisal framework to the analysis of video and text feedback collected in a similar crossover design from three ESL writing instructors.
Paper 1 demonstrates the added insights offered by recording students’ screens and their spoken interactions and shows that students needed to ask for help and switched to the L1 when working with text feedback but not video. The screencast feedback was found to be easier to understand and use, as MS Word comments were seen as being difficult to connect to the text. While students found both types of feedback to be helpful, they championed video feedback for its efficiency, clarity, ease of use and heightened understanding and would greatly prefer it for future feedback. Successful changes were made at similar rates for both types of feedback.
The results of Paper 2 suggest possible variation between the video and text feedback in reviewer positioning and feedback purpose. Specifically, video seems to position the reviewer as holding only one of many possible perspectives with feedback focused on possibility and suggestion while the text seems to position the reviewer as authority with feedback focused on correctness. The findings suggest that appraisal can aid in the understanding of multimodal feedback and identifying differences between feedback modes.
Building on these findings, Paper 3 shows a substantial reduction in negative appreciation of the student text, overall and for each instructor individually, in video feedback as compared to text. Text feedback showed a higher proportion of negative attitude overall and positioned the instructor as a single authority. Video feedback, on the other hand, preserved student autonomy in its balanced use of praise and criticism, offered suggestion and advice, and positioned the instructor as one of many possible opinions. Findings held true in sum and for each instructor individually, suggesting that interpersonal considerations varied across modes. This study offers future feedback research a way to consider the interpersonal aspects of feedback across multiple modes and situations. It provides standardization procedures for applying and quantifying appraisal analysis in feedback that allow for comparability across studies. Future work applying the framework to other modes, such as audio, and situations, such as instructor conferences, peer review, or tutoring, is encouraged. The study also posits the framework as a tool in instructor reflection and teacher training.
Taken together, the three studies deepen our understanding of the impact of our technological choices in the context of feedback. Video feedback seems to be a viable replacement for text feedback, as it was found to be at least as effective for revision while being greatly preferred by students for its ease of use and understanding. With an understanding of how students use feedback in different modes, instructors can better craft feedback and training for their students. For instance, instructors must remember to pause after comments in screencast feedback to allow students time to hit pause or revise. Video was also seen to allow for greater student agency in their work and to position instructor feedback as suggestions that the student could act upon. These insights can help instructors choose and employ technology in ways that will best support their pedagogical purposes.
LEAF (Learning from and Engaging with Assessment and Feedback) Final project report
The LEAF (Learning from and Engaging with Assessment and Feedback) project was funded under the Teaching Fellowship at TU Dublin, City Campus, for 18 months beginning in January 2018. The project team comprised 18 academics from across TU Dublin - City Campus, with representatives from all colleges. Also included were two further members who represented the student voice: the Director of Student Affairs and the Students' Union Education Officer.
This project sought to address a key issue in third-level Teaching and Learning: that of assessment and assessment feedback. Assessment strategies have been shown to have a large impact on shaping how students learn and how they develop key employability skills. Learning from best practice nationally and internationally, and from research involving staff, students, and quality documents, this project has developed a set of recommendations which will enhance practices in, and experiences of, assessment and feedback in TU Dublin.
E-learning in higher education: designing for diversity
This research was conducted to compare methods of e-learning accessibility evaluation that may be applied in a higher education context. Results of "objective" accessibility evaluation of e-learning technologies using automated tools were compared to results of "subjective" accessibility evaluation with student participants. It was found that objective and subjective accessibility evaluation of e-learning technologies both yield useful, albeit different, information. To further explore subjective accessibility evaluation, results and student perceptions were compared following moderated and unmoderated testing sessions. Neither the efficiency of completing tasks in a sample online course nor the number of accessibility problems detected were deemed significantly affected by the format of the testing session. However, most students preferred to participate in an unmoderated testing session, where they felt less self-conscious and as though they could interact more naturally with the technology. Findings from this study point to the importance of considering not only objective accessibility evaluation and accessibility guideline conformance as measures of the accessibility of e-learning technologies, but also the subjective experiences of students as they engage with the technologies. There is also value in taking a holistic approach towards evaluating e-learning accessibility by considering the accessibility of learning outcomes (factoring the learning context into the evaluation) in addition to the accessibility of individual e-learning technologies.
Because accessibility is a variable that is important to all students, and not just students with disabilities, it is critical that institutions of higher education work with a variety of stakeholders to determine not only how best to evaluate e-learning accessibility, but also how to ensure that the results of accessibility evaluation are widely disseminated in a manner that is likely to have a broad impact on enhancing e-learning accessibility for diverse student populations.
Remote access laboratories for preparing STEM teachers: A mixed methods study
Bandura’s self-efficacy theory provided the conceptual framework for this mixed methods investigation of pre-service teachers’ (PSTs) self-efficacy to teach Science, Technology, Engineering and Mathematics (STEM) subjects. The Science Teaching Efficacy Belief Instrument-B (STEBI-B) was modified to create the Technology Teaching Efficacy Belief Instrument (T-TEBI). Pre-test and post-test T-TEBI scores were measured to investigate changes in PSTs’ self-efficacy to teach technology. Interviews and reflections were used to explore the reasons for changes in pre-service teachers’ self-efficacy. This paper reports results from a pilot study using an innovative Remote Access Laboratory system with PSTs.
In Search of a “Fair Explanation”: Helping Young People to Consider the Possibilities, Limitations, and Risks of Computer- and Data-Mediated Systems
Significant resources have been directed towards K-12 computing and data education over the past ten years, as part of what has come to be known as the CSforAll initiative. This initiative has focused on raising awareness of computing education among parents and students, developing situated learning progressions that resonate with many different interests and pursuits, training teachers, and addressing issues of underrepresentation in computing among females and racial minorities. In this dissertation, I argue that as the CSforAll initiative continues to expand, it is important for the education community to also reflect on the forms of knowledge that are believed to be essential, and the presumed benefits of computing and data education. Specifically, how might the goal of producing citizens with robust computing and data literacies change what is considered to be fundamental to a computing education, as well as the kinds of contexts in which computing and data science are situated?
I use the term sociotechnical literacy to name this vision for computing education, which I define as a broad set of social and technical practices, strategies, ideas, and dispositions that can help people to reason about the computer-mediated systems that shape their everyday lives. As the term suggests, I argue that it is important for learners to engage with technical ideas as well as their social applications and implications. To examine what this might mean for teaching and learning, I describe two design experiments that I conducted with young people (ages 14–22). Each approach aimed to make the applications of computing primary (rather than treating applications as the backdrop from which the abstractions of computation are motivated), so that learners could examine some of the specific ways in which data and computing might be directed to particular goals, subject to real possibilities and constraints, and in relation to alternative forms of participation.
I examine the possibilities and limitations of each approach. I also analyze some of the assumptions that framed the design experiments, which were naïve, but also reflective of a broader ethos that pervades CSforAll. I reflect on what these studies collectively reveal about the possibilities, limitations, and risks of data and computing, as situated in the lives of young people, as well as what this might mean for helping young people develop a robust sociotechnical literacy. There are very real limits to what can be accomplished with computing and data alone. There are also significant benefits and risks associated with the many sociotechnical systems that shape our lives. As such, I argue that rather than positioning computing education as a remedy to various social ills, we instead offer young people a fair explanation of what computing is and is not capable of, grounded within specific contexts involving real people. I conclude with what this fair explanation might include, and how it might be fostered.
Western Oregon University 2015-2016 Course Catalog