
    Beyond lecture capture: Student-generated podcasts in teacher education.

    Podcasting in higher education most often takes the form of lecture capture or "coursecasting", as instructors record and disseminate lectures (King & Gura, 2007, p. 181). Studies published within the past five years continue to prioritise podcasting of lectures for the student audience, and to test the effectiveness of such podcasts via traditional pencil-and-paper assessments covering the material delivered via podcast (Hodges, Stackpole-Hodges, & Cox, 2008). A premise of this article is that in order to enhance learning outcomes via podcasting, it is necessary to move beyond coursecasting, toward podcasting with and by students, and to value key competencies and dispositions as learning outcomes. This article reports on a pilot study undertaken with teacher education students in an online ICT class, where students investigated podcasting and created reflective podcasts. The pilot study aimed to engage students actively in generating podcasts, incorporating a wider view of assessment and learning outcomes. Student-generated podcasts were self-assessed and shared online in order to invite formative feedback from peers. A range of positive outcomes is reported, whereby students learned about and through podcasting, engaging in reflection, problem solving and interactive formative assessment.

    Indexing and retrieval of multimodal lecture recordings from open repositories for personalized access in modern learning settings

    An increasing number of lecture recordings are available to complement face-to-face and the more conventional content-based e-learning approaches. These recordings provide additional channels for remote students and time-independent access to the lectures. Many universities even offer complete series of recordings of hundreds of courses which are available for public access, and this service provides added value for users outside the university. The lecture recordings show the use of a great variety of media or modalities (such as video, audio, lecture media, presentation behaviour) and formats. So far, none of the existing systems and services have sufficient retrieval functionality or support appropriate interfaces to enable searching for lecture recordings over several repositories. This situation has motivated us to initiate research on a lecture recording indexing and retrieval system for knowledge transfer and learning activities in various settings. This system is built on our former experiences and prototypes developed within the MISTRAL research project. In this paper we outline requirements for an enhanced lecture recording retrieval system, introduce our solution and prototype, and discuss the initial results and findings.
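
    The retrieval problem described in this abstract can be pictured with a toy example. The Python sketch below assumes hypothetical record fields (repository, title, transcript, slide_text, tags) and a simple term-count ranking; it illustrates the idea of searching lecture recordings across several repositories and modalities, and is not the MISTRAL-based prototype itself.

    # Minimal sketch of cross-repository lecture-recording retrieval; the record
    # fields and the term-count ranking are illustrative assumptions.
    from dataclasses import dataclass, field

    @dataclass
    class LectureRecording:
        repository: str          # e.g. a university's open repository
        title: str
        transcript: str          # text from ASR or manual captioning
        slide_text: str = ""     # text extracted from presentation media
        tags: list = field(default_factory=list)

    def score(record: LectureRecording, terms: set) -> int:
        """Count query-term hits across the available modalities."""
        haystack = " ".join(
            [record.title, record.transcript, record.slide_text, " ".join(record.tags)]
        ).lower()
        return sum(haystack.count(term) for term in terms)

    def search(records, query: str, top_k: int = 5):
        """Rank recordings from all repositories against a free-text query."""
        terms = {t for t in query.lower().split() if t}
        hits = [(score(r, terms), r) for r in records]
        hits.sort(key=lambda pair: pair[0], reverse=True)
        return [r for s, r in hits if s > 0][:top_k]

    corpus = [
        LectureRecording("uni-a", "Hidden Markov Models", "we adapt the acoustic model ..."),
        LectureRecording("uni-b", "Intro to Databases", "indexing and query planning ..."),
    ]
    for hit in search(corpus, "acoustic model adaptation"):
        print(hit.repository, "-", hit.title)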

    Evaluating intelligent interfaces for post-editing automatic transcriptions of online video lectures

    Video lectures are fast becoming an everyday educational resource in higher education. They are being incorporated into existing university curricula around the world, while also emerging as a key component of the open education movement. In 2007, the Universitat Politècnica de València (UPV) implemented its poliMedia lecture capture system for the creation and publication of quality educational video content and now has a collection of over 10,000 video objects. In 2011, it embarked on the EU-subsidised transLectures project to add automatic subtitles to these videos in both Spanish and other languages. By doing so, it allows access to their educational content by non-native speakers and the deaf and hard-of-hearing, as well as enabling advanced repository management functions. In this paper, following a short introduction to poliMedia, transLectures and Docència en Xarxa (Teaching Online), the UPV's action plan to boost the use of digital resources at the university, we will discuss the three-stage evaluation process carried out with the collaboration of UPV lecturers to find the best interaction protocol for the task of post-editing automatic subtitles.
    Valor Miró, J. D., Spencer, R. N., Pérez González de Martos, A. M., Garcés Díaz-Munío, G. V., Turró Ribalta, C., Civera Saiz, J., & Juan Císcar, A. (2014). Evaluating intelligent interfaces for post-editing automatic transcriptions of online video lectures. Open Learning: The Journal of Open and Distance Learning, 29(1), 72-85. doi:10.1080/02680513.2014.909722
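
    One common way to quantify the post-editing effort such an evaluation measures is the word error rate (WER) between the automatic subtitle and its post-edited version. The Python sketch below computes WER with a standard word-level Levenshtein distance on made-up strings; it illustrates the metric, not the evaluation protocol used in the transLectures study.

    # Illustrative sketch: WER between an automatic subtitle and its post-edited
    # version; the example strings are hypothetical.
    def wer(hypothesis: str, reference: str) -> float:
        hyp, ref = hypothesis.split(), reference.split()
        # Word-level Levenshtein distance via dynamic programming.
        d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            d[i][0] = i
        for j in range(len(hyp) + 1):
            d[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,         # deletion
                              d[i][j - 1] + 1,         # insertion
                              d[i - 1][j - 1] + cost)  # substitution
        return d[len(ref)][len(hyp)] / max(len(ref), 1)

    asr_output  = "the lecture cover automatic speech recognition"
    post_edited = "the lecture covers automatic speech recognition"
    print(f"WER: {wer(asr_output, post_edited):.2%}")   # one substitution in six words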

    Improving Online Interactions: Lessons from an Online Anatomy Course with a Laboratory for Undergraduate Students

    An online section of a face-to-face (F2F) undergraduate (bachelor's level) anatomy course with a prosection laboratory was offered in 2013-2014. Lectures for F2F students (353) were broadcast to online students (138) using Blackboard Collaborate (BBC) virtual classroom. Online laboratories were offered using BBC and three-dimensional (3D) anatomical computer models. This iteration of the course was modified from the previous year to improve online student-teacher and student-student interactions. Students were divided into laboratory groups that rotated through virtual breakout rooms, giving them the opportunity to interact with three instructors. The objectives were to assess student performance outcomes, perceptions of student-teacher and student-student interactions, methods of peer interaction, and helpfulness of the 3D computer models. Final grades were statistically identical between the online and F2F groups. There were strong, positive correlations between incoming grade average and final anatomy grade in both groups, suggesting prior academic performance, and not delivery format, predicts anatomy grades. Quantitative student perception surveys (273 F2F; 101 online) revealed that both groups agreed they were engaged by teachers, could interact socially with teachers and peers, and ask them questions in both the lecture and laboratory sessions, though agreement was significantly greater for the F2F students in most comparisons. The most common methods of peer communication were texting, Facebook, and meeting F2F. The perceived helpfulness of the 3D computer models improved from the previous year. While virtual breakout rooms can be used to adequately replace traditional prosection laboratories and improve interactions, they are not equivalent to F2F laboratories.
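
    As an illustration of the kind of analysis described here (a group comparison of final grades and a correlation between incoming averages and final anatomy grades), the short Python sketch below runs a t-test and a Pearson correlation with SciPy. All numbers are made up; the study's actual data and tests are not reproduced.

    # Hedged sketch of the comparison described above, run on hypothetical grades.
    from scipy import stats

    f2f_final    = [78, 85, 67, 90, 72, 81, 88, 76]    # hypothetical F2F final grades
    online_final = [80, 83, 65, 91, 70, 79, 87, 74]    # hypothetical online final grades

    # Are the two groups' mean final grades statistically distinguishable?
    t, p = stats.ttest_ind(f2f_final, online_final)
    print(f"t = {t:.2f}, p = {p:.3f}")

    # Does the incoming grade average track the final anatomy grade?
    incoming = [75, 88, 64, 92, 70, 80, 85, 73]         # hypothetical incoming averages
    r, p_r = stats.pearsonr(incoming, f2f_final)
    print(f"Pearson r = {r:.2f}, p = {p_r:.3f}")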

    Analyzing Qualitative Data with MAXQDA

    “To begin at the beginning” is the opening line of the play Under Milk Wood by Welsh poet Dylan Thomas. So, we also want to start here at the beginning, with some information about the history of the analysis software MAXQDA. This story is quite long; it begins in 1989 with a first version of the software, then just called “MAX,” for the operating system DOS, and a book in the German language. The book’s title was Text Analysis Software for the Social Sciences: Introduction to MAX and Textbase Alpha, written by Udo Kuckartz and published by Gustav Fischer in 1992. Since then, there have been many changes and innovations: technological, conceptual, and methodological. MAXQDA has its roots in social science methodology; the original name MAX was a reference to the sociologist Max Weber, whose methodology combined quantitative and qualitative methods, explanation, and understanding in a way that was unique at the time, the beginning of the twentieth century. Since the first versions, MAX (later named winMAX and MAXQDA) has always been very innovative analysis software. In 1994, it was one of the first programs with a graphical user interface; since 2001, it has used Rich Text Format with embedded graphics and objects. Later, MAXQDA was the first QDA program (QDA stands for qualitative data analysis) with a special version for Mac computers that included all analytical functions. Since autumn 2015, MAXQDA has been available in almost identical versions for Windows and Mac, so that users can switch between operating systems without having to familiarize themselves with a new interface or changed functionality. This compatibility and feature equality between the Mac and Windows versions is unique and greatly facilitates team collaboration. MAXQDA has also come up with numerous innovations in the intervening years: a logically and very intuitively designed user interface, very versatile options for memos and comments, numerous visualization options, the summary grid as a middle level of analysis between primary data and categories, and much more, for instance, transcription, geolinks, weight scores for coding, analysis of PDF files, and Twitter analysis. Last but not least, the mixed methods features are worth mentioning, in which MAXQDA has long played a pioneering role. This list already shows that today MAXQDA is much more than text analysis software: the first chapter of this book contains a representation of the data types that MAXQDA can analyze today (in version 2018) and shows which file formats can be processed. The large variety of data types is contrasted by an even greater number of…

    Insight provenance for spatiotemporal visual analytics: Theory, review, and guidelines

    Research on provenance, which focuses on different ways to describe and record the history of changes and advances made throughout an analysis process, is an integral part of visual analytics. This paper focuses on providing the provenance of insight and rationale through visualizations while emphasizing, first, that this entails a profound understanding of human cognition and reasoning and, second, that the special nature of spatiotemporal data needs to be acknowledged in this process. A recently proposed human reasoning framework for spatiotemporal analysis, and four guidelines for the creation of visualizations that provide the provenance of insight and rationale published in relation to that framework, serve as a starting point for this paper. While those guidelines are quite abstract, this paper sets out to create a set of more concrete ones. On the basis of a review of available provenance solutions, the paper identifies a set of key features that are of relevance when providing the provenance of insight and rationale and, on the basis of these features, produces a new set of complementary guidelines that are more practically oriented than the original ones. Together, these two sets of guidelines provide both a theoretical and a practical approach to the problem of providing the provenance of insight and rationale. Providing these kinds of guidelines represents a new approach in provenance research.
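
    A minimal way to picture "recording the history of changes and advances made throughout an analysis process" is a provenance log in which each step stores the action taken, the data it touched, and the analyst's rationale. The Python sketch below uses illustrative field names (action, target, rationale) and is an assumption-laden toy, not a reconstruction of any system reviewed in the paper.

    # Minimal sketch of recording analytic provenance: each step stores the action,
    # the data it touched, and the analyst's rationale, so the chain of insight can
    # be replayed later. Field names and example data are illustrative only.
    from dataclasses import dataclass, asdict
    from datetime import datetime, timezone
    import json

    @dataclass
    class ProvenanceStep:
        action: str      # e.g. "filter", "aggregate", "annotate"
        target: str      # dataset, layer, or view the action applied to
        rationale: str   # why the analyst took this step (the insight/rationale part)
        timestamp: str

    class ProvenanceLog:
        def __init__(self):
            self.steps = []

        def record(self, action: str, target: str, rationale: str):
            self.steps.append(ProvenanceStep(
                action, target, rationale,
                datetime.now(timezone.utc).isoformat()))

        def export(self) -> str:
            return json.dumps([asdict(s) for s in self.steps], indent=2)

    log = ProvenanceLog()
    log.record("filter", "gps_tracks_2019", "remove points outside the study area")
    log.record("aggregate", "gps_tracks_2019", "hourly bins reveal a commuting pattern")
    print(log.export())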