Structuring lecture videos for distance learning applications. ISMSE
This paper presents a novel, automatic approach to structuring and indexing lecture videos for distance learning applications. By structuring video content, we can support both topic indexing and semantic querying of multimedia documents. Our aim in this paper is to link the discussion topics extracted from the electronic slides with their associated video and audio segments. The two major techniques in our proposed approach are video text analysis and speech recognition. Initially, a video is partitioned into shots based on slide transitions. For each shot, the embedded video texts are detected, reconstructed and segmented as high-resolution foreground texts for recognition by commercial OCR software. The recognized texts can then be matched with their associated slides for video indexing. Meanwhile, both phrases (titles) and keywords (content) are also extracted from the electronic slides to spot the speech signals. The spotted phrases and keywords are further utilized as queries to retrieve the most similar slide for speech indexing.
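The shot-to-slide matching step described above can be sketched as a bag-of-words comparison between noisy OCR output and each slide's text. The function names and the cosine-similarity measure here are illustrative assumptions, not the paper's exact method:

```python
import math
from collections import Counter

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bags of words."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def match_shot_to_slide(ocr_text: str, slides: list) -> int:
    """Return the index of the slide whose text best matches the OCR output."""
    ocr_bag = Counter(ocr_text.lower().split())
    scores = [cosine_similarity(ocr_bag, Counter(s.lower().split()))
              for s in slides]
    return max(range(len(slides)), key=scores.__getitem__)

slides = ["introduction to speech recognition",
          "hidden markov models for acoustic modeling",
          "language model smoothing techniques"]
# Even with an OCR error ("recogniton"), the overlap picks the right slide.
print(match_shot_to_slide("speech recogniton introduction", slides))  # → 0
```

Because matching works on word overlap rather than exact strings, a few OCR character errors usually do not change the best-scoring slide.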
Synote: Multimedia Annotation ‘Designed for all'
This paper describes the development and evaluation of Synote, a freely available web-based application that makes multimedia web resources (e.g. podcasts) easier to access, search, manage, and exploit for all learners, teachers and other users through the creation of notes, bookmarks, tags, links, images and text captions synchronized to any part of the recording. Synote uniquely enables users to easily find their notes or resources, or associate them with any part of a podcast or video recording available on the web. The students surveyed would like to be able to access all their lectures through Synote.
Language model adaptation for video lectures transcription
Video lectures are currently being digitised all over the world for their enormous value as a reference resource. Many of these lectures are accompanied by slides, which offer a great opportunity for improving ASR system performance. We propose a simple yet powerful extension to the linear interpolation of language models for adapting language models with slide information. Two types of slides are considered: correct slides, and slides automatically extracted from the videos with OCR. Furthermore, we compare both time-aligned and unaligned slides. Results report an improvement of up to 3.8 absolute WER points when using correct slides. Surprisingly, when using automatic slides obtained with poor OCR quality, the ASR system still improves by up to 2.2 absolute WER points. The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 287755 (transLectures). Also supported by the Spanish Government (Plan E, iTrans2 TIN2009-14511). Martínez-Villaronga, A.; Del Agua Teba, M. A.; Andrés Ferrer, J.; Juan Císcar, A. (2013). Language model adaptation for video lectures transcription. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on. Institute of Electrical and Electronics Engineers (IEEE). 8450-8454. https://doi.org/10.1109/ICASSP.2013.6639314
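Linear interpolation of language models, the technique this abstract extends, mixes a background model with a slide-derived model using a weight λ. The following minimal sketch uses unigram models for clarity (the paper works with full n-gram LMs); all names and the toy corpora are illustrative assumptions:

```python
from collections import Counter

def unigram_lm(text: str) -> dict:
    """Maximum-likelihood unigram language model from raw text."""
    counts = Counter(text.lower().split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def interpolate(p_bg: dict, p_slide: dict, lam: float) -> dict:
    """Linear interpolation: P(w) = (1 - lam) * P_bg(w) + lam * P_slide(w)."""
    vocab = set(p_bg) | set(p_slide)
    return {w: (1 - lam) * p_bg.get(w, 0.0) + lam * p_slide.get(w, 0.0)
            for w in vocab}

background = unigram_lm("the model estimates the probability of each word")
slide = unigram_lm("hidden markov model probability")
adapted = interpolate(background, slide, lam=0.3)
# The mixture of two distributions is still a proper distribution.
print(round(sum(adapted.values()), 6))  # → 1.0
```

Words that appear only on the slides (here "markov") receive non-zero probability in the adapted model, which is exactly why slide adaptation helps the ASR system recognize lecture-specific vocabulary.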
PechaKucha Presentations to Develop Multimodal Communicative Competence in ESP and EMI Live Online Lectures: A Team-Teaching Proposal
With the Covid-19 outbreak, many universities worldwide have been forced to undertake some changes to continue with their academic commitments, giving rise to a range of adaptations that pivoted around online teaching delivery and the use of technology and audiovisual materials. Against this background, this study discusses an adaptive response from face-to-face to live online lectures for ESP and EMI classrooms. These two settings are deliberately chosen as a way to best prepare ESP learners for EMI courses. For this purpose, the spoken genre of PechaKucha has been selected, which is characterized as a multimodal (e.g., language, visuals, images) and engaging presentation type. To deal with this genre and promote learners’ multimodal communicative competence and multimodal literacy, we draw on a multimodal-centered genre-based pedagogy. This proposal explains the pedagogical adaptation from face-to-face to online lectures and discusses the challenges confronted when moving from one setting to the other. We also argue for a team-teaching approach. In addition, this study points to the need to train teachers to develop their multimodal interactional competence to equip them to cope with live online delivery.
DocMIR: An automatic document-based indexing system for meeting retrieval
This paper describes the DocMIR system, which automatically captures, analyzes and indexes meetings, conferences, lectures, etc. by taking advantage of the documents projected during the events (e.g. slideshows, budget tables, figures). For instance, the system can automatically apply the above-mentioned procedures to a lecture and index the event according to the presented slides and their contents. For indexing, the system requires neither specific software installed on the presenter's computer nor any conscious intervention of the speaker throughout the presentation. The only material required by the system is the speaker's electronic presentation file. Even if it is not provided, the system temporally segments the presentation and offers a simple storyboard-like browsing interface. The system runs on several capture boxes connected to cameras and microphones that record events synchronously. Once the recording is over, indexing is performed automatically by analyzing the content of the captured video containing projected documents: the system detects scene changes, identifies the documents, computes their duration and extracts their textual content. Each captured image is identified from a repository containing all original electronic documents, captured audio-visual data and metadata created during post-production. The identification is based on document signatures, which hierarchically structure features from both the layout structure and the color distributions of the document images. Video segments are finally enriched with the textual content of the identified original documents, which further facilitates query and retrieval without using OCR. The signature-based indexing method proposed in this article is robust, works with low-resolution images, and can be applied to several other applications, including real-time document recognition, multimedia IR and augmented reality systems.
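The signature-based identification step can be sketched with the color-distribution half of such a signature: each captured frame is reduced to a quantized color histogram and matched against the repository by histogram distance. This is a simplified stand-in for DocMIR's hierarchical layout-plus-color signatures; all names and the toy data are illustrative assumptions:

```python
def color_signature(pixels, bins=4):
    """Quantized RGB color histogram (bins**3 cells), L1-normalized."""
    hist = [0.0] * (bins ** 3)
    step = 256 // bins  # 64 intensity levels per bin when bins=4
    for r, g, b in pixels:
        hist[(r // step) * bins * bins + (g // step) * bins + (b // step)] += 1
    n = sum(hist)
    return [h / n for h in hist] if n else hist

def identify(captured_pixels, repository):
    """Return the repository key whose color signature is closest (L1 distance)."""
    sig = color_signature(captured_pixels)
    def dist(key):
        return sum(abs(a - b)
                   for a, b in zip(sig, color_signature(repository[key])))
    return min(repository, key=dist)

# Toy repository: a mostly-white slide and a mostly-blue chart.
repo = {"slide_1": [(250, 250, 250)] * 90 + [(0, 0, 0)] * 10,
        "chart_2": [(20, 30, 200)] * 100}
capture = [(240, 245, 240)] * 80 + [(10, 10, 10)] * 20  # low-res capture of slide_1
print(identify(capture, repo))  # → slide_1
```

Because coarse color bins absorb small pixel-level differences, the match survives the low-resolution, color-shifted captures the abstract mentions.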
Creating an Online Concrete Masonry Course for Accessibility
An online undergraduate course in masonry design was created for an asynchronous delivery format and to fulfill web accessibility requirements. This report outlines the process of creating the course with Canvas as the host platform. Emphasis is placed on how content, including lecture presentations, assignments, and course modules, was developed with strong graphic software and communication skills. For accessibility, content creation was guided by the Web Content Accessibility Guidelines (WCAG) 2.1 Level AA standards. Lastly, the significance of learning masonry design in the undergraduate structural engineering curriculum is discussed, in addition to the structural engineering industry practices used to complete this project.
CONTENT BASED RETRIEVAL OF LECTURE VIDEO REPOSITORY: LITERATURE REVIEW
Multimedia plays a significant role in communicating information, and a large number of multimedia repositories support the browsing, retrieval and delivery of video content. For higher education, using video as a tool for learning and teaching through multimedia applications holds considerable promise. Many universities adopt educational systems where the teacher's lecture is video recorded and the video lecture is made available to students with minimum post-processing effort. Since each video may cover many subjects, it is critical for an e-Learning environment to have content-based video searching capabilities to meet diverse individual learning needs. The present paper reviews 120+ core research articles on content-based retrieval of lecture video repositories hosted on the cloud by government, academic and research organizations of India.
Multimodal Indexing of Presentation Videos
This thesis presents four novel methods to help users efficiently and effectively retrieve information from unstructured and unsourced multimedia sources, in particular the increasing amount and variety of presentation videos such as those in e-learning, conference recordings, corporate talks, and student presentations. We demonstrate a system to summarize, index and cross-reference such videos, and measure the quality of the produced indexes as perceived by the end users. We introduce four major semantic indexing cues: text, speaker faces, graphics, and mosaics, going beyond standard tag-based searches and simple video playback. This work aims at recognizing visual content "in the wild", where the system cannot rely on any additional information besides the video itself. For text, within a scene-text detection and recognition framework, we present a novel locally optimal adaptive binarization algorithm, implemented with integral histograms. It determines an optimal threshold that maximizes the between-class variance within a subwindow, with computational complexity independent of the size of the window itself. We obtain character recognition rates of 74%, as validated against ground truth of 8 presentation videos spanning over 1 hour and 45 minutes, which almost doubles the baseline performance of an open-source OCR engine. For speaker faces, we detect, track, match, and finally select a humanly preferred face icon per speaker, based on three quality measures: resolution, amount of skin, and pose. We register an 87% accordance (51 out of 58 speakers) between the face indexes automatically generated from three unstructured presentation videos of approximately 45 minutes each, and human preferences recorded through Mechanical Turk experiments.
For diagrams, we locate graphics inside frames showing a projected slide, cluster them according to an online algorithm based on a combination of visual and temporal information, and select and color-correct their representatives to match human preferences recorded through Mechanical Turk experiments. We register 71% accuracy (57 out of 81 unique diagrams properly identified, selected and color-corrected) on three hours of video containing five different presentations. For mosaics, we combine two existing stitching measures to extend video images into an in-the-world coordinate system. A set of frames to be registered into a mosaic is sampled according to the PTZ camera movement, which is computed through least-squares estimation starting from the luminance-constancy assumption. A local-feature-based stitching algorithm is then applied to estimate the homography among a set of video frames, and median blending is used to render pixels in overlapping regions of the mosaic. For two of these indexes, namely faces and diagrams, we present two novel MTurk-derived user data collections to determine viewer preferences, and show that our methods match them in selection. The net result of this thesis allows users to search, inside a video collection as well as within a single video clip, for a segment of a presentation by professor X on topic Y, containing graph Z.
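The between-class-variance criterion behind the thesis's binarization step is the classic Otsu objective. The sketch below applies it globally to one list of gray values; the thesis's contribution is evaluating this per subwindow via integral histograms so the cost is independent of window size, which this simplified version does not show. Names and the toy data are illustrative assumptions:

```python
def otsu_threshold(gray):
    """Pick the threshold t maximizing between-class variance of {<=t} vs {>t}."""
    hist = [0] * 256
    for v in gray:
        hist[v] += 1
    total = len(gray)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_bg, sum_bg = 0, 0.0
    for t in range(255):
        w_bg += hist[t]          # pixels at or below t (background class)
        if w_bg == 0:
            continue
        w_fg = total - w_bg      # pixels above t (foreground class)
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mu_bg, mu_fg = sum_bg / w_bg, (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mu_bg - mu_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal "subwindow": dark text pixels on a bright background.
window = [30] * 40 + [35] * 10 + [220] * 50
t = otsu_threshold(window)
print(t)  # threshold falls between the two modes (35 <= t < 220)
```

On a clearly bimodal window the maximizing threshold separates the dark text pixels from the bright background, which is what makes the binarized foreground usable for OCR.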
Preferences in relation to Video Lecture Styles: A Survey with Students and Teachers of Distance Education Technical Courses of the Open Technical School of Brazil
The use of video lectures has increased considerably in recent years. The interest of students and teachers has grown in part because of several initiatives that provide access to video lectures through the Internet. This paper presents a survey with students and teachers of distance education technical courses of the Open Technical School of Brazil. The questionnaires were different for students and teachers and contained questions about preferences regarding video lecture styles and the average duration of video lectures, questions to identify the level of agreement with some statements about video lectures and, only for teachers, questions regarding video lecture production. The results suggest some directions for planning a training program for teachers on the production of video lectures.