6 research outputs found

    Captioning Multiple Speakers using Speech Recognition to Assist Disabled People

    No full text
    Meetings and seminars involving many speakers are among the hardest situations for deaf people to follow what is being said, and also for people with physical, visual or cognitive disabilities to take notes or remember key points. People may also be absent during important interactions, or they may arrive late or leave early. Real-time captioning using phonetic keyboards can provide an accurate live, as well as archived, transcription of what has been said, but is often unavailable because of the cost and shortage of highly skilled and trained stenographers. This paper describes the development of applications that use speech recognition to provide automatic real-time text transcriptions in situations where many people may be speaking.

    Towards Universally Designed Communication: Opportunities and Challenges in the Use of Automatic Speech Recognition Systems to Support Access, Understanding and Use of Information in Communicative Settings

    Get PDF
    Unlike physical barriers, communication barriers have no easy solution: people speak or sign in different languages and may have wide-ranging proficiency in the languages they understand and produce. Universal Design (UD) principles in the domain of language and communication have guided the production of multimodal (audio, visual, written) information. For example, UD guidelines encourage websites to provide information in alternative formats (for example, a video with captions; a sign language version). The same UD for Learning principles apply in the classroom, and instructors are encouraged to prepare content to be presented multimodally, making use of increasingly available technology. In this chapter, I will address some of the opportunities and challenges offered by automatic speech recognition (ASR) systems. These systems have many strengths, the most evident being the speed with which they convert speech into written form, faster than human transcribers can perform the same process. They also have weaknesses, for example a higher rate of errors compared to human-generated transcriptions. It is essential to weigh the strengths and weaknesses of technology when choosing which device(s) to use in a universally designed environment to enhance access to information and communication. It is equally imperative to understand which tools are most appropriate for diverse populations. Therefore, researchers should continue investigating how people process information in a multimodal format, and how technology can be improved based on this knowledge and on users' needs and feedback.
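The "higher rate of errors" that the chapter weighs against ASR's speed is conventionally measured as word error rate (WER): word-level edit distance divided by reference length. The abstract does not give a formula, so the following is a minimal illustrative sketch, not taken from the chapter itself:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference word count,
    computed via word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

# One substituted word in a four-word reference gives WER 0.25.
print(word_error_rate("a human checked this", "a human checked these"))  # 0.25
```

A human transcript typically scores well under 5% WER against a careful reference, while ASR error rates vary widely with acoustic conditions and speaker, which is why the chapter argues for weighing tools per population and setting.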

    Evaluating intelligent interfaces for post-editing automatic transcriptions of online video lectures

    Full text link
    Video lectures are fast becoming an everyday educational resource in higher education. They are being incorporated into existing university curricula around the world, while also emerging as a key component of the open education movement. In 2007, the Universitat Politècnica de València (UPV) implemented its poliMedia lecture capture system for the creation and publication of quality educational video content and now has a collection of over 10,000 video objects. In 2011, it embarked on the EU-subsidised transLectures project to add automatic subtitles to these videos in both Spanish and other languages. By doing so, it allows access to its educational content by non-native speakers and the deaf and hard-of-hearing, as well as enabling advanced repository management functions. In this paper, following a short introduction to poliMedia, transLectures and Docència en Xarxa (Teaching Online), the UPV's action plan to boost the use of digital resources at the university, we discuss the three-stage evaluation process carried out in collaboration with UPV lecturers to find the best interaction protocol for the task of post-editing automatic subtitles.
    Valor Miró, J.D., Spencer, R.N., Pérez González de Martos, A.M., Garcés Díaz-Munío, G.V., Turró Ribalta, C., Civera Saiz, J., & Juan Císcar, A. (2014). Evaluating intelligent interfaces for post-editing automatic transcriptions of online video lectures. Open Learning: The Journal of Open and Distance Learning, 29(1), 72–85. doi:10.1080/02680513.2014.909722

    Efficient Generation of High-Quality Multilingual Subtitles for Video Lecture Repositories

    Full text link
    The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-319-24258-3_44. Video lectures are a valuable educational tool in higher education to support or replace face-to-face lectures in active learning strategies. In 2007, the Universitat Politècnica de València (UPV) implemented its video lecture capture system, resulting in a high-quality educational video repository, called poliMedia, with more than 10,000 mini lectures created by 1,373 lecturers. Also, in the framework of the European project transLectures, UPV has automatically generated transcriptions and translations in Spanish, Catalan and English for all videos included in the poliMedia video repository. transLectures' objective responds to the widely recognised need for subtitles to be provided with video lectures, as an essential service for non-native speakers and hearing-impaired persons, and to allow advanced repository functionalities. Although high-quality automatic transcriptions and translations were generated in transLectures, they were not error-free. For this reason, lecturers need to manually review video subtitles to guarantee the absence of errors. The aim of this study is to evaluate the efficiency of manually reviewing automatic subtitles in comparison with the conventional generation of video subtitles from scratch. The reported results clearly indicate the convenience of providing automatic subtitles as a first step in the generation of video subtitles, with significant time savings of up to almost 75% when reviewing subtitles. The research leading to these results has received funding from the European Union FP7/2007-2013 under grant agreement no 287755 (transLectures) and ICT PSP/2007-2013 under grant agreement no 621030 (EMMA), and the Spanish MINECO Active2Trans (TIN2012-31723) research project.
    Valor Miró, J.D., Silvestre Cerdà, J.A., Civera Saiz, J., Turró Ribalta, C., & Juan Císcar, A. (2015). Efficient Generation of High-Quality Multilingual Subtitles for Video Lecture Repositories. In Design for Teaching and Learning in a Networked World (pp. 485–490). Springer. doi:10.1007/978-3-319-24258-3_44
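Subtitles reviewed by lecturers in systems like this are typically distributed in a standard timed-text format such as SubRip (.srt). The paper does not specify its serialization, so the following is a hypothetical minimal sketch of turning timed segments into SubRip text:

```python
def to_srt(segments):
    """Serialize (start_sec, end_sec, text) tuples into SubRip (.srt) format:
    a 1-based index, an HH:MM:SS,mmm --> HH:MM:SS,mmm time line, the text,
    and a blank line between cues."""
    def ts(t):
        h, rem = divmod(int(t), 3600)
        m, s = divmod(rem, 60)
        ms = int(round((t - int(t)) * 1000))
        return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

    blocks = []
    for i, (start, end, text) in enumerate(segments, 1):
        blocks.append(f"{i}\n{ts(start)} --> {ts(end)}\n{text}")
    return "\n\n".join(blocks) + "\n"

print(to_srt([(0.0, 2.5, "Welcome to poliMedia."),
              (2.5, 5.0, "Today we discuss subtitles.")]))
```

A reviewer-facing editor would load such a file, let the lecturer correct only the erroneous cues, and write the same format back out, which is what makes review-from-automatic faster than transcription from scratch.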

    Creating Accessible Educational Multimedia through Editing Automatic Speech Recognition Captioning in Real Time

    No full text
    Lectures can be digitally recorded and replayed to provide multimedia revision material for students who attended the class and a substitute learning experience for students unable to attend. Deaf and hard of hearing people can find it difficult to follow speech through hearing alone or to take notes while they are lip-reading or watching a sign-language interpreter. Note-takers can only summarise what is being said, while qualified sign language interpreters with a good understanding of the relevant higher education subject content are in very scarce supply. Synchronising the speech with text captions can ensure deaf students are not disadvantaged and can assist all learners in searching for relevant parts of the multimedia recording by means of the synchronised text. Real-time stenography transcription is not normally available in UK higher education because of the shortage of stenographers willing to work in universities. Captions are time-consuming and expensive to create by hand, and while Automatic Speech Recognition can be used to provide real-time captioning directly from lecturers' speech in classrooms, it has proved difficult to obtain accuracy comparable to stenography. This paper describes the development of a system that enables editors to correct errors in the captions as they are created by Automatic Speech Recognition.
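The core idea of real-time caption editing is that recently emitted ASR lines stay editable in a short window before being committed to the archive. The paper's actual implementation is not described in this abstract; the class below is a purely illustrative sketch of that buffering pattern (all names are hypothetical):

```python
import re
from collections import deque

class CaptionEditor:
    """Keep the last few ASR caption lines in an editable window so a human
    editor can patch recognition errors before lines are committed."""

    def __init__(self, window: int = 3):
        self.pending = deque(maxlen=window)  # lines still open for correction
        self.committed = []                  # lines past the editing window

    def add_line(self, line: str):
        # A full window means the oldest line is now beyond reach: commit it.
        if len(self.pending) == self.pending.maxlen:
            self.committed.append(self.pending.popleft())
        self.pending.append(line)

    def correct(self, wrong: str, right: str):
        # Whole-word replacement in every still-editable line.
        pattern = re.compile(rf"\b{re.escape(wrong)}\b")
        for i, line in enumerate(self.pending):
            self.pending[i] = pattern.sub(right, line)

    def flush(self):
        # End of lecture: commit whatever is still pending.
        self.committed.extend(self.pending)
        self.pending.clear()
        return self.committed
```

The window size embodies the trade-off the paper addresses: a longer window gives the editor more time to catch ASR errors, but delays when corrected captions become final for viewers and the archive.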