
    Polimedia production by university teaching staff and its degree of acceptance in the Dominican Republic

    The use of video in distance education contexts and in virtual training is essential. This study presents the results of an experience with 114 university professors from two universities in the Dominican Republic who carried out training activities on the pedagogical bases of the Polimedia systems. The degree of acceptance of the Polimedia system was measured through an adaptation of the Technology Acceptance Model (TAM) developed by Davis in 1989, and the significance of different sociodemographic variables in the model was analyzed. The results demonstrate the high degree of teacher acceptance of the Polimedia system and its relationship with variables such as experience with ICT, as well as the robustness of the TAM. This strengthens the need to establish training plans that focus less on technological aspects and more on the pedagogical dimension.
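    A minimal sketch, in Python, of how TAM construct scores such as perceived usefulness and perceived ease of use can be aggregated from Likert items and related to each other; the respondents, item groupings and figures below are entirely hypothetical and do not reproduce the instrument or statistics used in the study.

```python
# Hypothetical sketch: aggregating TAM constructs from 1-5 Likert items
# and checking their association. Item groupings and data are made up.
import numpy as np

# Each row is one respondent; columns are questionnaire items.
pu_items = np.array([[4, 5, 4], [3, 4, 4], [5, 5, 5], [2, 3, 3]])    # perceived usefulness
peou_items = np.array([[4, 4, 5], [3, 3, 4], [5, 4, 5], [3, 2, 3]])  # perceived ease of use

# Construct scores: mean of the items belonging to each construct.
pu = pu_items.mean(axis=1)
peou = peou_items.mean(axis=1)

# Pearson correlation between the two constructs (TAM predicts a positive link).
r = np.corrcoef(peou, pu)[0, 1]
print(f"PU mean={pu.mean():.2f}, PEOU mean={peou.mean():.2f}, r(PEOU, PU)={r:.2f}")
```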

    Evaluating intelligent interfaces for post-editing automatic transcriptions of online video lectures

    Video lectures are fast becoming an everyday educational resource in higher education. They are being incorporated into existing university curricula around the world, while also emerging as a key component of the open education movement. In 2007, the Universitat Politècnica de València (UPV) implemented its poliMedia lecture capture system for the creation and publication of quality educational video content and now has a collection of over 10,000 video objects. In 2011, it embarked on the EU-subsidised transLectures project to add automatic subtitles to these videos in both Spanish and other languages. By doing so, it allows access to this educational content by non-native speakers and the deaf and hard-of-hearing, as well as enabling advanced repository management functions. In this paper, following a short introduction to poliMedia, transLectures and Docència en Xarxa (Teaching Online), the UPV's action plan to boost the use of digital resources at the university, we discuss the three-stage evaluation process carried out with the collaboration of UPV lecturers to find the best interaction protocol for the task of post-editing automatic subtitles.

    Valor Miró, J. D.; Spencer, R. N.; Pérez González de Martos, A. M.; Garcés Díaz-Munío, G. V.; Turró Ribalta, C.; Civera Saiz, J.; Juan Císcar, A. (2014). Evaluating intelligent interfaces for post-editing automatic transcriptions of online video lectures. Open Learning: The Journal of Open and Distance Learning, 29(1), 72-85. doi:10.1080/02680513.2014.909722
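    To make the notion of post-editing effort concrete, here is a minimal Python sketch of word error rate (WER) computed via edit distance between an automatic transcript and its reviewed version; the sentences are invented and this is not the evaluation code used in the paper.

```python
# Hypothetical sketch: word error rate (WER) between an automatic transcript
# and the lecturer's post-edited reference. Lower WER means less editing effort.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words (substitutions, insertions, deletions).
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the automatic subtitles need some review",
          "the automatic subtitle needs review"))  # 3 edits over 6 words -> 0.5
```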

    Towards a better learning models through OCWs and MOOCs

    Technological advances of the 20th century have induced a profound change in society and, therefore, in higher education. The Internet marked a qualitative difference, as information and digital images flooded into homes around the world. The Universitat Politècnica de València (UPV) is a medium-sized Spanish university that has been involved in the development of digital video content (Polimedia) to support teaching processes for several years. Together with Polimedia and other learning objects (virtual laboratories, applets, etc.), the UPV promoted the construction of OCWs. As technology improved, MOOCs appeared as e-learning material. In this work, we analyze the advantages and drawbacks of OCWs and MOOCs when they are used in our classrooms. This experience has led us to incorporate flip teaching into our methodology.

    Developing a MOOC Initiative: lessons learnt from The Universitat Politecnica de Valencia experience

    [EN] The Universitat Politecnica de Valencia (UPV) was the European institution with the most MOOC course runs as of December 2016. This paper describes and analyses the decision-making process and rationale behind the development of this low-budget, real MOOC institutional initiative carried out by a traditional, medium-sized higher education institution. The analysis uses an Evaluative Research (ER) method based on an iterative approach of six cycles of formative evaluation. It has been a multiple-stage process that includes many aspects. In this paper, we analyze two of them: the creation, organization and management of a fast and cheap MOOC production process, including the tools, plans and procedures that have enabled UPV to create courses quickly and at low cost; and the technical evolution of the initiative, with the different platforms that have been used. As a consequence of this process, UPV has its own platform (upvx.es) based on Open edX and is a member of edx.org, with 50 courses, 177 editions and more than 632,000 enrollments. The completion rate is 8.69%, and post-course surveys reveal a high level of satisfaction among students. The paper finishes by addressing the challenges of making this an entirely self-sustainable initiative and reflecting on what is required to evaluate this experience globally.

    Despujol Zabala, I.; Castañeda Quintero, L.; Turró Ribalta, C. (2018). Developing a MOOC Initiative: lessons learnt from The Universitat Politecnica de Valencia experience. Turkish Online Journal of Distance Education, 19(1), 215-233. doi:10.17718/tojde.382805

    Efficient Generation of High-Quality Multilingual Subtitles for Video Lecture Repositories

    The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-319-24258-3_44

    Video lectures are a valuable educational tool in higher education to support or replace face-to-face lectures in active learning strategies. In 2007 the Universitat Politècnica de València (UPV) implemented its video lecture capture system, resulting in a high-quality educational video repository, called poliMedia, with more than 10,000 mini lectures created by 1,373 lecturers. Also, in the framework of the European project transLectures, UPV has automatically generated transcriptions and translations in Spanish, Catalan and English for all videos included in the poliMedia video repository. transLectures' objective responds to the widely recognised need for subtitles to be provided with video lectures, as an essential service for non-native speakers and hearing-impaired persons, and to allow advanced repository functionalities. Although high-quality automatic transcriptions and translations were generated in transLectures, they were not error-free. For this reason, lecturers need to manually review video subtitles to guarantee the absence of errors. The aim of this study is to evaluate the efficiency of the manual review process applied to automatic subtitles in comparison with the conventional generation of video subtitles from scratch. The reported results clearly indicate the convenience of providing automatic subtitles as a first step in the generation of video subtitles, with significant time savings of up to almost 75% when reviewing subtitles.

    The research leading to these results has received funding from the European Union FP7/2007-2013 under grant agreement no. 287755 (transLectures) and ICT PSP/2007-2013 under grant agreement no. 621030 (EMMA), and the Spanish MINECO Active2Trans (TIN2012-31723) research project.

    Valor Miró, J. D.; Silvestre Cerdà, J. A.; Civera Saiz, J.; Turró Ribalta, C.; Juan Císcar, A. (2015). Efficient Generation of High-Quality Multilingual Subtitles for Video Lecture Repositories. In Design for Teaching and Learning in a Networked World, pp. 485-490. Springer. https://doi.org/10.1007/978-3-319-24258-3_44
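    A back-of-the-envelope Python sketch of how such time savings can be expressed: review effort per minute of video under both workflows and the resulting relative saving. The figures below are assumed for illustration only and are not the study's measurements.

```python
# Hypothetical sketch: relative time saving of post-editing automatic subtitles
# versus transcribing from scratch, expressed as effort per minute of video.
minutes_from_scratch_per_video_min = 10.0  # assumed effort transcribing from scratch
minutes_post_editing_per_video_min = 2.7   # assumed effort reviewing automatic subtitles

saving = 1.0 - minutes_post_editing_per_video_min / minutes_from_scratch_per_video_min
print(f"Relative time saving: {saving:.0%}")  # ~73% with these assumed figures
```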

    Integrating a State-of-the-Art ASR System into the Opencast Matterhorn Platform

    [EN] In this paper we present the integration of a state-of-the-art ASR system into the Opencast Matterhorn platform, a free, open-source platform to support the management of educational audio and video content. The ASR system was trained on a novel large speech corpus, known as poliMedia, that was manually transcribed for the European project transLectures. This novel corpus contains more than 115 hours of transcribed speech that will be available to the research community. Initial results on the poliMedia corpus are also reported to compare the performance of different ASR systems based on the linear interpolation of language models. To this purpose, the in-domain poliMedia corpus was linearly interpolated with an external large-vocabulary dataset, the well-known Google N-Gram corpus. The reported WER figures denote a notable improvement over the baseline performance as a result of incorporating the vast amount of data represented by the Google N-Gram corpus.

    The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 287755. Also supported by the Spanish Government (MIPRCV "Consolider Ingenio 2010" and iTrans2 TIN2009-14511) and the Generalitat Valenciana (Prometeo/2009/014).

    Valor Miró, J. D.; Pérez González de Martos, A. M.; Civera Saiz, J.; Juan Císcar, A. (2012). Integrating a State-of-the-Art ASR System into the Opencast Matterhorn Platform. Communications in Computer and Information Science, 328, 237-246. https://doi.org/10.1007/978-3-642-35292-8_25
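    The linear interpolation mentioned above combines an in-domain and an out-of-domain language model as P(w|h) = λ·P_in(w|h) + (1−λ)·P_out(w|h), with λ tuned on development data. Below is a minimal unigram-only Python sketch with made-up toy corpora, purely to illustrate the mechanism; the systems in the paper use full n-gram models built with standard toolkits such as SRILM.

```python
# Hypothetical sketch: linear interpolation of two unigram language models,
# picking the weight that minimises perplexity on a development set.
import math
from collections import Counter

def unigram_lm(corpus, vocab, alpha=0.1):
    """Add-alpha smoothed unigram probabilities over a fixed vocabulary."""
    counts = Counter(corpus)
    total = len(corpus) + alpha * len(vocab)
    return {w: (counts[w] + alpha) / total for w in vocab}

in_domain = "the lecture covers speech recognition for video lectures".split()
out_domain = "the large corpus provides broad coverage of general language".split()
dev = "the lecture uses a large corpus for speech recognition".split()

vocab = set(in_domain) | set(out_domain) | set(dev)
p_in = unigram_lm(in_domain, vocab)
p_out = unigram_lm(out_domain, vocab)

def perplexity(lam):
    # Interpolated model: lam * P_in + (1 - lam) * P_out, evaluated on the dev set.
    logp = sum(math.log(lam * p_in[w] + (1 - lam) * p_out[w]) for w in dev)
    return math.exp(-logp / len(dev))

best = min((lam / 10 for lam in range(11)), key=perplexity)
print(f"best lambda={best:.1f}, dev perplexity={perplexity(best):.1f}")
```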

    A System Architecture to Support Cost-Effective Transcription and Translation of Large Video Lecture Repositories

    [EN] Online video lecture repositories are rapidly growing and becoming established as fundamental knowledge assets. However, most lectures are neither transcribed nor translated because of the lack of cost-effective solutions that can give accurate enough results. In this paper, we describe a system architecture that supports the cost-effective transcription and translation of large video lecture repositories. This architecture has been adopted in the EU project transLectures and is now being tested on a repository of more than 9,000 video lectures at the Universitat Politecnica de Valencia. Following a brief description of this repository and of the transLectures project, we describe the proposed system architecture in detail. We also report empirical results on the quality of the transcriptions and translations currently being maintained and steadily improved.

    The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 287755. Funding was also provided by the Spanish Government through FPU scholarship AP2010-4349.

    Silvestre Cerdà, J. A.; Pérez González de Martos, A. M.; Jiménez López, M.; Turró Ribalta, C.; Juan Císcar, A.; Civera Saiz, J. (2013). A System Architecture to Support Cost-Effective Transcription and Translation of Large Video Lecture Repositories. IEEE International Conference on Systems, Man, and Cybernetics. Conference proceedings, pp. 3994-3999. https://doi.org/10.1109/SMC.2013.682
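    As an illustration only, a minimal Python sketch of the kind of processing pipeline such an architecture implies (ingest, automatic transcription, machine translation, optional manual review, publication). Every class and function name here is a hypothetical placeholder, not the transLectures API.

```python
# Hypothetical sketch of a transcription/translation pipeline for a lecture
# repository. All names below are placeholders, not real project interfaces.
from dataclasses import dataclass, field

@dataclass
class Lecture:
    video_id: str
    language: str
    transcript: str = ""
    translations: dict = field(default_factory=dict)

def transcribe(lecture: Lecture) -> None:
    # Stand-in for an ASR service call.
    lecture.transcript = f"[automatic transcript of {lecture.video_id}]"

def translate(lecture: Lecture, target: str) -> None:
    # Stand-in for a machine translation service call.
    lecture.translations[target] = f"[automatic {target} subtitles of {lecture.video_id}]"

def review(lecture: Lecture, reviewed_text: str) -> None:
    # Lecturer-supplied corrections replace the automatic transcript.
    lecture.transcript = reviewed_text

def publish(lecture: Lecture) -> None:
    print(lecture.video_id, "->", lecture.transcript, list(lecture.translations))

lec = Lecture("poliMedia-0001", "es")
transcribe(lec)
for lang in ("ca", "en"):
    translate(lec, lang)
publish(lec)
```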

    Evaluation of innovative computer-assisted transcription and translation strategies for video lecture repositories

    Nowadays, the technology-enhanced learning area has experienced strong growth, with many new learning approaches such as blended learning, flip teaching, massive open online courses, and open educational resources to complement face-to-face lectures. Specifically, video lectures are fast becoming an everyday educational resource in higher education for all of these new learning approaches, and they are being incorporated into existing university curricula around the world. Transcriptions and translations can improve the utility of these audiovisual assets, but they are rarely present due to a lack of cost-effective solutions to produce them. Lecture searchability, accessibility for people with impairments, translatability for foreign students, plagiarism detection, content recommendation, note-taking, and discovery of content-related videos are examples of the advantages of having transcriptions. For this reason, the aim of this thesis is to test, in real-life case studies, ways of obtaining multilingual captions for video lectures in a cost-effective way by using state-of-the-art automatic speech recognition and machine translation techniques. We also explore interaction protocols to review these automatic transcriptions and translations, because unfortunately automatic subtitles are not error-free. In addition, we take a step further into multilingualism by extending our findings and evaluation to several languages. Finally, the outcomes of this thesis have been applied to thousands of video lectures in European universities and institutions.

    Valor Miró, J. D. (2017). Evaluation of innovative computer-assisted transcription and translation strategies for video lecture repositories [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/90496

    Polimedia: Using advanced screencasting for effective e-learning content production

    In this paper we present Polimedia, a system designed to help teachers produce high-quality e-learning content without requiring technological expertise. We achieve this by using a special TV-like recording studio that captures the teacher's video and learning material in a way that resembles a standard slide presentation. We present the architecture of the system and provide examples and results of its usage at the Universidad Politecnica de Valencia, where the system has been used with more than 200 teachers and 2,500 recordings.