67 research outputs found
Arabic Printed Word Recognition Using Windowed Bernoulli HMMs
[EN] Hidden Markov Models (HMMs) are now widely used for off-line text recognition in many languages and, in particular, Arabic. In previous work, we proposed to directly feed columns of raw, binary image pixels into embedded Bernoulli (mixture) HMMs, that is, embedded HMMs in which the emission probabilities are modeled with Bernoulli mixtures. The idea was to by-pass feature extraction and to ensure that no discriminative information is filtered out during feature extraction, which in some sense is integrated into the recognition model. More recently, we extended the column bit vectors by means of a sliding window of adequate width to better capture image context at each horizontal position of the word image. However, these models may have limited capability to properly model vertical image distortions. In this paper, we consider three methods of window repositioning after window extraction to overcome this limitation. Each sliding window is translated (repositioned) to align its center to the center of mass. Using this approach, state-of-the-art results are reported on the Arabic Printed Text Recognition (APTI) database.

The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 287755. Also supported by the Spanish Government (Plan E, iTrans2 TIN2009-14511 and AECID 2011/2012 grant).

Alkhoury, I.; Giménez Pastor, A.; Juan Císcar, A.; Andrés Ferrer, J. (2013). Arabic Printed Word Recognition Using Windowed Bernoulli HMMs. Lecture Notes in Computer Science. 8156:330-339. https://doi.org/10.1007/978-3-642-41181-6_34
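To make the repositioning step concrete, here is a minimal sketch in Python of one of the repositioning variants, assuming a binarized word image stored as a NumPy array of 0s and 1s; the window width of 9 and the zero-padding of shifted-in rows are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def reposition_window(window: np.ndarray) -> np.ndarray:
    """Shift a binary window vertically so that its center of mass
    lands on the central row; rows shifted in are zero-padded."""
    mass = window.sum()
    if mass == 0:
        return window  # blank window: nothing to reposition
    rows = np.arange(window.shape[0])
    center = float((rows[:, None] * window).sum()) / mass
    shift = int(round(window.shape[0] / 2.0 - center))
    out = np.zeros_like(window)
    if shift > 0:
        out[shift:, :] = window[:-shift, :]
    elif shift < 0:
        out[:shift, :] = window[-shift:, :]
    else:
        out = window.copy()
    return out

def windowed_bit_vectors(image: np.ndarray, width: int = 9):
    """One repositioned window per horizontal position, flattened into
    the bit vector fed to the embedded Bernoulli (mixture) HMM."""
    pad = width // 2
    padded = np.pad(image, ((0, 0), (pad, pad)))
    return [reposition_window(padded[:, t:t + width]).ravel()
            for t in range(image.shape[1])]
```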
Integrating a State-of-the-Art ASR System into the Opencast Matterhorn Platform
[EN] In this paper we present the integration of a state-of-the-art ASR system into the Opencast Matterhorn platform, a free, open-source platform to support the management of educational audio and video content. The ASR system was trained on a novel large speech corpus, known as poliMedia, that was manually transcribed for the European project transLectures. This novel corpus contains more than 115 hours of transcribed speech that will be made available to the research community. Initial results on the poliMedia corpus are also reported to compare the performance of different ASR systems based on the linear interpolation of language models. To this end, the in-domain poliMedia corpus was linearly interpolated with an external large-vocabulary dataset, the well-known Google N-Gram corpus. The reported WER figures denote a notable improvement over the baseline performance as a result of incorporating the vast amount of data represented by the Google N-Gram corpus.

The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 287755. Also supported by the Spanish Government (MIPRCV "Consolider Ingenio 2010" and iTrans2 TIN2009-14511) and the Generalitat Valenciana (Prometeo/2009/014).

Valor Miró, JD.; Pérez González De Martos, AM.; Civera Saiz, J.; Juan Císcar, A. (2012). Integrating a State-of-the-Art ASR System into the Opencast Matterhorn Platform. Communications in Computer and Information Science. 328:237-246. https://doi.org/10.1007/978-3-642-35292-8_25
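The linear interpolation underlying these experiments is straightforward to sketch. The Python fragment below, a sketch assuming per-word probabilities from both component models are available, interpolates an in-domain and an out-of-domain model and picks the weight by grid search on a development set; in practice toolkits such as SRILM estimate the weight with EM.

```python
import math

def interp_prob(p_in: float, p_out: float, lam: float) -> float:
    """Linearly interpolated probability of a word given its history."""
    return lam * p_in + (1.0 - lam) * p_out

def perplexity(dev_probs, lam):
    """Perplexity of the interpolated model on a development set, given
    each word's probability under the in- and out-of-domain models."""
    logp = sum(math.log(interp_prob(p_in, p_out, lam))
               for p_in, p_out in dev_probs)
    return math.exp(-logp / len(dev_probs))

def tune_weight(dev_probs):
    """Pick the interpolation weight that minimises perplexity."""
    grid = [i / 20.0 for i in range(1, 20)]
    return min(grid, key=lambda lam: perplexity(dev_probs, lam))
```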
Speaker-adapted confidence measures for speech recognition of video lectures
[EN] Automatic speech recognition applications can benefit from a confidence measure (CM) to predict the reliability of the output. Previous work showed that a word-dependent naive Bayes (NB) classifier outperforms the conventional word posterior probability as a CM. However, a discriminative formulation usually yields improved performance thanks to the available training techniques.
Taking this into account, we propose a logistic regression (LR) classifier defined with simple input functions to approximate the NB behaviour. Additionally, as the main contribution, we propose to adapt the CM to the speaker in cases in which the speakers can be identified, such as online lecture repositories.
The experiments have shown that speaker-adapted models outperform their non-adapted counterparts on two difficult tasks of English (videoLectures.net) and Spanish (poliMedia) educational lectures. They have also shown that the NB model is clearly outperformed by the proposed LR classifier.

The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 287755. Also supported by the Spanish MINECO (iTrans2 TIN2009-14511 and Active2Trans TIN2012-31723) research projects and the FPI scholarship BES-2010-033005.

Sanchez-Cortina, I.; Andrés Ferrer, J.; Sanchis Navarro, JA.; Juan Císcar, A. (2016). Speaker-adapted confidence measures for speech recognition of video lectures. Computer Speech and Language. 37:11-23. https://doi.org/10.1016/j.csl.2015.10.003
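As a rough illustration of the classifier side of this work (not the paper's exact features or training recipe), the sketch below trains a binary logistic regression on word-level features such as the word posterior, then adapts the speaker-independent weights with a few extra regularised steps on one speaker's data; all names and hyperparameters are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_lr(X, y, w=None, epochs=200, lr=0.1, l2=1e-3):
    """Binary logistic regression trained by gradient descent. Rows of X
    are word-level input features (e.g. the word posterior plus a bias
    term); y marks whether each recognised word was in fact correct."""
    w = np.zeros(X.shape[1]) if w is None else w.copy()
    for _ in range(epochs):
        grad = X.T @ (sigmoid(X @ w) - y) / len(y) + l2 * w
        w -= lr * grad
    return w

def adapt_to_speaker(w_si, X_spk, y_spk):
    """Speaker adaptation sketch: start from the speaker-independent
    weights and take a few regularised steps on that speaker's data."""
    return train_lr(X_spk, y_spk, w=w_si, epochs=20, lr=0.05)

# Confidence of a recognised word x for a given speaker:
#   conf = sigmoid(x @ adapt_to_speaker(w_si, X_spk, y_spk))
```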
Discriminative Bernoulli Mixture Models for Handwritten Digit Recognition
Bernoulli-based models, such as Bernoulli mixtures or Bernoulli HMMs (BHMMs), have been successfully applied to several handwritten text recognition (HTR) tasks, ranging from character recognition to continuous and isolated handwritten words. All these models belong to the generative model family and, hence, are usually trained by (joint) maximum likelihood estimation (MLE). Despite the good properties of the MLE criterion, there are better training criteria, such as maximum mutual information (MMI). MMI is a widespread criterion that is mainly employed to train discriminative models such as log-linear (or maximum entropy) models. Inspired by the Bernoulli mixture classifier, in this work a log-linear model for binary data is proposed, the so-called mixture of multiclass logistic regressions. The proposed model is proved to be equivalent to the Bernoulli mixture classifier. In this way, we provide a discriminative training framework for Bernoulli mixture models. The proposed framework is applied to a well-known Indian digit recognition task.

Work supported by the EC (FEDER/FSE) and the Spanish MEC/MICINN under the MIPRCV "Consolider Ingenio 2010" program (CSD2007-00018) and the iTrans2 (TIN2009-14511) and MITTRAL (TIN2009-14633-C03-01) projects. Also supported by the IST Programme of the European Community under the PASCAL2 Network of Excellence (IST-2007-216886), and by the Spanish MITyC under the erudito.com project (TSI-020110-2009-439).

Giménez Pastor, A.; Andrés Ferrer, J.; Juan Císcar, A.; Serrano Martinez Santos, N. (2011). Discriminative Bernoulli Mixture Models for Handwritten Digit Recognition. In: Document Analysis and Recognition (ICDAR), 2011 International Conference on. Institute of Electrical and Electronics Engineers (IEEE). 558-562. https://doi.org/10.1109/ICDAR.2011.118
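The claimed equivalence follows the standard construction for turning a generative Bernoulli mixture classifier into a log-linear (mixture of logistic regressions) form; the sketch below uses generic notation and may differ from the paper's exact parameterisation.

```latex
% Bernoulli mixture classifier (Bayes' rule, class prior p(c)):
p(c \mid x) \;\propto\; p(c) \sum_{k} \pi_{ck}
    \prod_{d} p_{ckd}^{\,x_d} (1 - p_{ckd})^{1 - x_d}

% The same posterior as a mixture of multiclass logistic regressions:
p(c \mid x) \;=\;
  \frac{\sum_{k} \exp\!\bigl(w_{ck}^{\top} x + b_{ck}\bigr)}
       {\sum_{c'} \sum_{k'} \exp\!\bigl(w_{c'k'}^{\top} x + b_{c'k'}\bigr)},
\quad
w_{ckd} = \log \frac{p_{ckd}}{1 - p_{ckd}},
\quad
b_{ck} = \log p(c) + \log \pi_{ck} + \sum_{d} \log (1 - p_{ckd}).
```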
Language model adaptation for video lectures transcription
Video lectures are currently being digitised all over the world for their enormous value as a reference resource. Many of these lectures are accompanied by slides. The slides offer a great opportunity for improving ASR system performance. We propose a simple yet powerful extension to the linear interpolation of language models for adapting language models with slide information. Two types of slides are considered: correct slides, and slides automatically extracted from the videos with OCR. Furthermore, we compare both time-aligned and unaligned slides. Results report an improvement of up to 3.8 absolute WER points when using correct slides. Surprisingly, when using automatic slides obtained with poor OCR quality, the ASR system still improves by up to 2.2 absolute WER points.

The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 287755 (transLectures). Also supported by the Spanish Government (Plan E, iTrans2 TIN2009-14511).

Martínez-Villaronga, A.; Del Agua Teba, MA.; Andrés Ferrer, J.; Juan Císcar, A. (2013). Language model adaptation for video lectures transcription. In: Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on. Institute of Electrical and Electronics Engineers (IEEE). 8450-8454. https://doi.org/10.1109/ICASSP.2013.6639314
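A hedged sketch of the idea follows; the LM interface and the weight are invented for illustration, and the paper's extension of linear interpolation is more elaborate. With time-aligned slides the slide LM can change over time, while with unaligned slides a single LM over all slides of the video is used.

```python
def slide_adapted_prob(word, history, t, general_lm, slide_lm_at, lam=0.2):
    """Score a word with the general lecture LM interpolated with a
    slide LM. With time-aligned slides, slide_lm_at(t) returns an LM
    built from the slide on screen at time t; with unaligned slides it
    can simply return one LM over all slides of the video.

    The .prob(word, history) interface and lam=0.2 are illustrative
    assumptions, not the paper's interface or setting."""
    slide_lm = slide_lm_at(t)
    return ((1.0 - lam) * general_lm.prob(word, history)
            + lam * slide_lm.prob(word, history))
```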
Efficient Generation of High-Quality Multilingual Subtitles for Video Lecture Repositories
The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-319-24258-3_44

Video lectures are a valuable educational tool in higher education, used to support or replace face-to-face lectures in active learning strategies. In 2007 the Universitat Politècnica de València (UPV) implemented its video lecture capture system, resulting in a high-quality educational video repository, called poliMedia, with more than 10,000 mini lectures created by 1,373 lecturers. Also, in the framework of the European project transLectures, UPV has automatically generated transcriptions and translations in Spanish, Catalan and English for all videos included in the poliMedia video repository. transLectures' objective responds to the widely recognised need for subtitles to be provided with video lectures, as an essential service for non-native speakers and hearing-impaired persons, and to allow advanced repository functionalities. Although high-quality automatic transcriptions and translations were generated in transLectures, they were not error-free. For this reason, lecturers need to manually review video subtitles to guarantee the absence of errors. The aim of this study is to evaluate the efficiency of the manual review process from automatic subtitles in comparison with the conventional generation of video subtitles from scratch. The reported results clearly indicate the convenience of providing automatic subtitles as a first step in the generation of video subtitles, with significant time savings of up to almost 75% when reviewing subtitles.

The research leading to these results has received funding from the European Union FP7/2007-2013 under grant agreement no. 287755 (transLectures) and ICT PSP/2007-2013 under grant agreement no. 621030 (EMMA), and the Spanish MINECO Active2Trans (TIN2012-31723) research project.

Valor Miró, JD.; Silvestre Cerdà, JA.; Civera Saiz, J.; Turró Ribalta, C.; Juan Císcar, A. (2015). Efficient Generation of High-Quality Multilingual Subtitles for Video Lecture Repositories. In: Design for Teaching and Learning in a Networked World. Springer Verlag (Germany). 485-490. https://doi.org/10.1007/978-3-319-24258-3_44
Comparison of Bernoulli and Gaussian HMMs using a vertical repositioning technique for off-line handwriting recognition
In this paper, a vertical repositioning method based on the center of gravity is investigated for handwriting recognition systems and evaluated on databases containing Arabic and French handwriting. Experiments show that vertical distortion in images has a large impact on the performance of HMM-based handwriting recognition systems. Recently, good results were obtained with Bernoulli HMMs (BHMMs) using preprocessing with vertical repositioning of binarized images. In order to isolate the effect of the preprocessing from the BHMM model, experiments were conducted with Gaussian HMMs and the LSTM-RNN tandem HMM approach, with relative improvements of 33% WER on the Arabic and up to 62% on the French database.

Doetsch, P.; Hamdani, M.; Ney, H.; Giménez Pastor, A.; Andrés Ferrer, J.; Juan Císcar, A. (2012). Comparison of Bernoulli and Gaussian HMMs using a vertical repositioning technique for off-line handwriting recognition. In: 2012 International Conference on Frontiers in Handwriting Recognition ICFHR 2012. Institute of Electrical and Electronics Engineers (IEEE). 3-7. https://doi.org/10.1109/ICFHR.2012.194
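For reference, the two emission models under comparison differ only in how a frame is scored given an HMM state; in generic notation (mixture weights \pi_{sk}, Bernoulli prototypes p_{sk}, Gaussian parameters \mu_{sk}, \Sigma_{sk}), a sketch of the standard forms:

```latex
% Bernoulli mixture emission for a binary frame o_t \in \{0,1\}^D:
p(o_t \mid s) = \sum_{k} \pi_{sk}
    \prod_{d=1}^{D} p_{skd}^{\,o_{td}} (1 - p_{skd})^{1 - o_{td}}

% Gaussian mixture emission for a real-valued frame x_t \in \mathbb{R}^D:
p(x_t \mid s) = \sum_{k} \pi_{sk} \,
    \mathcal{N}(x_t \mid \mu_{sk}, \Sigma_{sk})
```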
TransLectures - Transcription and Translation of Video Lectures
TransLectures: Transcription and Translation of Video Lectures. Funding agency: European Commission. Funding call identification: FP7-ICT. Type of project: STREP. Project ID number: 287755.

Andrés Ferrer, J.; Civera Saiz, J.; Juan Císcar, A. (2012). TransLectures - Transcription and Translation of Video Lectures. Fondazione Bruno Kessler. 204-204. http://hdl.handle.net/10251/37027
Character-Based Handwritten Text Recognition of Multilingual Documents
[EN] An effective approach to transcribing handwritten text documents is to follow a sequential interactive approach. During the supervision phase, user corrections are incorporated into the system through an ongoing retraining process. In the case of multilingual documents with a high percentage of out-of-vocabulary (OOV) words, two principal issues arise. On the one hand, a minor yet important matter for this interactive approach is to identify the language of the current text line image to be transcribed, as a language-dependent recogniser typically performs better than a monolingual recogniser. On the other hand, word-based language models suffer from data scarcity in the presence of a large number of OOV words, degrading their estimation and affecting the performance of the transcription system. In this paper, we successfully tackle both issues by deploying character-based language models combined with language identification techniques on an entire 764-page multilingual document. The results obtained significantly reduce previously reported transcription errors on the same task, but show that a language-dependent approach is not effective on top of character-based recognition of similar languages.

The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 287755. Also supported by the Spanish Government (MIPRCV "Consolider Ingenio 2010", iTrans2 TIN2009-14511, MITTRAL TIN2009-14633-C03-01 and FPU AP2007-0286) and the Generalitat Valenciana (Prometeo/2009/014).

Del Agua Teba, MA.; Serrano Martinez Santos, N.; Civera Saiz, J.; Juan Císcar, A. (2012). Character-Based Handwritten Text Recognition of Multilingual Documents. Communications in Computer and Information Science. 328:187-196. https://doi.org/10.1007/978-3-642-35292-8_20
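A minimal sketch of the two ingredients combined in this work: a character n-gram model, which sidesteps OOV words because every word decomposes into known characters, and language identification by picking the best-scoring character LM. Add-one smoothing and the order n = 6 are assumptions for illustration; the paper relies on standard n-gram toolkits.

```python
import math
from collections import Counter

def train_char_lm(text: str, n: int = 6):
    """n-gram and (n-1)-gram counts for a character language model."""
    grams = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    hists = Counter(text[i:i + n - 1] for i in range(len(text) - n + 2))
    return grams, hists

def log_prob(line: str, model, n: int = 6, charset: int = 100):
    """Add-one-smoothed log-probability of a text line under the model."""
    grams, hists = model
    return sum(math.log((grams[line[i:i + n]] + 1)
                        / (hists[line[i:i + n - 1]] + charset))
               for i in range(len(line) - n + 1))

def identify_language(line: str, models: dict) -> str:
    """Language identification: pick the best-scoring character LM."""
    return max(models, key=lambda lang: log_prob(line, models[lang]))
```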
Evaluation of the review of automatic transcriptions and translations of poliMedia videos
[ES] Educational videos are a widely accepted tool in the university world, integrated into active and innovative teaching methodologies, and giving rise to platforms such as poliMedia, the Universitat Politècnica de València (UPV) platform for the creation, publication and dissemination of this kind of multimedia content. In the framework of the European project transLectures, the UPV has automatically generated transcriptions and translations in Spanish, Catalan and English for all videos included in the poliMedia repository.

The automatically generated transcriptions and translations are of high quality. However, they contain errors inherent to the automatic subtitling process, so a subsequent manual review may be necessary to guarantee the absence of errors. The aim of this work is to evaluate this manual review process in order to compare it, in terms of time cost, with a fully manual process. Thus, within the UPV Docencia en Red 2013-2014 programme, we launched an evaluation campaign in which lecturers reviewed automatic transcriptions and translations; its results unequivocally indicate the effectiveness of these techniques and the significant time savings they provide.

The research leading to these results has received funding from the European Union FP7/2007-2013 under grant agreement no. 287755 (transLectures) and ICT PSP/2007-2013 under grant agreement no. 621030 (EMMA), and the Spanish MINECO Active2Trans (TIN2012-31723) research project.

Valor Miró, JD.; Turró Ribalta, C.; Civera Saiz, J.; Juan Císcar, A. (2015). Evaluación de la revisión de transcripciones y traducciones automáticas de vídeos poliMedia. Editorial Universitat Politècnica de València. https://doi.org/10.4995/INRED2015.2015.1574
- …