
    Integration of a Spanish-to-LSE machine translation system into an e-learning platform

    Full text link
    The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-642-21657-2_61

    This paper presents the first results of the integration of a Spanish-to-LSE Machine Translation (MT) system into an e-learning platform. Most e-learning platforms provide speech-based contents, which makes them inaccessible to the Deaf. To solve this issue, we have developed an MT system that translates Spanish speech-based contents into LSE. To test our MT system, we have integrated it into an e-learning tool: the e-learning tool sends the audio to our platform, and the platform sends back the subtitles and a video stream with the signed translation. Preliminary results, evaluating the sign language synthesis module, show an isolated sign recognition accuracy of 97%; the sentence recognition accuracy was 93%.

    The authors would like to acknowledge the FPU-UAM grant program for its financial support. The authors are grateful to the FCNSE linguistic department for sharing their knowledge of LSE and performing the evaluations. Many thanks go to María Chulvi and Benjamín Nogal for providing help during the implementation of this system. This work was partially supported by Telefónica Móviles España S.A., project number 10-047158-TE-Ed-01-1

    An on-line system adding subtitles and sign language to Spanish audio-visual content

    Full text link
    Deaf people cannot properly access the speech information stored in any kind of recording format (audio, video, etc.). We present a system that provides subtitling and Spanish Sign Language representation capabilities so that the Spanish Deaf population can access such speech content. The system is composed of a speech recognition module, a Spanish-to-Spanish Sign Language machine translation module, and a Spanish Sign Language synthesis module. On the deaf person's side, a user-friendly interface with subtitle and avatar components allows him/her to access the speech information
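    The three-module architecture described above (speech recognition, machine translation, sign synthesis) can be sketched as a simple pipeline. The function names and stub logic below are hypothetical illustrations of the data flow, not the actual system's API:

    ```python
    # Minimal sketch of the three-module pipeline: ASR -> MT -> synthesis.
    # All module names and their toy internals are illustrative assumptions.

    def recognize_speech(audio: bytes) -> str:
        """Stub ASR module: would return the Spanish transcript (the subtitles)."""
        return "hola"  # placeholder transcript

    def translate_to_lse(spanish_text: str) -> list:
        """Stub MT module: Spanish text -> sequence of Spanish Sign Language glosses."""
        lexicon = {"hola": "HELLO"}  # toy bilingual lexicon
        return [lexicon.get(w, w.upper()) for w in spanish_text.split()]

    def synthesize_signs(glosses: list) -> str:
        """Stub synthesis module: glosses -> an avatar animation script."""
        return " ".join("<sign:%s>" % g for g in glosses)

    def pipeline(audio: bytes):
        """End-to-end flow: the user interface receives both outputs."""
        subtitles = recognize_speech(audio)
        glosses = translate_to_lse(subtitles)
        animation = synthesize_signs(glosses)
        return subtitles, animation
    ```

    In the real system each stage is a separate component, and the interface presents the subtitle text alongside the avatar rendering of the animation script.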

    A rule-based translation from written Spanish to Spanish Sign Language glosses

    Full text link
    This is the author’s version of a work that was accepted for publication in Computer Speech and Language. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Computer Speech and Language, 28, 3 (2015), DOI: 10.1016/j.csl.2013.10.003

    One of the aims of Assistive Technologies is to help people with disabilities communicate with others and to provide means of access to information. As an aid to Deaf people, we present in this work a production-quality rule-based machine translation system from Spanish to Spanish Sign Language (LSE) glosses, which is a necessary precursor to building a full machine translation system that eventually produces animation output. The system implements a transfer-based architecture operating on the syntactic functions of dependency analyses. A sketch of LSE is also presented. Several topics regarding translation to sign languages are addressed: the lexical gap, the bootstrapping of a bilingual lexicon, the generation of word order for topic-oriented languages, and the treatment of classifier predicates and classifier names. The system has been evaluated on an open-domain testbed, reporting 0.30 BLEU (BiLingual Evaluation Understudy) and 42% TER (Translation Error Rate). These results show consistent improvements over a statistical machine translation baseline, and some improvements over the same system preserving the word order of the source sentence. Finally, the linguistic analysis of errors has identified some differences due to a certain degree of structural variation in LSE
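    The BLEU metric reported above scores a hypothesis gloss sequence against a reference by combining modified n-gram precisions with a brevity penalty. A simplified single-reference version (uniform weights, crude smoothing; not the exact evaluation setup of the paper) can be written in plain Python:

    ```python
    import math
    from collections import Counter

    def bleu(candidate, reference, max_n=4):
        """Simplified single-reference BLEU over token (gloss) sequences.
        A sketch for illustration, not the paper's evaluation pipeline."""
        precisions = []
        for n in range(1, max_n + 1):
            cand = Counter(tuple(candidate[i:i + n]) for i in range(len(candidate) - n + 1))
            ref = Counter(tuple(reference[i:i + n]) for i in range(len(reference) - n + 1))
            overlap = sum((cand & ref).values())        # clipped n-gram matches
            total = max(sum(cand.values()), 1)
            precisions.append(max(overlap, 1e-9) / total)  # smooth to avoid log(0)
        # Brevity penalty: penalize hypotheses shorter than the reference.
        if len(candidate) > len(reference):
            bp = 1.0
        else:
            bp = math.exp(1 - len(reference) / max(len(candidate), 1))
        return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

    # Toy LSE gloss sequences (hypothetical examples):
    ref = ["HOUSE", "BIG", "IX-3", "LIVE"]
    hyp = ["HOUSE", "BIG", "IX-3", "LIVE"]   # identical -> BLEU = 1.0
    ```

    TER, the other reported metric, instead counts the minimum number of edits (insertions, deletions, substitutions, shifts) needed to turn the hypothesis into the reference, normalized by reference length.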

    Accelerating the adoption of Industry 4.0 supporting technologies in manufacturing engineering courses

    Full text link
    [EN] Universities are among the fundamental actors in guaranteeing the dissemination of knowledge and the development of competences related to the Industry of the Future (IoF), or Industry 4.0. Computer-Aided (CAX) and Product Lifecycle Management (PLM) technologies are a key part of the IoF. With this aim, a project focused on Manufacturing was launched, partially funded by La Fondation Dassault Systèmes. This communication presents a review of CAX-PLM training, four initiatives already in place in universities participating in the project, the project scope, the approach to integration with the industrial context, the working method to consider different competence profiles, and the development framework.

    The authors express their gratitude to the other project colleagues and to La Fondation Dassault Systèmes for its funding support.

    Ríos, J.; Mas, F.; Marcos, M.; Vila, C.; Ugarte, D.; Chevrot, T. (2017). Accelerating the adoption of Industry 4.0 supporting technologies in manufacturing engineering courses. Materials Science Forum, 903, 100-111. https://doi.org/10.4028/www.scientific.net/MSF.903.100

    Application-driven visual computing towards Industry 4.0 (2018)

    Get PDF
    245 p.

    This thesis gathers contributions in three fields:
    1. Interactive Virtual Agents (IVAs): autonomous, modular, scalable, ubiquitous, and appealing to the user. These IVAs can interact with users in a natural way.
    2. Immersive VR/AR environments: VR in production planning, product design, process simulation, testing, and verification. The Virtual Operator shows how VR and co-bots can work together in a safe environment. In the Augmented Operator, AR presents relevant information to the worker in a non-intrusive way.
    3. Interactive management of 3D models: online management and visualization of multimedia CAD models, through automatic conversion of CAD models to the Web. Web3D technology enables the visualization of, and interaction with, these models on low-powered mobile devices.
    In addition, these contributions have made it possible to analyze the challenges posed by Industry 4.0. The thesis has contributed a proof of concept for some of those challenges: in human factors, simulation, visualization, and model integration

    A Systematic Mapping of Translation-Enabling Technologies for Sign Languages

    Get PDF
    Sign languages (SL) are the first language for most deaf people. Consequently, bidirectional communication between deaf and non-deaf people has always been a challenging issue. Sign language usage has increased due to inclusion policies and general public agreement, which must then become evident in information technologies, in the many facets that comprise sign language understanding and its computational treatment. In this study, we conduct a thorough systematic mapping of translation-enabling technologies for sign languages. This mapping has considered the most recommended guidelines for systematic reviews, i.e., those pertaining to software engineering, since there is a need to account for the interdisciplinary areas of accessibility, human-computer interaction, natural language processing, and education, all of them part of the ACM (Association for Computing Machinery) computing classification system and directly related to software engineering. An ongoing development of a software tool called SYMPLE (SYstematic Mapping and Parallel Loading Engine) facilitated the querying and construction of a base set of candidate studies. A great diversity of topics has been studied over the last 25 years or so, but this systematic mapping allows for comfortable visualization of predominant areas, venues, top authors, and different measures of concentration and dispersion. The systematic review clearly shows a large number of classifications and subclassifications interspersed over time. This is an area of study in which there is much interest, with a basically steady level of scientific publications over the last decade, concentrated mainly in the European continent. The publications by country, nevertheless, usually favor their local sign language.

    The authors thank the School of Computing and the Computer Research Center of the Technological Institute of Costa Rica for the financial support, as well as CONICIT (Consejo Nacional para Investigaciones Científicas y Tecnológicas), Costa Rica, under grant 290-2006. This work was partly supported by the Spanish Ministry of Science, Innovation, and Universities through the Project ECLIPSE-UA under Grant RTI2018-094283-B-C32 and the Project INTEGER under Grant RTI2018-094649-B-I00, and partly by the Conselleria de Educación, Investigación, Cultura y Deporte of the Community of Valencia, Spain, within the Project PROMETEO/2018/089

    CercleS 2022

    Get PDF
    CHAIRPERSON: Manuel Moreira da Silva, Instituto Politécnico do Porto, Portugal

    EDITORS: Ana Gonçalves, Estoril Higher Institute for Tourism and Hotel Studies, Portugal; Célia Tavares, Instituto Politécnico do Porto, Portugal; Joaquim Guerra, Universidade do Algarve, Portugal; Luciana Oliveira, Instituto Politécnico do Porto, Portugal; Manuel Moreira da Silva, Instituto Politécnico do Porto, Portugal; Ricardo Soares, Instituto Politécnico do Porto, Portugal

    CercleS 2022: The Future of Language Education in an Increasingly Digital World: Embracing Change

    Machine learning approaches to video activity recognition: from computer vision to signal processing

    Get PDF
    244 p.

    The research presented focuses on classification techniques for two different, though related, tasks, such that the second can be considered part of the first: human action recognition in videos and sign language recognition.

    In the first part, the starting hypothesis is that transforming the signals of a video with the Common Spatial Patterns (CSP) algorithm, commonly used in electroencephalography systems, can produce new features that are useful for the subsequent classification of the videos with supervised classifiers. Different experiments have been carried out on several databases, including one created during this research from the point of view of a humanoid robot, with the intention of deploying the developed recognition system to improve human-robot interaction.

    In the second part, the techniques developed earlier have been applied to sign language recognition; in addition, a method based on the decomposition of signs is proposed to recognize them, adding the possibility of better explainability. The final goal is to develop a sign language tutor capable of guiding users through the learning process, making them aware of the errors they make and the reason for those errors
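    The CSP algorithm mentioned above finds spatial filters that maximize signal variance for one class while minimizing it for the other, so per-filter log-variance can serve as a classification feature. The following is a minimal NumPy sketch of the standard eigendecomposition formulation, with invented toy data; the thesis applies this idea to video-derived signals rather than EEG channels:

    ```python
    import numpy as np

    def csp_filters(class_a, class_b, n_filters=1):
        """Common Spatial Patterns sketch. class_a / class_b are lists of
        (channels x samples) trials; returns 2*n_filters spatial filters
        (columns), the most discriminative for each class."""
        def avg_cov(trials):
            # Trace-normalized average covariance across trials.
            covs = [t @ t.T / np.trace(t @ t.T) for t in trials]
            return np.mean(covs, axis=0)

        Ca, Cb = avg_cov(class_a), avg_cov(class_b)
        # Generalized eigendecomposition of Ca with respect to Ca + Cb.
        evals, evecs = np.linalg.eig(np.linalg.inv(Ca + Cb) @ Ca)
        order = np.argsort(np.real(evals))[::-1]  # descending eigenvalue
        W = np.real(evecs[:, order])
        # Keep the extreme filters: high variance for class A first,
        # high variance for class B last.
        return np.hstack([W[:, :n_filters], W[:, -n_filters:]])

    def csp_features(trial, W):
        """Normalized log-variance of the spatially filtered trial."""
        z = W.T @ trial
        v = np.var(z, axis=1)
        return np.log(v / v.sum())
    ```

    A supervised classifier (e.g. an SVM) would then be trained on these log-variance feature vectors, one per video or trial.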