
    Methodology for developing an advanced communications system for the Deaf in a new domain

    A methodology for developing an advanced communications system for the Deaf in a new domain is presented in this paper. The methodology is a user-centred design approach consisting of four main steps: requirement analysis, parallel corpus generation, technology adaptation to the new domain and, finally, system evaluation. During the requirement analysis, both user and technical requirements are evaluated and defined. To generate the parallel corpus, Spanish sentences are collected in the new domain and translated into LSE (Lengua de Signos Española: Spanish Sign Language), represented both as glosses and as video recordings. This corpus is used to adapt the two main modules of the advanced communications system to the new domain: the spoken-Spanish-to-LSE translation module and the Spanish generation from LSE module. The main resources to be generated are the vocabularies of both languages (Spanish words and signs) and the knowledge for translating in both directions. Finally, a field evaluation is carried out in which deaf people use the system to interact with hearing people in several scenarios; for this evaluation, the paper proposes several objective and subjective performance measurements. The new domain considered here is dialogues at a hotel reception. Using this methodology, the system was developed in several months and achieved very good performance: good translation rates (10% Sign Error Rate) with small processing times, allowing face-to-face dialogues.
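
    The 10% Sign Error Rate reported above is, like word error rate, conventionally computed as the edit distance between the reference and hypothesis gloss sequences divided by the reference length. The abstract does not spell out the computation, so the following is a minimal Python sketch, with illustrative glosses that are not from the paper:

        # Sign Error Rate (SER): Levenshtein distance between reference and
        # hypothesis gloss sequences, divided by the reference length.
        def sign_error_rate(reference, hypothesis):
            n, m = len(reference), len(hypothesis)
            # d[i][j] = edit distance between reference[:i] and hypothesis[:j]
            d = [[0] * (m + 1) for _ in range(n + 1)]
            for i in range(n + 1):
                d[i][0] = i
            for j in range(m + 1):
                d[0][j] = j
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
                    d[i][j] = min(d[i - 1][j] + 1,         # deletion
                                  d[i][j - 1] + 1,         # insertion
                                  d[i - 1][j - 1] + cost)  # substitution
            return d[n][m] / max(n, 1)

        ref = ["HOTEL", "ROOM", "RESERVE", "WANT"]  # illustrative LSE glosses
        hyp = ["HOTEL", "ROOM", "WANT"]             # system output
        print(f"SER = {sign_error_rate(ref, hyp):.0%}")  # 25%: 1 deletion / 4 glosses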

    A framework for accessible m-government implementation

    The great popularity and rapid worldwide diffusion of mobile technologies has also been recognised by the public sector, leading to the creation of m-government. A major challenge for m-government is accessibility: the provision of an equal service to all citizens irrespective of their physical, mental or technical capabilities. This paper sketches the profiles of six citizen groups: the Visually Impaired, Hearing Impaired, Motor Impaired, Speech Impaired, Cognitively Impaired and Elderly. M-government examples that target these groups are discussed, and a framework for accessible m-government implementation with reference to the W3C Mobile Web Best Practices is proposed.

    DeepASL: Enabling Ubiquitous and Non-Intrusive Word and Sentence-Level Sign Language Translation

    There is an undeniable communication barrier between deaf people and people with normal hearing ability. Although innovations in sign language translation technology aim to tear down this communication barrier, the majority of existing sign language translation systems are either intrusive or constrained by resolution or ambient lighting conditions. Moreover, these existing systems can only perform single-sign ASL translation rather than sentence-level translation, making them much less useful in daily-life communication scenarios. In this work, we fill this critical gap by presenting DeepASL, a transformative deep learning-based sign language translation technology that enables ubiquitous and non-intrusive American Sign Language (ASL) translation at both word and sentence levels. DeepASL uses infrared light as its sensing mechanism to non-intrusively capture ASL signs. It incorporates a novel hierarchical bidirectional deep recurrent neural network (HB-RNN) for word-level ASL translation and a probabilistic framework based on Connectionist Temporal Classification (CTC) for sentence-level ASL translation. To evaluate its performance, we collected 7,306 samples from 11 participants, covering 56 commonly used ASL words and 100 ASL sentences. DeepASL achieves an average 94.5% word-level translation accuracy and an average 8.2% word error rate on translating unseen ASL sentences. Given this promising performance, we believe DeepASL represents a significant step towards breaking the communication barrier between deaf people and the hearing majority, and thus has significant potential to fundamentally change deaf people's lives.
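
    The sentence-level part of such a pipeline can be pictured as a bidirectional recurrent network that emits per-frame label probabilities, with CTC aligning them to the gloss sequence without frame-level annotation. The sketch below is not the paper's HB-RNN; the feature layout and all model dimensions are assumptions for illustration (PyTorch):

        import torch
        import torch.nn as nn

        NUM_WORDS = 56   # ASL word vocabulary size from the paper
        FEAT_DIM = 63    # e.g. 21 hand joints x 3D coordinates (assumed)

        class SentenceASL(nn.Module):
            def __init__(self):
                super().__init__()
                self.rnn = nn.LSTM(FEAT_DIM, 128, num_layers=2,
                                   bidirectional=True, batch_first=True)
                self.head = nn.Linear(2 * 128, NUM_WORDS + 1)  # +1 for CTC blank

            def forward(self, frames):               # (batch, time, FEAT_DIM)
                h, _ = self.rnn(frames)
                return self.head(h).log_softmax(-1)  # per-frame label log-probs

        model = SentenceASL()
        ctc = nn.CTCLoss(blank=NUM_WORDS)
        frames = torch.randn(2, 120, FEAT_DIM)         # two 120-frame sequences
        targets = torch.randint(0, NUM_WORDS, (2, 5))  # two 5-word gloss sequences
        log_probs = model(frames).transpose(0, 1)      # CTCLoss wants (time, batch, C)
        loss = ctc(log_probs, targets,
                   input_lengths=torch.full((2,), 120),
                   target_lengths=torch.full((2,), 5))
        loss.backward()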

    Artificial Intelligence for Sign Language Translation – A Design Science Research Study

    Although our digitalized society is able to foster social inclusion and integration, numerous communities still face unequal opportunities. This is also the case for deaf people. About 750,000 deaf people in the European Union alone, and over 4 million in the United States, face daily challenges in communication and participation, for example in leisure activities but, more importantly, in emergencies too. To provide equal environments and allow people with hearing impairments to communicate in their native language, this paper presents an AI-based sign language translator. We adopted a transformer neural network capable of analyzing over 500 data points from a person's gestures and face to translate sign language into text. We designed a machine learning pipeline that enables the translator to evolve, build new datasets, and train sign language recognition models. As a proof of concept, we instantiated a sign language interpreter for emergency calls covering over 200 phrases. The overall goal is to support people with hearing impairments by enabling them to participate in economic, social, political, and cultural life.
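
    Concretely, the described architecture can be approximated by a transformer encoder over per-frame keypoint features that pools over time and scores a fixed phrase set. The sketch below assumes the 500 data points arrive as a flattened per-frame vector; the model sizes are illustrative and positional encodings are omitted for brevity (PyTorch):

        import torch
        import torch.nn as nn

        NUM_KEYPOINTS = 500  # gesture and face data points per frame (from the paper)
        NUM_PHRASES = 200    # size of the emergency-call phrase set

        class SignTransformer(nn.Module):
            def __init__(self, d_model=256):
                super().__init__()
                self.embed = nn.Linear(NUM_KEYPOINTS, d_model)
                layer = nn.TransformerEncoderLayer(d_model, nhead=8,
                                                   batch_first=True)
                self.encoder = nn.TransformerEncoder(layer, num_layers=4)
                self.classify = nn.Linear(d_model, NUM_PHRASES)

            def forward(self, keypoints):            # (batch, frames, NUM_KEYPOINTS)
                h = self.encoder(self.embed(keypoints))
                return self.classify(h.mean(dim=1))  # pool over time, score phrases

        model = SignTransformer()
        clip = torch.randn(1, 60, NUM_KEYPOINTS)  # one 60-frame signing clip
        phrase_logits = model(clip)               # (1, NUM_PHRASES)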

    Suggested approach for establishing a rehabilitation engineering information service for the state of California

    An ever-expanding body of rehabilitation engineering technology is developing in this country, but it rarely reaches the people for whom it is intended. The growing concern of state and federal departments of rehabilitation about this technology lag was the stimulus for a series of problem-solving workshops held in California during 1977. From these workshops emerged the recommendation that the California Department of Rehabilitation take the lead in developing a coordinated delivery system that would eventually serve the entire state and act as a model for similar systems across the nation.

    ASL Champ!: A Virtual Reality Game with Deep-Learning Driven Sign Recognition

    We developed an American Sign Language (ASL) learning platform in a Virtual Reality (VR) environment to facilitate immersive interaction and real-time feedback for ASL learners. We describe the first game to use an interactive teaching style in which users learn from a fluent signing avatar, and the first implementation of ASL sign recognition using deep learning within a VR environment. Advanced motion-capture technology powers an expressive ASL teaching avatar within an immersive three-dimensional environment. The teacher demonstrates an ASL sign for an object, prompting the user to copy the sign. When the user signs, a third-party plugin executes the sign recognition process using a deep learning model. Depending on the accuracy of the user's sign production, the avatar repeats the sign or introduces a new one. We gathered a 3D VR ASL dataset from fifteen diverse participants to power the sign recognition model. The proposed deep learning model's training, validation, and test accuracies are 90.12%, 89.37%, and 86.66%, respectively. The functional prototype can teach sign language vocabulary and can be successfully adapted as an interactive ASL learning platform in VR.
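
    The teaching loop the abstract describes (demonstrate, recognize, then repeat or advance) can be sketched as follows; the stand-in recognizer, sign names and acceptance threshold are all hypothetical, not from the paper:

        import random

        VOCABULARY = ["APPLE", "BOOK", "CHAIR"]  # illustrative lesson signs
        THRESHOLD = 0.8                          # assumed acceptance cutoff

        def recognize(attempt):
            """Stand-in for the deep learning recognizer: (label, confidence)."""
            return attempt, random.uniform(0.5, 1.0)

        def lesson():
            for sign in VOCABULARY:
                print(f"Avatar demonstrates {sign}")
                while True:
                    label, confidence = recognize(sign)  # the user's copied sign
                    if label == sign and confidence >= THRESHOLD:
                        print(f"  accepted ({confidence:.2f}); next sign")
                        break
                    print(f"  low confidence ({confidence:.2f}); avatar repeats")

        lesson()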

    Application for Iraqi sign language translation on Android system

    Deaf people suffer from difficulty in social communication, especially those who were denied the blessing of hearing before acquiring spoken language and before learning to read and write. To employ mobile devices for the benefit of these people, their teachers and everyone in contact with them, this research aims to design an application for social communication and learning that translates Iraqi sign language into Arabic text and vice versa. Iraqi sign language was chosen because of the lack of applications in this field; to the best of our knowledge, the current research is the first of its kind in Iraq. The application is open source; words that are not found in the application database are handled by translating them into letters, spelled out alphabetically. The importance of the application lies in its being a means of communication and e-learning through Iraqi sign language, as well as of reading and writing in Arabic. It likewise serves as a means of social communication between deaf people and those with normal hearing. The application was developed in Java and tested with several deaf students at the Al-Amal Institute for Special Needs Care in Mosul, Iraq, where it was well comprehended and accepted.
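
    The letter-by-letter fallback for out-of-vocabulary words might look like the sketch below; the dictionary contents and clip names are illustrative, not taken from the application:

        # Look a word up in the sign database; if it is missing, spell it out
        # letter by letter using the Arabic manual alphabet.
        SIGN_DB = {"سلام": "sign_peace.mp4", "شكرا": "sign_thanks.mp4"}
        LETTER_DB = {ch: f"letter_{ch}.mp4" for ch in "ابتثجحخدذرزسشصضطظعغفقكلمنهوي"}

        def translate_word(word):
            """Return the sign clip for a word, or its alphabetical spelling."""
            if word in SIGN_DB:
                return [SIGN_DB[word]]
            return [LETTER_DB[ch] for ch in word if ch in LETTER_DB]

        print(translate_word("سلام"))  # known word: one sign clip
        print(translate_word("موصل"))  # unknown word: spelled letter by letter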

    ImpacT2 project: preliminary study 1: establishing the relationship between networked technology and attainment

    This report explored teaching practices, beliefs and teaching styles and their influence on the use and implementation of ICT by pupils. Additional factors explored included the value of school and LEA policies and teacher competence in the use of ICT in classroom settings. ImpaCT2 was a major longitudinal study (1999-2002) involving 60 schools in England. Its aims were to: identify the impact of networked technologies on the school and out-of-school environment; determine whether or not this impact affected the educational attainment of pupils aged 8-16 years (at Key Stages 2, 3 and 4); and provide information that would assist in the formation of national, local and school policies on the deployment of ICT.