
    Real-Time Sign Language Translator

    Sign language is widely used by people with hearing or speech impairments, but most other people are unfamiliar with it. As a result, there is a significant communication gap between people with speech and hearing impairments and the rest of the general population. A human sign language interpreter is a common way to bridge this gap. However, because the number of sign language interpreters is small compared to the number of deaf and mute people in the world, and because not everyone can afford a human interpreter for everyday communication, this translation must be automated so that the deaf-mute community does not depend on human interpreters. This paper focuses on developing a system that can translate American Sign Language (ASL) into words/sentences and vice versa in real time, with additional features that help remove the communication barrier between hearing people and people with hearing or speech impairments. The main function is to detect and identify sign language performed by the user. The system is first trained to detect and identify signs using object detection and motion-tracking techniques. A convolutional neural network model was trained on a manually created ASL data set. The identified signs are then translated into English to form grammatically correct sentences. A text-to-text transformer built on an encoder-decoder architecture is used to detect grammatical errors and produce corrected sentences. To further improve effectiveness, the system incorporates a feature to translate an image containing English text into American Sign Language. Furthermore, the system includes a voice-to-sign-language translator and a virtual sign keyboard. The methodology is explained in the following sections.
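
    To make the classification stage concrete, here is a minimal sketch of a CNN sign classifier in Keras. The layer sizes, the 64x64 input images, and the 26 static-letter classes are illustrative assumptions, not the paper's actual configuration; the full system additionally uses motion tracking and a text-to-text transformer for grammar correction.

```python
# Minimal sketch of the sign-classification stage, assuming a manually
# collected dataset of fixed-size ASL sign images. Layer sizes, image
# dimensions, and class count are illustrative, not the paper's values.
import tensorflow as tf

NUM_CLASSES = 26          # assumption: one class per static ASL letter
IMG_SHAPE = (64, 64, 3)   # assumption: cropped hand-region images

def build_sign_classifier() -> tf.keras.Model:
    """A small CNN that maps a hand image to a sign label."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=IMG_SHAPE),
        tf.keras.layers.Rescaling(1.0 / 255),         # normalize pixel values
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dropout(0.5),                 # regularize the small dataset
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_sign_classifier()
model.summary()
```

    In the described pipeline, the labels predicted by such a model would be buffered into a token sequence and handed to the encoder-decoder grammar model for sentence correction.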

    COMPUTER ASSISTED COMMUNICATION FOR THE HEARING IMPAIRED FOR AN EMERGENCY ROOM SCENARIO

    While there has been research on computerized communication facilities for people with hearing impairment, issues remain. Current approaches use an avatar-based approach, which cannot adequately render facial expressions, an integral part of communication in American Sign Language (ASL). Additionally, there is a lack of research into integrating a system that facilitates communication with the hearing impaired into a clinical environment, namely an emergency room admission scenario. This research aims to determine whether an alternative approach, using videos of ASL signed by humans, can overcome the understandability barrier and still be usable in the communication process.

    Specifying Logic Programs in Controlled Natural Language

    Writing specifications for computer programs is not easy, since one has to take into account the disparate conceptual worlds of the application domain and of software development. To bridge this conceptual gap we propose controlled natural language as a declarative and application-specific specification language. Controlled natural language is a subset of natural language that can be accurately and efficiently processed by a computer, but is expressive enough to allow natural usage by non-specialists. Specifications in controlled natural language are automatically translated into Prolog clauses, and hence become formal and executable. The translation uses a definite clause grammar (DCG) enhanced by feature structures. Inter-text references of the specification, e.g. anaphora, are resolved with the help of discourse representation theory (DRT). The generated Prolog clauses are added to a knowledge base. We have implemented a prototypical specification system that successfully processes the specification of a simple automated teller machine.
    Comment: 16 pages, compressed, uuencoded PostScript; published in Proceedings of CLNLP 95, COMPULOGNET/ELSNET/EAGLES Workshop on Computational Logic for Natural Language Processing, Edinburgh, April 3-5, 1995
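
    To illustrate the core idea of translating a controlled-natural-language sentence into an executable Prolog clause, the toy Python sketch below handles a single sentence pattern. The paper itself uses a Prolog DCG with feature structures and DRT; this stand-in pattern matcher, including the rule shape and the Skolem-style f_<noun> term, is purely an illustrative assumption.

```python
# Toy illustration of translating a controlled-natural-language sentence
# into a Prolog clause. The paper uses a Prolog DCG with feature
# structures and DRT; this pattern matcher handles only the single
# pattern "every X Vs a Y" and is purely illustrative.
import re

RULE = re.compile(r"^every (\w+) (\w+)s an? (\w+)\.$")

def cnl_to_prolog(sentence: str) -> str:
    """Translate e.g. 'Every customer owns a card.' into a Prolog clause."""
    match = RULE.match(sentence.strip().lower())
    if not match:
        raise ValueError(f"sentence outside the controlled subset: {sentence!r}")
    noun, verb, obj = match.groups()
    # Universally quantified subject, existential object rendered as a
    # Skolem-style term: owns(X, f_card(X)) :- customer(X).
    return f"{verb}s(X, f_{obj}(X)) :- {noun}(X)."

print(cnl_to_prolog("Every customer owns a card."))
# => owns(X, f_card(X)) :- customer(X).
```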

    Full Issue


    Inclusive AR-games for Education of Deaf Children: Challenges and Opportunities

    Game-based learning has developed rapidly in the 21st century, attracting an ever-growing audience. However, inclusion of all is still not a reality in society, and accessibility for deaf and hard-of-hearing children remains a challenge. Being excluded from learning because of communication barriers can have severe consequences for further studies and work. Previous research suggests that Augmented Reality (AR) games can be joyful learning tools that include activities with different sign languages, but AR-based learning games for deaf and hard-of-hearing learners are under-researched. This paper aims to present opportunities and challenges in designing inclusive AR games for the education of deaf children. The method was a scoping review of previous studies on AR for deaf people; experts were involved as co-authors for an in-depth understanding of sign languages and the challenges deaf people face. A set of AR input and output techniques was analysed for appropriateness, and various AR-based game mechanics were compared. Results indicate that inclusive AR gameplay for deaf people could be built on AR-based image and object tracking, complemented with sign recognition. These technologies provide input from the user and the real-world environment, typically via the camera, to the app. Scene tracking and GPS can be used for location-based game mechanics (a minimal sketch of such a mechanic follows below). Output to the user is ideally done via locally stored signed videos, but also with images and animations. Moreover, a civic intelligence approach can be applied to overcome many of the identified challenges across five dimensions of inclusion for deaf people: cultural, educational, psycho-social, semantic, and multimodal. Input from trusted, educated signers and teachers can enable the connection between real-world objects and signed videos that explain concepts. The conclusion is that the development of an inclusive, multi-language AR game for deaf people needs to be carried out as an international collaboration addressing all five dimensions.
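
    As one hedged illustration of the GPS-driven game mechanic mentioned above, the Python sketch below selects a locally stored signed video when the player comes within a fixed radius of a point of interest. The class names, the 25 m radius, and the video paths are assumptions for illustration only.

```python
# Hypothetical location-based game mechanic: when the player's GPS
# position comes within a radius of a point of interest, the game plays
# the locally stored signed video explaining that object. Names, the
# 25 m radius, and the video paths are illustrative assumptions.
import math
from dataclasses import dataclass

@dataclass
class PointOfInterest:
    name: str
    lat: float
    lon: float
    signed_video: str  # path to a pre-recorded signed explanation

def distance_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle (haversine) distance between two WGS84 points, in metres."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def triggered_videos(player_lat, player_lon, pois, radius_m=25.0):
    """Return signed videos for every POI the player has walked into."""
    return [poi.signed_video for poi in pois
            if distance_m(player_lat, player_lon, poi.lat, poi.lon) <= radius_m]

pois = [PointOfInterest("library", 59.3293, 18.0686, "videos/library.mp4")]
print(triggered_videos(59.3294, 18.0687, pois))  # player standing nearby
```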

    A Novel Machine Learning Based Two-Way Communication System for Deaf and Mute

    by Muhammad Imran Saleem 1,2,*, Atif Siddiqui 3, Shaheena Noor 4, Miguel-Angel Luque-Nieto 1,2 and Pablo Otero 1,2
    1 Telecommunications Engineering School, University of Malaga, 29010 Malaga, Spain
    2 Institute of Oceanic Engineering Research, University of Malaga, 29010 Malaga, Spain
    3 Airbus Defence and Space, UK
    4 Department of Computer Engineering, Faculty of Engineering, Sir Syed University of Engineering and Technology, Karachi 75300, Pakistan
    * Author to whom correspondence should be addressed.
    Appl. Sci. 2023, 13(1), 453; https://doi.org/10.3390/app13010453
    Received: 12 November 2022 / Revised: 22 December 2022 / Accepted: 26 December 2022 / Published: 29 December 2022
    Abstract: Deaf and mute people are an integral part of society, and it is particularly important to provide them with a platform through which they can communicate without any training or learning. These people rely on sign language, but effective communication requires that others understand sign language as well. Learning sign language is a challenge for those with no impairment. A further challenge is to build a system that supports hand gestures from multiple languages. In this manuscript, a system is presented that provides communication between deaf and mute (DnM) and non-deaf and mute (NDnM) people. The hand gestures of DnM people are acquired and processed using deep learning, and multiple-language support is achieved using supervised machine learning. NDnM people are provided with an audio interface where the hand gestures are converted into speech and played through the sound card of the computer. Speech from NDnM people is acquired using a microphone and converted into text. The system is easy to use and low cost. (...) This research has been partially funded by Universidad de Málaga, Málaga, Spain.
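
    As a rough illustration of the two audio legs described in the abstract, the Python sketch below pairs the off-the-shelf speech_recognition and pyttsx3 packages. These are stand-ins, not the authors' implementation, and gesture_to_text() is a hypothetical placeholder for the deep-learning gesture classifier.

```python
# Sketch of the two audio legs described in the abstract, using the
# off-the-shelf speech_recognition and pyttsx3 packages as stand-ins;
# the paper's own speech components and the gesture classifier are not
# public, so gesture_to_text() below is a hypothetical placeholder.
import speech_recognition as sr
import pyttsx3

def speech_to_text() -> str:
    """NDnM -> DnM leg: capture microphone input and transcribe it."""
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)
        audio = recognizer.listen(source)
    return recognizer.recognize_google(audio)  # any recognizer backend works

def text_to_speech(text: str) -> None:
    """DnM -> NDnM leg: speak the text recognized from hand gestures."""
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()

def gesture_to_text() -> str:
    """Hypothetical placeholder for the deep-learning gesture classifier."""
    return "hello, how are you"

if __name__ == "__main__":
    text_to_speech(gesture_to_text())   # gestures go out as audio
    print(speech_to_text())             # speech comes in as text for the DnM user
```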