43,530 research outputs found

    DeepASL: Enabling Ubiquitous and Non-Intrusive Word and Sentence-Level Sign Language Translation

    Full text link
    There is an undeniable communication barrier between deaf people and people with normal hearing ability. Although innovations in sign language translation technology aim to tear down this communication barrier, the majority of existing sign language translation systems are either intrusive or constrained by resolution or ambient lighting conditions. Moreover, these existing systems can only perform single-sign ASL translation rather than sentence-level translation, making them much less useful in daily-life communication scenarios. In this work, we fill this critical gap by presenting DeepASL, a transformative deep learning-based sign language translation technology that enables ubiquitous and non-intrusive American Sign Language (ASL) translation at both word and sentence levels. DeepASL uses infrared light as its sensing mechanism to non-intrusively capture ASL signs. It incorporates a novel hierarchical bidirectional deep recurrent neural network (HB-RNN) and a probabilistic framework based on Connectionist Temporal Classification (CTC) for word-level and sentence-level ASL translation, respectively. To evaluate its performance, we have collected 7,306 samples from 11 participants, covering 56 commonly used ASL words and 100 ASL sentences. DeepASL achieves an average 94.5% word-level translation accuracy and an average 8.2% word error rate on translating unseen ASL sentences. Given its promising performance, we believe DeepASL represents a significant step towards breaking the communication barrier between deaf people and the hearing majority, and thus has significant potential to fundamentally change deaf people's lives.
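    The abstract names a hierarchical bidirectional RNN for word-level recognition and a CTC-based framework for sentence-level translation. Below is a minimal sketch of the sentence-level idea in PyTorch, assuming per-frame skeletal hand features and a 56-word vocabulary plus a CTC blank; the single bidirectional GRU encoder and the layer sizes are illustrative assumptions, not the paper's exact HB-RNN architecture.

```python
# Hedged sketch: bidirectional RNN + CTC for sentence-level sign recognition.
# Shapes, sizes, and the single-GRU encoder are illustrative assumptions,
# not the exact HB-RNN described in the DeepASL paper.
import torch
import torch.nn as nn

class SignSentenceModel(nn.Module):
    def __init__(self, input_dim=60, hidden_dim=128, num_words=56):
        super().__init__()
        # Bidirectional encoder over per-frame skeletal features.
        self.encoder = nn.GRU(input_dim, hidden_dim, num_layers=2,
                              bidirectional=True, batch_first=True)
        # +1 output class for the CTC blank symbol (index 0).
        self.classifier = nn.Linear(2 * hidden_dim, num_words + 1)

    def forward(self, frames):                  # frames: (batch, time, input_dim)
        feats, _ = self.encoder(frames)         # (batch, time, 2*hidden_dim)
        logits = self.classifier(feats)         # (batch, time, num_words + 1)
        return logits.log_softmax(dim=-1)

model = SignSentenceModel()
ctc_loss = nn.CTCLoss(blank=0)

frames = torch.randn(4, 120, 60)                # 4 sequences of 120 frames
log_probs = model(frames).transpose(0, 1)       # CTCLoss expects (time, batch, classes)
targets = torch.randint(1, 57, (4, 6))          # 4 sentences of 6 word labels (1..56)
input_lengths = torch.full((4,), 120, dtype=torch.long)
target_lengths = torch.full((4,), 6, dtype=torch.long)

loss = ctc_loss(log_probs, targets, input_lengths, target_lengths)
loss.backward()
```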

    A mobile Deaf-to-hearing communication aid for medical diagnosis

    Get PDF
    This paper describes how a Deaf-to-hearing communication aid built for a mobile phone can be used to provide semi-synchronous communication between a Deaf person and a hearing person who cannot sign. Deaf people with access to mobile phones have become accustomed to using Short Messaging Services to communicate with both hearing and Deaf people. However, most Deaf people have basic literacy levels and hence prefer to communicate not with text but with South African Sign Language. The prototype uses interpreted communication between sign language and English. The mock-up is meant to help a Deaf person convey their medical conditions to a doctor face-to-face in the office. The prototype uses prerecorded sign language videos for the Deaf person and English text for the hearing doctor. The interaction on the mobile phone takes place inside the phone's browser using video streaming, instead of playing the video in a third-party media player. The design goal was to bring the system from the computer-based prototype to a mobile phone. This paper looks at the background, related systems, methods, design and user testing of such a system on a mobile phone, using two prototypes: client-server and client-only.
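    The prototype described here streams pre-recorded sign language clips inside the phone's browser rather than handing playback to a third-party media player. Below is a minimal server-side sketch of that idea, assuming a Flask application and a hypothetical videos/ directory keyed by phrase; the paper's actual client-server prototype and phrase catalogue are not reproduced here.

```python
# Hedged sketch: serving pre-recorded sign language clips to a phone browser.
# The Flask server, the videos/ layout, and the phrase names are illustrative
# assumptions; they are not the paper's actual prototype.
from pathlib import Path
from flask import Flask, abort, send_file

app = Flask(__name__)
VIDEO_DIR = Path("videos")   # e.g. videos/where_does_it_hurt.mp4 (hypothetical)

@app.route("/sign/<phrase>")
def sign_clip(phrase):
    clip = VIDEO_DIR / f"{phrase}.mp4"
    if not clip.is_file():
        abort(404)
    # conditional=True lets the browser issue HTTP range requests,
    # so the clip streams inside the page instead of a separate player.
    return send_file(clip, mimetype="video/mp4", conditional=True)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```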

    Adaptation and Feasibility Study of a Digital Health Program to Prevent Diabetes among Low-Income Patients: Results from a Partnership between a Digital Health Company and an Academic Research Team.

    Get PDF
    Background. The feasibility of digital health programs to prevent and manage diabetes in low-income patients has not been adequately explored. Methods. Researchers collaborated with a digital health company to adapt a diabetes prevention program for low-income prediabetes patients at a large safety net clinic. We conducted focus groups to assess patient perspectives, revised lessons for improved readability and cultural relevance to low-income and Hispanic patients, conducted a feasibility study of the adapted program in English- and Spanish-speaking cohorts, and implemented real-time adaptations to the program for commercial use and for a larger trial in multiple safety net clinics. Results. The majority of focus group participants were receptive to the program. We modified the curriculum to a 5th-grade reading level and adapted content based on patient feedback. In the feasibility study, 54% of eligible contacted patients expressed interest in enrolling (n = 23). Although some participants' computer access and literacy made registration challenging, they were highly satisfied and engaged (80% logged in at least once per week). Conclusions. Underserved prediabetic patients displayed high engagement and satisfaction with a digital diabetes prevention program despite lower digital literacy skills. The collaboration between researchers and a digital health company enabled iterative improvements in technology implementation to address challenges in low-income populations.

    Appropriation of mobile cultural resources for learning

    Get PDF
    Copyright © 2010 IGI Global. This article proposes appropriation as the key to the recognition of mobile devices, as well as the artefacts accessed through and produced with them, as cultural resources across different cultural practices of use, in everyday life and formal education. The article analyses the interrelationship of users of mobile devices with the structures, agency and practices of, and in relation to, what the authors call the "mobile complex". Two examples are presented and some curricular options for the assimilation of mobile devices into settings of formal learning are discussed. Also, a typology of appropriation is presented that serves as an explanatory, analytical frame and starting point for a discussion about attendant issues.

    Issues of shaping the students’ professional and terminological competence in science area of expertise in the sustainable development era

    Get PDF
    The paper deals with the problem of the vocational preparation of future biology teachers and the shaping in them of capacities that contribute to the conservation and enhancement of our planet's biodiversity, as a reflection of the leading sustainable development goals of society. Such personality traits are viewed through the prism of forming the future biology teachers' professional and terminological competence. The main aspects and categories that characterize the professional and terminological competence of future biology teachers, including terminology, nomenclature, term, nomen and term element, have been explained. The criteria and stages of shaping the future biology teachers' professional and terminological competence during the vocational training process have been established. Methods, techniques, technologies, guiding principles and forms of staged work on forming students' active terminological dictionary have been described and specified. The content of the distance special course "Latin. Botanical Terminology", which prepares future teachers to study professional subjects and to understand international scientific terminology, has been presented. It is concluded that a proper level of formation of the future biology teachers' professional and terminological competence will eventually ensure the qualitative preparation of pupils for life in a sustainable development era.

    Android Flash Based Game for Hard Hearing Kids to Learn Malay Language through Cued Speech and Sign Language (MYKIU)

    Get PDF
    The purpose of this project is to build an Android application that serves as a complement to the conventional education system. It aims to make the learning environment of hard-of-hearing kids more interactive and portable. Therefore, the Android Flash Based Game for Hard Hearing Kids to Learn Malay Language through Cued Speech and Sign Language (MYKIU) was developed to assist hard-of-hearing kids in the process of learning to read. The application uses Cued Speech and Malay Sign Language as its learning approach. The advantage of MYKIU is that it acts as a complement to the traditional system, so hard-of-hearing kids are able to learn through a game-based approach even when they are not at school. In the Android market, applications developed using Malay Sign Language and Cued Speech do not yet exist; most applications use American Sign Language (ASL) and Cued Speech with English vocabulary. MYKIU was therefore developed to break this barrier. MYKIU uses Cued Speech and Malay Sign Language (MSL) with Malay vocabulary, and the application is specifically designed to assist hard-of-hearing kids in Malaysia. The scope of the study focuses on hard-of-hearing kids from 6 to 9 years old. MYKIU was developed using a phased development life cycle, with ActionScript 3 as the programming language, Adobe Flash CS5.5 and Adobe Photoshop Portable CS5. The MYKIU prototype was tested at Pusat Pertuturan Kiu, Kampung Pandan, where the author gathered 10 students aged 6 to 9 to test the prototype. From the testing, MYKIU received a good response when used by hard-of-hearing kids.

    Snap-n-Snack: a Food Image Recognition Application

    Get PDF
    Many people desire to be informed about the nutritional specifics of the food they consume. Current popular dietary tracking methods are too slow and tedious for many consumers because they require manual data entry for everything eaten. We propose a system that takes advantage of image recognition and the internal camera of Android phones to identify food based on a picture of a user's plate. Over the course of the last year, we trained an object detection model with images of different types of food, built a mobile application around it, and tested their integration and performance. We believe that our program meets the requirements we set out for it at its conception and delivers a simple, fast, and efficient way of tracking one's diet.
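    The abstract describes running a trained object detection model against a photo of the user's plate. Below is a minimal inference sketch, assuming a TensorFlow Lite detection model exported as food_detector.tflite with the usual boxes/classes/scores outputs; the file name, label list, and 300x300 input size are illustrative assumptions rather than the project's actual artifacts.

```python
# Hedged sketch: running a food-detection model on a plate photo.
# The model file, label list, and 300x300 input size are assumptions;
# the Snap-n-Snack app's actual model and pipeline may differ.
import numpy as np
import tensorflow as tf
from PIL import Image

LABELS = ["apple", "pizza", "salad"]            # hypothetical label list

interpreter = tf.lite.Interpreter(model_path="food_detector.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Resize the plate photo to the model's expected input shape.
image = Image.open("plate.jpg").resize((300, 300))
batch = np.expand_dims(np.asarray(image, dtype=np.uint8), axis=0)

interpreter.set_tensor(input_details[0]["index"], batch)
interpreter.invoke()

# Typical TFLite detection outputs: boxes, class indices, scores.
boxes = interpreter.get_tensor(output_details[0]["index"])[0]
classes = interpreter.get_tensor(output_details[1]["index"])[0]
scores = interpreter.get_tensor(output_details[2]["index"])[0]

for box, cls, score in zip(boxes, classes, scores):
    if score > 0.5:
        print(f"{LABELS[int(cls)]}: {score:.2f} at {box}")
```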

    An Assistive Technology Framework for Communication with Hearing Impaired Persons

    Get PDF
    This paper presents a novel assistive technology framework which provides an interface to support communication between a hearing-impaired person and an ordinary person over the mobile phone. It converts the ordinary person's voice to text, and then the text to tactile feedback at the hearing-impaired person's end. Morse code tactile feedback has been identified as the most appropriate method for providing the tactile feedback at the hearing-impaired person's end, since it is a standard code that helps persons with impairments. The work addresses the challenge of using a set of Morse code shorthand vibration patterns to translate the whole text message into tactile feedback, providing simple, efficient and synchronous communication, rather than vibrating each and every character in the text using Morse code characters. The user evaluation found that most hearing-impaired persons' preferred method of conversation is Morse code shorthand forms of two or three characters rather than reading the entire text message. Because a hearing-impaired person's voice can lack clarity, the study converts the hearing-impaired person's voice to text and sends it to the ordinary person synchronously as a voice reply. The results of the evaluation experiment show that the assistive technology framework improves the quality of communication of hearing-impaired persons over a mobile device.
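    The core translation step described above maps a text message (or a short shorthand form) to Morse code and then to a pattern of vibration on/off durations. Below is a minimal sketch of that mapping; the shorthand table, the 100 ms time unit, and the vibrate/pause pattern format are illustrative assumptions, not the paper's exact encoding.

```python
# Hedged sketch: text -> Morse code -> vibration on/off pattern (milliseconds).
# The shorthand table and timing units are illustrative assumptions; the
# paper's exact shorthand forms and durations are not reproduced here.
MORSE = {
    "a": ".-", "b": "-...", "c": "-.-.", "d": "-..", "e": ".", "f": "..-.",
    "g": "--.", "h": "....", "i": "..", "j": ".---", "k": "-.-", "l": ".-..",
    "m": "--", "n": "-.", "o": "---", "p": ".--.", "q": "--.-", "r": ".-.",
    "s": "...", "t": "-", "u": "..-", "v": "...-", "w": ".--", "x": "-..-",
    "y": "-.--", "z": "--..",
}

# Hypothetical two/three-letter shorthand forms standing in for whole messages.
SHORTHAND = {"how are you": "hru", "okay": "ok", "thank you": "ty"}

UNIT_MS = 100  # one Morse time unit; dot = 1 unit, dash = 3, gaps = 1/3/7 units

def to_vibration_pattern(message):
    """Return alternating [vibrate, pause, vibrate, ...] durations in ms."""
    text = SHORTHAND.get(message.lower(), message.lower())
    pattern = []
    for word in text.split():
        for char in word:
            for symbol in MORSE.get(char, ""):
                pattern.append(UNIT_MS if symbol == "." else 3 * UNIT_MS)  # vibrate
                pattern.append(UNIT_MS)                                    # intra-letter gap
            if pattern:
                pattern[-1] = 3 * UNIT_MS                                  # inter-letter gap
        if pattern:
            pattern[-1] = 7 * UNIT_MS                                      # inter-word gap
    return pattern

print(to_vibration_pattern("how are you"))  # shorthand "hru" -> short tactile pattern
```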