4,142 research outputs found

    DeepASL: Enabling Ubiquitous and Non-Intrusive Word and Sentence-Level Sign Language Translation

    There is an undeniable communication barrier between deaf people and people with normal hearing ability. Although innovations in sign language translation technology aim to tear down this communication barrier, the majority of existing sign language translation systems are either intrusive or constrained by resolution or ambient lighting conditions. Moreover, these existing systems can only perform single-sign ASL translation rather than sentence-level translation, making them much less useful in daily-life communication scenarios. In this work, we fill this critical gap by presenting DeepASL, a transformative deep learning-based sign language translation technology that enables ubiquitous and non-intrusive American Sign Language (ASL) translation at both word and sentence levels. DeepASL uses infrared light as its sensing mechanism to non-intrusively capture ASL signs. It incorporates a novel hierarchical bidirectional deep recurrent neural network (HB-RNN) and a probabilistic framework based on Connectionist Temporal Classification (CTC) for word-level and sentence-level ASL translation, respectively. To evaluate its performance, we collected 7,306 samples from 11 participants, covering 56 commonly used ASL words and 100 ASL sentences. DeepASL achieves an average word-level translation accuracy of 94.5% and an average word error rate of 8.2% on translating unseen ASL sentences. Given its promising performance, we believe DeepASL represents a significant step towards breaking the communication barrier between deaf people and the hearing majority, and thus has significant potential to fundamentally change deaf people's lives.
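
    As a rough illustration of the architecture sketched in this abstract, the snippet below builds a small hierarchical bidirectional RNN in PyTorch: one bidirectional encoder per hand, a fusion layer on top, a per-frame classifier for word-level prediction, and a CTC loss for sentence-level training. This is not the authors' implementation; all layer sizes, feature dimensions, and the blank index are assumptions for illustration.

        # Rough sketch only: a small hierarchical bidirectional RNN with a CTC head.
        # Layer sizes, feature dimensions and the blank index are assumptions.
        import torch
        import torch.nn as nn

        class HierarchicalBiRNN(nn.Module):
            def __init__(self, feat_dim=63, hidden=64, num_classes=57):   # 56 words + CTC blank (assumed)
                super().__init__()
                # Low level: one bidirectional encoder per hand.
                self.left = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
                self.right = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
                # High level: fuse both hands over time, then classify each frame.
                self.fuse = nn.LSTM(4 * hidden, hidden, batch_first=True, bidirectional=True)
                self.classify = nn.Linear(2 * hidden, num_classes)

            def forward(self, left_feats, right_feats):
                l, _ = self.left(left_feats)                    # (batch, time, 2*hidden)
                r, _ = self.right(right_feats)
                h, _ = self.fuse(torch.cat([l, r], dim=-1))
                return self.classify(h)                         # per-frame class scores

        model = HierarchicalBiRNN()
        left = torch.randn(2, 80, 63)                           # 2 sequences, 80 frames, 63 features per hand
        right = torch.randn(2, 80, 63)
        logits = model(left, right)                             # (2, 80, 57)

        # Word level: pool the frame scores over time and take the arg-max class.
        word_pred = logits.mean(dim=1).argmax(dim=-1)

        # Sentence level: CTC loss over the frame-wise distributions (blank index 0 here).
        log_probs = logits.log_softmax(-1).transpose(0, 1)      # (time, batch, classes)
        targets = torch.tensor([3, 7, 12, 5, 9, 1])             # two label sequences, concatenated
        loss = nn.CTCLoss(blank=0)(log_probs, targets,
                                   input_lengths=torch.tensor([80, 80]),
                                   target_lengths=torch.tensor([3, 3]))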

    Roadmap on 3D integral imaging: Sensing, processing, and display

    This Roadmap article on three-dimensional integral imaging provides an overview of research activities in the field. It discusses various aspects of the field, including sensing of 3D scenes, processing of captured information, and 3D display and visualization of information. The paper consists of a series of 15 sections in which experts present various aspects of the field: sensing, processing, displays, augmented reality, microscopy, object recognition, and other applications. Each section represents its author's view of the progress, potential, and challenging issues in this field.

    Learning Anatomy for Pre Schools Via Kinect Technology

    In this project, we discuss the development and implementation of a Kinect application for teaching body parts to pre-schoolers. The objectives are to introduce a new form of learning to children, to explore the use of Kinect-based technology for science subjects in schools, to develop an interactive Kinect system for the biology subject, and to evaluate children's reaction to and acceptance of this technology in learning. The project was chosen to improve the current method of learning in schools: traditional learning styles are usually boring and linear, which causes some students to lose interest in the subject, so teachers constantly need ways to attract students' attention and gain their interest. This is where my project comes in. We will develop software that uses Kinect technology to make learning more fun. Since we have only about three months to develop this project, the methodology chosen for the system development is throw-away prototyping; it is fast and helps give a clearer view of what the final product will look like. In the course of development, a few problems were encountered. One of them is that it was not possible to use 3D models within the time frame, so pictures or images are used instead. The final prototype has three main functions: the head, where students can learn about the parts of their head and face; the body, where students can learn the names of the parts of the body; and the extras, where the pre-schoolers can have some fun.
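
    The abstract does not describe the implementation, but as a rough sketch of how such a Kinect-based body-part quiz could work, the snippet below maps tracked joint positions to labelled body parts. The get_joints function is a hypothetical stand-in for a real skeleton-tracking SDK, and the part names, coordinates, and touch radius are assumptions.

        # Illustrative sketch only: get_joints() is a hypothetical stand-in for a real
        # skeleton-tracking SDK; part names, coordinates and the touch radius are assumptions.
        import math

        BODY_PARTS = {                     # tracked joint -> label shown to the pre-schooler
            "head": "Head",
            "shoulder_left": "Left shoulder",
            "hand_right": "Right hand",
            "knee_left": "Left knee",
        }

        def get_joints():
            """Placeholder for the sensor: joint name -> (x, y) normalized screen coordinates."""
            return {"head": (0.50, 0.10), "shoulder_left": (0.40, 0.25),
                    "hand_right": (0.70, 0.55), "knee_left": (0.45, 0.80)}

        def nearest_part(cursor, joints, radius=0.08):
            """Return the body part whose joint lies closest to the child's hand cursor."""
            best, best_dist = None, radius
            for name, (x, y) in joints.items():
                d = math.hypot(cursor[0] - x, cursor[1] - y)
                if d < best_dist:
                    best, best_dist = name, d
            return best

        joints = get_joints()
        cursor = (0.52, 0.12)              # the tracked hand is hovering near the head
        part = nearest_part(cursor, joints)
        if part:
            print("You touched:", BODY_PARTS[part])   # the app would show the matching picture here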

    T.A.C: Augmented Reality System for Collaborative Tele-Assistance in the Field of Maintenance through Internet.

    ISBN: 978-1-60558-825-4
    In this paper we present the T.A.C. (Télé-Assistance-Collaborative) system, whose aim is to combine remote collaboration and industrial maintenance. T.A.C. enables the co-presence of parties in a supervised maintenance task to be remotely "simulated" thanks to augmented reality (AR) and audio-video communication. To support such cooperation, we propose a simple way of interacting through our O.A.P. paradigm and AR goggles specially developed for the occasion. The handling of 3D items to reproduce gestures and an additional knowledge management tool (e-portfolio, feedback, etc.) also enable this solution to satisfy the new needs of industry.
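
    As a purely illustrative sketch (not from the paper), the snippet below shows one way such a collaborative session could serialize the expert's actions, an annotation on the shared view and the manipulation of a virtual 3D item, into timestamped messages for the operator's AR goggles; all field names and message types are assumptions.

        # Sketch only: field names and message types below are assumptions for illustration.
        import json
        import time

        def make_message(msg_type, payload):
            """Wrap an action in a timestamped envelope for the shared AR/audio-video session."""
            return json.dumps({"type": msg_type, "timestamp": time.time(), "payload": payload})

        # The expert circles a component in the operator's camera view and adds a hint.
        annotation = make_message("annotate", {"shape": "circle", "x": 0.42, "y": 0.61,
                                               "radius": 0.05, "label": "loosen this bolt first"})

        # The expert rotates a virtual 3D item to demonstrate the gesture to reproduce.
        gesture = make_message("move_3d_item", {"item_id": "pump_cover",
                                                "rotation_deg": {"x": 0, "y": 90, "z": 0}})

        for raw in (annotation, gesture):
            msg = json.loads(raw)          # operator side: decode and render in the AR goggles
            print(msg["type"], "->", msg["payload"])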