
    Proceedings of the 2nd Computer Science Student Workshop: Microsoft Istanbul, Turkey, April 9, 2011


    RoboTalk - Prototyping a Humanoid Robot as Speech-to-Sign Language Translator

    Information science has mostly focused on sign language recognition. The current study instead examines whether humanoid robots might be fruitful avatars for sign language translation. After a review of research into sign language technologies, a survey of 50 deaf participants regarding their preferences for potential translation avatars reveals that humanoid robots represent a promising option. The authors also 3D-printed two arms of a humanoid robot, InMoov, with special joints for the index finger and thumb that provide additional degrees of freedom to express sign language. They programmed the robotic arms with German Sign Language and integrated them with a voice recognition system. This study thus provides insights into human–robot interaction in the context of sign language translation; it also contributes ideas for enhanced inclusion of deaf people into society.
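    The speech-to-sign pipeline described above (voice recognition feeding gesture playback on the robotic arms) can be illustrated with a minimal sketch. The gesture dictionary, the send_joint_angles servo interface, and the use of the SpeechRecognition package are assumptions for illustration, not details taken from the study.

```python
# Minimal sketch of a speech-to-sign pipeline, assuming a servo-driven arm
# and a small lookup table of pre-recorded German Sign Language gestures.
# The servo interface and gesture data below are hypothetical placeholders.
import time
import speech_recognition as sr  # third-party package: SpeechRecognition

# Hypothetical gesture store: word -> sequence of joint-angle frames (degrees).
SIGN_DICTIONARY = {
    "hallo": [
        {"thumb": 10, "index": 0, "middle": 0, "ring": 0, "pinky": 0, "wrist": 30},
        {"thumb": 10, "index": 0, "middle": 0, "ring": 0, "pinky": 0, "wrist": -30},
    ],
    "danke": [
        {"thumb": 45, "index": 20, "middle": 20, "ring": 20, "pinky": 20, "wrist": 0},
    ],
}

def send_joint_angles(frame):
    """Hypothetical servo interface; would write angles to the arm controller."""
    print("servo command:", frame)

def play_sign(word):
    """Replay the stored joint-angle sequence for one recognised word."""
    for frame in SIGN_DICTIONARY.get(word.lower(), []):
        send_joint_angles(frame)
        time.sleep(0.5)  # crude pacing between key frames

def listen_and_sign():
    """Capture one utterance, transcribe it, and sign any known words."""
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)
        audio = recognizer.listen(source)
    text = recognizer.recognize_google(audio, language="de-DE")
    for word in text.split():
        play_sign(word)

if __name__ == "__main__":
    listen_and_sign()
```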

    The eyes have it


    Sensory Communication

    Contains a table of contents for Section 2, an introduction, reports on nine research projects, and a list of publications. Supported by: National Institutes of Health Grants 5 R01 DC00117, 2 R01 DC00270, 1 P01 DC00361, 2 R01 DC00100, FV00428, 5 R01 DC00126, and 5 R29 DC0062; U.S. Air Force - Office of Scientific Research Grant AFOSR 90-200; U.S. Navy - Office of Naval Research Grant N00014-90-J-1935.

    Real-time Immersive human-computer interaction based on tracking and recognition of dynamic hand gestures

    With the fast development and ever-growing use of computer-based technologies, human-computer interaction (HCI) plays an increasingly pivotal role. In virtual reality (VR), HCI technologies provide not only a better understanding of three-dimensional shapes and spaces, but also sensory immersion and physical interaction. With hand-based HCI being a key modality for object manipulation and gesture-based communication, providing users with a natural, intuitive, effortless, precise, and real-time method for HCI based on dynamic hand gestures is challenging, due to the complexity of hand postures formed by multiple joints with high degrees of freedom, the speed of hand movements with highly variable trajectories and rapid direction changes, and the precision required for interaction between hands and objects in the virtual world.

    Presented in this thesis is the design and development of a novel real-time HCI system based on a unique combination of a pair of data gloves based on fibre-optic curvature sensors to acquire finger joint angles, a hybrid tracking system based on inertia and ultrasound to capture hand position and orientation, and a stereoscopic display system to provide immersive visual feedback. The potential and effectiveness of the proposed system are demonstrated through a number of applications, namely hand gesture based virtual object manipulation and visualisation, hand gesture based direct sign writing, and hand gesture based finger spelling. For virtual object manipulation and visualisation, the system is shown to allow a user to select, translate, rotate, scale, release and visualise virtual objects (presented using graphics and volume data) in three-dimensional space using natural hand gestures in real time.

    For direct sign writing, the system is shown to be able to display immediately the corresponding SignWriting symbols signed by a user using three different signing sequences and a range of complex hand gestures, which consist of various combinations of hand postures (with each finger open, half-bent, closed, adducted or abducted), eight hand orientations in horizontal/vertical planes, three palm-facing directions, and various hand movements (which can have eight directions in horizontal/vertical planes, and can be repetitive, straight/curved, clockwise or anti-clockwise). The development includes a special visual interface to give not only a stereoscopic view of hand gestures and movements, but also structured visual feedback for each stage of the signing sequence. An excellent basis is therefore formed to develop a full HCI based on all human gestures by integrating the proposed system with facial expression and body posture recognition methods. Furthermore, for finger spelling, the system is shown to be able to recognise five vowels signed by two hands using British Sign Language in real time.
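    As a rough illustration of how glove readings could drive posture recognition, the sketch below matches a vector of finger joint angles against stored posture templates with a nearest-neighbour rule. The template values, the 10-angle layout, and the threshold are illustrative assumptions; the thesis's actual recognition method is not reproduced here.

```python
# Illustrative nearest-neighbour matching of a glove's finger joint angles
# against stored posture templates. Angle layout and threshold are assumed.
import numpy as np

# Hypothetical templates: posture name -> 10 joint angles in degrees
# (two joints per finger: thumb, index, middle, ring, little).
POSTURE_TEMPLATES = {
    "open_hand":   np.array([10,  5,  5,  5,  5,  5,  5,  5,  5,  5], dtype=float),
    "fist":        np.array([80, 70, 90, 90, 90, 90, 90, 90, 90, 90], dtype=float),
    "point_index": np.array([80, 70,  5,  5, 90, 90, 90, 90, 90, 90], dtype=float),
}

def classify_posture(joint_angles, max_distance=60.0):
    """Return the closest template name, or None if nothing is close enough."""
    sample = np.asarray(joint_angles, dtype=float)
    best_name, best_dist = None, float("inf")
    for name, template in POSTURE_TEMPLATES.items():
        dist = np.linalg.norm(sample - template)  # Euclidean distance in angle space
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= max_distance else None

# Example: a reading close to the "point_index" template
reading = [78, 66, 8, 4, 88, 85, 87, 92, 89, 90]
print(classify_posture(reading))  # -> "point_index"
```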

    An Introspective Enquiry Mutually Emplacing Teacher and Non-literate Former Refugee Students in Pedagogical Landscapes

    Searching to discover what it is in the self that is brought to the teaching role, this study transports the teacher along a lived-experience trajectory. Assuming an interface stance as teacher-learner, researcher-interpreter, and narrator-writer to find insightful meaning, the inquiry glances back as it moves forward, emplacing the teacher with adult former refugee students in an empowering landscape that engenders self-learning and respect for diversity within a culture of education.

    Real-time New Zealand sign language translator using convolution neural network

    Over the past quarter of a century, machine learning has played an essential role in the information technology revolution. From predictive web browsing to autonomous vehicles, machine learning has become the heart of all intelligent applications in service today. Image classification through gesture recognition is a subfield that has benefited immensely from these machine learning methods. In particular, a subset of machine learning known as deep learning has exhibited impressive performance in this regard, outperforming conventional approaches such as image processing. Advanced deep learning architectures are built on artificial neural networks, particularly convolutional neural networks (CNNs). Deep learning has dominated the field of computer vision since 2012; however, a general criticism of deep learning methods is their dependence on large datasets. To overcome this criticism, research focused on discovering data-efficient deep learning methods has been carried out. The foremost outcome of this data-efficient deep learning work is transfer learning, which is carried out with pre-trained networks. In this research, the pre-trained InceptionV3 model has been used to apply transfer learning in a convolutional neural network to implement a New Zealand sign language translator in real time. The focus of this research is to introduce a vision-based application that offers New Zealand sign language translation into text format by recognizing sign gestures, to overcome the communication barriers between the deaf community and the hearing community in New Zealand. As a byproduct of this research work, a new dataset for the New Zealand sign language alphabet has been created. After training the pre-trained InceptionV3 network with this captured dataset, a prototype for this New Zealand sign language translation system has been created.
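    A minimal sketch of the transfer-learning setup the abstract describes is given below, assuming a Keras/TensorFlow workflow, a directory of labelled alphabet images under nzsl_dataset/, and 26 output classes; the paths, image size, class count, and training settings are illustrative assumptions rather than details from the thesis.

```python
# Minimal transfer-learning sketch: frozen InceptionV3 base plus a new
# classification head for sign-language alphabet images (assumed layout:
# one sub-directory of images per letter under nzsl_dataset/).
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras.applications.inception_v3 import preprocess_input

IMG_SIZE = (299, 299)       # InceptionV3's native input resolution
NUM_CLASSES = 26            # assumed: one class per alphabet letter

# Load labelled images from disk (path and batch size are illustrative).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "nzsl_dataset/", image_size=IMG_SIZE, batch_size=32, label_mode="categorical")

# Pre-trained base network with its ImageNet classifier removed and frozen.
base = InceptionV3(weights="imagenet", include_top=False,
                   input_shape=IMG_SIZE + (3,))
base.trainable = False

# New head: global pooling followed by a softmax over the sign classes.
inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = preprocess_input(inputs)
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = models.Model(inputs, outputs)

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)  # training budget is illustrative only
```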

    Emotional Facial Expressions in Synthesised Sign Language Avatars: A Manual Evaluation

    This research explores and evaluates the contribution that facial expressions might make to improved comprehension and acceptability of sign language avatars. Focusing specifically on Irish Sign Language (ISL), we examine the Deaf community's responsiveness to sign language avatars. The hypothesis of this research is: augmenting an existing avatar with the 7 widely accepted universal emotions identified by Ekman [1] to achieve underlying facial expressions will make that avatar more human-like and improve usability and understandability for the ISL user. Using human evaluation methods [2], we compare an augmented set of avatar utterances against a baseline set, focusing on 2 key areas: comprehension and naturalness of facial configuration. We outline our approach to the evaluation, including our choice of ISL participants, the interview environment and the evaluation methodology. The evaluation results reveal that in a comprehension test there was little difference between the baseline avatars and those augmented with emotional facial expressions; we also found that the avatars lack various linguistic attributes.