436 research outputs found

    American Sign Language alphabet recognition using Microsoft Kinect

    Get PDF
    American Sign Language (ASL) fingerspelling recognition using marker-less vision sensors is a challenging task due to the complexity of ASL signs, self-occlusion of the hand, and the limited resolution of the sensors. This thesis describes a new method for ASL fingerspelling recognition using a low-cost vision camera, Microsoft's Kinect. A segmented hand configuration is first obtained using a per-pixel classification algorithm based on a depth-contrast feature. A hierarchical mode-finding method is then developed and implemented to localize hand joint positions under kinematic constraints. Finally, a Random Decision Forest (RDF) classifier is built to recognize ASL signs from the joint angles. To validate the performance of this method, a dataset containing 75,000 samples of 24 static ASL alphabet signs is used. The system achieves a mean accuracy of 92%. We have also used a publicly available dataset from Surrey University to evaluate our method. The results show that our method achieves higher accuracy in recognizing ASL alphabet signs than the previous benchmarks. --Abstract, page iii
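The joint-angle features feeding the RDF classifier can be illustrated with a small sketch. The three-joint finger model, point coordinates, and function name below are illustrative assumptions, not the thesis code; only the vector geometry (angle at the middle joint of three localized 3-D points) is standard:

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle (radians) at joint b formed by the 3-D points a-b-c."""
    v1, v2 = a - b, c - b
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.arccos(np.clip(cosang, -1.0, 1.0))  # clip guards rounding error

# Hypothetical localized joints for one finger: knuckle, middle joint, tip.
knuckle = np.array([0.0, 0.0, 0.0])
middle  = np.array([0.0, 3.0, 0.5])
tip     = np.array([0.0, 5.0, 2.0])

theta = joint_angle(knuckle, middle, tip)
print(np.degrees(theta))
```

A vector of such angles, one per articulated joint, would then be the feature row handed to the forest.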

    Sign Language Fingerspelling Classification from Depth and Color Images using a Deep Belief Network

    Full text link
    Automatic sign language recognition is an open problem that has received a lot of attention recently, not only because of its usefulness to signers but also because of the numerous applications a sign classifier can have. In this article, we present a new feature extraction technique for hand pose recognition using depth and intensity images captured from a Microsoft Kinect sensor. We applied our technique to American Sign Language fingerspelling classification using a Deep Belief Network, for which our feature extraction technique is tailored. We evaluated our results on a multi-user data set under two scenarios: one with all known users and one with an unseen user. We achieved 99% recall and precision on the first, and 77% recall and 79% precision on the second. Our method is also capable of real-time sign classification and is adaptive to any environment or lighting intensity. Comment: Published in the 2014 Canadian Conference on Computer and Robot Vision
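A deep belief network of this kind can be approximated in scikit-learn by stacking restricted Boltzmann machines in front of a supervised classifier. This is a generic sketch, not the authors' architecture or data: the feature vectors are random stand-ins for binarized depth/intensity crops, and the layer sizes are arbitrary:

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
# Stand-in data: 200 binarized feature vectors (e.g. flattened 8x8 crops),
# labeled with one of 24 static letter classes.
X = (rng.random((200, 64)) > 0.5).astype(float)
y = rng.integers(0, 24, size=200)

# Two unsupervised RBM layers, then a logistic-regression readout.
dbn = Pipeline([
    ("rbm1", BernoulliRBM(n_components=128, learning_rate=0.05, n_iter=10, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=10, random_state=0)),
    ("clf", LogisticRegression(max_iter=200)),
])
dbn.fit(X, y)
print(dbn.predict(X[:5]))
```

In a real DBN the RBM layers would also be fine-tuned jointly with the classifier; the greedy layer-wise pipeline above only captures the pre-training stage.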

    New Method for Optimization of License Plate Recognition system with Use of Edge Detection and Connected Component

    Full text link
    License plate recognition plays an important role in traffic monitoring and parking management systems. In this paper, a fast, real-time method is proposed that is well suited to finding tilted and poor-quality plates. In the proposed method, the image is first converted into binary form using an adaptive threshold. Then, using edge detection and morphology operations, the location of the plate number is identified. Finally, if the plate is tilted, its tilt is removed. This method has been tested on another paper's data set, whose images differ in background, distance, and angle of view; the correct plate extraction rate reached 98.66%. Comment: 3rd IEEE International Conference on Computer and Knowledge Engineering (ICCKE 2013), October 31 & November 1, 2013, Ferdowsi University of Mashhad
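The first stages of such a pipeline, adaptive (local-mean) thresholding followed by connected-component labeling, can be sketched with NumPy and SciPy. The synthetic image, window size, and offset below are illustrative choices, not the paper's parameters:

```python
import numpy as np
from scipy.ndimage import uniform_filter, label

def adaptive_threshold(img, win=15, offset=10):
    """Mark a pixel foreground if it is darker than its local mean by `offset`."""
    local_mean = uniform_filter(img.astype(float), size=win)
    return (img < local_mean - offset).astype(np.uint8)

# Synthetic grayscale image: bright background with two dark "character" blobs.
img = np.full((60, 120), 200, dtype=np.uint8)
img[20:40, 10:30] = 50
img[20:40, 50:70] = 50

binary = adaptive_threshold(img)
labels, n = label(binary)  # group foreground pixels into connected components
print(n)
```

Note that local-mean thresholding responds most strongly near intensity edges, which is exactly why it pairs well with the edge-detection and morphology steps the paper applies next to isolate the plate region.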

    Spelling it out: Real-time ASL fingerspelling recognition

    Get PDF
    This article presents an interactive hand shape recognition user interface for American Sign Language (ASL) fingerspelling. The system makes use of a Microsoft Kinect device to collect appearance and depth images, and of the OpenNI+NITE framework for hand detection and tracking. Hand shapes corresponding to letters of the alphabet are characterized using appearance and depth images and classified using random forests. We compare classification using appearance and depth images, show that a combination of both leads to the best results, and validate on a dataset of four different users. The hand shape detection works in real time and is integrated into an interactive user interface that allows the signer to select between ambiguous detections, and with an English dictionary for efficient writing.
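Fusing appearance and depth descriptors before a random-forest classifier can be sketched as simple feature-level concatenation. The stand-in features and labels below are random placeholders, not the paper's data, so no meaningful accuracy should be expected from this toy run:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 240
appearance = rng.random((n, 32))   # e.g. flattened grayscale hand crops
depth = rng.random((n, 32))        # e.g. flattened depth-map crops
y = rng.integers(0, 24, size=n)    # 24 letter classes

X = np.hstack([appearance, depth])  # feature-level fusion of both modalities
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
print(clf.score(Xte, yte))
```

Comparing this fused model against forests trained on `appearance` or `depth` alone is the experiment shape the article describes.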

    Review on Classification Methods used in Image based Sign Language Recognition System

    Get PDF
    Sign language is the way of communication among Deaf people, who express themselves through signs. This paper presents a review of sign language recognition systems that aim to provide a means of communication for Deaf people. It reviews image-based sign language recognition systems: signs take the form of hand gestures, and these gestures are identified from images as well as videos. Gestures are identified and classified according to features of the gesture image, such as shape, rotation, angle, pixels, and hand movement. Features are extracted by various feature extraction methods and classified by various machine learning methods. The main purpose of this paper is to review the classification methods used in similar image-based hand gesture recognition systems. The paper also compares various systems on the basis of classification method and accuracy rate.

    Sign Language Translation Approach to Sinhalese Language

    Get PDF
    Sign language is used for communication between deaf persons, while the Sinhalese language is used by hearing persons whose first language is Sinhalese in Sri Lanka. This research focuses on an approach for real-time translation from Sri Lankan sign language to the Sinhalese language, which will bridge the communication gap between the deaf and ordinary communities. The study further focuses on a novel methodology for enabling distance communication between deaf and ordinary persons. Once the sign-based gestures are captured by a depth-sensing camera, a series of feature extraction techniques is used to identify the essential attributes in the gesture frame. The identified feature frame is compared with a pre-trained gesture dictionary using classification techniques, in order to identify the gesture-based word. The detected word is displayed for the ordinary user, or can be used for communication between two individuals in two different geographic locations. The proposed prototype has provided an overall recognition rate of 94.2% for a dictionary of fifteen signs in Sri Lankan sign language.
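Matching an extracted feature frame against a pre-trained gesture dictionary can be sketched as a nearest-neighbour lookup. The sign names, feature length, and template values below are invented for illustration; the prototype's actual dictionary and classifier are not described at this level of detail:

```python
import numpy as np

# Hypothetical gesture dictionary: each sign maps to a trained template vector.
dictionary = {
    "hello":     np.array([0.9, 0.1, 0.3]),
    "thank_you": np.array([0.2, 0.8, 0.5]),
    "help":      np.array([0.4, 0.4, 0.9]),
}

def classify(frame):
    """Return the sign whose template is closest (Euclidean) to the frame."""
    return min(dictionary, key=lambda s: np.linalg.norm(dictionary[s] - frame))

print(classify(np.array([0.85, 0.15, 0.35])))  # closest to the "hello" template
```

The detected word would then be rendered as text for the hearing user, locally or at the remote end of a distance-communication session.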

    Virtual Sign: a real-time bidirectional translator of Portuguese Sign Language

    Get PDF
    Promoting equity, equal opportunities for all, and the social inclusion of people with disabilities is a concern of modern societies at large and a key topic on the agenda of European Higher Education. Despite all the progress, we cannot ignore the fact that the conditions society provides for the deaf are still far from perfect. Communication with the deaf by means of written text is not as efficient as it might seem at first. In fact, there is a very deep gap between sign language and spoken/written language. The vocabulary, the sentence construction, and the grammatical rules are quite different between these two worlds. These facts bring significant difficulties in reading and understanding the meaning of text for deaf people and, on the other hand, make it quite difficult for people with no hearing disabilities to understand sign language. The deployment of tools to assist daily communication between deaf people and the rest of society, in schools, in public services, in museums and elsewhere, may be a significant contribution to the social inclusion of the deaf community. The work described in this paper addresses the development of a bidirectional translator between Portuguese Sign Language and Portuguese text. The translator from sign language to text resorts to two devices, namely the Microsoft Kinect and 5DT Sensor Gloves, in order to gather data about the motion and shape of the hands. The hand configurations are classified using Support Vector Machines. The classification of the movement and orientation of the hands is achieved through the use of the Dynamic Time Warping algorithm. The translator exhibits a precision higher than 90%. In the other direction, the translation of Portuguese text to Portuguese Sign Language is supported by a 3D avatar which interprets the entered text and performs the corresponding animations.
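The Dynamic Time Warping step used to compare hand-movement trajectories can be sketched in plain Python. The 1-D toy sequences below stand in for the multi-dimensional Kinect/glove streams the translator actually processes:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) DTW distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)  # cumulative-cost matrix
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of: insertion, deletion, or match.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# A time-stretched copy of a movement curve stays close under DTW...
print(dtw_distance([0, 1, 2, 3, 2, 1], [0, 0, 1, 2, 3, 2, 1]))  # 0.0
# ...while a different trajectory does not.
print(dtw_distance([0, 1, 2, 3, 2, 1], [3, 3, 3, 3, 3, 3]))
```

This elasticity to timing differences is why DTW suits gesture movement: two signers rarely execute the same sign at exactly the same speed.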