
    Automatic recognition of fingerspelled words in British Sign Language

    We investigate the problem of recognizing words from video, fingerspelled using the British Sign Language (BSL) fingerspelling alphabet. This is a challenging task, since the BSL alphabet involves both hands occluding each other and contains signs which are ambiguous from the observer's viewpoint. The main contributions of our work are: (i) recognition based on hand shape alone, not requiring motion cues; (ii) robust visual features for hand shape recognition; (iii) scalability to large-lexicon recognition with no re-training. We report results on a dataset of 1,000 low-quality webcam videos of 100 words. The proposed method achieves a word recognition accuracy of 98.9%.
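    The large-lexicon claim in (iii) can be illustrated with a small sketch (not the paper's actual decoder): if per-frame classification yields a noisy letter string, a word is recognized by nearest lookup in a lexicon, so new words are added by extending the lexicon rather than retraining the classifier. All names below are hypothetical.

```python
# Hypothetical sketch of lexicon lookup after per-frame letter recognition.
# edit_distance and the toy lexicon are illustrative, not from the paper.

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance between two letter strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def recognize(decoded_letters: str, lexicon: list[str]) -> str:
    """Map a noisy decoded letter string to the closest lexicon word.
    Growing the lexicon needs no retraining of the letter classifier."""
    return min(lexicon, key=lambda word: edit_distance(decoded_letters, word))
```

    For example, a decoding with a dropped letter, recognize("helo", ["hello", "world"]), still maps to "hello".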

    Real time hand gesture recognition including hand segmentation and tracking

    In this paper we present a system that performs automatic gesture recognition. The system consists of two main components: (i) a unified technique for segmentation and tracking of the face and hands, using a skin detection algorithm together with occlusion handling between skin objects to keep track of the status of the occluded parts; this is realized by combining three features, namely color, motion and position; (ii) a static and dynamic gesture recognition system. Static gesture recognition is achieved using a robust hand shape classification, based on PCA subspaces, that is invariant to scale and to small translations and rotations. Combining hand shape classification with position information and using discrete hidden Markov models (DHMMs) allows us to accomplish dynamic gesture recognition.
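    As a loose illustration of the color cue in component (i), and not necessarily the paper's skin model, a pixel can be classified as skin with a simple explicit RGB threshold rule; the full system combines this with motion and position cues:

```python
# Illustrative skin-color rule on RGB pixels (an assumption, not the paper's
# exact model); the abstract's system also uses motion and position cues.

def is_skin(r: int, g: int, b: int) -> bool:
    """Classify one RGB pixel as skin with explicit thresholds."""
    return (r > 95 and g > 40 and b > 20
            and max(r, g, b) - min(r, g, b) > 15
            and abs(r - g) > 15 and r > g and r > b)

def skin_mask(image):
    """Binary mask for an image given as rows of (r, g, b) tuples."""
    return [[is_skin(*pixel) for pixel in row] for row in image]
```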

    Transformation invariance in hand shape recognition

    In hand shape recognition, transformation invariance is key to successful recognition. We propose a system that is invariant to small scale, translation and shape variations. This is achieved by using a priori knowledge to create a transformation subspace for each hand shape. Transformation subspaces are created by performing principal component analysis (PCA) on images produced using computer animation. A method to increase the efficiency of the system is also outlined: subspaces are grouped based on their origin and then organised into a hierarchical decision tree. We compare the accuracy of this technique with that of the tangent distance technique and present the results.
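    The core idea, one PCA subspace per hand shape and classification by distance to the nearest subspace, can be sketched as follows. This is a minimal illustration with synthetic stand-in vectors, not the paper's implementation:

```python
import numpy as np

# Sketch of per-shape PCA transformation subspaces; synthetic row vectors
# stand in for the animation-rendered training images of each hand shape.

def fit_subspace(samples: np.ndarray, k: int):
    """Return (mean, top-k principal axes) of the row-vector samples."""
    mean = samples.mean(axis=0)
    # Right singular vectors of the centered data are the principal axes.
    _, _, vt = np.linalg.svd(samples - mean, full_matrices=False)
    return mean, vt[:k]

def distance_to_subspace(x: np.ndarray, mean: np.ndarray, axes: np.ndarray):
    """Residual norm of x after projection onto the affine subspace."""
    centered = x - mean
    return np.linalg.norm(centered - axes.T @ (axes @ centered))

def classify(x: np.ndarray, subspaces: dict) -> str:
    """Assign x to the hand shape whose subspace it lies closest to."""
    return min(subspaces,
               key=lambda label: distance_to_subspace(x, *subspaces[label]))
```

    The hierarchical decision tree from the abstract would sit on top of this, pruning which subspaces need to be compared.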

    Shape-based hand recognition

    Full text link

    Vision-based hand shape identification for sign language recognition

    This thesis introduces an approach to obtaining image-based hand features that accurately describe hand shapes commonly found in American Sign Language. A hand recognition system capable of identifying 31 hand shapes from American Sign Language was developed to identify hand shapes in a given input image or video sequence. An appearance-based approach with a single camera is used to recognize the hand shape. A region-based shape descriptor, the generic Fourier descriptor, which is invariant to translation, scale, and orientation, has been implemented to describe the shape of the hand. A wrist detection algorithm has been developed to remove the forearm from the hand region before the features are extracted. The recognition of the hand shapes is performed with a multi-class Support Vector Machine. Testing yielded a recognition rate of approximately 84% on a widely varying test set of approximately 1,500 images with a training set of about 2,400 images. With a larger training set of approximately 2,700 images and a test set of approximately 1,200 images, the recognition rate increased to about 88%.
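    As a simplified stand-in for the region-based generic Fourier descriptor, the sketch below computes a centroid-distance (contour-based) Fourier descriptor with the same invariance properties: centering removes translation, taking spectrum magnitudes removes rotation and start-point dependence, and dividing by the DC term removes scale.

```python
import numpy as np

# Centroid-distance Fourier descriptor: a contour-based stand-in for the
# region-based generic Fourier descriptor named in the abstract.

def fourier_descriptor(contour: np.ndarray, n_coeffs: int = 10) -> np.ndarray:
    """contour: (N, 2) boundary points ordered along the outline."""
    radii = np.linalg.norm(contour - contour.mean(axis=0), axis=1)  # translation out
    spectrum = np.abs(np.fft.fft(radii))   # magnitudes: rotation/start point out
    return spectrum[1:n_coeffs + 1] / spectrum[0]      # divide by DC: scale out
```

    Scaling or rotating the contour leaves the descriptor unchanged, which is the invariance the abstract requires; the resulting feature vector would then be fed to the multi-class SVM.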

    Construction of Latent Descriptor Space and Inference Model of Hand-Object Interactions

    Appearance-based generic object recognition is a challenging problem because all possible appearances of objects cannot be registered, especially as new objects are produced every day. The functions of objects, however, have a comparatively small number of prototypes. Therefore, function-based classification of new objects could be a valuable tool for generic object recognition. Object functions are closely related to hand-object interactions during handling of a functional object, i.e., how the hand approaches the object, which parts of the object the hand contacts, and the shape of the hand during the interaction. Hand-object interactions are therefore helpful for modeling object functions. However, it is difficult to assign discrete labels to interactions, because object shapes and grasping hand postures intrinsically vary continuously. To describe these interactions, we propose an interaction descriptor space that is acquired from unlabeled appearances of human hand-object interactions. Using interaction descriptors, we can numerically describe the relation between an object's appearance and its possible interactions with the hand. The model infers the quantitative state of the interaction from the object image alone. It also identifies the parts of objects designed for hand interaction, such as grips and handles. We demonstrate that the proposed method can generate, without supervision, interaction descriptors that form clusters corresponding to interaction types, and that the model can infer possible hand-object interactions.
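    A rough sketch of the unsupervised step: embed unlabeled interaction features in a low-dimensional descriptor space and check that interaction types form clusters. PCA and k-means here are generic stand-ins for the paper's learned latent space, not its actual model.

```python
import numpy as np

# Generic stand-ins (PCA embedding + k-means) for the learned interaction
# descriptor space; the clustering check mirrors the abstract's claim that
# descriptors group by interaction type.

def embed(features: np.ndarray, dims: int) -> np.ndarray:
    """Project row-vector features onto their top principal components."""
    centered = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:dims].T

def kmeans(points: np.ndarray, k: int, iters: int = 20, seed: int = 0):
    """Plain Lloyd's algorithm; returns a cluster label per point."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    labels = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        labels = np.argmin(
            np.linalg.norm(points[:, None] - centers[None], axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):   # keep old center if a cluster empties
                centers[j] = points[labels == j].mean(axis=0)
    return labels
```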

    Hand Contour Recognition In Language Signs Codes Using Shape Based Hand Gestures Methods

    Deaf and speech-impaired people lose the ability to hear and, with it, the ability to develop spoken language for everyday communication. The difficulty of communicating normally makes it hard for them to be accepted by the wider hearing community. Their communication is gesture language, using hand gestures; its weakness is misunderstanding and limited reach, because hand gestures are understood only by a small group. Effective real-time two-way communication requires converting hand gesture patterns into text and sound that other people can understand. This research focuses on hand gesture recognition using a shape-based hand algorithm, which classifies images by hand contour, using the Hausdorff and Euclidean distances to determine the similarity between two hands from the shortest distance. The result is recognition of the 26 letter gestures with 85% accuracy, across different people's hands captured in different sessions, under different lighting conditions and at different camera distances; 70% of differing hand contours are also recognized. This research differs from others in that, as the number of objects grows, the dependence on hand size decreases; using this method, the effect of hand size can be minimized.
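    The matching step described above, comparing hand contours with the Hausdorff distance and picking the stored letter at the shortest distance, can be sketched as follows (function and template names are illustrative):

```python
import math

# Sketch of the matching step: symmetric Hausdorff distance between two hand
# contours given as point lists, and template lookup by shortest distance.

def directed_hausdorff(a, b):
    """Largest distance from a point of a to its nearest neighbour in b."""
    return max(min(math.dist(p, q) for q in b) for p in a)

def hausdorff(a, b):
    """Symmetric Hausdorff distance; smaller means more similar contours."""
    return max(directed_hausdorff(a, b), directed_hausdorff(b, a))

def closest_template(contour, templates):
    """Label of the stored letter contour nearest to the input contour."""
    return min(templates, key=lambda name: hausdorff(contour, templates[name]))
```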