39 research outputs found

    Hand Gesture Recognition using Depth Data for Indian Sign Language

    It is hard for most people who are not familiar with a sign language to communicate without an interpreter. A system that transcribes the symbols of a sign language into plain text can therefore support real-time communication, and it may also provide interactive training for people learning a sign language. A sign language uses manual communication and body language to convey meaning. Depth data for five gestures, corresponding to the alphabet signs Y, V, L, S and I, was obtained from an online database. Each segmented gesture is represented by its time-series curve, from which a feature vector is extracted. To recognise the class of a noisy input hand shape, a distance metric for hand dissimilarity, called the Finger-Earth Mover’s Distance (FEMD), is used. Because it matches only the fingers rather than the complete hand shape, it can better distinguish hand gestures with slight differences.
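    The finger-level matching idea can be sketched in a few lines. This is a toy dissimilarity in the spirit of FEMD, not the paper's actual metric: each hand is reduced to a list of normalized finger positions taken from its time-series curve, matched fingers pay a transport cost, and unmatched fingers pay a fixed penalty (the function names, templates, and penalty value are illustrative assumptions).

```python
def finger_emd(fingers_a, fingers_b, penalty=1.0):
    """Toy finger-level dissimilarity in the spirit of FEMD.

    Each hand is a list of normalized finger positions in [0, 1]
    (e.g. peak locations on the hand's time-series curve).  Matched
    fingers contribute their transport cost; unmatched fingers pay a
    fixed penalty, so hands with different finger counts stay apart.
    """
    a, b = sorted(fingers_a), sorted(fingers_b)
    cost = sum(abs(x - y) for x, y in zip(a, b))
    return cost + penalty * abs(len(a) - len(b))


def classify(fingers, templates):
    """Assign the input hand to the template with the smallest dissimilarity."""
    return min(templates, key=lambda name: finger_emd(fingers, templates[name]))


# Hypothetical templates for three of the five alphabet signs:
templates = {"V": [0.35, 0.65], "L": [0.2, 0.9], "I": [0.5]}
label = classify([0.3, 0.6], templates)   # closest to the two-finger "V" layout
```

    Because only finger positions enter the cost, small global hand-shape noise leaves the classification unchanged, which is the property the abstract highlights.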

    Hand Gesture Recognition Using Different Algorithms Based on Artificial Neural Network

    Gesture is one of the most natural and expressive means of communication between humans and computers in a real system. We naturally use various gestures to express our intentions in everyday life, and hand gestures are an important method of non-verbal communication for human beings. Man-machine interfaces based on hand-gesture recognition have been developed vigorously in recent years. This paper gives an overview of different methods for recognizing hand gestures using MATLAB, and describes the working details of a recognition process based on edge-detection and skin-detection algorithms.
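    The skin-detection stage mentioned above is often a per-pixel colour rule. The following is one classic RGB heuristic for uniform daylight illumination, not necessarily the thresholds used in this particular paper:

```python
def is_skin(r, g, b):
    """Classic RGB skin heuristic (uniform daylight): a common rule of thumb,
    not necessarily the exact thresholds of the surveyed paper."""
    return (r > 95 and g > 40 and b > 20 and
            max(r, g, b) - min(r, g, b) > 15 and
            abs(r - g) > 15 and r > g and r > b)


def skin_mask(pixels):
    """Binary mask over an iterable of (r, g, b) pixels."""
    return [1 if is_skin(*p) else 0 for p in pixels]
```

    A morphological clean-up and an edge detector would typically follow, turning the mask into a usable hand contour.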

    3-D Hand Pose Estimation from Kinect's Point Cloud Using Appearance Matching

    We present a novel appearance-based approach for pose estimation of a human hand using the point clouds provided by the low-cost Microsoft Kinect sensor. Both the free-hand case, in which the hand is isolated from the surrounding environment, and the hand-object case, in which different types of interactions are classified, have been considered. The hand-object case is clearly the most challenging task, since it has to deal with multiple tracks. The approach proposed here belongs to the class of partial pose estimation, where the pose estimated in one frame is used to initialize the next. The pose estimate is obtained by applying a modified version of the Iterative Closest Point (ICP) algorithm to synthetic models, finding the rigid transformation that aligns each model with the input data. The proposed framework uses a "pure" point cloud as provided by the Kinect sensor, without any additional information such as RGB values or normal vector components. For this reason, the method can also be applied to data obtained from other types of depth sensors or RGB-D cameras.
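    At the heart of each ICP iteration is a closed-form least-squares rigid fit between corresponded point sets. As a minimal sketch (2-D rather than the paper's 3-D, and with correspondences assumed given, which full ICP would re-estimate by nearest-neighbour search every iteration):

```python
import math


def fit_rigid_2d(src, dst):
    """Closed-form least-squares rotation + translation mapping src onto dst.

    This is the 2-D analogue of the alignment step inside one ICP
    iteration; src[i] is assumed to correspond to dst[i].
    """
    n = len(src)
    cxs = sum(p[0] for p in src) / n; cys = sum(p[1] for p in src) / n
    cxd = sum(p[0] for p in dst) / n; cyd = sum(p[1] for p in dst) / n
    sxx = syy = sxy = syx = 0.0
    for (xs, ys), (xd, yd) in zip(src, dst):
        xs -= cxs; ys -= cys; xd -= cxd; yd -= cyd
        sxx += xs * xd; syy += ys * yd
        sxy += xs * yd; syx += ys * xd
    theta = math.atan2(sxy - syx, sxx + syy)   # optimal rotation angle
    c, s = math.cos(theta), math.sin(theta)
    tx = cxd - (c * cxs - s * cys)             # translation after rotation
    ty = cyd - (s * cxs + c * cys)
    return theta, tx, ty
```

    In 3-D the same step is usually solved with an SVD (the Kabsch algorithm); the "modified ICP" of the paper wraps this solve in a loop of correspondence search and outlier handling.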

    Back propagation Neural Network Proposed Algorithm to learn deaf a Computer Commands by Hand Gestures

    Sign language plays an important role in enabling interaction between people and computers: it builds on hand movements and gives people with disabilities (deaf people) an easier way to express what they want with their hands. This paper gives an overview of a proposed backpropagation neural network algorithm for identifying a set of computer commands from hand signs (gestures).
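    The abstract does not detail the network, but the update rule that backpropagation chains through every layer can be shown on its simplest case, a single sigmoid output unit trained by gradient descent on cross-entropy loss (the toy features and hyperparameters below are illustrative assumptions, not the paper's):

```python
import math


def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))


def train_sigmoid_unit(samples, lr=1.0, epochs=2000):
    """Gradient descent for one sigmoid neuron: the output-layer update that
    backpropagation propagates through hidden layers.
    samples: list of (feature_vector, target in {0, 1})."""
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, t in samples:
            y = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = y - t                        # dL/dz for cross-entropy loss
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b


def predict(w, b, x):
    return 1 if sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) > 0.5 else 0
```

    A full gesture classifier would stack hidden layers and feed in extracted hand-shape features, but the `err`-driven weight update is the same mechanism.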

    A hybrid method using kinect depth and color data stream for hand blobs segmentation

    Recently developed depth sensors such as the Kinect have opened new possibilities for human-computer interaction (HCI), and hand gestures are one of the main topics of recent research. A hand-segmentation procedure is performed to extract the hand gesture from a captured image. In this paper, a method is proposed to segment hand blobs using both the depth and the color data frames. The method applies body-segmentation and image-thresholding techniques to the depth frame using skeleton data, and concurrently uses the SLIC superpixel segmentation method, again guided by skeleton data, to extract hand blobs from the color frame. The proposed method has low computation time and shows significant results when its basic assumptions are fulfilled.
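    The depth-side half of such a hybrid pipeline is essentially a threshold around the skeleton-tracked hand joint's depth; a minimal sketch (the SLIC superpixel step on the colour frame is omitted, and the tolerance value is an assumption):

```python
def hand_mask_from_depth(depth_mm, hand_joint_mm, delta_mm=70):
    """Binary hand mask from a depth frame (row-major list of lists, mm):
    keep pixels within +/- delta_mm of the skeleton-tracked hand joint's
    depth; a value of 0 marks an invalid depth reading."""
    return [[1 if d > 0 and abs(d - hand_joint_mm) <= delta_mm else 0
             for d in row]
            for row in depth_mm]


# Toy 2x3 frame: hand joint tracked at ~800 mm, background near 1.5 m.
frame = [[800, 810, 1500],
         [0,   805, 1490]]
mask = hand_mask_from_depth(frame, 800)   # -> [[1, 1, 0], [0, 1, 0]]
```

    The colour-frame SLIC result would then be intersected with (or used to refine the border of) this depth mask.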

    Tiny hand gesture recognition without localization via a deep convolutional network

    Visual hand-gesture recognition is increasingly desired for human-computer interaction interfaces. In many applications, the hands occupy only about 10% of the image, while most of it contains background, the user's face and the user's body. Spatially localizing the hands in such scenarios can be challenging, and ground-truth bounding boxes are needed for training, which are usually not available. However, the location of the hand is not required when the goal is simply to recognize a gesture that commands a consumer electronics device, such as a mobile phone or a TV. In this paper, a deep convolutional neural network is proposed to classify hand gestures in images directly, without any segmentation or detection stage to discard the irrelevant non-hand areas. The designed hand-gesture recognition network classifies seven kinds of hand gestures in a user-independent manner and in real time, achieving an accuracy of 97.1% on a dataset with simple backgrounds and 85.3% on a dataset with complex backgrounds.
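    Why can a CNN skip localization? Convolution slides the same filter everywhere, and global pooling then discards location, so a pattern triggers the same activation wherever it sits. A toy demonstration of that invariance (plain Python, not the paper's network):

```python
def conv2d_valid(img, kernel):
    """'Valid' 2-D cross-correlation of a single-channel image with a kernel."""
    h, w = len(img), len(img[0])
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(img[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(w - kw + 1)]
            for i in range(h - kh + 1)]


def global_max_pool(fm):
    """Collapse a feature map to a single activation, discarding location."""
    return max(max(row) for row in fm)


# The same 2x2 pattern placed at opposite corners of a 5x5 image
# yields an identical pooled response: position does not matter.
kernel = [[1, 1], [1, 1]]
```

    In the real network, many learned filters feed fully connected layers after pooling, but the same mechanism is what removes the need for hand bounding boxes.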

    A Brief Survey of Image-Based Depth Upsampling

    Recently, there has been remarkable growth of interest in the development and applications of Time-of-Flight (ToF) depth cameras. However, despite the steady improvement of their characteristics, the practical applicability of ToF cameras is still limited by the low resolution and quality of their depth measurements. This has motivated many researchers to combine ToF cameras with other sensors in order to enhance and upsample depth images. In this paper, we compare ToF cameras to three image-based techniques for depth recovery, discuss the upsampling problem, and survey the approaches that couple ToF depth images with high-resolution optical images. Other classes of upsampling methods are also mentioned.
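    The core idea behind coupling low-resolution depth with a high-resolution optical image is joint bilateral weighting: each output pixel averages depth samples weighted by spatial distance and by intensity similarity in the guide image, so depth edges snap to guide edges. A 1-D toy sketch (real surveyed methods are 2-D and considerably more elaborate; parameter names and values are assumptions):

```python
import math


def joint_bilateral_upsample_1d(depth_lo, guide_hi, factor,
                                sigma_s=0.5, sigma_r=10.0):
    """1-D toy of joint bilateral upsampling.

    depth_lo: low-resolution depth samples; guide_hi: high-resolution
    guide intensities (len(depth_lo) * factor).  Each output pixel is a
    weighted average of depth samples, with a spatial Gaussian weight
    AND a range weight on guide-intensity similarity.
    """
    n_hi = len(guide_hi)
    out = []
    for i in range(n_hi):
        num = den = 0.0
        for j, d in enumerate(depth_lo):
            hj = min(j * factor + factor // 2, n_hi - 1)  # sample position in hi-res grid
            w_s = math.exp(-((i - hj) / (sigma_s * factor)) ** 2)
            w_r = math.exp(-((guide_hi[i] - guide_hi[hj]) / sigma_r) ** 2)
            num += w_s * w_r * d
            den += w_s * w_r
        out.append(num / den)
    return out
```

    With a guide that has a sharp intensity step, the upsampled depth follows the step instead of blurring across it, which is exactly the behaviour naive interpolation lacks.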

    Real-time motion-based hand gestures recognition from time-of-flight video

    The final publication is available at Springer via http://dx.doi.org/10.1007/s11265-015-1090-5
    This paper presents an innovative solution, based on Time-of-Flight (ToF) video technology, to motion-pattern detection for real-time dynamic hand-gesture recognition. The resulting system detects motion-based hand gestures taking depth images as input. The recognizable motion patterns are modeled on the basis of the human arm's anatomy and its degrees of freedom, generating a collection of synthetic motion patterns that is compared with the captured input patterns in order to classify the input gesture. For the evaluation of our system, a significant collection of gestures has been compiled, with results reported for 3D pattern classification as well as a comparison against results using only 2D information.
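    Comparing captured tracks against a bank of synthetic motion patterns amounts to nearest-template classification. A minimal sketch with a plain sum-of-squared-differences metric (the template names, the metric, and the assumption of equal-length, pre-resampled trajectories are all illustrative; the paper's matching is more sophisticated):

```python
def classify_motion(traj, templates):
    """Nearest-template classification of a 2-D motion trajectory.

    traj and every template are equal-length lists of (x, y) points;
    a real system would first resample and normalize the captured track.
    """
    def ssd(a, b):
        return sum((xa - xb) ** 2 + (ya - yb) ** 2
                   for (xa, ya), (xb, yb) in zip(a, b))
    return min(templates, key=lambda name: ssd(traj, templates[name]))


# Hypothetical synthetic patterns derived from arm kinematics:
templates = {
    "swipe_right": [(0, 0), (1, 0), (2, 0)],
    "swipe_up":    [(0, 0), (0, 1), (0, 2)],
}
label = classify_motion([(0, 0.1), (1, 0), (2.1, 0)], templates)
```

    Using 3-D trajectories from the depth stream instead of 2-D ones changes only the point dimensionality, which is the comparison the evaluation reports.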

    Towards sociable virtual humans : multimodal recognition of human input and behavior

    One of the biggest obstacles to constructing effective sociable virtual humans lies in the failure of machines to recognize the desires, feelings and intentions of the human user. Virtual humans lack the ability to fully understand and decode the communication signals human users emit when communicating with each other. This article describes our research on overcoming this problem by developing senses for virtual humans which enable them to hear and understand human speech, localize the human user in front of the display system, recognize hand postures, and recognize the emotional state of the user by classifying facial expressions. We report on the methods needed to perform these tasks in real time and conclude with an outlook on promising research issues for the future.