
    Hand gesture recognition for human computer interaction: a comparative study of different image features

    Hand gesture recognition, being a natural way of human-computer interaction, is an area of active research in computer vision and machine learning. It is an area with many possible applications, giving users a simpler and more natural way to communicate with robots and system interfaces, without the need for extra devices. The primary goal of gesture recognition research is therefore to create systems that can identify specific human gestures and use them to convey information or to control devices. For that, vision-based hand gesture interfaces require fast and extremely robust hand detection and gesture recognition in real time. In this study we try to identify hand features that, in isolation, respond better in various human-computer interaction situations. The extracted features are used to train a set of classifiers with the help of RapidMiner in order to find the best learner. A dataset with our own gesture vocabulary, consisting of 10 gestures recorded from 20 users, was created for later processing. Experimental results show that the radial signature and the centroid distance are the features that obtain the best results when used separately, with accuracies of 91% and 90.1% respectively, obtained with a neural network classifier. These two methods also have the advantage of low computational complexity, which makes them good candidates for real-time hand gesture recognition.
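    The centroid distance feature named in the abstract above can be sketched as follows. This is a minimal illustration rather than the authors' implementation: the fixed resampling length (`n_samples=64`) and the max-normalization are assumptions made for the sketch.

```python
import numpy as np

def centroid_distance(contour, n_samples=64):
    """Centroid distance signature: distances from the shape centroid
    to points along the contour, resampled to a fixed length and
    scale-normalized."""
    contour = np.asarray(contour, dtype=float)
    centroid = contour.mean(axis=0)
    d = np.linalg.norm(contour - centroid, axis=1)
    # resample to a fixed length so the feature is comparable across shapes
    idx = np.linspace(0, len(d) - 1, n_samples)
    d = np.interp(idx, np.arange(len(d)), d)
    return d / d.max()  # divide by the maximum radius for scale invariance
```

    For a circular contour every distance equals the radius, so the normalized signature is constant; the low cost of this computation is consistent with the real-time suitability claimed above.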

    A comparative study of different image features for hand gesture machine learning

    Vision-based hand gesture interfaces require fast and extremely robust hand detection and gesture recognition. Hand gesture recognition for human-computer interaction is an area of active research in computer vision and machine learning. The primary goal of gesture recognition research is to create a system that can identify specific human gestures and use them to convey information or to control devices. In this paper we present a comparative study of seven different algorithms for hand feature extraction for static hand gesture classification, analysed with RapidMiner in order to find the best learner. We defined our own gesture vocabulary with 10 gestures, and we recorded videos of 20 persons performing the gestures for later processing. Our goal in the present study is to identify features that, in isolation, respond better in various human-computer interaction situations. Results show that the radial signature and the centroid distance are the features that obtain the best results when used separately, while also being simple in terms of computational complexity.
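    One common formulation of the radial signature mentioned above counts silhouette pixels per angular sector around the centroid. The sketch below follows that formulation under stated assumptions (36 angular bins, normalization to unit sum); it is not taken from the paper itself.

```python
import numpy as np

def radial_signature(mask, n_bins=36):
    """Radial signature of a binary silhouette: a histogram of the
    angles of foreground pixels around the centroid, normalized to
    sum to one for scale invariance."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    angles = np.arctan2(ys - cy, xs - cx)  # angle of each pixel, in (-pi, pi]
    hist, _ = np.histogram(angles, bins=n_bins, range=(-np.pi, np.pi))
    return hist / hist.sum()
```

    Like the centroid distance, this is a single pass over the silhouette pixels, which matches the low computational complexity reported in the abstract.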

    New human action recognition scheme with geometrical feature representation and invariant discretization for video surveillance

    Human action recognition is an active research area in computer vision because of its immense applications in video surveillance, video retrieval, security systems, video indexing and human-computer interaction. Action recognition deals with time-varying feature data generated by humans under different viewpoints, and aims to build a mapping between dynamic image information and semantic understanding. Although a great deal of progress has been made in the recognition of human actions during the last two decades, relatively few approaches have been reported in the literature, so further research is needed to address ongoing challenges and to develop more efficient approaches to human action recognition. Feature extraction is the main task in action recognition and represents the core of any action recognition procedure. It involves transforming the input data that describe the shape of a segmented silhouette of a moving person into a set of features representing action poses. In video surveillance, global moment invariants based on Geometric Moment Invariants (GMI) are widely used in human action recognition. However, GMI has several drawbacks, such as the lack of a granular interpretation of the invariants relative to the shape; consequently, the representation of features has not been standardized. Hence, this study proposes a new human action recognition (HAR) scheme with geometric moment invariants for feature extraction and supervised invariant discretization for identifying the uniqueness of actions in video sequences. The proposed scheme is tested on the IXMAS dataset, whose video sequences contain non-rigid human poses resulting from drastic illumination changes, pose changes and erratic motion patterns. The invariance of the proposed scheme is validated through intra-class and inter-class analysis. The proposed scheme yields better action recognition performance than the conventional scheme, with an average accuracy of more than 99%, while preserving the shape of the human actions in the video images.
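    The geometric moment invariants underlying the GMI representation are built from normalized central moments of the silhouette. The sketch below computes the first two Hu invariants as a concrete instance of the general technique; it is not the paper's exact feature set.

```python
import numpy as np

def hu_first_invariants(mask):
    """First two Hu moment invariants of a binary silhouette.
    Normalized central moments make them invariant to translation
    and scale; the full Hu set adds rotation invariance."""
    ys, xs = np.nonzero(mask)
    m00 = len(xs)                       # zeroth moment = area
    x, y = xs - xs.mean(), ys - ys.mean()

    def eta(p, q):
        # normalized central moment of order (p, q)
        return (x ** p * y ** q).sum() / m00 ** (1 + (p + q) / 2)

    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return phi1, phi2
```

    Two filled squares of different size and position give nearly identical invariants (phi1 approaches 1/6 for a filled square), which is the invariance property that moment-based schemes rely on.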

    A new framework for sign language alphabet hand posture recognition using geometrical features through artificial neural network (part 1)

    Hand pose tracking is essential in sign languages. Automatic recognition of performed hand signs facilitates a number of applications, especially enabling people with speech impairments to communicate with others. This framework, called ASLNN, proposes a new hand posture recognition technique for the American Sign Language alphabet based on a neural network operating on geometrical features extracted from the hands. A user's hand is captured by a three-dimensional depth-based sensor camera, and the hand is then segmented according to depth analysis features. The proposed system, named depth-based geometrical sign language recognition (DGSLR), adopts a simpler hand segmentation approach that can be reused in other segmentation applications. The proposed geometrical feature extraction framework improves recognition accuracy because the features are invariant to hand orientation, in contrast to the discrete cosine transform and moment invariants. The findings of the iterations demonstrate that combining the extracted features improves accuracy rates. An artificial neural network is then used to derive the desired outcomes. ASLNN is proficient at hand posture recognition and achieves an accuracy of up to 96.78%, which will be discussed in the follow-up paper by these authors in this journal.
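    The depth-based segmentation step described above can be illustrated with a simple nearest-object threshold, assuming the hand is the object closest to the sensor. The margin value and the zero-means-invalid convention are assumptions of this sketch, not details from the paper.

```python
import numpy as np

def segment_hand_by_depth(depth, margin=100):
    """Depth-threshold hand segmentation sketch: keep pixels within
    `margin` (depth units, e.g. millimetres) of the nearest valid
    depth value, treating zeros as invalid (no-return) pixels."""
    valid = depth > 0
    nearest = depth[valid].min()
    return valid & (depth <= nearest + margin)
```

    The resulting binary mask is what a geometrical feature extractor would then operate on.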

    Review on Classification Methods used in Image based Sign Language Recognition System

    Sign language is the means of communication among deaf and mute people, who express themselves through signs. This paper presents a review of sign language recognition systems that aim to provide a means of communication for deaf and mute people, focusing on image-based systems. Signs take the form of hand gestures, and these gestures are identified from images as well as videos. Gestures are identified and classified according to features of the gesture image, such as shape, rotation, angle, pixels and hand movement. Features are obtained by various feature extraction methods and classified by various machine learning methods. The main purpose of this paper is to review the classification methods of similar systems used in image-based hand gesture recognition. The paper also compares various systems on the basis of classification methods and accuracy rates.

    A Framework for Vision-based Static Hand Gesture Recognition

    In today’s technical world, intelligent computing for efficient human-computer interaction (HCI) or human alternative and augmentative communication (HAAC) is essential in our lives. Hand gesture recognition is one of the most important techniques that can be used to build a gesture-based interface system for HCI or HAAC applications. Suitable development of gesture recognition methods is therefore necessary to design advanced hand gesture recognition systems for successful applications such as robotics, assistive systems, sign language communication and virtual reality. However, variation in the illumination, rotation, position and size of gesture images, efficient feature representation, and classification are the main challenges in developing a real-time gesture recognition system. The aim of this work is to develop a framework for vision-based static hand gesture recognition that overcomes the challenges of illumination, rotation, size and position variation of the gesture images. In general, the framework developed in this thesis consists of preprocessing, feature extraction, feature selection and classification stages. The preprocessing stage involves the following sub-stages: image enhancement, which compensates for illumination variation; segmentation, which separates the hand region from the background and transforms it into a binary silhouette; image rotation, which makes the segmented gesture rotation invariant; and filtering, which removes background and object noise from the binary image and provides a well-defined segmented hand gesture. This work proposes an image rotation technique that makes the gesture rotation invariant by coinciding the first principal component of the segmented hand gesture with the vertical axis.
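    The rotation-normalization step described above, coinciding the first principal component with the vertical axis, can be sketched as follows on a set of 2-D silhouette points. This is a minimal illustration of the idea, up to the 180-degree ambiguity inherent in a principal axis.

```python
import numpy as np

def rotation_normalize(points):
    """Rotate 2-D points so their first principal component lies
    along the vertical axis, making the representation rotation
    invariant (up to a 180-degree flip)."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    # principal axes are the eigenvectors of the 2x2 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))
    major = eigvecs[:, np.argmax(eigvals)]   # unit first principal component
    # rotation that sends `major` onto the vertical direction (0, 1)
    c, s = major[1], major[0]
    R = np.array([[c, -s], [s, c]])
    return centered @ R.T
```

    After this transform, the same gesture presented at different in-plane rotations yields the same point configuration, so downstream features need not be rotation invariant themselves.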
    In the feature extraction stage, this work extracts localized contour sequence (LCS) and block-based features, and proposes a combined feature set formed by appending LCS features to block-based features to represent static hand gesture images. A discrete wavelet transform (DWT) and Fisher ratio (F-ratio) based feature set is also proposed for better representation of static hand gesture images. To extract this feature set, the DWT is applied to the resized and enhanced grayscale image, and the most important DWT coefficient matrices are then selected as features using the proposed F-ratio based coefficient matrix selection technique. In sequel, a modified radial basis function neural network (RBF-NN) classifier based on the k-means and least mean square (LMS) algorithms is proposed. In the proposed RBF-NN classifier, the centers are automatically selected using the k-means algorithm and the estimated weight matrix is updated using the LMS algorithm for better recognition of hand gesture images. A sigmoidal activation function based RBF-NN classifier, whose activation function is formed from a set of composite sigmoidal functions, is also proposed for further improvement of recognition performance. Finally, the extracted features are applied as input to the classifier to recognize the class of static hand gesture images. Subsequently, a feature vector optimization technique based on a genetic algorithm (GA) is proposed to remove redundant and irrelevant features. The proposed algorithms are tested on three static hand gesture databases, comprising grayscale images with a uniform background (Databases I and II) and color images with a non-uniform background (Database III). Database I is a repository database consisting of hand gesture images of 25 Danish/International Sign Language (D/ISL) hand alphabets. Databases II and III were developed in-house using a VGA Logitech webcam (C120) with 24 American Sign Language (ASL) hand alphabets.
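    The RBF-NN classifier described above (k-means for center selection, LMS for the output weights) can be sketched as follows. Hyperparameters such as the number of centers, the Gaussian width, the learning rate and the deterministic center seeding are assumptions of this sketch, not values from the thesis.

```python
import numpy as np

def train_rbf_lms(X, y, n_centers=4, sigma=1.0, lr=0.05, epochs=200):
    """Minimal RBF network sketch: hidden centers placed by a few
    k-means (Lloyd) iterations, output weights trained with the
    least-mean-square (LMS) rule. `y` holds one-hot class targets."""
    X = np.asarray(X, dtype=float)
    # simple deterministic seeding: spread initial centers over the data
    step = max(1, len(X) // n_centers)
    centers = X[::step][:n_centers].copy()
    for _ in range(10):                      # Lloyd (k-means) iterations
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for k in range(n_centers):
            if (labels == k).any():
                centers[k] = X[labels == k].mean(axis=0)

    def hidden(Z):
        # Gaussian radial basis activations for each sample in Z
        d = np.linalg.norm(np.asarray(Z, float)[:, None] - centers[None], axis=2)
        return np.exp(-d ** 2 / (2 * sigma ** 2))

    W = np.zeros((n_centers, y.shape[1]))
    for _ in range(epochs):
        for h, t in zip(hidden(X), y):
            W += lr * np.outer(h, t - h @ W)  # LMS weight update
    return W, hidden
```

    Classification then takes the argmax of `hidden(X) @ W` over the class columns.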

    Infrared face recognition: a comprehensive review of methodologies and databases

    Automatic face recognition is an area with immense practical potential, including a wide range of commercial and law-enforcement applications, so it is unsurprising that it continues to be one of the most active research areas of computer vision. Even after over three decades of intense research, the state of the art in face recognition continues to improve, benefitting from advances in a range of different research fields such as image processing, pattern recognition, computer graphics and physiology. Systems based on visible-spectrum images, the most researched face recognition modality, have reached a significant level of maturity with some practical success. However, they continue to face challenges in the presence of illumination, pose and expression changes, as well as facial disguises, all of which can significantly decrease recognition accuracy. Amongst the various approaches proposed to overcome these limitations, the use of infrared (IR) imaging has emerged as a particularly promising research direction. This paper presents a comprehensive and timely review of the literature on this subject. Our key contributions are: (i) a summary of the inherent properties of infrared imaging which make this modality promising in the context of face recognition, (ii) a systematic review of the most influential approaches, with a focus on emerging common trends as well as key differences between alternative methodologies, (iii) a description of the main databases of infrared facial images available to researchers, and lastly (iv) a discussion of the most promising avenues for future research.
    Comment: Pattern Recognition, 2014. arXiv admin note: substantial text overlap with arXiv:1306.160