
    Static Hand Gesture Recognition in the Indonesian Sign Language System Based on Backpropagation Neural Networks

    Static Hand Gesture Recognition of Indonesian Sign Language System Based on Backpropagation Neural Networks. The main objective of this research is to perform pattern recognition of static hand gestures in Indonesian sign language. Pattern recognition of static hand gestures in image form has three phases: 1) segmentation of the image into the recognizable forms of the hands and face, 2) feature extraction, and 3) pattern classification. In this research, we used image data of 15 static word classes. Segmentation is performed in HSV color space with a threshold filter based on skin color. Feature extraction is performed with a Haar wavelet decomposition filter to level 2. Classification is done with a backpropagation neural network with 4096 neurons in the input layer, 75 neurons in the hidden layer and 15 neurons in the output layer. The system was tested using 225 validation samples, and the accuracy achieved was 69%. Keywords: artificial neural networks, feature extraction, hand gesture, segmentation, static
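    The 4096-neuron input layer suggests a 64×64 coefficient map, which is what a level-2 decomposition yields from, for instance, a 256×256 segmented image. A minimal sketch of the level-2 Haar approximation feature extraction, assuming only the approximation (LL) subband is kept and using plain 2×2 block averaging as the low-pass step (the exact normalization and image size are assumptions, not values from the abstract):

```python
import numpy as np

def haar_approx(img, levels=2):
    """Level-`levels` Haar approximation (LL) subband via repeated 2x2
    block averaging. Assumes side lengths divisible by 2**levels."""
    for _ in range(levels):
        img = 0.25 * (img[0::2, 0::2] + img[1::2, 0::2]
                      + img[0::2, 1::2] + img[1::2, 1::2])
    return img

# A hypothetical 256x256 segmented hand image reduces to 64x64 = 4096
# features, matching the input-layer size described above.
features = haar_approx(np.zeros((256, 256))).ravel()
assert features.size == 4096
```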

    Low Complexity Radar Gesture Recognition Using Synthetic Training Data

    Developments in radio detection and ranging (radar) technology have made hand gesture recognition feasible. In heat-map-based gesture recognition, feature images are large and require complex neural networks to extract information. Machine learning methods typically require large amounts of data, and collecting hand gestures with radar is time- and energy-consuming. Therefore, a low-computational-complexity algorithm for hand gesture recognition based on a frequency-modulated continuous-wave (FMCW) radar and a synthetic hand gesture feature generator are proposed. In the low-complexity algorithm, a two-dimensional fast Fourier transform is applied to the radar raw data to generate a range-Doppler matrix. After that, background modelling is applied to separate the dynamic object from the static background. Then the bin with the highest magnitude in the range-Doppler matrix is selected to locate the target and obtain its range and velocity. The bins at this location along the antenna dimension can be used to calculate the angle of the target via Fourier beam steering. In the synthetic generator, the Blender software is used to generate different hand gestures and trajectories, and the range, velocity and angle of targets are then extracted directly from the trajectory. The experimental results demonstrate that the average recognition accuracy of the model on the test set reaches 89.13% when the synthetic data are used as the training set and the real data are used as the test set. This indicates that synthetic data generation can make a meaningful contribution in the pre-training phase.
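    The first two processing steps described above (2-D FFT to a range-Doppler matrix, then strongest-bin selection) can be sketched as follows; the matrix layout, the absence of windowing, and the omission of the background-modelling step are simplifying assumptions:

```python
import numpy as np

def locate_target(raw):
    """raw: (n_chirps, n_samples) complex FMCW beat-signal matrix.
    Range FFT along fast time, Doppler FFT along slow time, then pick
    the strongest bin; its indices are the target's Doppler and range bins."""
    rd = np.fft.fft(raw, axis=1)                          # range FFT per chirp
    rd = np.fft.fftshift(np.fft.fft(rd, axis=0), axes=0)  # Doppler FFT, zero-centered
    doppler_bin, range_bin = np.unravel_index(np.argmax(np.abs(rd)), rd.shape)
    return rd, range_bin, doppler_bin
```

    In the pipeline described above, background modelling would be applied to the range-Doppler matrix before the bin selection, so that only the dynamic hand target remains.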

    Hand Gesture Recognition Using Artificial Neural Networks

    Hand gesture has long been part of human communication: young children usually communicate by gesture before they can talk, and adults gesture when they need to, or when they are mute or deaf. The idea of teaching a machine to learn gestures is therefore very appealing as a unique mode of communication. A reliable hand gesture recognition system could make the remote control obsolete. However, many newly proposed techniques are too complicated to implement in real time, especially as a human-machine interface. This thesis focuses on recognizing hand gestures in static postures. Since static hand postures not only can express concepts on their own but can also act as transition states in temporal gesture recognition, estimating static hand postures is in fact a major topic in gesture recognition. A database of 200 gesture images was built with the help of five volunteers. The images were captured in a controlled environment: the postures are free from occlusion, the background is uncluttered, and the hand is assumed to have been localized. A system was then built to recognize the hand gestures. The captured image is first preprocessed to binarize the palm region, using the Sobel edge detection technique followed by morphological operations. A new feature extraction technique was developed, based on horizontal and vertical state-transition counts and the ratio of hand area to the whole image area. This feature set has been shown to have high inter-class dissimilarity. To obtain a system that can be easily trained, artificial neural networks were chosen for the classification stage. A multilayer perceptron with the back-propagation algorithm was developed, so the system is in effect built to be used as a human-machine interface.
    The gesture recognition system was built and tested in Matlab, where simulations have shown promising results. The recognition rate achieved in this research is 95%, a major improvement over the available methods.
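    The transition-count features described above can be sketched as follows on a binarized palm image; the thesis does not state the exact counting or normalization convention, so this form is an assumption:

```python
import numpy as np

def transition_features(binary):
    """Features from a binarized hand image: horizontal and vertical
    0/1 state-transition counts plus the hand-area ratio, per the
    description above (exact normalization assumed)."""
    b = binary.astype(np.int8)
    h_trans = int(np.abs(np.diff(b, axis=1)).sum())  # state changes along rows
    v_trans = int(np.abs(np.diff(b, axis=0)).sum())  # state changes along columns
    area_ratio = float(b.sum()) / b.size             # hand pixels / all pixels
    return h_trans, v_trans, area_ratio
```

    A feature vector of this kind is cheap to compute, which matches the thesis's goal of a technique simple enough for a real-time human-machine interface.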

    Dynamic Hand Gesture Recognition Using Ultrasonic Sonar Sensors and Deep Learning

    The space of hand gesture recognition using radar and sonar is dominated mostly by radar applications. In addition, the machine learning algorithms used by these systems are typically based on convolutional neural networks, with some applications exploring the use of long short-term memory networks. The goal of this study was to design and build a sonar system that can classify hand gestures using a machine learning approach, and secondly to compare convolutional neural networks with long short-term memory networks as a means of classifying hand gestures using sonar. A Doppler sonar system was designed and built to sense hand gestures. The sonar system is multi-static, containing one transmitter and three receivers, and measures the Doppler frequency shifts caused by dynamic hand gestures. Since the system uses three receivers, three different Doppler frequency channels are measured. Three additional differential frequency channels are formed by computing the differences between the frequencies of each pair of receivers. These six channels are used as inputs to the deep learning models. Two different deep learning algorithms were used to classify the hand gestures: a Doppler biLSTM network [1] and a CNN [2]. Six basic hand gestures, two in each of the x-, y- and z-axes, and two rotational hand gestures were recorded with both the left and right hands at different distances. Ten-fold cross-validation was used to evaluate the networks' performance and classification accuracy. The LSTM was able to classify the six basic gestures with an accuracy of at least 96%, but with the addition of the two rotational gestures the accuracy drops to 47%. This result is acceptable since the basic gestures are more commonly used than rotational gestures. The CNN was able to classify all the gestures with an accuracy of at least 98%.
    Additionally, the LSTM network is able to distinguish left-hand from right-hand gestures with an accuracy of 80%, and the CNN with an accuracy of 83%. The study shows that the CNN is the better-suited algorithm for this task, as it consistently classifies gestures of various degrees of complexity, while the LSTM network can also classify hand gestures with a high degree of accuracy. More experimentation, however, is needed in order to increase the complexity of recognisable gestures.
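    Forming the six-channel network input described above (the three receiver Doppler channels plus their three pairwise differences) might look like the sketch below; the channel ordering and the pairing convention are assumptions, as the abstract does not specify them:

```python
import numpy as np

def six_channel_input(rx1, rx2, rx3):
    """Stack three receiver Doppler-frequency channels with their three
    pairwise differential channels into a (6, T) array for the deep
    learning models described above. Ordering is assumed."""
    channels = [rx1, rx2, rx3,
                rx1 - rx2,   # differential channels: pairwise
                rx2 - rx3,   # frequency differences between receivers
                rx1 - rx3]
    return np.stack(channels, axis=0)
```

    The differential channels encode direction-dependent information that a single receiver cannot provide, which is what lets the models separate gestures moving along different axes.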