2 research outputs found

    Hand Gesture Recognition Based on Keypoint Vector

    Human-computer interaction (HCI) is usually associated with popular input devices such as a mouse or keyboard. In other cases, hand gestures can be useful for human-computer interaction, for example to make game controls more interesting. The mouse provides three basic input controls: move, click, and drag. Hand gestures and hand shapes differ from person to person, which makes automatic recognition difficult. Recent research has demonstrated the success of the Deep Neural Network (DNN) in representation and high-accuracy hand gesture recognition. DNN algorithms can learn complex, nonlinear relationships between features by applying multiple layers. This paper proposes a hand feature based on normalized keypoint vectors using a DNN. The model was trained on 2250 hand samples divided into 3 classes to identify mouse movements. The network uses a multilayer design with neuron sizes (13, 12, 15, 14), is trained for 500 epochs, and achieves a best accuracy of 98.5% for normalized features. The key contribution of this research is the use of keypoint vectors from hand gestures as features fed to the DNN to achieve good accuracy.
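A minimal sketch of the kind of model the abstract describes: an MLP with hidden layers of sizes (13, 12, 15, 14) trained for 500 epochs on normalized keypoint vectors, written here with Keras. The keypoint count (21 landmarks), the wrist-relative normalization scheme, and all training details beyond those reported are assumptions for illustration, not the authors' implementation.

```python
# Sketch of a keypoint-vector DNN classifier (layer sizes and epoch count
# taken from the abstract; everything else is an illustrative assumption).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

NUM_KEYPOINTS = 21   # assumed: 21 hand landmarks with (x, y) coordinates
NUM_CLASSES = 3      # move, click, drag

def normalize_keypoints(points):
    """Translate keypoints to the wrist and scale to unit range (assumed scheme)."""
    points = np.asarray(points, dtype=np.float32).reshape(NUM_KEYPOINTS, 2)
    points -= points[0]                       # wrist-relative coordinates
    scale = float(np.max(np.abs(points)))
    scale = scale if scale > 0 else 1.0       # avoid division by zero
    return (points / scale).flatten()         # 42-dimensional feature vector

def build_model(input_dim=NUM_KEYPOINTS * 2):
    """Multilayer perceptron with the neuron sizes reported in the abstract."""
    model = keras.Sequential([
        keras.Input(shape=(input_dim,)),
        layers.Dense(13, activation="relu"),
        layers.Dense(12, activation="relu"),
        layers.Dense(15, activation="relu"),
        layers.Dense(14, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Example usage (hypothetical data):
# model = build_model()
# model.fit(X_train, y_train, epochs=500, validation_split=0.2)
```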

    An Exploration into Human–Computer Interaction: Hand Gesture Recognition Management in a Challenging Environment

    Scientists are developing hand gesture recognition systems to make human–computer interaction more authentic, efficient, and effortless without additional gadgets, particularly for the speech-impaired community, which relies on hand gestures as its only mode of communication. Unfortunately, the speech-impaired community has been underrepresented in most human–computer interaction research, such as natural language processing and other automation fields, which makes it more difficult for them to interact with systems and people through these advanced technologies. The system's algorithm operates in two phases. The first phase is region-of-interest segmentation based on a color space segmentation technique: a pre-set color range separates the hand pixels (the region of interest) from the background (pixels outside the desired area of interest). The second phase feeds the segmented images into a Convolutional Neural Network (CNN) model for image classification. The Python Keras package was used for training. The results demonstrate the need for image segmentation in hand gesture recognition: the best model achieves 58 percent accuracy, about 10 percent higher than the accuracy obtained without image segmentation.
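A minimal sketch of the two-phase pipeline described above, using OpenCV for the color-range segmentation and Keras for the CNN classifier. The HSV skin range, input resolution, and network architecture are illustrative assumptions, since the abstract does not specify them.

```python
# Sketch of the two-phase approach: (1) colour-range segmentation of the hand,
# (2) a small Keras CNN for image classification. Specific values are assumed.
import cv2
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

IMG_SIZE = 64                                           # assumed input resolution
LOWER_SKIN = np.array([0, 40, 60], dtype=np.uint8)      # assumed pre-set colour range (HSV)
UPPER_SKIN = np.array([25, 255, 255], dtype=np.uint8)

def segment_hand(bgr_image):
    """Phase 1: keep pixels inside the pre-set colour range, zero the background."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_SKIN, UPPER_SKIN)
    segmented = cv2.bitwise_and(bgr_image, bgr_image, mask=mask)
    return cv2.resize(segmented, (IMG_SIZE, IMG_SIZE))

def build_cnn(num_classes):
    """Phase 2: a small CNN image classifier (architecture not given in the abstract)."""
    model = keras.Sequential([
        keras.Input(shape=(IMG_SIZE, IMG_SIZE, 3)),
        layers.Rescaling(1.0 / 255),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```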