
    Vision-based hand gesture interaction using particle filter, principal component analysis and transition network

    Vision-based human-computer interaction is becoming important nowadays. It offers natural interaction with computers and frees users from mechanical input devices, which is especially favourable for wearable computers. This paper presents a human-computer interaction system based on a conventional webcam and hand gesture recognition. The system works in real time and enables users to control a computer cursor with hand motions and gestures instead of a mouse. Five hand gestures are designed to represent five mouse operations: move, left click, left double-click, right click and no action. An algorithm based on a particle filter is used for tracking the hand position, PCA-based feature selection is used for recognizing the hand gestures, and a transition network is employed to improve the accuracy and reliability of the interaction system. The system shows good performance in the recognition and interaction tests.
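
    Of the components above, the transition network is the easiest to make concrete. Below is a minimal Python sketch of how such a network might stabilize per-frame classifications before they drive the cursor: a new gesture only takes effect after it has been observed for several consecutive frames. The gesture labels and the stability threshold are illustrative assumptions, not the paper's exact design.

```python
# Hedged sketch: a tiny transition network that debounces per-frame gesture
# classifications before mapping them to mouse actions. Labels and the
# five-frame threshold are assumptions for illustration.

GESTURES = ["move", "left_click", "left_double_click", "right_click", "no_action"]

class TransitionNetwork:
    def __init__(self, stable_frames=5):
        self.stable_frames = stable_frames  # frames required before a state change
        self.state = "no_action"            # currently confirmed gesture
        self.candidate = None               # gesture waiting to be confirmed
        self.count = 0

    def update(self, gesture):
        """Feed one per-frame classification; return the confirmed state."""
        if gesture == self.state:
            self.candidate, self.count = None, 0      # nothing to change
        elif gesture == self.candidate:
            self.count += 1
            if self.count >= self.stable_frames:      # candidate is stable: switch
                self.state, self.candidate, self.count = gesture, None, 0
        else:
            self.candidate, self.count = gesture, 1   # start tracking a new candidate
        return self.state

# Usage: net = TransitionNetwork(); action = net.update(per_frame_prediction)
```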

    Connected Component Algorithm for Gesture Recognition

    This paper presents a head and hand gesture recognition system for Human Computer Interaction (HCI). Head and hand gestures are an important modality for human-computer interaction, and a vision-based recognition system can give computers the capability of understanding and responding to them. The aim of this paper is to propose a real-time vision system for application within a multimedia interaction environment. The recognition system consists of four modules: image capture, image extraction, pattern matching and command determination. When hand and head gestures are shown in front of the camera, the hardware performs the respective action: gestures are matched against a stored database of gestures using pattern matching, and, corresponding to the matched gesture, the hardware is moved in the left, right, forward or backward direction. An algorithm for optimizing connected components in gesture recognition is proposed, which makes use of segmentation in two images. The connected component algorithm scans an image and groups its pixels into components based on pixel connectivity, i.e. all pixels in a connected component share similar pixel intensity values and are in some way connected with each other. Once all groups have been determined, each pixel is labeled with a color according to the component it was assigned to.
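
    The connected component step described above can be sketched directly. The following is a minimal Python/NumPy implementation of 4-connected component labeling of a binary image via breadth-first flood fill; it illustrates the scan-and-group idea, though a production system would likely use an optimized routine such as scipy.ndimage.label.

```python
# Hedged sketch: label 4-connected foreground blobs in a binary image.
from collections import deque
import numpy as np

def label_components(binary):
    """Return an int array assigning a label to each 4-connected foreground blob."""
    labels = np.zeros(binary.shape, dtype=int)
    next_label = 0
    for y, x in zip(*np.nonzero(binary)):        # scan foreground pixels
        if labels[y, x]:
            continue                             # already part of a component
        next_label += 1
        labels[y, x] = next_label
        queue = deque([(y, x)])
        while queue:                             # breadth-first flood fill
            cy, cx = queue.popleft()
            for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                if (0 <= ny < binary.shape[0] and 0 <= nx < binary.shape[1]
                        and binary[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = next_label
                    queue.append((ny, nx))
    return labels                                # color each pixel by its label
```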

    A method for image-based shadow interaction with virtual objects

    Many researchers have been investigating interactive portable projection systems such as mini-projectors. In exhibition halls and museums, there is also a trend toward using interactive projection systems to make viewing more exciting and impressive, and they can likewise be applied in the field of art, for example in creating shadow plays. The key idea of interactive portable projection systems is to recognize the user's gestures in real time. In this paper, a vision-based shadow gesture recognition method is proposed for interactive projection systems. The method is based on the screen image obtained by a single web camera. It separates only the shadow area by combining the binary image with an input image, using a learning algorithm that isolates the background from the input image. The region of interest is recognized by labeling the separated shadow regions, and hand shadows are then isolated using the convexity defects, convex hull, and moments of each region. To distinguish hand gestures, Hu's invariant moment method is used, and an optical flow algorithm is used for tracking the fingertip. Several interactive applications developed using this method are presented in this paper.
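
    The Hu-moment matching step lends itself to a short sketch. Assuming OpenCV is available, the following Python fragment computes Hu's seven invariant moments for a segmented binary shadow region (log-scaled, as is common, for numerical stability) and matches them against stored gesture templates by nearest distance. The template dictionary and the log-scaling choice are assumptions for illustration, not the paper's exact procedure.

```python
# Hedged sketch: Hu-moment signatures for shadow gesture matching.
import cv2
import numpy as np

def hu_signature(binary_region):
    """binary_region: single-channel uint8 mask. Returns log-scaled Hu moments."""
    hu = cv2.HuMoments(cv2.moments(binary_region, binaryImage=True)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)  # compress dynamic range

def match_gesture(region, templates):
    """templates: dict mapping gesture name -> precomputed 7-element signature."""
    sig = hu_signature(region)
    return min(templates, key=lambda name: np.linalg.norm(sig - templates[name]))
```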

    Feature extraction: hand shape, hand position and hand trajectory path

    Vision-based hand posture detection and tracking is an important issue for human-computer interaction applications. The performance of a recognition system first depends on the process of obtaining efficient features to represent pattern characteristics [1]. There is no algorithm which shows how to select the representation or choose the features [2], so the selection of features depends on the application. There are many different methods to represent 2-D images, such as boundary, topological, shape grammar, and description of similarity [2-4]. Features should be chosen so that they are insensitive to noise-like variation in the pattern, and their number should be kept small for easy computation [5]. Hand posture shape features, the motion trajectory feature and hand position with respect to other human upper-body parts play an important role within the preparation stage of the gesture before recognition. In this chapter, features have been extracted from hand posture closed contours and hand posture trajectory, and hand position has been identified. Algorithms have been developed for extracting these features after segmenting the head and the two hands. These extracted features can be fed to a recognizer, such as a Support Vector Machine or Hidden Markov Model, for hand gesture recognition.
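
    As a concrete illustration of contour-derived shape features of the kind this chapter uses, the sketch below computes a few classical descriptors (area via the shoelace formula, perimeter, compactness, centroid) from a closed hand-posture boundary. The exact feature set here is an assumption for illustration, not the chapter's own.

```python
# Hedged sketch: simple shape features from a closed contour.
import numpy as np

def contour_features(points):
    """points: (N, 2) array of (x, y) vertices of a closed hand contour."""
    x, y = points[:, 0].astype(float), points[:, 1].astype(float)
    xn, yn = np.roll(x, -1), np.roll(y, -1)              # next vertex, wrapping
    area = 0.5 * abs(np.sum(x * yn - xn * y))            # shoelace formula
    perimeter = np.sum(np.hypot(xn - x, yn - y))
    compactness = perimeter ** 2 / (4 * np.pi * area)    # 1.0 for a circle
    centroid = points.mean(axis=0)
    return np.array([area, perimeter, compactness, *centroid])
```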

    A Framework for Vision-based Static Hand Gesture Recognition

    In today's technical world, intelligent computing for efficient human-computer interaction (HCI) or human alternative and augmentative communication (HAAC) is essential in our lives. Hand gesture recognition is one of the most important techniques that can be used to build a gesture-based interface system for HCI or HAAC applications. Suitable development of gesture recognition methods is therefore necessary to design advanced hand gesture recognition systems for applications such as robotics, assistive systems, sign language communication and virtual reality. However, variation in the illumination, rotation, position and size of gesture images, efficient feature representation, and classification are the main challenges in developing a real-time gesture recognition system. The aim of this work is to develop a framework for vision-based static hand gesture recognition which overcomes these challenges.

    In general, the framework developed in this thesis consists of preprocessing, feature extraction, feature selection, and classification stages. The preprocessing stage involves the following sub-stages: image enhancement, which compensates for illumination variation; segmentation, which separates the hand region from the background and transforms it into a binary silhouette; image rotation, which makes the segmented gesture rotation invariant; and filtering, which removes background and object noise from the binary image and provides a well-defined segmented hand gesture. This work proposes an image rotation technique that makes the gesture rotation invariant by coinciding the first principal component of the segmented hand gesture with the vertical axis.

    In the feature extraction stage, this work extracts localized contour sequence (LCS) and block-based features, and proposes a combined feature set obtained by appending LCS features to block-based features to represent static hand gesture images. A discrete wavelet transform (DWT) and Fisher ratio (F-ratio) based feature set is also proposed for better representation of static hand gesture images: DWT is applied to the resized and enhanced grayscale image, and the important DWT coefficient matrices are selected as features using the proposed F-ratio based coefficient-matrix selection technique.

    In sequel, a modified radial basis function neural network (RBF-NN) classifier based on the k-means and least mean square (LMS) algorithms is proposed, in which the centers are selected automatically using the k-means algorithm and the estimated weight matrix is updated using the LMS algorithm for better recognition of hand gesture images. A sigmoidal activation function based RBF-NN classifier, whose activation function is formed from a set of composite sigmoidal functions, is also proposed for further improvement of recognition performance. Finally, the extracted features are applied as input to the classifier to recognize the class of static hand gesture images, and a feature vector optimization technique based on a genetic algorithm (GA) is proposed to remove redundant and irrelevant features.
    The proposed algorithms are tested on three static hand gesture databases, which include grayscale images with a uniform background (Database I and Database II) and color images with a non-uniform background (Database III). Database I is a repository database consisting of hand gesture images of 25 Danish/International Sign Language (D/ISL) hand alphabets. Databases II and III are indigenously developed using a VGA Logitech webcam (C120) with 24 American Sign Language (ASL) hand alphabets.
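
    The rotation-normalization idea, aligning the first principal component of the silhouette with the vertical axis, can be sketched as follows. This is a minimal NumPy version operating on the foreground pixel coordinates of a binary silhouette; the details (coordinate convention, how the rotation is then applied) are assumptions for illustration.

```python
# Hedged sketch: find the angle that aligns a silhouette's first principal
# component with the vertical axis.
import numpy as np

def rotation_angle(binary):
    """binary: 2-D boolean/uint8 silhouette. Returns the alignment angle in radians."""
    ys, xs = np.nonzero(binary)
    coords = np.column_stack([xs, ys]).astype(float)
    coords -= coords.mean(axis=0)               # center the silhouette
    cov = np.cov(coords, rowvar=False)
    _, eigvecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
    principal = eigvecs[:, -1]                  # first principal component
    # angle between the principal axis and the vertical (y) axis
    return np.arctan2(principal[0], principal[1])

# The silhouette can then be rotated by this angle (e.g. scipy.ndimage.rotate
# with np.degrees(angle); the sign depends on the image coordinate convention).
```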

    A Wearable Textile 3D Gesture Recognition Sensor Based on Screen-Printing Technology

    Research has developed various solutions in order for computers to recognize hand gestures in the context of human-machine interfaces (HMI). The design of a successful hand gesture recognition system must address functionality and usability. The gesture recognition market has evolved from touchpads to touchless sensors, which do not need direct contact. Their application in textiles ranges from medical environments to smart home applications and the automotive industry. In this paper, a textile capacitive touchless sensor has been developed using screen-printing technology. Two different designs were developed to determine the best configuration, with good results obtained in both cases. Finally, as a real application, a complete solution of the sensor with wireless communications is presented, to be used as an interface for a mobile phone.

    On recognition of gestures arising in flight deck officer (FDO) training

    This thesis presents an on-line recognition machine, RM, for the continuous and isolated recognition of dynamic and static gestures that arise in Flight Deck Officer (FDO) training. The thesis considers 18 distinct and commonly used dynamic and static FDO gestures, acquired with tracker and computer vision based systems. The recognition machine is based on a generic pattern recognition framework. Gestures are represented as templates using summary statistics, and the proposed recognition algorithm exploits the temporal and spatial characteristics of the gestures via dynamic programming and a Markovian process. The algorithm predicts the corresponding index of incremental input data in the templates in an on-line mode; accumulated consistency in the sequence of predictions provides a similarity measurement (Score) between the input data and the templates, and once the Score is estimated, heuristics are employed to control the declaration in the final stages.

    The recognition machine addresses general gesture recognition issues: recognizing real-time and dynamic gestures with no defined start/end points and with inter- and intra-personal temporal and spatial variance. The first two issues and temporal variance are addressed by the proposed algorithm; spatial invariance is addressed by introducing independent units to construct the gesture models. An important aspect of the algorithm is that it provides an intuitive mechanism for automatic detection of the start/end frames of continuous gestures, with the additional advantage of providing timely feedback for training purposes.

    The performance of RM is evaluated on six datasets covering isolated and continuous gestures: artificial (W_TTest), hand motion (Yang, Perrotta), Gesture Panel and FDO (tracker, vision). Hidden Markov Models (HMM) and Dynamic Time Warping (DTW) are used as baselines, and various data analysis techniques are deployed to reveal the complexity and inter-similarity of the datasets before the experiments are conducted. In the isolated recognition experiments, the recognition machine obtains results comparable with HMM and outperforms DTW; in the continuous experiments, RM surpasses HMM in terms of sentence and word recognition. In addition to these experiments, a multilayer perceptron neural network (MLPNN) is introduced for the prediction process of RM to validate its modularity. The overall conclusion of the thesis is that RM achieves results in agreement with HMM and DTW while providing more reliable and accurate recognition in the case of missing and noisy data; it addresses common limitations of these algorithms and of general temporal pattern recognition in the context of FDO training, and is thus suited for on-line recognition.
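
    Since Dynamic Time Warping is one of the baselines RM is compared against, a minimal DTW sketch may be useful context: it aligns two gesture sequences of different lengths by dynamic programming and returns an accumulated distance that serves as a similarity score, the role RM's Score plays in the proposed algorithm.

```python
# Hedged sketch: classic O(n*m) Dynamic Time Warping between two sequences
# of per-frame feature vectors.
import numpy as np

def dtw_distance(a, b):
    """a, b: (T, d) arrays of per-frame gesture features; returns DTW cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])   # local frame distance
            D[i, j] = cost + min(D[i - 1, j],            # insertion
                                 D[i, j - 1],            # deletion
                                 D[i - 1, j - 1])        # match
    return D[n, m]
```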

    Deep Learning Methods for Hand Gesture Recognition via High-Density Surface Electromyogram (HD-sEMG) Signals

    Hand Gesture Recognition (HGR) using surface Electromyogram (sEMG) signals can be considered one of the most important technologies for building efficient Human Machine Interface (HMI) systems. In particular, sEMG-based hand gesture recognition has been a topic of growing interest for the development of assistive systems that improve the quality of life of individuals with amputated limbs. Generally speaking, myoelectric prosthetic devices work by classifying existing patterns of the collected sEMG signals and synthesizing the intended gestures. While conventional myoelectric control systems, e.g., on/off or direct-proportional control, have potential advantages, challenges such as the limited Degrees of Freedom (DoF) due to crosstalk have resulted in the emergence of data-driven solutions. More specifically, to improve the efficiency, intuitiveness, and control performance of hand prosthetic systems, several Artificial Intelligence (AI) algorithms, ranging from conventional Machine Learning (ML) models to highly complicated Deep Neural Network (DNN) architectures, have been designed for sEMG-based hand gesture recognition in myoelectric prosthetic devices.

    In this thesis, we first perform a literature review on hand gesture recognition methods and elaborate on the recently proposed Deep Learning/Machine Learning (DL/ML) models in the literature. Then, our High-Density sEMG (HD-sEMG) dataset is introduced, and the rationale behind our focus on this particular type of sEMG data is explained. We then develop a Vision Transformer (ViT)-based model for gesture recognition with HD-sEMG signals and evaluate its performance under different conditions such as variable window sizes, number of electrode channels, and model complexity, comparing it with two conventional ML algorithms and one DL algorithm typically adopted in this domain. Furthermore, we introduce another capability of the proposed framework for instantaneous training: its ability to classify hand gestures based on a single frame of HD-sEMG data. Following that, we introduce the idea of integrating the macroscopic and microscopic neural drive information obtained from HD-sEMG data into a hybrid ViT-based framework for gesture recognition, which outperforms a standalone ViT architecture in terms of classification accuracy. Here, microscopic neural drive information (also called Motor Unit Spike Trains) refers to the neural commands sent by the brain and spinal cord to individual muscle fibers; it is extracted from HD-sEMG signals using Blind Source Separation (BSS) algorithms.

    Finally, we design an alternative and novel hand gesture recognition model based on the less-explored topic of Spiking Neural Networks (SNNs), which performs spatio-temporal gesture recognition in an event-based fashion. As opposed to classical DNN architectures, SNNs have the capacity to imitate the human brain's cognitive function by using biologically inspired models of neurons and synapses, making them more biologically explainable and computationally efficient.
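
    The experiments above vary the window size over the HD-sEMG recordings, which presupposes a sliding-window framing step. A minimal sketch of that step is given below; the sampling rate, electrode count, and window/step sizes are illustrative assumptions, not the thesis's settings.

```python
# Hedged sketch: slice a multichannel HD-sEMG recording into overlapping
# windows, each of which becomes one input to a classifier (ViT, SNN, ...).
import numpy as np

def sliding_windows(emg, window=64, step=32):
    """emg: (T, channels) array -> (num_windows, window, channels) array."""
    starts = range(0, emg.shape[0] - window + 1, step)
    return np.stack([emg[s:s + window] for s in starts])

# Example: a simulated 10 s recording at 2048 Hz over a 64-electrode grid.
emg = np.random.randn(20480, 64)
frames = sliding_windows(emg)   # each frame is one classification input
```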

    Multiscale Convolutional Neural Networks for Hand Detection

    Unconstrained hand detection in still images plays an important role in many hand-related vision problems, for example hand tracking, gesture analysis, human action recognition, human-machine interaction, and sign language recognition. Although hand detection has been extensively studied for decades, it is still a challenging task with many problems to be tackled. The contributing factors for this complexity include heavy occlusion, low resolution, varying illumination conditions, different hand gestures, and the complex interactions between hands and objects or other hands. In this paper, we propose a multiscale deep learning model for unconstrained hand detection in still images. Deep learning models, and deep convolutional neural networks (CNNs) in particular, have achieved state-of-the-art performance in many vision benchmarks. Developed from the region-based CNN (R-CNN) model, our hand detection scheme is based on candidate regions generated by a generic region proposal algorithm, followed by multiscale information fusion from the popular VGG16 model. Two benchmark datasets were used to validate the proposed method, namely the Oxford Hand Detection Dataset and the VIVA Hand Detection Challenge. We achieved state-of-the-art results on the Oxford Hand Detection Dataset and satisfactory performance in the VIVA Hand Detection Challenge.
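
    On both benchmarks, a predicted hand box is conventionally judged correct when its intersection-over-union (IoU) with a ground-truth box exceeds a threshold (commonly 0.5). A minimal IoU sketch, with boxes given as (x1, y1, x2, y2) corners, is shown below for reference.

```python
# Hedged sketch: intersection-over-union of two axis-aligned boxes.
def iou(box_a, box_b):
    """Boxes are (x1, y1, x2, y2) with x1 < x2 and y1 < y2."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0
```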