
    Intelligent Approaches to interact with Machines using Hand Gesture Recognition in Natural way: A Survey

    Hand gesture recognition (HGR) is one of the main research areas for engineers, scientists, and bioinformaticians. HGR is a natural mode of human-machine interaction, and today many researchers in academia and industry are working on applications that make interaction easier, more natural, and more convenient without requiring any extra wearable device. HGR can be applied to anything from game control to vision-enabled robot control, and from virtual reality to smart home systems. In this paper we discuss work in the area of hand gesture recognition, focusing on intelligent approaches that include soft-computing methods such as artificial neural networks, fuzzy logic, and genetic algorithms. Methods for image preprocessing, segmentation, and hand image construction are also studied. Most researchers use fingertips for hand detection in appearance-based modeling. Finally, a comparison of the results reported by different researchers is presented.
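    As the survey notes, fingertip cues are a common basis for appearance-based hand detection. Below is a minimal, illustrative OpenCV sketch of that classic pipeline (skin segmentation followed by convex-hull and convexity-defect analysis); the colour thresholds and the defect-depth cutoff are assumptions for illustration, not taken from any surveyed method.

    ```python
    # Minimal sketch of a classic appearance-based pipeline: skin-colour
    # segmentation followed by convexity-defect analysis to locate fingertips.
    # Threshold values are illustrative, not from any specific surveyed method.
    import cv2
    import numpy as np

    def find_fingertips(frame_bgr):
        # Segment skin pixels in HSV space (illustrative threshold values).
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, (0, 30, 60), (20, 150, 255))
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

        # Keep the largest contour, assumed to be the hand.
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return []
        hand = max(contours, key=cv2.contourArea)

        # Deep convexity defects approximate the valleys between fingers;
        # the defect start points approximate fingertip candidates.
        hull = cv2.convexHull(hand, returnPoints=False)
        tips = []
        if hull is not None and len(hull) > 3:
            defects = cv2.convexityDefects(hand, hull)
            if defects is not None:
                for s, e, f, depth in defects[:, 0]:
                    if depth > 10000:   # fixed-point depth; ~39 px here
                        tips.append(tuple(hand[s][0]))
        return tips
    ```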

    Real-Time Human-Computer Interaction Based on Face and Hand Gesture Recognition

    At present, hand gesture recognition systems offer a natural and usable approach to human-computer interaction. Automatic hand gesture recognition provides a new way of interacting with virtual environments. In this paper, a face and hand gesture recognition system that can control a computer media player is presented. Hand gestures and the human face are the key elements for interacting with the smart system. Face recognition is used for viewer verification, and hand gesture recognition drives the media player controls, for instance volume up/down and next track. In the proposed technique, the hand gesture and face locations are first extracted from the input image by a combination of skin detection and a cascade detector, and then passed to the recognition stage. There, a threshold condition is checked first, and the extracted face and gesture are then recognized. In the evaluation stage, the proposed technique is applied to a video dataset and achieves a high precision ratio. In addition, the proposed hand gesture recognition method is applied to a static American Sign Language (ASL) database and achieves an accuracy of approximately 99.40%. The method could also be used in gesture-based computer games and virtual reality.
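    A hedged sketch of the extraction stage described above, combining a Haar cascade (for the face) with a skin mask (for the hand). This is only one plausible reading of the pipeline; the colour space, threshold values, and the largest-blob heuristic are assumptions.

    ```python
    # Illustrative sketch of the extraction stage: a Haar cascade locates the
    # face for viewer verification while a skin mask proposes hand regions.
    # Colour thresholds and heuristics are assumptions, not the paper's values.
    import cv2

    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def extract_face_and_hand(frame_bgr):
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                              minNeighbors=5)

        # Skin detection in YCrCb space; mask out detected face regions so
        # the largest remaining skin blob can be treated as the hand.
        ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
        skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
        for (x, y, w, h) in faces:
            skin[y:y + h, x:x + w] = 0

        contours, _ = cv2.findContours(skin, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        hand = max(contours, key=cv2.contourArea) if contours else None
        return faces, hand
    ```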

    Kinect Sensor Based Gesture Recognition for Surveillance Application

    Hand gesture recognition has emerged as one of today's key research fields, providing a natural way of communication between humans and machines. Gestures are body motions that a person expresses when performing a task or giving a response. Human body tracking is a well-studied topic in the current era of human-computer interaction and can be performed by means of human skeleton structures. These skeleton structures can now be detected reliably thanks to the rapid progress of depth-measuring devices. Human body movements can be observed with these depth sensors, which provide sufficient accuracy for tracking the full body in real time at low cost. In reality, action and reaction activities are hardly ever periodic in multi-person situations, and recognizing such complex aperiodic gestures is highly challenging for surveillance systems.
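    A hypothetical sketch of how tracked skeleton joints from a Kinect-style depth sensor can be turned into simple joint-angle features for a downstream gesture classifier. The joint names and the choice of angles are assumptions for illustration.

    ```python
    # Hypothetical sketch: skeleton joints from a depth sensor -> joint-angle
    # features. Joint names and the selected angles are assumptions.
    import numpy as np

    def joint_angle(a, b, c):
        """Angle at joint b (degrees) formed by 3-D joints a-b-c."""
        v1 = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
        v2 = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
        cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
        return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

    def frame_features(skel):
        # skel: dict mapping joint name -> (x, y, z) in sensor coordinates.
        return [
            joint_angle(skel["shoulder_r"], skel["elbow_r"], skel["wrist_r"]),
            joint_angle(skel["elbow_r"], skel["shoulder_r"], skel["hip_r"]),
        ]
    ```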

    A Novel Human Computer Interaction Platform based College Mathematical Education Methodology

    This article analyzes a college mathematics education methodology based on a novel human-computer interaction (HCI) platform. Regarding the problems identified in studies of applying virtual reality technology to teaching, satisfactory results can be achieved only by organizing and focusing on professional technical personnel, continuously improving the researchers' professional knowledge during the development process, and staying close to the actual needs of teaching. To obtain better educational outcomes, we combine the Kinect with the classroom to form an HCI-based teaching environment. We first review the latest HCI techniques and the principles of college mathematics courses; we then introduce the basic components of the Kinect platform, including gesture segmentation, the system implementation, and the primary characteristics of the platform. As a further step, we implement the system by rewriting script code to build a personalized HCI-assisted education scenario. Verification and simulation demonstrate the feasibility of our method.

    A Gaze-Assisted Multimodal Approach to Rich and Accessible Human-Computer Interaction

    Recent advancements in eye-tracking technology are driving the adoption of gaze-assisted interaction as a rich and accessible human-computer interaction paradigm. Gaze-assisted interaction serves as a contextual, non-invasive, and explicit control method for users without disabilities; for users with motor or speech impairments, text entry by gaze serves as the primary means of communication. Despite its significant advantages, gaze-assisted interaction is still not widely adopted because of its inherent limitations: 1) the Midas touch problem, 2) low accuracy for mouse-like interactions, 3) the need for repeated calibration, 4) visual fatigue with prolonged usage, 5) low gaze-typing speed, and so on. This dissertation research proposes a gaze-assisted, multimodal interaction paradigm, together with related frameworks and their applications, that effectively enables gaze-assisted interactions while addressing many of the current limitations. In this regard, we present four systems that leverage gaze-assisted interaction: 1) a gaze- and foot-operated system for precise point-and-click interactions, 2) a dwell-free, foot-operated gaze typing system, 3) a gaze gesture-based authentication system, and 4) a gaze gesture-based interaction toolkit. In addition, we present the goals to be achieved, the technical approach, and the overall contributions of this dissertation research.
    Comment: 4 pages, 5 figures, ACM Richard Tapia Conference, Atlanta, 201
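    For context, the "Midas touch" limitation arises because the eyes both inspect and select: without an explicit trigger, every fixation risks becoming a click. The standard mitigation is dwell-time selection, sketched below in minimal form; the threshold value and the class interface are illustrative assumptions, not taken from the dissertation.

    ```python
    # Minimal sketch of dwell-based gaze selection, the mechanism behind the
    # "Midas touch" trade-off: a target activates only after the gaze rests
    # on it for a dwell threshold, reducing false clicks at the cost of speed.
    import time

    DWELL_SECONDS = 0.8   # assumed threshold; real systems tune this per user

    class DwellSelector:
        def __init__(self):
            self.current, self.since = None, None

        def update(self, target_under_gaze):
            """Feed the target id under the gaze point; returns id on activation."""
            now = time.monotonic()
            if target_under_gaze != self.current:
                # Gaze moved to a new target: restart the dwell timer.
                self.current, self.since = target_under_gaze, now
                return None
            if self.current is not None and now - self.since >= DWELL_SECONDS:
                # Push the timer far into the future so the same fixation
                # cannot re-trigger until the gaze leaves the target.
                self.since = now + 1e9
                return self.current
            return None
    ```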

    IPN Hand: A Video Dataset and Benchmark for Real-Time Continuous Hand Gesture Recognition

    In the research community of continuous hand gesture recognition (HGR), the currently available public datasets lack the real-world elements needed to build responsive and efficient HGR systems. In this paper, we introduce a new benchmark dataset named IPN Hand, with sufficient size, variation, and real-world elements to train and evaluate deep neural networks. The dataset contains more than 4,000 gesture samples and 800,000 RGB frames from 50 distinct subjects. We design 13 different static and dynamic gestures focused on interaction with touchless screens. We especially consider the scenario in which continuous gestures are performed without transition states, and in which subjects perform natural hand movements as non-gesture actions. Gestures were collected in about 30 diverse scenes, with real-world variation in background and illumination. With our dataset, the performance of three 3D-CNN models is evaluated on the tasks of isolated and continuous real-time HGR. Furthermore, we analyze the possibility of increasing recognition accuracy by adding modalities derived from the RGB frames, i.e., optical flow and semantic segmentation, while keeping the real-time performance of the 3D-CNN model. Our empirical study also provides a comparison with the publicly available nvGesture (NVIDIA) dataset. The experimental results show that the state-of-the-art ResNeXt-101 model loses about 30% accuracy when used on our real-world dataset, demonstrating that the IPN Hand dataset can serve as a benchmark and may help the community step forward in continuous HGR. Our dataset and the pre-trained models used in the evaluation are publicly available at https://github.com/GibranBenitez/IPN-hand.
    Comment: Under review
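    A minimal sketch of isolated-gesture classification with a 3-D CNN in PyTorch. torchvision's r3d_18 stands in for the ResNeXt-101 evaluated in the paper; the 13-class head follows the dataset description, while the clip shape and the rest are assumptions.

    ```python
    # Sketch of isolated-gesture classification with a 3-D CNN. torchvision's
    # r3d_18 is a stand-in for the paper's ResNeXt-101; 13 classes follow the
    # IPN Hand description, the clip shape is an assumption.
    import torch
    import torch.nn as nn
    from torchvision.models.video import r3d_18

    model = r3d_18(weights=None)
    model.fc = nn.Linear(model.fc.in_features, 13)  # 13 gesture classes
    model.eval()

    # One RGB clip: batch x channels x frames x height x width.
    clip = torch.randn(1, 3, 16, 112, 112)
    with torch.no_grad():
        logits = model(clip)
    print(logits.argmax(dim=1))  # predicted gesture class index
    ```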

    Fingertip Detection and Tracking for Recognition of Air-Writing in Videos

    Air-writing is the process of writing characters or words in free space using finger or hand movements, without the aid of any handheld device. In this work, we address the problem of mid-air finger writing using webcam video as input. In spite of recent advances in object detection and tracking, accurate and robust detection and tracking of the fingertip remain challenging, primarily because of the fingertip's small size. Moreover, the initialization and termination of mid-air finger writing are also challenging due to the absence of any standard delimiting criterion. To solve these problems, we propose a new writing-hand pose detection algorithm for the initialization of air-writing, which uses the Faster R-CNN framework for accurate hand detection, followed by hand segmentation and, finally, counting the number of raised fingers based on geometrical properties of the hand. Further, we propose a robust fingertip detection and tracking approach using a new signature function called distance-weighted curvature entropy. Finally, a fingertip velocity-based termination criterion is used as a delimiter to mark the completion of the air-writing gesture. Experiments show the superiority of the proposed fingertip detection and tracking algorithm over state-of-the-art approaches, giving a mean precision of 73.1% while achieving real-time performance at 18.5 fps, a condition of vital importance to air-writing. Character recognition experiments give a mean accuracy of 96.11% using the proposed air-writing system, a result comparable to that of existing handwritten character recognition systems.
    Comment: 32 pages, 10 figures, 2 tables. Submitted to Journal of Expert Systems with Applications
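    A hedged sketch of the velocity-based termination idea mentioned above: writing is treated as finished once the tracked fingertip stays below a speed threshold for several consecutive frames. The threshold values are illustrative assumptions, not the paper's settings.

    ```python
    # Hedged sketch of a velocity-based delimiter for air-writing: the gesture
    # ends when the fingertip stays slow for several consecutive frames.
    # Both threshold values below are illustrative assumptions.
    import numpy as np

    SPEED_THRESH = 2.0    # pixels per frame
    STILL_FRAMES = 10     # consecutive slow frames that end the gesture

    def segment_stroke(fingertip_track):
        """fingertip_track: list of (x, y) positions, one per video frame."""
        pts = np.asarray(fingertip_track, dtype=float)
        speeds = np.linalg.norm(np.diff(pts, axis=0), axis=1)
        slow = 0
        for i, s in enumerate(speeds):
            slow = slow + 1 if s < SPEED_THRESH else 0
            if slow >= STILL_FRAMES:
                return pts[: i + 1]   # writing segment ends here
        return pts                    # no termination detected
    ```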

    BeCAPTCHA: Behavioral Bot Detection using Touchscreen and Mobile Sensors benchmarked on HuMIdb

    In this paper we study the suitability of a new generation of CAPTCHA methods based on smartphone interaction. The heterogeneous flow of data generated while interacting with a smartphone can be used to model human behavior and improve bot-detection algorithms. To this end, we propose BeCAPTCHA, a CAPTCHA method based on the analysis of the touchscreen information obtained during a single drag-and-drop task, in combination with accelerometer data. The goal of BeCAPTCHA is to determine whether the drag-and-drop task was performed by a human or a bot. We evaluate the method against fake samples synthesized with Generative Adversarial Networks and with handcrafted methods. Our results suggest the potential of mobile sensors to characterize human behavior and to enable a new generation of CAPTCHAs. The experiments are carried out on HuMIdb (Human Mobile Interaction database), a novel multimodal mobile database comprising 14 mobile sensors acquired from 600 users. HuMIdb is freely available to the research community.
    Comment: arXiv admin note: substantial text overlap with arXiv:2002.0091
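    An illustrative sketch of the kind of trajectory features a behavioral CAPTCHA could extract from a single drag-and-drop task and feed to a classifier. The feature set and the random-forest choice are assumptions, not the published BeCAPTCHA pipeline.

    ```python
    # Illustrative features from one drag-and-drop trajectory for human/bot
    # classification. Feature set and classifier are assumptions, not the
    # published BeCAPTCHA pipeline.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def drag_features(xy, t):
        """xy: (n, 2) touch positions; t: (n,) timestamps in seconds."""
        v = np.linalg.norm(np.diff(xy, axis=0), axis=1) / np.maximum(
            np.diff(t), 1e-6)
        return np.array([
            v.mean(), v.std(),                 # speed statistics
            np.abs(np.diff(v)).mean(),         # jerkiness: humans are uneven
            len(xy) / (t[-1] - t[0] + 1e-6),   # sampling density
        ])

    # X: stacked feature vectors for human and synthetic (bot) trajectories,
    # y: 1 = human, 0 = bot; both assumed to be prepared elsewhere.
    # clf = RandomForestClassifier(n_estimators=200).fit(X, y)
    ```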

    Recent Advances and Challenges in Ubiquitous Sensing

    Ubiquitous sensing is tightly coupled with activity recognition. This survey reviews recent advances in ubiquitous sensing and looks ahead at promising future directions. In particular, ubiquitous sensing is crossing new barriers, giving us new ways to interact with the environment or to inspect our psyche. Through sensing paradigms that parasitically utilise stimuli from environmental noise and from third-party pre-installed systems, sensing leaves the boundaries of the personal domain. Compared with previous environmental sensing approaches, these new systems mitigate high installation and placement costs by being robust to process noise. At the same time, sensing is also turning inward, attempting to capture mental states such as cognitive load, fatigue, or emotion through advances in, for instance, eye-gaze sensing systems or the interpretation of body gesture and pose. This survey summarises these developments and discusses open research questions and promising future directions.
    Comment: Submitted to PIEE

    Doppler-Radar Based Hand Gesture Recognition System Using Convolutional Neural Networks

    Hand gesture recognition has long been a hot topic in human-computer interaction, but traditional camera-based hand gesture recognition systems cannot work properly in dark environments. In this paper, a Doppler-radar-based hand gesture recognition system using convolutional neural networks is proposed. A cost-effective Doppler radar sensor with dual receiving channels at 5.8 GHz is used to acquire a large database of four standard gestures. The received hand gesture signals are processed with time-frequency analysis, and convolutional neural networks are used to classify the different gestures. Experimental results verify the effectiveness of the system, with an accuracy of 98%. In addition, related factors such as recognition distance and gesture scale are investigated.
    Comment: Best Paper Award of the International Conference on Communications, Signal Processing, and Systems 201
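    A minimal sketch of the time-frequency step: one Doppler channel is converted into a spectrogram image that a CNN can then classify. The sampling rate, window length, and stand-in signal are illustrative assumptions, not the paper's settings.

    ```python
    # Sketch: turn one complex Doppler channel into a spectrogram (freq x time,
    # in dB) suitable as 2-D CNN input. All parameter values are assumptions.
    import numpy as np
    from scipy.signal import stft

    fs = 2000                                 # assumed sampling rate, Hz
    iq = np.random.randn(4096) + 1j * np.random.randn(4096)  # stand-in I/Q data

    f, t, Z = stft(iq, fs=fs, nperseg=128, noverlap=96, return_onesided=False)
    spectrogram = 20 * np.log10(np.abs(np.fft.fftshift(Z, axes=0)) + 1e-12)
    # `spectrogram` is the 2-D image a CNN would classify into gesture labels.
    ```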