
    Real-Time Radar-Based Gesture Detection and Recognition Built in an Edge-Computing Platform

    In this paper, a real-time signal processing framework based on a 60 GHz frequency-modulated continuous wave (FMCW) radar system is proposed to recognize gestures. In order to improve the robustness of the radar-based gesture recognition system, the proposed framework extracts a comprehensive hand profile, including range, Doppler, azimuth and elevation, over multiple measurement cycles and encodes it into a feature cube. Rather than feeding a range-Doppler spectrum sequence into a deep convolutional neural network (CNN) connected with recurrent neural networks, the proposed framework takes the aforementioned feature cube as the input of a shallow CNN for gesture recognition, reducing the computational complexity. In addition, we develop a hand activity detection (HAD) algorithm to automate the detection of gestures in the real-time case. The proposed HAD can capture the time stamp at which a gesture finishes and feed the hand profile of all the relevant measurement cycles before this time stamp into the CNN with low latency. Since the proposed framework is able to detect and classify gestures at limited computational cost, it can be deployed for real-time applications on an edge-computing platform, whose performance is markedly inferior to that of a state-of-the-art personal computer. The experimental results show that the proposed framework is capable of classifying 12 gestures in real time with a high F1-score.

    Comment: Accepted for publication in IEEE Sensors Journal. A video is available at https://youtu.be/IR5NnZvZBL
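
    A minimal sketch of the central idea, feeding a pre-assembled multi-channel feature cube into a shallow CNN classifier, is given below in PyTorch. All tensor shapes, layer sizes and the channel layout (the four profiles stacked over measurement cycles and bins) are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class ShallowGestureCNN(nn.Module):
    """Shallow CNN over a radar feature cube (assumed layout, for illustration)."""
    def __init__(self, in_channels=4, num_classes=12):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Linear(32 * 4 * 4, num_classes)

    def forward(self, x):
        # x: (batch, 4 profiles, measurement cycles, bins) -- assumed ordering
        h = self.features(x)
        return self.classifier(h.flatten(1))

# Example: a cube covering 32 measurement cycles with 64 bins per profile.
cube = torch.randn(1, 4, 32, 64)
logits = ShallowGestureCNN()(cube)   # -> (1, 12) gesture scores
```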

    Real-time Hand Gesture Detection and Classification Using Convolutional Neural Networks

    Real-time recognition of dynamic hand gestures from video streams is a challenging task since (i) there is no indication of when a gesture starts and ends in the video, (ii) performed gestures should only be recognized once, and (iii) the entire architecture should be designed with the memory and power budget in mind. In this work, we address these challenges by proposing a hierarchical structure that enables offline-working convolutional neural network (CNN) architectures to operate online efficiently using a sliding-window approach. The proposed architecture consists of two models: (1) a detector, a lightweight CNN architecture that detects gestures, and (2) a classifier, a deep CNN that classifies the detected gestures. In order to evaluate the single-time activations of the detected gestures, we propose using the Levenshtein distance as an evaluation metric, since it can measure misclassifications, multiple detections, and missing detections at the same time. We evaluate our architecture on two publicly available datasets, the EgoGesture and NVIDIA Dynamic Hand Gesture datasets, which require temporal detection and classification of the performed hand gestures. The ResNeXt-101 model, used as the classifier, achieves state-of-the-art offline classification accuracies of 94.04% and 83.82% for the depth modality on the EgoGesture and NVIDIA benchmarks, respectively. In real-time detection and classification, we obtain considerably early detections while achieving performance close to that of offline operation. The code and pretrained models used in this work are publicly available.

    Comment: Published at the IEEE International Conference on Automatic Face and Gesture Recognition (FG 2019) - Best student paper award!
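
    The Levenshtein-distance evaluation can be illustrated with a short, self-contained sketch: the sequence of gesture labels emitted by the online system is aligned with the ground-truth sequence, so that label mismatches count as misclassifications, extra predicted labels as multiple detections, and missing predictions as missed detections. The gesture names in the example are invented.

```python
def levenshtein(pred, truth):
    """Edit distance between the predicted and ground-truth gesture sequences."""
    m, n = len(pred), len(truth)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if pred[i - 1] == truth[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,         # drop an extra prediction (multiple detection)
                dp[i][j - 1] + 1,         # ground-truth gesture never predicted (missed detection)
                dp[i - 1][j - 1] + cost,  # label mismatch (misclassification)
            )
    return dp[m][n]

print(levenshtein(["swipe", "swipe", "zoom"], ["swipe", "zoom"]))  # -> 1
```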

    A biological and real-time framework for hand gestures and head poses

    Human-robot interaction is an interdisciplinary research area that aims at the development of social robots. Since social robots are expected to interact with humans and understand their behavior through gestures and body movements, cognitive psychology and robot technology must be integrated. In this paper, we present a biological and real-time framework for detecting and tracking hands and heads. This framework is based on keypoints extracted by means of cortical V1 end-stopped cells. Detected keypoints and the cells' responses are used to classify the junction type. Through the combination of annotated keypoints in a hierarchical, multi-scale tree structure, moving and deformable hands can be segregated and tracked over time. By using hand templates with lines and edges at only a few scales, a hand's gestures can be recognized. Head tracking and pose detection are also implemented, which can be integrated with detection of facial expressions in the future. Through combinations of head poses and hand gestures, a large number of commands can be given to a robot.

    Master Hand Technology For The HMI Using Hand Gesture And Colour Detection

    Master Hand Technology uses different hand gestures and colors to give various commands for human-machine (here, computer) interfacing. Gesture recognition deals with interpreting human gestures via mathematical algorithms. Gestures made by users, with the help of a color band and/or body pose, in two or three dimensions, are translated by software/image processing into predefined commands; the computer then acts according to the command. A lot of work has already been done in this field, either by extracting the hand gesture only or by extracting the hand with the help of color segmentation. In this project, both hand gesture extraction and color detection are used for better, faster, more robust, more accurate and real-time applications. Red, green and blue are most efficiently detected if the RGB color space is used; using the HSV color space, the approach can be extended to any number of colors. For hand gesture detection, the default background is captured and stored for further processing. By comparing a newly captured image with the background image and performing the necessary extraction and filtering, the hand portion can be extracted. Different mathematical algorithms are then applied to detect different hand gestures. All of this work is done using MATLAB software. By mapping a portion of the master hand and/or a color to the mouse of a computer, the computer can be controlled just as with the mouse, and many virtual (augmented reality) or PC-based applications can be developed (e.g. a calculator or a paint program). It does not matter whether the system is within your reach or not, but a camera linked to the system must be nearby. By showing different gestures with your master hand, the computer can be controlled remotely. If the camera can be set up online, then the computer can be controlled even from a very distant place.
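
    The two ingredients combined here, background subtraction for the hand region and color thresholding for the band, can be sketched as follows. The original work used MATLAB; this illustration uses Python with OpenCV, and the threshold values, hue range and synthetic test frame are placeholders rather than the project's actual settings.

```python
import cv2
import numpy as np

# Stand-ins for the stored background frame and a current camera frame.
background = np.zeros((240, 320, 3), dtype=np.uint8)
frame = background.copy()
cv2.rectangle(frame, (100, 60), (180, 200), (40, 60, 200), -1)  # fake "hand" with a red band

# (1) Hand extraction: difference against the stored background, then threshold.
diff = cv2.absdiff(frame, background)
gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
_, hand_mask = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY)

# (2) Color-band detection in HSV (red hue range; the bounds are assumptions).
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
band_mask = cv2.inRange(hsv, np.array([0, 120, 70]), np.array([10, 255, 255]))

# Pixels that belong to the foreground hand or to the colored band.
combined = cv2.bitwise_or(hand_mask, band_mask)
print(int(combined.sum() / 255), "foreground pixels")
```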

    Real-time motion-based hand gestures recognition from time-of-flight video

    The final publication is available at Springer via http://dx.doi.org/10.1007/s11265-015-1090-5

    This paper presents an innovative solution based on time-of-flight (TOF) video technology for motion pattern detection in real-time dynamic hand gesture recognition. The resulting system is able to detect motion-based hand gestures, taking depth images as input. The recognizable motion patterns are modeled on the basis of the human arm anatomy and its degrees of freedom, generating a collection of synthetic motion patterns that is compared with the captured input patterns in order to classify the input gesture. For the evaluation of our system, a significant collection of gestures has been compiled, yielding results for 3D pattern classification as well as a comparison with the results obtained using only 2D information.
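
    A toy sketch of the final matching step, comparing a captured motion pattern against a bank of synthetic patterns, is shown below. Patterns are represented here as fixed-length 3D trajectories and matched by plain Euclidean distance; the anatomy-based generation of the synthetic patterns and the paper's actual matching procedure are not reproduced.

```python
import numpy as np

def classify_motion(captured, synthetic_bank):
    """captured: (T, 3) trajectory; synthetic_bank: dict name -> (T, 3) pattern."""
    scores = {name: np.linalg.norm(captured - pattern)
              for name, pattern in synthetic_bank.items()}
    return min(scores, key=scores.get)   # nearest synthetic pattern wins

T = 30
bank = {"swipe_left":  np.linspace([0.3, 0.0, 0.5], [-0.3, 0.0, 0.5], T),
        "swipe_right": np.linspace([-0.3, 0.0, 0.5], [0.3, 0.0, 0.5], T)}
observed = bank["swipe_left"] + 0.01 * np.random.randn(T, 3)  # noisy captured pattern
print(classify_motion(observed, bank))   # -> "swipe_left"
```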

    Hand Gesture Recognition Using Particle Swarm Movement

    We present a gesture recognition method derived from particle swarm movement for free-air hand gesture recognition. Online gesture recognition remains a difficult problem due to uncertainty in vision-based gesture boundary detection methods. We suggest an automated process for segmenting meaningful gesture trajectories based on particle swarm movement. A subgesture detection and reasoning method is incorporated in the proposed recognizer to avoid premature gesture spotting. Evaluation of the proposed method shows promising recognition results: 97.6% on pre-isolated gestures, 94.9% on stream gestures with assistive boundary indicators, and 94.2% for blind gesture spotting on a digit gesture vocabulary. The proposed recognizer requires fewer computational resources and is thus a good candidate for real-time applications.

    Gesture Detection Towards Real-Time Ergonomic Analysis for Intelligent Automation Assistance

    Manual handling involves transporting a load by hand through lifting or lowering, and operators on the manufacturing shop floor are faced daily with constant lifting and lowering operations, which lead to work-related musculoskeletal disorders. Trends in data collection on the shop floor for ergonomic evaluation during manual handling activities have revealed a gap in gesture detection, as gesture-triggered data collection could facilitate more accurate ergonomic data capture and analysis. This paper presents an application developed to detect gestures that trigger real-time human motion data capture on the shop floor for ergonomic evaluation and risk assessment, using the Microsoft Kinect. The machine learning technique known as the discrete indicator, specifically the AdaBoost trigger indicator, was employed to train the gestures. Our results show that the Kinect can be trained to detect gestures for real-time ergonomic analysis, possibly offering intelligent automation assistance during tasks detrimental to human posture.
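
    As a rough illustration of an AdaBoost-style discrete gesture trigger, the sketch below trains a binary AdaBoost classifier on made-up, skeleton-derived frame features using scikit-learn rather than the Kinect/Visual Gesture Builder toolchain; the feature layout, labels and training data are invented for the example.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
# Each row: a few skeleton-derived features for one frame (assumed layout).
X = rng.normal(size=(200, 6))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # toy label: "lifting gesture" or not

detector = AdaBoostClassifier(n_estimators=50).fit(X, y)

frame_features = rng.normal(size=(1, 6))  # features of the current frame
if detector.predict(frame_features)[0] == 1:
    print("gesture detected -> trigger ergonomic data capture")
```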

    Hand Detection and Body Language Recognition Using YOLO

    Neural networks play an important role in real-time object detection, and several types of networks are being developed to perform such detections at a faster pace. One such neural network is YOLO. Built to perform real-time detection, YOLO offers high speeds for simple detections. The goal of our research is to see how YOLO would work with body language: would it be fast enough, and how accurate would it be? Compared to other forms of object detection, body-language detection is vaguer, with several factors to be accounted for. This is why we first begin with hand recognition and gesture recognition, and then move on to body language. This research aims at understanding how YOLO performs when subjected to several tests, by using its implementations, building datasets, and training and testing the models to see whether it is successful in detecting hand gestures and body language.
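
    A hedged sketch of running a YOLO detector on a single frame is given below, using the ultralytics package as a convenient modern stand-in; the paper's exact YOLO version, training data and class list are not reproduced, and detecting hands or body-language cues would require fine-tuning on a suitable dataset.

```python
from ultralytics import YOLO
import numpy as np

model = YOLO("yolov8n.pt")                        # generic pretrained weights (COCO classes)
frame = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in for a camera frame
results = model(frame)                            # single-image inference

for box in results[0].boxes:
    cls_name = model.names[int(box.cls[0])]
    conf = float(box.conf[0])
    x1, y1, x2, y2 = box.xyxy[0].tolist()
    print(f"{cls_name}: {conf:.2f} at ({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f})")
```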