117,126 research outputs found

    Machine learning in 3D space gesture recognition

    Get PDF
    The rapid increase in the development of robotic systems in controlled and uncontrolled environments calls for more natural interaction systems, one of which is gesture recognition. This paper presents a simple approach to gesture recognition in which hand movement in 3-dimensional space is used to write the letters of the English alphabet and display the corresponding output on a screen or display device. The hardware of the system consists of an MPU-6050 accelerometer, a microcontroller and a Bluetooth module for wireless connection. For each letter of the alphabet, 20 data instances are recorded in raw form and then standardized using interpolation. The standardized data is fed to an SVM (Support Vector Machine) classifier to create a model, which is then used to classify new data instances in real time. Our method achieves a correct classification accuracy of 98.94% for English-alphabet hand gesture recognition. The primary objective of our approach is the development of a low-cost, low-power and easily trained supervised gesture recognition system that identifies hand gesture movement efficiently and accurately. The experimental results reported are based on a single subject.
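    A minimal sketch of the pipeline described above, assuming variable-length raw accelerometer traces are resampled to a fixed number of points with linear interpolation and then classified with scikit-learn's SVM; the resample length, feature layout and synthetic training data are illustrative assumptions, not details from the paper.

```python
# Hedged sketch: interpolation-based standardization of raw accelerometer
# traces followed by SVM classification. Shapes and constants are assumed.
import numpy as np
from sklearn.svm import SVC

def standardize(trace, n_points=64):
    """Resample a variable-length (T, 3) accelerometer trace to n_points samples per axis."""
    trace = np.asarray(trace, dtype=float)
    old_t = np.linspace(0.0, 1.0, len(trace))
    new_t = np.linspace(0.0, 1.0, n_points)
    # Interpolate each axis independently and flatten into one feature vector.
    return np.concatenate([np.interp(new_t, old_t, trace[:, axis])
                           for axis in range(trace.shape[1])])

# Hypothetical training data: 20 recorded instances per letter (synthetic here).
rng = np.random.default_rng(0)
X = np.stack([standardize(rng.normal(size=(rng.integers(40, 120), 3)))
              for _ in range(20 * 26)])
y = np.repeat(list("ABCDEFGHIJKLMNOPQRSTUVWXYZ"), 20)

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:1]))  # classify a new standardized instance in real time
```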

    UWB Based Static Gesture Classification

    Full text link
    Our paper presents a robust framework for UWB-based static gesture recognition, leveraging proprietary UWB radar sensor technology. Extensive data collection efforts were undertaken to compile datasets containing five commonly used gestures. Our approach involves a comprehensive data pre-processing pipeline that encompasses outlier handling, aspect ratio-preserving resizing, and false-color image transformation. Both CNN and MobileNet models were trained on the processed images. Remarkably, our best-performing model achieved an accuracy of 96.78%. Additionally, we developed a user-friendly GUI framework to assess the model's system resource usage and processing times, which revealed low memory utilization and real-time task completion in under one second. This research marks a significant step towards enhancing static gesture recognition using UWB technology, promising practical applications in various domains
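    The pre-processing steps named above could look roughly like the following OpenCV sketch: percentile clipping stands in for outlier handling, and the resize pads to a square before false-colour mapping. The clipping thresholds, 224-pixel target size and JET colormap are assumptions for illustration, not the authors' exact pipeline.

```python
# Hedged sketch: outlier clipping, aspect-ratio-preserving resize with padding,
# and false-colour transformation of a radar frame for a CNN/MobileNet input.
import numpy as np
import cv2

def preprocess(frame, target=224):
    # Outlier handling: clip to the 1st-99th percentile range (assumed thresholds).
    lo, hi = np.percentile(frame, [1, 99])
    frame = np.clip(frame, lo, hi)
    # Normalise to 8-bit for colour mapping.
    frame = cv2.normalize(frame, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # Aspect-ratio-preserving resize, then pad to a square input.
    h, w = frame.shape
    scale = target / max(h, w)
    frame = cv2.resize(frame, (int(w * scale), int(h * scale)))
    pad_h, pad_w = target - frame.shape[0], target - frame.shape[1]
    frame = cv2.copyMakeBorder(frame, 0, pad_h, 0, pad_w,
                               cv2.BORDER_CONSTANT, value=0)
    # False-colour transformation.
    return cv2.applyColorMap(frame, cv2.COLORMAP_JET)

image = preprocess(np.random.rand(48, 96))  # fake radar frame
print(image.shape)  # (224, 224, 3), ready for a MobileNet-style classifier
```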

    Towards Full-Body Gesture Analysis and Recognition

    Get PDF
    With computers being embedded in every walk of our life, there is an increasing demand for intuitive devices for human-computer interaction. As human beings use gestures as an important means of communication, devices based on gesture recognition systems will be effective for human interaction with computers. However, it is very important to keep such a system as non-intrusive as possible, to reduce the limitations of interaction. Designing such non-intrusive, intuitive, camera-based real-time gesture recognition systems has been an active area of research in the field of computer vision. Gesture recognition invariably involves tracking body parts. There are many research works on tracking body parts such as the eyes, lips and face; however, relatively little work has been done on full-body tracking. Full-body tracking is difficult because it is expensive to model the full body as either a 2D or 3D model and to track its movements. In this work, we propose a monocular gesture recognition system that focuses on recognizing a set of arm movements commonly used to direct traffic, guide aircraft landings and communicate over long distances. This is an attempt towards implementing gesture recognition systems that require full-body tracking, e.g. an automated semaphore flag signaling recognition system. We have implemented a robust full-body tracking system, which forms the backbone of our gesture analyzer. The tracker uses a two-dimensional link-joint (LJ) model to represent the human body. Currently, we track the movements of the arms in a video sequence; we plan to make the system real-time in the future. We use distance transform techniques to track the movements by fitting the parameters of the LJ model in every frame of the captured video. The tracker's output is fed to a state machine which identifies the gestures made. The system consists of four sub-systems:
    1. Background subtraction sub-system, using Gaussian models and median filters.
    2. Full-body tracker, using L-J Model APIs.
    3. Quantizer, which converts the tracker's output into defined alphabets.
    4. Gesture analyzer, which reads the alphabets into the action performed.
    Currently, our gesture vocabulary contains gestures involving arms moving up and down, which can be used for detecting a semaphore flag signaling system. We can also detect gestures such as clapping and waving of arms.
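    As one concrete illustration of the first sub-system, the sketch below pairs OpenCV's Gaussian-mixture background subtractor with a median filter; the MOG2 model, kernel size and video path stand in for the thesis' own implementation and are assumptions.

```python
# Hedged sketch: Gaussian background model + median filter to extract a clean
# foreground silhouette before link-joint model fitting.
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

cap = cv2.VideoCapture("gesture_sequence.avi")  # hypothetical input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)   # per-pixel Gaussian background model
    mask = cv2.medianBlur(mask, 5)   # median filter suppresses salt-and-pepper noise
    # The cleaned silhouette would next be fitted with the 2D link-joint model.
cap.release()
```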

    Few-Shot User-Definable Radar-Based Hand Gesture Recognition at the Edge

    Get PDF
    This work was supported in part by ITEA3 Unleash Potentials in Simulation (UPSIM) by the German Federal Ministry of Education and Research (BMBF) under Project 19006, in part by the Austrian Research Promotion Agency (FFG), in part by the Rijksdienst voor Ondernemend Nederland (Rvo), and in part by the Innovation Fund Denmark (IFD).
    Technological advances and scalability are leading Human-Computer Interaction (HCI) to evolve towards intuitive forms, such as gesture recognition. Among the various interaction strategies, radar-based recognition is emerging as a touchless, privacy-secure, and versatile solution across different environmental conditions. Classical radar-based gesture HCI solutions involve deep learning but require training on large and varied datasets to achieve robust prediction. Innovative self-learning algorithms can help tackle this problem by recognizing patterns and adapting to similar contexts. Yet such approaches are often computationally expensive and hard to integrate into hardware-constrained solutions. In this paper, we present a gesture recognition algorithm which is easily adaptable to new users and contexts. We exploit an optimization-based meta-learning approach to enable gesture recognition in learning sequences. This method aims at learning the best possible initialization of the model parameters, simplifying training on new contexts when small amounts of data are available. The reduction in computational cost is achieved by processing the radar-sensed gesture data in the form of time maps, to minimize the input data size. This approach enables the adaptation of a simple convolutional neural network (CNN) to new hand poses, thus easing the integration of the model into a hardware-constrained platform. Moreover, the use of a Variational Autoencoder (VAE) to reduce the gestures' dimensionality decreases the model size by an order of magnitude and halves the required adaptation time. The proposed framework, deployed on the Intel(R) Neural Compute Stick 2 (NCS 2), achieves an average accuracy of around 84% for unseen gestures when only one example per class is available at training time. The accuracy increases to 92.6% and 94.2% when three and five samples per class are used.
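    The dimensionality-reduction idea can be sketched with a small VAE in PyTorch; the 32x32 time-map size, 16-dimensional latent space and fully connected architecture are illustrative assumptions rather than the authors' model.

```python
# Hedged sketch: a small VAE compressing flattened radar time maps into a
# low-dimensional code, in the spirit of the dimensionality reduction described.
import torch
import torch.nn as nn

class TimeMapVAE(nn.Module):
    def __init__(self, in_dim=32 * 32, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.to_mu = nn.Linear(256, latent_dim)
        self.to_logvar = nn.Linear(256, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, in_dim), nn.Sigmoid())

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation
        return self.decoder(z), mu, logvar

vae = TimeMapVAE()
x = torch.rand(8, 32 * 32)                      # batch of fake time maps
recon, mu, logvar = vae(x)
# Standard VAE loss: reconstruction + KL divergence to the unit Gaussian prior.
loss = nn.functional.binary_cross_entropy(recon, x, reduction="sum") \
       - 0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
loss.backward()
print(mu.shape)  # the 16-D codes would feed the few-shot CNN classifier
```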

    Towards Robust and Deployable Gesture and Activity Recognisers

    Get PDF
    Smartphones and wearables have become an extension of one's self, with gestures providing quick access to command execution, and activity tracking helping users log their daily life. Recent research in gesture recognition shows that common events, such as a user re-wearing or readjusting their smartwatch, deteriorate recognition accuracy significantly. Further, the available state-of-the-art deep learning models for gesture or activity recognition are too large and computationally heavy to be deployed and run continuously in the background. This problem of engineering robust yet deployable gesture recognisers for use in wearables is open-ended. This thesis provides a review of known approaches in machine learning and human activity recognition (HAR) for addressing model robustness. It also proposes variations of convolution-based models for use with raw or spectrogram sensor data. Finally, a cross-validation based evaluation approach for quantifying individual and situational variabilities is used to demonstrate that, with an application-oriented design, models can be made two orders of magnitude smaller while improving both recognition accuracy and robustness.
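    The variability-oriented evaluation can be illustrated with a leave-one-subject-out cross-validation sketch in scikit-learn; the features, labels, user grouping and random-forest classifier below are synthetic placeholders, not the thesis' models.

```python
# Hedged sketch: hold out all data from one user at a time to quantify
# cross-user (individual-variability) recognition accuracy.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 24))            # e.g. windowed IMU features (synthetic)
y = rng.integers(0, 4, size=300)          # gesture labels (synthetic)
users = np.repeat(np.arange(10), 30)      # 10 users, 30 windows each

scores = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=users):
    clf = RandomForestClassifier(n_estimators=100).fit(X[train_idx], y[train_idx])
    scores.append(clf.score(X[test_idx], y[test_idx]))
print(f"mean cross-user accuracy: {np.mean(scores):.2f}")
```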