
    Hand gesture recognition with jointly calibrated Leap Motion and depth sensor

    Novel 3D acquisition devices like depth cameras and the Leap Motion have recently reached the market. Depth cameras provide a complete 3D description of the framed scene, while the Leap Motion sensor is a device explicitly targeted at hand gesture recognition and provides only a limited set of relevant points. This paper shows how to jointly exploit the two types of sensors for accurate gesture recognition. An ad-hoc solution for the joint calibration of the two devices is first presented. Then a set of novel feature descriptors is introduced for both the Leap Motion and the depth data. Various schemes based on the distances of the hand samples from the centroid, on the curvature of the hand contour, and on the convex hull of the hand shape are employed, and the use of Leap Motion data to aid feature extraction is also considered. The proposed feature sets are fed to two different classifiers, one based on multi-class SVMs and one exploiting Random Forests. Different feature selection algorithms have also been tested to reduce the complexity of the approach. Experimental results show that very high accuracy can be obtained with the proposed method, and the current implementation runs in real time.
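
    A minimal sketch of one of the descriptor families named in the abstract: distances of hand-contour samples from the centroid, fed to a multi-class SVM. The contour shapes, bin count, and toy labels below are illustrative assumptions, not the paper's data or code.

```python
import numpy as np
from sklearn.svm import SVC

def make_contour(a, b, n=100, noise=0.02, rng=None):
    """Synthetic closed contour (ellipse) standing in for a hand silhouette."""
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    pts = np.stack([a * np.cos(t), b * np.sin(t)], axis=1)
    return pts + rng.normal(scale=noise, size=pts.shape)

def centroid_distance_features(contour_points, n_bins=32):
    """Fixed-length vector of sample distances from the contour centroid."""
    pts = np.asarray(contour_points, dtype=float)
    dists = np.linalg.norm(pts - pts.mean(axis=0), axis=1)
    dists /= dists.max() + 1e-9                     # scale invariance
    idx = np.linspace(0, len(dists) - 1, n_bins).astype(int)
    return dists[idx]

rng = np.random.default_rng(0)
# toy "gestures": a round contour vs. an elongated one
X = np.stack([centroid_distance_features(make_contour(a, b, rng=rng))
              for a, b in [(1.0, 1.0)] * 20 + [(1.0, 0.4)] * 20])
y = np.repeat([0, 1], 20)
clf = SVC(kernel="rbf").fit(X, y)                   # one-vs-one multi-class SVM
print(clf.score(X, y))
```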

    Gesture Recognition with the Leap Motion Controller

    The Leap Motion Controller is a small USB device that tracks hand and finger movements using infrared LEDs, allowing users to issue gesture commands to an application in place of a mouse or keyboard. This creates the potential for a general 3D gesture recognition system that laypersons can easily set up with a simple, commercially available device. To investigate the effectiveness of the Leap Motion Controller for hand gesture recognition, we collected data from over 100 participants and used this data to train a 3D recognition model based on convolutional neural networks operating on 2D projections of the 3D space. This achieved an accuracy of 92.4% on held-out data. We also describe preliminary work on incorporating time-series gesture data using hidden Markov models, with the goal of detecting arbitrary start and stop points for gestures when recording data continuously.
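
    A hedged sketch of the kind of model the abstract describes: a small CNN over a 2D projection of Leap Motion joint positions rasterized to a 32x32 single-channel image. The layer sizes, input resolution, and class count are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class ProjectionCNN(nn.Module):
    """Classify gestures from one 2D projection of the tracked 3D space."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32 -> 16
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
        )
        self.classifier = nn.Linear(32 * 8 * 8, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = ProjectionCNN()
logits = model(torch.randn(4, 1, 32, 32))   # batch of 4 projected gesture images
print(logits.shape)                         # torch.Size([4, 10])
```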

    Leap Motion Presenter

    The Leap Motion controller is a device that reads the precise position of a person’s hands in the space above it. The purpose of this project is to create a prototype application that uses the Leap Motion controller to create and run a presentation. The hand and gesture recognition of the Leap Motion device enables an intuitive application that lets presenters interact less with their computer and more with their audience. The application is written in Java 8 and is heavily based on the JavaFX graphics library. While imperfections remain, the application is a working prototype that demonstrates the strengths of gesture-based presentation software and shows how an existing software task can be enhanced with new technology.

    A preliminary study of a hybrid user interface for augmented reality applications

    Augmented Reality (AR) applications are nowadays widespread in many fields, especially entertainment, and the market for mobile AR applications is growing rapidly. Moreover, new and innovative hardware for human-computer interaction has been deployed, such as the Leap Motion Controller. This paper presents preliminary results on the design and development of a hybrid interface for hands-free augmented reality applications. The paper introduces a framework for interacting with AR applications through a speech and gesture recognition-based interface. A Leap Motion Controller is mounted on top of AR glasses, and a speech recognition module completes the system. Results show that, when the speech or gesture recognition module is used individually, the robustness of the user interface depends strongly on environmental conditions. A combined usage of both modules, on the other hand, provides a more robust input.
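
    One plausible reading of the "combined usage" idea is late fusion: accept a command when the two recognizers agree, or when one is very confident on its own. The dataclass, thresholds, and labels below are hypothetical, a sketch of the concept rather than the paper's fusion rule.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Recognition:
    label: str
    confidence: float   # 0..1

def fuse(speech: Optional[Recognition], gesture: Optional[Recognition],
         agree_thresh: float = 0.4, solo_thresh: float = 0.85) -> Optional[str]:
    # Agreement between modalities beats a single noisy channel...
    if speech and gesture and speech.label == gesture.label:
        if min(speech.confidence, gesture.confidence) >= agree_thresh:
            return speech.label
    # ...but a very confident single modality can still fire.
    for r in (speech, gesture):
        if r and r.confidence >= solo_thresh:
            return r.label
    return None   # reject: likely environmental noise on both channels

print(fuse(Recognition("open_menu", 0.6), Recognition("open_menu", 0.5)))  # open_menu
print(fuse(Recognition("open_menu", 0.5), Recognition("zoom_in", 0.6)))    # None
```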

    Automated Tracking of Hand Hygiene Stages

    The European Centre for Disease Prevention and Control (ECDC) estimates that 2.5 million cases of Hospital Acquired Infections (HAIs) occur each year in the European Union. Hand hygiene is regarded as one of the most important preventive measures for HAIs. If it is implemented properly, hand hygiene can reduce the risk of cross-transmission of an infection in the healthcare environment. Good hand hygiene is not only important for healthcare settings. The recent ongoing coronavirus pandemic has highlighted the importance of hand hygiene practices in our daily lives, with governments and health authorities around the world promoting good hand hygiene practices. The WHO has published guidelines of hand hygiene stages to promote good hand washing practices. A significant amount of existing research has focused on the problem of tracking hands to enable hand gesture recognition. In this work, gesture tracking devices and image processing are explored in the context of the hand washing environment. Hand washing videos of professional healthcare workers were carefully observed and analyzed in order to recognize hand features associated with hand hygiene stages that could be extracted automatically. Selected hand features such as palm shape (flat or curved), palm orientation (palms facing or not), and hand trajectory (linear or circular movement) were then extracted and tracked with the help of a 3D gesture tracking device, the Leap Motion Controller. These features were further coupled together to detect the execution of a required WHO hand hygiene stage, Rub hands palm to palm, with the help of the Leap sensor in real time. In certain conditions, the Leap Motion Controller enables a clear distinction to be made between the left and right hands. However, whenever the two hands came into contact with each other, sensor data from the Leap, such as palm position and palm orientation, was lost for one of the two hands. Hand occlusion was found to be a major drawback with the application of the device to this use case. Therefore, RGB digital cameras were selected for further processing and tracking of the hands. An image processing technique, using a skin detection algorithm, was applied to extract instantaneous hand positions for further processing, to enable various hand hygiene poses to be detected. Contour and centroid detection algorithms were further applied to track the hand trajectory in hand hygiene video recordings. In addition, feature detection algorithms were applied to a hand hygiene pose to extract the useful hand features. The video recordings did not suffer from occlusion as is the case for the Leap sensor, but the segmentation of one hand from another was identified as a major challenge with images because the contour detection resulted in a continuous mass when the two hands were in contact. For future work, the data from gesture trackers, such as the Leap Motion Controller, and cameras (with image processing) could be combined to make a robust hand hygiene gesture classification system.
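
    A minimal sketch of the camera pipeline described above: HSV skin thresholding, largest-contour extraction, and centroid tracking (OpenCV 4 assumed). The HSV bounds and the video filename are illustrative placeholders, not the thesis's calibrated values; note the comment marking where the reported two-hands-in-contact failure mode appears.

```python
import cv2
import numpy as np

def hand_centroid(frame_bgr):
    """Return the (cx, cy) centroid of the largest skin-colored contour, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([0, 30, 60]), np.array([20, 150, 255]))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)  # hands in contact merge into one mass
    m = cv2.moments(hand)
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

cap = cv2.VideoCapture("handwash.mp4")   # hypothetical recording
ok, frame = cap.read()
if ok:
    print(hand_centroid(frame))
```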

    Personalising hand gesture computing

    Hand gesture recognition has long held a place in science fiction writing and film, and for many years work has continued to develop and improve hand and body gesture recognition for computing, particularly for computer games. Despite the effort going into this field, the ability to communicate effectively with computer systems using hand gestures remains elusive. This study investigates the opportunities and challenges in the field of hand gesture computing by conducting an experiment in user-driven gesture creation, recording, and recognition. It introduces a simple algorithm for the real-time recording and recognition of dynamic hand gestures using the Leap Motion infrared sensor and the Unity games development environment.

    An evaluation of depth camera-based hand pose recognition for virtual reality systems.

    Masters Degree. University of KwaZulu-Natal, Durban. Camera-based hand gesture recognition for interaction in virtual reality systems promises to provide a more immersive and less distracting means of input than the usual hand-held controllers. Due to a lack of research in this area, it is unknown whether a camera can effectively distinguish hand poses made in a virtual reality environment. This research explores and measures the effectiveness of static hand pose input with a depth camera, specifically the Leap Motion controller, for user interaction in virtual reality applications. A pose set was derived by analyzing existing gesture taxonomies and Leap Motion controller-based virtual reality applications, and a dataset of these poses was constructed from data captured by twenty-five participants. Experiments on the dataset using three popular machine learning classifiers could not classify the poses with sufficiently high accuracy, primarily due to occlusion issues affecting the input data. Therefore, a significantly smaller subset was empirically derived using a novel algorithm, which utilized a confusion matrix from the machine learning experiments as well as a table of Hamming distances between poses. This improved the recognition accuracy to above 99%, making this set more suitable for real-world use. It is concluded that while camera-based pose recognition can be reliable on a small set of poses, finger occlusion hinders the use of larger sets. Thus, alternative approaches, such as multiple input cameras, should be explored as a potential solution to the occlusion problem.
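
    A hedged approximation of the subset-selection idea: greedily drop the most-confused pose until every remaining pose classifies cleanly. The thesis also weighs Hamming distances between poses; this sketch uses the confusion matrix alone, so it is an illustration of the principle, not the exact algorithm.

```python
import numpy as np

def select_pose_subset(confusion, min_accuracy=0.99):
    """confusion[i, j] = count of pose i classified as pose j."""
    keep = list(range(confusion.shape[0]))
    while len(keep) > 1:
        sub = confusion[np.ix_(keep, keep)]
        per_class = np.diag(sub) / sub.sum(axis=1)   # recall within the kept set
        if per_class.min() >= min_accuracy:
            break
        keep.pop(int(per_class.argmin()))            # drop the most-confused pose
    return keep

conf = np.array([[90,  8,  2],
                 [10, 85,  5],
                 [ 0,  1, 99]])
print(select_pose_subset(conf))   # [2]: the confusable pair 0/1 is removed
```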

    Applying teeline shorthand using leap motion controller

    A hand gesture recognition program was developed to recognize users’ Teeline shorthand gestures as English letters, words, and sentences using the Leap Motion Controller. The program is intended to provide a novel way for users to interact with electronics by drawing gestures in the air to input text instead of using keyboards. In recognition mode, the dynamic time warping algorithm is used to compare the similarity between different templates and gesture inputs and to produce the recognition results; in the edit process, users are able to build their own gestures to customize the commands. A series of experiments shows that the program achieves considerable recognition accuracy and performs consistently across different user groups. Master of Science (MSc) in Computational Science.
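
    A minimal sketch of the recognition step the abstract names: dynamic time warping between an input stroke and stored templates, with the nearest template winning. Trajectories here are assumed to be sequences of 2D fingertip positions, and the toy letter shapes are made up for illustration.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(n*m) DTW over point sequences with Euclidean local cost."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def recognize(stroke, templates):
    """templates: {label: trajectory}; return the label of the nearest template."""
    return min(templates, key=lambda k: dtw_distance(stroke, templates[k]))

templates = {"t": [(0, 0), (1, 0), (0.5, 0), (0.5, -1)],          # toy strokes
             "o": [(0, 0), (1, 0), (1, -1), (0, -1), (0, 0)]}
print(recognize([(0, 0), (0.9, 0.1), (0.5, 0), (0.4, -1)], templates))  # "t"
```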

    Development of gesture-controlled robotic arm for upper limb hemiplegia therapy

    Human-computer interaction using hand gesture recognition has emerged as a current approach in recent rehabilitation studies. The introduction of vision-based systems such as the Microsoft Kinect and the Leap Motion sensor (LMS) provides a very informative description of hand pose that can be exploited for tracking applications. Compared to the Kinect depth camera, the LMS produces a more limited amount of information and a smaller interaction zone, but its output data is more accurate. Thus, this study explores the LMS as an effective method for hand gesture recognition-controlled robotic arms in improving upper-extremity motor function therapy. Several engineering challenges are addressed to develop a viable system for the therapy application: a real-time and accurate system for hand movement detection, the limitations of the robot workspace and hand-robot coordination, and the development of a hand motion-based robot positioning algorithm. An EMU HS4 robot arm and controller have been retrofitted to allow movement in 3 degrees of freedom (DOF) under direct control of LMS-based gesture recognition. A series of wrist-revolving rehabilitation exercises was conducted, showing good agreement between hand movement and the resulting robot motion. The potential of the proposed system has been further illustrated and verified through comprehensive rehabilitation training exercises, with around 90% accuracy for flexion-extension training. In conclusion, these findings have significant implications for the understanding of hand gesture recognition applied to robotic-based upper limb assistive and rehabilitation procedures.
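
    One simple way to realize the workspace-limitation and hand-robot coordination step is to clamp the Leap interaction box and map it linearly onto the robot's reachable volume. The ranges below are made-up placeholders, not the EMU HS4's real limits, and the paper's actual positioning algorithm may differ.

```python
import numpy as np

# assumed Leap interaction box (mm) and robot workspace (mm); placeholders only
LEAP_MIN, LEAP_MAX = np.array([-200.0, 50.0, -200.0]), np.array([200.0, 450.0, 200.0])
ROBOT_MIN, ROBOT_MAX = np.array([0.0, 0.0, 0.0]), np.array([300.0, 300.0, 300.0])

def palm_to_robot(palm_mm):
    """Map a Leap palm position into a 3-DOF robot target position."""
    p = np.clip(palm_mm, LEAP_MIN, LEAP_MAX)         # stay inside the tracked box
    t = (p - LEAP_MIN) / (LEAP_MAX - LEAP_MIN)       # normalize to [0, 1] per axis
    return ROBOT_MIN + t * (ROBOT_MAX - ROBOT_MIN)   # rescale to robot workspace

print(palm_to_robot(np.array([0.0, 250.0, 0.0])))    # centre of both workspaces
```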