
    Vision-based Portuguese sign language recognition system

    Vision-based hand gesture recognition is an area of active research in computer vision and machine learning. As a natural form of human interaction, it attracts many researchers whose goal is to make human-computer interaction (HCI) easier and more natural, without the need for extra devices. The primary goal of gesture recognition research is therefore to create systems that can identify specific human gestures and use them, for example, to convey information. To this end, vision-based hand gesture interfaces require fast and extremely robust hand detection and gesture recognition in real time. Hand gestures are a powerful human communication modality with many potential applications; one such application is sign language recognition, the communication method of deaf people. Sign languages are neither standard nor universal, and their grammars differ from country to country. In this paper, a real-time system able to interpret Portuguese Sign Language is presented and described. Experiments showed that the system reliably recognized the vowels in real time, with an accuracy of 99.4% on one dataset of features and 99.6% on a second dataset of features. Although the implemented solution was trained only to recognize the vowels, it is easily extended to the rest of the alphabet, providing a solid foundation for the development of any vision-based sign language recognition user interface.
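    The abstract does not name the classifier, so the following is a minimal, hypothetical sketch of the general approach in Python: training a multi-class classifier (here an SVM, an assumption) on precomputed hand-shape feature vectors labelled with the five vowels. The feature extraction stage is mocked with random data.

        # Hypothetical sketch: classifying hand-feature vectors into five vowel classes.
        # Real features would come from the hand detection stage, not random noise.
        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 32))      # placeholder 32-D hand-shape features
        y = rng.integers(0, 5, size=500)    # labels: the five vowels, encoded 0..4

        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.2, random_state=0)
        clf = SVC(kernel="rbf", gamma="scale")   # RBF-kernel SVM as a stand-in
        clf.fit(X_train, y_train)
        print(f"test accuracy: {clf.score(X_test, y_test):.3f}")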

    Hand Gesture Recognition Using Particle Swarm Movement

    We present a gesture recognition method derived from particle swarm movement for free-air hand gesture recognition. Online gesture recognition remains a difficult problem due to uncertainty in vision-based gesture boundary detection methods. We suggest an automated process for segmenting meaningful gesture trajectories based on particle swarm movement. A subgesture detection and reasoning method is incorporated into the proposed recognizer to avoid premature gesture spotting. Evaluation of the proposed method shows promising recognition results: 97.6% on pre-isolated gestures, 94.9% on stream gestures with assistive boundary indicators, and 94.2% for blind gesture spotting on a digit gesture vocabulary. The proposed recognizer requires fewer computational resources and is therefore a good candidate for real-time applications.
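    For orientation, the sketch below implements the canonical particle swarm update in Python. The paper's trajectory segmentation and subgesture reasoning built on top of swarm movement are not reproduced; the fitness function, coefficients, and boundary cue are assumptions.

        # Canonical PSO update; particles chase the detected hand position.
        import numpy as np

        rng = np.random.default_rng(1)
        n, dim = 30, 2                       # 30 particles in 2-D image coordinates
        x = rng.uniform(0, 100, (n, dim))    # particle positions
        v = np.zeros((n, dim))               # particle velocities
        pbest = x.copy()                     # personal best positions

        def fitness(p, target):
            # Hypothetical fitness: closeness to the detected hand centroid.
            return -np.linalg.norm(p - target, axis=1)

        target = np.array([60.0, 40.0])      # e.g. hand centroid from a detector
        w, c1, c2 = 0.7, 1.5, 1.5            # inertia and acceleration coefficients
        for _ in range(50):
            better = fitness(x, target) > fitness(pbest, target)
            pbest[better] = x[better]
            gbest = pbest[np.argmax(fitness(pbest, target))]
            r1, r2 = rng.random((n, 1)), rng.random((n, 1))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = x + v

        # One plausible boundary cue: a tightly converged swarm suggests a pause
        # between meaningful gesture segments.
        print("swarm centre:", x.mean(axis=0).round(1))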

    A Collaborative Augmented Reality System Based On Real Time Hand Gesture Recognition

    Human-computer interaction is a major research topic. Gesture-based interfaces have received growing attention because they enable untrained users to interact with computers more easily and efficiently, and they provide the most effective means of non-verbal interaction. Devices such as head-mounted displays and hand gloves can be used, but they may be cumbersome, limit the user's actions, and cause fatigue. This problem can be solved by real-time bare-hand gesture recognition for human-computer interaction using computer vision. Computer vision is becoming very popular nowadays, since it can capture a lot of information at very low cost. With this increasing popularity, the field of virtual reality is developing rapidly, as computer vision provides an easy and efficient virtual interface between human and computer. At the same time, much research aims to provide a more natural interface for human-computer interaction with the power of computer vision. The most powerful and natural such interface is the hand gesture. In this project, we focus on vision-based recognition of hand gestures for personal authentication, where a hand gesture is used as a password; different hand gestures serve as passwords for different individuals.
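    A hedged sketch of the authentication idea: a captured gesture, reduced to a feature vector, is accepted only if it lies close enough to the user's enrolled gesture "password". The names, vectors, and distance threshold below are illustrative, not the project's values.

        # Illustrative gesture-as-password check.
        import numpy as np

        enrolled = {                   # per-user gesture templates (feature vectors)
            "alice": np.array([0.1, 0.8, 0.3, 0.5]),
            "bob":   np.array([0.9, 0.2, 0.7, 0.4]),
        }
        THRESHOLD = 0.25               # maximum allowed feature distance (assumed)

        def authenticate(user, captured):
            """Accept only if the captured gesture matches the user's template."""
            template = enrolled.get(user)
            if template is None:
                return False
            return float(np.linalg.norm(captured - template)) < THRESHOLD

        print(authenticate("alice", np.array([0.12, 0.78, 0.31, 0.52])))  # True
        print(authenticate("alice", np.array([0.9, 0.2, 0.7, 0.4])))      # False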

    End-to-End Multiview Gesture Recognition for Autonomous Car Parking System

    The use of hand gestures can be the most intuitive human-machine interaction medium. Early approaches to hand gesture recognition used device-based methods, relying on mechanical or optical sensors attached to a glove or markers, which hinders natural human-machine communication. Vision-based methods, on the other hand, are not restrictive and allow for more spontaneous communication without the need for an intermediary between human and machine. Vision-based gesture recognition has therefore been a popular area of research for the past thirty years. Hand gesture recognition finds application in many areas, particularly the automotive industry, where advanced automotive human-machine interface (HMI) designers are using gesture recognition to improve driver and vehicle safety. However, technology advances go beyond active/passive safety and into convenience and comfort. In this context, one of America's big three automakers has partnered with the Centre of Pattern Analysis and Machine Intelligence (CPAMI) at the University of Waterloo to investigate expanding their product segment through machine learning, providing increased driver convenience and comfort with the particular application of hand gesture recognition for autonomous car parking. In this thesis, we leverage state-of-the-art deep learning and optimization techniques to develop a vision-based multiview dynamic hand gesture recognizer for a self-parking system. We propose a 3DCNN gesture model architecture that we train on a publicly available hand gesture database. We apply transfer learning to fine-tune the pre-trained gesture model on custom-made data, which significantly improves the proposed system's performance in a real-world environment. We adapt the architecture of the end-to-end solution to expand the state-of-the-art video classifier from a single-camera input (fed by a monocular camera) to a multiview 360° feed provided by a six-camera module. Finally, we optimize the proposed solution to work on a resource-limited embedded platform (Nvidia Jetson TX2), used by automakers for vehicle-based features, without sacrificing the accuracy, robustness, or real-time functionality of the system.
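    The thesis's exact architecture is not given in the abstract; the following is a minimal 3-D CNN video classifier sketched in PyTorch under assumptions (layer sizes and the 25-class output are placeholders). A multiview extension could, for instance, average the logits over the six camera feeds.

        # Minimal 3-D CNN for clip-level gesture classification (illustrative only).
        import torch
        import torch.nn as nn

        class Gesture3DCNN(nn.Module):
            def __init__(self, num_classes=25):      # placeholder class count
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv3d(3, 16, kernel_size=3, padding=1),
                    nn.ReLU(),
                    nn.MaxPool3d(2),
                    nn.Conv3d(16, 32, kernel_size=3, padding=1),
                    nn.ReLU(),
                    nn.AdaptiveAvgPool3d(1),          # global spatiotemporal pooling
                )
                self.classifier = nn.Linear(32, num_classes)

            def forward(self, clip):
                # clip: (batch, channels, frames, height, width)
                return self.classifier(self.features(clip).flatten(1))

        model = Gesture3DCNN()
        clip = torch.randn(2, 3, 16, 112, 112)   # two 16-frame RGB clips
        print(model(clip).shape)                  # torch.Size([2, 25])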

    Performance Improvement of Data Fusion Based Real-Time Hand Gesture Recognition by Using 3-D Convolution Neural Networks With Kinect V2

    Hand gesture recognition is one of the most active areas of research in computer vision. It provides an easy way to interact with a machine without using any extra devices. Hand gestures are a natural and intuitive way for human beings to interact with their environment. In this paper, we propose data-fusion-based real-time hand gesture recognition using 3-D convolutional neural networks and Kinect V2: the Kinect V2 is used to achieve accurate segmentation and tracking, and a 3-D convolutional neural network improves the validity and robustness of the system. Based on the experimental results, the proposed model is accurate and robust, and it performs with very low processor utilization. We demonstrate the performance of the proposed system in a real-life application: controlling various devices using Kinect V2. Keywords: hand gesture recognition, Kinect V2, data fusion, convolutional neural networks. DOI: 10.7176/IKM/9-1-02
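    The abstract does not spell out the fusion scheme; one plausible form is early fusion, sketched below, in which the Kinect V2 RGB and depth streams are stacked into a single four-channel volume before being fed to a 3-D CNN.

        # Early fusion of RGB and depth clips into one 4-channel volume (assumed scheme).
        import numpy as np

        frames, h, w = 16, 112, 112
        rgb = np.random.rand(frames, h, w, 3).astype(np.float32)    # RGB clip in [0, 1]
        depth = np.random.rand(frames, h, w, 1).astype(np.float32)  # normalised depth clip

        fused = np.concatenate([rgb, depth], axis=-1)   # (T, H, W, 4)
        clip = fused.transpose(3, 0, 1, 2)              # (C, T, H, W) for a 3-D CNN
        print(clip.shape)                               # (4, 16, 112, 112)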

    Vision-based hand gesture interaction using particle filter, principal component analysis and transition network

    Vision-based human-computer interaction is becoming increasingly important. It offers natural interaction with computers and frees users from mechanical interaction devices, which is favourable especially for wearable computers. This paper presents a human-computer interaction system based on a conventional webcam and hand gesture recognition. The system works in real time and enables users to control a computer cursor with hand motions and gestures instead of a mouse. Five hand gestures are designed to represent five mouse operations: move, left click, left double-click, right click, and no action. An algorithm based on a particle filter is used for tracking the hand position, PCA-based feature selection is used for recognizing the hand gestures, and a transition network is employed to improve the accuracy and reliability of the interaction system. The system shows good performance in recognition and interaction tests.
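    A minimal sketch of how a transition network can stabilise the interaction: a raw per-frame gesture label triggers a mouse action only after it has persisted for several consecutive frames. The gesture set matches the abstract; the structure and frame threshold are assumptions.

        # Transition network as a temporal filter over noisy per-frame labels.
        GESTURES = {"move", "left_click", "left_double_click", "right_click", "no_action"}
        STABLE_FRAMES = 5        # frames a label must persist before firing (assumed)

        class TransitionNetwork:
            def __init__(self):
                self.current = "no_action"
                self.candidate = None
                self.count = 0

            def update(self, label):
                """Feed one per-frame recognition result; return the stable state."""
                assert label in GESTURES
                if label == self.current:
                    self.candidate, self.count = None, 0
                elif label == self.candidate:
                    self.count += 1
                    if self.count >= STABLE_FRAMES:   # accept the transition
                        self.current, self.candidate, self.count = label, None, 0
                else:
                    self.candidate, self.count = label, 1
                return self.current

        net = TransitionNetwork()
        stream = ["no_action"] * 3 + ["left_click"] * 7
        print([net.update(g) for g in stream][-1])    # "left_click" once stabilised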

    Connected Component Algorithm for Gesture Recognition

    This paper presents a head and hand gesture recognition system for human-computer interaction (HCI). Head and hand gestures are an important modality for human-computer interaction, and a vision-based recognition system can give computers the capability to understand and respond to them. The aim of this paper is to propose a real-time vision system for application within a multimedia interaction environment. The recognition system consists of four modules: image capture, image extraction, pattern matching, and command determination. When hand and head gestures are shown in front of the camera, the hardware performs the corresponding action: gestures are matched against a stored database using pattern matching, and the matched gesture moves the hardware in the left, right, forward, or backward direction. An algorithm for optimizing connected components in gesture recognition is proposed, which makes use of segmentation in two images. The connected component algorithm scans an image and groups its pixels into components based on pixel connectivity, i.e. all pixels in a connected component share similar intensity values and are in some way connected with each other. Once all groups have been determined, each pixel is labeled with a color according to the component it was assigned to.
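    The connected component step described above can be sketched directly; below is a plain BFS flood-fill labelling of a binary image with 4-connectivity. The paper's optimized variant and its two-image segmentation are not reproduced here.

        # Connected component labelling by BFS flood fill (4-connectivity).
        from collections import deque

        def label_components(img):
            """img: 2-D list of 0/1; returns a same-shaped label map (0 = background)."""
            h, w = len(img), len(img[0])
            labels = [[0] * w for _ in range(h)]
            next_label = 0
            for sy in range(h):
                for sx in range(w):
                    if img[sy][sx] and not labels[sy][sx]:
                        next_label += 1               # new unlabelled foreground pixel
                        labels[sy][sx] = next_label
                        queue = deque([(sy, sx)])
                        while queue:                  # flood-fill its 4-neighbourhood
                            y, x = queue.popleft()
                            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                                if 0 <= ny < h and 0 <= nx < w \
                                        and img[ny][nx] and not labels[ny][nx]:
                                    labels[ny][nx] = next_label
                                    queue.append((ny, nx))
            return labels

        binary = [[1, 1, 0, 0],
                  [0, 1, 0, 1],
                  [0, 0, 0, 1]]
        for row in label_components(binary):
            print(row)    # two components: the top-left blob and the right column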

    USING HAND RECOGNITION IN TELEROBOTICS

    The objective of this project is to recognize selected hand gestures and imitate the recognized hand gesture using a robot. A telerobotics system that relies on computer vision to create the human-machine interface was built. Hand tracking was used as an intuitive control interface, as it represents a natural interaction medium. The system tracks the operator's hand and the gesture it represents, and relays the appropriate signal to the robot to perform the respective action in real time. The study focuses on two gestures, open hand and closed hand, as the NAO robot is not equipped with a dexterous hand. Numerous object recognition algorithms were compared, and a SURF-based object detector was chosen. The system was successfully implemented and was able to recognize the two gestures in 3D space using images from a 2D video camera.
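    For illustration, SURF-based matching of a gesture template against a camera frame might look as follows in OpenCV. SURF ships in the contrib package (opencv-contrib-python) and may require a build with non-free algorithms enabled; the file names and thresholds are placeholders.

        # Illustrative SURF template matching with Lowe's ratio test.
        import cv2

        surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
        template = cv2.imread("open_hand_template.png", cv2.IMREAD_GRAYSCALE)
        frame = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)

        kp_t, des_t = surf.detectAndCompute(template, None)
        kp_f, des_f = surf.detectAndCompute(frame, None)

        matcher = cv2.BFMatcher()                       # L2 norm suits SURF descriptors
        good = [m for m, n in matcher.knnMatch(des_t, des_f, k=2)
                if m.distance < 0.7 * n.distance]       # ratio-test filter

        # Simple decision rule: enough consistent matches means the gesture is present.
        print("open hand detected" if len(good) > 10 else "open hand not detected")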