
    Connected Component Algorithm for Gestures Recognition

    This paper presents a head and hand gesture recognition system for Human Computer Interaction (HCI). Head and hand gestures are an important modality for human-computer interaction, and a vision-based recognition system can give computers the capability of understanding and responding to them. The aim of this paper is to propose a real-time vision system for use within a multimedia interaction environment. The recognition system consists of four modules: image capture, image extraction, pattern matching and command determination. When hand or head gestures are shown in front of the camera, the hardware performs the corresponding action: gestures are matched against a stored database of gestures using pattern matching, and the hardware is moved in the left, right, forward or backward direction accordingly. An algorithm for optimizing connected components in gesture recognition is proposed, which makes use of segmentation in two images. The connected component algorithm scans an image and groups its pixels into components based on pixel connectivity, i.e. all pixels in a connected component share similar intensity values and are in some way connected with each other. Once all groups have been determined, each pixel is labeled with a color according to the component it was assigned to.
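    The labeling pass described above — scanning the image and grouping pixels by connectivity and shared intensity — can be sketched as a flood fill. This is a minimal illustration, not the paper's implementation; the 4-connectivity and the exact-intensity match are assumptions.

```python
from collections import deque

def label_components(image):
    """Label 4-connected components of equal pixel intensity.

    `image` is a list of rows of intensity values; returns a grid of
    integer labels, one label per connected component.
    """
    h, w = len(image), len(image[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for y in range(h):
        for x in range(w):
            if labels[y][x]:
                continue  # already assigned to a component
            next_label += 1
            value = image[y][x]
            labels[y][x] = next_label
            queue = deque([(y, x)])
            while queue:
                cy, cx = queue.popleft()
                # Visit the four neighbours with the same intensity.
                for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                               (cy, cx - 1), (cy, cx + 1)):
                    if (0 <= ny < h and 0 <= nx < w
                            and not labels[ny][nx]
                            and image[ny][nx] == value):
                        labels[ny][nx] = next_label
                        queue.append((ny, nx))
    return labels
```

    Each label can then be mapped to a display color, as the abstract describes.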

    Accelerometer based gesture recognition robot

    Gesture recognition can be seen as an approach in this direction: it is the process by which the gestures made by the user are recognized by the receiver. Gestures are expressive, meaningful body motions involving physical movements of the fingers, hands, arms, head, face, or body with the intent of conveying meaningful information or interacting with the environment. They constitute one interesting small subspace of possible human motion. A gesture may also be perceived by the environment as a compression technique for the information to be transmitted elsewhere and subsequently reconstructed by the receiver. Classification: hand and arm gestures (recognition of hand poses, sign languages, and entertainment applications); head and face gestures (nodding or shaking of the head, direction of eye gaze, etc.); body gestures (full-body motion, as in tracking the movements of two people interacting outdoors, or analyzing the movements of a dancer to generate matching music and graphics). Benefits: a human-computer interface based on gestures can replace the mouse and keyboard, support pointing gestures, navigate in a virtual environment, pick up and manipulate virtual objects, and interact with the 3D world.

    Real-time head nod and shake detection for continuous human affect recognition

    Human affect recognition is the field of study associated with using automatic techniques to identify human emotion or affective state. A person's affective state is often communicated non-verbally through body language, a large part of which is the use of head gestures; almost all cultures use subtle head movements to convey meaning. Two of the most common and distinct head gestures are the head nod and the head shake. In this paper we present a robust system to automatically detect head nods and shakes. We employ the Microsoft Kinect and utilise discrete Hidden Markov Models (HMMs) as the backbone of a machine-learning-based classifier within the system. The system achieves 86% accuracy on test datasets, and results are provided.
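    As a rough sketch of how discrete HMMs can separate nods from shakes, the following scores a quantized head-motion sequence under one HMM per gesture and returns the more likely one. The two-state models, the symbol alphabet (0 = up, 1 = down, 2 = left, 3 = right) and every probability are illustrative assumptions, not the paper's trained parameters.

```python
def forward(obs, start, trans, emit):
    """Likelihood of an observation sequence under a discrete HMM
    (the forward algorithm)."""
    n = len(start)
    alpha = [start[i] * emit[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * trans[i][j] for i in range(n)) * emit[j][o]
                 for j in range(n)]
    return sum(alpha)

def classify(obs, models):
    """Pick the gesture HMM that best explains the sequence."""
    return max(models, key=lambda name: forward(obs, *models[name]))

# Assumed alphabet: 0 = head up, 1 = down, 2 = left, 3 = right.
# "nod" alternates between an up state and a down state; "shake"
# alternates between a left state and a right state.
models = {
    "nod":   ([0.5, 0.5],
              [[0.1, 0.9], [0.9, 0.1]],
              [[0.8, 0.1, 0.05, 0.05], [0.1, 0.8, 0.05, 0.05]]),
    "shake": ([0.5, 0.5],
              [[0.1, 0.9], [0.9, 0.1]],
              [[0.05, 0.05, 0.8, 0.1], [0.05, 0.05, 0.1, 0.8]]),
}
```

    Under these toy parameters, `classify([0, 1, 0, 1], models)` selects "nod", while an alternating left/right sequence selects "shake".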

    Real-Time Head Gesture Recognition on Head-Mounted Displays using Cascaded Hidden Markov Models

    Head gestures are a natural means of face-to-face communication between people, but the recognition of head gestures in the context of virtual reality, and the use of head gestures as an interface for interacting with virtual avatars and virtual environments, have rarely been investigated. In the current study, we present an approach for real-time head gesture recognition on head-mounted displays using Cascaded Hidden Markov Models. We conducted two experiments to evaluate the proposed approach. In experiment 1, we trained the Cascaded Hidden Markov Models and assessed the offline classification performance using collected head motion data. In experiment 2, we characterized the real-time performance of the approach by estimating the latency to recognize a head gesture using recorded real-time classification data. Our results show that the proposed approach is effective in recognizing head gestures and can be integrated into a virtual reality system as a head gesture interface for interacting with virtual worlds.
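    The abstract does not spell out the cascade structure, but one common arrangement, sketched below under that assumption, is a first HMM stage that separates gesture from idle head motion and a second stage that names the gesture. The single-state models and all probabilities here are purely illustrative.

```python
def forward(obs, start, trans, emit):
    """Likelihood of `obs` under a discrete HMM (forward algorithm)."""
    n = len(start)
    alpha = [start[i] * emit[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * trans[i][j] for i in range(n)) * emit[j][o]
                 for j in range(n)]
    return sum(alpha)

# Assumed symbols: 0 = head still, 1 = pitch motion, 2 = yaw motion.
stage1 = {  # gesture vs. idle gate
    "idle":    ([1.0], [[1.0]], [[0.9, 0.05, 0.05]]),
    "gesture": ([1.0], [[1.0]], [[0.1, 0.45, 0.45]]),
}
stage2 = {  # which gesture it was
    "nod":   ([1.0], [[1.0]], [[0.1, 0.8, 0.1]]),
    "shake": ([1.0], [[1.0]], [[0.1, 0.1, 0.8]]),
}

def cascaded_classify(obs):
    """Stage 1 gates on gesture-vs-idle; stage 2 names the gesture."""
    if forward(obs, *stage1["gesture"]) <= forward(obs, *stage1["idle"]):
        return None  # rejected by the first stage
    return max(stage2, key=lambda g: forward(obs, *stage2[g]))
```

    A still sequence is rejected by stage 1, so stage 2 never sees it; this gating is what keeps spurious small motions from being classified as gestures.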

    Sign Language Tutoring Tool

    In this project, we have developed a sign language tutor that lets users learn isolated signs by watching recorded videos and by trying the same signs. The system records the user's video and analyses it. If the sign is recognized, both verbal and animated feedback is given to the user. The system is able to recognize complex signs that involve both hand gestures and head movements and expressions. Our performance tests yield a 99% recognition rate on signs involving only manual gestures and an 85% recognition rate on signs that involve both manual and non-manual components, such as head movement and facial expressions. Comment: eNTERFACE'06 Summer Workshop on Multimodal Interfaces, Dubrovnik, Croatia (2007).

    Facial Feature Tracking and Occlusion Recovery in American Sign Language

    Facial features play an important role in expressing grammatical information in signed languages, including American Sign Language (ASL). Gestures such as raising or furrowing the eyebrows are key indicators of constructions such as yes-no questions. Periodic head movements (nods and shakes) are also an essential part of the expression of syntactic information, such as negation (associated with a side-to-side headshake). Therefore, identification of these facial gestures is essential to sign language recognition. One problem with detection of such grammatical indicators is occlusion recovery: if the signer's hand blocks his or her eyebrows during production of a sign, it becomes difficult to track the eyebrows. We have developed a system to detect such grammatical markers in ASL that recovers promptly from occlusion. Our system detects and tracks evolving templates of facial features, which are based on an anthropometric face model, and interprets the geometric relationships of these templates to identify grammatical markers. It was tested on a variety of ASL sentences signed by various Deaf native signers, and it detected facial gestures used to express grammatical information, such as raised and furrowed eyebrows as well as headshakes. National Science Foundation (IIS-0329009, IIS-0093367, IIS-9912573, EIA-0202067, EIA-9809340).

    Hand Gesture Recognization Using Virtual Canvas

    Computer vision based hand tracking can be used to interact with computers in a new, innovative way, avoiding the input components of a normal computer system such as the keyboard, mouse, and joystick. Gesture recognition pertains to recognizing meaningful expressions of motion by a human, involving the hands, fingers, arms, head, and/or body, and is of utmost importance in designing an intelligent and efficient human-computer interface. The applications of gesture recognition are manifold, ranging from sign language through medical rehabilitation to virtual reality. In this paper, we provide a survey on gesture recognition with particular emphasis on hand gestures and facial expressions. Existing challenges and future research possibilities are also highlighted. Gestures are expressive, meaningful body motions involving physical movements of the fingers, hands, arms, head, face, or body with the intent of conveying meaningful information or interacting with the environment. A gesture may also be perceived by the environment as a compression technique for the information to be transmitted elsewhere and subsequently reconstructed by the receiver.

    Gesture recognition using a depth camera for human robot collaboration on assembly line

    We present a framework and preliminary experimental results for technical gesture recognition using an RGB-D camera. We have studied a collaborative task between a robot and an operator: the assembly of motor hoses. The goal is to enable the robot to understand which task has just been executed by a human operator, in order to anticipate his actions, adapt its speed, and react properly if an unusual event occurs. The depth camera is placed above the operator to minimize possible occlusions on an assembly line, and we track the head and the hands of the operator using the geodesic distance between the head and the pixels of his torso. To describe his movements, we use the shape of the shortest routes joining the head and the hands. We then use a discrete HMM to learn and recognize five gestures performed during the motor hose assembly. Using gestures from the same operator for both learning and recognition, we reach a recognition rate of 93%. These results are encouraging, and ongoing work will lead us to test our setup on a larger pool of operators and to recognize the gestures in real time.
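    The geodesic distance used above can be computed by treating the depth image as a graph in which neighbouring pixels are connected only when their depths are close, so shortest paths must stay on the body surface rather than jump across gaps. A minimal sketch follows, with an assumed depth-continuity threshold and a unit step cost; the paper's exact formulation is not given in the abstract.

```python
import heapq

def geodesic_distances(depth, seed, max_jump=0.05):
    """Geodesic distance from a seed pixel over a depth map (Dijkstra).

    `depth` is a grid of depths in metres; 4-neighbours are connected
    only when their depth difference is below `max_jump`, so distances
    follow the body surface. Edge cost = 1 + depth difference.
    """
    h, w = len(depth), len(depth[0])
    dist = [[float("inf")] * w for _ in range(h)]
    dist[seed[0]][seed[1]] = 0.0
    heap = [(0.0, seed)]
    while heap:
        d, (y, x) = heapq.heappop(heap)
        if d > dist[y][x]:
            continue  # stale queue entry
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w:
                step = abs(depth[ny][nx] - depth[y][x])
                if step < max_jump and d + 1 + step < dist[ny][nx]:
                    dist[ny][nx] = d + 1 + step
                    heapq.heappush(heap, (dist[ny][nx], (ny, nx)))
    return dist
```

    Seeding at the head, the hands then show up as geodesic extrema of `dist` over the body pixels, and the shortest routes back to the seed give the path shapes the paper uses as descriptors.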

    THE USE OF CONTEXTUAL CLUES IN REDUCING FALSE POSITIVES IN AN EFFICIENT VISION-BASED HEAD GESTURE RECOGNITION SYSTEM

    This thesis explores the use of head gesture recognition as an intuitive interface for computer interaction. It presents a novel vision-based head gesture recognition system which utilizes contextual clues to reduce false positives; the system is used as a computer interface for answering dialog boxes. This work seeks to validate similar research, but focuses on more efficient techniques using everyday hardware. A survey of image processing techniques for recognizing and tracking facial features is presented, along with a comparison of several methods for tracking and identifying gestures over time. The design describes an efficient, reusable head gesture recognition system that uses lightweight algorithms to minimize resource utilization. The research conducted consists of a comparison between the base gesture recognition system and an optimized system that uses contextual clues to reduce false positives. The results confirm that simple contextual clues can lead to a significant reduction in false positives: the head gesture recognition system achieves an overall accuracy of 96% using contextual clues. In addition, the results of a usability study are presented, showing that head gesture recognition is considered an intuitive interface and is preferred over conventional input for answering dialog boxes. By providing the detailed design and architecture of a head gesture recognition system using efficient techniques and simple hardware, this thesis demonstrates the feasibility of implementing head gesture recognition as an intuitive form of interaction using preexisting infrastructure, and provides evidence that such a system is desirable.
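    As an illustration of the kind of contextual gating the thesis describes, the sketch below accepts a detected nod or shake only while a dialog box is actually awaiting an answer, and only within a short window after it appears. The specific clues and the timeout are assumptions for illustration; the thesis's actual clue set is not enumerated in the abstract.

```python
class GestureGate:
    """Accept a detected head gesture only when contextual clues make
    it plausible: a dialog box must be open, and the gesture must
    arrive within `timeout` seconds of the dialog appearing."""

    def __init__(self, timeout=10.0):
        self.timeout = timeout
        self.dialog_opened_at = None  # None = no dialog on screen

    def dialog_opened(self, now):
        self.dialog_opened_at = now

    def dialog_closed(self):
        self.dialog_opened_at = None

    def accept(self, gesture, now):
        # Discard nods/shakes detected with no dialog on screen,
        # the major source of false positives.
        if gesture not in ("nod", "shake"):
            return False
        if self.dialog_opened_at is None:
            return False
        return now - self.dialog_opened_at <= self.timeout
```

    The recognizer itself is unchanged; the gate simply suppresses its output whenever the context makes a deliberate answer gesture unlikely, which is how simple clues cut false positives without costing recognition accuracy.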
