3 research outputs found

    Recognition of Facial Movements and Hand Gestures Using Surface Electromyogram(sEMG) for HCI Based Applications

    This research reports the recognition of facial movements during unvoiced speech and the identification of hand gestures using surface electromyogram (sEMG). The paper proposes two different methods for identifying facial movements and hand gestures, which can be useful for providing simple commands and control to a computer, an important application of HCI. Experimental results demonstrate that the features of sEMG recordings are suitable for characterising muscle activation during unvoiced speech and subtle gestures. The scatter plots from the two methods demonstrate the separation of data for each corresponding vowel and each hand gesture. The results indicate that there is small inter-experiment variation but there are large inter-subject variations. This inter-subject variation may be attributable to anatomical differences and to the different speed and style of speaking of the different subjects. The proposed system provides better results when it is trained and tested by an individual user. Possible applications of this research include giving simple commands to a computer for the disabled, developing prosthetic hands, and classifying sEMG for HCI-based systems.
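    The abstract does not specify which sEMG features or classifier were used, so the following is only an illustrative sketch of the general pipeline it describes: extract a per-channel amplitude feature (RMS is a common choice for muscle activation) from a windowed recording, then assign the feature vector to the nearest class centroid learned from a user's own training data. The feature choice, the nearest-centroid rule, and the class names are assumptions, not the paper's method.

    ```python
    import math

    def rms_features(windows):
        """Root-mean-square amplitude per sEMG channel window -- a simple,
        widely used proxy for muscle-activation level (assumed feature)."""
        return [math.sqrt(sum(s * s for s in ch) / len(ch)) for ch in windows]

    def nearest_centroid(feature, centroids):
        """Assign a feature vector to the class whose stored centroid is
        closest in squared Euclidean distance (assumed classifier)."""
        def dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        return min(centroids, key=lambda label: dist(feature, centroids[label]))

    # Hypothetical per-user centroids: one 2-channel RMS vector per class.
    centroids = {"vowel_a": [1.0, 0.2], "fist_gesture": [0.2, 1.0]}
    feature = rms_features([[1.0, -1.0, 1.0, -1.0], [0.2, -0.2, 0.2, -0.2]])
    predicted = nearest_centroid(feature, centroids)
    ```

    Training and testing per user, as the abstract recommends, simply means the centroids are computed from that user's own recordings, which sidesteps the large inter-subject variation reported.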

    Connected Component Algorithm for Gestures Recognition

    This paper presents a head and hand gesture recognition system for Human Computer Interaction (HCI). Head and hand gestures are an important modality for human computer interaction, and a vision-based recognition system can give computers the capability of understanding and responding to them. The aim of this paper is to propose a real-time vision system for application within a multimedia interaction environment. The recognition system consists of four modules: capturing the image, image extraction, pattern matching and command determination. When hand or head gestures are shown in front of the camera, the hardware performs the respective action: gestures are matched against a stored database of gestures using pattern matching, and, corresponding to the matched gesture, the hardware is moved in the left, right, forward or backward direction. An algorithm for optimizing connected components in gesture recognition is proposed, which makes use of segmentation in two images. The connected component algorithm scans an image and groups its pixels into components based on pixel connectivity, i.e. all pixels in a connected component share similar pixel intensity values and are in some way connected with each other. Once all groups have been determined, each pixel is labeled according to the component it was assigned to.
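    The connected-component step the abstract describes can be sketched as a standard 4-connected labeling pass over a binary image: scan for an unlabeled foreground pixel, flood outward to every connected pixel, assign all of them the same label, and repeat with the next label. This is a generic implementation of connected-component labeling, not the paper's optimized variant, and the BFS traversal is an assumed design choice.

    ```python
    from collections import deque

    def label_components(image):
        """Label 4-connected foreground components in a binary image
        (a list of rows of 0/1 values). Returns a same-sized grid where
        each foreground pixel carries a component label (1, 2, ...) and
        background pixels stay 0."""
        rows, cols = len(image), len(image[0])
        labels = [[0] * cols for _ in range(rows)]
        current = 0
        for r in range(rows):
            for c in range(cols):
                if image[r][c] and not labels[r][c]:
                    current += 1                      # start a new component
                    queue = deque([(r, c)])
                    labels[r][c] = current
                    while queue:                      # flood-fill via BFS
                        y, x = queue.popleft()
                        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < rows and 0 <= nx < cols
                                    and image[ny][nx] and not labels[ny][nx]):
                                labels[ny][nx] = current
                                queue.append((ny, nx))
        return labels

    # Two separate blobs of foreground pixels receive two distinct labels.
    grid = [[1, 1, 0],
            [0, 0, 0],
            [0, 1, 1]]
    labeled = label_components(grid)
    ```

    Coloring each pixel by its label, as the abstract mentions, is then a simple lookup from label number to display color.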