203 research outputs found

    A fast and robust hand-driven 3D mouse

    The development of new interaction paradigms requires natural interaction: people should be able to interact with technology using the same models they use in everyday life, that is, through gestures, expressions, and voice. Following this idea, in this paper we propose a non-intrusive, vision-based tracking system able to capture hand motion and simple hand gestures. The proposed device allows the hand to be used as a "natural" 3D mouse, where the forefinger tip or the palm centre identifies a 3D marker and hand gestures simulate the mouse buttons. The approach is based on a monoscopic tracking algorithm that is computationally fast and robust against noise and cluttered backgrounds. Two image streams are processed in parallel, exploiting multi-core architectures, and their results are combined to obtain a constrained stereoscopic problem. The system has been implemented and thoroughly tested in an experimental environment where the 3D hand mouse was used to interact with objects in a virtual reality application. We also report results on the performance of the tracker, which demonstrate the precision and robustness of the proposed system.
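
    The abstract describes processing two image streams in parallel and combining the per-view results into a constrained stereo problem. The sketch below, assuming OpenCV and Python threading (none of the function names, colour thresholds, or camera indices come from the paper), illustrates the general idea of running one coarse 2D hand-point estimator per stream and fusing the results downstream.

        # Hypothetical sketch: one tracker thread per camera stream; the two 2D
        # estimates would then be triangulated (given calibrated cameras) into the
        # 3D marker used as the mouse position. Thresholds are illustrative only.
        import queue
        import threading

        import cv2
        import numpy as np

        def track_stream(cam_index, out_queue):
            """Grab frames from one camera and push a rough 2D hand-centre estimate."""
            cap = cv2.VideoCapture(cam_index)
            while cap.isOpened():
                ok, frame = cap.read()
                if not ok:
                    break
                hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
                # Very rough skin segmentation in HSV (illustrative thresholds).
                mask = cv2.inRange(hsv, np.array((0, 40, 60)), np.array((25, 180, 255)))
                m = cv2.moments(mask, binaryImage=True)
                if m["m00"] > 0:
                    out_queue.put((cam_index, m["m10"] / m["m00"], m["m01"] / m["m00"]))
            cap.release()

        left_q, right_q = queue.Queue(), queue.Queue()
        for cam, q in ((0, left_q), (1, right_q)):
            threading.Thread(target=track_stream, args=(cam, q), daemon=True).start()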

    Gravity optimised particle filter for hand tracking

    This paper presents a gravity optimised particle filter (GOPF) in which the magnitude of the gravitational force acting on every particle is proportional to its weight. GOPF attracts nearby particles and replicates new particles, effectively moving the particles towards the peak of the likelihood distribution and improving sampling efficiency. GOPF is incorporated into a technique for hand feature tracking. A fast approach to hand feature detection and labelling using convexity defects is also presented. Experimental results show that GOPF outperforms the standard particle filter and its variants, as well as the state-of-the-art CamShift-guided particle filter, while using a significantly reduced number of particles.
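
    The convexity-defect step mentioned in the abstract can be sketched directly with OpenCV; the snippet below is a minimal illustration, where the function name, the depth threshold, and the assumption that a binary hand mask is already available are ours rather than the paper's.

        # Minimal sketch of hand feature detection via convexity defects (OpenCV 4.x).
        import cv2
        import numpy as np

        def hand_features_from_mask(mask, min_defect_depth=20.0):
            """Return the largest hand contour and its deep convexity defects."""
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            if not contours:
                return None, []
            contour = max(contours, key=cv2.contourArea)            # largest blob = hand
            hull_idx = cv2.convexHull(contour, returnPoints=False)  # hull as contour indices
            defects = cv2.convexityDefects(contour, hull_idx)
            features = []
            if defects is not None:
                for start, end, far, depth in defects[:, 0]:
                    if depth / 256.0 > min_defect_depth:             # depth is fixed-point (1/256 px)
                        features.append({"fingertip_a": tuple(contour[start][0]),
                                         "fingertip_b": tuple(contour[end][0]),
                                         "valley": tuple(contour[far][0])})
            return contour, features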

    MMX-Accelerated Real-Time Hand Tracking System

    We describe a system for tracking hand gestures in real time, captured by a cheap web camera and a standard Intel Pentium based personal computer with no specialized image processing hardware. To attain the necessary processing speed, the system exploits the Multi-Media Instruction set (MMX) extensions of the Intel Pentium chip family through software, including the Microsoft DirectX SDK and the Intel Image Processing and Open Source Computer Vision (OpenCV) libraries. The system is based on the CamShift algorithm (from OpenCV) combined with a compound constant-acceleration Kalman filter. Tracking is robust and efficient and can follow hand motion at 30 fps.
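
    As a rough illustration of the CamShift-plus-Kalman combination described above (not the original MMX-optimised code), the snippet below smooths the CamShift window centre with a constant-acceleration Kalman filter; the noise covariances and frame rate are assumptions.

        # Illustrative sketch: CamShift supplies a per-frame measurement of the hand
        # window centre, which a constant-acceleration Kalman filter then smooths.
        import cv2
        import numpy as np

        dt = 1.0 / 30.0                      # ~30 fps, per the abstract
        kf = cv2.KalmanFilter(6, 2)          # state: x, y, vx, vy, ax, ay; measurement: x, y
        kf.transitionMatrix = np.array([[1, 0, dt, 0, 0.5 * dt * dt, 0],
                                        [0, 1, 0, dt, 0, 0.5 * dt * dt],
                                        [0, 0, 1, 0, dt, 0],
                                        [0, 0, 0, 1, 0, dt],
                                        [0, 0, 0, 0, 1, 0],
                                        [0, 0, 0, 0, 0, 1]], dtype=np.float32)
        kf.measurementMatrix = np.eye(2, 6, dtype=np.float32)
        kf.processNoiseCov = np.eye(6, dtype=np.float32) * 1e-3
        kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
        kf.errorCovPost = np.eye(6, dtype=np.float32)

        def track_step(prob_image, track_window):
            """One step: CamShift measurement followed by Kalman predict/correct."""
            criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
            rot_box, track_window = cv2.CamShift(prob_image, track_window, criteria)
            (cx, cy), _, _ = rot_box         # centre of the tracked rotated rectangle
            kf.predict()
            state = kf.correct(np.array([[cx], [cy]], dtype=np.float32))
            return (float(state[0, 0]), float(state[1, 0])), track_window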

    Fusing face and body gesture for machine recognition of emotions

    Research shows that humans are more likely to consider computers to be human-like when those computers understand and display appropriate nonverbal communicative behavior. Most existing systems that attempt to analyze human nonverbal behavior focus only on the face; research aiming to integrate gesture as an expressive means has only recently emerged. This paper presents an approach to automatic visual recognition of expressive face and upper-body action units (FAUs and BAUs) suitable for use in a vision-based affective multimodal framework. After describing the feature extraction techniques, classification results from three subjects are presented. First, individual classifiers are trained separately with face and body features for classification into FAU and BAU categories. Second, the same procedure is applied for classification into labeled emotion categories. Finally, face and body information is fused for classification into combined emotion categories. In our experiments, emotion classification using the two modalities achieved better recognition accuracy, outperforming classification using the face modality alone. © 2005 IEEE
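
    The fusion step described in the abstract, in which face and body features are combined before classification into emotion categories, can be sketched as a simple feature-level concatenation; the classifier choice and names below are illustrative assumptions, not the paper's implementation.

        # Hedged sketch of feature-level fusion: FAU and BAU feature vectors are
        # concatenated and a single classifier is trained on the emotion labels.
        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        def train_fused_classifier(face_feats, body_feats, emotion_labels):
            """face_feats, body_feats: (n_samples, d) arrays of per-sample AU features."""
            fused = np.hstack([face_feats, body_feats])   # feature-level fusion
            clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
            clf.fit(fused, emotion_labels)
            return clf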

    Modeling of Human Upper Body for Sign Language Recognition

    Sign Language Recognition systems require not only the hand motion trajectory to be classified but also facial features, the Human Upper Body (HUB), and the hand position with respect to other HUB parts. The head, face, forehead, shoulders, and chest are crucial parts that carry much of the positioning information of hand gestures used in gesture classification. As the main contribution of this paper, a fast and robust search algorithm for HUB parts based on head size is introduced for real-time implementations. Scaling of the extracted parts under changes in body orientation is attained using partial estimation of the face size. Tracking of the extracted parts in front and side views is achieved using CAMSHIFT [24]. The outcome makes the system applicable to real-time applications such as Sign Language Recognition (SLR) systems.
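
    As a hedged illustration of the head-size-based search idea (the exact proportions and part set are assumptions, not taken from the paper), HUB search regions can be expressed relative to a detected face bounding box:

        # Illustrative sketch: place search windows for upper-body parts relative to
        # the detected face box; all proportions below are assumed, not the paper's.
        def hub_search_regions(face_box):
            """face_box = (x, y, w, h) from a face detector; returns part regions (x, y, w, h)."""
            x, y, w, h = face_box
            return {
                "forehead":  (x, y, w, int(0.3 * h)),
                "head":      (x - int(0.1 * w), y - int(0.2 * h), int(1.2 * w), int(1.4 * h)),
                "shoulders": (x - w, y + h, 3 * w, int(0.8 * h)),
                "chest":     (x - int(0.5 * w), y + int(1.5 * h), 2 * w, 2 * h),
            }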

    A Real Time Hand Gesture Recognition System Based on DFT and SVM

    Vision-based hand gesture recognition provides a more natural and powerful means of human-computer interaction. A fast hand gesture detection process and an effective feature extraction process are presented. The proposed hand gesture recognition algorithm comprises four main steps. First, the CamShift algorithm is used to track the skin-colour region after a morphological closing. Second, to extract features, BEA is used to extract the boundary of the hand. Third, because Fourier descriptors are invariant to the starting point of the boundary, deformation, and rotation, the boundary is transformed by the Fourier transform. Finally, the outline features, which are not linearly separable, are classified using an SVM. Experimental results show an average accuracy of 93.4% and demonstrate the feasibility of the proposed system.
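
    The Fourier-descriptor step can be sketched as below, assuming OpenCV for boundary extraction and NumPy for the DFT; the normalisation and truncation length are illustrative choices rather than the paper's exact settings.

        # Minimal sketch: the hand boundary is treated as a complex signal, transformed
        # with the DFT, and normalised for translation, scale and starting-point invariance.
        import cv2
        import numpy as np

        def fourier_descriptor(mask, n_coeffs=16):
            """Compute a truncated, normalised Fourier descriptor of the largest contour."""
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
            contour = max(contours, key=cv2.contourArea).squeeze(1)   # (N, 2) boundary points
            signal = contour[:, 0] + 1j * contour[:, 1]               # boundary as complex signal
            spectrum = np.fft.fft(signal)
            spectrum[0] = 0                       # drop the DC term -> translation invariance
            mags = np.abs(spectrum)               # magnitudes -> rotation / start-point invariance
            if mags[1] > 0:
                mags = mags / mags[1]             # normalise by first harmonic -> scale invariance
            return mags[1:n_coeffs + 1]

        # The descriptors would then be classified with an SVM, e.g. sklearn.svm.SVC.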