
    A fast and robust hand-driven 3D mouse

    The development of new interaction paradigms requires natural interaction. This means that people should be able to interact with technology using the same models they use to interact with everyday life, that is, through gestures, expressions and voice. Following this idea, in this paper we propose a non-intrusive, vision-based tracking system able to capture hand motion and simple hand gestures. The proposed device allows the hand to be used as a "natural" 3D mouse, where the forefinger tip or the palm centre identifies a 3D marker and the hand gesture can be used to simulate the mouse buttons. The approach is based on a monoscopic tracking algorithm which is computationally fast and robust against noise and cluttered backgrounds. Two image streams are processed in parallel, exploiting multi-core architectures, and their results are combined to obtain a constrained stereoscopic problem. The system has been implemented and thoroughly tested in an experimental environment where the 3D hand mouse has been used to interact with objects in a virtual reality application. We also provide results on the performance of the tracker, which demonstrate the precision and robustness of the proposed system.
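    As a rough illustration only (not the authors' implementation), the sketch below mirrors the pipeline the abstract describes: the two image streams are processed in parallel and the resulting 2D detections are combined into a single 3D marker by stereo triangulation. The function name fingertip_2d, the skin-colour thresholds and the projection matrices P1/P2 are placeholders.

```python
# Minimal sketch (assumptions marked): detect a fingertip-like point in two
# synchronised camera views processed in parallel, then triangulate a 3D marker.
import cv2
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def fingertip_2d(frame_bgr):
    """Very rough fingertip proxy: topmost point of the largest skin-coloured blob."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))    # assumed skin-colour range
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)
    x, y = hand[hand[:, 0, 1].argmin(), 0]                  # topmost contour point
    return float(x), float(y)

def track_3d_marker(frame_left, frame_right, P1, P2):
    # Process the two image streams in parallel, exploiting multiple cores.
    with ThreadPoolExecutor(max_workers=2) as pool:
        pt_l, pt_r = pool.map(fingertip_2d, (frame_left, frame_right))
    if pt_l is None or pt_r is None:
        return None
    # Constrained stereo step: triangulate the matched 2D points into one 3D point.
    X = cv2.triangulatePoints(P1, P2,
                              np.array(pt_l).reshape(2, 1),
                              np.array(pt_r).reshape(2, 1))
    return (X[:3] / X[3]).ravel()                           # homogeneous -> Euclidean
```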

    Reproducible Evaluation of Pan-Tilt-Zoom Tracking

    Tracking with a Pan-Tilt-Zoom (PTZ) camera has been a research topic in computer vision for many years. However, it is very difficult to assess the progress that has been made on this topic because there is no standard evaluation methodology. The difficulty in evaluating PTZ tracking algorithms arises from their dynamic nature. In contrast to other forms of tracking, PTZ tracking involves both locating the target in the image and controlling the motors of the camera to aim it so that the target stays in its field of view. This type of tracking can only be performed online. In this paper, we propose a new evaluation framework based on a virtual PTZ camera. With this framework, tracking scenarios do not change for each experiment, and we are able to replicate online PTZ camera control and behavior, including camera positioning delays, tracker processing delays, and numerical zoom. We tested our evaluation framework with the Camshift tracker to show its viability and to establish baseline results. Comment: This is an extended version of the 2015 ICIP paper "Reproducible Evaluation of Pan-Tilt-Zoom Tracking".
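    A minimal sketch of the replayable virtual-PTZ idea, under stated assumptions: the virtual camera crops its current field of view out of pre-recorded panoramic frames and applies pan/tilt commands only after a fixed delay, so the same scenario and the same delays can be replayed for any tracker. The VirtualPTZ class, its parameters, and the Camshift baseline loop are illustrative, not the framework's actual API.

```python
# Illustrative virtual PTZ camera plus a Camshift baseline loop (not the authors' code).
import cv2
import numpy as np

class VirtualPTZ:
    def __init__(self, pano_frames, view=(640, 480), delay_frames=2):
        self.frames = pano_frames          # list of panoramic images (one recorded scenario)
        self.view_w, self.view_h = view
        self.delay = delay_frames          # simulated positioning/processing delay
        self.pending = []                  # queued (pan, tilt) commands, in pixel coordinates
        self.cx = self.cy = None           # current optical centre in panorama coordinates

    def grab(self, t):
        pano = self.frames[t]
        if self.cx is None:
            self.cx, self.cy = pano.shape[1] // 2, pano.shape[0] // 2
        if len(self.pending) >= self.delay:            # apply a command only after the delay
            self.cx, self.cy = self.pending.pop(0)
        x0 = int(np.clip(self.cx - self.view_w // 2, 0, pano.shape[1] - self.view_w))
        y0 = int(np.clip(self.cy - self.view_h // 2, 0, pano.shape[0] - self.view_h))
        return pano[y0:y0 + self.view_h, x0:x0 + self.view_w], (x0, y0)

    def command(self, cx, cy):
        self.pending.append((cx, cy))

def run_camshift_baseline(cam, hist, init_window, n_frames):
    term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    window = init_window
    for t in range(n_frames):
        view, (x0, y0) = cam.grab(t)
        hsv = cv2.cvtColor(view, cv2.COLOR_BGR2HSV)
        backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
        _, window = cv2.CamShift(backproj, window, term)
        # Ask the virtual camera to re-centre on the tracked target.
        cam.command(x0 + window[0] + window[2] // 2, y0 + window[1] + window[3] // 2)
```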

    Gravity optimised particle filter for hand tracking

    This paper presents a gravity optimised particle filter (GOPF) where the magnitude of the gravitational force for every particle is proportional to its weight. GOPF attracts nearby particles and replicates new particles as if moving the particles towards the peak of the likelihood distribution, improving the sampling efficiency. GOPF is incorporated into a technique for hand feature tracking. A fast approach to hand feature detection and labelling using convexity defects is also presented. Experimental results show that GOPF outperforms the standard particle filter and its variants, as well as the state-of-the-art CamShift-guided particle filter, while using a significantly reduced number of particles.
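    A minimal sketch of the gravity-inspired move step as described in the abstract, not the authors' exact formulation: each particle attracts the others with a force proportional to its weight, so low-weight particles drift towards the peak of the likelihood before resampling. The gravitational constant G and the softening term eps are assumed values.

```python
# Gravity-style attraction step for a particle filter (illustrative only).
import numpy as np

def gravity_step(particles, weights, G=0.05, eps=1e-6):
    """particles: (N, D) state vectors; weights: (N,) normalised likelihood weights."""
    diff = particles[None, :, :] - particles[:, None, :]   # displacement from particle i to j
    dist2 = np.sum(diff ** 2, axis=-1) + eps                # softened squared distances
    np.fill_diagonal(dist2, np.inf)                         # no self-attraction
    force = (weights[None, :] / dist2)[:, :, None] * diff   # pull of j on i, proportional to w_j
    return particles + G * force.sum(axis=1)                # move towards heavy particles
```

    In a full filter this step would sit between the weighting and resampling stages, concentrating particles near the likelihood peak so that fewer of them are needed.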

    Moving Object Tracking Using Camshift and SURF Algorithms

    Moving object tracking is a problem that plays an important role in the field of computer vision and can be applied widely in many real-world applications such as automated surveillance, human pose estimation, vehicle navigation, traffic monitoring, and robot vision. Moving object tracking requires a method with good accuracy and robustness against changes occurring in the object. This study builds an application to compare the Camshift (Continuously Adaptive Mean-Shift) algorithm with the SURF (Speeded Up Robust Features) algorithm. The application can perform tracking using both methods at once. Testing was carried out under five different object-motion conditions against three different background colours to compare the computation time and accuracy of the two methods. The results show that Camshift is more accurate than SURF, while SURF outperforms Camshift in computation time.
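    As a hedged sketch of this kind of per-frame comparison (not the study's code): a Camshift update and a SURF match against a template can each be timed per frame with OpenCV. Note that SURF is only available in opencv-contrib builds (cv2.xfeatures2d); the helper names below are invented.

```python
# Timing one frame of Camshift tracking versus one frame of SURF matching (illustrative).
import time
import cv2

def camshift_frame(hsv_frame, hist, window):
    backproj = cv2.calcBackProject([hsv_frame], [0], hist, [0, 180], 1)
    term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    _, window = cv2.CamShift(backproj, window, term)
    return window

def surf_frame(gray_frame, template_desc, surf, matcher):
    kp, desc = surf.detectAndCompute(gray_frame, None)
    if desc is None:
        return []
    matches = matcher.knnMatch(template_desc, desc, k=2)
    return [m for m, n in matches if m.distance < 0.7 * n.distance]   # Lowe's ratio test

def timed(fn, *args):
    t0 = time.perf_counter()
    out = fn(*args)
    return out, time.perf_counter() - t0   # result plus per-frame computation time
```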

    Modeling of Human Upper Body for Sign Language Recognition

    Sign Language Recognition systems require not only the hand motion trajectory to be classified but also facial features, the Human Upper Body (HUB) and the hand position with respect to other HUB parts. The head, face, forehead, shoulders and chest are crucial parts that carry a lot of positioning information about hand gestures in gesture classification. In this paper, as the main contribution, a fast and robust search algorithm for HUB parts based on head size is introduced for real-time implementations. Scaling of the extracted parts under changes in body orientation is attained using a partial estimation of the face size. Tracking of the extracted parts for the front and side views is achieved using CAMSHIFT [24]. The outcome of the system makes it applicable for real-time applications such as Sign Language Recognition (SLR) systems.
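    A minimal sketch of the head-size-driven search idea, with invented scale factors (the paper's actual ratios are not given in the abstract): rough HUB regions are derived from a detected head box and each region can then be tracked with CAMSHIFT.

```python
# Derive rough HUB search regions from a detected head box (scale factors are placeholders).
import cv2

def hub_regions(head):                          # head = (x, y, w, h) from a face/head detector
    x, y, w, h = head
    return {
        "forehead":  (x,          y,          w,     h // 3),
        "face":      (x,          y,          w,     h),
        "shoulders": (x - w,      y + h,      3 * w, h),
        "chest":     (x - w // 2, y + 2 * h,  2 * w, 2 * h),
    }                                           # boxes still need clipping to image bounds

def track_region(hsv, hist, window):
    backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    _, window = cv2.CamShift(backproj, window, term)
    return window
```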

    Dynamically parallel CAMSHIFT: GPU accelerated object tracking in digital video

    The CAMSHIFT algorithm is widely used for tracking dynamically sized and positioned objects in real-time applications. Despite its extensive study on sequential CPU platforms, research on the massively parallel Graphics Processing Unit (GPU) platform is quite limited. In this work, we designed and implemented two different parallel algorithms for CAMSHIFT using CUDA. The first design performs calculations on the GPU but requires iterative data transfers back to the host CPU for condition checking, which bottlenecks the entire program. In the second design, we propose an enhanced parallel reduction-based CAMSHIFT using dynamic parallelism to reduce the overhead of data transfers between the CPU and GPU. Test results for a 400 by 400 search window show that the second design is up to five times faster than the first design and nine times faster than a pure CPU implementation. We also investigate the deployment of dynamic parallelism for multiple object tracking using CAMSHIFT.
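    The thesis itself targets CUDA with dynamic parallelism; as a loose, hedged analogy in Python/CuPy, the sketch below keeps the mean-shift moment reductions on the GPU and copies back only scalars per iteration, illustrating the transfer-overhead issue that the second design addresses. Function and parameter names are assumptions.

```python
# Device-side mean-shift window update over a back-projection image (CuPy analogy,
# not the thesis's CUDA implementation): only scalar values cross to the host.
import cupy as cp

def meanshift_gpu(backproj_gpu, window, max_iter=10, eps=1.0):
    """backproj_gpu: 2D CuPy array of target probabilities; window: (x, y, w, h)."""
    H, W = backproj_gpu.shape
    x, y, w, h = window
    for _ in range(max_iter):
        roi = backproj_gpu[y:y + h, x:x + w]
        m00 = cp.sum(roi)                              # zeroth moment, computed on the device
        if float(m00) == 0.0:                          # only a scalar is transferred
            break
        ys, xs = cp.mgrid[0:h, 0:w]
        dx = float(cp.sum(xs * roi) / m00) - w / 2.0   # centroid offset in x
        dy = float(cp.sum(ys * roi) / m00) - h / 2.0   # centroid offset in y
        x = int(min(max(x + round(dx), 0), W - w))     # shift window towards the centroid
        y = int(min(max(y + round(dy), 0), H - h))
        if dx * dx + dy * dy < eps:                    # converged
            break
    return x, y, w, h
```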
