
    Dynamic Vision Sensor Camera Based Bare Hand Gesture Recognition


    New Method for Optimization of License Plate Recognition system with Use of Edge Detection and Connected Component

    License plate recognition plays an important role in traffic monitoring and parking management systems. In this paper, a fast, real-time method is proposed that is well suited to locating tilted and poor-quality plates. In the proposed method, the image is first converted to binary form using an adaptive threshold. Then, edge detection and morphological operations are used to locate the plate region. Finally, if the plate is tilted, the tilt is corrected. The method was tested on another paper's data set containing images with varying backgrounds, distances, and viewing angles; the correct plate extraction rate reached 98.66%. Comment: 3rd IEEE International Conference on Computer and Knowledge Engineering (ICCKE 2013), October 31 & November 1, 2013, Ferdowsi University of Mashhad

    Spatial Programming for Industrial Robots through Task Demonstration

    We present an intuitive system for programming industrial robots using markerless gesture recognition and mobile augmented reality, in the sense of programming by demonstration. The approach covers gesture-based task definition and adaptation through human demonstration, as well as task evaluation through augmented reality. A 3D motion-tracking system and a handheld device form the basis of the presented spatial programming system. In this publication, we present a prototype for programming an assembly sequence consisting of several pick-and-place tasks. A scene reconstruction provides pose estimation of known objects using the handheld's 2D camera. The programmer can thus define the program through natural bare-hand manipulation of these objects, with direct visual feedback in the augmented reality application. The program can be adapted by gestures and subsequently transmitted to an arbitrary industrial robot controller through a unified interface. Finally, we discuss an application of the presented spatial programming approach to robot-based welding tasks.

    Vision-Based Three Dimensional Hand Interaction In Markerless Augmented Reality Environment

    The advent of augmented reality (AR) enables virtual objects to be superimposed on the real world and provides a new way to interact with them. An AR system requires an indicator to determine how the virtual objects are aligned in the real world. The indicator must first be obtained to access a particular AR system, and it may be inconvenient to have it in reach at all times. The human hand, being part of the human body, can solve this problem; it is also a promising tool for interaction with virtual objects in an AR environment. This thesis presents a markerless AR system that uses the outstretched hand for registration of virtual objects in the real environment and enables users to have three-dimensional (3D) interaction with the augmented virtual objects. To employ the hand for registration and interaction in AR, the hand postures and gestures that the user performs have to be recognized.

    GIFT: Gesture-Based Interaction by Fingers Tracking, an Interaction Technique for Virtual Environment

    Three-dimensional (3D) interaction is the most plausible form of human interaction inside a Virtual Environment (VE). The rise of Virtual Reality (VR) applications across domains demands a feasible 3D interface. Ensuring immersion in a virtual space, this paper presents an interaction technique in which manipulation is performed through perceptive gestures of the two dominant fingers: the thumb and index. Two fingertip thimbles made of paper are used to trace the states and positions of the fingers with an ordinary camera. Based on the finger positions, the basic interaction tasks (selection, scaling, rotation, translation, and navigation) are performed by intuitive finger gestures. By avoiding a gestural database, the feature-free detection of the fingers guarantees faster interaction. Moreover, the system is user-independent and depends neither on the size nor on the color of the user's hand. The technique is implemented for evaluation in a case-study project, Interactions by the Gestures of Fingers (IGF). The IGF application traces finger gestures using OpenCV libraries at the back end; at the front end, the objects of the VE are rendered accordingly using the Open Graphics Library (OpenGL). The system was assessed under moderate lighting by a group of 15 users, and the usability of the technique was further investigated in games. The evaluations revealed that the approach is suitable for VR applications in terms of both cost and accuracy.

    Two Hand Gesture Based 3D Navigation in Virtual Environments

    Natural interaction is gaining popularity due to its simple, attractive, and realistic nature, which enables direct Human-Computer Interaction (HCI). In this paper, we present a novel two-hand-gesture-based interaction technique for three-dimensional (3D) navigation in Virtual Environments (VEs). The system uses computer vision techniques to detect hand gestures (colored thumbs) in the real scene and performs navigation tasks (forward, backward, up, down, left, and right) in the VE. The proposed technique also allows users to efficiently control their speed during navigation. The technique is implemented in a VE for experimental purposes, and forty (40) participants took part in the experimental study. Experiments revealed that the proposed technique is feasible and easy to learn and use, with low cognitive load on users. Finally, gesture recognition engines were used to assess the accuracy and performance of the proposed gestures. kNN achieved a higher accuracy rate (95.7%) than SVM (95.3%); kNN also performed better in training time (3.16 s vs. 6.40 s) and prediction speed (6,600 obs/s vs. 2,900 obs/s).
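    The kNN engine that won the comparison above is simple enough to sketch directly: a majority vote among the nearest training samples under Euclidean distance. The feature representation and the value of k are not given in the abstract, so both are placeholders here:

```python
import numpy as np

def knn_predict(train_X, train_y, query, k=3):
    """Label a gesture feature vector by majority vote among its k
    nearest training samples (Euclidean distance). k=3 is an assumed
    value; the study does not state its choice."""
    dists = np.linalg.norm(train_X - query, axis=1)
    nearest = train_y[np.argsort(dists)[:k]]
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]
```

Note that kNN has essentially no training phase (it just stores the samples), which is consistent with its shorter reported training time, while its per-query cost grows with the training-set size.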