Fast and reliable recognition of human motion from motion trajectories using wavelet analysis
Recognition of human motion provides hints for understanding human activities and opens opportunities for the development of new human-computer interfaces. Recent studies, however, are limited to extracting motion history images and recognizing gestures or locomotion of human body parts. Although the approach employed, i.e. transforming the 3D space-time (x-y-t) analysis into a 2D image analysis, is faster than analyzing 3D motion features, it is inherently less accurate and less robust. In this paper, a fast trajectory-classification algorithm for interpreting the movement of human body parts using wavelet analysis is proposed to increase the accuracy and robustness of human motion recognition. By tracking the human body in real time, the motion trajectory (x-y-t) can be extracted. The motion trajectory is then decomposed into wavelets that form a set of wavelet features. Classification based on these wavelet features can then interpret the human motion. An online hand-drawn digit recognition system was built using the proposed algorithm. Experiments show that the proposed algorithm recognizes digits from human movement accurately in real time.
Postprint. The 2004 IFIP International Conference on Artificial Intelligence Applications and Innovation, Toulouse, France, 22-27 August 2004. In Proceedings of the IFIP International Conference on Artificial Intelligence Applications and Innovation, 2004, p. 1-1.
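The wavelet feature extraction described above can be sketched in a few lines. The abstract does not specify the wavelet family or feature layout, so the single-axis Haar decomposition below is purely an illustrative assumption: each level splits the trajectory signal into averages (approximation) and differences (detail), and the detail coefficients across levels form the feature vector.

```python
import numpy as np

def haar_features(trajectory, levels=3):
    """Decompose a 1-D motion signal (e.g. the x-coordinates of a tracked
    hand over time) into multi-level Haar wavelet coefficients and return
    them as a flat feature vector. For this simple sketch the trajectory
    length should be a power of two."""
    x = np.asarray(trajectory, dtype=float)
    feats = []
    for _ in range(levels):
        approx = (x[0::2] + x[1::2]) / np.sqrt(2)  # low-pass: pairwise averages
        detail = (x[0::2] - x[1::2]) / np.sqrt(2)  # high-pass: pairwise differences
        feats.append(detail)   # keep detail coefficients as features
        x = approx             # recurse on the coarser approximation
    feats.append(x)            # coarsest approximation completes the vector
    return np.concatenate(feats)

# Example: x-coordinates of a tracked hand over 8 frames
traj_x = [0.0, 1.0, 2.0, 4.0, 4.0, 3.0, 1.0, 0.0]
fv = haar_features(traj_x, levels=3)
print(fv.shape)  # (8,) -- the orthonormal transform keeps the signal length
```

Because the Haar transform with the 1/sqrt(2) normalization is orthonormal, the feature vector preserves the signal's energy, which makes the features directly comparable across trajectories of the same length.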
Generating 3D product design models in real-time using hand motion and gesture
This thesis was submitted for the degree of Master of Philosophy and awarded by Brunel University. Three-dimensional product design models are widely used in conceptual design and in the early stage of prototyping during the design process. A product design specification often demands that a substantial number of 3D models be constructed within a short period of time. Current methods begin with designers sketching product concepts in 2D using pencil and paper, which are then translated into 3D models by a design individual with CAD expertise using a 3D modelling software package such as Pro Engineer, Solid Works, Auto CAD, etc. Several novel methods have been used to incorporate hand motion as a way of interacting with computers. There are three main types of technology available to capture motion data, capable of translating human motion into numeric data which can be read by a computer system: first, hand gesture glove-based systems such as "Cyberglove", generally used to capture hand gesture and joint-angle information; second, full-body motion capture systems, optical and non-optical; and finally, vision-based gesture recognition systems which provide full degree-of-freedom (DOF) hand motion estimation. There has yet to be a method using any of the above-mentioned input devices to rapidly produce 3D product design models in real time using hand motion and gestures. In this research, a novel method is presented, using a motion capture system to capture hand gestures and motion in real time to recreate 3D curves and surfaces which can be translated into 3D product design models. The main aim of this research is to develop a hand motion and gesture-based rapid 3D product modelling method, allowing designers to interactively sketch out 3D concepts in real time using a virtual workspace.
A database of hand signs was built for both architectural hand signs (a preliminary study) and product design hand signs. A marker-set model with a total of eight markers (five on the left hand and three on the right hand/marker pen) was designed and used to capture hand gestures with an optical motion capture system. A preliminary testing session was successfully completed to determine whether the motion capture system would be suitable for a real-time application, by effectively modelling a train station in an offline state using hand motion and gesture. An OpenGL software application was programmed using C++ and the Microsoft Foundation Classes, which was used to communicate and pass information about captured motion from the EVaRT system to the user.
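Recreating a smooth 3D curve from a sparse sequence of captured marker positions is typically done by spline interpolation through the sampled points. The thesis does not name its interpolation scheme, so the Catmull-Rom segment below is an illustrative assumption; its useful property here is that the curve passes exactly through the captured points p1 and p2.

```python
import numpy as np

def catmull_rom(p0, p1, p2, p3, n=10):
    """Sample n points on the Catmull-Rom spline segment between p1 and p2,
    using p0 and p3 as neighbouring control points to set the tangents."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)

# Four hypothetical 3D marker positions captured from a pen-tip trajectory
pts = [np.array(p, float) for p in [(0, 0, 0), (1, 2, 0), (3, 3, 1), (4, 1, 2)]]
seg = catmull_rom(*pts)
print(seg[0], seg[-1])  # segment starts at p1 = (1,2,0) and ends at p2 = (3,3,1)
```

Sliding this four-point window along the full marker stream yields a continuous curve that could then be swept or lofted into surfaces for the 3D model.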
Real-time motion-based hand gestures recognition from time-of-flight video
The final publication is available at Springer via http://dx.doi.org/10.1007/s11265-015-1090-5
This paper presents an innovative solution based on Time-Of-Flight (TOF) video technology for motion-pattern detection in real-time dynamic hand gesture recognition. The resulting system detects motion-based hand gestures taking depth images as input. The recognizable motion patterns are modeled on the basis of the human arm's anatomy and its degrees of freedom, generating a collection of synthetic motion patterns that is compared with the captured input patterns in order to classify the input gesture. For the evaluation of our system a significant collection of gestures has been compiled, giving results for 3D pattern classification as well as a comparison with the results using only 2D information.
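Comparing a captured input pattern against a collection of synthetic templates can be framed as nearest-template classification. The paper's actual matching metric is not stated, so the dynamic time warping (DTW) distance below is an illustrative choice; DTW is a common pick because captured gestures rarely match template timing exactly.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D motion patterns,
    tolerant of differences in speed and duration."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(pattern, templates):
    """Return the label of the synthetic template nearest to the input."""
    return min(templates, key=lambda name: dtw_distance(pattern, templates[name]))

# Hypothetical 1-D synthetic motion templates (e.g. hand height over time)
templates = {"wave": [0, 1, 0, -1, 0, 1, 0], "push": [0, 1, 2, 3, 3, 3, 3]}
print(classify([0, 1, 0, -1, 0], templates))  # "wave"
```

In the paper's 3D setting the per-sample cost would be a distance between 3D motion samples rather than a scalar difference, but the alignment logic is unchanged.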
Robust Hand Motion Capture and Physics-Based Control for Grasping in Real Time
Hand motion capture technologies are being explored due to high demand in fields such as video games, virtual reality, sign language recognition, human-computer interaction, and robotics. However, existing systems suffer from a few limitations: they are high-cost (expensive capture devices), intrusive (additional wear-on sensors or complex configurations), and restrictive (limited motion varieties and restricted capture space). This dissertation focuses on exploring algorithms and applications for a hand motion capture system that is low-cost, non-intrusive, low-restriction, high-accuracy, and robust.
More specifically, we develop a real-time and fully automatic hand tracking system using a low-cost depth camera. We first introduce an efficient shape-indexed cascaded pose regressor that directly estimates 3D hand poses from depth images. A unique property of our hand pose regressor is that it utilizes a low-dimensional parametric hand geometric model to learn 3D shape-indexed features robust to variations in hand shapes, viewpoints and hand poses. We further introduce a hybrid tracking scheme that effectively complements our hand pose regressor with model-based hand tracking. In addition, we develop a rapid 3D hand shape modeling method that uses a small number of depth images to accurately construct a subject-specific skinned mesh model for hand tracking. This step not only automates the whole tracking system but also improves the robustness and accuracy of model-based tracking and hand pose regression.
Additionally, we propose a physically realistic human grasping synthesis method that is capable of grasping a wide variety of objects. Given an object to be grasped, our method computes the required controls (e.g. forces and torques) that advance the simulation to achieve realistic grasping. Our method combines the power of data-driven synthesis and physics-based grasping control. We first introduce a data-driven method to synthesize a realistic grasping motion from large sets of prerecorded grasping motion data. We then transform the synthesized kinematic motion into a physically realistic one by utilizing our online physics-based motion control method. In addition, we provide a performance interface which allows the user to act out motions before a depth camera to control a virtual object.
Human-Machine Interface for Remote Training of Robot Tasks
Regardless of their industrial or research application, the streamlining of robot operations is limited by the proximity of experienced users to the actual hardware. Be it massive open online robotics courses, crowd-sourcing of robot task training, or remote research on massive robot farms for machine learning, the need to create an apt remote human-machine interface is quite prevalent. The paper at hand proposes a novel solution to the programming/training of remote robots employing an intuitive and accurate user interface which offers all the benefits of working with real robots without imposing delays and inefficiency. The system includes: a vision-based 3D hand detection and gesture recognition subsystem, a simulated digital twin of a robot as visual feedback, and the "remote" robot learning/executing trajectories using dynamic motion primitives. Our results indicate that the system is a promising solution to the problem of remote training of robot tasks.
Comment: Accepted in IEEE International Conference on Imaging Systems and Techniques - IST201
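The dynamic motion primitives mentioned above encode a demonstrated trajectory as a stable point attractor toward a goal plus a learned forcing term, so the robot can replay a taught trajectory with a different start or goal. A minimal single-DoF sketch follows; the gains, basis-function layout, and parameter values are conventional textbook choices, not taken from the paper.

```python
import numpy as np

def dmp_rollout(y0, goal, weights, tau=1.0, dt=0.01,
                alpha=25.0, beta=6.25, alpha_x=3.0):
    """Roll out a single-DoF discrete dynamic movement primitive.
    Transformation system: tau*dv = alpha*(beta*(goal - y) - v) + f(x)
    Canonical system:      tau*dx = -alpha_x * x
    `weights` parametrize the forcing term f via Gaussian basis functions;
    zero weights reduce the DMP to a plain critically damped attractor."""
    n = len(weights)
    centers = np.exp(-alpha_x * np.linspace(0.0, 1.0, n))  # basis centers in x
    widths = n ** 1.5 / centers                            # heuristic widths
    y, v, x = y0, 0.0, 1.0
    path = [y]
    for _ in range(int(1.0 / dt)):
        psi = np.exp(-widths * (x - centers) ** 2)
        f = x * (goal - y0) * (psi @ weights) / (psi.sum() + 1e-10)
        dv = (alpha * (beta * (goal - y) - v) + f) / tau
        v += dv * dt
        y += v * dt / tau
        x += -alpha_x * x * dt / tau   # phase decays, silencing the forcing term
        path.append(y)
    return np.array(path)

# With zero weights the primitive converges smoothly from 0 toward the goal 1
traj = dmp_rollout(y0=0.0, goal=1.0, weights=np.zeros(10))
print(traj[0], traj[-1])  # starts at 0, ends near the goal
```

Training would fit `weights` by regression on a demonstrated trajectory; at execution time, only `y0` and `goal` need to change to generalize the learned motion.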
Augmented Reality for Information Kiosk
Nowadays people widely use the internet for purchasing a home, a car, furniture, etc. To obtain information about such a product, users rely on advertisements, pamphlets, and various other sources, or obtain the information from a salesperson. However, to receive such product information on a computer or any other device, users have to perform many mouse and keyboard actions over and over, which wastes time and is inconvenient. This repetition increases the time needed to gather information about a particular product, and users are also unable to determine a product's inner dimensions from images alone. These dimensions can be conveyed using 3D motion tracking of human movements and Augmented Reality. Based on a 3D motion tracking and Augmented Reality application, we introduce a kind of interaction not seen before. The main aim of the proposed system is to demonstrate that better interaction features, in showrooms as well as in online shopping, could improve sales by presenting the purchased item more fully. With the help of our project the customer will be able to view his choices on screen and thereby make better decisions. In this paper, we propose a hand gesture detection and recognition method to detect hand movements; through these hand gestures, control commands are sent to the system that enable the user to retrieve and access data from the Information Kiosk for a better purchase decision.
Keywords: 3D motion tracking, Augmented Reality, Hand Gestures, Information Kiosk.
3D Hand Tracking
The hand is often considered one of the most natural and intuitive interaction modalities for human-to-human interaction. In human-computer interaction (HCI), proper 3D hand tracking is the first step in developing a more intuitive HCI system which can be used in applications such as gesture recognition, virtual object manipulation and gaming. However, accurate 3D hand tracking remains a challenging problem due to the hand's deformation, appearance similarity, high inter-finger occlusion and complex articulated motion. Furthermore, 3D hand tracking is also interesting from a theoretical point of view as it deals with three major areas of computer vision: segmentation (of the hand), detection (of hand parts), and tracking (of the hand). This thesis proposes a region-based skin color detection technique, plus a model-based and an appearance-based 3D hand tracking technique, to bring human-computer interaction applications one step closer. All techniques are briefly described below. Skin color provides a powerful cue for complex computer vision applications. Although skin color detection has been an active research area for decades, the mainstream technology is based on individual pixels. This thesis presents a new region-based technique for skin color detection which outperforms the current state-of-the-art pixel-based skin color detection technique on the popular Compaq dataset (Jones & Rehg 2002). The proposed technique achieves a 91.17% true positive rate with a 13.12% false negative rate on the Compaq dataset, tested over approximately 14,000 web images. Hand tracking is not a trivial task as it requires tracking the 27 degrees of freedom of the hand. Hand deformation, self-occlusion, appearance similarity and irregular motion are major problems that make 3D hand tracking very challenging. This thesis proposes a model-based 3D hand tracking technique, which is improved by using the proposed depth-foreground-background feature, a palm deformation module and a context cue.
However, the major problem of model-based techniques is that they are computationally expensive. This can be overcome by discriminative techniques, as described below. Discriminative techniques (for example, random forests) are good for hand part detection; however, they fail due to sensor noise and high inter-finger occlusion. Additionally, these techniques have difficulty modelling kinematic or temporal constraints. Although model-based descriptive (for example, Markov Random Field) or generative (for example, Hidden Markov Model) techniques utilize kinematic and temporal constraints well, they are computationally expensive and hardly recover from tracking failure. This thesis presents a unified framework for 3D hand tracking, using the best of both methodologies, which outperforms the current state-of-the-art 3D hand tracking techniques. The proposed 3D hand tracking techniques can be used to extract accurate hand movement features and enable complex human-machine interaction such as gaming and virtual object manipulation.
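The pixel-based skin color detection that the thesis improves upon can be illustrated with a classic per-pixel RGB rule. The thresholds below follow a commonly cited daylight heuristic and are emphatically not the thesis's region-based method; they show the baseline idea of classifying each pixel independently of its neighbours.

```python
import numpy as np

def skin_mask(rgb):
    """Per-pixel RGB skin classifier using classic daylight thresholds
    (a widely used heuristic, NOT the thesis's region-based technique).
    `rgb` is an (H, W, 3) uint8 image; returns an (H, W) boolean mask."""
    r = rgb[..., 0].astype(int)   # cast to int to avoid uint8 overflow
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return ((r > 95) & (g > 40) & (b > 20)
            & (np.maximum(np.maximum(r, g), b)
               - np.minimum(np.minimum(r, g), b) > 15)   # enough color spread
            & (np.abs(r - g) > 15) & (r > g) & (r > b))  # red-dominant pixels

# One skin-like pixel and one green pixel
img = np.array([[[200, 120, 90], [50, 200, 50]]], dtype=np.uint8)
print(skin_mask(img))  # [[ True False]]
```

A region-based approach like the thesis's would additionally exploit spatial coherence, e.g. by grouping candidate pixels and classifying regions, which is what lifts accuracy beyond this per-pixel baseline.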
Hand Gesture Modeling and Recognition for Human and Robot Interactive Assembly Using Hidden Markov Models
Gesture recognition is essential for human-robot collaboration. Within an industrial hybrid assembly cell, the performance of such a system significantly affects the safety of human workers. This work presents an approach to recognizing hand gestures accurately during an assembly task performed in collaboration with a robot co-worker. We have designed and developed a sensor system for measuring natural human-robot interactions. The position and rotation information of a human worker's hands and fingertips are tracked in 3D space while completing a task. A modified chain-code method is proposed to describe the motion trajectory of the measured hands and fingertips. The Hidden Markov Model (HMM) method is adopted to recognize patterns in the data streams and identify workers' gesture patterns and assembly intentions. The effectiveness of the proposed system is verified by experimental results. The outcome demonstrates that the proposed system is able to automatically segment the data streams and recognize the gesture patterns they represent with reasonable accuracy.
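The modified chain-code method builds on the classic Freeman chain code, which quantizes each step of a trajectory into one of eight discrete directions; the resulting symbol sequence is exactly the kind of discrete observation stream an HMM consumes. A plain (unmodified) 2D version can be sketched as follows; the paper's specific modification is not reproduced here.

```python
import numpy as np

def chain_code(points):
    """Encode a 2-D trajectory as an 8-direction Freeman chain code.
    Direction 0 = east, numbered counter-clockwise in 45-degree steps,
    so 2 = north, 4 = west, 6 = south."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        angle = np.arctan2(y1 - y0, x1 - x0)            # step heading in [-pi, pi]
        codes.append(int(round(angle / (np.pi / 4))) % 8)  # quantize to 45-degree bins
    return codes

# A square traced counter-clockwise: east, north, west, south
print(chain_code([(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]))  # [0, 2, 4, 6]
```

In the paper's setting, each tracked hand or fingertip trajectory would be encoded this way (extended to 3D and modified per their method), and the discrete symbol sequences would then serve as observation streams for training and decoding the HMMs.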