Developing context sensitive HMM gesture recognition
We are interested in methods for building cognitive vision systems that understand the activities of expert operators for our ActIPret system. Our approach to the gesture recognition required here is to learn generic models and to develop methods for contextual biasing of the visual interpretation in the online system. The paper first introduces issues in the development of such flexible and robust gesture learning and recognition, with a brief discussion of related research. Second, the computational model for the Hidden Markov Model (HMM) is described, and results with varying amounts of noise in the training and testing phases are given. Third, extensions of this work that allow both top-down bias in the contextual processing and bottom-up augmentation by moment-to-moment observation of the hand trajectory are described.
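As a concrete illustration of the kind of HMM-based recognition described above, the sketch below scores a discrete observation sequence with the scaled forward algorithm and classifies a gesture by maximum likelihood over per-class models. The discrete symbol alphabet, the model sizes, and the `classify` helper are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the forward algorithm (scaled to avoid underflow).

    obs: sequence of observation symbol indices
    pi:  (N,) initial state distribution
    A:   (N, N) transition matrix, A[i, j] = P(state j | state i)
    B:   (N, M) emission matrix,   B[i, k] = P(symbol k | state i)
    """
    alpha = pi * B[:, obs[0]]
    log_lik = 0.0
    for t in range(1, len(obs)):
        c = alpha.sum()          # scaling constant for this step
        log_lik += np.log(c)
        alpha = (alpha / c) @ A * B[:, obs[t]]
    return log_lik + np.log(alpha.sum())

def classify(obs, models):
    """Pick the gesture class whose HMM (pi, A, B) gives the
    observed symbol sequence the highest likelihood."""
    return max(models, key=lambda name: forward_log_likelihood(obs, *models[name]))
```

Training one HMM per gesture class and taking the maximum-likelihood model is the standard recognition setup; noise in training or testing, as studied in the paper, would enter through the observation sequences fed to `classify`.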
Developing task-specific RBF hand gesture recognition
In this paper we develop hand gesture learning and recognition techniques to be used in advanced vision applications, such as the ActIPret system for understanding the activities of expert operators for education and training. Radial Basis Function (RBF) networks have been developed for reactive vision tasks and work well, exhibiting fast learning and classification. The specific extensions of our existing work reported here to allow more general 3-D activity analysis are:
1) action-based representation in a hand frame-of-reference by pre-processing of the trajectory data;
2) adaptation of the time-delay RBF network scheme to use the relative velocity information from the 3-D trajectory in gesture recognition; and
3) development of multi-task support in the classifications by exploiting prototype similarities extracted from different combinations of direction (target tower) and height (target pod) for the hand trajectory.
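Extensions 1 and 2 above might be sketched roughly as follows: world-frame 3-D positions are differenced into velocities and then rotated into a hand-centred frame. The particular frame convention used here (forward axis from the mean motion direction, vertical kept upright) is an assumption for illustration only; the paper's exact hand frame-of-reference definition is not reproduced.

```python
import numpy as np

def hand_frame_velocities(traj, dt=1.0):
    """Convert a 3-D hand trajectory of shape (T, 3) into relative
    velocities expressed in a hand-centred frame.

    Assumes the motion is not purely vertical (the forward axis is
    built from the mean motion direction, with world z kept as up).
    """
    vel = np.diff(traj, axis=0) / dt             # (T-1, 3) world-frame velocities
    mean_dir = vel.mean(axis=0)
    x = mean_dir / np.linalg.norm(mean_dir)      # forward axis of the hand frame
    up = np.array([0.0, 0.0, 1.0])
    y = np.cross(up, x)                          # lateral axis (zero if motion is vertical)
    y /= np.linalg.norm(y)
    z = np.cross(x, y)                           # frame-consistent vertical axis
    R = np.stack([x, y, z])                      # rows are the hand-frame axes
    return vel @ R.T                             # velocities rotated into the hand frame
```

Such action-based, position-independent features are what would then feed the time-delay RBF network's input window.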
The Role of Task Control and Context in Learning to Recognise Gesture
We are interested in methods for building more intelligent cognitive vision systems in our ActIPret project, whose aim is to understand the activities of expert operators for teaching and education. Our approach is to learn models for the components and, later, the task and context of the visual processing in the ActIPret system. The paper first introduces general issues and some approaches for the example of gesture learning and recognition. Second, aspects of our cognitive vision framework are described as they are relevant to the evaluation of the two approaches tested here. Third, the computational models for the time-delay RBF (TDRBF) network and the Hidden Markov Model (HMM) are described and results given. Finally, extensions of this work and conclusions for system integration of the results are discussed in the light of task-based control and contextual processing.
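A minimal sketch of the TDRBF idea compared above: each input is a sliding window of delayed feature vectors, matched against stored prototype windows through Gaussian activations. The nearest-prototype readout and the fixed width `sigma` are simplifying assumptions, not the trained network of the paper.

```python
import numpy as np

class TimeDelayRBF:
    """Time-delay RBF classifier sketch: an input window of D
    consecutive feature vectors is flattened and compared to
    prototype windows with Gaussian kernels. Using the training
    windows themselves as prototypes is an illustrative choice."""

    def __init__(self, windows, labels, sigma=1.0):
        self.protos = windows.reshape(len(windows), -1)  # flatten each (D, F) window
        self.labels = labels
        self.sigma = sigma

    def classify(self, window):
        x = window.reshape(-1)
        d2 = ((self.protos - x) ** 2).sum(axis=1)        # squared distances to prototypes
        act = np.exp(-d2 / (2 * self.sigma ** 2))        # Gaussian RBF activations
        # nearest-prototype readout: label of the most active unit
        return self.labels[int(np.argmax(act))]
```

In a full TDRBF network the activations would feed trained linear output weights rather than a hard argmax, which is where the prototype-similarity sharing across tasks mentioned earlier would come in.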