Learning Silhouette Features for Control of Human Motion
We present a vision-based performance interface for controlling animated human characters. The system interactively combines information about the user's motion contained in silhouettes from three viewpoints with domain knowledge contained in a motion capture database to produce an animation of high quality. Such an interactive system might be useful for authoring, for teleconferencing, or as a control interface for a character in a game. In our implementation, the user performs in front of three video cameras; the resulting silhouettes are used to estimate his orientation and body configuration based on a set of discriminative local features. Those features are selected by a machine-learning algorithm during a preprocessing step. Sequences of motions that approximate the user's actions are extracted from the motion database and scaled in time to match the speed of the user's motion. We use swing dancing, a complex human motion, to demonstrate the effectiveness of our approach. We compare our results to those obtained with a set of global features, Hu moments, and ground truth measurements from a motion capture system.
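The retrieval step described in this abstract, matching silhouette-derived feature vectors against frames of a motion capture database, can be sketched as a nearest-neighbour lookup. Everything below (the feature dimensionality, the toy database, the function names) is invented for illustration and is not the paper's actual pipeline:

```python
import numpy as np

def build_feature_index(db_features):
    """Stack per-frame silhouette feature vectors from the motion database.

    db_features: list of 1-D feature vectors, one per database frame.
    """
    return np.stack(db_features)

def nearest_motion_frames(index, query, k=3):
    """Return indices of the k database frames whose silhouette features
    are closest (Euclidean distance) to the query frame's features."""
    dists = np.linalg.norm(index - query, axis=1)
    return np.argsort(dists)[:k]

# Toy database of 5 frames with 4-D feature vectors.
db = build_feature_index([np.array([0.0, 0.0, 0.0, 0.0]),
                          np.array([1.0, 1.0, 1.0, 1.0]),
                          np.array([0.0, 1.0, 0.0, 1.0]),
                          np.array([2.0, 2.0, 2.0, 2.0]),
                          np.array([1.0, 0.0, 1.0, 0.5])])
top = nearest_motion_frames(db, np.array([0.9, 1.0, 1.1, 1.0]), k=2)
```

In the actual system, such per-frame matches would then be assembled into temporally coherent motion sequences; this sketch covers only the per-frame lookup.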
The Meaning of Action: A Review on Action Recognition and Mapping
In this paper, we analyze the different approaches taken to date within the computer vision, robotics and artificial intelligence communities for the representation, recognition, synthesis and understanding of action. We deal with action at different levels of complexity and provide the reader with the necessary related literature references. We put the literature references further into context and outline a possible interpretation of action by taking into account the different aspects of action recognition, action synthesis and task-level planning.
Recognizing complex faces and gaits via novel probabilistic models
In the field of computer vision, developing automated systems to recognize people under unconstrained scenarios is a partially solved problem. In unconstrained scenarios, a number of common variations and complexities, such as occlusion, illumination and cluttered background, impose vast uncertainty on the recognition process. Among the various biometrics that have been emerging recently, this dissertation focuses on two of them, namely face and gait recognition.
Firstly, we address the problem of recognizing faces with major occlusions amidst other variations such as pose, scale, expression and illumination using a novel PRObabilistic Component based Interpretation Model (PROCIM) inspired by key psychophysical principles that are closely related to reasoning under uncertainty. The model employs Bayesian Networks to establish, learn, interpret and exploit intrinsic similarity mappings from the face domain. Then, by incorporating efficient inference strategies, robust decisions are made for successfully recognizing faces under uncertainty. PROCIM reports improved recognition rates over recent approaches.
Secondly, we address the emerging gait recognition problem and show that PROCIM can be easily adapted to the gait domain as well. We scientifically define and formulate sub-gaits and propose a novel modular training scheme to efficiently learn subtle sub-gait characteristics from the gait domain. Our results show that the proposed model is robust to several uncertainties and yields significant recognition performance. Apart from PROCIM, we finally show how simple component-based gait reasoning can be coherently modeled using the recently prominent Markov Logic Networks (MLNs) by intuitively fusing imaging, logic and graphs.
We have discovered that face and gait domains exhibit interesting similarity mappings between object entities and their components. We have proposed intuitive probabilistic methods to model these mappings to perform recognition under various uncertainty elements. Extensive experimental validations justify the robustness of the proposed methods over the state-of-the-art techniques.
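The component-based probabilistic reasoning that PROCIM builds on can be illustrated with a much simpler stand-in: combining per-component match evidence under a naive conditional-independence assumption. The identities, components and probabilities below are invented for illustration; the thesis uses full Bayesian Networks, not this simplified model:

```python
import math

# Hypothetical per-component match likelihoods P(observation | identity)
# for two candidate identities; components loosely stand for face parts.
likelihoods = {
    "alice": {"eyes": 0.8, "nose": 0.7, "mouth": 0.6},
    "bob":   {"eyes": 0.3, "nose": 0.4, "mouth": 0.5},
}
prior = {"alice": 0.5, "bob": 0.5}

def posterior(observed_components):
    """Combine component evidence under a conditional-independence
    (naive Bayes) assumption and normalise to a posterior over identities."""
    scores = {}
    for ident in likelihoods:
        log_p = math.log(prior[ident])
        for comp in observed_components:
            log_p += math.log(likelihoods[ident][comp])
        scores[ident] = math.exp(log_p)
    z = sum(scores.values())
    return {ident: s / z for ident, s in scores.items()}

# Mouth occluded: simply drop that component from the evidence.
post = posterior(["eyes", "nose"])
```

Note how occlusion is handled by omitting the occluded component rather than forcing a bad match, which is one intuition behind component-based recognition under uncertainty.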
Data-Driven Grasp Synthesis - A Survey
We review the work on data-driven grasp synthesis and the methodologies for sampling and ranking candidate grasps. We divide the approaches into three groups based on whether they synthesize grasps for known, familiar or unknown objects. This structure allows us to identify common object representations and perceptual processes that facilitate the employed data-driven grasp synthesis technique. In the case of known objects, we concentrate on the approaches that are based on object recognition and pose estimation. In the case of familiar objects, the techniques use some form of similarity matching to a set of previously encountered objects. Finally, for the approaches dealing with unknown objects, the core part is the extraction of specific features that are indicative of good grasps. Our survey provides an overview of the different methodologies and discusses open problems in the area of robot grasping. We also draw a parallel to the classical approaches that rely on analytic formulations.
Comment: 20 pages, 30 figures, submitted to IEEE Transactions on Robotics
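For the "familiar objects" category, the similarity-matching idea can be sketched as follows: score candidate grasps on a new object by transferring grasp quality from remembered, similar objects. The descriptor space, the inverse-distance similarity kernel and the 50/50 score blend below are all assumptions made for illustration, not a method taken from the survey:

```python
import numpy as np

# Hypothetical memory of previously encountered objects: a shape
# descriptor plus the best grasp quality empirically achieved on it.
memory = [
    (np.array([0.2, 0.9]), 0.85),   # e.g. a mug-like object
    (np.array([0.8, 0.1]), 0.60),   # e.g. a box-like object
]

def rank_grasps(object_desc, candidates):
    """Score each candidate grasp by blending its intrinsic score with
    grasp quality transferred from similar remembered objects.

    candidates: list of (grasp_id, intrinsic_score) pairs.
    Similarity is a simple inverse-distance kernel (an assumption).
    """
    weights = [1.0 / (1.0 + np.linalg.norm(object_desc - d)) for d, _ in memory]
    transfer = sum(w * q for w, (_, q) in zip(weights, memory)) / sum(weights)
    scored = [(gid, 0.5 * s + 0.5 * transfer) for gid, s in candidates]
    return sorted(scored, key=lambda t: t[1], reverse=True)

# A new object whose descriptor is close to the mug-like memory entry.
ranking = rank_grasps(np.array([0.25, 0.85]),
                      [("top", 0.7), ("side", 0.4)])
```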
High-Level Descriptors for Fall Event Detection Supported by a Multi-Stream Network
Reliable video classification is increasingly in demand, especially for detecting dangerous situations, where a quick response is crucial to avoid more serious consequences. In this work, we target video classification concerning falls. Our study focuses on the use of high-level descriptors able to correctly characterize the event. These descriptor results serve as inputs to a multi-stream architecture of VGG-16 networks. Therefore, our proposal is based on the analysis of the best combination of high-level extracted features for the binary classification of videos. This approach was tested on three known datasets and has proven to yield results similar to those of other, more computationally demanding methods found in the literature.
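One common way to combine the outputs of a multi-stream architecture like the one described above is late fusion: average the per-stream class probabilities and take the argmax. The three streams and their outputs below are hypothetical, and the paper's exact fusion scheme may differ:

```python
import numpy as np

def late_fusion(stream_probs, weights=None):
    """Average per-stream softmax outputs (late fusion) to produce a
    final binary fall / no-fall decision.

    stream_probs: list of arrays of shape (2,), one per descriptor stream.
    Returns (predicted_label, fused_probabilities).
    """
    probs = np.stack(stream_probs)
    if weights is None:
        weights = np.ones(len(stream_probs)) / len(stream_probs)
    fused = np.average(probs, axis=0, weights=weights)
    return int(np.argmax(fused)), fused

# Three hypothetical descriptor streams; class 1 stands for "fall".
label, fused = late_fusion([np.array([0.2, 0.8]),
                            np.array([0.4, 0.6]),
                            np.array([0.6, 0.4])])
```

The optional `weights` argument allows the combination search mentioned in the abstract (finding the best mix of descriptors) to be expressed as a weighted average rather than a plain mean.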
Gait Analysis and Recognition for Automated Visual Surveillance
Human motion analysis has received great attention from researchers in the last decade due to its potential use in applications such as automated visual surveillance. This field of research focuses on the perception and recognition of human activities, including people identification. We explore a new approach for walking pedestrian detection in an unconstrained outdoor environment. The proposed algorithm is based on gait motion, as the rhythm of the footprint pattern of walking people is considered a stable and characteristic feature for the classification of moving objects. The novelty of our approach is motivated by the latest research on people identification using gait. The experimental results confirmed the robustness of our method in discriminating between a single walking subject, groups of people and vehicles, with a successful detection rate of 100%. Furthermore, the results revealed the potential of our method to extend visual surveillance systems to recognize walking people.
We also propose a new approach to extract human joints (vertex positions) using a model-based method. The spatial templates describing the human gait motion are produced via gait analysis performed on data collected through manual labelling. Elliptic Fourier Descriptors are used to represent the motion models in a parametric form, and heel strike data is exploited to reduce the dimensionality of the parametric models. Subjects walk normal to the viewing plane, as the major gait information is available in the sagittal view. The ankle, knee and hip joints are successfully extracted with high accuracy for indoor and outdoor data. In this way, we have established a baseline analysis which can be deployed in recognition, marker-less analysis and other areas. The experimental results confirmed the robustness of the model-based approach, which recognizes walking subjects with a correct classification rate of 95% using purely the dynamic features derived from the joint motion.
This confirms early psychological theories claiming that the discriminative features for motion perception and people recognition are embedded in gait kinematics. Furthermore, to quantify the intrusive nature of gait recognition, we explore the effects of different covariate factors on the performance of gait recognition. The covariate factors include footwear, clothing, carrying conditions and walking speed. As far as the author can determine, this is the first major study of its kind in this field to analyze the covariate factors using a model-based method.
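The parametric contour models mentioned above can be illustrated with a closely related construction: a truncated complex Fourier descriptor of a closed contour (the Elliptic Fourier Descriptors used in the thesis are a variant of the same idea, fitted per coordinate). The circle test case and both function names are purely illustrative:

```python
import numpy as np

def fourier_descriptor(contour, n_coeffs=4):
    """Compute a truncated complex Fourier descriptor of a closed 2-D
    contour: treat each point as z = x + iy, take the DFT, and keep the
    lowest-frequency coefficients as a compact parametric model."""
    z = contour[:, 0] + 1j * contour[:, 1]
    coeffs = np.fft.fft(z) / len(z)
    return coeffs[:n_coeffs]

def reconstruct(coeffs, n_points):
    """Invert the truncated descriptor back to an approximate contour."""
    full = np.zeros(n_points, dtype=complex)
    full[:len(coeffs)] = coeffs * n_points
    z = np.fft.ifft(full)
    return np.stack([z.real, z.imag], axis=1)

# Toy contour: a unit circle sampled at 64 points. A circle's energy
# concentrates in a single frequency, so the descriptor is very compact.
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
desc = fourier_descriptor(circle, n_coeffs=4)
```

Dropping high-frequency coefficients is what gives the dimensionality reduction exploited in the thesis: a smooth motion template is summarised by a handful of coefficients rather than by every sampled point.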