
    Flow Lookup and Biological Motion Perception

    Optical flow in monocular video can serve as a key for recognizing and tracking the three-dimensional pose of human subjects. Compared with prior work that uses silhouettes as the lookup key, flow data carries richer information and, in experiments, successfully tracks more difficult sequences. Furthermore, flow recognition is powerful enough to model human abilities in perceiving biological motion from sparse input. The experiments described herein show that a tracker using flow-moment lookup can reconstruct a common biological motion (walking) from images containing only point light sources attached to the joints of the moving subject.
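
    As an illustration of the flow-moment lookup described above, here is a minimal Python sketch, assuming OpenCV's Farnebäck dense flow and simple normalised image moments as the key; the moment order, database layout, and nearest-neighbour lookup are illustrative placeholders rather than the paper's exact formulation.

```python
import cv2
import numpy as np

def flow_moments(prev_gray, gray, order=2):
    """Compute low-order moments of dense optical flow as a compact lookup key.

    Farneback flow and raw spatial moments are placeholder choices; the
    paper's exact flow estimator and moment definition may differ.
    """
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    h, w = gray.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    xs, ys = xs / w, ys / h                 # normalised image coordinates
    u, v = flow[..., 0], flow[..., 1]       # horizontal / vertical flow
    feats = []
    for p in range(order + 1):
        for q in range(order + 1 - p):
            feats.append((xs**p * ys**q * u).mean())
            feats.append((xs**p * ys**q * v).mean())
    return np.array(feats)

def lookup_pose(key, database_keys, database_poses):
    """Nearest-neighbour lookup of a stored 3D pose given a flow-moment key."""
    idx = np.linalg.norm(database_keys - key, axis=1).argmin()
    return database_poses[idx]
```

    In a point-light setting, a moment key of this kind can in principle be computed from the sparse flow of the marker points alone, which is the input the walking experiments above rely on.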

    Combination of Annealing Particle Filter and Belief Propagation for 3D Upper Body Tracking

    3D upper body pose estimation is a topic widely studied by the computer vision community because it is useful in a great number of applications, mainly human-robot interaction, including communication with companion robots. However, a challenging problem remains: the complexity of classical algorithms grows exponentially with the dimension of the state vector and quickly becomes intractable. To tackle this problem, we propose a new approach that combines several annealing particle filters, defined independently for each limb, with belief propagation to add geometric constraints between the individual filters. Experimental results on a real human-gesture sequence show that this combined approach leads to reliable results.
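
    The schematic sketch below illustrates this kind of combination: a few annealing layers run per limb, followed by a belief-propagation-style reweighting that couples limbs through a shared joint. The limb parameterisation, image likelihood, annealing schedule, and compatibility potential are all hypothetical placeholders, not the paper's actual models.

```python
import numpy as np

def annealing_layer(particles, likelihood, beta, noise_scale, rng):
    """One layer of an annealed particle filter for a single limb:
    sharpen the likelihood by the annealing exponent beta, resample,
    then diffuse the survivors."""
    w = likelihood(particles) ** beta
    w = w / w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx] + rng.normal(0.0, noise_scale, particles.shape)

def bp_constraint_weights(particles_a, particles_b, compatibility):
    """Belief-propagation-style message from limb B to limb A: weight each
    particle of A by its average geometric compatibility with B's particles."""
    msg = np.array([compatibility(pa, particles_b).mean() for pa in particles_a])
    return msg / msg.sum()

def toy_likelihood(p):
    # Placeholder image likelihood (no real image term in this sketch).
    return np.exp(-np.linalg.norm(p, axis=1))

def shared_joint_compat(pa, pb):
    # Geometric constraint: the shared joint (first three state entries)
    # of the two limbs should coincide.
    return np.exp(-np.linalg.norm(pb[:, :3] - pa[:3], axis=1))

rng = np.random.default_rng(0)
upper_arm = rng.normal(size=(200, 6))       # hypothetical 6-D limb states
forearm = rng.normal(size=(200, 6))
for beta, noise in zip((0.2, 0.5, 1.0), (0.3, 0.2, 0.1)):   # annealing schedule
    upper_arm = annealing_layer(upper_arm, toy_likelihood, beta, noise, rng)
w = bp_constraint_weights(upper_arm, forearm, shared_joint_compat)
upper_arm_estimate = (w[:, None] * upper_arm).sum(axis=0)   # constraint-weighted mean
```

    Running a separate filter per limb keeps each particle set low-dimensional, which is the exponential blow-up of the full state vector that the combined approach is designed to avoid.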

    Viewpoint Independent Human Motion Analysis in Man-made Environments


    Image based human body rendering via regression & MRF energy minimization

    A machine learning method for synthesising human images is explored to create new images without relying on 3D modelling. Machine learning allows new images to be created by prediction from existing data, based on a set of training images. In the present study, image synthesis is performed at two levels: contour and pixel. A class of learning-based methods is formulated to create object contours from the training images for the synthetic image, which then allow pixel synthesis within the contours at the second level. The methods rely on robust object descriptions, dynamic learning models applied after appropriate motion segmentation, and machine learning frameworks.

    Image-based human image synthesis using machine learning has recently gained considerable attention in computer graphics and draws on image and motion analysis techniques from computer vision. The problem lies in estimating the image-based object configuration (i.e. segmentation and contour outline). Using the results of these analysis methods as a basis, the research adopts a machine learning approach in which human images are synthesised by learning contour and pixel synthesis from the training images. Firstly, the thesis shows how an accurate silhouette is distilled using a background subtraction method developed for accuracy and efficiency. A support vector machine approach is used to avoid ambiguities within the regression process, and images are represented as a class of accurate and efficient vectors, for single images as well as sequences. Secondly, the framework applies support vector regression (SVR) to obtain convergent vectors for contour allocation; the changing relationship between the synthetic image and the training images is expressed as vectors and represented as functions. Finally, pixel synthesis is performed using belief propagation.

    The thesis thus proposes a novel image-based rendering method for colour image synthesis that uses SVR and belief propagation to predict contour and colour information from input colour images. The methods rely on appropriately defined and robust input colour images, optimising the input contour images within a sparse SVR framework. Firstly, the thesis shows how a contour can be predicted effectively and efficiently from a small number of input contour images, exploiting the sparsity of SVR and using SVR to estimate the regression function. The resulting image-based rendering method enables contour synthesis from a small number of input source images and avoids the use of complex models and geometry information. Secondly, the method for colouring the human body contour is extended to eight-connected pixel neighbourhoods, and a link-distance field is constructed via belief propagation; the link distance, which acts as the message in propagation, is computed by improving the lower-envelope method of the fast distance transform. Finally, the methodology is tested on human facial and human body clothing information. The accuracy of the test results for the human body model confirms the efficiency of the proposed method.
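
    As a rough illustration of the contour-regression step, the sketch below uses scikit-learn's epsilon-SVR to predict a flattened contour (sampled boundary points) from an image descriptor; the descriptor dimensionality, number of contour samples, and kernel settings are assumptions, not the thesis's actual configuration.

```python
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

# Hypothetical training data: each row of X is an image descriptor
# (e.g. a silhouette-based feature vector); each row of Y is a contour
# of K boundary points flattened as (x1, y1, ..., xK, yK).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 64))        # placeholder descriptors
Y_train = rng.normal(size=(200, 2 * 30))    # placeholder contours, K = 30

# Sparse epsilon-SVR, one regressor per output coordinate, standing in for
# the thesis's use of SVR for contour allocation.
contour_model = MultiOutputRegressor(SVR(kernel="rbf", C=10.0, epsilon=0.1))
contour_model.fit(X_train, Y_train)

X_new = rng.normal(size=(1, 64))            # descriptor of a new input image
predicted_contour = contour_model.predict(X_new).reshape(-1, 2)
print(predicted_contour.shape)              # (30, 2) sampled contour points
```

    Pixel-level synthesis inside the predicted contour would then be handled separately, for example by the belief-propagation stage described above.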

    Estimating 3D Body Pose using Uncalibrated Cameras

    An approach for estimating 3D body pose from multiple, uncalibrated views is proposed. First, a mapping from image features to 2D body joint locations is computed using a statistical framework that yields a set of several body pose hypotheses. The concept of a “virtual camera” is introduced that makes this mapping invariant to translation, image-plane rotation, and scaling of the input. As a consequence, the calibration matrices (intrinsics) of the virtual cameras can be considered completely known, and their poses are known up to a single angular displacement parameter. Given the pose hypotheses obtained in the multiple virtual camera views, the recovery of 3D body pose and relative camera orientations is formulated as a stochastic optimization problem. An Expectation-Maximization algorithm is derived that obtains the locally most likely (self-consistent) combination of body pose hypotheses. Performance of the approach is evaluated on synthetic sequences as well as real video sequences of human motion.
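
    The sketch below illustrates the invariance that the “virtual camera” construction provides, by normalising a set of 2D joint locations for translation, scale, and image-plane rotation; the choice of the principal axis as the rotation reference is illustrative rather than the paper's exact definition.

```python
import numpy as np

def normalise_similarity(points_2d):
    """Remove translation, scale, and image-plane rotation from 2D joint
    locations, leaving only the shape information a 'virtual camera' view
    would retain. points_2d has shape (num_joints, 2)."""
    pts = points_2d - points_2d.mean(axis=0)             # remove translation
    scale = np.sqrt((pts ** 2).sum(axis=1).mean())
    pts = pts / scale                                     # remove scale
    # Rotate so the principal axis of the point cloud aligns with +y
    # (an arbitrary but fixed reference direction).
    _, vecs = np.linalg.eigh(pts.T @ pts)
    axis = vecs[:, -1]
    theta = np.arctan2(axis[0], axis[1])
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return pts @ R.T
```

    After a normalisation of this kind, each view is determined up to a single remaining angular degree of freedom, which is what the stochastic optimisation over camera orientations has to resolve.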

    Bridging the Gap between Detection and Tracking for 3D Human Motion Recovery

    The aim of this thesis is to build a system able to automatically and robustly track human motion in 3D from monocular input. To this end, two approaches are introduced that tackle two different types of motion. The first is useful for analysing activities for which a characteristic pose, or key-pose, can be detected, as for example in walking. The second can be used when no such pose is defined but there is a clear relation between some easily measurable image quantities and the body configuration, as for example in skating, where the trajectory followed by a subject is highly correlated with how the subject articulates.

    In the first proposed technique, we combine detection and tracking to achieve robust 3D motion recovery of people seen from arbitrary viewpoints by a single, potentially moving camera. We rely on detecting key postures, which can be done reliably, use a motion model to infer 3D poses between consecutive detections, and finally refine them over the whole sequence using a generative model. We demonstrate the approach on golf motions filmed with a static camera and on walking motions acquired with a potentially moving one, and show that, although monocular, it is both metrically accurate, because it integrates information over many frames, and robust, because it can recover from a few misdetections.

    The second approach is based on the fact that the articulated body models used to represent human motion typically have many degrees of freedom, usually expressed as joint angles that are highly correlated. The true range of motion can therefore be represented by latent variables that span a low-dimensional space, a property often used to make motion tracking easier. However, learning the latent space in a problem-independent way makes it non-trivial to initialise the tracking process by picking appropriate initial values for the latent variables, and thus for the pose. This thesis shows that by directly using observable quantities as latent variables, this issue can be eliminated.
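
    The sketch below illustrates the latent-variable point made in the second approach: a generically learned latent space (PCA is used here purely as a stand-in) compresses highly correlated joint angles well, but its coordinates have no direct physical meaning, which is what makes initialisation non-trivial. The synthetic motion data and dimensions are placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA

# Placeholder motion data: 500 frames of 40 highly correlated joint angles
# generated from a 3-D underlying latent motion.
rng = np.random.default_rng(0)
latent_true = rng.normal(size=(500, 3))
mixing = rng.normal(size=(3, 40))
joint_angles = latent_true @ mixing + 0.01 * rng.normal(size=(500, 40))

# A problem-independent latent space: compact, but its coordinates are
# abstract, so picking initial latent values for a new sequence is the
# non-trivial step the thesis points to. Using directly observable image
# quantities (e.g. trajectory parameters) as the latent variables instead
# is the alternative the second approach proposes.
pca = PCA(n_components=3).fit(joint_angles)
z = pca.transform(joint_angles)
reconstructed = pca.inverse_transform(z)
print("mean reconstruction error:", np.abs(reconstructed - joint_angles).mean())
```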

    Tracking and indexing of human actions in video image sequences

    Master's thesis (Master of Engineering).