
    Graphical Image Rendering: Modeling, Animation of Facial or Wild Images

    In this comparative study, we analyse different methodologies for performing 3D modeling and printing using raw images as input, without any human supervision. Since the input consists of only raw images, the methods are founded on finding symmetry in images. However, images that appear symmetric often are not, owing to perspective effects and the influence of other factors. The method uses factors such as depth, albedo, viewpoint, and lighting from the input image to formulate 3D shapes. A 3D template model with feature points is created, and by deforming this template, a 3D model of the subject is reconstructed from orthogonal photos. The appropriate number and locations of feature points are derived. Procrustes Analysis and Radial Basis Functions (RBFs) are used for the deformation, and images are then mapped onto the deformed mesh for realistic visualization. Characterization of the input image shows that shading, lighting, and albedo are what render otherwise symmetric images asymmetric. The experiments show that these methods can recover accurate 3D shapes of objects such as human faces, cars, and cats.
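
    The Procrustes-plus-RBF deformation step admits a compact illustration. The following is a minimal Python sketch, not the authors' code: it rigidly aligns target feature points to the template with orthogonal Procrustes analysis, then warps all template vertices with a thin-plate-spline RBF driven by the landmark displacements (all names are illustrative):

        import numpy as np
        from scipy.linalg import orthogonal_procrustes
        from scipy.interpolate import RBFInterpolator

        def deform_template(template_vertices, template_landmarks, target_landmarks):
            # Rigidly align the target landmarks to the template's frame.
            mu_t = template_landmarks.mean(axis=0)
            mu_s = target_landmarks.mean(axis=0)
            R, _ = orthogonal_procrustes(target_landmarks - mu_s,
                                         template_landmarks - mu_t)
            aligned = (target_landmarks - mu_s) @ R + mu_t
            # Thin-plate-spline RBF maps template landmarks to their aligned
            # targets; the same warp is applied to every template vertex.
            warp = RBFInterpolator(template_landmarks,
                                   aligned - template_landmarks,
                                   kernel='thin_plate_spline')
            return template_vertices + warp(template_vertices)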

    Parallelization Strategies for Markerless Human Motion Capture

    Markerless Motion Capture (MMOCAP) is the problem of determining the pose of a person from images captured simultaneously by one or several cameras, without placing markers on the subject. Evaluating candidate solutions is frequently the most time-consuming task, making most of the proposed methods inapplicable in real-time scenarios. This paper presents an efficient approach to parallelizing the evaluation of solutions on CPUs and GPUs. Our proposal is experimentally compared on six sequences of the HumanEva-I dataset using the CMA-ES algorithm. Multiple algorithm configurations were tested to analyze the best trade-off between accuracy and computing time. The proposed methods obtain speedups of 8× on multi-core CPUs, 30× on a single GPU, and up to 110× using 4 GPUs.
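
    The core idea, parallelizing the per-candidate evaluation within each generation, can be sketched as follows. This is a hedged CPU-only illustration using a process pool (the paper also maps evaluation onto GPUs); pose_likelihood is a placeholder for the real image-based cost:

        import numpy as np
        from multiprocessing import Pool

        def pose_likelihood(pose):
            # Placeholder cost; the real evaluation renders the body model
            # and compares it against the camera images.
            return float(np.sum(pose ** 2))

        def evaluate_generation(candidates, workers=8):
            # Evaluate all candidate poses of one generation in parallel.
            with Pool(workers) as pool:
                return pool.map(pose_likelihood, candidates)

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            candidates = [rng.normal(size=30) for _ in range(64)]  # 30-DoF pose hypotheses
            fitness = evaluate_generation(candidates)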

    Comparing Evolutionary Algorithms and Particle Filters for Markerless Human Motion Capture

    Markerless Human Motion Capture is the problem of determining the joint angles of a three-dimensional articulated body model that best match current and past observations acquired by video cameras. The problem is high-dimensional and requires models with a considerable number of degrees of freedom to adapt appropriately to the human anatomy. Particle filters have become the most popular approach for Markerless Human Motion Capture, despite their difficulty in coping with high-dimensional problems. Although several solutions have been proposed to improve their performance, they still suffer from the curse of dimensionality. As a consequence, it is normally necessary to impose mobility limitations on the body models employed, or to exploit the hierarchical nature of the human skeleton by partitioning the problem into smaller ones. Evolutionary algorithms, in contrast, are powerful methods for solving continuous optimization problems, especially high-dimensional ones, yet few works have tackled Markerless Human Motion Capture with them. This paper evaluates the performance of three of the most competitive algorithms in continuous optimization – Covariance Matrix Adaptation Evolution Strategy, Differential Evolution and Particle Swarm Optimization – against two of the most relevant particle filters proposed in the literature, namely the Annealed Particle Filter and the Partitioned Sampling Annealed Particle Filter. The algorithms have been experimentally compared on the public dataset HumanEva-I employing two body models of different complexity. Our work also analyzes the performance of the algorithms in hierarchical and holistic approaches, i.e., with and without partitioning the search space. Non-parametric tests run on the results show that: (i) the evolutionary algorithms employed outperform their particle filter counterparts in all cases tested; (ii) they can deal with high-dimensional models, thus leading to better accuracy; and (iii) the hierarchical strategy surpasses the holistic one.
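
    As an illustration of the holistic formulation, the sketch below casts pose estimation as bound-constrained continuous optimization and solves it with SciPy's Differential Evolution, one of the three evolutionary algorithms compared. The toy cost and the 30-DoF joint-angle bounds are stand-ins for the real silhouette/edge likelihood and body model:

        import numpy as np
        from scipy.optimize import differential_evolution

        TARGET = np.zeros(30)                      # placeholder "observation" for the toy cost

        def reprojection_error(pose):
            # Stand-in for the real likelihood computed from silhouettes and edges.
            return float(np.sum((pose - TARGET) ** 2))

        bounds = [(-np.pi, np.pi)] * 30            # joint-angle limits, one pair per DoF
        result = differential_evolution(reprojection_error, bounds,
                                        maxiter=100, popsize=20, seed=0)
        best_pose = result.x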

    Vision-based 3D Pose Retrieval and Reconstruction

    The analysis of people and the understanding of their motions are key components in many applications, such as sports science, biomechanics, medical rehabilitation, animated movie production and the game industry. In this context, retrieval and reconstruction of articulated 3D human poses are significant sub-problems. In this dissertation, we address the problem of retrieving and reconstructing 3D poses from a monocular video or even from a single RGB image. We propose several data-driven pipelines that retrieve and reconstruct 3D poses by exploiting motion capture data as a prior. The main focus of our proposed approaches is to bridge the gap between the separate media of 3D marker-based recording and the capturing of motions or photographs with a simple RGB camera; in principle, we leverage both media together for efficient 3D pose estimation. We show that our proposed methodologies do not need any synchronized 3D-2D pose-image pairs to retrieve and reconstruct the final 3D poses, and are flexible enough to capture motion in any studio-like indoor environment or outdoor natural environment.

    In the first part of the dissertation, we propose model-based approaches for full-body human motion reconstruction from video input, employing just the 2D joint positions of the four end effectors and the head. We resolve the 3D-2D pose-image cross-model correspondence by developing an intermediate container, the knowledge base, from the motion capture data, which contains information about how people move. It includes the 3D normalized pose space and the corresponding synchronized 2D normalized pose space, created by utilizing a number of virtual cameras. We first detect and track the features of these five joints in the input motion sequences using the SURF, MSER and colorMSER feature detectors, which vote for the possible 2D locations of these joints in the video. Extracting suitable feature sets from both the input control signals and the motion capture data enables us to retrieve the closest instances from the motion capture dataset through fast searching and retrieval techniques. We develop a graphical structure, the online lazy neighbourhood graph, to make the similarity search more accurate and robust by exploiting the temporal coherence of the input control signals. The retrieved prior poses are further exploited to stabilize the feature detection and tracking process. Finally, the 3D motion sequences are reconstructed by a non-linear optimizer that takes multiple energy terms into account. We evaluate our approaches with a series of experimental scenarios designed in terms of performing actors, camera viewpoints and noisy inputs. Our methods need only a little preprocessing, and the reconstruction processes run close to real time.

    The second part of the dissertation is dedicated to 3D human pose estimation from a monocular single image. First, we propose an efficient 3D pose retrieval strategy which leads to a novel data-driven approach for reconstructing a 3D human pose from a monocular still image. We design multiple feature sets for global similarity search. At runtime, we search for similar poses in a motion capture dataset within a feature space defined over specific joints. We introduce a two-fold method for camera estimation, in which we exploit the view directions at which we sample the MoCap dataset, as well as the MoCap priors, to minimize the projection error. We also use the MoCap priors and the joints' weights to learn a low-dimensional local 3D pose model, which is further constrained by multiple energies to infer the final 3D human pose. We thoroughly evaluate our approach on synthetically generated examples, real internet images and hand-drawn sketches. We achieve state-of-the-art results when the test and MoCap data are from the same dataset, and competitive results when the motion capture data is taken from a different dataset. Second, we propose a dual-source approach for 3D pose estimation from a single RGB image. One major challenge for 3D pose estimation from a single RGB image is the acquisition of sufficient training data; in particular, collecting large amounts of training data that contain unconstrained images annotated with accurate 3D poses is infeasible. We therefore propose to use two independent training sources: the first consists of images with annotated 2D poses, and the second consists of accurate 3D motion capture data. To integrate both sources, we propose a dual-source approach that combines 2D pose estimation with efficient and robust 3D pose retrieval. In our experiments, we show that our approach achieves state-of-the-art results and remains competitive even when the skeleton structures of the two sources differ substantially.

    In the last part of the dissertation, we focus on how the different techniques developed for human motion capture, retrieval and reconstruction can be adapted to quadruped motion capture data, and on the new applications this opens up. We discuss some particularities that must be considered when capturing the motions of large animals. For retrieval, we derive suitable feature sets for fast searches of the MoCap dataset for similar motion segments. Finally, we present a data-driven approach to reconstruct quadruped motions from video input data.
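
    The retrieval step can be illustrated with a minimal sketch, under the assumption of root-centred, scale-normalized pose features and a k-d tree index; all names and the toy dataset are illustrative, not the dissertation's code:

        import numpy as np
        from scipy.spatial import cKDTree

        def normalize(pose):
            # pose: (J, 3) joint positions -> root-centred, scale-normalized vector.
            p = pose - pose[0]
            return (p / (np.linalg.norm(p, axis=1).max() + 1e-9)).ravel()

        rng = np.random.default_rng(0)
        mocap = rng.normal(size=(10000, 15, 3))        # toy MoCap pose dataset
        tree = cKDTree(np.stack([normalize(p) for p in mocap]))

        query = normalize(mocap[42] + 0.01)            # noisy query pose
        dists, idx = tree.query(query, k=5)            # k nearest prior poses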

    Using biomechanical constraints to improve video-based motion capture

    In motion capture applications that aim to recover human body postures from various inputs, the high dimensionality of the problem makes it desirable to reduce the size of the search space by eliminating a priori impossible configurations. This can be carried out by constraining the posture recovery process in various ways. Most recent work in this area has focused on applying camera viewpoint-related constraints to eliminate erroneous solutions. When camera calibration parameters are available, they provide an extremely efficient tool for disambiguating not only posture estimation, but also 3D reconstruction and data segmentation. Increased robustness is indeed to be gained from enforcing such constraints, which we demonstrate in the context of an optical motion capture framework. Our contribution in this respect resides in having applied such constraints consistently to each main step involved in the motion capture process, namely marker reconstruction and segmentation, followed by posture recovery; these steps are made inter-dependent, each one constraining the other. A more application-independent approach is to encode constraints directly within the human body model, such as limits on the rotational joints. This being an almost unexplored research subject, our efforts were mainly directed at determining a new method for measuring, representing and applying such joint limits. To the present day, the few existing range-of-motion boundary representations present severe drawbacks that call for an alternative formulation. The joint limits paradigm we propose not only overcomes these drawbacks, but also captures intra- and inter-joint rotation dependencies, which are essential to realistic joint motion representation. The range-of-motion boundary is defined by an implicit surface, whose analytical expression enables us to readily establish whether a given joint rotation is valid or not. Furthermore, its continuous and differentiable nature provides a means of elegantly incorporating such a constraint within an optimisation process for posture recovery. Applying constrained optimisation to our body model and stereo data extracted from video sequences, we demonstrate the clear resulting decrease in posture estimation errors. As a bonus, we have integrated our joint limits representation into character animation packages to show how motion can be naturally constrained in this manner.
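
    The implicit-surface formulation of joint limits can be sketched as follows: validity is a sign test on f(q), and the smooth penalty max(0, f)^2 slots directly into gradient-based posture optimisation. The ellipsoidal f below is a toy stand-in for the measured range-of-motion boundary:

        import numpy as np
        from scipy.optimize import minimize

        A = np.diag([1.0, 2.0, 4.0])               # toy ellipsoid in 3 joint DoFs

        def f(q):
            # Implicit range-of-motion boundary: f(q) <= 0 means the rotation is valid.
            return float(q @ A @ q - 1.0)

        def cost(q, target):
            data_term = np.sum((q - target) ** 2)   # stand-in for the image likelihood
            limit_term = max(0.0, f(q)) ** 2        # differentiable joint-limit penalty
            return data_term + 100.0 * limit_term

        target = np.array([1.5, 0.0, 0.0])          # unconstrained optimum violates the limit
        q_star = minimize(cost, x0=np.zeros(3), args=(target,)).x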

    Cascaded deep monocular 3D human pose estimation with evolutionary training data

    End-to-end deep representation learning has achieved remarkable accuracy for monocular 3D human pose estimation, yet these models may fail on unseen poses when trained with limited and fixed data. This paper proposes a novel data augmentation method that: (1) is scalable for synthesizing massive amounts of training data (over 8 million valid 3D human poses with corresponding 2D projections) for training 2D-to-3D networks, and (2) can effectively reduce dataset bias. Our method evolves a limited dataset to synthesize unseen 3D human skeletons based on a hierarchical human representation and heuristics inspired by prior knowledge. Extensive experiments show that our approach not only achieves state-of-the-art accuracy on the largest public benchmark, but also generalizes significantly better to unseen and rare poses. Code, pre-trained models and tools are available at this HTTPS URL.
    Comment: Accepted to CVPR 2020 as Oral Presentation
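
    The pairing of synthesized 3D skeletons with 2D projections can be illustrated with a short sketch, assuming a randomly oriented perspective camera; the toy skeleton and camera parameters are illustrative, not the paper's tooling:

        import numpy as np

        def random_camera(rng):
            # Random orthonormal rotation via QR; simple perspective intrinsics.
            R, _ = np.linalg.qr(rng.normal(size=(3, 3)))
            return R, np.array([0.0, 0.0, 5.0]), 1000.0   # rotation, translation, focal length

        def project(joints3d, R, t, f):
            cam = joints3d @ R.T + t                      # world -> camera coordinates
            return f * cam[:, :2] / cam[:, 2:3]           # perspective division

        rng = np.random.default_rng(0)
        skeleton = rng.normal(scale=0.5, size=(17, 3))    # toy 17-joint 3D pose
        R, t, f = random_camera(rng)
        joints2d = project(skeleton, R, t, f)             # paired 2D projection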

    Inferring Human Pose and Motion from Images

    As optical gesture recognition technology advances, the touchless human-computer interfaces of the future will soon become a reality. One particular technology, markerless motion capture, has gained a large amount of attention, with widespread application in diverse disciplines including medical science, sports analysis, advanced user interfaces, and virtual arts. However, the complexity of human anatomy makes markerless motion capture a non-trivial problem: (i) the parameterised pose configuration exhibits high dimensionality, and (ii) there is considerable ambiguity in the surjective inverse mapping from observation space to pose configuration space given a limited number of camera views. These factors together lead to multimodality in a high-dimensional space, making markerless motion capture an ill-posed problem. This study addresses these difficulties by introducing a new framework. It begins by automatically building subject-specific template models and calibrating the posture at the initial stage. Subsequent tracking is accomplished by embedding naturally-inspired global optimisation into a sequential Bayesian filtering framework, and is enhanced by several robust improvements to the evaluation. The sparsity of images is exploited through compressive evaluation, further improving computational efficiency in the high-dimensional space.
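
    The sequential Bayesian filtering backbone of such trackers admits a generic sketch. Below is a minimal sampling-importance-resampling loop under toy assumptions (Gaussian likelihood, random-walk motion model); the thesis embeds a global optimiser into this loop rather than relying on diffusion alone:

        import numpy as np

        rng = np.random.default_rng(0)
        DOF = 10                                     # toy pose dimensionality

        def likelihood(particles, observation):
            # Placeholder image likelihood: Gaussian around the observed pose.
            d2 = np.sum((particles - observation) ** 2, axis=1)
            return np.exp(-0.5 * d2)

        def step(particles, observation, motion_noise=0.05):
            # Predict: diffuse particles with a simple random-walk motion model.
            particles = particles + rng.normal(scale=motion_noise, size=particles.shape)
            # Weight: evaluate each hypothesis against the observation.
            w = likelihood(particles, observation)
            w /= w.sum()
            # Resample: draw particles proportionally to their weights.
            idx = rng.choice(len(particles), size=len(particles), p=w)
            return particles[idx]

        particles = rng.normal(size=(200, DOF))
        for observation in rng.normal(size=(30, DOF)):   # toy observation stream
            particles = step(particles, observation)
        estimate = particles.mean(axis=0)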

    Articulated human tracking and behavioural analysis in video sequences

    Recently, there has been a dramatic growth of interest in the observation and tracking of human subjects through video sequences. Arguably, the principal impetus has come from the perceived demand for technological surveillance; however, applications in entertainment, intelligent domiciles and medicine are also increasing. This thesis examines human articulated tracking and the classification of human movement, first separately and then as a sequential process.

    First, this thesis considers the development and training of a 3D model of human body structure and dynamics. To process video sequences, an observation model is also designed, with a multi-component likelihood based on edges, silhouette and colour. This is defined on the articulated limbs, visible from a single camera or multiple cameras, each of which may be calibrated from that sequence. Second, for behavioural analysis, we develop a methodology in which actions and activities are described by semantic labels generated from a Movement Cluster Model (MCM). Third, a Hierarchical Partitioned Particle Filter (HPPF) was developed for human tracking that allows a multi-level parameter search consistent with the body structure. This tracker relies on the articulated motion prediction provided by the MCM at pose or limb level. Fourth, tracking and movement analysis are integrated to generate a probabilistic activity description with action labels.

    The implemented algorithms for tracking and behavioural analysis are tested extensively and independently against ground truth on human tracking and surveillance datasets. Dynamic models are shown to predict and generate synthetic motion, while the MCM recovers both periodic and non-periodic activities, defined either on the whole body or at the limb level. Tracking results are comparable with the state of the art, and the integrated behaviour analysis adds to the value of the approach.

    Overseas Research Students Awards Scheme (ORSAS)
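
    A hedged sketch of the Movement Cluster Model idea: clustering short temporal windows of pose parameters yields movement classes to which semantic action labels can be attached. K-means here is an illustrative stand-in for the thesis's actual clustering, and all names are assumptions:

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        poses = rng.normal(size=(500, 20))             # toy sequence of pose parameters

        # Stack short temporal windows so clusters capture movement, not single poses.
        W = 5
        windows = np.stack([poses[i:i + W].ravel() for i in range(len(poses) - W)])

        mcm = KMeans(n_clusters=8, n_init=10, random_state=0)
        labels = mcm.fit_predict(windows)              # per-window movement-cluster labels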

    3D model-based human motion capture

    Master's thesis (Master of Engineering)