4,722 research outputs found

    MonoPerfCap: Human Performance Capture from Monocular Video

    We present the first marker-less approach for temporally coherent 3D performance capture of a human with general clothing from monocular video. Our approach reconstructs articulated human skeleton motion as well as medium-scale non-rigid surface deformations in general scenes. Human performance capture is a challenging problem due to the large range of articulation, potentially fast motion, and considerable non-rigid deformations, even from multi-view data. Reconstruction from monocular video alone is drastically more challenging, since strong occlusions and the inherent depth ambiguity lead to a highly ill-posed reconstruction problem. We tackle these challenges with a novel approach that employs sparse 2D and 3D human pose detections from a convolutional neural network using a batch-based pose estimation strategy. Recovering the motion of each batch jointly makes it possible to resolve the ambiguities of the monocular reconstruction problem based on a low-dimensional trajectory subspace. In addition, we propose a refinement of the surface geometry based on fully automatically extracted silhouettes to enable medium-scale non-rigid alignment. We demonstrate state-of-the-art performance capture results that enable exciting applications such as video editing and free-viewpoint video, previously infeasible from monocular video. Our qualitative and quantitative evaluation demonstrates that our approach significantly outperforms previous monocular methods in terms of accuracy, robustness, and the scene complexity that can be handled.
    Comment: Accepted to ACM TOG 2018, to be presented at SIGGRAPH 2018
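
    The core trick of resolving monocular depth ambiguity per batch rather than per frame can be illustrated with a short sketch. The NumPy code below is a minimal illustration, not the paper's implementation: it assumes a truncated DCT basis as the low-dimensional trajectory subspace (a common choice for such subspaces) and a hypothetical batch of noisy per-frame joint estimates.

```python
import numpy as np

def dct_basis(T, K):
    """Truncated DCT-II basis: T frames x K low-frequency atoms."""
    t = np.arange(T)
    B = np.stack([np.cos(np.pi * (t + 0.5) * k / T) for k in range(K)], axis=1)
    return B / np.linalg.norm(B, axis=0)

def fit_trajectory(noisy_xyz, K=8):
    """Least-squares projection of a per-frame 3D joint trajectory
    (T x 3) onto a K-dimensional trajectory subspace."""
    B = dct_basis(noisy_xyz.shape[0], K)                 # (T, K)
    coeffs, *_ = np.linalg.lstsq(B, noisy_xyz, rcond=None)
    return B @ coeffs                                    # smoothed (T, 3)

# Toy usage: denoise a circular joint trajectory over a 50-frame batch.
T = 50
t = np.linspace(0, 2 * np.pi, T)
gt = np.stack([np.cos(t), np.sin(t), 0.1 * t], axis=1)
smoothed = fit_trajectory(gt + 0.05 * np.random.randn(T, 3))
```

    Constraining all frames of a batch to such a subspace means a single occluded or noisy frame cannot place its joints arbitrarily in depth; the whole trajectory must remain smooth.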

    On Acquisition and Analysis of a Dataset Comprising of Gait, Ear and Semantic data

    In outdoor scenarios such as surveillance, where there is very little control over the environment, complex computer vision algorithms are often required for analysis. However, constrained environments, such as walkways in airports where the surroundings and the path taken by individuals can be controlled, provide an ideal application for such systems. Figure 1.1 depicts an idealised constrained environment. The path taken by the subject is restricted to a narrow path, and once inside, the subject is in a volume where lighting and other conditions are controlled to facilitate biometric analysis. The ability to control the surroundings and the flow of people greatly simplifies the computer vision task compared to typical unconstrained environments. Even though biometric datasets with more than one hundred people are increasingly common, there is still very little known about the inter- and intra-subject variation in many biometrics. This information is essential to estimate the recognition capability and limits of automatic recognition systems. In order to accurately estimate the inter- and intra-class variance, substantially larger datasets are required [40]. Covariates such as facial expression, headwear, footwear type, surface type and carried items are attracting increasing attention; although, considering the potentially large impact on an individual's biometrics, large trials need to be conducted to establish how much variance results. This chapter is the first description of the multibiometric data acquired using the University of Southampton's Multi-Biometric Tunnel [26, 37], a biometric portal using automatic gait, face and ear recognition for identification purposes. The tunnel provides a constrained environment and is ideal for use in high-throughput security scenarios and for the collection of large datasets. We describe the current state of data acquisition of face, gait, ear and semantic data and present early results showing the quality and range of data that has been collected. The main novelties of this dataset in comparison with other multi-biometric datasets are:
    1. gait data exists for multiple views and is synchronised, allowing 3D reconstruction and analysis;
    2. the face data is a sequence of images, allowing for face recognition in video;
    3. the ear data is acquired in a relatively unconstrained environment, as a subject walks past; and
    4. the semantic data is considerably more extensive than has been available previously.
    We shall aim to show the advantages of this new data in biometric analysis, though the scope for such analysis is considerably greater than time and space allow for here.
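
    As a rough illustration of the inter- versus intra-subject variance discussed above, the Python sketch below (with a hypothetical feature matrix and subject labels, not code or data from the tunnel itself) computes within-class and between-class variance for a set of biometric feature vectors:

```python
import numpy as np

def scatter_stats(features, labels):
    """Within-class (intra-subject) and between-class (inter-subject)
    variance of biometric feature vectors.
    features: (N, D) array; labels: (N,) subject IDs."""
    mu = features.mean(axis=0)
    within = between = 0.0
    for s in np.unique(labels):
        X = features[labels == s]
        mu_s = X.mean(axis=0)
        within += ((X - mu_s) ** 2).sum()
        between += len(X) * ((mu_s - mu) ** 2).sum()
    n = len(features)
    return within / n, between / n

# A high between/within ratio indicates a discriminative biometric;
# reliable estimates of both terms require many subjects and many
# samples per subject, hence the need for large datasets.
```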

    DeepSketch2Face: A Deep Learning Based Sketching System for 3D Face and Caricature Modeling

    Face modeling has received much attention in the field of visual computing. There exist many scenarios, including cartoon characters, avatars for social media, 3D face caricatures, as well as face-related art and design, where low-cost interactive face modeling is a popular approach, especially among amateur users. In this paper, we propose a deep learning based sketching system for 3D face and caricature modeling. This system has a labor-efficient sketching interface that allows the user to draw freehand, imprecise yet expressive 2D lines representing the contours of facial features. A novel CNN-based deep regression network is designed for inferring 3D face models from 2D sketches. Our network fuses both CNN and shape-based features of the input sketch, and has two independent branches of fully connected layers generating independent subsets of coefficients for a bilinear face representation. Our system also supports gesture-based interactions for users to further manipulate initial face models. Both user studies and numerical results indicate that our sketching system can help users create face models quickly and effectively. A significantly expanded face database with diverse identities, expressions and levels of exaggeration is constructed to promote further research and evaluation of face modeling techniques.
    Comment: 12 pages, 16 figures, to appear in SIGGRAPH 2017
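
    The bilinear face representation fed by the two regression branches can be sketched as a tensor contraction. The NumPy snippet below is a minimal illustration with made-up dimensions (the paper's actual core tensor and coefficient sizes differ); each fully connected branch of the network would regress one of the two coefficient vectors.

```python
import numpy as np

# Hypothetical sizes; stand-ins for a real bilinear face database.
N_VERTS, N_ID, N_EXP = 1000, 50, 25
core = np.random.randn(3 * N_VERTS, N_ID, N_EXP)  # core tensor

def decode_bilinear(core, w_id, w_exp):
    """Contract the core tensor with identity and expression
    coefficient vectors to obtain a 3D face mesh (n_verts, 3)."""
    verts = np.einsum('vie,i,e->v', core, w_id, w_exp)
    return verts.reshape(-1, 3)

mesh = decode_bilinear(core, np.random.randn(N_ID), np.random.randn(N_EXP))
```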

    Towards real-time body pose estimation for presenters in meeting environments

    This paper describes a computer vision-based approach to body pose estimation. The algorithm can be executed in real time and processes low-resolution, monocular image sequences. A silhouette is extracted and matched against a projection of a 16 DOF human body model. In addition, skin color is used to locate the hands and head. No detailed human body model is needed. We evaluate the approach both quantitatively using synthetic image sequences and qualitatively on video test data of short presentations. The algorithm is developed with the aim of using it in the context of a meeting room where the poses of a presenter have to be estimated. The results can be applied in the domain of virtual environments.
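
    The skin-color cue used to locate the hands and head can be approximated with a standard color-space threshold. The OpenCV sketch below uses common YCrCb skin thresholds (heuristic values, not the paper's calibrated ones) followed by connected-component analysis to extract candidate blob centroids:

```python
import cv2
import numpy as np

def skin_mask(bgr):
    """Coarse skin segmentation in YCrCb space; the thresholds are a
    widely used heuristic, not values from the paper."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

def blob_centroids(mask, min_area=200):
    """Centroids of sufficiently large skin blobs, i.e. candidate
    hand and head locations."""
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    return [tuple(centroids[i]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]
```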

    Silhouette-based gait recognition using Procrustes shape analysis and elliptic Fourier descriptors

    This paper presents a gait recognition method which combines spatio-temporal motion characteristics, statistical and physical parameters of a human subject (referred to as STM-SPP) for classification, by analysing the shape of the subject's silhouette contours using Procrustes shape analysis (PSA) and elliptic Fourier descriptors (EFDs). STM-SPP uses spatio-temporal gait characteristics and physical parameters of the human body to resolve similar dissimilarity scores between probe and gallery sequences obtained by PSA. A part-based shape analysis using EFDs is also introduced to achieve robustness against carrying conditions. The classification results by PSA and EFDs are combined, with ties in ranking resolved by contour matching based on Hu moments. Experimental results show that STM-SPP outperforms several silhouette-based gait recognition methods.
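
    Elliptic Fourier descriptors of a closed silhouette contour follow directly from the Kuhl-Giardina formulation. The NumPy sketch below is a generic EFD implementation, not the STM-SPP code; it returns one row of coefficients (a_n, b_n, c_n, d_n) per harmonic.

```python
import numpy as np

def elliptic_fourier_descriptors(contour, order=10):
    """EFD coefficients of a closed 2D contour given as an (N, 2)
    array of points with distinct consecutive entries."""
    d = np.diff(np.vstack([contour, contour[:1]]), axis=0)  # close loop
    dt = np.linalg.norm(d, axis=1)
    t = np.concatenate([[0.0], np.cumsum(dt)])
    T = t[-1]
    phi = 2 * np.pi * t / T
    coeffs = np.zeros((order, 4))
    for n in range(1, order + 1):
        c = T / (2 * n**2 * np.pi**2)
        dcos = np.cos(n * phi[1:]) - np.cos(n * phi[:-1])
        dsin = np.sin(n * phi[1:]) - np.sin(n * phi[:-1])
        coeffs[n - 1] = c * np.array([
            np.sum(d[:, 0] / dt * dcos), np.sum(d[:, 0] / dt * dsin),
            np.sum(d[:, 1] / dt * dcos), np.sum(d[:, 1] / dt * dsin)])
    return coeffs
```

    Low-order harmonics capture the coarse silhouette shape, which is why a truncated set of EFDs gives a compact contour signature.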

    BodyNet: Volumetric Inference of 3D Human Body Shapes

    Human shape estimation is an important task for video editing, animation and the fashion industry. Predicting 3D human body shape from natural images, however, is highly challenging due to factors such as variation in human bodies, clothing and viewpoint. Prior methods addressing this problem typically attempt to fit parametric body models with certain priors on pose and shape. In this work we argue for an alternative representation and propose BodyNet, a neural network for direct inference of volumetric body shape from a single image. BodyNet is an end-to-end trainable network that benefits from (i) a volumetric 3D loss, (ii) a multi-view re-projection loss, and (iii) intermediate supervision of 2D pose, 2D body part segmentation, and 3D pose. Each of these results in a performance improvement, as demonstrated by our experiments. To evaluate the method, we fit the SMPL model to our network output and show state-of-the-art results on the SURREAL and Unite the People datasets, outperforming recent approaches. Besides achieving state-of-the-art performance, our method also enables volumetric body-part segmentation.
    Comment: Appears in European Conference on Computer Vision 2018 (ECCV 2018). 27 pages
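
    The volumetric 3D loss and the multi-view re-projection loss can be sketched as cross-entropy terms over a predicted occupancy grid. The NumPy snippet below is a toy illustration under simplifying assumptions (a single orthographic front view; BodyNet itself uses multiple view directions and trains the network end-to-end):

```python
import numpy as np

def voxel_bce_loss(pred, target, eps=1e-7):
    """Binary cross-entropy over a predicted occupancy grid.
    pred: (D, H, W) probabilities; target: binary (D, H, W) grid."""
    p = np.clip(pred, eps, 1 - eps)
    return -np.mean(target * np.log(p) + (1 - target) * np.log(1 - p))

def reprojection_loss(pred, silhouette, eps=1e-7):
    """Toy re-projection term: max-project the volume along the depth
    axis and compare against a binary 2D silhouette (H, W)."""
    p = np.clip(pred.max(axis=0), eps, 1 - eps)
    return -np.mean(silhouette * np.log(p) + (1 - silhouette) * np.log(1 - p))
```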