396 research outputs found

    A Generative Model of People in Clothing

    We present the first image-based generative model of people in clothing for the full body. We sidestep the commonly used complex graphics rendering pipeline and the need for high-quality 3D scans of dressed people. Instead, we learn generative models from a large image database. The main challenge is to cope with the high variance in human pose, shape and appearance. For this reason, purely image-based approaches have not been considered so far. We show that this challenge can be overcome by splitting the generation process into two parts. First, we learn to generate a semantic segmentation of the body and clothing. Second, we learn a conditional model on the resulting segments that creates realistic images. The full model is differentiable and can be conditioned on pose, shape or color. The results are samples of people in different clothing items and styles. The proposed model can generate entirely new people with realistic clothing. In several experiments we present encouraging results suggesting that an entirely data-driven approach to people generation is possible.
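    The two-stage factorization described above can be sketched as follows. This is a toy stand-in, not the paper's neural model: `sample_segmentation` and `sample_image` are hypothetical placeholders showing only how generation splits into a pose-conditioned segmentation stage and a segmentation-conditioned image stage.

    ```python
    import random

    # Toy sketch of the two-stage factorization:
    # p(image | pose) = p(segmentation | pose) * p(image | segmentation).
    # The real stages are learned neural models; these are illustrative stand-ins.

    BODY_PARTS = ["background", "skin", "shirt", "trousers"]
    PALETTES = {"background": [(255, 255, 255)], "skin": [(224, 172, 105)],
                "shirt": [(200, 30, 30), (30, 30, 200)],
                "trousers": [(40, 40, 40), (90, 60, 30)]}

    def sample_segmentation(pose, h=4, w=4):
        """Stage 1: map a pose to a per-pixel semantic label map (toy version)."""
        rng = random.Random(pose)          # condition on pose via the seed
        return [[rng.choice(BODY_PARTS) for _ in range(w)] for _ in range(h)]

    def sample_image(seg):
        """Stage 2: render RGB pixels conditioned only on the segmentation."""
        rng = random.Random(0)
        return [[rng.choice(PALETTES[label]) for label in row] for row in seg]

    seg = sample_segmentation(pose=42)
    img = sample_image(seg)
    ```

    The split means clothing style can be resampled in stage two without re-deciding the body layout from stage one.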

    Predicting realistic and precise human body models under clothing based on orthogonal-view photos

    6th International Conference on Applied Human Factors and Ergonomics (AHFE 2015) and the Affiliated Conferences, Las Vegas, USA, 26–30 July 2015. Refereed conference paper, Version of Record, published.

    Thermal-Kinect Fusion Scanning System for Bodyshape Inpainting and Estimation under Clothing

    In today's interactive world, 3D body scanning is needed for building virtual avatars, in the apparel industry, for physical health assessment, and more. The 3D scanners used in this process are very costly and also require the subject to be nearly naked or to wear special tight-fitting clothes. A cost-effective 3D body scanning system that can estimate body parameters under clothing would be the best solution in this regard. In our experiment we build such a body scanning system by fusing a Kinect depth sensor and a thermal camera. The Kinect senses the depth of the subject and creates a 3D point cloud from it. The thermal camera senses the body heat of a person under clothing. Fusing these two sensors' images produces a thermal-mapped 3D point cloud of the subject, from which body parameters can be estimated even under various clothes. Moreover, this fusion system is also cost-effective. In our experiment, we introduce a new pipeline for working with our fusion scanning system, and estimate and recover body shape under clothing. We capture Thermal-Kinect fusion images of subjects in different clothing and produce both full and partial 3D point clouds. To recover the missing parts from our low-resolution scan, we fit a parametric human model to our images and perform Boolean operations with our scan data. Further, we measure our final 3D point cloud scan to estimate the body parameters and compare them with the ground truth. We achieve a minimum average error of 0.75 cm compared to other approaches.
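    The fusion step described above can be sketched as projecting each depth point into the thermal camera's image plane and tagging it with the temperature it lands on. This is a minimal pinhole-model sketch; the intrinsics, resolution, and temperature threshold are illustrative assumptions, not the paper's calibration.

    ```python
    # Hypothetical sketch of thermal-Kinect fusion: each 3D point from the
    # depth sensor is projected into the thermal image (pinhole model) and
    # tagged with a temperature; warm points are kept as likely body surface
    # even under clothing. All constants below are assumed, not the paper's.

    FX, FY, CX, CY = 300.0, 300.0, 32.0, 24.0   # assumed thermal intrinsics
    BODY_TEMP_C = 30.0                          # assumed skin-heat threshold

    def project(point):
        """Pinhole projection of a 3D point (metres) to thermal pixel coords."""
        x, y, z = point
        return int(FX * x / z + CX), int(FY * y / z + CY)

    def thermal_tag(points, thermal, w=64, h=48):
        """Return (point, temperature, is_body) for points seen by the camera."""
        tagged = []
        for p in points:
            u, v = project(p)
            if 0 <= u < w and 0 <= v < h:
                t = thermal[v][u]
                tagged.append((p, t, t >= BODY_TEMP_C))
        return tagged

    # A flat 34 degC "thermal image" and two points in front of the camera.
    thermal = [[34.0] * 64 for _ in range(48)]
    cloud = [(0.0, 0.0, 1.0), (0.05, 0.02, 1.2)]
    tagged = thermal_tag(cloud, thermal)
    ```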

    Sparse Feature Extraction for Activity Detection Using Low-Resolution IR Streams

    In this paper, we propose an activity recognition method based on ultra-low-resolution infrared (IR) images, suitable for monitoring in elderly care homes and modern smart homes. The focus is on the analysis of sequences of IR frames showing a single subject performing daily activities. The pixels are treated as independent variables because of the lack of spatial dependencies between pixels in ultra-low-resolution images. Our analysis is therefore based on the temporal variation of the pixels in vectorised sequences of several IR frames, which results in a high-dimensional feature space and an "n ≪ p" problem. Two different sparse analysis strategies are used and compared: Sparse Discriminant Analysis (SDA) and Sparse Principal Component Analysis (SPCA). The extracted sparse features are tested with four widely used classifiers: Support Vector Machines (SVM), Random Forests (RF), K-Nearest Neighbours (KNN) and Logistic Regression (LR). To demonstrate the value of the sparse features, we also compare the classification results of sparse features extracted from noisy data against non-sparse features. The comparison shows the superiority of the sparse methods in terms of noise tolerance and accuracy.
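    The "n ≪ p" setting above can be illustrated with a toy sketch: each sample is a vectorised IR frame sequence, so features (pixel-time variables) vastly outnumber samples. As a crude, hypothetical stand-in for the SDA/SPCA solvers, the sketch keeps only the pixels with the highest variance across samples and discards the rest.

    ```python
    from statistics import pvariance

    # Toy sketch of sparse feature selection in the n << p regime.
    # Variance-based pixel selection is an assumed simplification here,
    # not the SDA/SPCA machinery used in the paper.

    def sparse_select(samples, k):
        """Indices of the k features with largest variance across samples."""
        p = len(samples[0])
        variances = [pvariance([s[j] for s in samples]) for j in range(p)]
        return sorted(range(p), key=lambda j: variances[j], reverse=True)[:k]

    def project(sample, idx):
        """Sparse feature vector: the sample restricted to selected pixels."""
        return [sample[j] for j in idx]

    # Toy data: 6 sequences of 50 "pixels"; only pixels 0-2 carry activity.
    samples = [[(3.0 if i % 2 == 0 and j < 3 else 0.0) for j in range(50)]
               for i in range(6)]
    idx = sparse_select(samples, k=3)
    features = [project(s, idx) for s in samples]
    ```

    Restricting classifiers to the few selected variables is what buys the noise tolerance the abstract reports: constant (pure-noise-free here) pixels never enter the feature vector.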

    Anatomical Mirroring: Real-time User-specific Anatomy in Motion Using a Commodity Depth Camera

    This paper presents a mirror-like augmented reality (AR) system that displays the internal anatomy of a user. Using a single Microsoft Kinect v2.0, we animate a user-specific internal anatomy in real time according to the user's motion, and we superimpose it onto the user's color map. The user can visualize his anatomy moving as if he were able to look inside his own body in real time. A new calibration procedure to set up and attach a user-specific anatomy to the Kinect body-tracking skeleton is introduced. At calibration time, the bone lengths are estimated using a set of poses. By using Kinect data as input, the practical limitation of skin correspondence in prior work is overcome. The generic 3D anatomical model is attached to the internal anatomy registration skeleton, and warped onto the depth image using a novel elastic deformer, subject to a closest-point registration force and anatomical constraints. The noise in Kinect outputs precludes any realistic human display. Therefore, a novel filter that reconstructs plausible motions, based on fixed-length bones as well as realistic angular degrees of freedom (DOFs) and limits, is introduced to enforce anatomical plausibility. Anatomical constraints applied to the Kinect body-tracking skeleton joints are used to maximize the physical plausibility of the anatomy's motion while minimizing the distance to the raw data. At run time, a simulation loop attracts the bones towards the raw data, and skinning shaders efficiently drag the resulting anatomy to the user's tracked motion. Our user-specific internal anatomy model is validated by comparing the skeleton with segmented MRI images. A user study is conducted to evaluate the believability of the animated anatomy.
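    The fixed-bone-length filtering idea above can be sketched as follows: each noisy child joint is snapped back onto the parent-to-child direction at the calibrated bone length. This is a minimal sketch under assumed joint names and lengths, not the paper's full filter (which also enforces angular DOF limits).

    ```python
    import math

    # Hypothetical sketch of the fixed-bone-length constraint: move each
    # child joint along the parent->child direction so every bone keeps its
    # calibrated length. Bone names and lengths are illustrative only.

    BONES = [("shoulder", "elbow", 0.30), ("elbow", "wrist", 0.25)]  # metres

    def enforce_bone_lengths(joints, bones):
        """Return joints with each child snapped to its calibrated length."""
        fixed = dict(joints)
        for parent, child, length in bones:   # parents listed before children
            px, py, pz = fixed[parent]
            cx, cy, cz = fixed[child]
            dx, dy, dz = cx - px, cy - py, cz - pz
            norm = math.sqrt(dx*dx + dy*dy + dz*dz) or 1.0
            s = length / norm
            fixed[child] = (px + dx*s, py + dy*s, pz + dz*s)
        return fixed

    noisy = {"shoulder": (0.0, 0.0, 0.0),
             "elbow":    (0.35, 0.0, 0.0),    # noisy: should be 0.30 away
             "wrist":    (0.35, -0.30, 0.0)}  # noisy: should be 0.25 from elbow
    filtered = enforce_bone_lengths(noisy, BONES)
    ```

    Processing bones from the root outward keeps the correction consistent: each child is re-positioned relative to its already-corrected parent.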