
    4DHumanOutfit: a multi-subject 4D dataset of human motion sequences in varying outfits exhibiting large displacements

    This work presents 4DHumanOutfit, a new dataset of densely sampled spatio-temporal 4D human motion data of different actors, outfits and motions. The dataset contains different actors wearing different outfits while performing different motions in each outfit, and can thus be seen as a cube of data containing 4D motion sequences along three axes: identity, outfit and motion. This rich dataset has numerous potential applications for the processing and creation of digital humans, e.g. augmented reality, avatar creation and virtual try-on. 4DHumanOutfit is released for research purposes at https://kinovis.inria.fr/4dhumanoutfit/. In addition to image data and 4D reconstructions, the dataset includes reference solutions for each axis. We present independent baselines along each axis that demonstrate the value of these reference solutions for evaluation tasks.
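
    As a rough illustration of the cube structure described above, the sketch below enumerates one sequence per cell along the three axes. The directory layout and every name here are hypothetical, for illustration only, and are not the actual 4DHumanOutfit API.

        # Hypothetical sketch: treating a 4D motion dataset as a cube indexed
        # by (identity, outfit, motion). Names and layout are illustrative.
        from itertools import product

        identities = ["actor_a", "actor_b"]   # axis 1: who is captured
        outfits    = ["casual", "sport"]      # axis 2: what they wear
        motions    = ["walk", "dance"]        # axis 3: what they perform

        def sequence_path(identity: str, outfit: str, motion: str) -> str:
            """Build the path of one 4D sequence in the cube (made-up layout)."""
            return f"4dhumanoutfit/{identity}/{outfit}/{motion}.seq"

        # Enumerate every cell of the identity x outfit x motion cube.
        for identity, outfit, motion in product(identities, outfits, motions):
            print(sequence_path(identity, outfit, motion))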

    The use of motion capture in non-realistic animation

    The Use of Motion Capture in Non-realistic Animation explores the possibility of creating non-realistic animation through the use of motion capture. In this study we look at the particularities of cartoony/non-realistic animation while trying to ascertain whether it is viable to create this type of animation through the process of motion capture. This dissertation first sets out the historical, theoretical, technical and artistic context, with a brief description of important landmarks and a general overview of the history of animation. It also explains how animators' desire to mimic real-life motion led to the invention of several technologies to achieve this goal. Next we describe the several stages that compose the motion capture process. We then compare key-frame animation and motion capture animation techniques, and analyse several examples of films where motion capture was used. Finally, we describe the production phases of an animated short film called Napoleon's Unsung Battle, in which the majority of the animated content was obtained through motion capture while aiming for a cartoony/non-realistic style of animation. There is still margin for improvement in the final results, but there is also proof that it is possible to obtain a non-realistic style of animation using motion capture technology. The questions that remain are: is it time effective, and can the process be optimized for this less-than-common use?

    Everybody Dance Now

    This paper presents a simple method for "do as I do" motion transfer: given a source video of a person dancing, we can transfer that performance to a novel (amateur) target after only a few minutes of the target subject performing standard moves. We approach this problem as video-to-video translation using pose as an intermediate representation. To transfer the motion, we extract poses from the source subject and apply the learned pose-to-appearance mapping to generate the target subject. We predict two consecutive frames for temporally coherent video results and introduce a separate pipeline for realistic face synthesis. Although our method is quite simple, it produces surprisingly compelling results (see video). This motivates us to also provide a forensics tool for reliable synthetic content detection, which is able to distinguish videos synthesized by our system from real data. In addition, we release a first-of-its-kind open-source dataset of videos that can be legally used for training and motion transfer. (In ICCV 2019.)
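
    A minimal structural sketch of the pipeline described above, assuming stand-in functions for the learned components (a pose detector and a pose-to-appearance generator). The function names and placeholder logic are assumptions for illustration, not the authors' code.

        # Structural sketch: pose as the intermediate representation between
        # source and target video. Both functions below are stand-ins.

        def extract_pose(frame):
            """Stand-in for a 2D pose detector applied to a source frame."""
            return {"joints": frame}  # placeholder pose representation

        def pose_to_appearance(pose, prev_output=None):
            """Stand-in for the learned pose-to-appearance mapping.
            Conditioning on the previous output mirrors the idea of
            predicting consecutive frames for temporal coherence."""
            return pose["joints"]

        def transfer(source_frames):
            outputs, prev = [], None
            for frame in source_frames:
                pose = extract_pose(frame)            # pose of the source subject
                out = pose_to_appearance(pose, prev)  # rendered target subject
                outputs.append(out)
                prev = out
            return outputs

        print(transfer(["frame0", "frame1", "frame2"]))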

    Correspondence-free online human motion retargeting

    We present a novel data-driven framework for unsupervised human motion retargeting which animates a target body shape with a source motion. This allows motions to be retargeted between different characters by animating a target subject with the motion of a source subject. Our method is correspondence-free, i.e. neither spatial correspondences between the source and target shapes nor temporal correspondences between different frames of the source motion are required. The proposed method directly animates a target shape with arbitrary sequences of humans in motion, possibly captured using 4D acquisition platforms or consumer devices. Our framework takes into account long-term temporal context of one second during retargeting while accounting for surface details. To achieve this, we take inspiration from two lines of existing work: skeletal motion retargeting, which leverages long-term temporal context at the cost of surface detail, and surface-based retargeting, which preserves surface details without considering long-term temporal context. We unify the advantages of these works by combining a learnt skinning field with a skeletal retargeting approach. During inference, our method runs online, i.e. the input can be processed in a serial way, and retargeting is performed in a single forward pass per frame. Experiments show that including long-term temporal context during training improves the method's accuracy both in terms of the retargeted skeletal motion and the detail preservation. Furthermore, our method generalizes well to unobserved motions and body shapes. We demonstrate that the proposed framework achieves state-of-the-art results on two test datasets.
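
    The online, per-frame loop described above might look roughly like the sketch below, where retarget_skeleton and skinning_field stand in for the learned skeletal retargeting and the learnt skinning field. All names, signatures and placeholder returns are illustrative assumptions, not the paper's API.

        # Hedged sketch of online retargeting: one forward pass per frame,
        # with a sliding window of temporal context (the paper uses ~1 s).

        def retarget_skeleton(source_pose, target_shape, context):
            """Stand-in: map the source skeletal pose onto the target body,
            using the accumulated temporal context."""
            return {"pose": source_pose, "shape": target_shape}

        def skinning_field(target_shape, skeleton):
            """Stand-in: deform the target surface from the retargeted
            skeleton while preserving surface detail."""
            return (target_shape, skeleton["pose"])

        def retarget_online(source_frames, target_shape, context_len=30):
            context, outputs = [], []
            for src_pose in source_frames:    # serial: frames arrive one by one
                context = (context + [src_pose])[-context_len:]
                skel = retarget_skeleton(src_pose, target_shape, context)
                outputs.append(skinning_field(target_shape, skel))
            return outputs

        print(len(retarget_online(["p0", "p1", "p2"], "target_mesh")))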

    Multi-Character Motion Retargeting for Large Scale Changes
