
    High quality dynamic reflectance and surface reconstruction from video

    The creation of high-quality animations of real-world human actors has long been a challenging problem in computer graphics. It involves modeling the shape of the virtual actors, creating their motion, and reproducing very fine dynamic details. To render the actor under arbitrary lighting, reflectance properties must be modeled for each point on the surface. These steps, usually performed manually by professional modelers, are time consuming and cumbersome. In this thesis, we show that algorithmic solutions to some of the problems that arise in creating high-quality animations of real-world people are possible using multi-view video data. First, we present a novel spatio-temporal approach to create a personalized avatar from multi-view video data of a moving person. Thereafter, we propose two enhancements to a method that captures human shape, motion and reflectance properties of a moving human from eight multi-view video streams. We then extend this work and, in order to add very fine dynamic details such as wrinkles and folds in the clothing to the geometric models, present a statistical method that passively captures the fine-grained details of time-varying scene geometry from the multi-view video recordings. Finally, in order to reconstruct structured shape and animation of the subject from video, we present a dense 3D correspondence-finding method that enables spatio-temporally coherent reconstruction of surface animations directly from multi-view video data. These algorithmic solutions can be combined into a complete animation pipeline for the acquisition, reconstruction and rendering of high-quality virtual actors from multi-view video data, or used individually in systems that require the solution of a specific algorithmic sub-problem. The results demonstrate that, using multi-view video data, it is possible to find a model description that enables realistic appearance of animated virtual actors under different lighting conditions and exhibits high-quality dynamic details in the geometry.
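To make the reflectance-modeling step concrete, the sketch below shows one minimal way to recover a per-point albedo from multi-view observations, under a simplifying Lambertian assumption with known surface normals and per-view lighting directions. The function and variable names are illustrative assumptions; this is not the estimation procedure used in the thesis.

```python
import numpy as np

def estimate_lambertian_albedo(intensities, normals, light_dirs):
    """Least-squares albedo per surface point.

    intensities: (P, V) observed intensity of P points in V views
    normals:     (P, 3) unit surface normals
    light_dirs:  (V, 3) unit light directions (one per view)
    Returns albedo: (P,)
    """
    # Lambertian shading model: I_pv = albedo_p * max(n_p . l_v, 0)
    shading = np.clip(normals @ light_dirs.T, 0.0, None)   # (P, V)
    num = (shading * intensities).sum(axis=1)
    den = (shading ** 2).sum(axis=1) + 1e-8
    return num / den

# Toy usage: 2 surface points observed in 3 views
normals = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0]])
lights = np.array([[0, 0, 1], [0, 0.6, 0.8], [0.6, 0, 0.8]], dtype=float)
lights /= np.linalg.norm(lights, axis=1, keepdims=True)
true_albedo = np.array([0.7, 0.4])
obs = true_albedo[:, None] * np.clip(normals @ lights.T, 0, None)
print(estimate_lambertian_albedo(obs, normals, lights))   # -> [0.7 0.4]
```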

    Motion capture based on RGBD data from multiple sensors for avatar animation

    With recent advances in technology and the emergence of affordable RGB-D sensors for a wider range of users, markerless motion capture has become an active field of research in both computer vision and computer graphics. In this thesis, we designed a proof of concept (POC) for a new tool that performs motion capture using a variable number of commodity RGB-D sensors of different brands and technical specifications in environments with unconstrained layouts. The main goal of this work is to provide a tool with motion capture capabilities using a handful of RGB-D sensors, without imposing strong requirements on lighting, background or the extent of the capture area. Naturally, the number of RGB-D sensors needed is inversely proportional to their resolution and directly proportional to the size of the area to track. Built on top of the OpenNI 2 library, the POC is compatible with most non-high-end RGB-D sensors currently available on the market. Because a single computer lacks the resources to support more than a couple of sensors working simultaneously, the setup is composed of multiple computers. To keep data coherent and synchronized across sensors and computers, our tool uses a semi-automatic calibration method and a message-oriented network protocol. From the color and depth data given by a sensor, we can also obtain a 3D pointcloud representation of the environment. By combining pointclouds from multiple sensors, we can assemble a complete, animated 3D pointcloud that can be visualized from any viewpoint. Given a 3D avatar model and its attached skeleton, we use an iterative optimization method (e.g., Simplex) to fit a skeleton configuration to each pointcloud frame; using these skeleton configurations as key frames yields the 3D avatar animation.
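The last step, fitting a skeleton to each fused pointcloud frame with an iterative optimizer such as Simplex, can be sketched roughly as follows. The helper names, the toy "skeleton" parameterized directly by joint positions, and the use of SciPy's Nelder-Mead implementation are illustrative assumptions, not the thesis's exact implementation.

```python
import numpy as np
from scipy.optimize import minimize

def merge_pointclouds(clouds, extrinsics):
    """Transform each sensor's cloud (N_i, 3) into the world frame and stack them."""
    merged = []
    for pts, (R, t) in zip(clouds, extrinsics):
        merged.append(pts @ R.T + t)
    return np.vstack(merged)

def fit_skeleton(joints0, cloud):
    """Find joint positions that stay close to the observed fused cloud.

    joints0: (J, 3) initial joint positions (e.g., taken from the previous frame).
    """
    def cost(x):
        joints = x.reshape(-1, 3)
        # distance of each joint to its nearest cloud point
        d = np.linalg.norm(cloud[None, :, :] - joints[:, None, :], axis=2)
        return d.min(axis=1).sum()

    res = minimize(cost, joints0.ravel(), method="Nelder-Mead")
    return res.x.reshape(-1, 3)
```

A production system would parameterize the skeleton by joint angles and bone lengths and use correspondence-based distances, but the overall structure of the per-frame optimization is the same.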

    From motion capture to interactive virtual worlds : towards unconstrained motion-capture algorithms for real-time performance-driven character animation

    This dissertation takes performance-driven character animation as a representative application and advances motion-capture algorithms and animation methods to meet its high demands. Existing approaches either have coarse resolution and a restricted capture volume, require expensive and complex multi-camera systems, or use intrusive suits and controllers. For motion capture, set-up time is reduced by using fewer cameras, accuracy is increased despite occlusions and general environments, initialization is automated, and free roaming is enabled by egocentric cameras. For animation, increased robustness enables the use of low-cost sensor input, custom control-gesture definition is guided to support novice users, and animation expressiveness is increased. The main contributions are: 1) an analytic and differentiable visibility model for pose optimization under strong occlusions, 2) a volumetric contour model for automatic actor initialization in general scenes, 3) a method to annotate and augment image-pose databases automatically, 4) the utilization of unlabeled examples for character control, and 5) the generalization and disambiguation of cyclical gestures for faithful character animation. In summary, the whole process of human motion capture, processing, and application to animation is advanced. These advances over the state of the art have the potential to improve many interactive applications, within and outside virtual reality.
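Contribution 1 refers to an analytic, differentiable visibility model. The thesis's exact formulation is not reproduced here; the toy sketch below is my own simplification, treating body parts as isotropic Gaussian blobs and using a sigmoid soft depth test, and only illustrates how per-part visibility can be made smooth enough for gradient-based pose optimization.

```python
import numpy as np

def soft_visibility(centers, radii, cam_pos):
    """Smooth per-blob visibility of body parts from a camera position.

    A blob is occluded to the degree that other blobs lie close to the ray
    from the camera to it and in front of it; exp(-occupancy) and a sigmoid
    depth test keep the term smooth rather than binary.
    """
    n = len(centers)
    vis = np.ones(n)
    for i in range(n):
        ray = centers[i] - cam_pos
        depth_i = np.linalg.norm(ray)
        ray = ray / depth_i
        occupancy = 0.0
        for j in range(n):
            if j == i:
                continue
            rel = centers[j] - cam_pos
            depth_j = rel @ ray                       # depth of blob j along the ray
            dist_to_ray = np.linalg.norm(rel - depth_j * ray)
            in_front = 1.0 / (1.0 + np.exp(-(depth_i - depth_j)))   # soft depth test
            occupancy += in_front * np.exp(-0.5 * (dist_to_ray / radii[j]) ** 2)
        vis[i] = np.exp(-occupancy)
    return vis

# Toy usage: blob 0 sits in front of blob 1 along the optical axis
centers = np.array([[0.0, 0.0, 2.0], [0.0, 0.0, 4.0]])
print(soft_visibility(centers, np.array([0.3, 0.3]), np.zeros(3)))  # blob 1 is mostly occluded
```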

    3D object reconstruction using computer vision : reconstruction and characterization applications for external human anatomical structures

    Doctoral thesis in Informatics Engineering. Faculdade de Engenharia, Universidade do Porto. 201

    Affect-Preserving Visual Privacy Protection

    The prevalence of wireless networks and the convenience of mobile cameras enable many new video applications beyond security and entertainment. From behavioral diagnosis to wellness monitoring, cameras are increasingly used for observation in various educational and medical settings. Videos collected for such applications are considered protected health information under privacy laws in many countries. Visual privacy-protection techniques, such as blurring or object removal, can mitigate privacy concerns, but they also obliterate important visual cues of affect and social behavior that are crucial for the target applications. In this dissertation, we propose to balance privacy protection and the utility of the data by preserving privacy-insensitive information, such as pose and expression, which is useful in many applications involving visual understanding. The intellectual merits of the dissertation include a novel framework for visual privacy protection that manipulates the facial image and body shape of individuals and: (1) is able to conceal the identity of individuals; (2) provides a way to preserve the utility of the data, such as expression and pose information; and (3) balances the utility of the data against the strength of the privacy protection. The broader impacts of the dissertation concern the significance of privacy protection for visual data and the inadequacy of current privacy-enhancing technologies in preserving affect and behavioral attributes of the visual content, which are highly useful for behavior observation in educational and medical settings. The work in this dissertation represents one of the first attempts to achieve both goals simultaneously.
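The tension described above can be illustrated with a deliberately naive baseline: pixelating the face region removes identity cues but, as the dissertation argues, also destroys affect cues, while separately extracted pose keypoints can be passed through untouched as the "utility" channel. The sketch below uses hypothetical names and is not the proposed framework; it only makes that privacy/utility split concrete.

```python
import numpy as np

def pixelate_region(frame, box, block=16):
    """Coarsely pixelate a face bounding box (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = box
    region = frame[y0:y1, x0:x1]
    h, w = region.shape[:2]
    small = region[::block, ::block]
    # repeat each coarse sample back up to the original region size
    up = np.repeat(np.repeat(small, block, axis=0), block, axis=1)[:h, :w]
    frame[y0:y1, x0:x1] = up
    return frame

def anonymize(frame, face_box, pose_keypoints):
    """Remove identity cues from the face while keeping pose as the utility signal."""
    protected = pixelate_region(frame.copy(), face_box)
    return protected, pose_keypoints   # keypoints are passed through untouched

# Toy usage on a blank frame with an assumed face box and two keypoints
frame = np.zeros((480, 640, 3), dtype=np.uint8)
keypoints = np.array([[320, 240], [300, 400]])
safe_frame, kept = anonymize(frame, (280, 80, 380, 200), keypoints)
```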

    Physics-based Reconstruction and Animation of Humans

    Creating digital representations of humans is of utmost importance for applications ranging from entertainment (video games, movies) to human-computer interaction and even psychiatric treatment. What makes building credible digital doubles difficult is that the human visual system is very sensitive to the complex expressivity of, and potential anomalies in, body structure and motion. This thesis presents several projects that tackle these problems from two different perspectives: lightweight acquisition and physics-based simulation. It starts by describing a complete pipeline that allows users to reconstruct fully rigged 3D facial avatars using video data from a handheld device (e.g., a smartphone). The avatars use a novel two-scale representation composed of blendshapes and dynamic detail maps, and are constructed through an optimization that integrates feature tracking, optical flow, and shape from shading. Continuing along the lines of accessible acquisition systems, we discuss a framework for simultaneous tracking and modeling of articulated human bodies from RGB-D data, and show how semantic information can be extracted from the scanned body shapes. In the second half of the thesis, we deviate from standard linear reconstruction and animation models and instead exploit physics-based techniques that can incorporate complex phenomena such as dynamics, collision response and incompressibility of the materials. The first approach we propose assumes that each 3D scan of an actor records the body in a physical steady state and uses a process called inverse physics to extract a volumetric, physics-ready anatomical model of the actor. By using biologically inspired growth models for the bones, muscles and fat, our method obtains realistic anatomical reconstructions that can later be animated using external tracking data, such as motion-capture marker trajectories. This is then extended to a novel physics-based approach for facial reconstruction and animation. We propose a facial animation model that simulates biomechanical muscle contractions in a volumetric head model in order to create the facial expressions seen in the input scans. We then show how this approach opens new avenues for dynamic artistic control, simulation of corrective facial surgery, and interaction with external forces and objects.
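The two-scale facial representation mentioned above combines blendshapes with dynamic detail maps. The sketch below shows only the standard linear blendshape evaluation, with illustrative array names, to make the coarse-scale part of that representation concrete; the detail maps and the reconstruction optimization are not covered.

```python
import numpy as np

def evaluate_blendshapes(neutral, deltas, weights):
    """Standard linear blendshape model.

    neutral: (V, 3) neutral-face vertex positions
    deltas:  (K, V, 3) per-blendshape vertex offsets from the neutral face
    weights: (K,) expression weights, typically in [0, 1]
    """
    return neutral + np.tensordot(weights, deltas, axes=1)   # (V, 3)

# Toy usage: 1 vertex, 2 blendshapes
neutral = np.zeros((1, 3))
deltas = np.array([[[1.0, 0.0, 0.0]],
                   [[0.0, 1.0, 0.0]]])
print(evaluate_blendshapes(neutral, deltas, np.array([0.5, 0.25])))   # -> [[0.5 0.25 0.]]
```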

    Whole-Body Motion Capture and Beyond: From Model-Based Inference to Learning-Based Regression

    Though effective and successful, traditional marker-less motion capture (MoCap) methods suffer from several limitations: 1) they presume a character-specific body model, so they permit neither a fully automatic pipeline nor generalization over diverse body shapes; 2) the objects humans interact with are not tracked, while in reality interaction between humans and objects is ubiquitous; 3) they rely heavily on a sophisticated optimization process that needs a good initialization and strong priors, and can be slow. We address all of these issues in this thesis. First, we propose a fully automatic method to accurately reconstruct a 3D human body from multi-view RGB videos, the typical setup for MoCap systems. We pre-process all RGB videos to obtain 2D keypoints and silhouettes, then fit the SMPL body model to the 2D measurements in two successive stages. In the first stage, the shape and pose parameters of SMPL are estimated frame by frame. In the second stage, a batch of frames is refined jointly with an additional DCT (Discrete Cosine Transform) prior. Our method naturally handles different body shapes and challenging poses without human intervention. We then extend this system to support tracking of rigid objects the subjects interact with. Our setup consists of 6 Azure Kinect cameras. All videos are pre-processed by segmenting humans and objects and detecting 2D body joints. We adopt the SMPL-X model here to better capture body and hand pose; the model is fitted to the 2D keypoints and accumulated point clouds. The body poses and object poses are then jointly optimized with contact and interpenetration constraints. With this approach, we capture a novel human-object interaction dataset with natural RGB images and plausible body and object motion information. Lastly, we present the first practical and lightweight MoCap system that needs only 6 inertial measurement units (IMUs). Our approach is based on bi-directional recurrent neural networks (Bi-RNNs); the network exploits temporal information by jointly reasoning about past and future IMU measurements. To handle data scarcity, we create synthetic data from archival MoCap data. Overall, our system runs ten times faster than traditional optimization-based methods and is numerically more accurate. We also show that it is feasible to estimate which activity the subject is performing by observing only the IMU measurements from a smartwatch worn by the subject; this is useful for high-level semantic understanding of human behavior, but also alerts the public to potential privacy concerns. In summary, we advance marker-less MoCap by contributing the first automatic yet accurate system, extending MoCap methods to support rigid-object tracking, and proposing a practical and lightweight algorithm that uses 6 IMUs. We believe our work makes marker-less and IMU-based MoCap cheaper and more practical, and thus closer to end users for daily use.
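For the IMU-based system described above, a bidirectional recurrent network regresses full-body pose from the six IMU streams. The PyTorch sketch below is a minimal stand-in with assumed input and output sizes (per-IMU orientation as a flattened rotation matrix plus acceleration, and an SMPL-style axis-angle pose vector as output); it is not the exact architecture used in the thesis.

```python
import torch
import torch.nn as nn

class BiRNNPoseRegressor(nn.Module):
    """Bidirectional RNN mapping 6-IMU measurements to body-pose parameters.

    Input per frame:  6 IMUs x (9 rotation-matrix entries + 3 accelerations) = 72 values.
    Output per frame: e.g., 24 joints x 3 axis-angle values = 72 pose parameters.
    The dimensions are illustrative assumptions.
    """
    def __init__(self, in_dim=72, hidden=256, out_dim=72):
        super().__init__()
        self.rnn = nn.LSTM(in_dim, hidden, num_layers=2,
                           batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, out_dim)

    def forward(self, imu_seq):               # (batch, frames, 72)
        features, _ = self.rnn(imu_seq)       # (batch, frames, 2 * hidden)
        return self.head(features)            # (batch, frames, 72)

model = BiRNNPoseRegressor()
poses = model(torch.randn(1, 120, 72))        # a 120-frame sequence
print(poses.shape)                            # torch.Size([1, 120, 72])
```

The bidirectional pass is what lets the network reason jointly about past and future measurements within a window, which is why it can resolve ambiguities that a purely causal filter cannot.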

    Deep Learning-Based Human Pose Estimation: A Survey

    Human pose estimation aims to locate human body parts and build a human body representation (e.g., a body skeleton) from input data such as images and videos. It has drawn increasing attention during the past decade and has been utilized in a wide range of applications, including human-computer interaction, motion analysis, augmented reality, and virtual reality. Although recently developed deep learning-based solutions achieve high performance in human pose estimation, challenges remain due to insufficient training data, depth ambiguities, and occlusion. The goal of this survey is to provide a comprehensive review of recent deep learning-based solutions for both 2D and 3D pose estimation via a systematic analysis and comparison of these solutions based on their input data and inference procedures. More than 240 research papers since 2014 are covered. Furthermore, 2D and 3D human pose estimation datasets and evaluation metrics are included. Quantitative performance comparisons of the reviewed methods on popular datasets are summarized and discussed. Finally, the remaining challenges, applications, and future research directions are discussed. We also provide a regularly updated project page: https://github.com/zczcwh/DL-HPE
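Many of the 2D methods covered by such surveys predict one heatmap per joint and decode keypoint coordinates from it. The sketch below shows that common decoding step under an assumed heatmap stride; it is a generic illustration, not any specific surveyed method.

```python
import numpy as np

def decode_heatmaps(heatmaps, stride=4):
    """Decode joint locations from per-joint heatmaps.

    heatmaps: (J, H, W) network output, one map per joint
    stride:   downsampling factor between the input image and the heatmaps
    Returns (J, 2) pixel coordinates (x, y) and (J,) confidences.
    """
    J, H, W = heatmaps.shape
    flat = heatmaps.reshape(J, -1)
    ys, xs = np.unravel_index(flat.argmax(axis=1), (H, W))
    confidences = flat.max(axis=1)
    keypoints = np.stack([xs, ys], axis=1) * stride
    return keypoints, confidences
```

Real systems typically refine these integer locations with sub-pixel offsets or learned offset maps, but the argmax decoding above is the common starting point.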

    Electronic Imaging & the Visual Arts. EVA 2013 Florence

    Important information technology topics are presented: multimedia systems, databases, data protection, and access to content. Particular attention is given to digital images (2D, 3D) relating to cultural institutions (museums, libraries, palaces and monuments, archaeological sites). The main parts of the conference proceedings cover: Strategic Issues; EC Projects and Related Networks & Initiatives; the International Forum on "Culture & Technology"; 2D and 3D Technologies & Applications; Virtual Galleries, Museums and Related Initiatives; and Access to Cultural Information. Three workshops are devoted to International Cooperation, Innovation and Enterprise, and Creative Industries and Cultural Tourism.