2,168 research outputs found

    Robust extraction of 3D geometric primitives from a point cloud and primitive-based alignment

    Get PDF
    In this research project, we address reverse engineering and quality control problems, which play significant roles in industrial manufacturing. Reverse engineering attempts to rebuild a 3D model from the scanned data captured from an object, a problem similar to 3D surface reconstruction. Quality control is a process in which the quality of all factors involved in production is monitored and revised. In practice, both kinds of systems currently require significant intervention from experienced users and are thus still far from being fully automated. Therefore, many challenges still need to be addressed to achieve the desired performance in automated production. The first proposition of this thesis is to extract 3D geometric primitives from point clouds for reverse engineering and surface reconstruction. A complete framework for extracting multiple types of primitives from 3D data is proposed. In particular, a novel validation method is also proposed to assess the quality of the extracted primitives. In the end, all primitives present in the point cloud are extracted together with their associated data points and descriptive parameters. These results could be used in various applications such as scene and building reconstruction, constructive solid geometry, etc. The second proposition of the thesis is to align two 3D datasets using the extracted geometric primitives, which are introduced as a novel and robust descriptor. The idea of using primitives for alignment addresses several challenges faced by existing registration methods. This alignment problem is an essential step in 3D modeling, registration, and model retrieval. Finally, an automatic method to extract sharp features from 3D data of man-made objects is also proposed. By integrating the extracted sharp features into the alignment framework, it is possible to assign primitive correspondences automatically using attributed relational graph matching. Each primitive is considered a node of the graph, and an attributed relational graph is created to provide a structural and relational description between primitives. We have evaluated all the proposed algorithms on different synthetic and real scanned datasets. Our algorithms not only complete their tasks successfully but also outperform existing methods. We believe that these contributions could be useful in many applications.
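
    The abstract does not specify the extraction algorithm itself; as a rough illustration of what fitting a single geometric primitive to a noisy point cloud can look like, the sketch below uses a standard RANSAC plane fit. It is a minimal, generic example (the function name, iteration count, and inlier tolerance are arbitrary assumptions), not the method proposed in the thesis.

        import numpy as np

        def ransac_plane(points, n_iters=500, inlier_tol=0.01, rng=None):
            """Fit one plane primitive n.x + d = 0 to an (N, 3) point cloud with RANSAC."""
            rng = np.random.default_rng(rng)
            best_inliers = np.zeros(len(points), dtype=bool)
            best_plane = None
            for _ in range(n_iters):
                # Sample 3 distinct points and fit an exact plane through them.
                p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
                normal = np.cross(p1 - p0, p2 - p0)
                norm = np.linalg.norm(normal)
                if norm < 1e-12:  # degenerate (nearly collinear) sample
                    continue
                normal /= norm
                d = -normal @ p0
                # Keep the candidate supported by the most points within tolerance.
                inliers = np.abs(points @ normal + d) < inlier_tol
                if inliers.sum() > best_inliers.sum():
                    best_inliers, best_plane = inliers, (normal, d)
            return best_plane[0], best_plane[1], best_inliers

    Removing the inliers of an accepted primitive and repeating the fit yields further primitives; a validation step such as the one proposed in the thesis would then score each candidate against its supporting points.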

    Vehicle localization with enhanced robustness for urban automated driving

    Get PDF

    Efficient Dense Registration, Segmentation, and Modeling Methods for RGB-D Environment Perception

    Get PDF
    One perspective for artificial intelligence research is to build machines that perform tasks autonomously in our complex everyday environments. This setting poses challenges to the development of perception skills: a robot should be able to perceive its own location and the objects in its surroundings, while both the objects and the robot itself may be moving. Objects may not only be composed of rigid parts, but could be non-rigidly deformable or appear in a variety of similar shapes. Furthermore, observing object semantics may be relevant to the task. For a robot to act fluently and immediately, these perception challenges demand efficient methods. This thesis presents novel approaches to robot perception with RGB-D sensors. It develops efficient registration, segmentation, and modeling methods for scene and object perception. We propose multi-resolution surfel maps as a concise representation for RGB-D measurements. We develop probabilistic registration methods that handle rigid scenes, scenes with multiple rigid parts that move differently, and scenes that undergo non-rigid deformations. We use these methods to learn and perceive 3D models of scenes and objects in both static and dynamic environments. For learning models of static scenes, we propose a real-time capable simultaneous localization and mapping approach. It aligns key views in RGB-D video using our rigid registration method and optimizes the pose graph of the key views. The acquired models are then perceived in live images through detection and tracking within a Bayesian filtering framework. An assumption frequently made for environment mapping is that the observed scene remains static during the mapping process. Through rigid multi-body registration, we take advantage of relaxing this assumption: our registration method segments views into parts that move independently between the views and simultaneously estimates their motion. Within simultaneous motion segmentation, localization, and mapping, we separate scenes into objects by their motion. Our approach acquires 3D models of objects and concurrently infers hierarchical part relations between them using probabilistic reasoning. It can be applied for interactive learning of objects and their part decomposition. Endowing robots with manipulation skills for a large variety of objects is a tedious endeavor if the skill must be programmed for every instance of an object class. Furthermore, slight deformations of an instance could not be handled by an inflexible program. Deformable registration is useful to perceive such shape variations, e.g., between specific instances of a tool. We develop an efficient deformable registration method and apply it to the transfer of robot manipulation skills between varying object instances. On the object-class level, we segment images using random decision forest classifiers in real time. The probabilistic labelings of individual images are fused into 3D semantic maps within a Bayesian framework. We combine our object-class segmentation method with simultaneous localization and mapping to achieve online semantic mapping in real time. The methods developed in this thesis are evaluated in experiments on publicly available benchmark datasets and on our own novel datasets.
    We publicly demonstrate several of our perception approaches within integrated robot systems in the mobile manipulation context; they were an important component in winning the RoboCup@Home league competitions in 2011, 2012, and 2013.
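
    The abstract describes fusing the probabilistic object-class labelings of individual images into a 3D semantic map within a Bayesian framework. One common way to realise such a fusion, shown as a minimal sketch below, is to treat the per-view class distributions as independent likelihoods and multiply them into a normalised per-voxel posterior. The voxel keys, class count, and helper names are illustrative assumptions, not the implementation from the thesis.

        import numpy as np
        from collections import defaultdict

        NUM_CLASSES = 5  # illustrative number of object classes

        def make_semantic_map():
            # Every voxel starts from a uniform (flat) class prior.
            return defaultdict(lambda: np.full(NUM_CLASSES, 1.0 / NUM_CLASSES))

        def fuse_observation(semantic_map, voxel_key, class_probs):
            """Fuse one observed class distribution into a voxel.

            Assuming observations from different views are independent given the
            true class, the posterior is proportional to prior times likelihood.
            """
            posterior = semantic_map[voxel_key] * np.asarray(class_probs)
            semantic_map[voxel_key] = posterior / posterior.sum()

        # Usage: two noisy segmentations of the same voxel reinforce class 0.
        semantic_map = make_semantic_map()
        fuse_observation(semantic_map, (10, 4, 2), [0.6, 0.1, 0.1, 0.1, 0.1])
        fuse_observation(semantic_map, (10, 4, 2), [0.5, 0.2, 0.1, 0.1, 0.1])
        print(semantic_map[(10, 4, 2)])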

    Mobile Robots

    Get PDF
    The objective of this book is to cover advances in mobile robotics and related technologies applied to the design and development of multi-robot systems. The design of the control system is a complex issue, requiring the application of information technologies to link the robots into a single network. The human-robot interface becomes a demanding task, especially when we try to use sophisticated methods for brain signal processing. The generated electrophysiological signals can be used to command different devices, such as cars, wheelchairs, or even video games. A number of developments in navigation and path planning, including parallel programming, can be observed. Cooperative path planning, formation control of multiple robotic agents, and communication and distance measurement between agents are presented. Training mobile robot operators is also a very difficult task, in part because of several factors related to the execution of different tasks. The improvement presented here relates to environment model generation based on the observations of an autonomous mobile robot.

    ONLINE HIERARCHICAL MODELS FOR SURFACE RECONSTRUCTION

    Get PDF
    Applications based on three-dimensional object models are very common today and can be found in many fields such as design, archaeology, medicine, and entertainment. A digital 3D model can be obtained by means of physical measurements of the object performed using a 3D scanner. In this approach, an important step of the 3D model building process consists of creating the object's surface representation from a cloud of noisy points sampled on the object itself. This process can be viewed as the estimation of a function from a finite subset of its points. Both in statistics and in machine learning this is known as a regression problem. Machine learning views the function estimation as a learning problem to be addressed using computational intelligence techniques: the points represent a set of examples, and the surface to be reconstructed represents the law that generated them. On the other hand, in many applications the cloud of sampled points may become available only progressively during system operation. The conventional approaches to regression are therefore not suited to dealing efficiently with this operating condition. The aim of the thesis is to introduce innovative approaches to the regression problem that achieve high reconstruction accuracy while limiting computational complexity, and that are appropriate for online operation. Two classical computational intelligence paradigms have been considered as basic tools to address the regression problem: Radial Basis Functions and Support Vector Machines. The original and innovative aspect introduced by this thesis is the extension of these tools toward a multi-scale incremental structure, based on hierarchical schemes and suited for online operation. This allows for modular, scalable, accurate, and efficient modeling procedures with training algorithms appropriate for online learning. Radial Basis Function Networks have a fast configuration procedure that, operating locally, does not require iterative algorithms. On the other hand, the computational complexity of the configuration procedure of Support Vector Machines is independent of the number of input variables. These two approaches have been considered in order to analyze the advantages and limits of each, which stem from the differences in their intrinsic nature.
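
    As a concrete illustration of the Radial Basis Function building block that the thesis extends into hierarchical, online structures, the sketch below fits a single-scale Gaussian RBF model to scattered samples by solving a regularised linear system. The centre placement, kernel width, and regularisation are arbitrary assumptions for the example, not the configuration procedure developed in the thesis.

        import numpy as np

        def rbf_design(X, centers, sigma):
            """Gaussian kernel design matrix K[i, j] = exp(-||x_i - c_j||^2 / (2 sigma^2))."""
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2.0 * sigma ** 2))

        def rbf_fit(X, y, centers, sigma, reg=1e-6):
            """Least-squares weights for f(x) = sum_j w_j k(x, c_j), with ridge regularisation."""
            K = rbf_design(X, centers, sigma)
            return np.linalg.solve(K.T @ K + reg * np.eye(len(centers)), K.T @ y)

        # Usage: reconstruct a noisy height field z = f(x, y) from scattered samples.
        rng = np.random.default_rng(0)
        X = rng.uniform(-1, 1, (200, 2))
        z = np.sin(3 * X[:, 0]) * np.cos(3 * X[:, 1]) + 0.05 * rng.normal(size=200)
        centers = rng.uniform(-1, 1, (30, 2))
        w = rbf_fit(X, z, centers, sigma=0.3)
        z_hat = rbf_design(X, centers, 0.3) @ w
        print("RMS reconstruction error:", np.sqrt(np.mean((z_hat - z) ** 2)))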

    Evolutionary-based global localization and mapping of three dimensional environments

    Get PDF
    A fully autonomous robot must obtain and interpret information about the environment to execute several tasks. The mobile robot mapping, or SLAM, problem is closely related to these abilities. It consists of interpreting the information perceived by the robot's sensors in order to build a map and localize the robot in it. Many other robot skills depend on this task; thus, it is one of the most important problems to be solved by a truly autonomous robot. The objective of this work is to design various specific tools related to the mapping problem in order to improve the autonomy of MANFRED-2, a mobile robot fully developed by the Robotics Lab research group of the Systems Engineering and Automation Department of the Carlos III University of Madrid. The localization problem in mobile robotics can be defined as the search for the robot's coordinates in a known environment. If there is no information about the initial location, we speak of global localization. In this work, we have developed an algorithm that solves this problem in a three-dimensional environment using Differential Evolution, a particle-based evolutionary algorithm whose population evolves over time toward the solution that yields the lowest value of the cost function. The proposed method has many features that make it very robust and reliable: thresholding and discarding mechanisms, different cost functions, effective convergence criteria, and so on. The resulting global localization module has been tested in numerous experiments. The high accuracy of the method allows its application in manipulation tasks. If the environment information is given by laser readings, it is essential to correct the local errors between pairs of scans to improve the map quality; this is called registration or scan matching. We have implemented a scan matching algorithm for three-dimensional environments, also based on the Differential Evolution method. The high accuracy and computational efficiency of the proposed method have been demonstrated with experimental results. The last problem addressed here consists of detecting when the robot is navigating through a known place (loop detection). After that, the accumulated error can be minimized to give consistency to the global map (loop closure). We have developed a loop detection method that compares features extracted from two different scans to obtain a loop indicator. This approach allows the introduction of very different characteristics in the descriptor. First, the surface features include the geometric forms of the scan (lines, planes, and spheres). Second, the numerical features describe several other properties (volume, average range, curvature, etc.). The algorithm has been tested with real data to demonstrate its efficiency. All true loops are correctly detected, and no false detections occur when the mobile robot covers a long trajectory. The results are similar to or even better than those obtained by other research groups. In addition, it is a more versatile method because it admits a wide variety of scan properties and different weights in the comparison formula.
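
    The abstract describes Differential Evolution as a population-based optimiser whose candidate solutions evolve toward the minimum of a cost function. A minimal sketch of the classic DE/rand/1/bin scheme is given below; the pose parameterisation and the laser-based cost functions used in the thesis are not reproduced, so the example simply minimises a generic cost over box bounds.

        import numpy as np

        def differential_evolution(cost, bounds, pop_size=30, F=0.7, CR=0.9,
                                   n_gen=200, rng=None):
            """Minimal DE/rand/1/bin optimiser. bounds is a (D, 2) array of [low, high]."""
            rng = np.random.default_rng(rng)
            low, high = bounds[:, 0], bounds[:, 1]
            dim = len(bounds)
            pop = rng.uniform(low, high, (pop_size, dim))
            costs = np.array([cost(x) for x in pop])
            for _ in range(n_gen):
                for i in range(pop_size):
                    others = [j for j in range(pop_size) if j != i]
                    a, b, c = pop[rng.choice(others, 3, replace=False)]
                    mutant = np.clip(a + F * (b - c), low, high)   # mutation
                    cross = rng.random(dim) < CR                   # binomial crossover
                    cross[rng.integers(dim)] = True                # force at least one gene
                    trial = np.where(cross, mutant, pop[i])
                    trial_cost = cost(trial)
                    if trial_cost < costs[i]:                      # greedy selection
                        pop[i], costs[i] = trial, trial_cost
            best = np.argmin(costs)
            return pop[best], costs[best]

        # Usage: a toy 3-DoF "pose" cost whose minimum lies at (1.0, -2.0, 0.5).
        target = np.array([1.0, -2.0, 0.5])
        cost = lambda x: float(np.sum((x - target) ** 2))
        print(differential_evolution(cost, np.array([[-5.0, 5.0]] * 3)))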

    Correntropy: Answer to non-Gaussian noise in modern SLAM applications?

    Get PDF
    The problem of non-Gaussian noise and outliers is intrinsic to modern Simultaneous Localization and Mapping (SLAM) applications. Despite the numerous algorithms for SLAM, it has become crucial to address this problem in the realm of modern robotics applications. This work focuses on addressing the above-mentioned problem by incorporating correntropy into SLAM. Before correntropy, multiple attempts at dealing with non-Gaussian noise had been proposed, with significant progress over time, but the underlying assumption of Gaussianity may not be sufficient in real-life robotics applications. Most modern SLAM algorithms propose the 'best' estimates given a set of sensor measurements. Apart from addressing non-Gaussian problems in a SLAM system, our work attempts to address more complex questions concerning SLAM: (a) if one of the sensors gives faulty measurements over time ('faulty' measurements can be non-Gaussian in nature), how should a SLAM framework adapt to such scenarios? (b) In situations where there is manual intervention, or a third-party attacker tries to change the measurements and affect the overall estimate of the SLAM system, how can a SLAM system handle such situations? This addresses the self-security aspect of SLAM. Given these serious situations, how should a modern SLAM system handle the problems mentioned in (a) and (b)? We explore the idea of correntropy for addressing these problems in popular filtering-based approaches such as Kalman Filters (KF) and Extended Kalman Filters (EKF), which concern the 'localization' part of SLAM. We then propose a framework for fusing the odometries computed individually from a stereo sensor and a Lidar sensor (odometry based on the Iterative Closest Point (ICP) algorithm). We describe the effectiveness of using correntropy in this framework, especially in situations where a third-party attacker attempts to corrupt the Lidar-computed odometry. We extend the usage of correntropy to the 'mapping' part of SLAM (registration), which is the highlight of our work. Although registration is a well-established problem, earlier approaches to registration are very inefficient under large rotations and translations. In addition, when the 3D datasets used for alignment are corrupted with non-Gaussian noise (shot/impulse noise), prior state-of-the-art approaches fail. Our work has given birth to another variant of ICP, which we name Correntropy Similarity Matrix ICP (CoSM-ICP), and which is robust to large translations and rotations as well as to shot/impulse noise. We verify through results how well our variant of ICP outperforms the other variants under large rotations and translations as well as under large outliers and non-Gaussian noise. In addition, we deploy our CoSM algorithm in applications where we compute the extrinsic calibration of the Lidar-stereo sensor pair as well as Lidar-camera calibration using a planar checkerboard in a single frame. In general, through results, we verify how efficiently our approach of using correntropy can tackle non-Gaussian, shot, and impulse noise in robotics applications.
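
    Correntropy is, in essence, a Gaussian-kernel similarity between two variables, so residuals far outside the kernel bandwidth contribute almost nothing. The sketch below shows how such kernel weights could enter a single weighted rigid-alignment step (weighted Kabsch/SVD) between already matched point pairs. It illustrates the general idea of correntropy-weighted registration only; it is not the CoSM-ICP algorithm from the thesis, and all parameter values are assumptions.

        import numpy as np

        def correntropy_weights(P, Q, sigma):
            """Gaussian correntropy kernel of the per-pair residuals: large residuals
            (e.g. shot/impulse noise) receive near-zero weight."""
            r2 = np.sum((P - Q) ** 2, axis=1)
            return np.exp(-r2 / (2.0 * sigma ** 2))

        def weighted_rigid_align(P, Q, w):
            """Weighted least-squares rotation R and translation t mapping P onto Q."""
            w = w / w.sum()
            mu_p, mu_q = w @ P, w @ Q
            H = (P - mu_p).T @ ((Q - mu_q) * w[:, None])
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflections
            R = Vt.T @ D @ U.T
            return R, mu_q - R @ mu_p

        # Usage: recover a known rotation despite a handful of corrupted correspondences.
        rng = np.random.default_rng(1)
        P = rng.normal(size=(100, 3))
        c, s = np.cos(0.3), np.sin(0.3)
        R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        Q = P @ R_true.T + np.array([0.1, -0.2, 0.05])
        Q[:10] += rng.normal(scale=2.0, size=(10, 3))   # impulse-like outliers
        R, t = weighted_rigid_align(P, Q, correntropy_weights(P, Q, sigma=0.5))
        print("rotation error:", np.linalg.norm(R - R_true))  # stays small: outliers are down-weighted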

    Realtime Face Tracking and Animation

    Get PDF
    Capturing and processing human geometry, appearance, and motion is at the core of computer graphics, computer vision, and human-computer interaction. The high complexity of human geometry and motion dynamics, and the high sensitivity of the human visual system to variations and subtleties in faces and bodies, make the 3D acquisition and reconstruction of humans in motion a challenging task. Digital humans are often created through a combination of 3D scanning, appearance acquisition, and motion capture, leading to stunning results in recent feature films. However, these methods typically require complex acquisition systems and substantial manual post-processing. As a result, creating and animating high-quality digital avatars entails long turn-around times and substantial production costs. Recent technological advances in RGB-D devices, such as Microsoft Kinect, brought new hope for realtime, portable, and affordable systems that can capture facial expressions as well as hand and body motions. RGB-D devices typically capture an image and a depth map, which makes it possible to formulate the motion tracking problem as a 2D/3D non-rigid registration of a deformable model to the input data. We introduce a novel face tracking algorithm that combines geometry and texture registration with pre-recorded animation priors in a single optimization. This led to unprecedented face tracking quality on a low-cost, consumer-level device. The main drawback of this approach in the context of consumer applications is the need for offline user-specific training: robust and efficient tracking is achieved by building an accurate 3D expression model of the user's face, which is scanned in a predefined set of facial expressions. We extended this approach to remove the need for user-specific training or calibration, or any other form of manual assistance, by building a user-specific dynamic 3D face model online. To complement the realtime face tracking and modeling algorithm, we developed a novel system for animation retargeting that allows learning a high-quality mapping between motion capture data and arbitrary target characters. We addressed one of the main challenges of existing example-based retargeting methods, namely the need for a large number of accurate training examples to define the correspondence between source and target expression spaces. We showed that this number can be significantly reduced by leveraging the information contained in unlabeled data, i.e., facial expressions in the source or target space without corresponding poses. Finally, we present a novel realtime physics-based animation technique allowing the simulation of a wide range of deformable materials such as fat, flesh, hair, or muscles. This approach could be used to produce more lifelike animations by enhancing the animated avatars with secondary effects. We believe that the realtime face tracking and animation pipeline presented in this thesis has the potential to inspire numerous future research efforts in the area of computer-generated animation. Already, several ideas presented in this thesis have been successfully used in industry, and this work gave birth to the startup company faceshift AG.
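
    The abstract casts realtime face tracking as fitting a deformable (blendshape-style) face model to the observed data. As a rough illustration of the linear core of such a fit, the sketch below solves for blendshape weights that best explain a set of target 3D vertex positions in a regularised least-squares sense. All arrays are synthetic, and the actual tracker described in the thesis also includes texture terms, animation priors, and constraints on the weights, which are omitted here.

        import numpy as np

        def fit_blendshape_weights(neutral, blendshapes, target, reg=1e-3):
            """Solve min_w ||neutral + B w - target||^2 + reg ||w||^2.

            neutral:     (V, 3) rest-pose vertices
            blendshapes: (K, V, 3) per-expression vertex offsets from the neutral pose
            target:      (V, 3) observed vertex positions (e.g. from a depth scan)
            """
            K = len(blendshapes)
            B = blendshapes.reshape(K, -1).T        # (3V, K) linear basis
            r = (target - neutral).reshape(-1)      # (3V,) residual to explain
            return np.linalg.solve(B.T @ B + reg * np.eye(K), B.T @ r)

        # Usage with a tiny synthetic face: 4 vertices, 2 expression blendshapes.
        neutral = np.zeros((4, 3))
        blendshapes = np.array([np.eye(4)[:, :3],        # offsets of "expression 0"
                                np.full((4, 3), 0.1)])   # offsets of "expression 1"
        true_w = np.array([0.8, 0.3])
        target = neutral + np.tensordot(true_w, blendshapes, axes=1)
        print(fit_blendshape_weights(neutral, blendshapes, target))  # approximately [0.8, 0.3]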