7 research outputs found

    Data-driven methods for interactive visual content creation and manipulation

    Software tools for creating and manipulating visual content --- be they for images, video or 3D models --- are often difficult to use and involve a great deal of manual interaction at several stages of the process. Coupled with long processing and acquisition times, content production is costly and poses a potential barrier to many applications. Although cameras now allow anyone to easily capture photos and video, tools for manipulating such media demand both artistic talent and technical expertise. At the same time, vast corpora of existing visual content such as Flickr, YouTube or Google 3D Warehouse are now available and easily accessible. This thesis proposes a data-driven approach to tackle the above-mentioned problems in content generation. To this end, statistical models are trained on semantic knowledge harvested from existing visual content corpora. Using these models, we then develop tools that are easy to learn and use, even by novice users, yet still produce high-quality content. These tools have intuitive interfaces and give the user precise and flexible control. Specifically, we apply our models to create tools that simplify the tasks of video manipulation, 3D modeling and material assignment to 3D objects.
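
    The abstract leaves the form of the statistical models open. As a purely illustrative sketch (all labels and corpus data below are hypothetical), even a simple frequency prior over part-material co-occurrences harvested from an annotated model corpus could drive a suggestion tool of the kind described for material assignment:

        from collections import Counter, defaultdict

        # Toy corpus of annotated 3D models: each entry maps a part label to the
        # material observed on that part in an existing model (hypothetical data).
        corpus = [
            {"chair.seat": "fabric", "chair.legs": "wood"},
            {"chair.seat": "leather", "chair.legs": "metal"},
            {"chair.seat": "fabric", "chair.legs": "wood"},
        ]

        # Harvest a simple statistical prior: P(material | part label).
        counts = defaultdict(Counter)
        for model in corpus:
            for part, material in model.items():
                counts[part][material] += 1

        def suggest_materials(part, k=3):
            """Rank candidate materials for a part by corpus frequency."""
            total = sum(counts[part].values())
            return [(m, c / total) for m, c in counts[part].most_common(k)]

        print(suggest_materials("chair.seat"))  # [('fabric', 0.666...), ('leather', 0.333...)]

    A real system would use far richer semantic features than raw label counts, but the ranking interface a novice user sees stays the same.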

    3D facial performance capture from monocular RGB video.

    3D facial performance capture is an essential technique for animation production in feature films, video gaming, human-computer interaction, VR/AR asset creation and digital heritage, all of which have a huge impact on our daily life. Traditionally, dedicated hardware such as depth sensors, laser scanners and camera arrays has been developed to acquire the required depth information. However, such sophisticated instruments can only be operated by trained professionals. In recent years, the widespread availability of mobile devices, and the increased interest of casual untrained users in applications such as image and video editing and virtual facial model creation, have sparked interest in 3D facial reconstruction from 2D RGB input. Owing to depth ambiguity and facial appearance variation, 3D facial performance capture and modelling from 2D images are inherently ill-posed problems. However, with strong prior knowledge of the human face, it is possible to accurately infer the true 3D facial shape and performance from multiple observations captured from different viewing angles. Various 3D-from-2D methods have been proposed and proven to work well in controlled environments, yet many issues remain unexplored in uncontrolled, in-the-wild environments. To achieve the same level of performance as in controlled environments, new techniques are required to handle interfering factors such as varying illumination, partial occlusion and facial variation not captured by prior knowledge. This thesis addresses these challenges and proposes novel methods for 2D landmark detection, 3D facial reconstruction and 3D performance tracking, which are validated through theoretical research and experimental studies. 3D facial performance tracking is a multidisciplinary problem spanning computer vision, computer graphics and machine learning. To deal with the large variations within a single image, we present new machine learning techniques for facial landmark detection, based on our observations of facial features in challenging scenarios, to increase robustness. To take advantage of the evidence aggregated from multiple observations, we present new robust and efficient optimisation techniques that impose consistency constraints to help filter out outliers. To exploit person-specific model generation and the temporal and spatial coherence of continuous video input, we present new methods that improve performance via optimisation. For 3D facial performance tracking, the fundamental prerequisite for good results is an accurate underlying 3D model of the actor; we therefore present new methods targeted at 3D facial geometry reconstruction that are more efficient than existing generic 3D geometry reconstruction methods. Substantial experiments show that the proposed methods outperform state-of-the-art methods and generate high-quality results under fewer constraints.
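
    The abstract does not spell out the optimisation itself. The following sketch (with synthetic shapes, bases and landmarks, so every quantity is hypothetical rather than the thesis's actual pipeline) illustrates the general flavour of aggregating evidence from multiple observations: a linear face model is fitted to 2D landmarks from several frames jointly, so a single set of identity coefficients must be consistent with every viewpoint:

        import numpy as np

        # Illustrative sketch of multi-frame model fitting: a linear face model
        # S(w) = mu + B @ w is fitted to 2D landmarks from several frames at once,
        # so the identity coefficients w stay consistent across viewpoints.
        rng = np.random.default_rng(0)
        n_pts, n_basis, n_frames = 68, 10, 5
        mu = rng.normal(size=3 * n_pts)            # mean shape (hypothetical)
        B = rng.normal(size=(3 * n_pts, n_basis))  # shape basis (hypothetical)

        def project(shape3d, R):
            """Orthographic projection of a flattened 3N shape under rotation R."""
            return (R[:2] @ shape3d.reshape(-1, 3).T).T.ravel()

        # Known per-frame rotations and observed 2D landmarks (synthesized here).
        w_true = rng.normal(size=n_basis)
        frames = []
        for _ in range(n_frames):
            theta = rng.uniform(-0.5, 0.5)         # rotation about the y-axis
            R = np.array([[np.cos(theta), 0, np.sin(theta)],
                          [0, 1, 0],
                          [-np.sin(theta), 0, np.cos(theta)]])
            obs = project(mu + B @ w_true, R) + rng.normal(scale=0.01, size=2 * n_pts)
            frames.append((R, obs))

        # Stack all frames into one regularized linear least-squares problem:
        # each frame contributes  P_f B w ~= obs_f - P_f mu  with P_f the projection.
        A_rows, b_rows = [], []
        for R, obs in frames:
            P = np.kron(np.eye(n_pts), R[:2])      # projects the flattened shape
            A_rows.append(P @ B)
            b_rows.append(obs - P @ mu)
        A = np.vstack(A_rows)
        b = np.concatenate(b_rows)
        lam = 1e-3                                  # Tikhonov prior on coefficients
        w = np.linalg.solve(A.T @ A + lam * np.eye(n_basis), A.T @ b)
        print("coefficient error:", np.linalg.norm(w - w_true))

    With several frames the normal equations are well conditioned even though a single view leaves depth ambiguous, which is exactly the benefit of imposing consistency across observations.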

    Handbook of Digital Face Manipulation and Detection

    This open access book provides the first comprehensive collection of studies on the timely topic of digital face manipulation, including DeepFakes, Face Morphing and Reenactment. It combines the research fields of biometrics and media forensics, with contributions from academia and industry. Appealing to a broad readership, introductory chapters give an accessible overview of the topic for readers wishing to survey the state of the art, while subsequent chapters, which delve deeper into the various research challenges, are oriented towards advanced readers. The book also provides a good starting point for young researchers and a reference guide to further literature. Its primary readership is academic institutions and companies currently involved in digital face manipulation and detection, and it could easily serve as a recommended text for courses in image processing, machine learning, media forensics, biometrics and general security.

    Gaze-Based Human-Robot Interaction by the Brunswick Model

    We present a new paradigm for human-robot interaction based on social signal processing, and in particular on the Brunswick model. Originally, the Brunswick model describes face-to-face dyadic interaction, assuming that the interactants communicate through a continuous exchange of non-verbal social signals in addition to spoken messages. Social signals have to be interpreted through a proper recognition phase that considers visual and audio information. The Brunswick model makes it possible to quantitatively evaluate the quality of the interaction, using statistical tools that measure how effective the recognition phase is. In this paper we recast this theory for the case in which one of the interactants is a robot; the recognition phases performed by the robot and by the human then have to be revised with respect to the original model. The model is applied to Berrick, a recent open-source, low-cost robotic head platform, with gaze as the social signal under consideration.
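
    The statistical tools are not detailed in the abstract; in lens-model analyses of this kind they are typically correlation coefficients relating the true (distal) state, the externalized cues and the observer's judgments. A minimal sketch with synthetic data (all signals and coefficients below are hypothetical):

        import numpy as np

        # Illustrative lens-model bookkeeping: the quality of the interaction is
        # scored by how well the robot's judgments track the human's true state.
        rng = np.random.default_rng(1)
        n = 200
        true_state = rng.normal(size=n)                        # distal state, e.g. attention target
        cue = 0.8 * true_state + rng.normal(scale=0.5, size=n)  # gaze cue externalizing it
        judgment = 0.7 * cue + rng.normal(scale=0.5, size=n)    # robot's recognition output

        def r(a, b):
            """Pearson correlation between two signal vectors."""
            return float(np.corrcoef(a, b)[0, 1])

        print("ecological validity (state vs. cue):        ", r(true_state, cue))
        print("cue utilization (cue vs. judgment):         ", r(cue, judgment))
        print("functional achievement (state vs. judgment):", r(true_state, judgment))

    Separating the two correlations makes the diagnosis actionable: low ecological validity means the gaze cue itself is uninformative, while low cue utilization points at the robot's recognition phase.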