Volumetric intelligence: A framework for the creation of interactive volumetric captured characters

Abstract

Virtual simulation of human faces and facial movements has challenged media artists and computer scientists since Fred Parke produced the first realistic 3D renderings of a human face in 1972. Today, a range of software and techniques is available for modelling virtual characters and their facial behavior in immersive environments, such as computer games or storyworlds. However, applying these techniques often requires large teams with multidisciplinary expertise, extensive manual labor, and budgets that are not typically available to individual media artists. This thesis work demonstrates how an individual artist may create humanlike virtual characters, specifically their facial behavior, in a relatively fast and automated manner. The method is based on volumetric capture, or photogrammetry, of a set of facial expressions from a real person using a multi-camera setup, followed by the application of accessible, open-source 3D reconstruction and retopology techniques and software. Furthermore, the study discusses how contemporary game engines and applications can be used to build settings that allow real-time interaction between the user and virtual characters. The thesis documents an innovative framework for the creation of a virtual character captured from a real person that can be presented and driven in real time, without the need for a specialized team, a high budget, or intensive manual labor. This workflow is suitable for research groups, independent teams, and individuals seeking to create immersive, real-time experiences and experiments using virtual humanlike characters.