A new automated workflow for 3D character creation based on 3D scanned data
In this paper we present a new workflow allowing the creation of 3D characters in an automated way that does not require the expertise of an animator. This workflow is based on the acquisition of real human data captured by 3D body scanners, which is then processed to generate firstly animatable body meshes, secondly skinned body meshes and finally textured 3D garments.
A methodology for feature based 3D face modelling from photographs
In this paper, a new approach to modelling 3D faces based on 2D images is introduced. Here 3D faces are created using two photographs from which we extract facial features based on image manipulation techniques. Through these techniques we extract the crucial feature lines of the face in two views. These are then used to modify a template base mesh created in 3D. This base mesh, which has been designed with facial animation in mind, is then subdivided to provide the level of detail required. The methodology, as it stands, is semi-automatic; our goal is to automate this process in order to provide an inexpensive and expedient way of producing realistic face models intended for animation purposes. Thus, we show how image manipulation techniques can be used to create binary images, which can in turn be used to manipulate a base mesh so that it adapts to a given facial geometry. To explain our approach more clearly, we discuss a series of examples in which we create the 3D facial geometry of individuals from the corresponding image data.
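The binary-image step described above can be illustrated with a minimal sketch. The paper does not specify its exact image manipulation operators, so the simple intensity threshold below is only an illustrative assumption of how dark feature lines (e.g. eye and lip contours) might be separated from the rest of a grayscale photograph:

```python
import numpy as np

def to_binary(image: np.ndarray, threshold: float) -> np.ndarray:
    """Threshold a grayscale image (values in [0, 1]) to a binary mask.

    Pixels darker than `threshold` are marked 1 (candidate feature-line
    pixels), all others 0.
    """
    return (image < threshold).astype(np.uint8)

# Toy 4x4 "photograph": the dark pixels in row 1 trace a feature line.
img = np.array([
    [0.9, 0.9, 0.9, 0.9],
    [0.1, 0.2, 0.1, 0.9],
    [0.9, 0.9, 0.9, 0.9],
    [0.9, 0.9, 0.9, 0.9],
])
mask = to_binary(img, threshold=0.5)
print(mask[1])  # -> [1 1 1 0]
```

In a full pipeline, the 2D coordinates of the masked pixels in the two views would then drive the deformation of the template base mesh.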
Applications of Face Analysis and Modeling in Media Production
Facial expressions play an important role in day-to-day communication as well as in media production. This article surveys automatic facial analysis and modeling methods using computer vision techniques and their applications for media production. The authors give a brief overview of the psychology of face perception and then describe some of the applications of computer vision and pattern recognition applied to face recognition in media production. This article also covers the automatic generation of face models, which are used in movie and TV productions for special effects in order to manipulate people's faces or combine real actors with computer graphics.
ICface: Interpretable and Controllable Face Reenactment Using GANs
This paper presents a generic face animator that is able to control the pose and expressions of a given face image. The animation is driven by human-interpretable control signals consisting of head pose angles and Action Unit (AU) values. The control information can be obtained from multiple sources, including external driving videos and manual controls. Due to the interpretable nature of the driving signal, one can easily mix information between multiple sources (e.g. pose from one image and expression from another) and apply selective post-production editing. The proposed face animator is implemented as a two-stage neural network model that is learned in a self-supervised manner using a large video collection. The proposed Interpretable and Controllable face reenactment network (ICface) is compared to state-of-the-art neural network-based face animation techniques in multiple tasks. The results indicate that ICface produces better visual quality while being more versatile than most of the comparison methods. The introduced model could provide a lightweight and easy-to-use tool for a multitude of advanced image and video editing tasks.
Comment: Accepted in WACV-202
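The source-mixing idea in the abstract (pose from one driving signal, expression from another) can be sketched directly on the control vector. The vector layout below, with 3 head-pose angles followed by 17 AU activations, is an illustrative assumption rather than ICface's actual parameterisation:

```python
import numpy as np

# Hypothetical control-vector layout: 3 head-pose angles followed by
# 17 Action Unit activations. The exact dimensions used by ICface are
# an assumption here, chosen only for illustration.
POSE_DIMS, AU_DIMS = 3, 17

def mix_controls(pose_source: np.ndarray, expr_source: np.ndarray) -> np.ndarray:
    """Take the head pose from one driving signal and the AUs from another."""
    assert pose_source.shape == expr_source.shape == (POSE_DIMS + AU_DIMS,)
    return np.concatenate([pose_source[:POSE_DIMS], expr_source[POSE_DIMS:]])

drive_a = np.zeros(POSE_DIMS + AU_DIMS)  # frontal pose, neutral expression
drive_b = np.ones(POSE_DIMS + AU_DIMS)   # turned pose, active AUs
mixed = mix_controls(drive_a, drive_b)   # pose of A, expression of B
```

Because each control dimension has a semantic meaning, any slice of the vector can also be edited by hand before it is fed to the animator, which is what enables the selective post-production editing the abstract mentions.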
Highly automated method for facial expression synthesis
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. The synthesis of realistic facial expressions has long been a challenging area for computer graphics scientists. Over the last three decades, several different construction methods have been formulated in order to obtain natural graphic results. Despite these advances, current techniques still require costly resources, heavy user intervention and specific training, and outcomes are still not completely realistic. This thesis, therefore, aims to achieve an automated synthesis that produces realistic facial expressions at a low cost.
This thesis proposes a highly automated approach to realistic facial expression synthesis, which allows for enhanced performance in speed (at most three minutes of processing time) and quality with a minimum of user intervention. It also demonstrates a highly automated method of facial feature detection, allowing users to obtain their desired facial expression synthesis with minimal manual input. Moreover, it describes a novel approach to normalising the illumination settings between source and target images, thereby allowing the algorithm to work accurately even under different lighting conditions.
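A common way to normalise illumination between two images is to match their global intensity statistics. The thesis does not spell out its exact normalisation formula, so the mean/standard-deviation matching below is only an assumed, minimal sketch of the idea:

```python
import numpy as np

def match_illumination(source: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Shift and scale the source intensities so that their mean and
    standard deviation match the target's. This global statistic
    matching is an illustrative assumption, not necessarily the
    thesis's exact method."""
    s_mean, s_std = source.mean(), source.std()
    t_mean, t_std = target.mean(), target.std()
    return (source - s_mean) / (s_std + 1e-8) * t_std + t_mean

src = np.array([0.2, 0.4, 0.6])   # dimly lit source face intensities
tgt = np.array([0.5, 0.7, 0.9])   # brighter target lighting
out = match_illumination(src, tgt)
```

After this step the source intensities share the target's overall brightness and contrast, so feature detection and blending behave consistently across the two images.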
Finally, we present the results obtained from the proposed techniques, together with our conclusions.
Volumetric intelligence: A framework for the creation of interactive volumetric captured characters
Virtual simulation of human faces and facial movements has challenged media artists and computer scientists since the first realistic 3D renderings of a human face by Fred Parke in 1972. Today, a range of software and techniques are available for modelling virtual characters and their facial behavior in immersive environments, such as computer games or storyworlds. However, applying these techniques often requires large teams with multidisciplinary expertise, extensive amounts of manual labor, as well as financial resources that are typically not available to individual media artists.
This thesis work demonstrates how an individual artist may create humanlike virtual characters, specifically their facial behavior, in a relatively fast and automated manner. The method is based on volumetric capturing, or photogrammetry, of a set of facial expressions from a real person using a multi-camera setup, and further applying open-source and accessible 3D reconstruction and re-topology techniques and software. Furthermore, the study discusses possibilities of utilizing contemporary game engines and applications for building settings that allow real-time interaction between the user and virtual characters.
The thesis documents an innovative framework for the creation of a virtual character captured from a real person, which can be presented and driven in real time without the need for a specialized team, a high budget or intensive manual labor. This workflow is suitable for research groups, independent teams and individuals seeking to create immersive, real-time experiences and experiments using virtual humanlike characters.
Requirements for Topology in 3D GIS
Topology and its various benefits are well understood within the context of 2D Geographical Information Systems. However, requirements in three-dimensional (3D) applications have yet to be defined, with factors such as lack of users' familiarity with the potential of such systems impeding this process. In this paper, we identify and review a number of requirements for topology in 3D applications. The review utilises existing topological frameworks and data models as a starting point. Three key areas were studied for the purposes of requirements identification, namely existing 2D topological systems, requirements for visualisation in 3D and requirements for 3D analysis supported by topology. This was followed by analysis of application areas such as earth sciences and urban modelling which are traditionally associated with GIS, as well as others including medical, biological and chemical science. Requirements for topological functionality in 3D were then grouped and categorised. The paper concludes by suggesting that these requirements can be used as a basis for the implementation of topology in 3D. It is the aim of this review to serve as a focus for further discussion and identification of additional applications that would benefit from 3D topology. © 2006 The Authors. Journal compilation © 2006 Blackwell Publishing Ltd
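One of the 3D analysis requirements the review discusses is topological querying, such as determining which volumetric objects are adjacent. The paper defines requirements rather than an implementation, so the toy model below, where cells are represented by their bounding face identifiers and adjacency means a shared face, is purely an illustrative assumption:

```python
from itertools import combinations

# Toy 3D model: each cell (e.g. a room in a building model) is the set
# of identifiers of its bounding faces. The names and face ids are
# hypothetical, for illustration only.
cells = {
    "room_A": {"f1", "f2", "f3"},
    "room_B": {"f3", "f4", "f5"},   # shares face f3 with room_A
    "room_C": {"f6", "f7"},         # isolated
}

def adjacent(a: str, b: str) -> bool:
    """Two cells are adjacent if they share at least one bounding face."""
    return bool(cells[a] & cells[b])

pairs = [(a, b) for a, b in combinations(cells, 2) if adjacent(a, b)]
print(pairs)  # -> [('room_A', 'room_B')]
```

Storing shared faces explicitly, rather than recomputing geometric intersections, is precisely the kind of benefit that motivates carrying topology from 2D GIS into 3D.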