Sketching-out virtual humans: From 2d storyboarding to immediate 3d character animation
Virtual beings play a remarkable role in today’s public entertainment, yet ordinary users remain mere audiences due to a lack of appropriate expertise, equipment, and computer skills. In this paper, we present a fast and intuitive storyboarding interface that enables users to sketch out 3D virtual humans, 2D/3D animations, and character intercommunication. We devised an intuitive “stick figure → fleshing-out → skin mapping” graphical animation pipeline, which realises the whole process of key framing, 3D pose reconstruction, virtual human modelling, motion path/timing control, and final animation synthesis through almost pure 2D sketching. A “creative model-based method” is developed, which emulates a human perception process, to generate 3D human bodies of various sizes, shapes, and fat distributions. Meanwhile, our current system also supports sketch-based crowd animation and the storyboarding of 3D multi-character intercommunication. This system has been formally tested by various users on Tablet PCs. After minimal training, even a beginner can create vivid virtual humans and animate them within minutes.
Sketching-out virtual humans: A smart interface for human modelling and animation
In this paper, we present a fast and intuitive interface for sketching out
3D virtual humans and animation. The user draws stick figure key frames first and
chooses one for “fleshing-out” with freehand body contours. The system
automatically constructs a plausible 3D skin surface from the rendered figure, and
maps it onto the posed stick figures to produce the 3D character animation. A
“creative model-based method” is developed, which emulates a human perception
process to generate 3D human bodies of various body sizes, shapes, and fat
distributions. In this approach, an anatomical 3D generic model has been created with
three distinct layers: skeleton, fat tissue, and skin. It can be transformed sequentially
through rigid morphing, fatness morphing, and surface fitting to match the original
2D sketch. An auto-beautification function is also offered to regularise the 3D
asymmetrical bodies from users’ imperfect figure sketches. Our current system
delivers character animation in various forms, including articulated figure animation,
3D mesh model animation, 2D contour figure animation, and even 2D NPR animation
with personalised drawing styles. The system has been formally tested by various
users on Tablet PCs. After minimal training, even a beginner can create vivid
virtual humans and animate them within minutes.
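The three-stage transformation described above (rigid morphing to match the posed stick figure, fatness morphing for body shape, and surface fitting to the sketched contour) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: all function names are assumptions, and the damped nearest-point contour snap stands in for the paper's actual surface-fitting step.

```python
import numpy as np

def rigid_morph(vertices, bone_transforms, weights):
    """Pose the generic mesh by blending per-bone rigid transforms
    (linear blend skinning), matching the sketched stick figure."""
    homo = np.hstack([vertices, np.ones((len(vertices), 1))])
    posed = np.zeros_like(vertices)
    for b, T in enumerate(bone_transforms):          # T: 4x4 matrix
        posed += weights[:, b:b + 1] * (homo @ T.T)[:, :3]
    return posed

def fatness_morph(vertices, normals, fat_map, fatness):
    """Inflate the skin along vertex normals, weighted by a per-vertex
    fat-tissue map, to vary body shape and fat distribution."""
    return vertices + fatness * fat_map[:, None] * normals

def surface_fit(vertices, contour_2d, project):
    """Pull projected vertices toward the user's 2D contour (here a
    damped nearest-point snap; the paper's fitting is more involved)."""
    fitted = vertices.copy()
    for i, v in enumerate(vertices):
        p = project(v)                               # 3D -> 2D projection
        j = np.argmin(np.linalg.norm(contour_2d - p, axis=1))
        fitted[i, :2] += 0.1 * (contour_2d[j] - p)   # damped correction
    return fitted
```

The three stages compose in order: pose first, then shape, then fit, mirroring the sequential transformation of the layered generic model described in the abstract.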
CNN-based Real-time Dense Face Reconstruction with Inverse-rendered Photo-realistic Face Images
With the power of convolutional neural networks (CNNs), CNN-based face
reconstruction has recently shown promising performance in reconstructing
detailed face shape from 2D face images. The success of CNN-based methods
relies on a large amount of labeled data. The state of the art synthesizes such
data using a coarse morphable face model, which, however, has difficulty
generating detailed photo-realistic face images (e.g., with wrinkles). This paper
presents a novel face data generation method. Specifically, we render a large
number of photo-realistic face images with different attributes based on
inverse rendering. Furthermore, we construct a fine-detailed face image dataset
by transferring different scales of details from one image to another. We also
construct a large number of video-type adjacent frame pairs by simulating the
distribution of real video data. With these nicely constructed datasets, we
propose a coarse-to-fine learning framework consisting of three convolutional
networks. The networks are trained for real-time detailed 3D face
reconstruction from monocular video as well as from a single image. Extensive
experimental results demonstrate that our framework can produce high-quality
reconstruction but with much less computation time compared to the
state-of-the-art. Moreover, our method is robust to pose, expression and
lighting due to the diversity of the data.
Comment: Accepted by IEEE Transactions on Pattern Analysis and Machine
Intelligence, 201
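The detail-transfer step used to construct the fine-detailed face image dataset can be illustrated in miniature: split each image into a coarse base and a fine detail layer, then recombine one image's details with another's base. This sketch assumes a simple separable box-blur band split; the paper's actual decomposition and its multiple detail scales are presumably more sophisticated.

```python
import numpy as np

def blur(img, k=5):
    """Separable box blur (a stand-in for a Gaussian), used to split
    an image into a coarse base layer; img - blur(img) is the detail."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    # horizontal running average, then vertical
    h = np.mean([p[:, i:i + img.shape[1]] for i in range(k)], axis=0)
    return np.mean([h[i:i + img.shape[0], :] for i in range(k)], axis=0)

def transfer_details(src, dst, k=5):
    """Move the fine-scale detail of `src` onto the coarse base of
    `dst`: dst_base + (src - src_base)."""
    return blur(dst, k) + (src - blur(src, k))
```

Applied to pairs of face images (or depth maps), this kind of recombination multiplies the number of distinct fine-detail training samples without new captures.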
Computer-assisted animation creation techniques for hair animation and shade, highlight, and shadow
Degree-conferral system: new; report number: Kō No. 3062; degree type: Doctor of Engineering; date conferred: 2010/2/25; Waseda University degree record number: Shin 532
A 3D Pipeline for 2D Pixel Art Animation
This document presents a comprehensive report on a project aimed at developing an automated process for creating 2D animations from 3D models using Blender. The project's main goal is to improve upon existing techniques and reduce the need for artists to perform repetitive tasks in the animation production process. The project involves the design and development of a plugin for Blender, coded in Python, which was developed to be efficient and to reduce the time-intensive tasks that usually characterise some stages of the animation process.
The plugin supports three specific styles of animation: pixel art, cel shading, and cel shading with outlines, and can be expanded to support a wider range of styles. The plugin is also open-source, allowing for greater collaboration and potential contributions from the community. Despite the challenges faced, the project was successful in achieving its goals, and the results show that the plugin can achieve results comparable to those obtained with similar tools and traditional animation. Future work includes keeping the plugin up to date with the latest versions of Blender, publishing it on GitHub and Blender plugin marketplaces, and adding new art styles.
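The pixel-art style such a pipeline produces boils down to two post-processing operations on each rendered frame: nearest-neighbour downscaling and snapping colours to a fixed palette. The sketch below is a framework-free illustration of that idea; the function and its parameters are assumptions for illustration, not code from the actual add-on (which operates inside Blender's render pipeline).

```python
import numpy as np

def pixelate(frame, factor=4, palette=None):
    """Down-sample a rendered RGB frame with nearest-neighbour sampling,
    optionally snap each pixel to a fixed palette, then scale back up
    without smoothing -- the operations that give a pixel-art look."""
    small = frame[::factor, ::factor]                # nearest-neighbour shrink
    if palette is not None:
        # snap each pixel to its closest palette colour (Euclidean RGB)
        d = np.linalg.norm(small[:, :, None, :] - palette[None, None],
                           axis=-1)
        small = palette[np.argmin(d, axis=-1)]
    # enlarge by pixel repetition to keep hard, blocky edges
    return np.repeat(np.repeat(small, factor, 0), factor, 1)
```

Keeping the upscale strictly integer-factor and unsmoothed is what preserves the characteristic hard pixel grid; any bilinear filtering at this stage would destroy the style.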
A study of how Chinese ink painting features can be applied to 3D scenes and models in real-time rendering
Past research has produced mature techniques for non-photorealistic rendering. However, the literature offers little on efficient methods for simulating Chinese ink painting features when rendering 3D scenes. Given the worldwide recognition Chinese ink painting has achieved, the potential to develop 3D animations and games in this style effectively and automatically points to a need for appropriate technology for the future market.
The goal of this research is to render 3D meshes in a Chinese ink painting style that is both appealing and realistic: specifically, how the output image can be made to resemble a hand-drawn Chinese ink painting, and how efficient the rendering pipeline must be to produce a real-time scene.
For this study, the researcher designed two rendering pipelines, one for static objects and one for moving objects in the final scene. The entire rendering process includes interior shading, silhouette extraction, texture integration, and background rendering. The methodology involved silhouette detection, multiple rendering passes, Gaussian blur for anti-aliasing, smoothstep functions, and noise textures for simulating ink textures. Based on the output of each rendering pipeline, the rendering process of the scene that best captures the Chinese ink painting style is illustrated in detail.
The speed of the proposed rendering pipeline was tested. The framerate of the final scenes created with this pipeline was higher than 30 fps, a level considered real-time. One can conclude that the main objective of the research study was met, even though other methods for generating Chinese ink painting rendering are available and should be explored.
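The smoothstep-based tone shaping, noise modulation, and silhouette detection mentioned above can be sketched as follows. These are generic CPU stand-ins written for clarity, not the author's actual shaders: the tone thresholds and noise weights are illustrative assumptions, and the depth-gradient edge test stands in for the multi-pass silhouette extraction.

```python
import numpy as np

def smoothstep(e0, e1, x):
    """Hermite interpolation between edges e0 and e1, as in GLSL."""
    t = np.clip((x - e0) / (e1 - e0), 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)

def ink_shade(n_dot_l, noise):
    """Quantise diffuse lighting into a few soft ink tones with
    smoothstep, then modulate by a noise texture so flat regions pick
    up the granularity of brushed ink (weights are illustrative)."""
    tone = (0.3 * smoothstep(0.0, 0.3, n_dot_l)
            + 0.7 * smoothstep(0.4, 0.8, n_dot_l))
    return np.clip(tone * (0.85 + 0.3 * noise), 0.0, 1.0)

def silhouette_mask(depth, threshold=0.1):
    """Mark silhouette pixels where screen-space depth changes sharply
    (a simple edge test standing in for multi-pass extraction)."""
    gx = np.abs(np.diff(depth, axis=1, prepend=depth[:, :1]))
    gy = np.abs(np.diff(depth, axis=0, prepend=depth[:1, :]))
    return (gx + gy) > threshold
```

In a real-time implementation these operations would run as fragment-shader passes; the soft smoothstep edges are what distinguish diffuse ink washes from the hard bands of ordinary cel shading.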