
    A new automated workflow for 3D character creation based on 3D scanned data

    In this paper we present a new workflow that enables the creation of 3D characters in an automated way, without requiring the expertise of an animator. This workflow is based on the acquisition of real human data captured by 3D body scanners, which is then processed to generate, first, animatable body meshes; second, skinned body meshes; and finally, textured 3D garments.
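    The abstract does not name the skinning model used for the skinned body meshes; as a point of reference, below is a minimal sketch of linear blend skinning, the standard baseline technique, with all array names illustrative rather than taken from the paper:

```python
# Minimal linear blend skinning (LBS) sketch in NumPy. The paper does not
# specify its skinning model; LBS is shown as the standard baseline only.
import numpy as np

def linear_blend_skinning(vertices, weights, bone_transforms):
    """Deform rest-pose vertices by a weighted blend of bone transforms.

    vertices:        (V, 3) rest-pose positions
    weights:         (V, B) per-vertex bone weights, rows summing to 1
    bone_transforms: (B, 4, 4) matrices mapping rest pose to current pose
    """
    V = vertices.shape[0]
    homo = np.hstack([vertices, np.ones((V, 1))])            # (V, 4) homogeneous
    # Position of every vertex under every bone transform: (B, V, 4)
    per_bone = np.einsum('bij,vj->bvi', bone_transforms, homo)
    # Blend the per-bone positions by the skinning weights: (V, 4)
    blended = np.einsum('vb,bvi->vi', weights, per_bone)
    return blended[:, :3]
```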

    3D performance capture for facial animation

    This work describes how a photogrammetry-based 3D capture system can be used as an input device for animation. The 3D Dynamic Capture System is used to capture the motion of a human face, which is extracted from a sequence of 3D models captured at TV frame rate. Initially, the positions of a set of landmarks on the face are extracted. These landmarks are then used to provide motion data in two different ways. First, a high-level description of the movements is extracted, which can be used as input to a procedural animation package (i.e. CreaToon). Second, the landmarks can be used as registration points for a conformation process in which the model to be animated is modified to match the captured model. This approach gives a new sequence of models that have the structure of the drawn model but the movement of the captured sequence.
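    A common way to realize a conformation step like the one described above is scattered-data interpolation of the landmark displacements. The sketch below assumes a Gaussian radial basis function, which the abstract does not specify; names and the kernel width are illustrative:

```python
# Hedged sketch of landmark-driven conformation: warp a template mesh so its
# landmark points match the captured landmark positions, propagating the
# displacements to all vertices with radial basis functions (RBFs).
import numpy as np

def rbf_conform(template_verts, src_landmarks, dst_landmarks, sigma=0.1):
    """template_verts: (V, 3); src/dst_landmarks: (L, 3) corresponding points."""
    disp = dst_landmarks - src_landmarks                      # (L, 3) offsets
    # Solve for RBF weights so the warp reproduces the landmark displacements.
    d = np.linalg.norm(src_landmarks[:, None] - src_landmarks[None], axis=-1)
    phi = np.exp(-(d / sigma) ** 2)                           # (L, L) kernel
    w = np.linalg.solve(phi + 1e-8 * np.eye(len(phi)), disp)  # (L, 3) weights
    # Evaluate the warp at every template vertex.
    dv = np.linalg.norm(template_verts[:, None] - src_landmarks[None], axis=-1)
    return template_verts + np.exp(-(dv / sigma) ** 2) @ w
```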

    Senescence: An Aging based Character Simulation Framework

    The 'Senescence' framework is a character simulation plug-in for Maya that can be used for rigging and skinning muscle-deformer-based humanoid characters with support for aging. The framework was developed using Python, the Maya Embedded Language, and PyQt. Its main targeted users are the Character Technical Directors, Technical Artists, Riggers, and Animators in the production pipelines of visual effects studios. The characters simulated using 'Senescence' were studied through a survey to understand how well the intended age was perceived by the audience. The survey results could not reject one of our null hypotheses, which means that the difference between the simulated age groups of the character was not perceived well by the participants. However, there was a difference in the perception of the character's simulated age between Animators and Non-Animators. Therefore, the difference in the simulated character's age was perceived by an untrained audience, but the audience was unable to relate it to a specific age group.
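    The abstract does not detail how the age parameter drives the deformation. The standalone sketch below only illustrates the general idea of blending between shape targets sculpted at different ages; the breakpoint ages and function names are assumptions, not the plug-in's API:

```python
# Illustrative age-driven shape blend: interpolate a character mesh between
# targets sculpted at reference ages. The actual plug-in runs inside Maya via
# Python/MEL; this NumPy version only demonstrates the interpolation idea.
import numpy as np

def age_blend(shapes, ages, age):
    """shapes: list of (V, 3) meshes sculpted at the ascending ages in `ages`."""
    ages = np.asarray(ages, dtype=float)
    age = float(np.clip(age, ages[0], ages[-1]))              # stay in range
    i = int(np.searchsorted(ages, age, side='right')) - 1     # lower breakpoint
    i = min(i, len(ages) - 2)
    t = (age - ages[i]) / (ages[i + 1] - ages[i])             # blend factor
    return (1.0 - t) * shapes[i] + t * shapes[i + 1]
```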

    Generating Continual Human Motion in Diverse 3D Scenes

    We introduce a method to synthesize animator-guided human motion across 3D scenes. Given a set of sparse (3 or 4) joint locations (such as the location of a person's hand and two feet) and a seed motion sequence in a 3D scene, our method generates a plausible motion sequence starting from the seed motion while satisfying the constraints imposed by the provided keypoints. We decompose the continual motion synthesis problem into walking along paths and transitioning in and out of the actions specified by the keypoints, which enables the generation of long motions that satisfy scene constraints without explicitly incorporating scene information. Our method is trained only on scene-agnostic mocap data. As a result, our approach is deployable across 3D scenes with various geometries. To achieve plausible continual motion synthesis without drift, our key contribution is to generate motion in a goal-centric canonical coordinate frame in which the next immediate target is situated at the origin. Our model can generate long sequences of diverse actions such as grabbing, sitting, and leaning chained together in arbitrary order, demonstrated on scenes of varying geometry: HPS, Replica, Matterport, ScanNet, and scenes represented using NeRFs. Several experiments demonstrate that our method outperforms existing methods that navigate paths in 3D scenes.
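    The goal-centric canonicalization can be illustrated with a short sketch. The abstract does not state the axis conventions, so the version below is an assumption: a Z-up world where the heading reduces to a yaw rotation, with the goal translated to the origin and the root-to-goal direction aligned to +X:

```python
# Sketch of a goal-centric canonical frame: motion is represented in a frame
# where the next target sits at the origin; conventions here are assumed.
import numpy as np

def _yaw_matrix(yaw):
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])                 # rotation about Z (up)

def to_goal_frame(joints, root_pos, goal_pos):
    """joints: (J, 3) world-space positions; root_pos, goal_pos: (3,)."""
    d = goal_pos - root_pos
    yaw = np.arctan2(d[1], d[0])                       # heading toward the goal
    R = _yaw_matrix(-yaw)
    return (joints - goal_pos) @ R.T                   # goal at origin, goal dir = +X

def from_goal_frame(joints_canonical, root_pos, goal_pos):
    """Inverse map back to world space after the model generates motion."""
    d = goal_pos - root_pos
    yaw = np.arctan2(d[1], d[0])
    return joints_canonical @ _yaw_matrix(yaw).T + goal_pos
```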

    What do Collaborations with the Arts Have to Say About Human-Robot Interaction?

    This is a collection of papers presented at the workshop "What Do Collaborations with the Arts Have to Say About HRI?", held at the 2010 Human-Robot Interaction Conference in Osaka, Japan.

    Computer-assisted animation creation techniques for hair animation and shade, highlight, and shadow

    Degree system: new; report number: Kou 3062; type of degree: Doctor of Engineering; date conferred: 2010/2/25; Waseda University diploma number: Shin 532

    A 3D Pipeline for 2D Pixel Art Animation

    This document presents a comprehensive report on a project aimed at developing an automated process for creating 2D animations from 3D models using Blender. The project's main goal is to improve upon existing techniques and reduce the need for artists to do clerical tasks in the animation production process. The project involves the design and development of a plugin for Blender, coded in Python, which was developed to be efficient and reduce time-intensive tasks that usually characterise some stages in the animation process. The plugin supports three specific styles of animation: pixel art, cel shading, and cel shading with outlines, and can be expanded to support a wider range of styles. The plugin is also open-source, allowing for greater collaboration and potential contributions from the community. Despite the challenges faced, the project was successful in achieving its goals, and the results show that the plugin could achieve results similar to those acquired with similar tools and traditional animation. The future work includes keeping the plugin up-to-date with the latest versions of Blender, publishing it on GitHub and Blender plugin markets, as well as adding new art styles.
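    As a rough illustration (not the project's actual add-on code), the following Blender Python snippet shows the kind of low-resolution, sharp-filter batch render that underlies a pixel-art style; the resolution and output path are placeholder values:

```python
# Batch-render an animation at low resolution with near-zero pixel filtering,
# the basic setup behind a pixel-art look in Blender. Run inside Blender.
import bpy

scene = bpy.context.scene
scene.render.resolution_x = 128            # low target resolution for pixel art
scene.render.resolution_y = 128
scene.render.filter_size = 0.01            # minimal filter width keeps pixels crisp
scene.render.image_settings.file_format = 'PNG'

for frame in range(scene.frame_start, scene.frame_end + 1):
    scene.frame_set(frame)                 # advance the animation to this frame
    scene.render.filepath = f"//render/frame_{frame:04d}.png"
    bpy.ops.render.render(write_still=True)
```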

    A framework for natural animation of digitized models

    We present a novel, versatile, fast, and simple framework to generate high-quality animations of scanned human characters from input motion data. Our method is purely mesh-based and, in contrast to skeleton-based animation, requires only a minimum of manual interaction. The only manual step required to create moving virtual people is the placement of a sparse set of correspondences between triangles of an input mesh and triangles of the mesh to be animated. The proposed algorithm implicitly generates realistic body deformations and can easily transfer motions between human characters of different shape and proportions. It handles different types of input data, e.g. other animated meshes and motion capture files, in just the same way. Finally, and most importantly, it creates animations at interactive frame rates. We feature two working prototype systems that demonstrate that our method can generate lifelike character animations from both marker-based and marker-less optical motion capture data.
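    The per-triangle machinery behind such correspondence-based motion transfer can be sketched as follows, in the spirit of deformation transfer (Sumner and Popović, 2004); the global least-squares step that stitches the per-triangle transforms into a consistent target mesh is omitted here:

```python
# Simplified per-triangle step of mesh-based deformation transfer: compute the
# 3x3 deformation gradient of a source triangle between its rest pose and a
# deformed pose. Corresponding target triangles would then be driven by these
# gradients through a global least-squares solve (not shown).
import numpy as np

def triangle_frame(v0, v1, v2):
    """Local 3x3 frame from two edges plus a scaled normal (non-degenerate)."""
    e1, e2 = v1 - v0, v2 - v0
    n = np.cross(e1, e2)
    n /= np.sqrt(np.linalg.norm(n))        # normal scaling as in Sumner & Popovic
    return np.column_stack([e1, e2, n])

def deformation_gradient(rest_tri, deformed_tri):
    """3x3 affine transform mapping the rest triangle onto the deformed one."""
    Vr = triangle_frame(*rest_tri)
    Vd = triangle_frame(*deformed_tri)
    return Vd @ np.linalg.inv(Vr)
```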