
    Doctor of Philosophy in Computing

    Physics-based animation has proven to be a powerful tool for creating compelling animations for film and games. Most techniques in graphics are based on methods developed for predictive simulation in engineering applications; however, the goals of graphics applications are dramatically different from those of engineering applications. As a result, most physics-based animation tools are difficult for artists to work with, providing little direct control over simulation results. In this thesis, we describe tools for physics-based animation designed with artist needs and expertise in mind. Most materials can be modeled as elastoplastic: they recover from small deformations, but large deformations permanently alter their rest shape. Unfortunately, large plastic deformations, common in graphical applications, cause simulation instabilities if not addressed. Most elastoplastic simulation techniques in graphics rely on a finite-element approach where objects are discretized into a tetrahedral mesh. With these approaches, maintaining simulation stability during large plastic flows requires remeshing, a complex and computationally expensive process. We introduce a new point-based approach that does not rely on an explicit mesh and avoids the expense of remeshing. Our approach produces comparable results with much lower implementation complexity. Points are a ubiquitous primitive for many effects, so our approach also integrates well with existing artist pipelines. Next, we introduce a new technique for animating stylized images which we call Dynamic Sprites. Artists can use our tool to create digital assets that interact in a natural, but stylized, way in virtual environments. To support the types of nonphysical, exaggerated motions often desired by artists, our approach relies on a heavily modified deformable body simulator, equipped with a set of new intuitive controls and an example-based deformation model. Our approach allows artists to specify how the shape of an object should change as it moves and collides in interactive virtual environments. Finally, we introduce a new technique for animating destructive scenes. Our approach is built on the insight that the most important visual aspects of destruction are plastic deformation and fracture. As with Dynamic Sprites, we use an example-based model of deformation for intuitive artist control. Our simulator treats objects as rigid when computing dynamics but allows them to deform plastically and fracture between timesteps based on interactions with other objects. We demonstrate that our approach can efficiently animate the types of destructive scenes common in film and games. These animation techniques are designed to exploit artist expertise to ease the creation of complex animations. By using artist-friendly primitives and allowing artists to provide characteristic deformations as input, our techniques enable artists to create more compelling animations, more easily.
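    The elastoplastic behaviour described above, elastic recovery below a yield threshold and a permanently altered rest shape beyond it, can be sketched with a simple per-point plasticity update. The snippet below is only a minimal illustrative sketch, not the thesis's point-based method: it uses a scalar strain per point and a hypothetical `yield_strain` limit, and absorbs any excess deformation into the plastic (rest-shape) component.

```python
import numpy as np

def plastic_update(total_strain, plastic_strain, yield_strain=0.05):
    """Minimal per-point elastoplastic update (illustrative sketch only).

    total_strain   : (n,) current measured strain per point
    plastic_strain : (n,) accumulated plastic strain (defines the rest shape)
    yield_strain   : hypothetical elastic limit; deformation beyond it becomes permanent
    """
    # Elastic part is what remains after subtracting the permanent deformation.
    elastic_strain = total_strain - plastic_strain

    # Where the elastic strain exceeds the yield limit, move the excess into the
    # plastic component so the rest shape "flows" toward the deformed state.
    excess = np.abs(elastic_strain) - yield_strain
    flow = np.where(excess > 0.0, np.sign(elastic_strain) * excess, 0.0)
    plastic_strain = plastic_strain + flow

    # The elastic strain driving restoring forces is now clamped to the yield limit.
    elastic_strain = total_strain - plastic_strain
    return elastic_strain, plastic_strain

# Example: small deformations recover fully, large ones permanently shift the rest shape.
elastic, plastic = plastic_update(np.array([0.02, 0.20]), np.zeros(2))
print(elastic, plastic)   # elastic clamped to <= 0.05, remainder stored as plastic
```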

    THREE DIMENSIONAL MODELING AND ANIMATION OF FACIAL EXPRESSIONS

    Facial expression and animation are important aspects of 3D environments featuring human characters. These animations are frequently used in many kinds of applications, and there have been many efforts to increase their realism. Three aspects still stimulate active research: detailed subtle facial expressions, the process of rigging a face, and the transfer of an expression from one person to another. This dissertation focuses on these three aspects. A system for freely designing and creating detailed, dynamic, animated facial expressions is developed. The presented pattern functions produce detailed and animated facial expressions. The system produces realistic results with fast performance and allows users to manipulate it directly and see immediate results. Two unique methods for generating real-time, vivid, animated tears have been developed and implemented. One generates a teardrop that continually changes its shape as the tear drips down the face. The other generates a shedding tear, a kind of tear that seamlessly connects with the skin as it flows along the surface of the face but remains an individual object. Both methods broaden computer graphics and increase the realism of facial expressions. A new method to automatically place the bones on facial/head models, speeding up the rigging process of a human face, is also developed. To accomplish this, the vertices that describe the face/head, as well as the relationships between each part of the face/head, are grouped. The average distance between pairs of vertices is used to place the head bones. To place the facial bones at multiple densities, the mean position of the vertices in each group is measured. The time saved with this method is significant. A novel method to produce realistic expressions and animations by transferring an existing expression to a new facial model is developed. The approach is to transform the source model into the target model, which then has the same topology as the source model. The displacement vectors are calculated, and each vertex in the source model is mapped to the target model. The spatial relationships of each mapped vertex are constrained.
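    The transfer step described above, computing displacement vectors on the source model and applying them to mapped vertices on the target, can be illustrated with a short sketch. The Python snippet below is a minimal, hypothetical version (the array names and the precomputed `source_to_target` vertex mapping are assumptions, not the dissertation's implementation) and it omits the spatial-relationship constraints mentioned in the abstract.

```python
import numpy as np

def transfer_expression(src_neutral, src_expression, tgt_neutral, source_to_target):
    """Copy an expression from a source face to a target face (illustrative sketch).

    src_neutral, src_expression : (n_src, 3) source vertex positions, neutral and expressive
    tgt_neutral                 : (n_tgt, 3) target vertex positions, neutral pose
    source_to_target            : (n_src,) index of the target vertex each source vertex maps to
                                  (assumed one-to-one for this sketch)
    """
    # Displacement of every source vertex caused by the expression.
    displacement = src_expression - src_neutral

    # Apply each source displacement to its mapped target vertex.
    tgt_expression = tgt_neutral.copy()
    tgt_expression[source_to_target] += displacement
    return tgt_expression
```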

    Structure evaluation of computer human animation quality

    Submitted to the University of Bedfordshire in partial fulfilment of the requirements for the degree of Doctor of Philosophy. This work gives a wide survey of the techniques present in the field of character computer animation, concentrating particularly on the techniques and problems involved in producing realistic character synthesis and motion. A preliminary user study (including questionnaires, online publishing such as flicker.com, interviews, multiple-choice questions, publishing on Android mobile phones, questionnaire analysis, validation, statistical evaluation, design steps, and character animation observation) was conducted to explore design questions, identify users' needs, and obtain a "true story" of quality character animation and of the effect of using animation as a tool in education. The first set of questionnaires was designed to collect evaluations of animation from candidates from different walks of life, ranging from animators, gamers, teaching assistants (TAs), students, teachers, professionals, and researchers, who used and evaluated pre-prepared animated character video scenarios; the study also reviewed recent advances in character animation and motion editing techniques that enable the control of complex animations by interactively blending, improving, and tuning artificial or captured motions. The goal of this work was to strengthen students' learning intuition by making education and learning more interesting, useful, and fun, in order to improve students' response to and understanding of a subject area through the use of animation, and by producing the high-quality motion, reaction, interaction, and storyboards required by viewers of the motion. We present a variety of evaluations of motion quality that measure user sensitivity to noticeable artefacts, usability, usefulness, and related factors, derive clear and useful guidelines from the results, and discuss several interesting systematic trends uncovered in the experimental data. We also present an efficient technique for evaluating how well animation can influence education and fulfil the requirements of a given scenario, along with the advantages and deficiencies of some methods commonly used to improve animation quality for the learning process. Finally, we propose a wide range of extensions and statistical analyses enabled by these evaluation tools, such as the Wilcoxon test, F-test, t-test, Wondershare Quiz Creator (WQC), chi-square test, and many others, explained in full detail.
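    As a rough illustration of the kinds of statistical tests listed above, the Python sketch below runs a Wilcoxon signed-rank test, a t-test, an F-test (one-way ANOVA), and a chi-square test with SciPy; the ratings, group labels, and contingency table are invented for the example and are not data from the thesis.

```python
import numpy as np
from scipy import stats

# Hypothetical 1-5 ratings of two animation clips by the same 8 participants.
ratings_clip_a = np.array([3, 4, 2, 5, 4, 3, 4, 5])
ratings_clip_b = np.array([4, 4, 3, 5, 5, 4, 4, 5])

# Paired, non-parametric comparison of the two clips.
w_stat, w_p = stats.wilcoxon(ratings_clip_a, ratings_clip_b)

# Independent-samples t-test, e.g. one viewer group vs. another (invented groups).
t_stat, t_p = stats.ttest_ind(ratings_clip_a, ratings_clip_b)

# One-way ANOVA (F-test) across three invented viewer groups.
f_stat, f_p = stats.f_oneway(ratings_clip_a, ratings_clip_b, ratings_clip_b - 1)

# Chi-square test on a 2x2 contingency table of yes/no questionnaire answers.
table = np.array([[12, 8],
                  [5, 15]])
chi2, chi_p, dof, expected = stats.chi2_contingency(table)

print(f"Wilcoxon p={w_p:.3f}, t-test p={t_p:.3f}, "
      f"F-test p={f_p:.3f}, chi-square p={chi_p:.3f}")
```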

    CASA: Category-agnostic Skeletal Animal Reconstruction

    Recovering the skeletal shape of an animal from a monocular video is a longstanding challenge. Prevailing animal reconstruction methods often adopt a control-point-driven animation model and optimize bone transforms individually without considering skeletal topology, yielding unsatisfactory shape and articulation. In contrast, humans can easily infer the articulation structure of an unknown animal by associating it with a seen articulated character in their memory. Inspired by this, we present CASA, a novel Category-Agnostic Skeletal Animal reconstruction method consisting of two major components: a video-to-shape retrieval process and a neural inverse graphics framework. During inference, CASA first retrieves an articulated shape from a 3D character asset bank whose rendered images score highly against the input video according to a pretrained language-vision model. CASA then integrates the retrieved character into an inverse graphics framework and jointly infers the shape deformation, skeleton structure, and skinning weights through optimization. Experiments validate the efficacy of CASA in shape reconstruction and articulation. We further demonstrate that the resulting skeletal-animated characters can be used for re-animation.
    Comment: Accepted to NeurIPS 2022
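    The retrieval step described in the abstract, picking the asset whose renders best match the input video under a pretrained language-vision model, might be sketched along the lines below. Everything here is hypothetical: `embed_image` stands in for whatever CLIP-style image encoder is used and `render_views` for a renderer over the asset bank; neither is specified by the abstract.

```python
import numpy as np

def retrieve_asset(video_frames, asset_bank, embed_image, render_views):
    """Pick the articulated asset whose renderings best match the video (sketch).

    video_frames : list of input video frames (images)
    asset_bank   : list of articulated 3D character assets
    embed_image  : hypothetical CLIP-style encoder, image -> unit-norm feature vector
    render_views : hypothetical renderer, asset -> list of rendered images
    """
    frame_feats = np.stack([embed_image(f) for f in video_frames])            # (F, d)

    best_asset, best_score = None, -np.inf
    for asset in asset_bank:
        render_feats = np.stack([embed_image(r) for r in render_views(asset)])  # (R, d)
        # Cosine similarity between every frame and every rendering (features unit-norm);
        # score the asset by the average best-matching rendering per frame.
        score = (frame_feats @ render_feats.T).max(axis=1).mean()
        if score > best_score:
            best_asset, best_score = asset, score
    return best_asset
```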

    Deep Detail Enhancement for Any Garment

    Creating fine garment details requires significant effort and large computational resources. In contrast, a coarse shape may be easy to acquire in many scenarios (e.g., via low-resolution physically-based simulation, linear blend skinning driven by skeletal motion, or portable scanners). In this paper, we show how to add rich yet plausible details to a coarse garment geometry in a data-driven manner. Once the parameterization of the garment is given, we formulate the task as a style-transfer problem over the space of associated normal maps. To facilitate generalization across garment types and character motions, we introduce a patch-based formulation that produces high-resolution details by matching a Gram-matrix-based style loss to hallucinate geometric detail (i.e., wrinkle density and shape). We extensively evaluate our method on a variety of production scenarios and show that it is simple, lightweight, efficient, and generalizes across underlying garment types, sewing patterns, and body motions.
    Comment: 12 pages
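    A Gram-matrix style loss of the kind mentioned above is commonly computed as in the PyTorch sketch below; it operates on feature maps extracted from normal-map patches and is only a generic illustration of such a loss term, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def gram_matrix(features):
    """Channel-by-channel correlation of a feature map, normalised by its size.

    features : (B, C, H, W) activations of some feature extractor on a normal-map patch
    """
    b, c, h, w = features.shape
    flat = features.view(b, c, h * w)
    return flat @ flat.transpose(1, 2) / (c * h * w)   # (B, C, C)

def style_loss(generated_feats, target_feats):
    """MSE between Gram matrices of generated and target detail patches."""
    return F.mse_loss(gram_matrix(generated_feats), gram_matrix(target_feats))
```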

    Procedural Generation of 2D Creatures

    The purpose of this bachelor's thesis is the development of a system capable of generating a large variety of 2D creatures and their associated data, such as skeletons, meshes, and textures. A JavaScript implementation of the system was developed for this thesis. The thesis contains a description of the developed system and of each step of the generation process and its principles, with additional notes about the specifics of the implementation. The creature generation system as a whole and its implementation are analysed and their advantages and drawbacks discussed. The performance of the implementation is also tested. Several possible improvements are proposed at the end of the thesis, as well as possible uses of the system.

    Real-time simulation and visualisation of cloth using edge-based adaptive meshes

    Real-time rendering and the animation of realistic virtual environments and characters have progressed at a great pace, following advances in computer graphics hardware in the last decade. The role of cloth simulation is becoming ever more important in the quest to improve the realism of virtual environments. The real-time simulation of cloth and clothing is important for many applications such as virtual reality, crowd simulation, games, and software for online clothes shopping. A large number of polygons is necessary to depict the highly flexible nature of cloth, with wrinkling and frequent changes in its curvature. Combined with the physical calculations that model the deformations, the effort required to simulate cloth in detail is computationally expensive, making realistic simulation at interactive frame rates difficult. Real-time cloth simulations can lack quality and realism compared to their offline counterparts, since coarse meshes must often be employed for performance reasons. The focus of this thesis is to develop techniques that allow the real-time simulation of realistic cloth and clothing. Adaptive meshes have previously been developed to act as a bridge between low- and high-polygon meshes, aiming to adaptively exploit variations in the shape of the cloth. The mesh complexity is dynamically increased or refined to balance quality against computational cost during a simulation. A limitation of many approaches is that they often do not consider the decimation or coarsening of previously refined areas, or are otherwise not fast enough for real-time applications. A novel edge-based adaptive mesh is developed for the fast incremental refinement and coarsening of a triangular mesh. A mass-spring network is integrated into the mesh, permitting the real-time adaptive simulation of cloth, and techniques are developed for the simulation of clothing on an animated character.
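    A mass-spring network of the kind integrated into the adaptive mesh can be illustrated with the minimal explicit-integration sketch below; the spring constant, damping, and edge list are placeholder assumptions, and the thesis's actual integration scheme and adaptive refinement logic are not reproduced here.

```python
import numpy as np

def step_mass_spring(x, v, edges, rest_len, dt=1e-3, k=500.0, damping=0.98, mass=1.0):
    """One explicit Euler step of a cloth mass-spring network (illustrative sketch).

    x, v     : (n, 3) particle positions and velocities
    edges    : (m, 2) index pairs of particles connected by springs (the mesh edges)
    rest_len : (m,)   rest length of every spring
    """
    gravity = np.array([0.0, -9.81, 0.0])
    forces = np.tile(gravity * mass, (len(x), 1))

    # Hooke's law along every edge of the mesh.
    d = x[edges[:, 1]] - x[edges[:, 0]]                 # (m, 3) edge vectors
    length = np.linalg.norm(d, axis=1, keepdims=True)
    direction = d / np.maximum(length, 1e-9)
    f = k * (length - rest_len[:, None]) * direction    # force on each edge's start particle

    np.add.at(forces, edges[:, 0], f)
    np.add.at(forces, edges[:, 1], -f)

    # Explicit Euler with simple velocity damping.
    v = damping * (v + dt * forces / mass)
    x = x + dt * v
    return x, v
```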