
    Chain Shape Matching for Simulating Complex Hairstyles

    Animations of hair dynamics greatly enrich the visual attractiveness of human characters. Traditional simulation techniques handle hair as clumps or as a continuum for efficiency; however, the visual quality is limited because they cannot represent the fine-scale motion of individual hair strands. Although a recent mass-spring approach tackled the problem of simulating the dynamics of every strand of hair, it required a complicated spring setup and suffered from high computational cost. In this paper, we base the animation of hair at this fine scale on Lattice Shape Matching (LSM), which has been used successfully for simulating deformable objects. Our method regards each strand of hair as a chain of particles and computes geometrically derived forces for the chain based on shape matching. Each chain of particles is simulated as an individual strand of hair. Our method can easily handle complex hairstyles such as curly or afro styles in a numerically stable way. While our method is not physically based, our GPU-based simulator achieves visually plausible animations consisting of several tens of thousands of hair strands at interactive rates.
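    For readers curious about the mechanics, a minimal sketch of the per-strand shape-matching step is given below. It assumes overlapping fixed-size particle windows along the strand, a best-fit rotation obtained via SVD, and a simple explicit velocity update with gravity; the window size, stiffness parameter, and integration scheme are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def best_fit_rotation(p, q):
    """Rotation that best aligns rest-pose offsets q onto current offsets p."""
    a = p.T @ q                       # 3x3 cross-covariance matrix
    u, _, vt = np.linalg.svd(a)
    r = u @ vt
    if np.linalg.det(r) < 0:          # guard against reflections
        u[:, -1] *= -1
        r = u @ vt
    return r

def chain_shape_match_step(x, v, x_rest, dt=1e-3, window=5, stiffness=0.8):
    """Advance one strand of n >= window particles by one time step."""
    n = len(x)
    goals = np.zeros_like(x)
    counts = np.zeros(n)
    for start in range(n - window + 1):            # overlapping windows along the chain
        idx = slice(start, start + window)
        c_cur, c_rest = x[idx].mean(0), x_rest[idx].mean(0)
        r = best_fit_rotation(x[idx] - c_cur, x_rest[idx] - c_rest)
        goals[idx] += (x_rest[idx] - c_rest) @ r.T + c_cur   # rigidly matched goal positions
        counts[idx] += 1
    goals /= counts[:, None]                       # average goals over overlapping windows
    # Geometrically derived force: pull each particle toward its averaged goal position.
    v = v + stiffness * (goals - x) / dt + dt * np.array([0.0, -9.81, 0.0])
    x = x + dt * v
    x[0], v[0] = x_rest[0], 0.0                    # pin the root particle to the scalp
    return x, v
```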

    Multilayered visuo-haptic hair simulation

    Over the last fifteen years, research on hair simulation has made great advances in the domains of modeling, animation, and rendering, and is now moving towards more innovative interaction modalities. The combination of visual and haptic interaction within a virtual hairstyling simulation framework represents an important concept evolving in this direction. Our visuo-haptic hair interaction framework consists of two layers that handle the response to the user's interaction at a local level (around the contact area) and at a global level (on the full hairstyle). Two distinct simulation models compute individual and collective hair behavior. Our multilayered approach can be used to efficiently address the specific requirements of haptics and vision. Haptic interaction with both models has been tested with a virtual hairstyling tool.
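    As a rough illustration of the two-layer split described above, the sketch below partitions strands into a local set near the haptic tool, updated at haptic rates with an individual-strand model, and a global remainder advanced once per visual frame with a collective model. The rates, the contact radius, and the simulate_individual / simulate_collective callbacks are hypothetical placeholders rather than the framework's actual interfaces.

```python
import numpy as np

HAPTIC_STEPS_PER_FRAME = 30      # e.g. ~1 kHz haptic loop vs. ~33 Hz visual frames (assumed)
CONTACT_RADIUS = 0.05            # metres around the tool tip (assumed)

def advance_frame(strands, tool_pos, simulate_individual, simulate_collective):
    """One visual frame: fine local updates around the contact, coarse global update elsewhere."""
    roots = np.array([s.root for s in strands])
    near = np.linalg.norm(roots - tool_pos, axis=1) < CONTACT_RADIUS
    local_layer  = [s for s, hit in zip(strands, near) if hit]
    global_layer = [s for s, hit in zip(strands, near) if not hit]

    feedback = np.zeros(3)
    for _ in range(HAPTIC_STEPS_PER_FRAME):        # local level: individual hair behaviour
        feedback = simulate_individual(local_layer, tool_pos)
    simulate_collective(global_layer)              # global level: collective hair behaviour
    return feedback                                # force sent back to the haptic device
```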

    3D hair sketching for real-time hair modeling and dynamic simulations

    Ankara : The Department of Computer Engineering and the Institute of Engineering and Science of Bilkent University, 2008. Thesis (Master's) -- Bilkent University, 2008. Includes bibliographical references (leaves 48-51). Hair has been an active research area in the computer graphics community for over a decade. Different approaches have been proposed for different aspects of hair research, such as modeling, simulation, animation, and rendering. In this thesis, we introduce a sketch-based tool that makes use of direct-manipulation interfaces to create hair models and, furthermore, to simulate the created hair models under physically based constraints in real time. Throughout the thesis, the tool is analyzed with respect to different aspects of the problem, such as hair modeling, hair simulation, hair sketching, and hair rendering. Aras, Rıfat. M.S.

    3D Hair sketching for real-time dynamic & key frame animations

    Physically based simulation of human hair is a well-studied and well-known problem. However, a "pure" physically based representation of hair (and other animation elements) is not the only concern of animators, who also want to control the creation and animation phases of the content. This paper describes a sketch-based tool with which a user can both create hair models with different styling parameters and produce animations of these created hair models using physically based and keyframe-based techniques. The model creation and animation production tasks are all performed with direct manipulation techniques in real time. © 2008 Springer-Verlag

    THREE DIMENSIONAL MODELING AND ANIMATION OF FACIAL EXPRESSIONS

    Facial expression and animation are important aspects of 3D environments featuring human characters. These animations are frequently used in many kinds of applications, and there have been many efforts to increase their realism. Three aspects are still stimulating active research: detailed subtle facial expressions, the process of rigging a face, and the transfer of an expression from one person to another. This dissertation focuses on these three aspects. A system for freely designing and creating detailed, dynamic, and animated facial expressions is developed. The presented pattern functions produce detailed and animated facial expressions. The system produces realistic results with fast performance and allows users to manipulate it directly and see immediate results. Two unique methods for generating real-time, vivid, animated tears have been developed and implemented. One method generates a teardrop that continually changes its shape as the tear drips down the face. The other generates a shedding tear, a kind of tear that seamlessly connects with the skin as it flows along the surface of the face but remains an individual object. Both methods broaden CG and increase the realism of facial expressions. A new method to automatically set the bones on facial/head models to speed up the rigging process of a human face is also developed. To accomplish this, the vertices that describe the face/head, as well as the relationships between each part of the face/head, are grouped. The average distance between pairs of vertices is used to place the head bones. To set the bones in the face at multiple densities, the mean value of the vertices in each group is measured. The time saved with this method is significant. Finally, a novel method to produce realistic expressions and animations by transferring an existing expression to a new facial model is developed. The approach is to transform the source model into the target model, which then has the same topology as the source model. Displacement vectors are calculated, each vertex in the source model is mapped to the target model, and the spatial relationships of each mapped vertex are constrained.
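    The transfer step described at the end of the abstract can be illustrated with a small sketch: per-vertex displacement vectors measured between a neutral and an expressive source face are applied to a target face through a vertex mapping. The nearest-neighbour correspondence and the uniform size-based scaling below are assumptions made for brevity; the dissertation's actual mapping and spatial-relationship constraints are more involved.

```python
import numpy as np
from scipy.spatial import cKDTree

def transfer_expression(src_neutral, src_expr, tgt_neutral):
    """src_neutral, src_expr: (n, 3) source vertices; tgt_neutral: (m, 3) target vertices."""
    # 1. Map every target vertex to its closest source vertex (assumed correspondence).
    nearest = cKDTree(src_neutral).query(tgt_neutral)[1]
    # 2. Displacement vectors that encode the expression on the source face.
    disp = src_expr - src_neutral
    # 3. Scale displacements by the relative size of the two heads (a crude stand-in
    #    for preserving spatial relationships between the models).
    scale = np.ptp(tgt_neutral, axis=0).mean() / np.ptp(src_neutral, axis=0).mean()
    return tgt_neutral + scale * disp[nearest]
```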

    Computer-assisted animation creation techniques for hair animation and shade, highlight, and shadow

    Degree system: New ; Report number: Kō 3062 ; Degree type: Doctor of Engineering ; Date conferred: 2010/2/25 ; Waseda University diploma number: Shin 532

    Dynamic Cloth for the Digital Character

    Cloth simulation tends to have a lingering reputation for being notoriously complex and is therefore casually avoided. Very few artists are enthusiastic about a cloth simulator's primary use, and perhaps even fewer would consider cloth simulation for anything other than clothing. This thesis presents typical practices of cloth simulation from the artistic perspective of a Cloth Technical Director (TD) who worked on the animated feature film and applied case study, Cloudy with a Chance of Meatballs (2009). Through a proof of concept using a generic character, simple props, and commercial software, key techniques are demonstrated to replicate the workflow of clothing the digital character as performed by artists at Sony Pictures Imageworks. The result is a set of methods intended to simplify the workflow of clothing the digital character.

    Development of 3D models, animations, storyline and dialogue system for interactive English learning mobile game for Russian speaking users

    https://www.ester.ee/record=b5148750*es

    Drawing from motion capture : developing visual languages of animation

    The work presented in this thesis aims to explore novel approaches to combining motion capture with drawing and 3D animation. As the art form of animation matures, hybrid techniques become more feasible, and crosses between traditional and digital media provide new opportunities for artistic expression. 3D computer animation is valued for its keyframing and rendering advancements, which result in complex pipelines where technical and artistic specialists from different areas contribute to the end result. Motion capture is mostly used for realistic animation, more often than not for live-action filmmaking as a visual effect. Realistic animated films depend on retargeting techniques designed to preserve actors' performances with a high degree of accuracy. In this thesis, we investigate alternative production methods that do not depend on retargeting and provide animators with greater options for experimentation and expressivity. As motion capture data is a rich source of naturalistic movement, we aim to combine it with interactive methods such as digital sculpting and 3D drawing. Whereas drawing is predominantly used in preproduction, in both realistic animation and visual effects, we embed it instead in alternative production methods, where artists can benefit from improvisation and expression while immersed in a three-dimensional environment. Additionally, we apply these alternative methods to the visual development of animation, where they become relevant for the creation of specific visual languages that can be used to articulate concrete ideas for storytelling in animation.

    Procedurally Generating Biologically Driven Bird and Non-Avian Dinosaur Feathers

    A key element of computer-graphics research is representing the world around us, and immense inspiration may be found in nature. Algorithms and procedural models may be developed to describe the three-dimensional shape of objects and how they interact with light. This thesis focuses particularly on bird and other dinosaur feathers and their structure. More specifically, it addresses the problem of procedurally generating biologically driven geometry for modeling feathers in computer graphics. As opposed to previously published methods for generating feather geometry, data is derived from a wide range of real-world feather specimens and used in creating graphical models of feathers. Modeling feathers is of interest both for media production and for various fields of research such as ornithology, paleontology, and materials science. In order to create realistic computer-graphics feathers, the anatomy of feathers is analyzed in detail with the aim of understanding their structure and variation and applying that understanding to modeling. Data concerning the shape of actual feathers was collected and analyzed to drive attribute parameters for modeling accurate synthetic feathers, and methods for generating geometry informed by the data were investigated. Synthesized image results, capabilities, limitations, and extensions of the developed techniques are presented.
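    A hedged sketch of data-driven procedural feather geometry in the spirit of the thesis is given below: a central rachis curve with barbs branching off at specified angles and lengths. The parameter names (rachis_length, barb_density, barb_angle_deg, vane_width, curvature) are illustrative stand-ins for the attributes one might derive from specimen measurements, not the thesis's actual schema.

```python
import numpy as np

def generate_feather(rachis_length=10.0, barb_density=40, barb_angle_deg=35.0,
                     vane_width=2.0, curvature=0.15):
    """Return the rachis polyline and a list of barb polylines (each a (k, 3) array)."""
    t = np.linspace(0.0, 1.0, 200)
    # Rachis: gently curved central shaft lying in the xz-plane.
    rachis = np.column_stack([t * rachis_length,
                              np.zeros_like(t),
                              curvature * rachis_length * t ** 2])
    barbs = []
    angle = np.radians(barb_angle_deg)
    for u in np.linspace(0.15, 1.0, barb_density):   # leave the calamus (base) bare
        base = np.array([u * rachis_length, 0.0, curvature * rachis_length * u ** 2])
        length = vane_width * np.sin(np.pi * u)      # vane tapers toward tip and base
        for side in (-1.0, 1.0):                     # left and right vanes
            tip = base + length * np.array([np.cos(angle), side * np.sin(angle), 0.0])
            barbs.append(np.linspace(base, tip, 8))
    return rachis, barbs
```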