1,436 research outputs found

    An enhanced framework on hair modeling and real-time animation

    Master's thesis (Master of Science)

    Memoir of a marionette

    My film intends to draw public awareness to the rapid disappearance of traditional performing arts, a gap caused by the fast advancement of technology. Film, TV, the Internet, and video games have diverted public interest from the old performing arts. The film discusses modernity versus antiquity: how can we progress without sacrificing all we have inherited from our ancestors? In the film, this idea is embodied by two trembling hands trying to reach each other – one belonging to the marionette and the other to the old performer. Both share the same dream: laughter and cheers from children. But before the hands join, the old performer dies and the marionette is de-stringed; the link between the performing art and the performer is broken. Later, when the grandson of the old performer comes to collect his belongings, the marionette is found; but the child, unaware of its value, quickly loses interest and dumps it into a box. Unfortunately, such scenes are a daily reality in a fast-developing country like China, where I come from. As technology and communication advance, the world is becoming more and more like a small village, and we are becoming more and more like each other; but how can we maintain our individual identities while being global villagers? Preserving our cultural heritage is one of the answers.

    Multilayered visuo-haptic hair simulation

    Over the last fifteen years, research on hair simulation has made great advances in the domains of modeling, animation, and rendering, and is now moving towards more innovative interaction modalities. The combination of visual and haptic interaction within a virtual hairstyling simulation framework represents an important concept evolving in this direction. Our visuo-haptic hair interaction framework consists of two layers which handle the response to the user's interaction at a local level (around the contact area) and at a global level (on the full hairstyle). Two distinct simulation models compute individual and collective hair behavior. Our multilayered approach can be used to efficiently address the specific requirements of haptics and vision. Haptic interaction with both models has been tested with virtual hairstyling tools.
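The two-layer idea described above — a detailed per-strand response near the haptic contact and a coarse collective update for the rest of the hairstyle — can be sketched roughly as follows. This is an illustrative toy, not the paper's actual models or API; the class and parameter names are invented for the example.

```python
import numpy as np

class TwoLayerHairSim:
    """Toy two-layer interaction dispatcher (illustrative names, not the paper's API)."""

    def __init__(self, strand_roots, local_radius=0.1):
        self.roots = np.asarray(strand_roots, dtype=float)  # (n, 3) strand root positions
        self.local_radius = local_radius

    def partition(self, contact_point):
        """Strands near the haptic tool get the detailed local model; the rest the global one."""
        d = np.linalg.norm(self.roots - contact_point, axis=1)
        return np.where(d < self.local_radius)[0], np.where(d >= self.local_radius)[0]

    def apply_tool_force(self, contact_point, force):
        local, glob = self.partition(contact_point)
        disp = np.zeros_like(self.roots)
        # Local layer: per-strand response with a distance falloff around the contact.
        d = np.linalg.norm(self.roots[local] - contact_point, axis=1)
        falloff = 1.0 - d / self.local_radius
        disp[local] = falloff[:, None] * force
        # Global layer: one small collective update applied to the whole hairstyle.
        disp[glob] = 0.05 * force
        return disp
```

In a real framework the two branches would invoke the individual-strand and collective simulation models; here they are reduced to simple displacement rules to show the dispatch structure.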

    A Tool for Creating Expressive Control Over Fur and Feathers

    The depiction of body fur and feathers has received considerable attention within the animation production environment, yet it continues to pose significant computational challenges. Tools that let animators control fur and feathers as an expressive characteristic have not been explored as fully as dynamic control systems. This thesis outlines the research behind, and development of, a control system for fur and feathers intended to enable the authoring of animation in an interactive software tool common to many animation production environments. The results of this thesis show that the control system over fur and feathers is as easily used as appendage controls to create strong posing, silhouette, and timing in animations. The tool enables more effective and efficient animation of characters that use fur and feathers for expressive communication, such as hedgehogs, birds, and cats.

    Animating ultra-complex voxel scenes through shell deformation

    Draft version of the thesis. Voxel representations have many advantages, such as ordered traversal during rendering and trivial, reasonably good LOD through MIP-mapping. Special-effects companies such as Digital Domain or Rhythm & Hues now extensively use voxel engines, for semi-transparent objects such as clouds, avalanches, tornadoes, or explosions, but also for complex solid objects. Several gaming companies are also looking into voxel engines to deal with ever more complex scenes, but the main problem when dealing with voxel representations is the amount of data that has to be manipulated. This amount usually prevents animating in real time. To solve these issues, the ARTIS team developed the GigaVoxels framework: a very powerful voxel engine based on GPU ray-casting, with advanced memory management, so that very complex scenes can be rendered in real time. The purpose of the TER was to develop a solution for animating voxel objects in real time, implement it, and eventually integrate it into the GigaVoxels framework.
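The "trivial LOD through MIP-mapping" mentioned above can be illustrated with a small sketch: a 3D MIP pyramid is built by averaging 2×2×2 voxel blocks, and a level is picked so that a voxel's footprint roughly matches the viewing distance. This is a minimal CPU illustration of the general idea, not GigaVoxels' actual GPU implementation; the function names are invented for the example.

```python
import numpy as np

def build_voxel_mipmap(volume):
    """Build a MIP pyramid for a cubic density volume by averaging 2x2x2 blocks."""
    levels = [np.asarray(volume, dtype=np.float32)]
    v = levels[0]
    while v.shape[0] > 1:
        n = v.shape[0] // 2
        # Group voxels into 2x2x2 blocks and average them to get the next level.
        v = v.reshape(n, 2, n, 2, n, 2).mean(axis=(1, 3, 5))
        levels.append(v)
    return levels

def lod_level(distance, base_voxel_size, n_levels):
    """Pick the MIP level whose voxel footprint roughly matches the viewing distance."""
    return int(np.clip(np.log2(max(distance / base_voxel_size, 1.0)), 0, n_levels - 1))
```

During ray-casting, a renderer would sample `levels[lod_level(...)]` along each ray, so distant regions touch far fewer voxels than nearby ones.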

    Chain Shape Matching for Simulating Complex Hairstyles

    Animations of hair dynamics greatly enrich the visual attractiveness of human characters. Traditional simulation techniques handle hair as clumps or a continuum for efficiency; however, the visual quality is limited because they cannot represent the fine-scale motion of individual hair strands. Although a recent mass-spring approach tackled the problem of simulating the dynamics of every strand of hair, it required a complicated setting of springs and suffered from high computational cost. In this paper, we base the animation of hair at such a fine scale on Lattice Shape Matching (LSM), which has been successfully used for simulating deformable objects. Our method regards each strand of hair as a chain of particles, and computes geometrically derived forces for the chain based on shape matching. Each chain of particles is simulated as an individual strand of hair. Our method can easily handle complex hairstyles such as curly or afro styles in a numerically stable way. While our method is not physically based, our GPU-based simulator achieves visually plausible animations consisting of several tens of thousands of hair strands at interactive rates.
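The core mechanism — treating a strand as a chain of particles and pulling each particle toward goal positions obtained by rigidly matching overlapping chain windows to their rest shape — can be sketched as follows. This is a minimal CPU sketch of generic chain shape matching, not the paper's exact formulation or its GPU implementation; the window size, stiffness, and function names are assumptions made for the example.

```python
import numpy as np

def polar_rotation(A):
    """Extract the rotation part of a 3x3 matrix via SVD (polar decomposition)."""
    U, _, Vt = np.linalg.svd(A)
    R = U @ Vt
    if np.linalg.det(R) < 0:  # avoid reflections
        U[:, -1] *= -1
        R = U @ Vt
    return R

def shape_match_goals(x, x0, window=3):
    """Goal positions for a chain: blend rigid matches of overlapping windows."""
    n = len(x)
    goals = np.zeros_like(x)
    counts = np.zeros(n)
    for s in range(n - window + 1):
        idx = slice(s, s + window)
        c, c0 = x[idx].mean(0), x0[idx].mean(0)
        A = (x[idx] - c).T @ (x0[idx] - c0)   # covariance of current vs rest shape
        R = polar_rotation(A)
        goals[idx] += (R @ (x0[idx] - c0).T).T + c
        counts[s:s + window] += 1
    return goals / counts[:, None]           # average overlapping window goals

def step(x, v, x0, dt=0.016, stiffness=0.8, gravity=np.array([0.0, -9.8, 0.0])):
    """One explicit integration step pulling particles toward their goal positions."""
    v = v + dt * gravity
    x_pred = x + dt * v
    g = shape_match_goals(x_pred, x0)
    x_new = x_pred + stiffness * (g - x_pred)  # shape-matching correction
    v_new = (x_new - x) / dt
    x_new[0], v_new[0] = x0[0], 0.0            # pin the root of the strand
    return x_new, v_new
```

Because the correction only moves particles toward rigidly matched goals, the update is unconditionally damped rather than spring-driven, which is what gives shape matching its characteristic numerical stability.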

    Creating hair for a 3D character with Autodesk Maya, XGen and RenderMan

    In this thesis I create hair for a 3D character in a CG animation, using the following software: Autodesk Maya, XGen, and RenderMan. Maya is the 3D studio I work in, from creating the hair to rendering it. With XGen I generate and shape the hair strands. I animate the hair with Maya's dynamic simulation tools, and with RenderMan I shade and render the hair. The end result is near-realistic hair in a CG animation. My workflow is straightforward: first I explain my software choices and the starting scene; after that, I explain and show the steps of making the hair, and the reasons behind them; at the end I analyze the result and explain how I would continue the process.

    HeadOn: Real-time Reenactment of Human Portrait Videos

    We propose HeadOn, the first real-time source-to-target reenactment approach for complete human portrait videos that enables transfer of torso and head motion, facial expression, and eye gaze. Given a short RGB-D video of the target actor, we automatically construct a personalized geometry proxy that embeds a parametric head, eye, and kinematic torso model. A novel real-time reenactment algorithm employs this proxy to photo-realistically map the captured motion from the source actor to the target actor. On top of the coarse geometric proxy, we propose a video-based rendering technique that composites the modified target portrait video via view- and pose-dependent texturing, and creates photo-realistic imagery of the target actor under novel torso and head poses, facial expressions, and gaze directions. To this end, we propose robust tracking of the face and torso of the source actor. We extensively evaluate our approach and show that it enables much greater flexibility in creating realistic reenacted output videos.
    Comment: Video: https://www.youtube.com/watch?v=7Dg49wv2c_g Presented at Siggraph'1