155 research outputs found

    Computer-assisted animation creation techniques for hair animation and shade, highlight, and shadow

    Degree system: new; report number: Kō 3062; degree type: Doctor of Engineering; date conferred: 2010/2/25; Waseda University diploma number: Shin 532

    Chain Shape Matching for Simulating Complex Hairstyles

    Animations of hair dynamics greatly enrich the visual attractiveness of human characters. Traditional simulation techniques handle hair as clumps or as a continuum for efficiency; however, the visual quality is limited because they cannot represent the fine-scale motion of individual hair strands. Although a recent mass-spring approach tackled the problem of simulating the dynamics of every strand of hair, it required a complicated spring setup and suffered from high computational cost. In this paper, we base such fine-scale hair animation on Lattice Shape Matching (LSM), which has been used successfully for simulating deformable objects. Our method regards each strand of hair as a chain of particles and computes geometrically derived forces for the chain based on shape matching. Each chain of particles is simulated as an individual strand of hair. Our method can easily handle complex hairstyles such as curly or afro styles in a numerically stable way. While our method is not physically based, our GPU-based simulator achieves visually plausible animations consisting of several tens of thousands of hair strands at interactive rates.
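
    The chain shape matching idea admits a compact illustration. The following is a hedged sketch reconstructed from the abstract, not the paper's GPU implementation; the window size and stiffness values are assumptions. Each strand is a chain of particles, overlapping windows along the chain are rigidly matched against their rest shape, and particles are pulled toward the blended goal positions.

        import numpy as np

        def best_fit_rotation(rest_c, cur_c):
            # Kabsch/polar step: rotation R minimizing ||cur_c - R @ rest_c||
            A = cur_c.T @ rest_c
            U, _, Vt = np.linalg.svd(A)
            if np.linalg.det(U @ Vt) < 0:   # repair an accidental reflection
                U[:, -1] *= -1
            return U @ Vt

        def shape_match_chain(rest, cur, window=3, stiffness=0.5):
            # rest, cur: (N, 3) rest and current particle positions of one strand
            goals = np.zeros_like(cur)
            counts = np.zeros(len(cur))
            for s in range(len(cur) - window + 1):      # overlapping windows
                r = rest[s:s+window] - rest[s:s+window].mean(axis=0)
                m = cur[s:s+window].mean(axis=0)
                R = best_fit_rotation(r, cur[s:s+window] - m)
                goals[s:s+window] += (R @ r.T).T + m    # rigidly fitted rest shape
                counts[s:s+window] += 1
            goals /= counts[:, None]                    # blend overlapping goals
            return cur + stiffness * (goals - cur)      # pull toward goals

    Because each window is matched rigidly, curls present in the rest shape are preserved by the goal positions, which is consistent with the abstract's claim that curly and afro styles are handled stably.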

    Expressive rendering of mountainous terrain

    Technical report. Painters and cartographers have developed artistic landscape rendering techniques for centuries. Such renderings can visualize complex three-dimensional landscapes in a pleasing and understandable way. In this work we examine a particular type of artistic depiction, panorama maps, in terms of function and style, and we develop methods to automatically generate renderings reminiscent of panorama maps from GIS data. In particular, we develop image-based procedural surface textures for mountainous terrain. Our methods use the structural information present in the terrain and are developed with perceptual metrics and artistic considerations in mind.
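
    The abstract does not spell out the texture algorithms, but as a hedged illustration of "structural information present in the terrain", the sketch below derives per-cell slope and aspect from a GIS heightfield, the kind of quantities a procedural rock/snow texture can be driven by; the 35-degree rock threshold is an assumption.

        import numpy as np

        def slope_and_aspect(height, cell_size=1.0):
            # central differences over the heightfield grid
            dz_dy, dz_dx = np.gradient(height, cell_size)
            slope = np.arctan(np.hypot(dz_dx, dz_dy))   # steepness in radians
            aspect = np.arctan2(dz_dy, dz_dx)           # uphill direction in the plane
            return slope, aspect

        # e.g., place rock strokes where the terrain is steep, snow elsewhere:
        # is_rock = slope > np.radians(35.0)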

    Expressive rendering of animated hair

    Hair simulation is one of the crucial elements of character realism in video games as well as animated movies. It is also one of the most challenging because of its complex nature. A simulation model needs to be able to handle hair-fiber or wisp interactions while keeping the desired rendering style. During the past few years, intensive work has been done in this field. Most authors have tried to render and animate hair as realistically as possible; impressive results have been obtained and computation times have been reduced. Nevertheless, this level of realism is not always desired by the animator. Most animated characters are represented with a hair model composed of only a few hair wisps or clumps; in other words, individual hair fibers are not even accounted for. Little work has been done to animate and render non-photorealistic hair for cel characters. The goal of this work is to design an expressive rendering technique for a realistic animation of hair. This project is part of an ANR research program, a joint industrial project with two production studios (Neomis Animation and BeeLight), two other INRIA project-teams (Bipop and Evasion), and a CNRS lab (Institut Jean Le Rond d'Alembert, Université Pierre et Marie Curie). The aim of the project is to provide hair rendering and animation tools for movie making. According to our discussions with artists from the Neomis studio, an animator expects realistic hair motion combined with an expressive rendering technique dedicated to animated movies.

    Mobile Wound Assessment and 3D Modeling from a Single Image

    The prevalence of camera-enabled mobile phones has made mobile wound assessment a viable treatment option for millions of previously hard-to-reach patients. We have designed a complete mobile wound assessment platform to ameliorate the many challenges of chronic wound care. Chronic wounds and infections are the most severe, costly, and fatal types of wounds, placing them at the center of mobile wound assessment. Wound physicians assess thousands of single-view wound images from all over the world, and it may be difficult to determine the location of the wound on the body, for example, if the image is taken at close range. In our solution, end-users capture an image of the wound with their mobile camera. The wound image is segmented and classified using modern convolutional neural networks and is stored securely in the cloud for remote tracking. We use an interactive, semi-automated approach to let users specify the location of the wound on the body. To accomplish this, we have created, to the best of our knowledge, the first 3D human surface anatomy labeling system, based on the current NYU and Anatomy Mapper labeling systems. To interactively view wounds in 3D, we present an efficient projective texture mapping algorithm for texturing wounds onto a 3D human anatomy model. In so doing, we demonstrate an approach to 3D wound reconstruction that works even for a single wound image.
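
    As a hedged sketch of the projective texture mapping step (the paper's exact algorithm and camera parameterization are not given in the abstract; K, R, and t below are assumed pinhole intrinsics and extrinsics of the wound photo), each vertex of the anatomy model is projected into the image to obtain texture coordinates.

        import numpy as np

        def project_uvs(vertices, K, R, t, width, height):
            # vertices: (N, 3) world-space points of the anatomy mesh
            cam = R @ vertices.T + t.reshape(3, 1)   # world -> camera space
            pix = K @ cam                            # camera -> homogeneous pixels
            uv = (pix[:2] / pix[2]).T                # perspective divide
            uv /= np.array([width, height])          # pixels -> [0, 1] UVs
            in_front = cam[2] > 0                    # reject points behind camera
            return uv, in_front

    A complete system would additionally depth-test each vertex against the model so that body surfaces occluded in the photo are not textured.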

    Algebraic Smooth Occluding Contours

    Computing occluding contours is a key building block of non-photorealistic rendering, but producing contours with consistent visibility has been notoriously challenging. This paper describes the first general-purpose smooth surface construction for which the occluding contours can be computed in closed form. For a given input mesh and camera viewpoint, we produce a G^1 piecewise-quadratic surface approximating the mesh. We show how the image-space occluding contours of this representation may then be described as piecewise rational curves. We show that this method produces smooth contours with consistent visibility much more efficiently than the state of the art.
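
    The paper's contribution is a closed-form construction on a piecewise-quadratic surface; as a simpler, standard illustration of the underlying contour condition n(p) . (p - c) = 0 for camera position c, this sketch finds the mesh edges an occluding contour crosses by looking for sign changes of g between edge endpoints.

        import numpy as np

        def contour_edges(verts, normals, edges, camera):
            # g(v) = n(v) . (v - camera); the contour crosses edges where g flips sign
            g = np.einsum('ij,ij->i', normals, verts - camera)
            return edges[g[edges[:, 0]] * g[edges[:, 1]] < 0]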

    Semantics of Pictorial Space


    Surface Shape Perception in Volumetric Stereo Displays

    In complex volume visualization applications, understanding the displayed objects and their spatial relationships is challenging for several reasons. One of the most important obstacles is that these objects can be translucent and can overlap spatially, making it difficult to understand their spatial structures. However, in many applications, for example medical visualization, it is crucial to have an accurate understanding of the spatial relationships among objects. The addition of visual cues has the potential to help human perception in these visualization tasks. Descriptive line elements, in particular, have been found to be effective in conveying shape information in surface-based graphics, as they sparsely cover a geometrical surface while consistently following its geometry. We present two approaches that apply such line elements to a volume rendering process and verify their effectiveness in volume-based graphics. This thesis reviews our progress to date in this area and discusses its effects and limitations. Specifically, it examines the volume renderer implementation that formed the foundation of this research, the design of the pilot study conducted to investigate the effectiveness of this technique, and the results obtained. It further discusses improvements designed to address the issues revealed by the statistical analysis. The improved approach is able to handle visualization targets with general shapes, making it more applicable to real visualization applications involving complex objects.
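
    For context, a minimal front-to-back ray-marching loop of the kind such a volume renderer builds on is sketched below; the transfer function, step size, and termination threshold are assumptions, and the thesis' line elements would modulate the sampled color.

        import numpy as np

        def march_ray(sample, origin, direction, n_steps=256, dt=0.01):
            # sample(p) -> (rgb, alpha): transfer-function lookup at point p
            color = np.zeros(3)
            transmittance = 1.0
            p = np.array(origin, dtype=float)
            for _ in range(n_steps):
                rgb, alpha = sample(p)
                color += transmittance * alpha * np.asarray(rgb)  # accumulate
                transmittance *= 1.0 - alpha
                if transmittance < 1e-3:   # early ray termination
                    break
                p += dt * direction
            return color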

    Physics-based Reconstruction and Animation of Humans

    Creating digital representations of humans is of utmost importance for applications ranging from entertainment (video games, movies) to human-computer interaction and even psychiatric treatment. What makes building credible digital doubles difficult is that the human visual system is very sensitive to the complex expressivity of, and potential anomalies in, body structures and motion. This thesis presents several projects that tackle these problems from two different perspectives: lightweight acquisition and physics-based simulation. It starts by describing a complete pipeline that allows users to reconstruct fully rigged 3D facial avatars from video data captured with a handheld device (e.g., a smartphone). The avatars use a novel two-scale representation composed of blendshapes and dynamic detail maps. They are constructed through an optimization that integrates feature tracking, optical flow, and shape from shading. Continuing along the lines of accessible acquisition systems, we discuss a framework for simultaneous tracking and modeling of articulated human bodies from RGB-D data, and we show how semantic information can be extracted from the scanned body shapes. In the second half of the thesis, we deviate from standard linear reconstruction and animation models and instead focus on physics-based techniques able to incorporate complex phenomena such as dynamics, collision response, and incompressibility of the materials. The first approach we propose assumes that each 3D scan of an actor records the body in a physical steady state, and uses a process called inverse physics to extract a volumetric, physics-ready anatomical model of the actor. By using biologically inspired growth models for the bones, muscles, and fat, our method obtains realistic anatomical reconstructions that can later be animated using external tracking data such as motion capture marker trajectories. This is then extended to a novel physics-based approach for facial reconstruction and animation. We propose a facial animation model that simulates biomechanical muscle contractions in a volumetric head model in order to create the facial expressions seen in the input scans. We then show how this approach opens new avenues for dynamic artistic control, simulation of corrective facial surgery, and interaction with external forces and objects.
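
    The blendshape layer of the avatar's two-scale representation is the standard linear model and can be sketched compactly; the dynamic detail maps are omitted here, and the array shapes are assumptions.

        import numpy as np

        def evaluate_blendshapes(neutral, deltas, weights):
            # neutral: (V, 3) rest face; deltas: (B, V, 3) per-expression
            # vertex offsets; weights: (B,) blending coefficients
            return neutral + np.tensordot(weights, deltas, axes=1)  # v0 + sum_i w_i d_i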

    Perceptually-motivated, interactive rendering and editing of global illumination

    This thesis proposes several new perceptually-motivated techniques to synthesize, edit, and enhance the depiction of three-dimensional virtual scenes. The challenge taken up in this work is to find algorithms that occupy the perceptually economic middle ground between artistic depiction and full physical simulation. First, we present three interactive global illumination rendering approaches that are inspired by perception to efficiently depict important light transport. These methods have in common that they compute global illumination in large and fully dynamic scenes, allowing for light, geometry, and material changes at interactive or real-time rates. Further, this thesis proposes a tool for editing reflections that allows physical laws to be bent to match artistic goals by exploiting perception. Finally, this work contributes a post-processing operator that depicts high-contrast scenes the way artists do, by simulating them as seen through a dynamic virtual human eye in real time.
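
    As a crude, hedged stand-in for that final human-eye operator (the thesis models scattering in the eye itself; this is only the classic bright-pass bloom approximation of veiling glare, with assumed threshold and blur parameters):

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def bloom(hdr, threshold=1.0, sigma=8.0, strength=0.3):
            # hdr: (H, W, 3) linear radiance image
            bright = np.clip(hdr - threshold, 0.0, None)             # highlights only
            veil = gaussian_filter(bright, sigma=(sigma, sigma, 0))  # scatter them
            return hdr + strength * veil                             # add veiling glare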