
    Photo-Realistic Rendering of Fiber Assemblies

    In this thesis we introduce a novel uniform formalism for light scattering from filaments, the Bidirectional Fiber Scattering Distribution Function (BFSDF). Similar to the role of the Bidirectional Surface Scattering Reflectance Distribution Function (BSSRDF) for surfaces, the BFSDF can be seen as a general approach for describing light scattering from filaments. Based on this theoretical foundation, approximations for various levels of abstraction are derived, allowing for efficient and accurate rendering of fiber assemblies, such as hair or fur. In this context, novel rendering techniques accounting for all prominent effects of local and global illumination are presented. Moreover, physically-based analytical BFSDF models for human hair and other kinds of fibers are derived. Finally, using the model for human hair we make a first step towards image-based BFSDF reconstruction, where optical properties of a single strand are estimated from "synthetic photographs" (renderings) of a full hairstyle.
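    As a sketch of the BSSRDF analogy drawn above (not quoted from the thesis itself), the BFSDF can be viewed as the kernel relating radiance incident on a surface enclosing the fiber to radiance leaving it; the enclosing-surface parameterization used here is an assumption:

$$
L_o(x_o, \omega_o) \;=\; \int_{A}\!\int_{\Omega} f_{\mathrm{BFSDF}}(x_i, \omega_i, x_o, \omega_o)\, L_i(x_i, \omega_i)\, |\cos\theta_i| \,\mathrm{d}\omega_i \,\mathrm{d}A(x_i)
$$

    where A is a surface enclosing the fiber (e.g., its minimum enclosing cylinder) and θ_i is the angle between ω_i and the surface normal at x_i.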

    Multilayered visuo-haptic hair simulation

    Over the last fifteen years, research on hair simulation has made great advances in the domains of modeling, animation and rendering, and is now moving towards more innovative interaction modalities. The combination of visual and haptic interaction within a virtual hairstyling simulation framework represents an important concept evolving in this direction. Our visuo-haptic hair interaction framework consists of two layers, which handle the response to the user's interaction at a local level (around the contact area) and at a global level (on the full hairstyle). Two distinct simulation models compute individual and collective hair behavior. Our multilayered approach can be used to efficiently address the specific requirements of haptics and vision. Haptic interaction with both models has been tested with virtual hairstyling tools.
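    A schematic sketch of how such a two-layer loop might be organized; all names and the contact-radius heuristic below are invented for illustration and are not the authors' API:

```python
# Hypothetical two-layer visuo-haptic loop: a detailed local model simulates
# individual strands near the haptic tool, while a coarser global model drives
# the collective behavior of the full hairstyle.
class VisuoHapticHair:
    def __init__(self, local_model, global_model, contact_radius=0.05):
        self.local_model = local_model      # individual hair behavior
        self.global_model = global_model    # collective hair behavior
        self.contact_radius = contact_radius

    def step(self, tool_pose, dt):
        # Local layer: strands around the contact area, at haptic update rates.
        near = self.global_model.strands_near(tool_pose, self.contact_radius)
        feedback = self.local_model.simulate(near, tool_pose, dt)
        # Global layer: the rest of the hairstyle, at visual update rates.
        self.global_model.simulate(dt)
        return feedback  # force sent back to the haptic device
```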

    Non-photorealistic rendering: a critical examination and proposed system.

    In the first part of the program, the emergent field of Non-Photorealistic Rendering is explored from a cultural perspective. This is to establish a clear understanding of what Non-Photorealistic Rendering (NPR) ought to be in its mature form, in order to provide goals and an overall infrastructure for future development. This thesis claims that unless we understand and clarify NPR's relationship with other media (photography, photorealistic computer graphics and traditional media), we will continue to manufacture "new solutions" to computer-based imaging which are confused and naive in their goals. Such solutions will be rejected by the art and design community and generally condemned as novelties of little cultural worth (i.e. they will not sell). This is achieved by critically reviewing published systems that are naively described as Non-Photorealistic or "painterly" systems. Current practices and techniques are criticised in terms of their low ability to articulate meaning in images; solutions to this problem are given. A further argument claims that NPR, while being similar to traditional "natural media" techniques in certain aspects, is fundamentally different in other ways. This similarity has led NPR to be sometimes proposed as "painting simulation", something it can never be. Methods for avoiding this position are proposed. The similarities and differences to painting and drawing are presented, and NPR's relationship to its other counterpart, Photorealistic Rendering (PR), is then delineated. It is shown that NPR is paradigmatically different from other forms of representation; it is not an "effect", but rather something basically different. The benefits of NPR in its mature form are discussed in the context of Architectural Representation and Design in general. This is done in conjunction with consultations with designers and architects. From this consultation, a "wish-list" of capabilities is compiled by way of a requirements capture for a proposed system. A series of computer-based experiments resulting in the systems "Expressive Marks" and "Magic Painter" are carried out; these practical experiments add further understanding to the problems of NPR. The exploration concludes with a prototype system, "Piranesi", which is submitted as a good overall solution to the problem of NPR. In support of this written thesis are:
    • The Expressive Marks system
    • The Magic Painter system
    • The Piranesi system (which includes the EPixel and Sketcher systems)
    • A large portfolio of images generated throughout the exploration

    Analysis domain model for shared virtual environments

    The field of shared virtual environments, which also encompasses online games and social 3D environments, has a system landscape consisting of multiple solutions that share great functional overlap. However, there is little system interoperability between the different solutions. A shared virtual environment has an associated problem domain that is highly complex, raising difficult challenges to the development process, starting with the architectural design of the underlying system. This paper has two main contributions. The first contribution is a broad domain analysis of shared virtual environments, which enables developers to have a better understanding of the whole rather than the part(s). The second contribution is a reference domain model for discussing and describing solutions: the Analysis Domain Model.

    Visual wetness perception based on image color statistics

    Color vision provides humans and animals with the abilities to discriminate colors based on the wavelength composition of light and to determine the location and identity of objects of interest in cluttered scenes (e.g., ripe fruit among foliage). However, we argue that color vision can inform us about much more than color alone. Since a trichromatic image carries more information about the optical properties of a scene than a monochromatic image does, color can help us recognize complex material qualities. Here we show that human vision uses color statistics of an image for the perception of an ecologically important surface condition (i.e., wetness). Psychophysical experiments showed that overall enhancement of chromatic saturation, combined with a luminance tone change that increases the darkness and glossiness of the image, tended to make dry scenes look wetter. Theoretical analysis along with image analysis of real objects indicated that our image transformation, which we call the wetness enhancing transformation, is consistent with actual optical changes produced by surface wetting. Furthermore, we found that the wetness enhancing transformation operator was more effective for images with many colors (large hue entropy) than for those with few colors (small hue entropy). The hue entropy may be used to separate surface wetness from other surface states having similar optical properties. While surface wetness and surface color might seem to be independent, there are higher order color statistics that can influence wetness judgments, in accord with ecological statistics. The present findings indicate that the visual system uses color image statistics in an elegant way to help estimate the complex physical status of a scene.
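    As an illustration of the two measurements described above, here is a minimal sketch of a saturation-plus-tone "wetness enhancing transformation" and a hue-entropy estimate. The HSV formulation, the specific gains, and the bin count are assumptions for clarity, not the paper's exact operator:

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def wetness_enhance(rgb, sat_gain=1.6, gamma=1.8):
    """Boost chromatic saturation and darken the luminance tone.

    rgb: float image in [0, 1] with shape (H, W, 3).
    """
    hsv = rgb_to_hsv(rgb)
    hsv[..., 1] = np.clip(hsv[..., 1] * sat_gain, 0.0, 1.0)  # saturate colors
    hsv[..., 2] = hsv[..., 2] ** gamma                       # darken the tone
    return hsv_to_rgb(hsv)

def hue_entropy(rgb, bins=36):
    """Shannon entropy (bits) of the hue histogram; larger = more colors."""
    hue = rgb_to_hsv(rgb)[..., 0].ravel()
    counts, _ = np.histogram(hue, bins=bins, range=(0.0, 1.0))
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```

    Per the abstract, the transformation should make a scene with large hue entropy look wetter, while images with small hue entropy respond less strongly.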

    Visual Prototyping of Cloth

    Realistic visualization of cloth has many applications in computer graphics. An ongoing research problem is how to best represent and capture appearance models of cloth, especially when considering computer aided design of cloth. Previous methods can be used to produce highly realistic images; however, possibilities for cloth editing are either restricted or require the measurement of large material databases to capture all variations of cloth samples. We propose a pipeline for designing the appearance of cloth directly based on those elements that can be changed within the production process. These are optical properties of fibers, geometrical properties of yarns and compositional elements such as weave patterns. We introduce a geometric yarn model, integrating state-of-the-art textile research. We further present an approach to reverse engineer cloth and estimate parameters for a procedural cloth model from single images. This includes the automatic estimation of yarn paths, yarn widths, their variation and a weave pattern. We demonstrate that we are able to match the appearance of original cloth samples in an input photograph for several examples. Parameters of our model are fully editable, enabling intuitive appearance design. Unfortunately, such explicit fiber-based models can only be used to render small cloth samples, due to large storage requirements. Recently, bidirectional texture functions (BTFs) have become popular for efficient photo-realistic rendering of materials. We present a rendering approach combining the strength of a procedural model of micro-geometry with the efficiency of BTFs. We propose a method for the computation of synthetic BTFs using Monte Carlo path tracing of micro-geometry. We observe that BTFs usually consist of many similar apparent bidirectional reflectance distribution functions (ABRDFs). By exploiting structural self-similarity, we can reduce rendering times by one order of magnitude. This is done in a process we call non-local image reconstruction, which has been inspired by non-local means filtering. Our results indicate that synthesizing BTFs is highly practical and may currently only take a few minutes for small BTFs. We finally propose a novel and general approach to physically accurate rendering of large cloth samples. By using a statistical volumetric model, approximating the distribution of yarn fibers, a prohibitively costly, explicit geometric representation is avoided. As a result, accurate rendering of even large pieces of fabrics becomes practical without sacrificing much generality compared to fiber-based techniques.
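    To make the self-similarity idea concrete, here is a sketch that deliberately substitutes hard k-means clustering for the paper's non-local-means-style weighting: path-trace one ABRDF per cluster of similar texels and reuse it for all members. All names are invented for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans

def synthesize_btf(texel_features, render_abrdf, n_clusters=64):
    """texel_features: (N, D) cheap per-texel descriptors of the micro-geometry.
    render_abrdf: callable(texel_index) -> (K,) path-traced ABRDF vector.
    Returns an (N, K) array holding one ABRDF per texel.
    """
    labels = KMeans(n_clusters=n_clusters, n_init=4).fit_predict(texel_features)
    # Expensive step: Monte Carlo path tracing, once per cluster only.
    reps = {c: render_abrdf(np.flatnonzero(labels == c)[0])
            for c in range(n_clusters)}
    # Cheap step: reuse each cluster's representative for all similar texels.
    return np.stack([reps[c] for c in labels])
```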

    Realistic Hair Simulation: Animation and Rendering

    The last five years have seen a profusion of innovative solutions to one of the most challenging tasks in character synthesis: hair simulation. This class covers both recent and novel research ideas in hair animation and rendering, and presents time-tested industrial practices that have resulted in spectacular imagery.

    Final Report to NSF of the Standards for Facial Animation Workshop

    The human face is an important and complex communication channel. It is a very familiar and sensitive object of human perception. The facial animation field has grown greatly in the past few years as fast computer graphics workstations have made the modeling and real-time animation of hundreds of thousands of polygons affordable and almost commonplace. Many applications have been developed, such as teleconferencing, surgery, information assistance systems, games, and entertainment. To solve these different problems, different approaches for both animation control and modeling have been developed.

    Appearance Modeling of Living Human Tissues

    This is the peer reviewed version of the following article: Nunes, A.L.P., Maciel, A., Meyer, G.W., John, N.W., Baranoski, G.V.G., & Walter, M. (2019). Appearance Modeling of Living Human Tissues. Computer Graphics Forum, published in final form at https://doi.org/10.1111/cgf.13604. This article may be used for non-commercial purposes in accordance with the Wiley Terms and Conditions for Self-Archiving.

    The visual fidelity of realistic renderings in Computer Graphics depends fundamentally upon how we model the appearance of objects, which results from the interaction between light and matter reaching the eye. In this paper, we survey the research addressing appearance modeling of living human tissue. Among the many classes of natural materials already researched in Computer Graphics, living human tissues such as blood and skin have recently seen an increase in attention from graphics research. There is already an incipient but substantial body of literature on this topic, but it has lacked a structured review such as the one presented here. We introduce a classification for the approaches, using the four types of human tissues as classifiers. We show a growing trend of solutions that use first principles from Physics and Biology as the fundamental knowledge upon which the models are built. The organic quality of visual results provided by these biophysical approaches is mainly determined by the optical properties of the biophysical components interacting with light. Beyond picture making, these models can be used in predictive simulations, with the potential for impact in many other areas.

    Motion-I2V: Consistent and Controllable Image-to-Video Generation with Explicit Motion Modeling

    We introduce Motion-I2V, a novel framework for consistent and controllable image-to-video generation (I2V). In contrast to previous methods that directly learn the complicated image-to-video mapping, Motion-I2V factorizes I2V into two stages with explicit motion modeling. For the first stage, we propose a diffusion-based motion field predictor, which focuses on deducing the trajectories of the reference image's pixels. For the second stage, we propose motion-augmented temporal attention to enhance the limited 1-D temporal attention in video latent diffusion models. This module can effectively propagate the reference image's features to synthesized frames with the guidance of predicted trajectories from the first stage. Compared with existing methods, Motion-I2V can generate more consistent videos even in the presence of large motion and viewpoint variation. By training a sparse trajectory ControlNet for the first stage, Motion-I2V lets users precisely control motion trajectories and motion regions with sparse trajectory and region annotations. This offers more controllability of the I2V process than relying solely on textual instructions. Additionally, Motion-I2V's second stage naturally supports zero-shot video-to-video translation. Both qualitative and quantitative comparisons demonstrate the advantages of Motion-I2V over prior approaches in consistent and controllable image-to-video generation. Please see our project page at https://xiaoyushi97.github.io/Motion-I2V/.
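    A structural sketch of the two-stage pipeline as described in the abstract; every class and argument name here is hypothetical and not the project's actual API (see the project page for the real code):

```python
import torch

class MotionI2VSketch(torch.nn.Module):
    """Hypothetical two-stage image-to-video pipeline."""

    def __init__(self, motion_predictor, video_diffusion):
        super().__init__()
        self.motion_predictor = motion_predictor  # stage 1: diffusion-based motion fields
        self.video_diffusion = video_diffusion    # stage 2: latent video diffusion

    def forward(self, image, prompt, num_frames=16, sparse_trajectories=None):
        # Stage 1: predict per-pixel trajectories of the reference image,
        # optionally conditioned on sparse user strokes (ControlNet-style).
        motion = self.motion_predictor(image, prompt, num_frames,
                                       control=sparse_trajectories)
        # Stage 2: motion-augmented temporal attention propagates reference
        # features along the predicted trajectories instead of relying on
        # plain 1-D temporal attention alone.
        return self.video_diffusion(image, prompt, guidance=motion)
```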