
    Memory Conserving Rendering Method for Hair/fur Systems in Computer Graphics

    In the highly CPU- and memory-intensive field of photo-realistic computer graphics, various techniques are employed to conserve resources. One group of such optimization methods is dedicated to optimizing the representation of hair systems, grass systems, or any group of objects that can be treated as a generalized hair system. A classical method of computing hair systems is to represent each hair as a spline in memory and then compute intersections against each of them. This method gives good results but usually consumes large amounts of memory; moreover, visually doubling the density of hair quadruples memory consumption. Even when gigabytes of memory are available, a realistic hair scene may overwhelm available memory, which can lead to an application crash or, at the least, to an I/O bottleneck and increased rendering time. Another method is to compute a hair system procedurally inside a specified volume. This produces a small memory footprint but makes animation difficult, because individual hairs within the volume are not controllable. In this work we propose a hybrid approach, in which a single hair particle represents a cylindrical volume and multiple hair fibers are computed on-the-fly inside it. This approach produces a constant memory footprint for that cylindrical volume, regardless of how many individual hairs are computed, and it allows individual hairs to retain the behavior of that volume. As a result, the proposed approach provides a significant reduction in memory footprint while increasing the number of hairs being computed.
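    The hybrid idea described in the abstract can be sketched as follows: one guide particle is stored, and every fiber inside its cylindrical volume is re-derived deterministically from its index, so fibers are computed on demand and never kept in memory. This is an illustrative sketch under assumed details, not the paper's implementation; the function `fiber_points`, its parameters, and the straight-cylinder simplification are all hypothetical.

```python
import math
import random

def fiber_points(guide, fiber_index, radius, seed=0):
    """Generate one hair fiber inside the cylindrical volume around a
    guide curve, deterministically from (seed, fiber_index), so the
    fiber never needs to be stored.

    `guide` is a list of (x, y, z) points along the stored guide hair;
    the fiber is the guide offset by a random displacement within
    `radius` of the guide's axis (assumed to be roughly z-aligned here).
    """
    # Integer seed derived from (seed, fiber_index): same index -> same fiber.
    rng = random.Random(seed ^ (fiber_index * 0x9E3779B9))
    angle = rng.uniform(0.0, 2.0 * math.pi)
    dist = radius * math.sqrt(rng.random())   # uniform over the disc
    dx, dy = dist * math.cos(angle), dist * math.sin(angle)
    return [(x + dx, y + dy, z) for (x, y, z) in guide]

# Usage: 10,000 fibers from one stored guide; memory stays O(len(guide))
# because each fiber is computed, intersected/shaded, then discarded.
guide = [(0.0, 0.0, z * 0.1) for z in range(10)]
for i in range(10000):
    pts = fiber_points(guide, i, radius=0.5)
```

    Because each fiber is a pure function of the guide and its index, animating the guide automatically carries all fibers with it, which is how the volume's behavior is retained by individual hairs.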

    Image-Based Approaches to Hair Modeling

    Hair is a relevant characteristic of virtual characters; therefore the modeling of plausible facial hair and hairstyles is an essential step in the generation of computer-generated (CG) avatars. However, the inherent geometric complexity of hair, together with the huge number of filaments on an average human head, makes the task of modeling hairstyles a very challenging one. To date this is commonly a manual process which requires artist skills or very specialized and costly acquisition software. In this work we present an image-based approach to model facial hair (beard and eyebrows) and (head) hairstyles. Since facial hair is usually much shorter than the average head hair, two different methods are presented, adapted to the characteristics of the hair to be modeled. Facial hair is modeled using data extracted from facial texture images, and missing information is inferred by means of a database-driven prior model. Our hairstyle reconstruction technique employs images of the hair to be modeled taken with a thermal camera. The major advantage of our thermal image-based method over conventional image-based techniques lies in the fact that during data capture the hairstyle is "lit from the inside": the thermal camera captures heat irradiated by the head and actively re-emitted by the hair filaments almost isotropically. Following this approach we can avoid several issues of conventional image-based techniques, such as shadowing or anisotropy in reflectance. The presented technique requires minimal user interaction and a simple acquisition setup. Several challenging examples demonstrate the potential of the proposed approach.