
    Real-time relief mapping on arbitrary polygonal surfaces

    A survey of real-time crowd rendering

    In this survey we review, classify, and compare existing approaches for real-time crowd rendering. We first give an overview of character animation techniques, as they are closely tied to crowd rendering performance, and then analyze the state of the art in crowd rendering. We discuss different representations for level-of-detail (LoD) rendering of animated characters, including polygon-based, point-based, and image-based techniques, and review different criteria for runtime LoD selection. Besides LoD approaches, we review classic acceleration schemes, such as frustum culling and occlusion culling, and describe how they can be adapted to handle crowds of animated characters. We also discuss acceleration techniques specific to crowd rendering, such as primitive pseudo-instancing, palette skinning, and dynamic key-pose caching, which benefit from current graphics hardware. We then address other factors affecting the performance and realism of crowds, such as lighting, shadowing, clothing, and variability. Finally, we provide an exhaustive comparison of the most relevant approaches in the field.
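
    As an aside for readers unfamiliar with the acceleration schemes named above, the sketch below illustrates matrix-palette skinning on the CPU: each vertex blends a few matrices from a shared bone palette, which is what lets GPU implementations animate many character instances from one uploaded palette of poses. All types and names (Mat4, SkinnedVertex, skinVertex) are hypothetical and not taken from the survey.

```cpp
// Minimal matrix-palette skinning sketch (hypothetical types; not the survey's code).
// Each vertex stores indices into a shared bone-matrix palette plus blend weights,
// so many animated instances can reuse one palette of key poses.
#include <array>
#include <cstdio>
#include <vector>

struct Mat4 {
    float m[16];  // column-major 4x4 matrix
    // Transform a point (w = 1).
    std::array<float, 3> transformPoint(const std::array<float, 3>& p) const {
        std::array<float, 3> r{};
        for (int i = 0; i < 3; ++i)
            r[i] = m[0 + i] * p[0] + m[4 + i] * p[1] + m[8 + i] * p[2] + m[12 + i];
        return r;
    }
};

struct SkinnedVertex {
    std::array<float, 3> position;    // bind-pose position
    std::array<int, 4> boneIndex;     // indices into the bone palette
    std::array<float, 4> boneWeight;  // blend weights, assumed to sum to 1
};

// Blend the palette matrices per vertex: p' = sum_k w_k * (M_k * p).
std::array<float, 3> skinVertex(const SkinnedVertex& v, const std::vector<Mat4>& palette) {
    std::array<float, 3> out{0.0f, 0.0f, 0.0f};
    for (int k = 0; k < 4; ++k) {
        const auto q = palette[v.boneIndex[k]].transformPoint(v.position);
        for (int i = 0; i < 3; ++i) out[i] += v.boneWeight[k] * q[i];
    }
    return out;
}

int main() {
    // Identity palette: the skinned vertex stays at its bind-pose position.
    Mat4 identity{{1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1}};
    std::vector<Mat4> palette{identity, identity};
    SkinnedVertex v{{1.0f, 2.0f, 3.0f}, {0, 1, 0, 0}, {0.5f, 0.5f, 0.0f, 0.0f}};
    const auto p = skinVertex(v, palette);
    std::printf("skinned: %.1f %.1f %.1f\n", p[0], p[1], p[2]);
    return 0;
}
```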

    Single-picture reconstruction and rendering of trees for plausible vegetation synthesis

    State-of-the-art approaches for tree reconstruction either put limiting constraints on the input side (requiring multiple photographs, a scanned point cloud, or intensive user input) or provide a representation only suitable for front views of the tree. In this paper we present a complete pipeline for synthesizing and rendering detailed trees from a single photograph with minimal user effort. Since the overall shape and appearance of each tree is recovered from a single photograph of the tree crown, artists can benefit from georeferenced images to populate landscapes with native tree species. A key element of our approach is a compact representation of dense tree crowns through a radial distance map. Our first contribution is an automatic algorithm for generating such representations from a single exemplar image of a tree. We create a rough estimate of the crown shape by solving a thin-plate energy minimization problem, and then add detail through a simplified shape-from-shading approach. The use of seamless texture synthesis results in an image-based representation that can be rendered from arbitrary view directions at different levels of detail. Distant trees benefit from an output-sensitive algorithm inspired by relief mapping. For close-up trees we use a billboard cloud where leaflets are distributed inside the crown shape through a space colonization algorithm. In both cases our representation ensures efficient preservation of the crown shape. Major benefits of our approach are that it recovers the overall shape from a single tree image, requires no tree modeling knowledge and only minimal authoring effort, and yields an image-based representation that is easy to compress and thus suitable for network streaming.
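
    To make the radial distance map idea concrete, here is a minimal sketch of how such a crown representation might be stored and queried; the grid layout, names, and nearest-neighbour lookup are assumptions made for illustration, not the paper's actual data structure.

```cpp
// Sketch of a radial distance map for a tree crown (hypothetical layout, not the paper's code).
// The crown surface is encoded as a distance from the crown centre along each direction,
// sampled on a latitude/longitude grid; a point is inside the crown if its distance from
// the centre is below the stored radius for its direction.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

constexpr float kPi = 3.14159265f;

struct RadialDistanceMap {
    int rows, cols;             // samples in polar (theta) and azimuthal (phi) angle
    std::vector<float> radius;  // rows * cols radii, row-major

    // Nearest-neighbour lookup of the crown radius along a unit direction.
    float radiusAlong(float dx, float dy, float dz) const {
        const float theta = std::acos(std::fmax(-1.0f, std::fmin(1.0f, dy)));  // [0, pi], y up
        const float phi = std::atan2(dz, dx) + kPi;                            // [0, 2*pi]
        const int r = std::min(rows - 1, int(theta / kPi * rows));
        const int c = std::min(cols - 1, int(phi / (2.0f * kPi) * cols));
        return radius[r * cols + c];
    }

    // Test whether a point (relative to the crown centre) lies inside the crown.
    bool contains(float x, float y, float z) const {
        const float d = std::sqrt(x * x + y * y + z * z);
        if (d < 1e-6f) return true;
        return d <= radiusAlong(x / d, y / d, z / d);
    }
};

int main() {
    // A spherical crown of radius 2: every direction stores the same distance.
    RadialDistanceMap crown{32, 64, std::vector<float>(32 * 64, 2.0f)};
    std::printf("inside:  %d\n", crown.contains(0.5f, 1.0f, 0.0f));  // 1
    std::printf("outside: %d\n", crown.contains(3.0f, 0.0f, 0.0f));  // 0
    return 0;
}
```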

    Relief impostor selection for large scale urban rendering

    Image-based rendering techniques are often the preferred choice to accelerate the exploration of massive outdoor models and complex human-made structures. In the last few years, relief mapping has been shown to be extremely useful as a compact representation of highly detailed 3D models. In this paper we describe a rendering system for interactive, high-quality visualization of large-scale urban models through a hierarchical collection of properly oriented relief-mapped polygons. At the heart of our approach is a visibility-aware algorithm for the selection of the set of viewing planes supporting the relief maps. Our selection algorithm optimizes both the sampling density and the coverage of the relief maps, and its running time is largely independent of the underlying geometry. We show that our approach is suitable for navigating through large-scale urban models at interactive rates while preserving both geometric and appearance details.
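
    For context, the sketch below shows the standard relief-mapping intersection search (a linear search followed by binary refinement) over a heightfield. It is generic background for the relief-mapped polygons discussed above, not the paper's plane-selection algorithm, and all names and constants are hypothetical.

```cpp
// Generic relief-mapping ray march over a heightfield (illustrative only; the paper's
// contribution is the visibility-aware plane selection, not this marching loop).
// Heightfield values are depths below the polygon surface, in [0, 1].
#include <algorithm>
#include <cstdio>
#include <vector>

struct HeightField {
    int w, h;
    std::vector<float> depth;           // depth below the polygon, in [0, 1]
    float at(float u, float v) const {  // clamped nearest-neighbour lookup
        const int x = std::min(w - 1, std::max(0, int(u * w)));
        const int y = std::min(h - 1, std::max(0, int(v * h)));
        return depth[y * w + x];
    }
};

// Intersect a view ray with the heightfield under a relief-mapped polygon. The ray starts
// at texture coordinate (u, v) on the polygon (depth 0) and advances with tangent-space
// direction (du, dv, dd), dd > 0 pointing into the surface.
bool reliefIntersect(const HeightField& hf, float u, float v,
                     float du, float dv, float dd,
                     float* hitU, float* hitV, float* hitDepth) {
    const int kLinearSteps = 32, kBinarySteps = 6;
    float t0 = 0.0f, t1 = 0.0f;
    bool found = false;
    for (int i = 1; i <= kLinearSteps && !found; ++i) {  // linear search for the first hit
        const float t = float(i) / kLinearSteps;
        if (t * dd >= hf.at(u + t * du, v + t * dv)) { t1 = t; found = true; }
        else t0 = t;
    }
    if (!found) return false;  // ray exits the relief volume without hitting the surface
    for (int i = 0; i < kBinarySteps; ++i) {  // binary refinement between the last two samples
        const float tm = 0.5f * (t0 + t1);
        if (tm * dd >= hf.at(u + tm * du, v + tm * dv)) t1 = tm; else t0 = tm;
    }
    *hitU = u + t1 * du; *hitV = v + t1 * dv; *hitDepth = t1 * dd;
    return true;
}

int main() {
    HeightField hf{64, 64, std::vector<float>(64 * 64, 0.25f)};  // flat relief at depth 0.25
    float u, v, d;
    if (reliefIntersect(hf, 0.1f, 0.1f, 0.5f, 0.0f, 1.0f, &u, &v, &d))
        std::printf("hit at (%.3f, %.3f), depth %.3f\n", u, v, d);  // depth near 0.25
    return 0;
}
```

    The linear search bounds the per-ray cost, while the binary refinement recovers sub-step accuracy; this combination is what makes relief-mapped impostors cheap enough for large urban scenes.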

    Just noticeable difference survey of computer generated imagery using normal maps

    Normal maps are widely used as a resource-efficient means of simulating detailed topology on 3D surfaces in the gaming, simulation, and film industries. However, as surface mesh density increases, it is unknown at what level of density further increases are no longer perceivable, and whether normal maps significantly affect this threshold. This study examined at what point participants were unable to discern differences between one level of mesh density and another, using an adapted staircase model. Participants identified this threshold for five different organic character models. The averages of these thresholds were compared against the results of a control group, which observed the same models without normal maps. The study found that the average threshold for discerning differences in level of detail occurred in the 3,000 to 14,000 polygon range for normal-mapped models, and in the 240,000 to 950,000 polygon range for the control group. This analysis suggests that normal maps have a significant impact on the viewer's ability to discern differences in detail, and that developing organic character models beyond the range of 3,000 to 14,000 polygons is unnecessary when normal maps are used.
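
    The "adapted staircase model" is a psychophysical procedure; the sketch below shows a generic up/down staircase with threshold estimation from reversal points, as an assumed illustration of the idea rather than the authors' exact protocol. All names and constants are hypothetical.

```cpp
// Minimal sketch of a generic up/down adaptive staircase of the kind used in
// just-noticeable-difference studies (illustrative only; not the authors' exact protocol).
// The stimulus level (here, a mesh-density step) moves down after a correct discrimination
// and up after an incorrect one; the threshold is estimated from the reversal points.
#include <algorithm>
#include <cstdio>
#include <vector>

class Staircase {
public:
    Staircase(int startLevel, int minLevel, int maxLevel)
        : level_(startLevel), minLevel_(minLevel), maxLevel_(maxLevel) {}

    int level() const { return level_; }

    // Record one trial: 'correct' means the participant discerned the density difference.
    void report(bool correct) {
        const int direction = correct ? -1 : +1;  // harder after success, easier after failure
        if (lastDirection_ != 0 && direction != lastDirection_)
            reversals_.push_back(level_);         // a reversal: direction of movement flipped
        lastDirection_ = direction;
        level_ = std::max(minLevel_, std::min(maxLevel_, level_ + direction));
    }

    // Threshold estimate: mean stimulus level over the recorded reversals.
    double threshold() const {
        if (reversals_.empty()) return level_;
        double sum = 0.0;
        for (const int r : reversals_) sum += r;
        return sum / reversals_.size();
    }

private:
    int level_, minLevel_, maxLevel_;
    int lastDirection_ = 0;
    std::vector<int> reversals_;
};

int main() {
    Staircase s(/*startLevel=*/10, /*minLevel=*/0, /*maxLevel=*/20);
    const bool responses[] = {true, true, false, true, false, false, true};
    for (const bool r : responses) s.report(r);
    std::printf("estimated threshold level: %.2f\n", s.threshold());
    return 0;
}
```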

    Photorealistic Texturing for Modern Video Games

    Simulating realism has become a standard for many games in the industry. While real-time rendering requires considerable rendering resources, texturing defines the physical parameters of surfaces at a much lower computational cost. The objective of this thesis was to study the evolution of texture mapping and define a workflow for approaching photorealism with modern tools for video game production. All the textures were created using Agisoft Photoscan, Substance Designer & Painter, Adobe Photoshop, and Pixologic ZBrush. Combining theory with practical approaches, the thesis explores how textures are used and which applications can help to build them for better results. Each workflow is introduced with the main points of its purpose as the author's suggestion, and can serve as a guideline for companies such as Ringtail Studios OÜ. In conclusion, the thesis summarizes the resulting textures and their workflows, which the author established with the aim of introducing methods for material production.

    Defining the 3D geometry of thin shale units in the Sleipner reservoir using seismic attributes

    Acknowledgments: The seismic interpretation and image processing were carried out in the SeisLab facility at the University of Aberdeen (sponsored by BG, BP, and Chevron). Seismic imaging analysis was performed using GeoTeric (ffA), and analysis of seismic amplitudes was performed in Petrel 2015 (Schlumberger). We would like to thank the NDDC (RG11766-10) for funding this research, and Statoil for the release of the Sleipner field seismic dataset used in this research paper, as well as Anne-Kari Furre and her colleagues for their assistance. We also thank the editor, Alejandro Escalona, and the two anonymous reviewers for their constructive and in-depth comments that improved the paper.

    Hybrid Rugosity Mesostructures (HRMs) for fast and accurate rendering of fine haptic detail

    The haptic rendering of surface mesostructure (fine relief features) in dense triangle meshes requires special structures, equipment, and high sampling rates for detailed perception of rugged models. Low-cost approaches render haptic texture at the expense of fidelity of perception. We propose a faster method for surface haptic rendering using image-based Hybrid Rugosity Mesostructures (HRMs), paired maps of per-face heightfield displacements and normal maps, which are layered on top of a heavily decimated mesh, effectively adding greater surface detail than is actually present in the geometry. The haptic probe's force response algorithm is modulated using the blended HRM coat to render dense surface features at much lower cost. The proposed method solves typical problems at edge crossings, concave foldings, and texture transitions. To validate the approach, a usability testbed framework was built to measure and compare experimental results of haptic rendering approaches on a common set of specially devised meshes, HRMs, and performance tests. Results of user testing evaluations show the effectiveness of the proposed HRM technique, rendering accurate 3D surface detail at high sampling rates and yielding useful modeling and perception thresholds for this technique.
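
    As a rough illustration of how a height/normal-map pair can modulate a haptic force response on top of a coarse face, here is a minimal penalty-force sketch; the paper's HRM blending and its edge-crossing and fold handling are not reproduced, and all names and constants are assumptions.

```cpp
// Generic penalty-based sketch of a height/normal-map modulated haptic force on a coarse
// triangle (illustrative only; not the paper's HRM algorithm). Names and constants are
// assumptions made for the example.
#include <cstdio>

struct Vec3 { float x, y, z; };

// Height displacement and perturbed normal at a texture coordinate of the coarse face.
struct MesoSample {
    float height;  // displacement above the coarse face, in object units
    Vec3 normal;   // unit normal from the normal map (tangent space, z = face normal)
};

// Hypothetical lookup into the height/normal maps; here a fixed bump for demonstration.
MesoSample sampleMeso(float u, float v) {
    (void)u; (void)v;
    return {0.02f, {0.0f, 0.0f, 1.0f}};
}

// Penalty force: if the probe sits below the displaced surface, push it out along the
// map normal with a spring of stiffness k (force per unit penetration).
Vec3 hapticForce(float probeHeightAboveFace, float u, float v, float k) {
    const MesoSample s = sampleMeso(u, v);
    const float penetration = s.height - probeHeightAboveFace;
    if (penetration <= 0.0f) return {0.0f, 0.0f, 0.0f};  // no contact with the mesostructure
    return {k * penetration * s.normal.x,
            k * penetration * s.normal.y,
            k * penetration * s.normal.z};
}

int main() {
    const Vec3 f = hapticForce(/*probeHeightAboveFace=*/0.005f, 0.5f, 0.5f, /*k=*/800.0f);
    std::printf("force: (%.2f, %.2f, %.2f)\n", f.x, f.y, f.z);  // pushes out along +z
    return 0;
}
```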

    Relief mapping on cubic cell complexes

    In this paper we present an algorithm for parameterizing arbitrary surfaces onto a quadrilateral domain defined by a collection of cubic cells. The parameterization inside each cell is implicit and thus requires storing no texture coordinates. Based upon this parameterization, we propose a unified representation of the geometric and appearance information of complex models. The representation consists of a set of cubic cells (providing a coarse representation of the object) together with a collection of distance maps (encoding fine geometric detail inside each cell). Our new representation has similar uses to geometry images, but it requires storing a single distance value per texel instead of full vertex coordinates. When combined with color and normal maps, our representation can be used to render an approximation of the model through an output-sensitive relief mapping algorithm, making it especially amenable to GPU ray tracing.
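
    Since the representation stores a distance value per texel, rendering amounts to marching rays through a distance field. The sketch below shows plain sphere tracing against an analytic stand-in distance function, as an assumed illustration rather than the paper's output-sensitive relief-mapping algorithm or its cell parameterization.

```cpp
// Minimal sphere-tracing sketch over a distance field, in the spirit of rendering one cubic
// cell of the representation above (illustrative only). The distance field here is analytic
// (a sphere), standing in for the per-texel distances a cell would actually store.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Stand-in distance field: distance to a sphere of radius 0.3 centred in the unit cell.
float cellDistance(const Vec3& p) {
    const float dx = p.x - 0.5f, dy = p.y - 0.5f, dz = p.z - 0.5f;
    return std::sqrt(dx * dx + dy * dy + dz * dz) - 0.3f;
}

// Sphere tracing: repeatedly step by the stored distance, which is a safe step size,
// until the surface is reached or the ray leaves the cell.
bool sphereTrace(Vec3 origin, Vec3 dir, float maxT, float* hitT) {
    float t = 0.0f;
    for (int i = 0; i < 128 && t < maxT; ++i) {
        const Vec3 p{origin.x + t * dir.x, origin.y + t * dir.y, origin.z + t * dir.z};
        const float d = cellDistance(p);
        if (d < 1e-4f) { *hitT = t; return true; }  // close enough: report the hit
        t += d;
    }
    return false;  // ray left the cell without hitting the encoded surface
}

int main() {
    float t;
    // Ray entering the unit cell through its centre line, travelling along +x.
    if (sphereTrace({0.0f, 0.5f, 0.5f}, {1.0f, 0.0f, 0.0f}, 1.0f, &t))
        std::printf("hit at t = %.3f\n", t);  // expected near 0.2
    return 0;
}
```

    With a stored distance map the step size comes from a texel fetch instead of an analytic function, which is what makes the representation well suited to GPU ray tracing.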