187 research outputs found

    A survey of real-time crowd rendering

    Get PDF
    In this survey we review, classify, and compare existing approaches for real-time crowd rendering. We first give an overview of character animation techniques, as they are closely tied to crowd rendering performance, and then analyze the state of the art in crowd rendering. We discuss different representations for level-of-detail (LoD) rendering of animated characters, including polygon-based, point-based, and image-based techniques, and review different criteria for runtime LoD selection. Besides LoD approaches, we review classic acceleration schemes, such as frustum culling and occlusion culling, and describe how they can be adapted to handle crowds of animated characters. We also discuss acceleration techniques specific to crowd rendering, such as primitive pseudo-instancing, palette skinning, and dynamic key-pose caching, which benefit from current graphics hardware, and address other factors affecting the performance and realism of crowds, such as lighting, shadowing, clothing, and variability. Finally, we provide an exhaustive comparison of the most relevant approaches in the field.
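    As a rough illustration of the distance-based criteria for runtime LoD selection that the survey covers, here is a minimal sketch; the thresholds and representation names are illustrative assumptions, not taken from any surveyed system.

```python
# Minimal sketch of distance-based LoD selection for crowd characters.
# Thresholds and representation names are illustrative assumptions.
import math

LOD_LEVELS = [
    (15.0, "full_mesh"),        # close-up: detailed animated polygonal mesh
    (50.0, "simplified_mesh"),  # mid range: reduced polygon count
    (math.inf, "impostor"),     # far away: image-based impostor
]

def select_lod(character_pos, camera_pos):
    """Pick a representation from the camera-to-character distance."""
    dist = math.dist(character_pos, camera_pos)
    for max_dist, representation in LOD_LEVELS:
        if dist <= max_dist:
            return representation

# Example: three agents at increasing distance from the camera
for x in (5.0, 30.0, 200.0):
    print(x, select_lod((x, 0.0, 0.0), (0.0, 0.0, 0.0)))
```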

    Using image morphing for memory-efficient impostor rendering on GPU

    Get PDF
    Real-time rendering of large animated crowds consisting of thousands of virtual humans is important for several applications, including simulations, games, and interactive walkthroughs, but cannot be performed with complex polygonal models at interactive frame rates. For that reason, several methods have been proposed that use large numbers of pre-computed image-based representations, called impostors. These methods take advantage of programmable graphics hardware to compensate for the computational expense while maintaining visual fidelity, so the number of different virtual humans that can be rendered in real time is no longer restricted by the available computational power but by the texture memory consumed by the variety and discretization of their animations. In this work, we propose an alternative method that reduces memory consumption by generating compelling intermediate textures using image-morphing techniques. To demonstrate the preserved perceptual quality of animations in which half of the key-frames were rendered using the proposed methodology, we implemented the system on the graphics processing unit and obtained promising results at interactive frame rates.
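    As a greatly simplified stand-in for the idea of synthesizing in-between impostor textures, the sketch below cross-dissolves two stored key-frame textures; a full image morph would also warp the two images along feature correspondences. Array shapes and function names are assumptions for illustration.

```python
# Simplified stand-in for generating intermediate impostor frames from two
# stored key-frame textures, so fewer key-frames need to be kept in texture
# memory. A real image morph would also warp along feature correspondences;
# this sketch only cross-dissolves the two images.
import numpy as np

def intermediate_frame(key_a: np.ndarray, key_b: np.ndarray, t: float) -> np.ndarray:
    """Blend two RGBA key-frame textures (H x W x 4, floats in [0, 1])."""
    assert key_a.shape == key_b.shape
    return (1.0 - t) * key_a + t * key_b

# Example: recreate a dropped key-frame halfway between two stored ones.
key_a = np.zeros((64, 64, 4), dtype=np.float32)
key_b = np.ones((64, 64, 4), dtype=np.float32)
halfway = intermediate_frame(key_a, key_b, 0.5)
print(halfway[0, 0])  # -> [0.5 0.5 0.5 0.5]
```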

    Bandwidth-Efficient Parallel Visualization for Mobile Devices

    Get PDF

    Output-Sensitive Rendering of Detailed Animated Characters for Crowd Simulation

    Get PDF
    High-quality, detailed animated characters are often represented as textured polygonal meshes. The problem with this technique is the high cost of rendering and animating each of these characters, which has become a major limiting factor in crowd simulation. Since we want to render a huge number of characters in real time, the purpose of this thesis is to study existing approaches to crowd rendering and derive a novel approach from them. The main limitations we have found when using impostors are (1) the large amount of memory needed to store them, which also has to be sent to the graphics card, (2) the lack of visual quality in close-up views, and (3) some visibility problems. To overcome these limitations and improve performance, we present a new representation for 3D animated characters based on relief mapping, which supports output-sensitive rendering. The basic idea of our approach is to encode each character through a small collection of textured boxes storing color and depth values. At runtime, each box is animated according to the rigid transformation of its associated bone in the animated skeleton, and a fragment shader recovers the original geometry using an adapted version of relief mapping. Unlike competing output-sensitive approaches, our compact representation is able to recover high-frequency surface details and reproduces view-motion parallax effects. Furthermore, the proposed approach ensures correct visibility among different animated parts, and it does not require us to predefine the animation sequences nor to select a subset of discrete views. Finally, a user study demonstrates that our approach allows for a large number of simulated agents with negligible visual artifacts.
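    As a rough, CPU-side illustration of the two ingredients described above (boxes that follow their skeleton bones, and a relief-mapping style search through a depth texture), here is a minimal sketch; the data layout, step count, and function names are illustrative assumptions, not the thesis' implementation.

```python
# (1) A box impostor follows the rigid transform of its skeleton bone.
# (2) A relief-mapping style linear search steps a ray through the box's
#     depth texture until it falls below the stored depth (the surface).
import numpy as np

def transform_box(corners: np.ndarray, bone_matrix: np.ndarray) -> np.ndarray:
    """Apply a 4x4 rigid bone transform to the 8 box corners (8 x 3)."""
    homo = np.hstack([corners, np.ones((corners.shape[0], 1))])
    return (homo @ bone_matrix.T)[:, :3]

def relief_march(depth_tex, entry_uv, exit_uv, steps=32):
    """Linear search along the ray's texture-space footprint; return the
    first sample whose ray depth reaches the stored depth value."""
    h, w = depth_tex.shape
    for i in range(steps):
        t = i / (steps - 1)
        u = (1 - t) * entry_uv[0] + t * exit_uv[0]
        v = (1 - t) * entry_uv[1] + t * exit_uv[1]
        ray_depth = t  # ray depth normalized across the box slab
        stored = depth_tex[min(int(v * h), h - 1), min(int(u * w), w - 1)]
        if ray_depth >= stored:
            return (u, v, ray_depth)  # hit: shade with the color texture here
    return None  # ray leaves the box without hitting encoded geometry

# Quick usage: identity bone transform, flat depth of 0.4 across the slab
corners = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], dtype=np.float32)
print(transform_box(corners, np.eye(4)).shape)   # (8, 3)
print(relief_march(np.full((8, 8), 0.4, dtype=np.float32), (0.0, 0.5), (1.0, 0.5)))
```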

    Image-based crowd rendering

    Full text link

    Spatially-encoded far-field representations for interactive walkthroughs

    Get PDF

    Computing and fabricating multilayer models

    Get PDF
    We present a method for automatically converting a digital 3D model into a multilayer model: a parallel stack of high-resolution 2D images embedded within a semi-transparent medium. Multilayer models can be produced quickly and cheaply and provide a strong sense of an object's 3D shape and texture over a wide range of viewing directions. Our method is designed to minimize visible cracks and other artifacts that can arise when projecting an input model onto a small number of parallel planes, and to avoid layer transitions that cut the model along important surface features. We demonstrate multilayer models fabricated with glass and acrylic tiles using commercially available printers.
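    A greatly simplified sketch of the layering step, assuming surface samples given as colored points in a unit cube; the paper's crack minimization and feature-aware layer transitions are not modeled here.

```python
# Surface samples of a 3D model are distributed over a small stack of
# parallel planes: each sample writes its color into the image of the layer
# its depth falls into. One sample wins per pixel when duplicates collide.
import numpy as np

def build_layers(points, colors, num_layers=4, resolution=256):
    """points: N x 3 in [0, 1]^3 (z is the stacking axis); colors: N x 3."""
    layers = np.zeros((num_layers, resolution, resolution, 3), dtype=np.float32)
    layer_idx = np.clip((points[:, 2] * num_layers).astype(int), 0, num_layers - 1)
    px = np.clip((points[:, 0] * resolution).astype(int), 0, resolution - 1)
    py = np.clip((points[:, 1] * resolution).astype(int), 0, resolution - 1)
    layers[layer_idx, py, px] = colors
    return layers

# Example: 10k random surface samples distributed over 4 layers
pts = np.random.rand(10_000, 3).astype(np.float32)
cols = np.random.rand(10_000, 3).astype(np.float32)
print(build_layers(pts, cols).shape)  # (4, 256, 256, 3)
```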

    Multiresolution Techniques for Real-Time Visualization of Urban Environments and Terrains

    Get PDF
    In recent times we have witnessed a steep increase in the availability of data coming from real-life environments. Nowadays, virtually everyone connected to the Internet has instant access to a tremendous amount of data coming from satellite elevation maps, airborne time-of-flight scanners, digital cameras, street-level photographs, and even cadastral maps. As with other, more traditional types of media such as pictures and videos, users of digital exploration software expect commodity hardware to deliver good interactive performance regardless of dataset size. In this thesis we propose novel solutions to the problem of rendering large terrain and urban models on commodity platforms, both for local and remote exploration. Our solutions build on the concept of multiresolution representation, where alternative representations of the same data at different accuracies are used to selectively distribute computational power, and consequently visual accuracy, where it is most needed based on the user's point of view. In particular, we introduce an efficient multiresolution data compression technique for planar and spherical surfaces, applied to terrain datasets, that is able to handle huge amounts of information at a planetary scale. We also describe a novel data structure for compact storage and rendering of urban entities such as buildings, allowing real-time exploration of cityscapes from a remote online repository. Moreover, we show how recent technologies can be exploited to transparently integrate virtual exploration and general computer graphics techniques with web applications.
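    A minimal sketch of the kind of view-dependent quadtree refinement such multiresolution renderers typically perform; the screen-space error metric and all constants are illustrative assumptions rather than the thesis' exact scheme.

```python
# A tile is split while its geometric error, projected to screen space,
# exceeds a pixel tolerance; otherwise it is selected for rendering.
import math

def refine(center, size, geo_error, camera, tol_px=2.0, k_screen=1000.0, out=None):
    """Recursively select quadtree tiles; returns a list of (center, size)."""
    if out is None:
        out = []
    dist = max(math.dist(center, camera), 1e-6)
    if k_screen * geo_error / dist <= tol_px or size < 1.0:
        out.append((center, size))            # coarse enough: render this LOD
        return out
    half = size / 2.0
    for dx in (-half / 2, half / 2):          # otherwise recurse into the
        for dy in (-half / 2, half / 2):      # four children with finer error
            refine((center[0] + dx, center[1] + dy), half, geo_error / 2.0,
                   camera, tol_px, k_screen, out)
    return out

# 100 km root tile with 500 m root error, camera near the terrain centre
tiles = refine((0.0, 0.0), 100_000.0, 500.0, (1_000.0, 2_000.0))
print(len(tiles), "tiles selected")
```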

    Sprite Tree: An Efficient Image-Based Representation for Networked Virtual Environments

    Get PDF
    Ph.D. (Doctor of Philosophy)