
    A survey of real-time crowd rendering

    In this survey we review, classify, and compare existing approaches for real-time crowd rendering. We first give an overview of character animation techniques, as they are tightly coupled to crowd rendering performance, and then analyze the state of the art in crowd rendering. We discuss different representations for level-of-detail (LoD) rendering of animated characters, including polygon-based, point-based, and image-based techniques, and review different criteria for runtime LoD selection. Besides LoD approaches, we review classic acceleration schemes, such as frustum culling and occlusion culling, and describe how they can be adapted to handle crowds of animated characters. We also discuss acceleration techniques specific to crowd rendering, such as primitive pseudo-instancing, palette skinning, and dynamic key-pose caching, which benefit from current graphics hardware. We also address other factors affecting the performance and realism of crowds, such as lighting, shadowing, clothing, and variability. Finally, we provide an exhaustive comparison of the most relevant approaches in the field.
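The runtime LoD selection and frustum culling the survey reviews can be illustrated with a minimal sketch. The distance thresholds, LoD tiers (full mesh, simplified mesh, impostor), and the 2D camera model here are illustrative assumptions, not taken from any particular paper:

```python
import math

# Hypothetical distance cutoffs (meters); real systems tune these per scene.
# Tier 0 = full mesh, 1 = simplified mesh, 2 = image-based impostor, 3 = culled.
LOD_DISTANCES = [10.0, 30.0, 80.0]

def select_lod(distance: float) -> int:
    """Pick a LoD tier from the camera-to-character distance."""
    for lod, cutoff in enumerate(LOD_DISTANCES):
        if distance < cutoff:
            return lod
    return 3  # beyond the last cutoff: skip entirely

def in_frustum(agent_pos, cam_pos, cam_dir, half_angle):
    """2D view-cone test: is the agent within half_angle of the view direction?"""
    dx, dy = agent_pos[0] - cam_pos[0], agent_pos[1] - cam_pos[1]
    dist = math.hypot(dx, dy)
    if dist == 0.0:
        return True  # agent at the camera position
    cos_to_agent = (dx * cam_dir[0] + dy * cam_dir[1]) / dist
    return cos_to_agent >= math.cos(half_angle)
```

A crowd renderer would run both tests per agent each frame, then batch agents by LoD tier so each tier can be drawn with one instanced call.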

    Image-based crowd rendering


    Level of detail for complex urban scenes with varied animated crowds, using XML

    We present a system capable of handling several thousand varied animated characters within a crowd. These characters are designed to have variety in geometry, color, animation, and behaviour; however, as a crowd grows, more memory is needed, and this variety becomes difficult to achieve. To solve this problem, we implemented two complementary data structures.
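The memory problem this abstract describes is commonly attacked by splitting crowd data into a few heavyweight shared templates and many lightweight per-character instances. The abstract does not specify its two data structures, so the sketch below is a generic illustration of that split; all names (`HumanTemplate`, `CrowdInstance`) are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class HumanTemplate:
    # Shared, memory-heavy data stored once per template (mesh, animations).
    mesh_id: str
    animation_ids: list

@dataclass
class CrowdInstance:
    # Per-character record kept deliberately small: a reference to the
    # shared template plus only the fields that differ per individual.
    template: HumanTemplate
    position: tuple
    color_seed: int
    animation_index: int

# Three templates can populate a crowd of thousands without duplicating
# mesh or animation data per character.
templates = [HumanTemplate(f"mesh_{i}", [f"walk_{i}"]) for i in range(3)]
crowd = [CrowdInstance(templates[i % 3], (float(i), 0.0), i, 0)
         for i in range(1000)]
```

Memory then scales with the number of templates for the heavy data and only linearly in the tiny instance records for the crowd size.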

    Output-Sensitive Rendering of Detailed Animated Characters for Crowd Simulation

    High-quality, detailed animated characters are often represented as textured polygonal meshes. The problem with this technique is the high cost of rendering and animating each of these characters, which has become a major limiting factor in crowd simulation. Since we want to render a huge number of characters in real time, the purpose of this thesis is to study the existing approaches in crowd rendering and derive a novel approach. The main limitations we have found when using impostors are (1) the large amount of memory needed to store them, which also has to be sent to the graphics card, (2) the lack of visual quality in close-up views, and (3) some visibility problems. As we wanted to overcome these limitations and improve performance, these conclusions led us to present a new representation for 3D animated characters using relief mapping, thus supporting output-sensitive rendering. The basic idea of our approach is to encode each character through a small collection of textured boxes storing color and depth values. At runtime, each box is animated according to the rigid transformation of its associated bone in the animated skeleton. A fragment shader is used to recover the original geometry using an adapted version of relief mapping. Unlike competing output-sensitive approaches, our compact representation is able to recover high-frequency surface details and reproduces view-motion parallax effects. Furthermore, the proposed approach ensures correct visibility among different animated parts, and it does not require us to predefine the animation sequences nor to select a subset of discrete views. Finally, a user study demonstrates that our approach allows for a large number of simulated agents with negligible visual artifacts.
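The per-bone box animation described above (each textured box follows the rigid transform of its skeleton bone) can be sketched on the CPU side. This is a simplified stand-in, not the thesis's implementation: rotation is limited to the z-axis for brevity, and the relief-mapping fragment shader that recovers fine geometry inside each box is out of scope here:

```python
import math

def rotate_z(point, angle):
    """Rotate a 3D point about the z-axis by `angle` radians."""
    x, y, z = point
    c, s = math.cos(angle), math.sin(angle)
    return (c * x - s * y, s * x + c * y, z)

def transform_box(corners, bone_rotation_z, bone_translation):
    """Apply a bone's rigid transform (rotation, then translation) to every
    corner of a textured impostor box. At render time a fragment shader
    would then reconstruct detailed geometry inside the box from the
    stored color and depth values."""
    tx, ty, tz = bone_translation
    out = []
    for p in corners:
        rx, ry, rz = rotate_z(p, bone_rotation_z)
        out.append((rx + tx, ry + ty, rz + tz))
    return out
```

Because only the box corners are transformed per frame, the per-character animation cost stays constant regardless of how detailed the encoded surface is, which is what makes the rendering output-sensitive.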

    Generating, animating, and rendering varied individuals for real-time crowds

    To simulate realistic crowds of virtual humans in real time, three main requirements must be satisfied. First, quantity: the ability to simulate thousands of characters. Second, quality: each virtual human composing a crowd needs to look unique in its appearance and animation. Finally, efficiency is paramount, because an operation that is cheap for a single virtual human becomes extremely costly when applied to a large crowd. Developing an architecture able to manage all three aspects is a challenging problem that we have addressed in our research. Our first contribution is an efficient and versatile architecture called YaQ, able to simulate thousands of characters in real time. This platform, developed at EPFL-VRLab, results from several years of research and integrates state-of-the-art techniques at all levels: YaQ aims at providing efficient algorithms and real-time solutions for massively populating large-scale empty environments. YaQ thus fits various application domains, such as video games and virtual reality. Our architecture is especially efficient in managing the large quantity of data that is used to simulate crowds. In order to simulate large crowds, many instances of a small set of human templates have to be generated. From this starting point, if no care is taken to vary each character individually, many clones appear in the crowd. We present several algorithms to make each individual in the crowd unique. First, we introduce a new method to distinguish the body parts of a human and apply detailed color variety and patterns to each of them. Second, we present two techniques to modify the shape and profile of a virtual human: a simple and efficient method for attaching accessories to individuals, and efficient tools to scale the skeleton and mesh of an instance. Finally, we also contribute to varying individuals' animation by introducing variations to the upper-body movements, allowing characters to make a phone call, keep a hand in their pocket, or carry heavy accessories. To achieve quantity in a crowd, levels of detail need to be used. We explore the most adequate solutions to simulate large crowds with levels of detail, while avoiding disturbing switches between two different representations of a virtual human. To do so, we develop solutions to make most variety techniques scalable to all levels of detail.
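The per-body-part color variety described above can be sketched as deterministic palette sampling: each instance carries only a seed, and the same seed always reproduces the same per-part colors, so no per-character color data needs to be stored. The body-part segmentation and palettes below are illustrative assumptions, not YaQ's actual data:

```python
import random

# Hypothetical segmentation and per-part RGB palettes; a real system would
# derive these from the template's texture layout and artist-authored ranges.
PALETTES = {
    "skin":  [(224, 172, 105), (141, 85, 36)],
    "hair":  [(40, 30, 20), (220, 200, 100)],
    "torso": [(200, 0, 0), (0, 0, 200), (20, 120, 20)],
    "legs":  [(30, 30, 60), (90, 70, 50)],
}

def color_variation(seed: int) -> dict:
    """Map an instance seed to one color per body part, deterministically."""
    rng = random.Random(seed)  # private stream: reproducible per instance
    return {part: rng.choice(colors) for part, colors in PALETTES.items()}
```

Combined with a few human templates, this already multiplies the number of visually distinct individuals by the product of the palette sizes, which is how large crowds avoid the clone effect cheaply.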

    Virtual humans: thirty years of research, what next?

    In this paper, we present research results and future challenges in creating realistic and believable Virtual Humans. To realize these modeling goals, real-time realistic representation is essential, but we also need interactive and perceptive Virtual Humans to populate the Virtual Worlds. Three levels of modeling should be considered to create these believable Virtual Humans: 1) realistic appearance modeling, 2) realistic, smooth, and flexible motion modeling, and 3) realistic high-level behavior modeling. First, the issues of creating virtual humans with better skeletons and realistic deformable bodies are illustrated. To reach a believable level of behavior, the challenges lie in generating flexible motion on the fly and complex behaviours of Virtual Humans inside their environments, based on a realistic perception of the environment. Interactivity and group behaviours are also important parameters for creating believable Virtual Humans, with open challenges in building believable relationships between real and virtual humans based on emotion and personality, and in simulating realistic and believable behaviors of groups and crowds. Finally, issues in generating realistic virtual people with clothing and hair are presented.
