
    Using image morphing for memory-efficient impostor rendering on GPU

    Real-time rendering of large animated crowds consisting of thousands of virtual humans is important for several applications, including simulations, games, and interactive walkthroughs, but cannot be performed with complex polygonal models at interactive frame rates. For that reason, several methods using large numbers of pre-computed image-based representations, called impostors, have been proposed. These methods take advantage of programmable graphics hardware to compensate for the computational expense while maintaining visual fidelity, so that the number of different virtual humans that can be rendered in real time is no longer restricted by the required computational power but by the texture memory consumed by the variety and discretization of their animations. In this work, we propose an alternative method that reduces memory consumption by generating compelling intermediate textures using image-morphing techniques. To demonstrate the preserved perceptual quality of animations in which half of the key-frames were rendered with the proposed methodology, we implemented the system on the graphics processing unit and obtained promising results at interactive frame rates.
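
    The morphing step lends itself to a compact illustration. Below is a minimal CPU-side sketch in C++, assuming grayscale key-frames and a precomputed per-pixel flow field between them; both the image layout and the flow representation are assumptions for illustration, not the paper's actual GPU implementation.

    // Hypothetical sketch: synthesize an in-between impostor frame from two
    // stored key-frames by warping along a precomputed flow field (A -> B)
    // and cross-dissolving. Nearest-neighbour sampling keeps the sketch short.
    #include <cstddef>
    #include <vector>

    struct Image {
        int w, h;
        std::vector<float> px;  // grayscale, row-major
        float at(int x, int y) const { return px[static_cast<std::size_t>(y) * w + x]; }
    };

    Image morph(const Image& A, const Image& B,
                const std::vector<float>& flowX,  // per-pixel flow from A to B
                const std::vector<float>& flowY,
                float t)                          // morph parameter in [0,1]
    {
        Image out{A.w, A.h, std::vector<float>(A.px.size(), 0.0f)};
        for (int y = 0; y < A.h; ++y)
            for (int x = 0; x < A.w; ++x) {
                std::size_t i = static_cast<std::size_t>(y) * A.w + x;
                int ax = x - static_cast<int>(t * flowX[i] + 0.5f);        // sample in A
                int ay = y - static_cast<int>(t * flowY[i] + 0.5f);
                int bx = x + static_cast<int>((1 - t) * flowX[i] + 0.5f);  // sample in B
                int by = y + static_cast<int>((1 - t) * flowY[i] + 0.5f);
                float a = (ax >= 0 && ax < A.w && ay >= 0 && ay < A.h) ? A.at(ax, ay) : 0.0f;
                float b = (bx >= 0 && bx < B.w && by >= 0 && by < B.h) ? B.at(bx, by) : 0.0f;
                out.px[i] = (1 - t) * a + t * b;                           // cross-dissolve
            }
        return out;
    }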

    A survey of real-time crowd rendering

    In this survey we review, classify, and compare existing approaches for real-time crowd rendering. We first give an overview of character animation techniques, as they are tightly tied to crowd rendering performance, and then analyze the state of the art in crowd rendering. We discuss different representations for level-of-detail (LoD) rendering of animated characters, including polygon-based, point-based, and image-based techniques, and review different criteria for runtime LoD selection. Besides LoD approaches, we review classic acceleration schemes, such as frustum culling and occlusion culling, and describe how they can be adapted to handle crowds of animated characters. We also discuss acceleration techniques specific to crowd rendering, such as primitive pseudo-instancing, palette skinning, and dynamic key-pose caching, which benefit from current graphics hardware. We also address other factors affecting the performance and realism of crowds, such as lighting, shadowing, clothing, and variability. Finally, we provide an exhaustive comparison of the most relevant approaches in the field.
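
    As a concrete example of one runtime LoD-selection criterion of the kind such surveys review, a character's representation can be chosen from its projected screen-space size. The three-level split and the pixel thresholds in this C++ sketch are illustrative assumptions, not values taken from any particular system.

    #include <cmath>

    enum class CrowdLod { Mesh, PointSample, Impostor };

    // Approximate projected height, in pixels, of a character's bounding sphere.
    inline float projectedPixels(float radius, float distance,
                                 float fovY /*radians*/, int screenHeight)
    {
        return (radius / (distance * std::tan(fovY * 0.5f))) * screenHeight;
    }

    inline CrowdLod selectLod(float radius, float distance,
                              float fovY, int screenHeight)
    {
        float px = projectedPixels(radius, distance, fovY, screenHeight);
        if (px > 100.0f) return CrowdLod::Mesh;         // close-up: full skinned mesh
        if (px > 20.0f)  return CrowdLod::PointSample;  // mid-range: point samples
        return CrowdLod::Impostor;                      // far away: image-based
    }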

    Bandwidth-Efficient Parallel Visualization for Mobile Devices

    The Video Mesh: A Data Structure for Image-based Video Editing

    This paper introduces the video mesh, a data structure for representing video as 2.5D "paper cutouts." The video mesh allows interactive editing of moving objects and modeling of depth, which enables 3D effects and post-exposure camera control. The video mesh sparsely encodes optical flow as well as depth, and handles occlusion using local layering and alpha mattes. Motion is described by a sparse set of points tracked over time. Each point also stores a depth value. The video mesh is a triangulation over this point set, and per-pixel information is obtained by interpolation. The user rotoscopes occluding contours, and we introduce an algorithm to cut the video mesh along them. Object boundaries are refined with per-pixel alpha values. Because the video mesh is at its core a set of texture-mapped triangles, we leverage graphics hardware to enable interactive editing and rendering of a variety of effects. We demonstrate the effectiveness of our representation with a number of special effects, including 3D viewpoint changes, object insertion, and depth-of-field manipulation.
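
    The essential contents of the structure can be sketched compactly: tracked points carrying sparse depth, a triangulation over them, and barycentric interpolation to recover per-pixel values. The field names and the interpolation helper in this C++ sketch are illustrative assumptions, not the paper's actual code.

    #include <array>
    #include <vector>

    struct TrackedPoint {
        float x, y;    // image-space position at a given frame
        float depth;   // sparse depth estimate stored per point
    };

    struct Triangle { int v[3]; };  // indices into the point set

    struct VideoMeshFrame {
        std::vector<TrackedPoint> points;
        std::vector<Triangle> tris;  // triangulation over the tracked points
    };

    // Per-pixel depth via barycentric interpolation inside one triangle.
    inline float interpolateDepth(const VideoMeshFrame& f, const Triangle& t,
                                  const std::array<float, 3>& bary)
    {
        return bary[0] * f.points[t.v[0]].depth
             + bary[1] * f.points[t.v[1]].depth
             + bary[2] * f.points[t.v[2]].depth;
    }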

    Generating, animating, and rendering varied individuals for real-time crowds

    To simulate realistic crowds of virtual humans in real time, three main requirements must be satisfied. First, quantity, i.e., the ability to simulate thousands of characters. Second, quality, because each virtual human composing a crowd needs to look unique in its appearance and animation. Finally, efficiency is paramount, because an operation that is usually cheap on a single virtual human becomes extremely costly when applied to large crowds. Developing an architecture able to manage all three aspects is a challenging problem that we have addressed in our research. Our first contribution is an efficient and versatile architecture called YaQ, able to simulate thousands of characters in real time. This platform, developed at EPFL-VRLab, is the result of several years of research and integrates state-of-the-art techniques at all levels: YaQ aims at providing efficient algorithms and real-time solutions for massively populating large-scale empty environments. YaQ thus fits various application domains, such as video games and virtual reality. Our architecture is especially efficient in managing the large quantity of data used to simulate crowds. In order to simulate large crowds, many instances of a small set of human templates have to be generated. From this starting point, if no care is taken to vary each character individually, many clones appear in the crowd. We present several algorithms to make each individual in the crowd unique. First, we introduce a new method to distinguish the body parts of a human and apply detailed color variety and patterns to each of them. Second, we present two techniques to modify the shape and profile of a virtual human: a simple and efficient method for attaching accessories to individuals, and efficient tools to scale the skeleton and mesh of an instance. Finally, we also vary individuals' animation by introducing variations to the upper-body movements, allowing characters to make a phone call, keep a hand in their pocket, or carry heavy accessories. To achieve quantity in a crowd, levels of detail need to be used. We explore the most adequate solutions for simulating large crowds with levels of detail, while avoiding disturbing switches between two different representations of a virtual human. To do so, we develop solutions that make most variety techniques scalable to all levels of detail.
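
    The per-body-part color-variety idea can be illustrated with a small sketch: hash the instance id together with a body-part id to pick deterministic colors from per-part palettes, so each character keeps a stable, unique appearance. The hash function and palettes below are assumptions for illustration and are not part of YaQ.

    #include <array>
    #include <cstdint>

    struct Rgb { float r, g, b; };

    constexpr std::array<Rgb, 4> kSkinPalette  {{ {0.90f, 0.70f, 0.60f}, {0.70f, 0.50f, 0.40f},
                                                  {0.50f, 0.35f, 0.25f}, {0.35f, 0.25f, 0.20f} }};
    constexpr std::array<Rgb, 4> kShirtPalette {{ {0.80f, 0.10f, 0.10f}, {0.10f, 0.30f, 0.80f},
                                                  {0.90f, 0.90f, 0.90f}, {0.20f, 0.60f, 0.20f} }};

    inline std::uint32_t mix(std::uint32_t x) {  // small integer hash
        x ^= x >> 16; x *= 0x7feb352dU;
        x ^= x >> 15; x *= 0x846ca68bU;
        return x ^ (x >> 16);
    }

    // Deterministic color for one body part of one instance: the same
    // character always reappears with the same appearance.
    inline Rgb partColor(std::uint32_t instanceId, std::uint32_t partId) {
        std::uint32_t h = mix(instanceId * 31u + partId);
        return (partId == 0) ? kSkinPalette[h % kSkinPalette.size()]
                             : kShirtPalette[h % kShirtPalette.size()];
    }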

    Large-scale cloudscapes using noise

    Clouds have been of particular interest in computer graphics due to the challenge they present. Clouds are considered fuzzy objects and require specialized algorithms to model and render realistically. Many techniques to model and render clouds exist and have had much success. This research takes existing techniques in cloud modeling and rendering and creates a new technique that combines them with noise. The idea is that noise can be used to model large-scale, repeatable 3D cloudscapes, and to model such cloudscapes much more quickly than current techniques. This would benefit developers of virtual universes with many worlds, numbering in the tens to hundreds, who need convincing cloudscapes on each distinct world.
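
    The noise-based modeling idea can be sketched with a classic fractal-Brownian-motion density function: sum octaves of a cheap 3D value noise and subtract a coverage threshold to carve cloud shapes out of the field. The lattice hash, octave count, and coverage value in this C++ sketch are illustrative assumptions rather than the exact formulation used in this research.

    #include <cmath>
    #include <cstdint>

    inline float latticeNoise(int x, int y, int z) {  // hashed lattice value in [0,1)
        std::uint32_t n = static_cast<std::uint32_t>(x) * 73856093u
                        ^ static_cast<std::uint32_t>(y) * 19349663u
                        ^ static_cast<std::uint32_t>(z) * 83492791u;
        n = (n << 13) ^ n;
        return ((n * (n * n * 15731u + 789221u) + 1376312589u) & 0x7fffffffu)
               / 2147483648.0f;
    }

    inline float lerp(float a, float b, float t) { return a + t * (b - a); }

    inline float valueNoise(float x, float y, float z) {  // trilinear lattice blend
        int xi = (int)std::floor(x), yi = (int)std::floor(y), zi = (int)std::floor(z);
        float tx = x - xi, ty = y - yi, tz = z - zi;
        float c00 = lerp(latticeNoise(xi, yi,   zi  ), latticeNoise(xi+1, yi,   zi  ), tx);
        float c10 = lerp(latticeNoise(xi, yi+1, zi  ), latticeNoise(xi+1, yi+1, zi  ), tx);
        float c01 = lerp(latticeNoise(xi, yi,   zi+1), latticeNoise(xi+1, yi,   zi+1), tx);
        float c11 = lerp(latticeNoise(xi, yi+1, zi+1), latticeNoise(xi+1, yi+1, zi+1), tx);
        return lerp(lerp(c00, c10, ty), lerp(c01, c11, ty), tz);
    }

    // fBm: each octave doubles the frequency and halves the amplitude;
    // subtracting the coverage threshold leaves isolated cloud shapes.
    inline float cloudDensity(float x, float y, float z,
                              int octaves = 5, float coverage = 0.55f) {
        float sum = 0.0f, amp = 0.5f, freq = 1.0f;
        for (int i = 0; i < octaves; ++i) {
            sum += amp * valueNoise(x * freq, y * freq, z * freq);
            freq *= 2.0f; amp *= 0.5f;
        }
        return std::fmax(0.0f, sum - coverage);
    }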

    Real time city visualization

    The visualization of cities in real time has many potential applications, from urban and emergency planning to driving simulators and entertainment. The massive amount of data and the computational requirements needed to render an entire city in detail are the reason why many techniques have been proposed in this field. Procedural city generation, building simplification, and visibility processing are some of the approaches used to solve a small subset of the problems that these applications face. Our work proposes a new city rendering algorithm that is a radically different approach from what has been done before in this field. The proposed technique is based on structuring the city data in a regular grid, which is traversed at runtime by a ray-tracing algorithm that keeps track of the visible parts of the scene. As a preprocess, a set of quads defining the buildings of a city is transformed into the regular grid used by our algorithm. The rendering algorithm uses this data to generate a real-time representation of the city while minimizing overdraw, a common problem in other techniques. This is done by means of a geometry shader that generates only the minimum number of fragments needed to render the city from a given position.
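
    The grid traversal at the heart of this technique can be sketched with a standard 2D digital differential analyzer (in the style of Amanatides and Woo), which visits the city cells pierced by a ray front to back. The grid layout and the callback interface below are assumptions for illustration, not the thesis's actual code.

    #include <cmath>

    struct Grid {
        int nx, ny;   // number of cells per axis
        float cell;   // cell size in world units
        // per-cell building lists would live here in a full implementation
    };

    // Visit the cells along the ray (ox,oy) + t*(dx,dy) for t in [0, maxT],
    // calling visit(cx, cy) on each one, e.g. to test its buildings.
    template <typename Visit>
    void traverse(const Grid& g, float ox, float oy, float dx, float dy,
                  float maxT, Visit visit)
    {
        int cx = (int)std::floor(ox / g.cell), cy = (int)std::floor(oy / g.cell);
        int stepX = dx > 0 ? 1 : -1, stepY = dy > 0 ? 1 : -1;
        float nextX = (cx + (dx > 0 ? 1 : 0)) * g.cell;   // next vertical boundary
        float nextY = (cy + (dy > 0 ? 1 : 0)) * g.cell;   // next horizontal boundary
        float tMaxX   = dx != 0 ? (nextX - ox) / dx : INFINITY;
        float tMaxY   = dy != 0 ? (nextY - oy) / dy : INFINITY;
        float tDeltaX = dx != 0 ? g.cell / std::fabs(dx) : INFINITY;
        float tDeltaY = dy != 0 ? g.cell / std::fabs(dy) : INFINITY;

        while (cx >= 0 && cx < g.nx && cy >= 0 && cy < g.ny) {
            visit(cx, cy);
            if (tMaxX < tMaxY) { if (tMaxX > maxT) break; cx += stepX; tMaxX += tDeltaX; }
            else               { if (tMaxY > maxT) break; cy += stepY; tMaxY += tDeltaY; }
        }
    }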

    Multiresolution Techniques for Real–Time Visualization of Urban Environments and Terrains

    In recent times we are witnessing a steep increase in the availability of data coming from real-life environments. Nowadays, virtually everyone connected to the Internet has instant access to a tremendous amount of data coming from satellite elevation maps, airborne time-of-flight scanners, digital cameras, street-level photographs, and even cadastral maps. As with other, more traditional types of media such as pictures and videos, users of digital exploration software expect commodity hardware to exhibit good performance for interactive purposes, regardless of the dataset size. In this thesis we propose novel solutions to the problem of rendering large terrain and urban models on commodity platforms, both for local and remote exploration. Our solutions build on the concept of multiresolution representation, where alternative representations of the same data with different accuracy are used to selectively distribute the computational power, and consequently the visual accuracy, where it is most needed based on the user's point of view. In particular, we introduce an efficient multiresolution data-compression technique for planar and spherical surfaces, applied to terrain datasets, which is able to handle huge amounts of information at a planetary scale. We also describe a novel data structure for the compact storage and rendering of urban entities such as buildings, allowing real-time exploration of cityscapes from a remote online repository. Moreover, we show how recent technologies can be exploited to transparently integrate virtual exploration and general computer-graphics techniques with web applications.
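
    The view-dependent refinement principle behind such multiresolution representations can be sketched briefly: refine a quadtree node only while its stored geometric error, projected onto the screen, exceeds a pixel tolerance. The node layout and the error metric in this C++ sketch are illustrative assumptions, not the data structures of this thesis.

    #include <cmath>
    #include <vector>

    struct QuadNode {
        float cx, cy, halfSize;           // node extent in world units
        float geomError;                  // max deviation of this LOD from the full data
        int child[4] = {-1, -1, -1, -1};  // -1 marks a leaf
    };

    // Screen-space error: world-space error over distance, scaled by projection.
    inline float screenError(const QuadNode& n, float ex, float ey, float kProj)
    {
        float dist = std::hypot(n.cx - ex, n.cy - ey);
        return kProj * n.geomError / std::fmax(dist, 1e-3f);
    }

    // Collect the coarsest set of nodes whose projected error is under tolPx.
    void selectLod(const std::vector<QuadNode>& tree, int node,
                   float ex, float ey, float kProj, float tolPx,
                   std::vector<int>& toRender)
    {
        const QuadNode& n = tree[node];
        bool leaf = n.child[0] < 0;
        if (leaf || screenError(n, ex, ey, kProj) <= tolPx)
            toRender.push_back(node);  // accurate enough at this viewing distance
        else
            for (int c : n.child) selectLod(tree, c, ex, ey, kProj, tolPx, toRender);
    }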

    Scalable Realtime Rendering and Interaction with Digital Surface Models of Landscapes and Cities

    Interactive, realistic rendering of landscapes and cities differs substantially from classical terrain rendering. Due to the sheer size and detail of the data that need to be processed, real-time rendering (i.e., more than 25 images per second) is only feasible with level-of-detail (LOD) models. Even the design and implementation of efficient, automatic LOD generation is ambitious for such out-of-core datasets, considering the large number of scales covered in a single view and the necessity of maintaining screen-space accuracy for realistic representation. Moreover, users want to interact with the model based on semantic information, which needs to be linked to the LOD model. In this thesis I present LOD schemes for the efficient rendering of 2.5D digital surface models (DSMs) and 3D point clouds, a method for the automatic derivation of city models from raw DSMs, and an approach allowing semantic interaction with complex LOD models. The hierarchical LOD model for digital surface models is based on a quadtree of precomputed, simplified triangle-mesh approximations. The rendering of the proposed model is shown to allow real-time rendering of very large and complex models with pixel-accurate details. Moreover, the necessary preprocessing is scalable and fast. For 3D point clouds, I introduce an LOD scheme based on an octree of hybrid plane-polygon representations. For each LOD, the algorithm detects planar regions in an adequately subsampled point cloud and models them as textured rectangles. The rendering of the resulting hybrid model is an order of magnitude faster than comparable point-based LOD schemes. To automatically derive a city model from a DSM, I propose a constrained mesh simplification. Apart from the geometric distance between the simplified and the original model, it evaluates constraints based on detected planar structures and their mutual topological relations. The resulting models are much less complex than the original DSM but still represent the characteristic building structures faithfully. Finally, I present a method to combine semantic information with complex geometric models. My approach links the semantic entities to the geometric entities on the fly via coarser proxy geometries that carry the semantic information. Thus, semantic information can be layered on top of complex LOD models without an explicit attribution step. All findings are supported by experimental results which demonstrate the practical applicability and efficiency of the methods.
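
    The planar-region detection step feeding the hybrid plane-polygon scheme can be illustrated with a tiny RANSAC-style sketch: fit candidate planes through random point triples and keep the one with the most inliers. The sampling scheme and thresholds below are assumptions for illustration, not the detection method actually used in the thesis.

    #include <cmath>
    #include <cstdlib>
    #include <vector>

    struct P3 { float x, y, z; };

    inline P3 sub(P3 a, P3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    inline P3 cross(P3 a, P3 b) {
        return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
    }

    // Fit a plane to a (subsampled) point cloud: try `iters` random triples and
    // return the inlier count of the best plane; outNormal/outPoint describe it.
    int detectPlane(const std::vector<P3>& pts, float eps, int iters,
                    P3& outNormal, P3& outPoint)
    {
        int best = 0;
        for (int k = 0; k < iters; ++k) {
            const P3& a = pts[std::rand() % pts.size()];
            const P3& b = pts[std::rand() % pts.size()];
            const P3& c = pts[std::rand() % pts.size()];
            P3 n = cross(sub(b, a), sub(c, a));
            float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
            if (len < 1e-6f) continue;  // degenerate or repeated triple
            n = {n.x / len, n.y / len, n.z / len};
            int inliers = 0;
            for (const P3& p : pts) {
                P3 d = sub(p, a);
                if (std::fabs(d.x * n.x + d.y * n.y + d.z * n.z) < eps) ++inliers;
            }
            if (inliers > best) { best = inliers; outNormal = n; outPoint = a; }
        }
        return best;
    }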