
    A survey of real-time crowd rendering

    In this survey we review, classify, and compare existing approaches to real-time crowd rendering. We first give an overview of character animation techniques, as they are closely tied to crowd rendering performance, and then analyze the state of the art in crowd rendering. We discuss different representations for level-of-detail (LoD) rendering of animated characters, including polygon-based, point-based, and image-based techniques, and review different criteria for runtime LoD selection. Beyond LoD approaches, we review classic acceleration schemes, such as frustum culling and occlusion culling, and describe how they can be adapted to handle crowds of animated characters. We also discuss acceleration techniques specific to crowd rendering, such as primitive pseudo-instancing, palette skinning, and dynamic key-pose caching, which benefit from current graphics hardware. We then address other factors affecting the performance and realism of crowds, such as lighting, shadowing, clothing, and variability. Finally, we provide an exhaustive comparison of the most relevant approaches in the field.
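
    As a concrete illustration of the runtime LoD selection criteria reviewed above, the following C++ sketch switches a crowd agent between a full mesh, a simplified mesh, and an image-based impostor by viewing distance. The level count and switch distances are illustrative assumptions, not values from the survey; production systems also weigh screen-space size and saliency.

    ```cpp
    #include <cmath>
    #include <cstdio>

    // Distance-based LoD selection for one crowd agent. Level 0 is the
    // full-detail mesh, level 1 a simplified mesh, level 2 an image-based
    // impostor, matching the representation classes the survey discusses.
    struct Vec3 { float x, y, z; };

    float distance(const Vec3& a, const Vec3& b) {
        float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return std::sqrt(dx * dx + dy * dy + dz * dz);
    }

    int selectLod(const Vec3& eye, const Vec3& agent) {
        const float kMeshRange = 15.0f;        // assumed switch distances
        const float kSimplifiedRange = 60.0f;
        float d = distance(eye, agent);
        if (d < kMeshRange) return 0;
        if (d < kSimplifiedRange) return 1;
        return 2;
    }

    int main() {
        Vec3 eye{0.0f, 1.7f, 0.0f};
        Vec3 agents[] = {{5, 0, 3}, {30, 0, 10}, {120, 0, -40}};
        for (const Vec3& a : agents)
            std::printf("agent LoD: %d\n", selectLod(eye, a));
    }
    ```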

    Rendering on a Budget: a Framework for Time-Critical Rendering

    We present a technique for optimizing the rendering of high depth-complexity scenes. Prioritized-Layered Projection (PLP) does this by rendering an estimation of the visible set for each frame. The novelty of our work lies in the fact that we do not explicitly compute visible sets. Instead, our work is based on computing, on demand, a priority order for the polygons that maximizes the likelihood of rendering visible polygons before occluded ones for any given scene. Given a fixed budget, e.g. time or number of triangles, our rendering algorithm renders geometry respecting the computed priority. There are two main steps to our technique: (1) an occupancy-based tessellation of space; and (2) a solidity-based traversal algorithm. PLP works by first computing an occupancy-based tessellation of space, which tends to have more cells where there are more geometric primitives. In this spatial tessellation, each cell is assigned a solidity value, which is directly proportional to its likelihood of occluding other cells. In its simplest form, a cell's solidity value is directly proportional to the number of polygons contained within it. During our traversal algorithm, cells are marked for projection, and the geometric primitives contained within them are actually rendered. The traversal algorithm makes use of the cells' solidity and other view-dependent information to determine the order in which to project cells. By carefully tailoring the traversal algorithm to the occupancy-based tessellation, we can achieve very good frame rates with low preprocessing and rendering costs. In this paper, we describe our technique and its implementation in detail. We also provide experimental evidence of its performance and briefly discuss extensions of our algorithm.
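
    A minimal sketch of the budgeted, solidity-ordered traversal described above, assuming a flat set of cells with precomputed solidity values (the paper builds these from an occupancy-based tessellation and updates priorities with view-dependent information during traversal):

    ```cpp
    #include <cstdio>
    #include <queue>
    #include <vector>

    // PLP-style budgeted traversal: cells are projected in priority order
    // until a triangle budget is exhausted. Cell layout and solidity
    // values here are illustrative stand-ins.
    struct Cell {
        int id;
        int triangleCount;
        float solidity;  // likelihood this cell occludes other cells
    };

    struct LeastSolidFirst {
        // Project the cells least likely to be occluded first.
        bool operator()(const Cell& a, const Cell& b) const {
            return a.solidity > b.solidity;
        }
    };

    int main() {
        std::priority_queue<Cell, std::vector<Cell>, LeastSolidFirst> front;
        front.push({0, 1200, 0.1f});
        front.push({1, 800, 0.7f});
        front.push({2, 400, 0.3f});
        front.push({3, 2500, 0.9f});

        const int kBudget = 2500;  // assumed per-frame triangle budget
        int rendered = 0;
        while (!front.empty() &&
               rendered + front.top().triangleCount <= kBudget) {
            Cell c = front.top();
            front.pop();
            rendered += c.triangleCount;
            std::printf("project cell %d (%d triangles)\n", c.id, c.triangleCount);
            // A full implementation would now enqueue the cell's neighbors,
            // accumulating solidity along the traversal path.
        }
        std::printf("rendered %d of %d budget\n", rendered, kBudget);
    }
    ```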

    Accelerating Virtual Walkthrough with Visual Culling Techniques

    Virtual walkthrough applications allow users to navigate and immerse themselves in a generated 3D environment with the assistance of computer graphics. The 3D environment requires a large amount of geometry to look realistic, but as the amount of geometry increases, the performance of the application degrades. Consequently, there is a conflict between the need for realism and the need for real-time performance. In this paper, we discuss the implementation of visual culling techniques such as view-frustum culling, back-face culling and occlusion culling in a virtual walkthrough application. We render only what can be seen during the application's runtime and cull away unnecessary geometry, which accelerates the performance of the system. Without culling techniques, a virtual reality application such as a virtual walkthrough has to allocate a large amount of memory to store the geometry data. We have tested these techniques on the Ancient Malacca data. With the visual culling techniques implemented, the virtual walkthrough system can work in real time without sacrificing realism.
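
    The two cheapest tests mentioned in the abstract can be sketched compactly. The following C++ fragment shows a bounding-sphere view-frustum test and a triangle back-face test; the plane values in main are illustrative stand-ins for planes extracted from the view-projection matrix each frame.

    ```cpp
    #include <cstdio>

    struct Vec3 { float x, y, z; };
    struct Plane { Vec3 n; float d; };  // unit normal points inward; inside: n.p + d >= 0

    float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
    Vec3 cross(const Vec3& a, const Vec3& b) {
        return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
    }

    // View-frustum culling: a bounding sphere is culled only if it lies
    // entirely behind one of the six planes.
    bool sphereInFrustum(const Plane planes[6], const Vec3& c, float r) {
        for (int i = 0; i < 6; ++i)
            if (dot(planes[i].n, c) + planes[i].d < -r) return false;
        return true;
    }

    // Back-face culling: a counter-clockwise triangle faces away from the
    // eye when its normal points away from the eye vector.
    bool backFacing(const Vec3& eye, const Vec3& a, const Vec3& b, const Vec3& c) {
        Vec3 n = cross(sub(b, a), sub(c, a));
        return dot(n, sub(eye, a)) <= 0.0f;
    }

    int main() {
        const float k = 0.7071f;  // illustrative 90-degree frustum, unit normals
        Plane planes[6] = {
            {{0, 0, -1}, -0.1f}, {{0, 0, 1}, 100.0f},   // near, far
            {{k, 0, -k}, 0}, {{-k, 0, -k}, 0},          // left, right
            {{0, k, -k}, 0}, {{0, -k, -k}, 0},          // bottom, top
        };
        std::printf("sphere visible: %d\n",
                    sphereInFrustum(planes, {0, 0, -10}, 1.0f));
        std::printf("back-facing: %d\n",
                    backFacing({0, 0, 0}, {-1, 0, -5}, {1, 0, -5}, {0, 1, -5}));
    }
    ```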

    CGAMES'2009


    Stereoscopic urban visualization based on graphics processor unit

    We propose a framework for the stereoscopic visualization of urban environments. The framework uses occlusion and view-frustum culling (VFC) and utilizes graphics hardware to speed up the rendering process. The occlusion culling is based on a slice-wise storage scheme that represents buildings using axis-aligned slices. This provides a fast and low-cost way to access the visible parts of the buildings. View-frustum culling for stereoscopic visualization is carried out once for both eyes by applying a transformation to the culling location. Rendering using graphics hardware is based on the slice-wise building representation. The representation facilitates fast access to data that are pushed into graphics processing unit (GPU) buffers. We present algorithms to access this GPU data. The stereoscopic visualization uses off-axis projection, which we found more suitable for the case of urban visualization. The framework is tested on large urban models containing 7.8 million and 23 million polygons. Performance experiments show that real-time stereoscopic visualization can be achieved for large models. © 2008 Society of Photo-Optical Instrumentation Engineers.
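
    The idea of culling once for both eyes can be sketched as follows: move the culling apex back along the view direction until a single frustum with the original field of view contains the frusta of both eyes. This is a standard construction consistent with the abstract's description; the names and the derivation below are ours, not necessarily the paper's exact transformation.

    ```cpp
    #include <cmath>
    #include <cstdio>

    struct Vec3 { float x, y, z; };

    // Compute a single culling apex behind the midpoint of the two eyes.
    // Pulling the apex back by (separation / 2) / tan(fov / 2) guarantees
    // the widened frustum clears both off-center eye positions.
    Vec3 cullingApex(const Vec3& midEye, const Vec3& viewDir /* unit */,
                     float eyeSeparation, float horizontalFovRadians) {
        float offset = (eyeSeparation * 0.5f) /
                       std::tan(horizontalFovRadians * 0.5f);
        return {midEye.x - viewDir.x * offset,
                midEye.y - viewDir.y * offset,
                midEye.z - viewDir.z * offset};
    }

    int main() {
        Vec3 mid{0, 1.7f, 0}, dir{0, 0, -1};
        // 65 mm eye separation and a 60-degree horizontal field of view.
        Vec3 apex = cullingApex(mid, dir, 0.065f, 60.0f * 3.14159265f / 180.0f);
        std::printf("cull from (%.4f, %.4f, %.4f)\n", apex.x, apex.y, apex.z);
    }
    ```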

    Efficient multiple occlusion queries for scene graph systems

    Image-space occlusion culling is a useful approach to reduce the rendering load of large polygonal models. Like most large-model techniques, it trades overhead costs against the rendering costs of the possibly occluded geometry. Modern graphics hardware now supports occlusion culling directly; unfortunately, these hardware extensions consume fill rate and incur latency costs. In this paper, we propose a new technique for scene graph traversal optimized for efficient use of occlusion queries. Our approach uses several Occupancy Maps to organize the scene graph traversal. During traversal, hierarchical occlusion culling, view-frustum culling and rendering are performed. The occlusion information is efficiently determined by asynchronous multiple occlusion queries with hardware-supported query functionality. To avoid redundant results, we arrange these multiple occlusion queries according to the information of several Occupancy Maps. The presented technique is conservative and benefits from a partial depth order of the geometry.
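
    A hedged sketch of the batched, asynchronous query pattern the abstract describes, assuming an active OpenGL 1.5+ context, a loader such as GLEW, and caller-supplied drawBoundingBox/drawNode helpers; the Occupancy Map bookkeeping that avoids redundant queries is elided:

    ```cpp
    #include <GL/glew.h>  // assumed extension loader exposing GL 1.5 queries
    #include <vector>

    extern void drawBoundingBox(int nodeId);  // assumed, supplied elsewhere
    extern void drawNode(int nodeId);         // assumed, supplied elsewhere

    // Issue one occlusion query per candidate node, then read all results
    // afterwards, so the queries overlap on the GPU instead of stalling
    // the pipeline one by one.
    void testAndRender(const std::vector<int>& candidates) {
        std::vector<GLuint> queries(candidates.size());
        glGenQueries((GLsizei)queries.size(), queries.data());

        // Pass 1: render bounding volumes invisibly under each query.
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
        glDepthMask(GL_FALSE);
        for (size_t i = 0; i < candidates.size(); ++i) {
            glBeginQuery(GL_SAMPLES_PASSED, queries[i]);
            drawBoundingBox(candidates[i]);
            glEndQuery(GL_SAMPLES_PASSED);
        }
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
        glDepthMask(GL_TRUE);

        // Pass 2: collect results; by now most have completed, hiding the
        // query latency. Render only the nodes with visible samples.
        for (size_t i = 0; i < candidates.size(); ++i) {
            GLuint samples = 0;
            glGetQueryObjectuiv(queries[i], GL_QUERY_RESULT, &samples);
            if (samples > 0) drawNode(candidates[i]);
        }
        glDeleteQueries((GLsizei)queries.size(), queries.data());
    }
    ```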

    Fast and Accurate Visibility Preprocessing

    Visibility culling is a means of accelerating the graphical rendering of geometric models. Invisible objects are efficiently culled to prevent their submission to the standard graphics pipeline. It is advantageous to preprocess scenes in order to determine invisible objects from all possible camera views. This information is typically saved to disk and may then be reused until the model geometry changes. Such preprocessing algorithms are therefore used for scenes that are primarily static. Currently, the standard approach to visibility preprocessing algorithms is to use a form of approximate solution, known as conservative culling. Such algorithms over-estimate the set of visible polygons. This compromise has been considered necessary in order to perform visibility preprocessing quickly. These algorithms attempt to satisfy the goals of both rapid preprocessing and rapid run-time rendering. We observe, however, that there is a need for algorithms with superior performance in preprocessing, as well as for algorithms that are more accurate. For most applications these features are not required simultaneously. In this thesis we present two novel visibility preprocessing algorithms, each of which is strongly biased toward one of these requirements. The first algorithm has the advantage of performance. It executes quickly by exploiting graphics hardware. The algorithm also has the features of output sensitivity (to what is visible) and a logarithmic dependency on the size of the camera space partition. These advantages come at the cost of image error. We present a heuristic-guided adaptive sampling methodology that minimises this error. We further show how this algorithm may be parallelised, and also present a natural extension of the algorithm to five dimensions for accelerating generalised ray shooting. The second algorithm has the advantage of accuracy. No over-estimation is performed, nor are any sacrifices made in terms of image quality. The cost is primarily that of time. Despite the relatively long computation, the algorithm is still tractable and on average scales slightly superlinearly with the input size. This algorithm also has the advantage of output sensitivity. It is the first known tractable exact solution to the general 3D from-region visibility problem. In order to solve the exact from-region visibility problem, we first had to solve a more general form of the standard stabbing problem; an efficient solution to this problem is presented independently.
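
    The first algorithm's hardware-accelerated sampling can be approximated in software with an item buffer: render every primitive with a unique ID under a depth test, then read back the IDs that survive. This is our illustrative reconstruction (with screen-aligned rectangles as stand-in primitives), not code from the thesis:

    ```cpp
    #include <cstdio>
    #include <set>
    #include <vector>

    // Rasterize screen-aligned rectangles into a depth buffer and an item
    // buffer of primitive IDs; the IDs that survive the depth test form
    // the sampled (hence approximate) visible set for this viewpoint.
    struct Rect { int id; int x0, y0, x1, y1; float depth; };

    std::set<int> sampleVisibleSet(const std::vector<Rect>& scene, int w, int h) {
        std::vector<float> depth(w * h, 1e30f);
        std::vector<int> item(w * h, -1);  // -1 marks background pixels
        for (const Rect& r : scene)
            for (int y = r.y0; y < r.y1 && y < h; ++y)
                for (int x = r.x0; x < r.x1 && x < w; ++x)
                    if (r.depth < depth[y * w + x]) {
                        depth[y * w + x] = r.depth;
                        item[y * w + x] = r.id;
                    }
        return std::set<int>(item.begin(), item.end());
    }

    int main() {
        std::vector<Rect> scene = {
            {0, 0, 0, 64, 64, 1.0f},    // large near occluder
            {1, 8, 8, 24, 24, 5.0f},    // fully hidden behind it
            {2, 48, 48, 80, 80, 3.0f},  // partially outside the occluder
        };
        for (int id : sampleVisibleSet(scene, 96, 96))
            if (id >= 0) std::printf("visible: %d\n", id);  // prints 0 and 2
    }
    ```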

    Point based graphics rendering with unified scalability solutions.

    Standard real-time 3D graphics rendering algorithms use brute-force polygon rendering, with complexity linear in the number of polygons and little regard for limiting processing to data that contributes to the image. Modern hardware can now render smaller scenes to pixel levels of detail, relaxing surface connectivity requirements. Sub-linear scalability optimizations are typically self-contained, requiring specific data structures, without shared functions and data. A new point-based rendering algorithm, 'Canopy', is investigated that combines multiple typically sub-linear scalability solutions using a small core of data structures. Specifically, locale management, hierarchical view-volume culling, backface culling, occlusion culling, level of detail and depth ordering are addressed. To demonstrate versatility further, shadows and collision detection are examined. Polygon models are voxelized with interpolated attributes to provide points. A scene tree is constructed, based on a BSP tree of points, with compressed attributes. The scene tree is embedded in a compressed, partitioned, procedurally based scene graph architecture that mimics conventional systems with groups, instancing, inlines and basic read-on-demand rendering from backing store. Hierarchical refinement of the scene tree constructs its image-space equivalent, an image tree, by projecting object-space scene node points to form image node equivalents. An image graph of image nodes is maintained, describing image- and object-space occlusion relationships; it is hierarchically refined in front-to-back order to a specified threshold while occlusion culling with occluder fusion. Visible nodes at medium levels of detail are refined further to rasterization scales. Occlusion culling defines a set of visible nodes that can support caching for temporal coherence. Occlusion culling is approximate, possibly not suiting critical applications. Qualities and performance are tested against standard rendering. Although the algorithm has an O(f) upper bound in the scene size f, it is shown in practice to scale sub-linearly. Scenes that would conventionally contain several hundred billion polygons are rendered at interactive frame rates with minimal graphics hardware support.
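
    The hierarchical refinement pattern described above, culling nodes against the view volume and stopping at sub-pixel projected extents, might look like the following; the tree layout, the always-true frustum stub, and the thresholds are illustrative, not Canopy's actual data structures:

    ```cpp
    #include <cmath>
    #include <cstdio>
    #include <vector>

    struct Vec3 { float x, y, z; };

    // A node of a point-cluster hierarchy: a bounding sphere plus children.
    struct Node {
        Vec3 center;
        float radius;
        std::vector<Node> children;  // empty at the leaves
    };

    float dist(const Vec3& a, const Vec3& b) {
        float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return std::sqrt(dx * dx + dy * dy + dz * dz);
    }

    bool inViewVolume(const Node&) { return true; }  // stub; real test vs. frustum planes

    // Refine until a node leaves the view volume or its projected extent
    // drops below one pixel, then splat its representative point.
    void refine(const Node& n, const Vec3& eye, float pixelsPerRadian) {
        if (!inViewVolume(n)) return;  // hierarchical view-volume culling
        float projected = pixelsPerRadian * n.radius / dist(eye, n.center);
        if (n.children.empty() || projected < 1.0f) {
            std::printf("splat node at (%.1f, %.1f, %.1f)\n",
                        n.center.x, n.center.y, n.center.z);
            return;
        }
        for (const Node& c : n.children)      // front-to-back ordering and
            refine(c, eye, pixelsPerRadian);  // occlusion tests would go here
    }

    int main() {
        Node root{{0, 0, -50}, 10.0f,
                  {{{-5, 0, -50}, 5.0f, {}}, {{5, 0, -50}, 5.0f, {}}}};
        refine(root, {0, 0, 0}, 1000.0f);
    }
    ```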

    Visibility computation through image generalization

    This dissertation introduces the image generalization paradigm for computing visibility. The paradigm is based on the observation that an image is a powerful tool for computing visibility. An image can be rendered efficiently with the support of graphics hardware, and each of the millions of pixels in the image reports a visible geometric primitive. However, the visibility solution computed by a conventional image is far from complete. A conventional image has a uniform sampling rate, which can miss visible geometric primitives with a small screen footprint. A conventional image can only find geometric primitives to which there is a direct line of sight from the center of projection (i.e. the eye) of the image; therefore, a conventional image cannot compute the set of geometric primitives that become visible as the viewpoint translates, or as time changes in a dynamic dataset. Finally, like any sample-based representation, a conventional image can only confirm that a geometric primitive is visible, but it cannot confirm that a geometric primitive is hidden, as that would require an infinite number of samples to confirm that the primitive is hidden at all of its points.

    The image generalization paradigm overcomes the visibility computation limitations of conventional images. The paradigm has three elements. (1) Sampling pattern generalization entails adding sampling locations to the image plane where needed to find visible geometric primitives with a small footprint. (2) Visibility sample generalization entails replacing the conventional scalar visibility sample with a higher-dimensional sample that records all geometric primitives visible at a sampling location as the viewpoint translates or as time changes in a dynamic dataset; the higher-dimensional visibility sample is computed exactly, by solving visibility event equations, and not through sampling. Another form of visibility sample generalization is to enhance a sample with its trajectory as the geometric primitive it samples moves in a dynamic dataset. (3) Ray geometry generalization redefines a camera ray as the set of 3D points that project at a given image location; this generalization supports rays that are not straight lines, and enables designing cameras with non-linear rays that circumvent occluders to gather samples not visible from a reference viewpoint.

    The image generalization paradigm has been used to develop visibility algorithms for a variety of datasets, visibility parameter domains, and performance-accuracy tradeoff requirements. These include an aggressive from-point visibility algorithm that guarantees finding all geometric primitives with a visible fragment, no matter how small the primitive's image footprint; an efficient and robust exact from-point visibility algorithm that iterates between a sample-based and a continuous visibility analysis of the image plane to quickly converge to the exact solution; a from-rectangle visibility algorithm that uses 2D visibility samples to compute a visible set that is exact under viewpoint translation; a flexible pinhole camera that enables local modulation of the sampling rate over the image plane according to an input importance map; an animated depth image that stores not only color and depth per pixel but also a compact representation of pixel sample trajectories; and a curved ray camera that seamlessly integrates multiple viewpoints into a multiperspective image without the viewpoint-transition distortion artifacts of prior methods.
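
    As an illustration of element (1), sampling pattern generalization, the sketch below uses an input importance map to locally raise the sampling rate of the image plane, so small but important primitives receive enough samples to be found. The map, rates, and names are illustrative assumptions:

    ```cpp
    #include <cstdio>
    #include <vector>

    struct Sample { float x, y; };

    // Build a non-uniform sampling pattern: one sample per pixel at
    // importance 0, up to (1 + maxExtraPerAxis)^2 stratified samples per
    // pixel at importance 1.
    std::vector<Sample> buildSamplingPattern(const std::vector<float>& importance,
                                             int w, int h, int maxExtraPerAxis) {
        std::vector<Sample> samples;
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x) {
                int n = 1 + (int)(importance[y * w + x] * maxExtraPerAxis);
                for (int sy = 0; sy < n; ++sy)
                    for (int sx = 0; sx < n; ++sx)
                        samples.push_back({x + (sx + 0.5f) / n,
                                           y + (sy + 0.5f) / n});
            }
        return samples;
    }

    int main() {
        // 2x2 importance map: the last pixel is deemed maximally important.
        std::vector<float> importance = {0.0f, 0.0f, 0.0f, 1.0f};
        auto samples = buildSamplingPattern(importance, 2, 2, 3);
        std::printf("%zu sampling locations\n", samples.size());  // 3 + 16 = 19
    }
    ```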