228 research outputs found

    Real-time rendering of cities at night

    In image synthesis, to determine the final color of a surface at a specific image pixel, we must consider all potential light sources and evaluate whether they contribute to the illumination. Since such evaluation is slow, real-time renderers traditionally do not evaluate each light source, and instead preemptively choose locally important light sources for which to evaluate visibility. A city at night is such a scene, containing many light sources for which modern real-time renderers cannot afford to evaluate every light source at every frame. We present a technique exploiting the spatial coherency of cities and the temporal coherency of real-time walkthroughs to reduce visibility evaluations in such scenes. Our technique uses the natural and predominant occluders of a city to efficiently reduce the number of light sources to evaluate. To further accelerate the evaluation, we project the bounding boxes of buildings instead of their detailed models (these boxes should be oriented mostly along a few dominant directions), and fuse adjacent occluders on an occlusion plane to form larger, conservative occluders. Our technique also integrates results from camera visibility to further reduce the number of visibility evaluations executed per frame, and evaluates visible light sources only for facades visible from the point of view of the camera. Finally, we integrate an offline rendering technique, Lightcuts, by adapting it to real-time GPU rendering to further save on rendering time. Even though our technique does not achieve real-time frame rates in a complex scene, it reduces the complexity of the problem enough that we can hope to achieve such frame rates one day.
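    A minimal sketch of the box-occluder idea (hypothetical code, not the thesis's actual algorithm, which also fuses adjacent occluders and reuses camera visibility): a light is culled for a shaded point when the segment between them hits any building bounding box, tested with the standard slab method.

```cpp
#include <vector>

// Axis-aligned box occluder (e.g. a building's bounding box).
struct AABB { float min[3], max[3]; };
struct Vec3 { float x, y, z; };

// Returns true if the segment from p (shaded point) to l (light) is
// blocked by the box, using the standard slab intersection test.
bool segmentBlocked(const AABB& b, const Vec3& p, const Vec3& l) {
    float t0 = 0.0f, t1 = 1.0f;
    float o[3] = {p.x, p.y, p.z};
    float d[3] = {l.x - p.x, l.y - p.y, l.z - p.z};
    for (int a = 0; a < 3; ++a) {
        if (d[a] == 0.0f) {
            if (o[a] < b.min[a] || o[a] > b.max[a]) return false;
            continue;
        }
        float inv = 1.0f / d[a];
        float tn = (b.min[a] - o[a]) * inv;
        float tf = (b.max[a] - o[a]) * inv;
        if (tn > tf) { float tmp = tn; tn = tf; tf = tmp; }
        if (tn > t0) t0 = tn;
        if (tf < t1) t1 = tf;
        if (t0 > t1) return false;  // slabs do not overlap: no hit
    }
    return true;  // segment passes through the box interior
}

// Cull the light list for one shaded point: a light survives only if
// no box occluder blocks the segment between the point and the light.
std::vector<Vec3> visibleLights(const std::vector<AABB>& occluders,
                                const std::vector<Vec3>& lights,
                                const Vec3& p) {
    std::vector<Vec3> out;
    for (const Vec3& l : lights) {
        bool blocked = false;
        for (const AABB& b : occluders)
            if (segmentBlocked(b, p, l)) { blocked = true; break; }
        if (!blocked) out.push_back(l);
    }
    return out;
}
```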

    A survey of real-time crowd rendering

    In this survey we review, classify and compare existing approaches for real-time crowd rendering. We first overview character animation techniques, as they are highly tied to crowd rendering performance, and then we analyze the state of the art in crowd rendering. We discuss different representations for level-of-detail (LoD) rendering of animated characters, including polygon-based, point-based, and image-based techniques, and review different criteria for runtime LoD selection. Besides LoD approaches, we review classic acceleration schemes, such as frustum culling and occlusion culling, and describe how they can be adapted to handle crowds of animated characters. We also discuss specific acceleration techniques for crowd rendering, such as primitive pseudo-instancing, palette skinning, and dynamic key-pose caching, which benefit from current graphics hardware. We also address other factors affecting the performance and realism of crowds, such as lighting, shadowing, clothing, and variability. Finally, we provide an exhaustive comparison of the most relevant approaches in the field.
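    As a concrete example of one runtime LoD selection criterion covered by such surveys, the hypothetical sketch below picks a character representation from its projected screen height; the thresholds are illustrative values, not numbers from the survey.

```cpp
#include <cmath>

// Character representations commonly used in crowd rendering,
// ordered from most to least detailed.
enum class Representation { FullMesh, SimplifiedMesh, Impostor };

// Select a representation from the character's projected screen height
// (in pixels): h_proj ~ worldHeight * screenHeight / (2 * d * tan(fov/2)).
// Thresholds are arbitrary illustration values.
Representation selectLoD(float worldHeight, float distance,
                         float fovY, float screenHeightPx) {
    float projected = worldHeight * screenHeightPx /
                      (2.0f * distance * std::tan(fovY * 0.5f));
    if (projected > 250.0f) return Representation::FullMesh;
    if (projected > 60.0f)  return Representation::SimplifiedMesh;
    return Representation::Impostor;  // image-based stand-in far away
}
```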

    Visibility computation through image generalization

    This dissertation introduces the image generalization paradigm for computing visibility. The paradigm is based on the observation that an image is a powerful tool for computing visibility. An image can be rendered efficiently with the support of graphics hardware, and each of the millions of pixels in the image reports a visible geometric primitive. However, the visibility solution computed by a conventional image is far from complete. A conventional image has a uniform sampling rate, which can miss visible geometric primitives with a small screen footprint. A conventional image can only find geometric primitives to which there is a direct line of sight from the center of projection (i.e. the eye) of the image; therefore, a conventional image cannot compute the set of geometric primitives that become visible as the viewpoint translates, or as time changes in a dynamic dataset. Finally, like any sample-based representation, a conventional image can only confirm that a geometric primitive is visible, but it cannot confirm that a geometric primitive is hidden, as that would require an infinite number of samples to confirm that the primitive is hidden at all of its points.

    The image generalization paradigm overcomes the visibility computation limitations of conventional images. The paradigm has three elements. (1) Sampling pattern generalization entails adding sampling locations to the image plane where needed to find visible geometric primitives with a small footprint. (2) Visibility sample generalization entails replacing the conventional scalar visibility sample with a higher-dimensional sample that records all geometric primitives visible at a sampling location as the viewpoint translates or as time changes in a dynamic dataset; the higher-dimensional visibility sample is computed exactly, by solving visibility event equations, and not through sampling. Another form of visibility sample generalization is to enhance a sample with its trajectory as the geometric primitive it samples moves in a dynamic dataset. (3) Ray geometry generalization redefines a camera ray as the set of 3D points that project at a given image location; this generalization supports rays that are not straight lines, and enables designing cameras with non-linear rays that circumvent occluders to gather samples not visible from a reference viewpoint.

    The image generalization paradigm has been used to develop visibility algorithms for a variety of datasets, of visibility parameter domains, and of performance-accuracy tradeoff requirements. These include an aggressive from-point visibility algorithm that guarantees finding all geometric primitives with a visible fragment, no matter how small the primitive's image footprint; an efficient and robust exact from-point visibility algorithm that iterates between a sample-based and a continuous visibility analysis of the image plane to quickly converge to the exact solution; a from-rectangle visibility algorithm that uses 2D visibility samples to compute a visible set that is exact under viewpoint translation; a flexible pinhole camera that enables local modulations of the sampling rate over the image plane according to an input importance map; an animated depth image that not only stores color and depth per pixel but also a compact representation of pixel sample trajectories; and a curved ray camera that seamlessly integrates multiple viewpoints into a multiperspective image without the viewpoint transition distortion artifacts of prior art methods.
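    For reference, the conventional baseline that the paradigm generalizes can be sketched in a few lines: an item-buffer pass reports one visible primitive per pixel, and the from-point visible set is just the set of unique IDs left in the buffer (hypothetical code, not from the dissertation).

```cpp
#include <cstdint>
#include <unordered_set>
#include <vector>

// Conventional from-point visibility from an item buffer: each pixel
// stores the ID of the primitive visible there (0 = background).
// Primitives whose footprint falls between pixel centers are missed,
// which is exactly the limitation sampling-pattern generalization fixes.
std::unordered_set<uint32_t>
visibleSet(const std::vector<uint32_t>& itemBuffer) {
    std::unordered_set<uint32_t> visible;
    for (uint32_t id : itemBuffer)
        if (id != 0) visible.insert(id);
    return visible;
}
```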

    Fast scalable visualization techniques for interactive billion-particle walkthrough

    This research develops a comprehensive framework for interactive walkthrough involving one billion particles in an immersive virtual environment to enable interrogative visualization of large atomistic simulation data. As a mixture of scientific and engineering approaches, the framework is based on four key techniques: adaptive data compression based on space-filling curves, octree-based visibility and occlusion culling, predictive caching based on machine learning, and scalable data reduction based on parallel and distributed processing. In terms of parallel rendering, this system combines functional parallelism, data parallelism, and temporal parallelism to improve interactivity. The visualization framework will be applicable not only to material simulation, but also to computational biology, applied mathematics, mechanical engineering, and nanotechnology.
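    To illustrate the space-filling-curve ingredient (the framework's own compression scheme is more elaborate), a 3D Morton/Z-order key interleaves coordinate bits so that particles near each other in space stay near each other in the 1D ordering:

```cpp
#include <cstdint>

// Spread the low 21 bits of v so each bit occupies every third position.
static uint64_t spreadBits3(uint64_t v) {
    v &= 0x1FFFFFULL;                   // keep 21 bits
    v = (v | v << 32) & 0x1F00000000FFFFULL;
    v = (v | v << 16) & 0x1F0000FF0000FFULL;
    v = (v | v << 8)  & 0x100F00F00F00F00FULL;
    v = (v | v << 4)  & 0x10C30C30C30C30C3ULL;
    v = (v | v << 2)  & 0x1249249249249249ULL;
    return v;
}

// 3D Morton key: interleave x, y, z bits (x in the lowest position).
// Sorting particles by this key groups spatially coherent particles,
// which helps both compression and octree construction.
uint64_t mortonEncode(uint32_t x, uint32_t y, uint32_t z) {
    return spreadBits3(x) | (spreadBits3(y) << 1) | (spreadBits3(z) << 2);
}
```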

    Efficient multiple occlusion queries for scene graph systems

    Image space occlusion culling is a useful approach to reduce the rendering load of large polygonal models. Like most large-model techniques, it trades overhead costs against the rendering costs of the possibly occluded geometry. Modern graphics hardware now supports occlusion culling directly, but these hardware extensions consume fill rate and incur latency. In this paper, we propose a new technique for scene graph traversal optimized for the efficient use of occlusion queries. Our approach uses several Occupancy Maps to organize the scene graph traversal. During traversal, hierarchical occlusion culling, view-frustum culling, and rendering are performed. The occlusion information is determined efficiently by asynchronous multiple occlusion queries using hardware-supported query functionality. To avoid redundant results, we arrange these multiple occlusion queries according to the information in the Occupancy Maps. The presented technique is conservative and benefits from a partial depth order of the geometry.
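    A minimal sketch of asynchronous multiple occlusion queries using the standard GL query API (an active GL context is assumed; drawBoundingBox and drawGeometry are hypothetical hooks, and the paper's Occupancy-Map ordering is omitted): all queries are issued back to back against cheap proxy geometry before any result is read, which hides the per-query latency the paper works around.

```cpp
#include <GL/glew.h>   // any loader exposing the GL occlusion query API
#include <vector>

// Hypothetical scene-node hooks; not from the paper.
struct Node { /* ... */ };
void drawBoundingBox(const Node&);   // cheap proxy geometry
void drawGeometry(const Node&);      // full node contents

void testAndRender(const std::vector<Node*>& candidates) {
    std::vector<GLuint> queries(candidates.size());
    glGenQueries((GLsizei)queries.size(), queries.data());

    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);  // proxy pass:
    glDepthMask(GL_FALSE);                                // no writes
    for (size_t i = 0; i < candidates.size(); ++i) {
        glBeginQuery(GL_SAMPLES_PASSED, queries[i]);
        drawBoundingBox(*candidates[i]);
        glEndQuery(GL_SAMPLES_PASSED);
    }
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_TRUE);

    // Collect results only after all queries are in flight.
    for (size_t i = 0; i < candidates.size(); ++i) {
        GLuint samples = 0;
        glGetQueryObjectuiv(queries[i], GL_QUERY_RESULT, &samples);
        if (samples > 0) drawGeometry(*candidates[i]);  // not occluded
    }
    glDeleteQueries((GLsizei)queries.size(), queries.data());
}
```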

    Efficient Geometry and Illumination Representations for Interactive Protein Visualization

    This dissertation explores techniques for interactive simulation and visualization for large protein datasets. My thesis is that using efficient representations for geometric and illumination data can help in developing algorithms that achieve better interactivity for visual and computational proteomics. I show this by developing new algorithms for computation and visualization for proteins. I also show that the same insights that resulted in better algorithms for visual proteomics can be turned around and used for more efficient graphics rendering. Molecular electrostatics is important for studying the structures and interactions of proteins, and is vital in many computational biology applications, such as protein folding and rational drug design. We have developed a system to efficiently solve the non-linear Poisson-Boltzmann equation governing molecular electrostatics. Our system simultaneously improves the accuracy and the efficiency of the solution by adaptively refining the computational grid near the solute-solvent interface. In addition, we have explored the possibility of mapping the PBE solution onto GPUs. We use pre-computed accumulation of transparency with spherical-harmonics-based compression to accelerate volume rendering of molecular electrostatics. We have also designed a time- and memory-efficient algorithm for interactive visualization of large dynamic molecules. With view-dependent precision control and memory-bandwidth reduction, we have achieved real-time visualization of dynamic molecular datasets with tens of thousands of atoms. Our algorithm is linearly scalable in the size of the molecular datasets. In addition, we present a compact mathematical model to efficiently represent the six-dimensional integrals of bidirectional surface scattering reflectance distribution functions (BSSRDFs) to render scattering effects in translucent materials interactively. Our analysis first reduces the complexity and dimensionality of the problem by decomposing the reflectance field into non-scattered and subsurface-scattered reflectance fields. While the non-scattered reflectance field can be described by 4D bidirectional reflectance distribution functions (BRDFs), we show that the scattered reflectance field can also be represented by a 4D field through pre-processing the neighborhood scattering radiance transfer integrals. We use a novel reference-points scheme to compactly represent the pre-computed integrals using a hierarchical and progressive spherical harmonics representation. Our algorithm scales linearly with the number of mesh vertices.
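    As a small illustration of the spherical-harmonics compression ingredient (hypothetical code; the system's actual pipeline projects precomputed transparency accumulation and uses more bands), projecting a spherical function onto a few SH coefficients replaces a dense directional table with a handful of numbers:

```cpp
#include <cmath>
#include <random>
#include <vector>

// First four real spherical harmonics (bands l = 0, 1) for a unit
// direction (x, y, z); constants are the standard normalizations.
static float shBasis(int i, float x, float y, float z) {
    switch (i) {
        case 0:  return 0.282095f;        // Y_0^0
        case 1:  return 0.488603f * y;    // Y_1^-1
        case 2:  return 0.488603f * z;    // Y_1^0
        default: return 0.488603f * x;    // Y_1^1
    }
}

// Monte Carlo projection of a spherical function f onto 4 SH
// coefficients: c_i = (4*pi / N) * sum over uniform directions d of
// f(d) * Y_i(d). Uniform directions come from normalized Gaussians.
template <class F>
std::vector<float> projectSH(F f, int numSamples = 4096) {
    std::mt19937 rng(7);
    std::normal_distribution<float> gauss(0.0f, 1.0f);
    std::vector<float> c(4, 0.0f);
    for (int s = 0; s < numSamples; ++s) {
        float x = gauss(rng), y = gauss(rng), z = gauss(rng);
        float n = std::sqrt(x * x + y * y + z * z);
        if (n == 0.0f) { --s; continue; }   // degenerate draw: retry
        x /= n; y /= n; z /= n;
        float v = f(x, y, z);
        for (int i = 0; i < 4; ++i) c[i] += v * shBasis(i, x, y, z);
    }
    const float fourPi = 4.0f * 3.14159265f;
    for (float& ci : c) ci *= fourPi / numSamples;
    return c;
}
```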

    Hierarchical Variance Reduction Techniques for Monte Carlo Rendering

    Ever since the first three-dimensional computer graphics appeared half a century ago, the goal has been to model and simulate how light interacts with materials and objects to form an image. The ultimate goal is photorealistic rendering, where the created images reach a level of accuracy that makes them indistinguishable from photographs of the real world. There are many applications: visualization of products and architectural designs yet to be built, special effects, computer-generated films, virtual reality, and video games, to name a few. However, the problem has proven tremendously complex; the illumination at any point is described by a recursive integral to which a closed-form solution seldom exists. Instead, computer simulation and Monte Carlo methods are commonly used to statistically estimate the result. This introduces undesirable noise, or variance, and a large body of research has been devoted to finding ways to reduce the variance. I continue along this line of research, and present several novel techniques for variance reduction in Monte Carlo rendering, as well as a few related tools. The research in this dissertation focuses on using importance sampling to pick a small set of well-distributed point samples. As the primary contribution, I have developed the first methods to explicitly draw samples from the product of distant high-frequency lighting and complex reflectance functions. By sampling the product, low noise results can be achieved using a very small number of samples, which is important to minimize the rendering times. Several different hierarchical representations are explored to allow efficient product sampling. In the first publication, the key idea is to work in a compressed wavelet basis, which allows fast evaluation of the product. Many of the initial restrictions of this technique were removed in follow-up work, allowing higher-resolution uncompressed lighting and avoiding precomputation of reflectance functions. My second main contribution is to present one of the first techniques to take the triple product of lighting, visibility and reflectance into account to further reduce the variance in Monte Carlo rendering. For this purpose, control variates are combined with importance sampling to solve the problem in a novel way. A large part of the technique also focuses on analysis and approximation of the visibility function. To further refine the above techniques, several useful tools are introduced. These include a fast, low-distortion map to represent (hemi)spherical functions, a method to create high-quality quasi-random points, and an optimizing compiler for analyzing shaders using interval arithmetic. The latter automatically extracts bounds for importance sampling of arbitrary shaders, as opposed to using a priori known reflectance functions. In summary, the work presented here takes the field of computer graphics one step further towards making photorealistic rendering practical for a wide range of uses. By introducing several novel Monte Carlo methods, more sophisticated lighting and materials can be used without increasing the computation times. The research is aimed at domain-specific solutions to the rendering problem, but I believe that much of the new theory is applicable in other parts of computer graphics, as well as in other fields.
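    To make the combination of control variates and importance sampling concrete, here is a generic one-dimensional sketch (not the dissertation's product-sampling method): with sample density p and a control variate g whose integral G is known analytically, the integral of f is estimated as G plus the average of (f - g)/p over samples drawn from p; the variance shrinks as g tracks f more closely.

```cpp
#include <cmath>
#include <random>

// Estimate I = integral of f over [0,1] with importance sampling plus
// a control variate g whose integral G_integral is known in closed form.
template <class F, class G, class P, class Sampler>
double estimate(F f, G g, double G_integral, P p, Sampler draw,
                int n, std::mt19937& rng) {
    double sum = 0.0;
    for (int i = 0; i < n; ++i) {
        double x = draw(rng);                 // x ~ p
        sum += (f(x) - g(x)) / p(x);          // residual, density-weighted
    }
    return G_integral + sum / n;              // unbiased estimator
}

int main() {
    std::mt19937 rng(42);
    auto f = [](double x) { return std::exp(x); };   // integrand
    auto g = [](double x) { return 1.0 + x; };       // control variate ~ f
    double G = 1.5;                                  // integral of g on [0,1]
    auto p = [](double x) { (void)x; return 1.0; };  // uniform density
    std::uniform_real_distribution<double> u(0.0, 1.0);
    auto draw = [&u](std::mt19937& r) { return u(r); };
    double I = estimate(f, g, G, p, draw, 10000, rng);  // ~ e - 1
    (void)I;
}
```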

    Point based graphics rendering with unified scalability solutions.

    Standard real-time 3D graphics rendering algorithms use brute-force polygon rendering, with complexity linear in the number of polygons and little regard for limiting processing to data that contributes to the image. Modern hardware can now render smaller scenes to pixel levels of detail, relaxing surface connectivity requirements. Sub-linear scalability optimizations are typically self-contained, requiring specific data structures, without shared functions and data. A new point-based rendering algorithm, 'Canopy', is investigated that combines multiple typically sub-linear scalability solutions using a small core of data structures. Specifically, locale management, hierarchical view volume culling, backface culling, occlusion culling, level of detail and depth ordering are addressed. To demonstrate versatility further, shadows and collision detection are examined. Polygon models are voxelized with interpolated attributes to provide points. A scene tree is constructed, based on a BSP tree of points, with compressed attributes. The scene tree is embedded in a compressed, partitioned, procedurally based scene graph architecture that mimics conventional systems with groups, instancing, inlines and basic read-on-demand rendering from backing store. Hierarchical scene tree refinement constructs an image-space equivalent, the image tree, with object-space scene node points projected to form image node equivalents. An image graph of image nodes is maintained, describing image- and object-space occlusion relationships, hierarchically refined in front-to-back order to a specified threshold while occlusion culling with occluder fusion. Visible nodes at medium levels of detail are refined further to rasterization scales. Occlusion culling defines a set of visible nodes that can support caching for temporal coherence. Occlusion culling is approximate, so it may not suit critical applications. Qualities and performance are tested against standard rendering. Although the algorithm has an O(f) upper bound in the scene size f, it is shown to scale sub-linearly in practice. Scenes that would conventionally contain several hundred billion polygons are rendered at interactive frame rates with minimal graphics hardware support.
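    The depth-ordering ingredient can be sketched with a classic front-to-back BSP traversal (hypothetical code; 'Canopy' builds its scene tree on a BSP of points and adds compression, culling, and image-graph machinery on top):

```cpp
struct Vec3 { float x, y, z; };

// Hypothetical BSP node over a point cloud: a splitting plane
// (normal . p = d) with child subtrees on each side.
struct BspNode {
    Vec3 normal;  float d;
    BspNode* front = nullptr;   // half-space where normal . p > d
    BspNode* back  = nullptr;
    void render() const { /* draw this node's points */ }
};

static float side(const BspNode& n, const Vec3& eye) {
    return n.normal.x * eye.x + n.normal.y * eye.y +
           n.normal.z * eye.z - n.d;
}

// Front-to-back traversal: always descend first into the child on the
// eye's side of the plane, so nearer points are emitted before the
// points they may occlude: the ordering occlusion culling relies on.
void renderFrontToBack(const BspNode* node, const Vec3& eye) {
    if (!node) return;
    const BspNode* nearChild = side(*node, eye) >= 0.0f ? node->front
                                                        : node->back;
    const BspNode* farChild  = nearChild == node->front ? node->back
                                                        : node->front;
    renderFrontToBack(nearChild, eye);
    node->render();
    renderFrontToBack(farChild, eye);
}
```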

    Sparse Volumetric Deformation

    Volume rendering is becoming increasingly popular as applications require realistic solid shape representations with seamless texture mapping and accurate filtering. However, rendering sparse volumetric data is difficult because of the limited memory and processing capabilities of current hardware. To address these limitations, the volumetric information can be stored at progressive resolutions in the hierarchical branches of a tree structure, and sampled according to the region of interest. This means that only a partial region of the full dataset is processed, and therefore massive volumetric scenes can be rendered efficiently. The problem with this approach is that it currently only supports static scenes, because it is difficult to accurately deform massive numbers of volume elements and reconstruct the scene hierarchy in real-time. Another problem is that deformation operations distort the shape where more than one volume element tries to occupy the same location, and similarly gaps occur where deformation stretches the elements further than one discrete location. It is also challenging to efficiently support sophisticated deformations at hierarchical resolutions, such as character skinning or physically based animation. These types of deformation are expensive and require a control structure (for example a cage or skeleton) that maps to a set of features to accelerate the deformation process. The problems with this technique are that the varying volume hierarchy reflects different feature sizes, and manipulating the features at the original resolution is too expensive; therefore the control structure must also hierarchically capture features according to the varying volumetric resolution. This thesis investigates the area of deforming and rendering massive amounts of dynamic volumetric content. The proposed approach efficiently deforms hierarchical volume elements without introducing artifacts, and supports both ray-casting and rasterization renderers. This enables light transport to be modeled both accurately and efficiently, with applications in the fields of real-time rendering and computer animation. Sophisticated volumetric deformation, including character animation, is also supported in real-time. This is achieved by automatically generating a control skeleton which is mapped to the varying feature resolution of the volume hierarchy. The output deformations are demonstrated in massive dynamic volumetric scenes.
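    A minimal sketch of the resolution-selection step common to such hierarchical volume renderers (hypothetical code, not the thesis's data structure): descend the octree only while a node's projected footprint exceeds a pixel, so distant regions are sampled from coarse levels and only the region of interest touches fine levels.

```cpp
// Hypothetical sparse-volume octree node: children are null where the
// volume is empty, and each level halves the voxel size.
struct OctreeNode {
    OctreeNode* child[8] = {nullptr};
    float center[3];
    float halfSize;
    void emitVoxel() const { /* hand this voxel to the renderer */ }
};

// Stop descending once the node's projected size drops below one pixel:
// projected pixels ~ size * k / distance, where k folds in FOV and
// viewport constants, so "fine enough" is size^2 * k^2 < distance^2.
void collect(const OctreeNode* n, const float eye[3], float k) {
    if (!n) return;
    float dx = n->center[0] - eye[0];
    float dy = n->center[1] - eye[1];
    float dz = n->center[2] - eye[2];
    float dist2 = dx * dx + dy * dy + dz * dz;
    float size = 2.0f * n->halfSize;
    bool fineEnough = size * size * k * k < dist2;
    bool leaf = true;
    for (int i = 0; i < 8; ++i)
        if (n->child[i]) leaf = false;
    if (fineEnough || leaf) { n->emitVoxel(); return; }
    for (int i = 0; i < 8; ++i)
        collect(n->child[i], eye, k);
}
```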