Optimization techniques for computationally expensive rendering algorithms
Realistic rendering in computer graphics simulates the interactions of light and surfaces. While many accurate models for surface reflection and lighting, including solid surfaces and participating media, have been described, most of them rely on intensive computation. Common practices such as adding constraints and assumptions can increase performance, but they may compromise the quality of the resulting images or the variety of phenomena that can be accurately represented. In this thesis, we will focus on rendering methods that require large amounts of computational resources. Our intention is to consider several conceptually different approaches capable of reducing these requirements with only limited implications for the quality of the results. The first part of this work will study the rendering of time-varying participating media. Examples of this type of matter are smoke, optically thick gases, and any material that, unlike a vacuum, scatters and absorbs the light that travels through it. We will focus on a subset of algorithms that approximate realistic illumination using images of real-world scenes. Starting from the traditional ray marching algorithm, we will suggest and implement different optimizations that allow performing the computation at interactive frame rates. This thesis will also analyze two different aspects of the generation of anti-aliased images. The first targets the rendering of screen-space anti-aliased images and the reduction of the artifacts generated in rasterized lines and edges. We expect to describe an implementation that, working as a post-process, is efficient enough to be added to existing rendering pipelines with reduced performance impact. A third method will take advantage of the limitations of the human visual system (HVS) to reduce the resources required to render temporally anti-aliased images. While film and digital cameras naturally produce motion blur, rendering pipelines need to simulate it explicitly.
This process is known to be one of the most significant burdens for any rendering pipeline. Motivated by this, we plan to run a series of psychophysical experiments targeted at identifying groups of motion-blurred images that are perceptually equivalent. A possible outcome is the proposal of criteria that may lead to reductions in rendering budgets.
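The ray marching procedure that the thesis takes as its starting point can be illustrated with a minimal sketch: march a ray through an absorbing medium, accumulate optical depth, and convert it to transmittance. The density field, step count, and absorption coefficient below are hypothetical stand-ins, not the thesis's implementation or its optimizations.

```python
import math

def density(x, y, z):
    # Hypothetical smooth density field standing in for a smoke volume:
    # positive inside the unit sphere, zero outside.
    return max(0.0, 1.0 - (x * x + y * y + z * z))

def ray_march(origin, direction, t_max=2.0, steps=64, sigma_a=1.5):
    """Accumulate transmittance along a ray through an absorbing medium
    (emission and scattering omitted): T = exp(-sigma_a * integral of density)."""
    dt = t_max / steps
    optical_depth = 0.0
    for i in range(steps):
        t = (i + 0.5) * dt  # midpoint sampling of each segment
        p = [origin[k] + t * direction[k] for k in range(3)]
        optical_depth += density(*p) * dt
    return math.exp(-sigma_a * optical_depth)  # transmittance in [0, 1]
```

In a real renderer this loop runs per pixel, which is precisely the cost that the interactive-frame-rate optimizations mentioned above aim to reduce.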
Morphological Antialiasing and Topological Reconstruction
Morphological antialiasing is a post-processing approach that does not require computing additional samples. The original algorithm acts as a non-linear filter, ill-suited to massively parallel hardware architectures. We redesign the initial method using multiple passes, with, in particular, a new approach to line-length computation. We also introduce the notion of topological reconstruction to correct the weaknesses of post-processing antialiasing techniques. Our method runs as a pure post-process filter, providing full-image antialiasing at high frame rates and competing with traditional MSAA.
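The first pass of a morphological antialiasing pipeline, discontinuity detection, can be sketched on the CPU as follows. This is a generic illustration of MLAA-style edge detection (the luminance weights and threshold are common choices, not this paper's exact parameters), with the later pattern-classification and blending passes omitted.

```python
def luma(c):
    # Standard Rec. 601 luminance weights for an (r, g, b) tuple in [0, 1].
    r, g, b = c
    return 0.299 * r + 0.587 * g + 0.114 * b

def detect_edges(img, threshold=0.1):
    """First MLAA-style pass: flag left/top luminance discontinuities per pixel.
    Returns (left_edge, top_edge) boolean grids, one entry per pixel."""
    h, w = len(img), len(img[0])
    left = [[False] * w for _ in range(h)]
    top = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            l = luma(img[y][x])
            if x > 0 and abs(l - luma(img[y][x - 1])) > threshold:
                left[y][x] = True
            if y > 0 and abs(l - luma(img[y - 1][x])) > threshold:
                top[y][x] = True
    return left, top
```

The multi-pass GPU redesign described above restructures exactly this kind of sequential, neighborhood-dependent work into parallel-friendly passes.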
The Comparison of three 3D graphics raster processors and the design of another
There are a number of 3D graphics accelerator architectures on the market today. One of the largest issues in the design of a 3D accelerator is affordability for the home user while still delivering good performance. Three such architectures were analyzed: the Heresy architecture defined by Chiueh [2], the Talisman architecture defined by Torborg [7], and the Tayra architecture's specification by White [9]. Portions of these three architectures were used to create a new architecture taking advantage of as many of their features as possible. The advantage of chunking is analyzed, along with the advantages of a single-cycle z-buffering algorithm. It was found that Fast Phong Shading is not suitable for implementation in this pipeline, and that the clipping algorithm should be eliminated in favor of a scissoring algorithm.
Ray Tracing Gems
This book is a must-have for anyone serious about rendering in real time. With the announcement of new ray tracing APIs and hardware to support them, developers can easily create real-time applications with ray tracing as a core component. As ray tracing on the GPU becomes faster, it will play a more central role in real-time rendering. Ray Tracing Gems provides key building blocks for developers of games, architectural applications, visualizations, and more. Experts in rendering share their knowledge by explaining everything from nitty-gritty techniques that will improve any ray tracer to mastery of the new capabilities of current and future hardware.
What you'll learn:
- The latest ray tracing techniques for developing real-time applications in multiple domains
- Guidance, advice, and best practices for rendering applications with Microsoft DirectX Raytracing (DXR)
- How to implement high-performance graphics for interactive visualizations, games, simulations, and more
Who this book is for:
- Developers who are looking to leverage the latest APIs and GPU technology for real-time rendering and ray tracing
- Students looking to learn about best practices in these areas
- Enthusiasts who want to understand and experiment with their new GPU
Efficient algorithms for occlusion culling and shadows
The goal of this research is to develop more efficient techniques for computing the visibility and shadows in real-time rendering of three-dimensional scenes. Visibility algorithms determine what is visible from a camera, whereas shadow algorithms solve the same problem from the viewpoint of a light source.
In rendering, a lot of computational resources are often spent on primitives that are not visible in the final image. One visibility algorithm for reducing the overhead is occlusion culling, which quickly discards the objects or primitives that are obstructed from the view by other primitives. A new method is presented for performing occlusion culling using silhouettes of meshes instead of triangles. Additionally, modifications are suggested to occlusion queries in order to reduce their computational overhead.
The performance of currently available graphics hardware depends on the ordering of input primitives. A new technique, called delay streams, is proposed as a generic solution to order-dependent problems. The technique significantly reduces the pixel processing requirements by improving the efficiency of occlusion culling inside graphics hardware. Additionally, the memory requirements of order-independent transparency algorithms are reduced.
A shadow map is a discretized representation of the scene geometry as seen by a light source. Typically the discretization causes difficult aliasing issues, such as jagged shadow boundaries and incorrect self-shadowing. A novel solution is presented for suppressing all types of aliasing artifacts by providing the correct sampling points for shadow maps, thus fully abandoning the previously used regular structures. Also, a simple technique is introduced for limiting the shadow map lookups to the pixels that get projected inside the shadow map.
The fillrate problem of hardware-accelerated shadow volumes is greatly reduced with a new hierarchical rendering technique. The algorithm performs per-pixel shadow computations only at visible shadow boundaries, and uses lower resolution shadows for the parts of the screen that are guaranteed to be either fully lit or fully in shadow.
The proposed techniques are expected to improve rendering performance in most real-time applications that use 3D graphics, especially in computer games. More efficient algorithms for occlusion culling and shadows are important steps towards larger, more realistic virtual environments.
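The shadow map lookup that these shadow techniques build on reduces to a depth comparison in light space. The following is a minimal sketch with a hypothetical normalized-coordinate convention and a constant depth bias; note that the thesis's aliasing-free sampling specifically replaces the regular grid assumed here.

```python
def shadow_test(shadow_map, light_space_pos, bias=0.005):
    """Classic shadow-map comparison: a point is lit iff its depth from the
    light is not greater than the stored nearest depth, plus a small bias
    that suppresses incorrect self-shadowing from discretization.
    light_space_pos is (x, y, depth) with x, y normalized to [0, 1]."""
    x, y, depth = light_space_pos
    h, w = len(shadow_map), len(shadow_map[0])
    # Map normalized coordinates to texel indices, clamped to the map bounds.
    tx = min(w - 1, max(0, int(x * w)))
    ty = min(h - 1, max(0, int(y * h)))
    return depth <= shadow_map[ty][tx] + bias
```

The jagged boundaries and self-shadowing artifacts mentioned above come from this nearest-texel discretization, which is why choosing the sampling points correctly removes them.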
Efficient Algorithms for Large-Scale Image Analysis
This work develops highly efficient algorithms for analyzing large images. Applications include object-based change detection and screening. The algorithms are 10-100 times as fast as existing software, sometimes even outperforming FPGA/GPU hardware, because they are designed to suit the computer architecture. This thesis describes the implementation details and the underlying algorithm-engineering methodology, so that both may also be applied to other applications.
New Simulation and Fusion Techniques for Assessing and Enhancing UAS Topographic and Bathymetric Point Cloud Accuracy
Imagery acquired from unmanned aircraft systems (UAS) and processed with structure from motion (SfM) and multi-view stereo (MVS) algorithms provides transformative new capabilities for surveying and mapping. Together, these new tools are leading to a democratization of airborne surveying and mapping by enabling similar capabilities (including similar or better accuracies, albeit from substantially lower altitudes) at a fraction of the cost and size of conventional aircraft. While SfM-MVS processing is becoming widely used for mapping topography, and more recently bathymetry, empirical accuracy assessments (especially those aimed at investigating the sensitivity of point cloud accuracy to varying acquisition and processing parameters) can be difficult, expensive, and logistically complicated. Additional challenges in bathymetric mapping from UAS imagery using SfM-MVS software relate to refraction-induced errors and lack of coverage in areas of homogeneous sandy substrate. This dissertation aims to address these challenges through the development and testing of new algorithms for SfM-MVS accuracy assessment and bathymetry retrieval.
A new tool for simulating UAS imagery, simUAS, is presented and used to assess SfM-MVS accuracy for topographic mapping (Chapter 2) and bathymetric mapping (Chapter 3). The importance of simUAS is that it can be used to precisely vary one parameter at a time while perfectly fixing all others, which is possible because the UAS data are synthetically generated. Hence, the issues of uncontrolled variables, such as changing illumination levels and moving objects in the scene, which occur in empirical experiments using real UAS, are eliminated. Furthermore, simulated experiments using this approach can be performed without the need for costly and time-intensive fieldwork. The results of these studies demonstrate how processing settings and initial camera position accuracy relate to the accuracy of the resultant point cloud. For bathymetric processing, it was found that camera position accuracy is particularly important for generating accurate results.
Even when accurate camera positions are acquired for bathymetric data, SfM-MVS processing is still unable to resolve depths in regions that lack seafloor texture, such as sandy, homogeneous substrate. A new methodology is introduced and tested that uses the results of SfM-MVS processing to train a radiometric model, which estimates water depth based on the wavelength-dependent attenuation of light in the water column (Chapter 4). The methodology is shown to increase the spatial coverage and improve the accuracy of the bathymetric data at a field site on Buck Island off St. Croix in the U.S. Virgin Islands. Collectively, this work is anticipated to facilitate greater use of UAS for nearshore bathymetric mapping.
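The wavelength-dependent attenuation described in Chapter 4 belongs to a family of radiometric depth models based on the Beer-Lambert law, L = L0 * exp(-k * z) + ambient. The sketch below fits such a model to SfM-MVS-derived training depths by log-linear least squares; the model form, function names, and the ambient term are illustrative assumptions, not the dissertation's exact method.

```python
import math

def fit_radiometric_depth(radiance, depth, ambient=0.0):
    """Fit ln(L - ambient) = a - k * z by ordinary least squares, following
    the Beer-Lambert form L = L0 * exp(-k * z) + ambient.
    radiance, depth: paired samples taken where SfM-MVS resolved the seafloor."""
    xs = depth
    ys = [math.log(l - ambient) for l in radiance]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    a = my - slope * mx
    k = -slope  # attenuation coefficient for this band
    return a, k

def predict_depth(a, k, radiance, ambient=0.0):
    """Invert the fitted model to estimate depth in textureless regions."""
    return (a - math.log(radiance - ambient)) / k
```

Once trained on the SfM-MVS-resolved points, `predict_depth` can fill in the sandy, homogeneous areas where image matching fails, which is the coverage gain reported above.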
Real-time deep image rendering and order-independent transparency
In computer graphics, some operations can be performed in either object space or image space. Image space computation can be advantageous, especially with the high parallelism of GPUs, improving speed, accuracy, and ease of implementation. For many image space techniques the information contained in regular 2D images is limiting. Recent graphics hardware features, namely atomic operations and dynamic memory location writes, now make it possible to capture and store all per-pixel fragment data from the rasterizer in a single pass in what we call a deep image. A deep image provides a state where all fragments are available and gives a more complete image-based geometry representation, providing new possibilities in image-based rendering techniques. This thesis investigates deep images and their growing use in real-time image space applications. A focus is new techniques for improving fundamental operation performance, including construction, storage, fast fragment sorting, and sampling. A core and driving application is order-independent transparency (OIT). A number of deep image sorting improvements are presented, through which an order-of-magnitude performance increase is achieved, significantly advancing the ability to perform transparency rendering in real time. In the broader context of image-based rendering, we look at deep images as a discretized 3D geometry representation and discuss sampling techniques for raycasting and antialiasing with an implicit fragment connectivity approach. Using these ideas, a more computationally complex application is investigated: image-based depth of field (DoF). Deep images are used to provide partial occlusion, and in particular a form of deep image mipmapping allows a fast approximate defocus blur of up to full screen size.
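The OIT resolve step that the fragment-sorting improvements accelerate can be sketched for a single pixel: sort the fragments captured in the deep image by depth, then composite back-to-front with the standard "over" operator. This CPU sketch is purely illustrative; the thesis's techniques perform this per pixel on the GPU.

```python
def resolve_oit(fragments, background=(0.0, 0.0, 0.0)):
    """Resolve order-independent transparency for one pixel: sort the captured
    fragments by depth (a deep image stores them unordered), then composite
    back-to-front with the 'over' operator.
    Each fragment is (depth, (r, g, b), alpha)."""
    color = list(background)
    # Back-to-front: farthest fragment (largest depth) first.
    for depth, rgb, alpha in sorted(fragments, key=lambda f: f[0], reverse=True):
        for i in range(3):
            color[i] = rgb[i] * alpha + color[i] * (1.0 - alpha)
    return tuple(color)
```

Because every pixel repeats this sort, even a small constant-factor improvement in per-pixel fragment sorting compounds into the order-of-magnitude gain reported above.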
TetraDiffusion: Tetrahedral Diffusion Models for 3D Shape Generation
Probabilistic denoising diffusion models (DDMs) have set a new standard for 2D image generation. Extending DDMs for 3D content creation is an active field of research. Here, we propose TetraDiffusion, a diffusion model that operates on a tetrahedral partitioning of 3D space to enable efficient, high-resolution 3D shape generation. Our model introduces operators for convolution and transpose convolution that act directly on the tetrahedral partition, and seamlessly includes additional attributes such as color. Remarkably, TetraDiffusion enables rapid sampling of detailed 3D objects in near real-time with unprecedented resolution. It is also adaptable for generating 3D shapes conditioned on 2D images. Compared to existing 3D mesh diffusion techniques, our method is up to 200 times faster in inference, works on standard consumer hardware, and delivers superior results.
Comment: This version introduces a substantial update of arXiv:2211.13220v1 with significant changes in the framework and entirely new results. Project page: https://tetradiffusion.github.io