
    Anti-aliasing with stratified B-spline filters of arbitrary degree

    A simple and elegant method is presented to perform anti-aliasing in ray-traced images. The method uses stratified sampling to reduce the occurrence of artefacts in an image and features a B-spline filter to compute the final luminous intensity at each pixel. The method is scalable through the specification of the filter degree. A B-spline filter of degree one amounts to a simple anti-aliasing scheme with box filtering. Increasing the degree of the B-spline generates progressively smoother filters. Computation of the filter values is done recursively, as part of a sequence of Newton-Raphson iterations, to obtain the optimal sample positions in screen space. The proposed method can perform anti-aliasing both in space and in time, the latter being more commonly known as motion blur. We show an application of the method to the ray casting of implicit procedural surfaces.
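
    A rough illustration of the filtering step (a sketch, not the paper's implementation): the code below evaluates a centred uniform B-spline kernel with the standard Cox-de Boor recursion and uses it to weight one stratified, jittered sample per stratum of a pixel's filter footprint. The paper's Newton-Raphson optimisation of sample positions and its temporal (motion-blur) filtering are omitted, radiance(x, y) is an assumed caller-supplied shading function, and the degree convention here starts at 0 for a box filter rather than the paper's 1.

        import random

        def cardinal_bspline(t, order):
            # Cox-de Boor recursion for the cardinal B-spline of the given
            # order (order = degree + 1), supported on [0, order].
            if order == 1:
                return 1.0 if 0.0 <= t < 1.0 else 0.0
            k = order
            return (t / (k - 1)) * cardinal_bspline(t, k - 1) \
                 + ((k - t) / (k - 1)) * cardinal_bspline(t - 1.0, k - 1)

        def bspline(x, degree):
            # Centred uniform B-spline kernel: degree 0 is a box, 1 a tent,
            # 3 the familiar cubic; support width is degree + 1 pixels.
            return cardinal_bspline(x + 0.5 * (degree + 1), degree + 1)

        def filter_pixel(radiance, px, py, degree=3, strata=4):
            # One jittered sample per stratum of the filter footprint,
            # weighted by a separable B-spline kernel and normalised.
            half = 0.5 * (degree + 1)
            total = wsum = 0.0
            for i in range(strata):
                for j in range(strata):
                    dx = -half + (i + random.random()) / strata * (degree + 1)
                    dy = -half + (j + random.random()) / strata * (degree + 1)
                    w = bspline(dx, degree) * bspline(dy, degree)
                    total += w * radiance(px + 0.5 + dx, py + 0.5 + dy)
                    wsum += w
            return total / wsum if wsum > 0.0 else 0.0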

    Linear-Time Poisson-Disk Patterns

    We present an algorithm for generating Poisson-disc patterns that takes O(N) time to generate N points. The method is based on a grid of regions, each of which can contain no more than one point in the final pattern, and uses an explicit model of point arrival times under a uniform Poisson process.
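
    For context, the sketch below shows the generic one-point-per-cell grid that such methods build on: with cells of side r/sqrt(2), each cell can hold at most one accepted point, so every conflict check touches only a constant-size neighbourhood. This is not the paper's algorithm; its explicit model of Poisson arrival times, which is what yields the O(N) bound, is replaced here by a simple bounded candidate budget per cell.

        import math, random

        def poisson_disk(width, height, r, tries=30):
            cell = r / math.sqrt(2.0)          # at most one accepted point per cell
            nx = int(math.ceil(width / cell))
            ny = int(math.ceil(height / cell))
            grid = [[None] * ny for _ in range(nx)]
            points = []

            def far_enough(p):
                # Only the 5x5 block of neighbouring cells can contain a
                # point closer than r, so the check is constant time.
                gx, gy = int(p[0] / cell), int(p[1] / cell)
                for i in range(max(gx - 2, 0), min(gx + 3, nx)):
                    for j in range(max(gy - 2, 0), min(gy + 3, ny)):
                        q = grid[i][j]
                        if q is not None and (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 < r * r:
                            return False
                return True

            cells = [(i, j) for i in range(nx) for j in range(ny)]
            random.shuffle(cells)              # avoid directional bias
            for gx, gy in cells:
                for _ in range(tries):         # bounded work per cell
                    p = ((gx + random.random()) * cell,
                         (gy + random.random()) * cell)
                    if p[0] < width and p[1] < height and far_enough(p):
                        grid[gx][gy] = p
                        points.append(p)
                        break
            return points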

    Temporally coherent interactive ray tracing

    Although ray tracing has been successfully applied to interactively render large datasets, supersampling pixels will not be practical in interactive applications for some time. Because large datasets tend to have subpixel detail, one-sample-per-pixel ray tracing can produce visually distracting popping and scintillation. We present an algorithm that directs primary rays toward locations rendered in previous frames, thereby increasing temporal coherence. Our method tracks intersection points over time, and these tracked points are used as an oracle to aim rays for the next frame. This is in contrast to traditional image-based rendering techniques, which advocate color reuse. We thus obtain coherence between frames, which reduces temporal artifacts without introducing significant processing overhead or causing unnecessary blur.
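
    A minimal sketch of the reprojection step such an oracle needs, assuming last-frame hit points are stored in world space and view_proj is a 4x4 matrix mapping world-space row vectors to clip space (both names are assumptions, not the paper's interface). Pixels that receive no reprojected point, and points whose visibility has changed, must still be covered by shooting fresh rays.

        import numpy as np

        def reproject_hits(hit_points, view_proj, width, height):
            # Map each stored 3-D intersection point into the new frame's
            # image plane; the resulting pixel -> point table tells the
            # tracer where to aim next frame's primary rays.
            targets = {}
            for p in hit_points:
                clip = np.append(p, 1.0) @ view_proj   # world -> clip space
                if clip[3] <= 0.0:
                    continue                           # behind the camera
                ndc = clip[:3] / clip[3]
                if abs(ndc[0]) > 1.0 or abs(ndc[1]) > 1.0:
                    continue                           # outside the view frustum
                px = int((ndc[0] * 0.5 + 0.5) * width)
                py = int((ndc[1] * 0.5 + 0.5) * height)
                targets[(px, py)] = p
            return targets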

    Optimization techniques for computationally expensive rendering algorithms

    Realistic rendering in computer graphics simulates the interactions of light and surfaces. While many accurate models for surface reflection and lighting, covering both solid surfaces and participating media, have been described, most of them rely on intensive computation. Common practices such as adding constraints and assumptions can increase performance; however, they may compromise the quality of the resulting images or the variety of phenomena that can be accurately represented. In this thesis, we will focus on rendering methods that require large amounts of computational resources. Our intention is to consider several conceptually different approaches capable of reducing these requirements with only a limited impact on the quality of the results. The first part of this work will study the rendering of time-varying participating media. Examples of this type of matter are smoke, optically thick gases and any material that, unlike a vacuum, scatters and absorbs the light that travels through it. We will focus on a subset of algorithms that approximate realistic illumination using images of real-world scenes. Starting from the traditional ray-marching algorithm, we will suggest and implement different optimizations that allow the computation to run at interactive frame rates. This thesis will also analyze two different aspects of the generation of anti-aliased images. One is targeted at the rendering of screen-space anti-aliased images and the reduction of the artifacts generated in rasterized lines and edges; we expect to describe an implementation that, working as a post-process, is efficient enough to be added to existing rendering pipelines with reduced performance impact. A third method will take advantage of the limitations of the human visual system (HVS) to reduce the resources required to render temporally anti-aliased images. While film and digital cameras naturally produce motion blur, rendering pipelines need to simulate it explicitly; this process is known to be one of the heaviest burdens on any rendering pipeline. Motivated by this, we plan to run a series of psychophysical experiments targeted at identifying groups of motion-blurred images that are perceptually equivalent. A possible outcome is the proposal of criteria that may lead to reductions of rendering budgets.
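
    As background for the first part, here is a minimal sketch of the traditional ray-marching integrator the thesis takes as its starting point, with caller-supplied extinction and emission fields sigma_t(p) and emission(p) (assumed names); none of the thesis's optimisations or image-based lighting is reproduced.

        import math

        def ray_march(origin, direction, sigma_t, emission, step=0.1, t_max=10.0):
            # Walk the ray in fixed steps, accumulating source radiance and
            # attenuating by the transmittance of the medium (Beer-Lambert).
            radiance = 0.0
            transmittance = 1.0
            t = 0.5 * step                     # sample segment midpoints
            while t < t_max and transmittance > 1e-4:
                p = [origin[i] + t * direction[i] for i in range(3)]
                radiance += transmittance * emission(p) * step
                transmittance *= math.exp(-sigma_t(p) * step)
                t += step
            return radiance, transmittance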

    A Beam Tracing with Precise Antialiasing for Polyhedral Scenes

    Ray tracing is one of the most important rendering techniques used in computer graphics. A fundamental problem of classical ray tracers is the well-known aliasing. With small objects or small shadows, aliasing becomes a crucial problem to solve. Beam tracers can be considered as an extension of classical ray tracers: they replace the concept of an infinitesimal ray by that of a beam, but they are generally more complex than ray tracers. The new method presented in this paper is a high-quality beam tracer that provides robust and general antialiasing for polyhedral scenes. Compared to similar beam tracers, this method has some major advantages: the complex and expensive computations of conventional beam-object intersection are entirely avoided, so an extension to some non-polyhedral scenes such as CSG ones is possible; and the usual approximations or complex approaches for refraction computations are avoided. Moreover, this method is entirely compatible with the usual improvements of classical ray tracing (spatial subdivisions or hierarchical bounding volumes).
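
    Purely to illustrate the beam concept (the paper's contribution is precisely to avoid conventional beam-object intersection tests), a pixel beam can be modelled as the frustum spanned by the eye and the pixel's four corner directions; the sketch below assumes the corner rays are supplied in a consistent winding order so that the side-plane normals point inward.

        import numpy as np

        def pixel_beam_planes(eye, corner_dirs):
            # Build the four side planes of the beam; each plane passes
            # through the eye and contains two adjacent corner rays.
            eye = np.asarray(eye, dtype=float)
            planes = []
            for i in range(4):
                a = np.asarray(corner_dirs[i], dtype=float)
                b = np.asarray(corner_dirs[(i + 1) % 4], dtype=float)
                planes.append((np.cross(a, b), eye))   # inward-facing normal
            return planes

        def point_in_beam(planes, p):
            # A point lies inside the beam if it is on the inner side of
            # all four side planes.
            p = np.asarray(p, dtype=float)
            return all(np.dot(n, p - o) >= 0.0 for n, o in planes)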

    Separable Image Warping with Spatial Lookup Tables

    Image warping refers to the 2-D resampling of a source image onto a target image. In the general case, this requires costly 2-D filtering operations. Simplifications are possible when the warp can be expressed as a cascade of orthogonal 1-D transformations. In these cases, separable transformations have been introduced to realize large performance gains. The central ideas in this area were formulated in the 2-pass algorithm by Catmull and Smith. Although that method applies over an important class of transformations, there are intrinsic problems which limit its usefulness. The goal of this work is to extend the 2-pass approach to handle arbitrary spatial mapping functions. We address the difficulties intrinsic to 2-pass scanline algorithms: bottlenecking, foldovers, and the lack of closed-form inverse solutions. These problems are shown to be resolved in a general, efficient, separable technique, with graceful degradation for transformations of increasing complexity.
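
    A minimal sketch of the two-pass idea on a grayscale image, assuming inverse spatial lookup tables xmap[v, x] and ymap[y, x] that give fractional source coordinates (assumed names and layout); the paper's treatment of bottlenecks, foldovers and missing closed-form inverses is not shown.

        import numpy as np

        def resample_1d(signal, src_coords):
            # Linear resampling: output[i] = signal evaluated at the
            # fractional source index src_coords[i].
            idx = np.clip(src_coords, 0, len(signal) - 1)
            lo = np.floor(idx).astype(int)
            hi = np.minimum(lo + 1, len(signal) - 1)
            frac = idx - lo
            return (1 - frac) * signal[lo] + frac * signal[hi]

        def two_pass_warp(image, xmap, ymap):
            h, w = image.shape
            # Pass 1: resample every scanline horizontally.
            intermediate = np.empty((h, w), dtype=float)
            for v in range(h):
                intermediate[v] = resample_1d(image[v], xmap[v])
            # Pass 2: resample every column of the intermediate image vertically.
            out = np.empty((h, w), dtype=float)
            for x in range(w):
                out[:, x] = resample_1d(intermediate[:, x], ymap[:, x])
            return out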