    A Method of Rendering CSG-Type Solids Using a Hybrid of Conventional Rendering Methods and Ray Tracing Techniques

    This thesis describes a fast, efficient, and innovative algorithm for producing shaded, still images of complex objects built using constructive solid geometry (CSG) techniques. The algorithm uses a hybrid of conventional rendering methods and ray tracing techniques. A description of existing modelling and rendering methods is given in chapters 1, 2 and 3, with emphasis on the data structures and rendering techniques selected for incorporation in the hybrid method. Chapter 4 gives a general description of the hybrid method. This method processes data in the screen coordinate system and generates images in scan-line order. Scan lines are divided into spans (or segments) using the bounding rectangles of primitives calculated in screen coordinates. Conventional rendering methods and ray tracing techniques are used interchangeably along each scan line. The method used is determined by the number of primitives associated with a particular span: conventional rendering methods are used when only one primitive is associated with a span, while ray tracing techniques are used for hidden-surface removal when two or more primitives are involved. In the latter case, each pixel in the span is evaluated by accessing the polygon that is visible within each primitive associated with the span. The depth values (i.e. z-coordinates derived from the 3-dimensional definition) of the polygons involved are deduced for the pixel's position using linear interpolation, and these values are used to determine the visible polygon. The CSG tree is accessed from the bottom upwards via an ordered index that enables the 'visible' primitives on any particular scan line to be located efficiently. Within each primitive, an ordered path through the data structure provides the polygons potentially visible on a particular scan line. Lists of the active primitives and paths to potentially visible polygons are maintained throughout the rendering step and enable span coherence and scan-line coherence to be fully utilised. The results of tests with a range of typical objects and scenes are provided in chapter 5. These results show that the hybrid algorithm is significantly faster than full ray tracing algorithms.
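    The span-classification logic at the heart of the hybrid method can be sketched as follows. This is an illustrative reconstruction of the idea described above, not code from the thesis; the data layout and the names PolygonSpan and shade_span are assumptions.

```python
# Minimal sketch of the hybrid span strategy: spans covered by a single
# primitive are shaded with a conventional scan-line fill, while spans where
# two or more primitives overlap resolve visibility per pixel by comparing
# linearly interpolated depth values (the role ray casting plays above).
from dataclasses import dataclass

@dataclass
class PolygonSpan:
    primitive_id: int
    z_left: float          # interpolated depth at the span's left pixel
    z_right: float         # interpolated depth at the span's right pixel
    color: tuple           # flat shade of the visible polygon (illustrative)

def interpolate_z(span, x, x_left, x_right):
    """Linear interpolation of depth across the span in screen space."""
    if x_right == x_left:
        return span.z_left
    t = (x - x_left) / (x_right - x_left)
    return span.z_left + t * (span.z_right - span.z_left)

def shade_span(framebuffer, y, x_left, x_right, spans):
    if len(spans) == 1:
        # One primitive: conventional rendering, no hidden-surface test needed.
        for x in range(x_left, x_right + 1):
            framebuffer[y][x] = spans[0].color
    else:
        # Two or more primitives: per-pixel visibility via interpolated depths.
        for x in range(x_left, x_right + 1):
            nearest = min(spans, key=lambda s: interpolate_z(s, x, x_left, x_right))
            framebuffer[y][x] = nearest.color
```

    When only one primitive covers a span, the inner loop degenerates to a plain scan-line fill, which is where the hybrid method avoids most of the cost of full ray tracing.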

    Semi-Global Stereo Matching with Surface Orientation Priors

    Semi-Global Matching (SGM) is a widely-used efficient stereo matching technique. It works well for textured scenes, but fails on untextured slanted surfaces due to its fronto-parallel smoothness assumption. To remedy this problem, we propose a simple extension, termed SGM-P, to utilize precomputed surface orientation priors. Such priors favor different surface slants in different 2D image regions or 3D scene regions and can be derived in various ways. In this paper we evaluate plane orientation priors derived from stereo matching at a coarser resolution and show that such priors can yield significant performance gains for difficult weakly-textured scenes. We also explore surface normal priors derived from Manhattan-world assumptions, and we analyze the potential performance gains using oracle priors derived from ground-truth data. SGM-P only adds a minor computational overhead to SGM and is an attractive alternative to more complex methods employing higher-order smoothness terms. Comment: extended draft of the 3DV 2017 (spotlight) paper.
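    A minimal sketch of how a surface-orientation prior can be folded into SGM-style path aggregation is shown below, assuming the prior supplies an integer disparity change per pixel step along the path. This simplifies the SGM-P formulation (which treats priors and disparity boundaries more carefully); the penalty values and function name are illustrative.

```python
import numpy as np

def aggregate_path_1d(cost, prior_offset, P1=10.0, P2=120.0):
    """
    cost:          (W, D) float matching-cost slice along one aggregation path.
    prior_offset:  (W,) int disparity change per step predicted by the
                   orientation prior (all zeros recovers plain SGM).
    Returns the aggregated path cost L of shape (W, D).
    """
    W, D = cost.shape
    L = np.zeros((W, D), dtype=float)
    L[0] = cost[0]
    for x in range(1, W):
        # Shift the previous pixel's costs by the prior's predicted slant so
        # the zero-penalty case follows the expected surface instead of a
        # fronto-parallel plane (wrap-around at the borders ignored here).
        prev = np.roll(L[x - 1], int(prior_offset[x]))
        best_prev = prev.min()
        candidates = np.stack([
            prev,                        # same (slant-adjusted) disparity
            np.roll(prev, 1) + P1,       # disparity change of +1
            np.roll(prev, -1) + P1,      # disparity change of -1
            np.full(D, best_prev + P2),  # larger disparity jump
        ])
        L[x] = cost[x] + candidates.min(axis=0) - best_prev
    return L
```

    With prior_offset set to zero everywhere, this reduces to ordinary SGM aggregation along one path; summing such costs over several path directions and taking the per-pixel minimum over disparities gives the usual SGM disparity estimate.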

    Efficient Methods for Fast Shading

    On devices that lack specialized, battery-consuming hardware for rendering, it is important to improve the speed and quality of shading so that these methods are suitable for real-time rendering. Furthermore, such algorithms are needed on upcoming multicore architectures. We show how the methods by Gouraud and Phong, the most commonly used methods for shading, can be improved and made faster both for software rendering and for simple, low-energy hardware implementations. Moreover, this paper summarizes the authors' achievements in increasing shading speed and performance, and a bidirectional reflectance distribution function (BRDF) is simplified for faster computation and hardware implementation.
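    For context, here is a minimal diffuse-only contrast of the two baseline methods discussed above: Gouraud interpolates colors computed at the vertices, while Phong interpolates normals and evaluates lighting per pixel. The functions are an illustrative sketch, not the authors' optimized variants or their simplified BRDF.

```python
import numpy as np

def lambert(normal, light_dir, albedo):
    """Diffuse (Lambertian) term used by both shading sketches below."""
    n = normal / np.linalg.norm(normal)
    return albedo * max(float(np.dot(n, light_dir)), 0.0)

def gouraud_scanline(c0, c1, n_pixels):
    """Gouraud: lighting is evaluated at the vertices; colors are interpolated."""
    t = np.linspace(0.0, 1.0, n_pixels)[:, None]
    return (1 - t) * np.asarray(c0) + t * np.asarray(c1)

def phong_scanline(n0, n1, light_dir, albedo, n_pixels):
    """Phong: normals are interpolated; lighting is evaluated at every pixel."""
    t = np.linspace(0.0, 1.0, n_pixels)[:, None]
    normals = (1 - t) * np.asarray(n0) + t * np.asarray(n1)
    return np.array([lambert(n, light_dir, albedo) for n in normals])
```

    The per-pixel lighting evaluation inside the Phong loop is exactly the work that fast-shading methods try to reduce or replace with cheaper interpolation.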

    Aircraft geometry verification with enhanced computer generated displays

    A method for visual verification of aerodynamic geometries using computer generated, color shaded images is described. The mathematical models representing aircraft geometries are created for use in theoretical aerodynamic analyses and in computer aided manufacturing. The aerodynamic shapes are defined using parametric bi-cubic splined patches. This mathematical representation is then used as input to an algorithm that generates a color shaded image of the geometry. A discussion of the techniques used in the mathematical representation of the geometry and in the rendering of the color shaded display is presented. The results include examples of color shaded displays, which are contrasted with wire frame displays. The examples also show surface pressures mapped as color shaded images of V/STOL fighter/attack aircraft and advanced turboprop aircraft.
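    The patch evaluation step can be illustrated as follows; a bicubic Bézier basis is used here as a stand-in for the parametric bi-cubic splined patches in the paper, and the finite-difference normal (needed for color shading) is an assumption rather than the reported algorithm.

```python
from math import comb
import numpy as np

def bernstein(i, t):
    """Cubic Bernstein basis polynomial B_i^3(t)."""
    return comb(3, i) * (t ** i) * ((1 - t) ** (3 - i))

def evaluate_patch(control_points, u, v):
    """Evaluate a bicubic patch at (u, v); control_points has shape (4, 4, 3)."""
    point = np.zeros(3)
    for i in range(4):
        for j in range(4):
            point += bernstein(i, u) * bernstein(j, v) * control_points[i, j]
    return point

def patch_normal(control_points, u, v, eps=1e-4):
    """Approximate the surface normal from central differences of the patch,
    as needed to light and color-shade the geometry."""
    du = evaluate_patch(control_points, u + eps, v) - evaluate_patch(control_points, u - eps, v)
    dv = evaluate_patch(control_points, u, v + eps) - evaluate_patch(control_points, u, v - eps)
    n = np.cross(du, dv)
    return n / np.linalg.norm(n)
```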

    Femto-photography: capturing and visualizing the propagation of light

    We present femto-photography, a novel imaging technique to capture and visualize the propagation of light. With an effective exposure time of 1.85 picoseconds (ps) per frame, we reconstruct movies of ultrafast events at an equivalent resolution of about one half trillion frames per second. Because cameras with this shutter speed do not exist, we re-purpose modern imaging hardware to record an ensemble average of repeatable events that are synchronized to a streak sensor, in which the time of arrival of light from the scene is coded in one of the sensor's spatial dimensions. We introduce reconstruction methods that allow us to visualize the propagation of femtosecond light pulses through macroscopic scenes; at such fast resolution, we must consider the notion of time-unwarping between the camera's and the world's space-time coordinate systems to take into account effects associated with the finite speed of light. We apply our femto-photography technique to visualizations of very different scenes, which allow us to observe the rich dynamics of time-resolved light transport effects, including scattering, specular reflections, diffuse interreflections, diffraction, caustics, and subsurface scattering. Our work has potential applications in artistic, educational, and scientific visualizations; industrial imaging to analyze material properties; and medical imaging to reconstruct subsurface elements. In addition, our time-resolved technique may motivate new forms of computational photography. (Supported by the MIT Media Lab Consortium, Lincoln Laboratory, the MIT Institute for Soldier Nanotechnologies, an Alfred P. Sloan Foundation Research Fellowship, and a United States Defense Advanced Research Projects Agency Young Faculty Award.)
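    The time-unwarping step can be illustrated in its simplest form: each pixel's recorded arrival time is shifted back by the propagation delay from the visible scene point to the camera, so that events are displayed in world time rather than camera time. The sketch below assumes known per-pixel scene geometry and ignores indirect light paths; the transform in the paper is more involved.

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def time_unwarp(capture_times, scene_points, camera_pos):
    """
    capture_times: (H, W) arrival times in seconds, as recorded by the camera.
    scene_points:  (H, W, 3) world-space positions of the points seen by each pixel.
    camera_pos:    (3,) camera position in the same coordinates (metres).
    Returns per-pixel event times in the world's time frame.
    """
    distances = np.linalg.norm(scene_points - np.asarray(camera_pos), axis=-1)
    return capture_times - distances / C
```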

    Layered depth images

    In this paper we present a set of efficient image based rendering methods capable of rendering multiple frames per second on a PC. The first method warps Sprites with Depth representing smooth surfaces without the gaps found in other techniques. A second method for more general scenes performs warping from an intermediate representation called a Layered Depth Image (LDI). An LDI is a view of the scene from a single input camera view, but with multiple pixels along each line of sight. The size of the representation grows only linearly with the observed depth complexity in the scene. Moreover, because the LDI data are represented in a single image coordinate system, McMillan's warp ordering algorithm can be successfully adapted. As a result, pixels are drawn in the output image in back-to-front order. No z-buffer is required, so alpha-compositing can be done efficiently without depth sorting. This makes splatting an efficient solution to the resampling problem.
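    The LDI itself can be sketched as a per-pixel list of depth-tagged samples along each line of sight. In the sketch below, an explicit per-pixel sort stands in for McMillan's warp-ordering (which in the paper yields back-to-front order without sorting), and the opaque-sample compositing and class names are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Color = Tuple[int, int, int]

@dataclass
class LayeredDepthPixel:
    """Multiple samples along one line of sight, each with its own depth."""
    samples: List[Tuple[float, Color]] = field(default_factory=list)

    def add(self, depth: float, color: Color) -> None:
        self.samples.append((depth, color))

@dataclass
class LayeredDepthImage:
    width: int
    height: int

    def __post_init__(self):
        # Storage grows only with the number of samples actually inserted,
        # i.e. linearly with the observed depth complexity.
        self.pixels = [[LayeredDepthPixel() for _ in range(self.width)]
                       for _ in range(self.height)]

def composite_back_to_front(ldi: LayeredDepthImage, x: int, y: int) -> Color:
    """Resolve one pixel without a z-buffer by drawing samples far-to-near
    (opaque samples only: each nearer sample simply overwrites the last)."""
    result: Color = (0, 0, 0)
    for depth, color in sorted(ldi.pixels[y][x].samples, key=lambda s: -s[0]):
        result = color
    return result
```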

    Content-adaptive lenticular prints

    Lenticular prints are a popular medium for producing automultiscopic glasses-free 3D images. The light field emitted by such prints has a fixed spatial and angular resolution. We increase both perceived angular and spatial resolution by modifying the lenslet array to better match the content of a given light field. Our optimization algorithm analyzes the input light field and computes an optimal lenslet size, shape, and arrangement that best matches the input light field given a set of output parameters. The resulting emitted light field shows higher detail and smoother motion parallax compared to fixed-size lens arrays. We demonstrate our technique using rendered simulations and by 3D printing lens arrays, and we validate our approach in simulation with a user study.
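    As a rough illustration of content adaptivity (and explicitly not the paper's optimizer), one could trade spatial against angular resolution per region by comparing local spatial and angular variation in the input light field; the heuristic and the pitch values below are purely hypothetical.

```python
import numpy as np

def local_lenslet_pitch(light_field, fine_pitch=0.3, coarse_pitch=0.6):
    """
    Toy heuristic: for a 4D light field L[u, v, s, t] (angular axes u, v;
    spatial axes s, t), return a per-location lenslet pitch in millimetres.
    Locations dominated by spatial detail get the finer pitch; locations with
    strong angular variation (parallax) get the coarser pitch so each lenslet
    covers more views.
    """
    spatial = (np.abs(np.gradient(light_field, axis=2)).mean(axis=(0, 1)) +
               np.abs(np.gradient(light_field, axis=3)).mean(axis=(0, 1)))
    angular = (np.abs(np.gradient(light_field, axis=0)).mean(axis=(0, 1)) +
               np.abs(np.gradient(light_field, axis=1)).mean(axis=(0, 1)))
    return np.where(angular > spatial, coarse_pitch, fine_pitch)
```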