
    3D Stereo Rendering Using FPGA

    Stereo rendering presents a virtual 3D scene from two slightly different vantage points. It is of great importance in the fields of machine vision, robotics, and image analysis. This paper proposes a stereo vision system realized in a single field programmable gate array (FPGA). The stereo pairs are computed using the two-center projection (off-axis) method. The first, red, resultant image is for the left eye while the second, blue, one is for the right eye; the 3D illusion is produced when the pair is viewed through anaglyph glasses. This computer graphics hardware system is implemented on a Spartan3E XC3S500E FPGA kit. The proposed design executes 1266 times faster than the OpenGL implementation, with a maximum operating frequency of 35.417 MHz, while the maximum area occupation reaches 84%. Keywords: Computer Graphics; Stereoscopic; Anaglyph; FPGA; Two-Center Projection; Off-Axis Method; Stereo Pairs. DOI: 10.7176/CEIS/10-3-04 Publication date: April 30th 201
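    The two-center (off-axis) method described above can be sketched in software. The snippet below is an illustrative Python sketch, not the paper's FPGA implementation; the function names are hypothetical. It computes an asymmetric near-plane frustum for one eye and composites a red/blue anaglyph from a grayscale stereo pair:

```python
import math
import numpy as np

def off_axis_frustum(fov_y_deg, aspect, near, convergence, eye_x):
    """Asymmetric (off-axis) near-plane frustum for one eye.

    eye_x: horizontal eye offset from the central camera
           (e.g. -separation/2 for the left eye).
    convergence: distance to the zero-parallax plane.
    Returns (left, right, bottom, top) frustum bounds at the near plane.
    """
    top = near * math.tan(math.radians(fov_y_deg) / 2.0)
    bottom = -top
    half_w = top * aspect
    # Shift the frustum so both eyes converge on the same plane.
    shift = -eye_x * near / convergence
    return (-half_w + shift, half_w + shift, bottom, top)

def make_anaglyph(left_img, right_img):
    """Red/blue anaglyph: left view -> red channel, right view -> blue."""
    out = np.zeros(left_img.shape + (3,), dtype=np.uint8)
    out[..., 0] = left_img   # red  <- left eye image
    out[..., 2] = right_img  # blue <- right eye image
    return out
```

    Rendering the scene twice with mirrored `eye_x` values and feeding the two images to `make_anaglyph` yields the red/blue composite the abstract describes.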

    Optimization techniques for computationally expensive rendering algorithms

    Realistic rendering in computer graphics simulates the interactions of light and surfaces. While many accurate models for surface reflection and lighting, including solid surfaces and participating media, have been described, most of them rely on intensive computation. Common practices such as adding constraints and assumptions can increase performance; however, they may compromise the quality of the resulting images or the variety of phenomena that can be accurately represented. In this thesis, we will focus on rendering methods that require large amounts of computational resources. Our intention is to consider several conceptually different approaches capable of reducing these requirements with only limited implications for the quality of the results. The first part of this work will study rendering of time-varying participating media. Examples of this type of matter are smoke, optically thick gases, and any material that, unlike a vacuum, scatters and absorbs the light that travels through it. We will focus on a subset of algorithms that approximate realistic illumination using images of real-world scenes. Starting from the traditional ray marching algorithm, we will suggest and implement different optimizations that allow the computation to run at interactive frame rates. This thesis will also analyze two different aspects of the generation of anti-aliased images. The first targets the rendering of screen-space anti-aliased images and the reduction of the artifacts generated in rasterized lines and edges. We expect to describe an implementation that, working as a post-process, is efficient enough to be added to existing rendering pipelines with reduced performance impact. A third method will take advantage of the limitations of the human visual system (HVS) to reduce the resources required to render temporally anti-aliased images. While film and digital cameras naturally produce motion blur, rendering pipelines need to simulate it explicitly. This process is known to be one of the most significant burdens for any rendering pipeline. Motivated by this, we plan to run a series of psychophysical experiments targeted at identifying groups of motion-blurred images that are perceptually equivalent. A possible outcome is the proposal of criteria that may lead to reductions in rendering budgets.
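    As an illustration of the kind of ray-marching optimization mentioned above, the following Python sketch (a toy model, not the thesis's implementation) accumulates transmittance through a homogeneous medium and terminates early once the remaining transmittance is negligible:

```python
import math

def ray_march_transmittance(sigma_t, t_max, step, min_transmittance=1e-3):
    """Front-to-back ray march through a homogeneous medium.

    sigma_t: extinction coefficient (absorption + out-scattering)
             per unit length.
    t_max:   length of the ray segment inside the medium.
    step:    marching step size.

    Returns the accumulated transmittance along the ray. Stopping once
    the transmittance falls below min_transmittance is a simple early
    termination optimization: further samples cannot contribute visibly.
    """
    transmittance = 1.0
    t = 0.0
    while t < t_max:
        dt = min(step, t_max - t)           # do not overshoot the segment
        transmittance *= math.exp(-sigma_t * dt)
        t += dt
        if transmittance < min_transmittance:
            break                            # early termination
    return transmittance
```

    For a homogeneous medium the march reproduces the analytic Beer-Lambert value exp(-sigma_t * t_max), and the early exit pays off in optically thick regions, where the loop stops after a handful of steps.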