5 research outputs found

    Visualization of Industrial Structures with Implicit GPU Primitives

    We present a method to interactively visualize large industrial models by replacing most of their triangles with implicit GPU primitives: cylinders, cones, and torus slices. After a reverse-engineering process that recovers these primitives from the original triangle meshes, we encode their implicit parameters in a texture that is sent to the GPU. At render time, the implicit primitives are rendered seamlessly alongside the remaining triangles in the scene. The method was tested on two massive industrial models, achieving better performance and image quality while reducing memory use.
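
    To illustrate the kind of per-fragment computation such implicit primitives require, below is a minimal C++ sketch of a ray/finite-cylinder intersection test. In the paper's setting this logic would run on the GPU (e.g., in a fragment shader, with primitive parameters fetched from the encoding texture); the data layout and names here are assumptions for illustration, not the paper's actual implementation.

```cpp
// Minimal sketch of ray vs. finite-cylinder intersection, the kind of
// per-fragment test an implicit-primitive renderer performs. Plain C++
// for illustration; the paper would evaluate this in a shader.
#include <cmath>
#include <optional>

struct Vec3 { float x, y, z; };

static Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 operator*(float s, Vec3 a) { return {s * a.x, s * a.y, s * a.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Cylinder {
    Vec3 base;    // point on the axis
    Vec3 axis;    // unit axis direction
    float radius;
    float height;
};

// Returns the ray parameter t of the nearest hit, or nullopt on a miss.
std::optional<float> intersect(Vec3 o, Vec3 d, const Cylinder& cyl) {
    Vec3 delta = o - cyl.base;
    // Project the ray direction and offset into the plane orthogonal to the axis.
    Vec3 dPerp = d - dot(d, cyl.axis) * cyl.axis;
    Vec3 oPerp = delta - dot(delta, cyl.axis) * cyl.axis;
    float a = dot(dPerp, dPerp);
    float b = 2.0f * dot(dPerp, oPerp);
    float c = dot(oPerp, oPerp) - cyl.radius * cyl.radius;
    float disc = b * b - 4.0f * a * c;
    if (disc < 0.0f || a == 0.0f) return std::nullopt;
    float t = (-b - std::sqrt(disc)) / (2.0f * a);
    if (t < 0.0f) return std::nullopt;
    // Reject hits outside the finite cylinder's extent along its axis.
    float h = dot((o + t * d) - cyl.base, cyl.axis);
    if (h < 0.0f || h > cyl.height) return std::nullopt;
    return t;
}
```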

    Real time city visualization

    The visualization of cities in real time has many potential applications, from urban and emergency planning to driving simulators and entertainment. The massive amount of data and the computational requirements needed to render an entire city in detail are the reason why many techniques have been proposed in this field. Procedural city generation, building simplification, and visibility processing are some of the approaches used to solve a small subset of the problems these applications face. Our work proposes a new city rendering algorithm that takes a radically different approach from previous work in this field. The proposed technique structures the city data in a regular grid that is traversed, at runtime, by a ray tracing algorithm that keeps track of the visible parts of the scene. As a preprocess, a set of quads defining the buildings of the city is converted into the regular grid used by our algorithm. The rendering algorithm uses this data to generate a real-time representation of the city while minimizing overdraw, a common problem in other techniques. This is done with a geometry shader that generates only the minimum number of fragments needed to render the city from a given position.
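
    The abstract does not spell out the traversal itself, but the standard way to walk a regular grid along a ray is a DDA in the style of Amanatides and Woo. The C++ sketch below shows the 2D case as an illustration under that assumption; the per-cell visit (where building quads would be tested) is a hypothetical placeholder, not the paper's code.

```cpp
// Minimal sketch of regular-grid ray traversal (Amanatides & Woo style DDA),
// visiting grid cells in the order the ray crosses them.
#include <cmath>
#include <cstdio>

void traverseGrid(float ox, float oy,        // ray origin (grid units)
                  float dx, float dy,        // ray direction
                  int width, int height) {   // grid resolution
    int cx = (int)std::floor(ox), cy = (int)std::floor(oy);
    int stepX = dx > 0 ? 1 : -1, stepY = dy > 0 ? 1 : -1;
    // Distance along the ray between successive grid lines on each axis.
    float tDeltaX = dx != 0 ? std::fabs(1.0f / dx) : INFINITY;
    float tDeltaY = dy != 0 ? std::fabs(1.0f / dy) : INFINITY;
    // Distance along the ray to the first cell boundary on each axis.
    float tMaxX = dx != 0
        ? ((dx > 0 ? cx + 1 - ox : ox - cx) * tDeltaX) : INFINITY;
    float tMaxY = dy != 0
        ? ((dy > 0 ? cy + 1 - oy : oy - cy) * tDeltaY) : INFINITY;

    while (cx >= 0 && cx < width && cy >= 0 && cy < height) {
        std::printf("visit cell (%d, %d)\n", cx, cy);  // e.g. test building quads here
        if (tMaxX < tMaxY) { tMaxX += tDeltaX; cx += stepX; }
        else               { tMaxY += tDeltaY; cy += stepY; }
    }
}
```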

    Simplifying Complex Environments using Incremental Textured Depth Meshes

    We present an incremental algorithm to compute image-based simplifications of a large environment. We use an optimization-based approach to generate samples based on scene visibility, and from each viewpoint create textured depth meshes (TDMs) using sampled range panoramas of the environment. The optimization function minimizes artifacts such as skins and cracks in the reconstruction. We also present an encoding scheme for multiple TDMs that exploits spatial coherence among different viewpoints. The resulting simplifications, incremental textured depth meshes (ITDMs), reduce preprocessing, storage, and rendering costs, as well as visible artifacts. Our algorithm has been applied to large, complex synthetic environments comprising millions of primitives. It is able to render them at 20–40 frames per second on a PC with little loss in visual fidelity.
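
    As a rough illustration of how a textured depth mesh can be built from one range image, the C++ sketch below triangulates a grid of depth samples and drops triangles spanning large depth discontinuities, which would otherwise appear as the "skin" artifacts the paper's optimization targets. The threshold and data layout are assumptions, not taken from the paper.

```cpp
// Minimal sketch of turning one sampled range image into a depth mesh:
// connect neighbouring depth samples into triangles, but drop triangles
// that span a large depth discontinuity (would-be "skins").
#include <cmath>
#include <vector>

struct Tri { int a, b, c; };  // indices into the depth-sample grid

std::vector<Tri> buildDepthMesh(const std::vector<float>& depth,
                                int w, int h, float maxJump) {
    std::vector<Tri> tris;
    auto idx = [w](int x, int y) { return y * w + x; };
    auto ok = [&](int i, int j) {  // reject edges across depth discontinuities
        return std::fabs(depth[i] - depth[j]) < maxJump;
    };
    for (int y = 0; y + 1 < h; ++y) {
        for (int x = 0; x + 1 < w; ++x) {
            int i00 = idx(x, y),     i10 = idx(x + 1, y);
            int i01 = idx(x, y + 1), i11 = idx(x + 1, y + 1);
            // Split each grid cell into two triangles, keeping only those
            // whose edges stay within the allowed depth jump.
            if (ok(i00, i10) && ok(i10, i01) && ok(i00, i01))
                tris.push_back({i00, i10, i01});
            if (ok(i10, i11) && ok(i11, i01) && ok(i10, i01))
                tris.push_back({i10, i11, i01});
        }
    }
    return tris;
}
```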

    Towards Predictive Rendering in Virtual Reality

    Generating predictive images, i.e., images representing radiometrically correct renditions of reality, has been a longstanding problem in computer graphics. The exactness of such images is extremely important for Virtual Reality applications like Virtual Prototyping, where users need to make decisions impacting large investments based on the simulated images. Unfortunately, the generation of predictive imagery is still an unsolved problem for manifold reasons, especially if real-time restrictions apply. First, existing scenes used for rendering are not modeled accurately enough to create predictive images. Second, even with huge computational effort, existing rendering algorithms are not able to produce radiometrically correct images. Third, current display devices need to convert rendered images into some low-dimensional color space, which prohibits the display of radiometrically correct images. Overcoming these limitations is the focus of current state-of-the-art research, and this thesis contributes to this task. First, it briefly introduces the necessary background and identifies the steps required for real-time predictive image generation. Then, existing techniques targeting these steps are presented and their limitations are pointed out. To solve some of the remaining problems, novel techniques are proposed. They cover various steps in the predictive image generation process, ranging from accurate scene modeling and efficient data representation to high-quality, real-time rendering. A special focus of this thesis lies on the real-time generation of predictive images using bidirectional texture functions (BTFs), i.e., very accurate representations of spatially varying surface materials. The techniques proposed by this thesis enable efficient handling of BTFs by compressing the huge amount of data contained in this material representation, applying BTFs to geometric surfaces using texture and BTF synthesis techniques, and rendering BTF-covered objects in real time. Further approaches proposed in this thesis target the inclusion of real-time global illumination effects and more efficient rendering using novel level-of-detail representations for geometric objects. Finally, this thesis assesses the rendering quality achievable with BTF materials, indicating a significant increase in realism but also confirming the problems that remain to be solved before truly predictive image generation is achieved.
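
    The abstract does not detail the thesis's BTF compression scheme; a standard factorization approach for such data is truncated SVD/PCA over a matrix with one row per texel and one column per (view, light) sample. The Eigen-based C++ sketch below illustrates that generic approach under those layout assumptions; it is not claimed to be the thesis's actual method.

```cpp
// Minimal sketch of BTF compression by truncated SVD: keep only the k most
// significant components, reducing storage from texels*viewLight values to
// roughly k*(texels + viewLight). Matrix layout is an assumption:
// one row per texel, one column per (view, light) sample.
#include <Eigen/Dense>

struct CompressedBTF {
    Eigen::MatrixXf basis;    // (viewLight x k) right singular vectors, scaled
    Eigen::MatrixXf weights;  // (texels x k) per-texel coefficients
};

CompressedBTF compress(const Eigen::MatrixXf& btf, int k) {
    Eigen::JacobiSVD<Eigen::MatrixXf> svd(
        btf, Eigen::ComputeThinU | Eigen::ComputeThinV);
    CompressedBTF out;
    out.weights = svd.matrixU().leftCols(k);
    out.basis   = svd.matrixV().leftCols(k)
                * svd.singularValues().head(k).asDiagonal();
    return out;
}

// Reconstruction of one texel's reflectance samples: a k-term expansion,
// cheap enough to evaluate at render time.
Eigen::VectorXf reconstructTexel(const CompressedBTF& c, int texel) {
    return c.basis * c.weights.row(texel).transpose();
}
```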