
    GPU-based Streaming for Parallel Level of Detail on Massive Model Rendering

    Rendering massive 3D models in real time has long been recognized as a very challenging problem because of the limited computational power and memory available in a workstation. Most existing rendering techniques, especially level of detail (LOD) processing, suffer from their sequential execution nature and do not scale well with the size of the models. We present a GPU-based progressive mesh simplification approach that enables the interactive rendering of large 3D models with hundreds of millions of triangles. Our work contributes to massive model rendering research in two ways. First, we develop a novel data structure to represent the progressive LOD mesh and design a parallel mesh simplification algorithm tailored to the GPU architecture. Second, we propose a GPU-based streaming approach that adopts a frame-to-frame coherence scheme in order to minimize the high communication cost between the CPU and the GPU. Our results show that the parallel mesh simplification algorithm and the GPU-based streaming approach significantly improve the overall rendering performance.
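
    The abstract does not include code; as a rough, hypothetical Python sketch of the frame-to-frame coherence idea (function and variable names are illustrative, not the paper's), only the vertex-split records that differ between the previous and the current level of detail are streamed to the GPU rather than the whole mesh:

        # Hypothetical sketch: frame-to-frame coherent LOD streaming. Only the
        # vertex-split records between the previous and current cut cross the
        # CPU-GPU bus, not the full LOD mesh.
        def splits_to_stream(prev_lod: int, curr_lod: int, split_records: list):
            """Return (records to apply, records to undo) between two LOD levels.

            split_records[i] refines the mesh from resolution i to i + 1, so moving
            from prev_lod to curr_lod only touches the records in between.
            """
            if curr_lod > prev_lod:                    # refine: apply new splits
                return split_records[prev_lod:curr_lod], []
            if curr_lod < prev_lod:                    # coarsen: undo recent splits
                return [], split_records[curr_lod:prev_lod]
            return [], []                              # unchanged view: no traffic

        # Example: the camera moved slightly and the target LOD changed from 1200
        # to 1230 splits, so only 30 small records are uploaded this frame.
        records = [("vsplit", i) for i in range(2000)]
        upload, undo = splits_to_stream(1200, 1230, records)
        print(len(upload), len(undo))                  # 30 0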

    3D Mesh Simplification. A survey of algorithms and CAD model simplification tests

    Simplification of highly detailed CAD models is an important step when CAD models are visualized or otherwise utilized in augmented reality applications. Without simplification, CAD models may cause severe processing and storage issues, especially on mobile devices. In addition, simplified models may have other advantages, such as better visual clarity or improved reliability when used for visual pose tracking. The geometry of CAD models is invariably presented in the form of a 3D mesh. In this paper, we survey mesh simplification algorithms in general and focus especially on algorithms that can be used to simplify CAD models. We test some commonly known algorithms with real-world CAD data and characterize some new CAD-related simplification algorithms that have not been surveyed in previous mesh simplification reviews.
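
    Surveys of this kind typically cover quadric-error-metric edge collapse among the commonly known algorithms; as a hedged, minimal Python illustration of that family (not code from the paper), the error quadric of a vertex is accumulated from the planes of its incident triangles and used to price a candidate collapse:

        # Minimal sketch of the quadric error metric used by edge-collapse
        # simplification; illustrative only.
        import numpy as np

        def plane_quadric(p0, p1, p2):
            """Fundamental quadric K = p p^T of the plane through a triangle."""
            n = np.cross(p1 - p0, p2 - p0)
            n = n / np.linalg.norm(n)
            d = -np.dot(n, p0)
            p = np.append(n, d)            # plane as (a, b, c, d) with ax+by+cz+d = 0
            return np.outer(p, p)

        def collapse_cost(Q, v):
            """Squared plane-distance error of placing a vertex at v under quadric Q."""
            vh = np.append(v, 1.0)         # homogeneous coordinates
            return float(vh @ Q @ vh)

        # Two triangles sharing the edge (v0, v1); price collapsing it to the midpoint.
        v0, v1, v2, v3 = map(np.array, ([0., 0, 0], [1., 0, 0], [0., 1, 0], [1., -1, 1]))
        Q = plane_quadric(v0, v1, v2) + plane_quadric(v1, v0, v3)
        print(collapse_cost(Q, (v0 + v1) / 2))   # cost of the candidate collapse target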

    Parallel Mesh Processing

    Current research in computer graphics strives to meet the increasing demands of users and produces ever more realistic-looking images. Accordingly, the scenes and the techniques used to render these images are becoming more and more complex. Such a development is inevitably tied to an increase in the required computing power, since the models that make up a scene can consist of billions of polygons and must be displayed in real time. Realistic image synthesis rests on three pillars: models, materials, and lighting. Today there are several techniques for the efficient and realistic approximation of global illumination, and algorithms for creating realistic materials exist as well. There are also techniques for rendering models in real time, but these usually only work for scenes of medium complexity and fail for very complex scenes. The models form the foundation of a scene; optimizing them has a direct effect on the efficiency of the techniques for material representation and lighting, so that only an optimized model representation makes real-time display possible. Many of the models used in computer graphics are represented by triangle meshes. The data volume they contain is enormous, needed to capture the richness of detail of the respective objects and to cope with the growing demand for realism. Rendering complex models consisting of millions of triangles is a major challenge even for modern graphics cards. It is therefore necessary, especially for real-time simulations, to develop efficient algorithms. Such algorithms should, on the one hand, support visibility culling, level of detail (LOD), out-of-core memory management, and compression; on the other hand, this optimization should work very efficiently so as not to additionally hinder the rendering. This requires the development of parallel methods that are able to process the enormous flood of data efficiently. The core contribution of this work is a set of novel algorithms and data structures that were developed specifically for efficient parallel data processing and are able to render, as well as model, very complex models and scenes in real time. These algorithms work in two phases: first, in an offline phase, the data structure is created and optimized for parallel processing; the optimized data structure is then used in the second phase for real-time rendering. A further contribution of this work is an algorithm that can procedurally generate a very realistic-looking planet and render it in real time.
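
    As a hedged illustration of the two-phase pattern described above (offline preparation, per-frame parallel consumption), the following Python sketch uses invented names and a placeholder heuristic rather than the thesis' actual algorithms:

        # Hypothetical sketch of the two-phase pattern: build a render-friendly
        # structure once offline, then evaluate it in parallel every frame.
        from concurrent.futures import ThreadPoolExecutor

        def build_patches(triangles, patch_size=1024):
            """Offline phase: split the mesh into fixed-size patches for parallel work."""
            return [triangles[i:i + patch_size] for i in range(0, len(triangles), patch_size)]

        def select_lod(patch, view_distance):
            """Online phase: pick a resolution per patch (placeholder heuristic)."""
            return max(1, len(patch) // int(1 + view_distance))

        triangles = list(range(10_000))      # stand-in for real triangle data
        patches = build_patches(triangles)   # done once, offline
        with ThreadPoolExecutor() as pool:   # done every frame, in parallel
            lods = list(pool.map(lambda p: select_lod(p, view_distance=8.0), patches))
        print(lods[:4])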

    Procedural Planet Generation

    Procedural planet generation is a way of creating interesting, computer-generated environments from a set of specified guidelines. When displaying these environments, only enough detail needs to be present for the current view to appear realistic. This can be accomplished by using simplified versions of the objects until more detail is required. This paper describes how to accomplish this level-of-detail switching using progressive meshes and describes a specific implementation of the simplification mechanism used to generate them.
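
    A common heuristic for the kind of level-of-detail switching described above is to refine the progressive mesh only until its remaining geometric error projects to less than about one pixel; the Python sketch below illustrates that threshold test (the formula and numbers are assumptions, not taken from the paper):

        # Refine a progressive mesh until the projected error drops below ~1 pixel.
        import math

        def required_splits(errors, distance, fov_y=math.radians(60), screen_h=1080):
            """errors[i] is the object-space error left after i vertex splits (decreasing)."""
            pixels_per_unit = screen_h / (2.0 * distance * math.tan(fov_y / 2.0))
            for i, err in enumerate(errors):
                if err * pixels_per_unit < 1.0:   # remaining error is sub-pixel
                    return i
            return len(errors)                    # need the full-detail mesh

        # Error halves with every split; closer views demand more refinement.
        errors = [100.0 / (2 ** i) for i in range(20)]
        print(required_splits(errors, distance=5000.0))   # far away: few splits
        print(required_splits(errors, distance=50.0))     # close up: many splits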

    Time-critical multiresolution rendering of large complex models

    Very large and geometrically complex scenes, exceeding millions of polygons and hundreds of objects, arise naturally in many areas of interactive computer graphics. Time-critical rendering of such scenes requires the ability to trade visual quality for speed. Previous work has shown that this can be done by representing individual scene components as multiresolution triangle meshes, and performing at each frame a convex constrained optimization to choose the mesh resolutions that maximize image quality while meeting timing constraints. In this paper we demonstrate that the nonlinear optimization problem with linear constraints associated with a large class of quality estimation heuristics is efficiently solved using an active-set strategy. By exploiting the problem structure, Lagrange multiplier estimates and equality-constrained problem solutions are computed in linear time. Results show that our algorithms and data structures provide low memory overhead, smooth level-of-detail control, and guarantee, within acceptable limits, a uniform, bounded frame rate even for widely changing viewing conditions. Implementation details are presented along with the results of tests for memory needs, algorithm timing, and efficacy.
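
    The paper's per-frame optimization is far more general, but the flavour of the constrained problem can be sketched with a deliberately simplified, hedged example: if each object's quality is modelled as w_i * log(r_i) and the frame budget caps the total cost at sum(r_i) = T, the KKT conditions give a closed-form allocation proportional to the weights:

        # Hedged toy version of per-frame quality maximization under a time budget.
        # The real algorithm handles a broad class of heuristics with an active-set
        # strategy; this closed form only covers the w_i*log(r_i) special case.
        def allocate_resolutions(weights, budget):
            """Maximize sum(w_i*log(r_i)) s.t. sum(r_i) = budget: r_i = w_i*T/sum(w)."""
            total = sum(weights)
            return [w * budget / total for w in weights]

        # Three objects: one large on screen, one medium, one barely visible.
        print(allocate_resolutions([8.0, 3.0, 1.0], budget=120_000))
        # -> [80000.0, 30000.0, 10000.0] triangles per object this frame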

    An Accelerating 3D Image Reconstruction System Based on the Level-of-Detail Algorithm

    This paper proposes an accelerated 3D image reconstruction system based on the level-of-detail (LOD) algorithm. It combines 3D graphics application interfaces such as DirectX3D and OpenCV to reconstruct a 3D imaging system for Magnetic Resonance Imaging (MRI) and adds an LOD algorithm to the system. The system uses the volume rendering method to perform 3D reconstruction of brain imaging. The process, which is based on the LOD algorithm that converts and formulates functions from differing levels of detail and scope, significantly reduces the complexity of the required processing and computation while maintaining drawing quality. To validate the system's efficiency gains for brain imaging reconstruction, this study runs the system on various computer platforms and uses multiple data sets to perform rendering and 3D object imaging reconstruction; the results are then verified and compared.
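
    As a hedged, generic illustration of applying LOD to volume rendering (the resolutions, averaging scheme, and interaction rule below are assumptions, not the paper's implementation), a pyramid of downsampled volumes can be kept and a coarser level chosen while the user interacts:

        # Keep the MRI volume at several resolutions; render coarse while rotating,
        # full resolution for the final still image. Illustrative only.
        import numpy as np

        def build_lod_pyramid(volume, levels=3):
            """Halve the volume resolution per level by 2x2x2 averaging."""
            pyramid = [volume]
            for _ in range(levels - 1):
                v = pyramid[-1]
                d, h, w = (s // 2 * 2 for s in v.shape)   # trim to even sizes
                v = v[:d, :h, :w].reshape(d // 2, 2, h // 2, 2, w // 2, 2).mean(axis=(1, 3, 5))
                pyramid.append(v)
            return pyramid

        volume = np.random.rand(128, 128, 96)             # stand-in for MRI data
        pyramid = build_lod_pyramid(volume)
        interacting = True
        level = 2 if interacting else 0                    # coarse while interacting
        print(pyramid[level].shape)                        # (32, 32, 24)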

    Towards Predictive Rendering in Virtual Reality

    The quest to generate predictive images, i.e., images representing radiometrically correct renditions of reality, has been a longstanding problem in computer graphics. The exactness of such images is extremely important for Virtual Reality applications like Virtual Prototyping, where users need to make decisions impacting large investments based on the simulated images. Unfortunately, generation of predictive imagery is still an unsolved problem for manifold reasons, especially if real-time restrictions apply. First, existing scenes used for rendering are not modeled accurately enough to create predictive images. Second, even with huge computational effort, existing rendering algorithms are not able to produce radiometrically correct images. Third, current display devices need to convert rendered images into some low-dimensional color space, which prohibits the display of radiometrically correct images. Overcoming these limitations is the focus of current state-of-the-art research. This thesis also contributes to this task. First, it briefly introduces the necessary background and identifies the steps required for real-time predictive image generation. Then, existing techniques targeting these steps are presented and their limitations are pointed out. To solve some of the remaining problems, novel techniques are proposed. They cover various steps in the predictive image generation process, ranging from accurate scene modeling over efficient data representation to high-quality, real-time rendering. A special focus of this thesis lies on the real-time generation of predictive images using bidirectional texture functions (BTFs), i.e., very accurate representations for spatially varying surface materials. The techniques proposed by this thesis enable efficient handling of BTFs by compressing the huge amount of data contained in this material representation, applying them to geometric surfaces using texture and BTF synthesis techniques, and rendering BTF-covered objects in real time. Further approaches proposed in this thesis target the inclusion of real-time global illumination effects or more efficient rendering using novel level-of-detail representations for geometric objects. Finally, this thesis assesses the rendering quality achievable with BTF materials, indicating a significant increase in realism but also confirming the problems that remain to be solved to achieve truly predictive image generation.
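
    One widely used family of BTF compression techniques factorizes the huge measurement matrix (texels by view/light directions) with a truncated SVD and keeps only a few components; the Python sketch below illustrates that general idea on synthetic data and is not the specific compression method proposed in the thesis:

        # Truncated-SVD compression of a synthetic BTF-like matrix; illustrative only.
        import numpy as np

        rng = np.random.default_rng(0)
        texels, directions, rank = 512, 81 * 81, 8
        btf = rng.random((texels, 32)) @ rng.random((32, directions))   # synthetic stand-in

        u, s, vt = np.linalg.svd(btf, full_matrices=False)
        compressed = (u[:, :rank] * s[:rank], vt[:rank])                # store these two factors
        reconstructed = compressed[0] @ compressed[1]

        ratio = btf.size / (compressed[0].size + compressed[1].size)
        error = np.linalg.norm(btf - reconstructed) / np.linalg.norm(btf)
        print(f"compression ratio {ratio:.1f}:1, relative error {error:.3f}")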

    TOM: totally ordered mesh. A multiresolution data structure for time-critical graphics applications

    Three-dimensional interactive applications are confronted with situations where very large databases have to be animated, transmitted, and displayed in very short, bounded times. As it is generally impossible to handle the complete graphics description while meeting timing constraints, techniques enabling the extraction and manipulation of a significant part of the geometric database have been the focus of many research works in the field of computer graphics. Multiresolution representations of 3D models provide access to 3D objects at arbitrary resolutions while minimizing appearance degradation. Several kinds of data structures have recently been proposed for dealing with polygonal or parametric representations, but they were not generally optimized for time-critical applications. We describe the TOM (Totally Ordered Mesh), a multiresolution triangle mesh structure tailored to the support of time-critical adaptive rendering. The structure grants high-speed access to the continuous levels of detail of a mesh and allows very fast traversal of the list of triangles at arbitrary resolution, so that bottlenecks in the graphics pipeline are avoided. Moreover, and without specific compression, the memory footprint of the TOM is small (about 108% of the single-resolution object in face-vertex form), so that large scenes can be effectively handled. The TOM structure also supports storage of per-vertex (or per-corner-of-triangle) attributes such as colors, normals, texture coordinates, or dynamic properties. Implementation details are presented along with the results of tests for memory needs, approximation quality, timing, and efficacy.
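
    The abstract does not give the exact layout; as a rough, hypothetical analogue of a totally ordered triangle list (names and the birth/death scheme below are assumptions, not the actual TOM structure), triangles can be stored sorted by the resolution at which they first appear, so that extracting any level of detail is a single scan over a prefix of the list:

        # Rough analogue of an ordered multiresolution triangle list; not the TOM layout.
        from dataclasses import dataclass

        @dataclass
        class Tri:
            birth: int         # resolution at which the triangle appears
            death: int         # resolution at which it is split into finer triangles
            vertices: tuple    # indices into the vertex buffer

        def triangles_at(resolution, ordered_tris):
            """Scan the ordered prefix and keep the triangles alive at `resolution`."""
            out = []
            for t in ordered_tris:
                if t.birth > resolution:   # the rest of the list is even finer: stop
                    break
                if t.death > resolution:   # not yet replaced at this resolution
                    out.append(t)
            return out

        mesh = [Tri(0, 2, (0, 1, 2)), Tri(0, 3, (0, 2, 3)), Tri(2, 9, (0, 1, 4)),
                Tri(2, 9, (4, 1, 2)), Tri(3, 9, (0, 2, 5)), Tri(3, 9, (5, 2, 3))]
        print(len(triangles_at(1, mesh)), len(triangles_at(4, mesh)))   # 2 4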