
    Multi-Resolution Texture Coding for Multi-Resolution 3D Meshes

    We present an innovative system to encode and transmit textured multi-resolution 3D meshes progressively, without sending several texture images, one for each mesh LOD (Level Of Detail). All texture LODs are created from the finest one (associated with the finest mesh), but can be reconstructed progressively from the coarsest thanks to refinement images computed during encoding and transmitted only if needed. This allows us to adjust the LOD/quality of both the 3D mesh and the texture according to the rendering power of the device that will display them and to the network capacity. Additionally, we achieve substantial savings in data transmission by avoiding texture coordinates altogether: they are generated automatically by an unwrapping scheme agreed upon by both encoder and decoder.
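
    A minimal sketch of the progressive reconstruction idea, assuming a simple residual scheme in which each refinement image is the difference between a texture LOD and the upsampled next-coarser LOD; the function names and the nearest-neighbour predictor are illustrative, not the paper's codec:

```python
# Sketch: rebuild finer texture LODs from the coarsest one plus refinement images.
import numpy as np

def upsample(img):
    """Nearest-neighbour 2x upsampling used as the shared predictor."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def encode_lods(finest, levels):
    """Return the coarsest LOD and the refinement image for each finer LOD."""
    lods = [finest]
    for _ in range(levels - 1):
        img = lods[-1]
        lods.append(img.reshape(img.shape[0] // 2, 2,
                                img.shape[1] // 2, 2, -1).mean(axis=(1, 3)))
    lods.reverse()                      # coarsest first
    refinements = [lods[i + 1] - upsample(lods[i]) for i in range(levels - 1)]
    return lods[0], refinements

def decode(coarsest, refinements, upto):
    """Progressively rebuild LODs; stop when the device needs no more detail."""
    img = coarsest
    for r in refinements[:upto]:
        img = upsample(img) + r         # refinement images are sent only if needed
    return img

finest = np.random.rand(64, 64, 3)
coarse, refs = encode_lods(finest, levels=4)
assert np.allclose(decode(coarse, refs, upto=3), finest)
```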

    Planet-Sized Batched Dynamic Adaptive Meshes (P-BDAM)

    This paper describes an efficient technique for out-of-core management and interactive rendering of planet-sized textured terrain surfaces. The technique, called planet-sized batched dynamic adaptive meshes (P-BDAM), extends the BDAM approach by using, as its basic primitive, a general triangulation of points on a displaced triangle. The proposed framework introduces several advances with respect to the state of the art: thanks to a batched host-to-graphics communication model, we outperform current adaptive tessellation solutions in terms of rendering speed; we guarantee overall geometric continuity, exploiting programmable graphics hardware to cope with the accuracy issues introduced by single-precision floating-point arithmetic; we exploit a compressed out-of-core representation and speculative prefetching to hide disk latency while rendering out-of-core data; and we efficiently construct high-quality simplified representations with a novel distributed out-of-core simplification algorithm working on a standard PC network.
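
    A minimal sketch of one common way to cope with single-precision limits when rendering planet-sized coordinates: store each double-precision position as a high/low pair of floats and work camera-relative. This only illustrates the precision problem the abstract refers to; it is not the P-BDAM implementation itself:

```python
# Sketch: planet-scale positions lose sub-metre detail in float32, but a
# high/low split preserves small camera-relative offsets.
import numpy as np

def split_double(x):
    """Split a float64 into a (high, low) pair of float32 values."""
    hi = np.float32(x)
    lo = np.float32(x - np.float64(hi))
    return hi, lo

planet_radius = 6_371_000.123456          # metres, with sub-millimetre detail
vertex = planet_radius
camera = planet_radius - 0.25              # camera 25 cm away from the vertex

# Naive single precision loses the sub-metre offset entirely.
naive = np.float32(vertex) - np.float32(camera)

# High/low arithmetic keeps the small camera-relative offset intact.
v_hi, v_lo = split_double(vertex)
c_hi, c_lo = split_double(camera)
relative = (v_hi - c_hi) + (v_lo - c_lo)

print(f"true offset   : {vertex - camera:.6f}")
print(f"naive float32 : {naive:.6f}")
print(f"hi/lo float32 : {relative:.6f}")
```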

    Robust and Scalable Transmission of Arbitrary 3D Models over Wireless Networks

    We describe the transmission of 3D objects, represented by texture and mesh, over unreliable networks, extending our earlier work on regular mesh structures to arbitrary meshes and considering linear versus cubic interpolation. Our approach to arbitrary meshes stripifies the mesh and distributes nearby vertices into different packets, combined with a strategy in which texture and mesh packets never need to be retransmitted. Only the valence (connectivity) packets need to be retransmitted; however, storing valence information requires only about 10% of the space needed for vertices, and even less compared to photorealistic texture. Thus, fewer than 5% of the packets may need to be retransmitted in the worst case for our algorithm to successfully reconstruct an acceptable object under severe packet loss. Even though packet loss during transmission has received limited research attention in the past, this topic is important for improving quality under the lossy conditions created by shadowing and interference. Results from an implementation of the proposed approach using linear, cubic, and Laplacian interpolation are described, and the mesh reconstruction strategy is compared with other methods.
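
    A minimal sketch of the packetisation idea: vertices along a triangle strip are interleaved across packets, so a lost packet removes only scattered vertices that can be approximated from surviving neighbours. The round-robin assignment and the simple linear fill-in are assumptions for illustration, not the paper's exact scheme:

```python
# Sketch: interleave strip vertices across packets; interpolate lost vertices.
import numpy as np

def interleave(strip_vertices, num_packets):
    """Assign consecutive strip vertices to different packets round-robin."""
    return [strip_vertices[k::num_packets] for k in range(num_packets)]

def reconstruct(strip_len, packets, received, positions_by_packet):
    """Rebuild the vertex array; linearly interpolate vertices from lost packets."""
    verts = np.full((strip_len, 3), np.nan)
    for k in received:
        verts[packets[k]] = positions_by_packet[k]
    for i in range(strip_len):                        # fill holes from neighbours
        if np.isnan(verts[i]).any():
            lo = verts[i - 1] if i > 0 else verts[i + 1]
            hi = verts[i + 1] if i + 1 < strip_len else verts[i - 1]
            verts[i] = 0.5 * (lo + hi)                # linear interpolation
    return verts

strip = np.arange(12)                                 # vertex indices along a strip
positions = np.random.rand(12, 3)
packets = interleave(strip, num_packets=4)
payload = [positions[p] for p in packets]
received = [0, 1, 3]                                  # packet 2 lost, not retransmitted
approx = reconstruct(12, packets, received, payload)
```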

    Towards Predictive Rendering in Virtual Reality

    The quest to generate predictive images, i.e., images representing radiometrically correct renditions of reality, has been a longstanding problem in computer graphics. The exactness of such images is extremely important for Virtual Reality applications like Virtual Prototyping, where users need to make decisions impacting large investments based on the simulated images. Unfortunately, the generation of predictive imagery remains an unsolved problem for several reasons, especially if real-time restrictions apply. First, existing scenes used for rendering are not modeled accurately enough to create predictive images. Second, even with huge computational effort, existing rendering algorithms are not able to produce radiometrically correct images. Third, current display devices need to convert rendered images into some low-dimensional color space, which prohibits the display of radiometrically correct images. Overcoming these limitations is the focus of current state-of-the-art research, and this thesis contributes to this task. First, it briefly introduces the necessary background and identifies the steps required for real-time predictive image generation. Then, existing techniques targeting these steps are presented and their limitations are pointed out. To solve some of the remaining problems, novel techniques are proposed. They cover various steps in the predictive image generation process, ranging from accurate scene modeling through efficient data representation to high-quality, real-time rendering. A special focus of this thesis lies on the real-time generation of predictive images using bidirectional texture functions (BTFs), i.e., very accurate representations of spatially varying surface materials. The techniques proposed in this thesis enable efficient handling of BTFs by compressing the huge amount of data contained in this material representation, applying the compressed materials to geometric surfaces using texture and BTF synthesis techniques, and rendering BTF-covered objects in real time. Further approaches proposed in this thesis target the inclusion of real-time global illumination effects and more efficient rendering using novel level-of-detail representations for geometric objects. Finally, this thesis assesses the rendering quality achievable with BTF materials, indicating a significant increase in realism but also confirming the problems that remain to be solved to achieve truly predictive image generation.
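
    A minimal sketch of the kind of data reduction applied to BTFs: the huge texel x (view, light) matrix is factorised (here with a truncated SVD) and reconstructed per texel at lookup time. This only illustrates the compression idea; the dimensions are toy values and the method is not the specific technique proposed in the thesis:

```python
# Sketch: compress a BTF-like matrix with a truncated SVD and look up samples.
import numpy as np

num_texels, num_view_light, rank = 256, 81 * 81, 16
btf = np.random.rand(num_texels, num_view_light).astype(np.float32)

# Offline compression: keep only `rank` components.
u, s, vt = np.linalg.svd(btf, full_matrices=False)
weights = (u[:, :rank] * s[:rank]).astype(np.float32)   # per-texel coefficients
basis = vt[:rank].astype(np.float32)                     # shared view/light basis

def lookup(texel, view_light_index):
    """Reconstruct one BTF sample from the compressed representation."""
    return float(weights[texel] @ basis[:, view_light_index])

compression = btf.nbytes / (weights.nbytes + basis.nbytes)
print(f"compression ratio ~{compression:.1f}x, sample = {lookup(0, 0):.3f}")
```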

    Unified Texture Management for Arbitrary Meshes

    Video games and simulators commonly use very detailed textures whose cumulative size is often larger than GPU memory. Textures may be loaded progressively, but dynamically loading and transferring this large amount of data into GPU memory results in loading delays and poor performance. Managing texture memory has therefore become an important issue. While this problem was (partly) addressed early on for the specific case of terrain rendering, there is no generic texture management system for arbitrary meshes. We propose such a system, implemented on today's GPUs, which unifies the classical solutions for reducing memory transfer: progressive loading, texture compression, and caching strategies. For this, we introduce a new algorithm, running on the GPU, that solves the major difficulty of detecting which parts of the texture are required for rendering. Our system is based on three components manipulating a tile pool that stores texture data in GPU memory. First, the Texture Load Map determines at every frame the appropriate list of texture tiles (i.e., location and MIP-map level) to render from the current viewpoint. Second, the Texture Cache manages the tile pool. Finally, the Texture Producer loads and decodes the required texture tiles asynchronously into the tile pool. Decoding of compressed texture data is implemented on the GPU to minimize texture transfer; the Texture Producer can also generate procedural textures. Our system is transparent to the user, and the only parameter that must be supplied at runtime is the current viewpoint. No modifications of the mesh are required. We demonstrate our system on large scenes displayed in real time and show that it achieves interactive frame rates even in low-memory, low-bandwidth situations.
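
    A minimal sketch of the tile-pool bookkeeping described above: a fixed-size cache of texture tiles keyed by (MIP level, tile x, tile y), refreshed each frame from the list of visible tiles. The class and method names are illustrative rather than the paper's API, and the loading/decoding path is stubbed out:

```python
# Sketch: an LRU tile cache fed by a per-frame list of required tiles.
from collections import OrderedDict

class TileCache:
    def __init__(self, capacity):
        self.capacity = capacity            # number of slots in the GPU tile pool
        self.tiles = OrderedDict()          # (mip, x, y) -> decoded tile data

    def request(self, key, producer):
        """Return a resident tile, producing and evicting (LRU) on a miss."""
        if key in self.tiles:
            self.tiles.move_to_end(key)     # mark as most recently used
            return self.tiles[key]
        if len(self.tiles) >= self.capacity:
            self.tiles.popitem(last=False)  # evict the least recently used tile
        self.tiles[key] = producer(key)     # asynchronous in the real system
        return self.tiles[key]

def producer(key):
    """Stand-in for the Texture Producer: load/decode the tile for this key."""
    mip, x, y = key
    return f"tile data for mip {mip}, tile ({x}, {y})"

cache = TileCache(capacity=64)
visible_tiles = [(0, 5, 7), (1, 2, 3), (0, 5, 8)]   # from the Texture Load Map
for key in visible_tiles:
    cache.request(key, producer)
```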

    Scalable Realtime Rendering and Interaction with Digital Surface Models of Landscapes and Cities

    Interactive, realistic rendering of landscapes and cities differs substantially from classical terrain rendering. Due to the sheer size and detail of the data that need to be processed, realtime rendering (i.e., more than 25 images per second) is only feasible with level-of-detail (LOD) models. Even the design and implementation of efficient, automatic LOD generation is ambitious for such out-of-core datasets, considering the large number of scales covered in a single view and the need to maintain screen-space accuracy for a realistic representation. Moreover, users want to interact with the model based on semantic information, which needs to be linked to the LOD model. In this thesis I present LOD schemes for the efficient rendering of 2.5d digital surface models (DSMs) and 3d point clouds, a method for the automatic derivation of city models from raw DSMs, and an approach allowing semantic interaction with complex LOD models. The hierarchical LOD model for digital surface models is based on a quadtree of precomputed, simplified triangle-mesh approximations. The proposed model is shown to allow real-time rendering of very large and complex models with pixel-accurate detail, and the necessary preprocessing is scalable and fast. For 3d point clouds, I introduce an LOD scheme based on an octree of hybrid plane-polygon representations. For each LOD, the algorithm detects planar regions in an adequately subsampled point cloud and models them as textured rectangles. Rendering the resulting hybrid model is an order of magnitude faster than comparable point-based LOD schemes. To automatically derive a city model from a DSM, I propose a constrained mesh simplification. Apart from the geometric distance between the simplified and the original model, it evaluates constraints based on detected planar structures and their mutual topological relations. The resulting models are much less complex than the original DSM but still represent the characteristic building structures faithfully. Finally, I present a method to combine semantic information with complex geometric models. My approach links semantic entities to geometric entities on the fly via coarser proxy geometries that carry the semantic information. Thus, semantic information can be layered on top of complex LOD models without an explicit attribution step. All findings are supported by experimental results which demonstrate the practical applicability and efficiency of the methods.
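
    A minimal sketch of screen-space-error driven refinement over a quadtree of precomputed mesh approximations, the kind of LOD selection a pixel-accurate DSM renderer needs; the error metric, thresholds, and node layout are illustrative assumptions, not the thesis implementation:

```python
# Sketch: pick the coarsest quadtree tiles whose projected error is sub-pixel.
import math
from dataclasses import dataclass, field

@dataclass
class QuadNode:
    center: tuple                 # (x, y, z) of the tile centre
    geometric_error: float        # max deviation of this approximation (metres)
    children: list = field(default_factory=list)

    def distance_to(self, p):
        return math.dist(self.center, p)

def screen_space_error(geometric_error, distance, fov_y, viewport_height):
    """Project a node's geometric error (metres) to pixels on screen."""
    return geometric_error * viewport_height / (2.0 * distance * math.tan(fov_y / 2.0))

def select_lod(node, camera, fov_y, viewport_height, max_pixel_error=1.0):
    """Return the coarsest set of quadtree tiles whose error is sub-pixel."""
    distance = max(node.distance_to(camera), 1e-3)
    err = screen_space_error(node.geometric_error, distance, fov_y, viewport_height)
    if err <= max_pixel_error or not node.children:
        return [node]                       # accurate enough: render this tile
    tiles = []
    for child in node.children:             # otherwise descend into the quadtree
        tiles += select_lod(child, camera, fov_y, viewport_height, max_pixel_error)
    return tiles

root = QuadNode((0, 0, 0), geometric_error=8.0,
                children=[QuadNode((x, y, 0), geometric_error=2.0)
                          for x in (-500, 500) for y in (-500, 500)])
tiles = select_lod(root, camera=(0, 0, 2000),
                   fov_y=math.radians(60), viewport_height=1080)
```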