
    Mesh compression: Theory and practice.

    Three-dimensional meshes (3D meshes, for short) are fast becoming an established media type, used in application domains such as engineering design, manufacturing, architecture, bioinformatics, medicine, entertainment, commerce, science, and defense. The volume of 3D mesh data circulated on the internet is increasing rapidly, and meshes are used nearly as frequently as other media types such as text, audio (1D), and images and video (2D). Hence, 3D meshes need good processing and visualization methods. Moreover, these meshes are much larger than the other media types mentioned above, and their size often exceeds the memory and bandwidth available for storage and transmission. Compression schemes for such large 3D meshes have therefore become a subject of intense study. Meshes are made up of either triangles or quadrilaterals: meshes made up of only triangles are called triangle meshes, and meshes made up of quadrilaterals are called quadrilateral meshes (quad meshes, for short). A mesh is described by specifying its geometry (vertex coordinates) and its connectivity (adjacencies of the triangles or quadrilaterals). Previous research on mesh compression has mostly targeted triangle meshes; quad meshes were traditionally handled by first triangulating them and then applying triangle mesh compression techniques. To avoid this additional triangulation step, a direct technique is proposed for compressing and decompressing the connectivity of quad meshes. It takes a quad mesh as input and encodes its connectivity as a sequence of opcodes from which the decompression technique restores the quad mesh. A data structure called EdgeTable is introduced to aid in the traversal of a quad mesh during compression. In addition, a technique based on constrained Delaunay triangulation is proposed for reconstructing the connectivity of a 2D mesh from its geometry and a minimum set of edges. Source: Masters Abstracts International, Volume 44-03, page 1393. Thesis (M.Sc.)--University of Windsor (Canada), 2005.
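
    To make the traversal idea concrete, the sketch below shows one plausible shape for an edge-adjacency table over a quad mesh and a toy opcode emitter built on top of it. The EdgeTable name comes from the abstract, but its fields, the traversal order, and the three opcodes are hypothetical illustrations, not the thesis's actual encoding.

```python
from collections import defaultdict

class EdgeTable:
    """Maps an undirected edge (min_v, max_v) to the quads sharing it."""
    def __init__(self, quads):
        self.edge_to_quads = defaultdict(list)
        for qi, quad in enumerate(quads):
            for k in range(4):
                a, b = quad[k], quad[(k + 1) % 4]
                self.edge_to_quads[(min(a, b), max(a, b))].append(qi)

    def neighbor(self, qi, a, b):
        """Quad on the other side of edge (a, b), or None on the boundary."""
        others = [q for q in self.edge_to_quads[(min(a, b), max(a, b))] if q != qi]
        return others[0] if others else None

def encode_connectivity(quads):
    """Emit toy opcodes: 'Q' when the traversal reaches an unvisited quad,
    'B' when it meets a boundary edge, 'V' when the neighbor is already
    visited. A real coder would entropy-code this opcode stream."""
    table, visited, ops, stack = EdgeTable(quads), set(), [], [0]
    while stack:
        qi = stack.pop()
        if qi in visited:
            continue
        visited.add(qi)
        ops.append('Q')
        quad = quads[qi]
        for k in range(4):
            nb = table.neighbor(qi, quad[k], quad[(k + 1) % 4])
            if nb is None:
                ops.append('B')
            elif nb in visited:
                ops.append('V')
            else:
                stack.append(nb)
    return ops

print(encode_connectivity([(0, 1, 2, 3), (1, 4, 5, 2)]))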

    Multiresolution Techniques for Real-Time Visualization of Urban Environments and Terrains

    In recent times we have witnessed a steep increase in the availability of data from real-life environments. Nowadays, virtually everyone connected to the Internet has instant access to a tremendous amount of data from satellite elevation maps, airborne time-of-flight scanners and digital cameras, street-level photographs, and even cadastral maps. As with other, more traditional types of media such as pictures and videos, users of digital exploration software expect commodity hardware to perform well enough for interactive use, regardless of dataset size. In this thesis we propose novel solutions to the problem of rendering large terrain and urban models on commodity platforms, for both local and remote exploration. Our solutions build on the concept of multiresolution representation, where alternative representations of the same data at different accuracies are used to distribute the computational power, and consequently the visual accuracy, where it is most needed, based on the user's point of view. In particular, we introduce an efficient multiresolution data compression technique for planar and spherical surfaces, applied to terrain datasets, that can handle huge amounts of information at a planetary scale. We also describe a novel data structure for compact storage and rendering of urban entities such as buildings, allowing real-time exploration of cityscapes from a remote online repository. Moreover, we show how recent technologies can be exploited to transparently integrate virtual exploration and general computer graphics techniques with web applications.
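
    The core multiresolution mechanism described above can be illustrated with a short sketch: refine a node of the hierarchy only while its object-space error, projected to screen space, exceeds a pixel tolerance. The node interface, the error model, and all names below are illustrative assumptions, not the thesis's actual data structures.

```python
import math
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class TerrainNode:
    center: Tuple[float, float, float]
    geometric_error: float                     # object-space error of this node
    children: List["TerrainNode"] = field(default_factory=list)

def projected_error_px(object_error, distance, fov_y_rad, viewport_h_px):
    """Approximate how many pixels a world-space error spans on screen."""
    world_units_per_px = 2.0 * distance * math.tan(fov_y_rad / 2.0) / viewport_h_px
    return object_error / world_units_per_px

def select_lod(node, eye, tolerance_px, fov_y_rad, viewport_h_px, out):
    """Collect the coarsest set of nodes whose projected error is tolerable."""
    distance = max(math.dist(eye, node.center), 1e-6)
    err_px = projected_error_px(node.geometric_error, distance,
                                fov_y_rad, viewport_h_px)
    if err_px <= tolerance_px or not node.children:
        out.append(node)                       # accurate enough (or a leaf)
    else:
        for child in node.children:            # otherwise refine further
            select_lod(child, eye, tolerance_px, fov_y_rad, viewport_h_px, out)
```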

    STL 2.0: A Proposal for a Universal Multi-Material Additive Manufacturing File Format

    The de facto standard STL file format has served the rapid prototyping community for over two decades, but it falls short in light of new technological developments such as the ability to handle multiple and graded materials, to specify volumetric digital inkjet patterns, and to represent surface colors. We study a variety of requirements for additive fabrication technologies and propose a new compact XML-based file format. The new Additive Manufacturing File (AMF) format allows the resolution-independent specification of geometry and material properties. Regions may be defined geometrically using a triangle mesh, using functional representations, or through a voxel bitmap. Each region is associated with a material, which may be defined as a base (single) material or hierarchically as a combination of other materials, either functionally (enabling smooth gradients) or voxel-wise (for arbitrary microstructure). Files can be self-contained or can refer to external or online material libraries. With a simple conversion, the AMF file format is both forward and backward compatible with the current standard STL format, and the flexibility of the XML structure enables additional features to be adopted as needed by CAD programs and future additive manufacturing processes. Code and examples are publicly available.
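
    As a rough illustration of the format's structure, the sketch below emits a small AMF-like XML document with Python's standard library: two base materials, a graded composite defined as formulas over them, and one object whose triangle-mesh volume references the composite. Element names approximate the AMF elements described above, but the exact attributes and schema details are simplified assumptions, not a substitute for the published specification.

```python
import xml.etree.ElementTree as ET

amf = ET.Element("amf", unit="millimeter")

# Two base materials, plus a composite graded along z as a formula of them.
for mid in ("1", "2"):
    ET.SubElement(amf, "material", id=mid)
graded = ET.SubElement(amf, "material", id="3")
ET.SubElement(graded, "composite", materialid="1").text = "0.5 + 0.5*z"
ET.SubElement(graded, "composite", materialid="2").text = "0.5 - 0.5*z"

# One object: a single triangle whose volume references the graded material.
mesh = ET.SubElement(ET.SubElement(amf, "object", id="0"), "mesh")
vertices = ET.SubElement(mesh, "vertices")
for x, y, z in [(0, 0, 0), (1, 0, 0), (0, 1, 0)]:
    coords = ET.SubElement(ET.SubElement(vertices, "vertex"), "coordinates")
    for name, value in zip("xyz", (x, y, z)):
        ET.SubElement(coords, name).text = str(value)
triangle = ET.SubElement(ET.SubElement(mesh, "volume", materialid="3"), "triangle")
for name, value in zip(("v1", "v2", "v3"), (0, 1, 2)):
    ET.SubElement(triangle, name).text = str(value)

ET.ElementTree(amf).write("example.amf", encoding="utf-8", xml_declaration=True)
```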

    3-D Mesh geometry compression with set partitioning in the spectral domain

    This paper presents the development of a highly efficient progressive 3-D mesh geometry coder based on the region-adaptive transform of the spectral mesh compression method. A hierarchical set-partitioning technique, originally proposed for the efficient compression of wavelet transform coefficients in high-performance wavelet-based image coding methods, is proposed for the efficient compression of the coefficients of this transform. Experiments confirm that the proposed coder employing such a region-adaptive transform achieves a compression performance rarely matched by other state-of-the-art 3-D mesh geometry compression algorithms. A new, high-performance fixed spectral basis method is also proposed for reducing the computational complexity of the transform. Many-to-one mappings are employed to relate the coded irregular mesh region to a regular mesh whose basis is used. To prevent loss of compression performance due to the low-pass nature of such mappings, transitions are made from transform-based coding to spatial coding on a per-region basis at high coding rates. Experimental results show the performance advantage of the newly proposed fixed spectral basis method over the original fixed spectral basis method in the literature, which employs one-to-one mappings. This work was supported in part by the Scientific and Technological Research Council of Turkey and conducted under Project 106E064.
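
    The set-partitioning idea borrowed from wavelet image coders can be sketched briefly: coefficients are visited bit-plane by bit-plane, from the most significant plane down, and only significance events need to be entropy-coded. The toy below works on a flat coefficient list; real coders such as SPIHT exploit hierarchical set structures and an arithmetic coder, and all names here are illustrative.

```python
def significance_passes(coeffs, num_planes):
    """Yield (plane, indices) where indices marks coefficients that first
    become significant at this bit-plane, scanning from the MSB down."""
    magnitudes = [abs(c) for c in coeffs]
    significant = [False] * len(coeffs)
    top = max(magnitudes).bit_length() - 1 if any(magnitudes) else 0
    for plane in range(top, max(top - num_planes, -1), -1):
        threshold = 1 << plane
        newly = []
        for i, m in enumerate(magnitudes):
            if not significant[i] and m >= threshold:
                significant[i] = True
                newly.append(i)
        yield plane, newly  # a real coder would entropy-code these events

# Example: coarse planes first, refining the spectrum progressively.
for plane, newly_significant in significance_passes([37, -5, 12, 2, -19], 6):
    print(f"plane {plane}: newly significant {newly_significant}")
```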

    3D Mesh Simplification. A survey of algorithms and CAD model simplification tests

    Simplification of highly detailed CAD models is an important step when CAD models are visualized or otherwise utilized in augmented reality applications. Without simplification, CAD models may cause severe processing and storage issues, especially on mobile devices. In addition, simplified models may have other advantages, such as better visual clarity or improved reliability when used for visual pose tracking. The geometry of CAD models is invariably presented in the form of a 3D mesh. In this paper, we survey mesh simplification algorithms in general and focus especially on algorithms that can be used to simplify CAD models. We test some commonly known algorithms on real-world CAD data and characterize some new CAD-related simplification algorithms that have not been covered in previous mesh simplification reviews.
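
    Many of the algorithms such surveys cover are variants of greedy edge collapse, which can be sketched compactly. Production simplifiers typically rank edges by a quadric error metric and update costs carefully after each collapse; the toy below instead uses squared edge length as the cost and collapses onto the edge midpoint, purely as an illustration.

```python
import heapq

def simplify(vertices, edges, target_vertex_count):
    """vertices: {vid: (x, y, z)}; edges: set of frozenset({a, b})."""
    def cost(e):
        a, b = tuple(e)
        return sum((p - q) ** 2 for p, q in zip(vertices[a], vertices[b]))

    heap = [(cost(e), tuple(sorted(e)), e) for e in edges]
    heapq.heapify(heap)
    while len(vertices) > target_vertex_count and heap:
        _, _, e = heapq.heappop(heap)
        if e not in edges:
            continue                      # stale entry: an endpoint vanished
        a, b = tuple(e)
        # Merge b into a, placing the survivor at the edge midpoint.
        vertices[a] = tuple((p + q) / 2 for p, q in zip(vertices[a], vertices[b]))
        del vertices[b]
        for old in [x for x in edges if b in x]:
            edges.discard(old)
            c = (set(old) - {b}).pop()
            if c != a:                    # re-link b's other edges to a
                new_edge = frozenset({a, c})
                edges.add(new_edge)
                heapq.heappush(heap, (cost(new_edge), tuple(sorted(new_edge)), new_edge))
    return vertices, edges
```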

    Towards Predictive Rendering in Virtual Reality

    The quest for generating predictive images, i.e., images representing radiometrically correct renditions of reality, has been a longstanding problem in computer graphics. The exactness of such images is extremely important for Virtual Reality applications like Virtual Prototyping, where users must make decisions with large financial impact based on the simulated images. Unfortunately, the generation of predictive imagery is still an unsolved problem for manifold reasons, especially when real-time constraints apply. First, existing scenes used for rendering are not modeled accurately enough to create predictive images. Second, even with huge computational effort, existing rendering algorithms are unable to produce radiometrically correct images. Third, current display devices must convert rendered images into a low-dimensional color space, which prohibits the display of radiometrically correct images. Overcoming these limitations is the focus of current state-of-the-art research, and this thesis contributes to this task. First, it briefly introduces the necessary background and identifies the steps required for real-time predictive image generation. Then, existing techniques targeting these steps are presented and their limitations pointed out. To solve some of the remaining problems, novel techniques are proposed. They cover various steps in the predictive image generation process, ranging from accurate scene modeling over efficient data representation to high-quality, real-time rendering. A special focus of this thesis lies on the real-time generation of predictive images using bidirectional texture functions (BTFs), i.e., very accurate representations of spatially varying surface materials. The techniques proposed in this thesis enable efficient handling of BTFs by compressing the huge amount of data contained in this material representation, by applying BTFs to geometric surfaces using texture and BTF synthesis techniques, and by rendering BTF-covered objects in real time. Further approaches proposed in this thesis target the inclusion of real-time global illumination effects and more efficient rendering using novel level-of-detail representations for geometric objects. Finally, this thesis assesses the rendering quality achievable with BTF materials, indicating a significant increase in realism but also confirming the problems that remain to be solved to achieve truly predictive image generation.
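
    One widely used ingredient of BTF compression, linear-basis factorization, can be sketched in a few lines: arrange the measured data as a matrix of texels by (view, light) directions, keep a truncated SVD, and reconstruct per-texel appearance on demand. The thesis's actual codec is more elaborate; the shapes, names, and plain SVD below are illustrative assumptions.

```python
import numpy as np

def compress_btf(btf_matrix, num_components):
    """btf_matrix: (num_texels, num_view_light_pairs) reflectance samples."""
    u, s, vt = np.linalg.svd(btf_matrix, full_matrices=False)
    weights = u[:, :num_components] * s[:num_components]   # per-texel weights
    basis = vt[:num_components, :]                         # per-direction basis
    return weights, basis

def shade_texel(weights, basis, texel, direction):
    """Reconstruct one texel's reflectance for one (view, light) pair."""
    return float(weights[texel] @ basis[:, direction])

# Toy usage: 256 texels x 81*81 (view, light) pairs of random "measurements".
btf = np.random.rand(256, 81 * 81).astype(np.float32)
w, b = compress_btf(btf, num_components=8)
print(shade_texel(w, b, texel=0, direction=0))
```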

    Towards a High Quality Real-Time Graphics Pipeline

    Modern graphics hardware pipelines create photorealistic images with high geometric complexity in real time. The quality is constantly improving and advanced techniques from feature film visual effects, such as high dynamic range images and support for higher-order surface primitives, have recently been adopted. Visual effect techniques have large computational costs and significant memory bandwidth usage. In this thesis, we identify three problem areas and propose new algorithms that increase the performance of a set of computer graphics techniques. Our main focus is on efficient algorithms for the real-time graphics pipeline, but parts of our research are equally applicable to offline rendering. Our first focus is texture compression, which is a technique to reduce memory bandwidth usage. The core idea is to store images in small compressed blocks which are sent over the memory bus and are decompressed on-the-fly when accessed. We present compression algorithms for two types of texture formats. High dynamic range images capture environment lighting with luminance differences over a wide intensity range. Normal maps store perturbation vectors for local surface normals, and give the illusion of high geometric surface detail. Our compression formats are tailored to these texture types and have compression ratios of 6:1, high visual fidelity, and low-cost decompression logic. Our second focus is tessellation culling. Culling is a commonly used technique in computer graphics for removing work that does not contribute to the final image, such as completely hidden geometry. By discarding rendering primitives from further processing, substantial arithmetic computations and memory bandwidth can be saved. Modern graphics processing units include flexible tessellation stages, where rendering primitives are subdivided for increased geometric detail. Images with highly detailed models can be synthesized, but the incurred cost is significant. We have devised a simple remapping technique that allows for better tessellation distribution in screen space. Furthermore, we present programmable tessellation culling, where bounding volumes for displaced geometry are computed and used to conservatively test whether a primitive can be discarded before tessellation. We introduce a general tessellation culling framework, and an optimized algorithm for rendering of displaced Bézier patches, which is expected to be a common use case for graphics hardware tessellation. Our third and final focus is forward-looking, and relates to efficient algorithms for stochastic rasterization, a rendering technique where camera effects such as depth of field and motion blur can be faithfully simulated. We extend a graphics pipeline with stochastic rasterization in spatio-temporal space and show that stochastic motion blur can be rendered with rather modest pipeline modifications. Furthermore, backface culling algorithms for motion blur and depth of field rendering are presented, which are directly applicable to stochastic rasterization. Hopefully, our work in this field brings us closer to high-quality, real-time stochastic rendering.
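
    The conservative pre-tessellation culling test described above can be sketched as follows: bound the displaced patch by expanding a bounding sphere of the control points by the maximum displacement, then test that bound against the view-frustum planes and discard the patch only if it is provably outside. The plane representation and all names below are illustrative assumptions, not the thesis's exact algorithm.

```python
import math

def bounding_sphere(control_points):
    """Crude sphere: centroid plus the max distance to it."""
    n = len(control_points)
    center = tuple(sum(p[i] for p in control_points) / n for i in range(3))
    radius = max(math.dist(center, p) for p in control_points)
    return center, radius

def cull_before_tessellation(control_points, max_displacement, frustum_planes):
    """frustum_planes: [(a, b, c, d)] with inward-facing normals, so a point
    p lies outside a plane when a*px + b*py + c*pz + d < 0."""
    center, radius = bounding_sphere(control_points)
    radius += max_displacement            # conservative: covers displacement
    for a, b, c, d in frustum_planes:
        if a * center[0] + b * center[1] + c * center[2] + d < -radius:
            return True                   # fully outside one plane: cull
    return False                          # keep: send on to tessellation
```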

    Discontinuity-Aware Base-Mesh Modeling of Depth for Scalable Multiview Image Synthesis and Compression

    This thesis is concerned with the challenge of deriving disparity from sparsely communicated depth for performing disparity-compensated view synthesis for compression and rendering of multiview images. The modeling of depth is essential for deducing disparity at view locations where depth is not available and is also critical for visibility reasoning and occlusion handling. This thesis first explores disparity derivation methods and disparity-compensated view synthesis approaches. Investigations reveal the merits of adopting a piece-wise continuous mesh description of depth for deriving disparity at target view locations to enable disparity-compensated backward warping of texture. Visibility information can be inferred from the correspondence relationship between views that a mesh model provides, while the connectivity of a mesh model assists in resolving depth occlusion. The recent JPEG 2000 Part-17 extension defines tools for scalable coding of discontinuous media using a breakpoint-dependent DWT, where breakpoints describe discontinuity boundary geometry. This thesis proposes a method to efficiently reconstruct depth coded using JPEG 2000 Part-17 as a piece-wise continuous mesh, where discontinuities are driven by the encoded breakpoints. Results show that the proposed mesh can accurately represent decoded depth while its complexity scales with decoded depth quality. The piece-wise continuous mesh model anchored at a single viewpoint or base-view can be augmented to form a multi-layered structure where the underlying layers carry depth information of regions that are occluded at the base-view. Such a consolidated mesh representation is termed a base-mesh model and can be projected to many viewpoints to deduce complete disparity fields between any pair of views that are inherently consistent. Experimental results demonstrate the superior performance of the base-mesh model in multiview synthesis and compression compared to other state-of-the-art methods, including the JPEG Pleno light field codec. The proposed base-mesh model departs greatly from the conventional pixel-wise or block-wise depth models and the forward depth mapping for deriving disparity ingrained in existing multiview processing systems. When performing disparity-compensated view synthesis, there can be regions for which reference texture is unavailable, and inpainting is required. A new depth-guided texture inpainting algorithm is proposed to restore occluded texture in regions where depth information is either available or can be inferred using the base-mesh model.
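
    For the special case of a rectified view pair, the disparity derivation and backward warping described above reduce to a few lines: disparity is focal length times baseline over depth, and each target pixel fetches its texture from the reference view shifted by that disparity. The thesis's mesh-based model generalizes this to arbitrary viewpoints and handles occlusion; the sketch below, including its names and sign convention, is an illustrative assumption.

```python
import numpy as np

def backward_warp(reference, target_depth, focal_px, baseline):
    """reference: (H, W) texture at the reference view;
    target_depth: (H, W) depth anchored at the target view."""
    h, w = target_depth.shape
    disparity = focal_px * baseline / np.maximum(target_depth, 1e-6)
    xs = np.arange(w)[None, :] + disparity          # source columns
    src_x = np.clip(np.round(xs).astype(int), 0, w - 1)
    rows = np.arange(h)[:, None]
    warped = reference[rows, src_x]                 # fetch per target pixel
    # Pixels whose source column fell outside the image need inpainting.
    holes = (xs < 0) | (xs > w - 1)
    return warped, holes
```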

    Lossless Compression of Predicted Floating-Point Geometry

    The size of geometric data sets in scientific and industrial applications is constantly increasing. Storing surface or volume meshes in standard uncompressed formats results in large files that are expensive to store and slow to load and transmit. Scientists and engineers often refrain from using mesh compression because currently available schemes modify the mesh data. While connectivity is encoded in a lossless manner, the floating-point coordinates associated with the vertices are quantized onto a uniform integer grid to enable efficient predictive compression. Although a fine enough grid can usually represent the data with sufficient precision, the original floating-point values will change, regardless of grid resolution. In this paper we describe a method for compressing floating-point coordinates with predictive coding in a completely lossless manner. The initial quantization step is omitted and predictions are calculated in floating-point. The predicted and the actual floating-point values are broken up into sign, exponent, and mantissa, and their corrections are compressed separately with context-based arithmetic coding. As the quality of the predictions varies with the exponent, we use the exponent to switch between different arithmetic contexts. We report compression results using the popular parallelogram predictor, but our approach will work with any prediction scheme. The achieved bit-rates for lossless floating-point compression nicely complement those resulting from uniformly quantizing with different precisions.
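
    Two of the ingredients the paper combines are easy to sketch: the parallelogram prediction evaluated in floating point, and the decomposition of predicted and actual values into sign, exponent, and mantissa. The paper then compresses the component corrections with context-based arithmetic coding, switching contexts on the exponent; the sketch below stops at the decomposition, and its helper names are illustrative.

```python
import struct

def float_fields(value):
    """Split an IEEE-754 single into (sign, exponent, mantissa)."""
    bits = struct.unpack("<I", struct.pack("<f", value))[0]
    return bits >> 31, (bits >> 23) & 0xFF, bits & 0x7FFFFF

def parallelogram_predict(a, b, c):
    """Predict the fourth vertex of the parallelogram spanned by triangle
    (a, b, c) across edge (b, c), computed per coordinate in floating point."""
    return tuple(pb + pc - pa for pa, pb, pc in zip(a, b, c))

actual = (1.25, 2.5, 0.125)
pred = parallelogram_predict((0.0, 0.0, 0.0), (1.0, 2.0, 0.0), (0.25, 0.5, 0.125))
for p, v in zip(pred, actual):
    ps, pe, pm = float_fields(p)
    vs, ve, vm = float_fields(v)
    # A coder would transmit these component differences, switching
    # arithmetic-coding contexts on the predicted exponent pe.
    print(f"sign diff {vs ^ ps}, exp diff {ve - pe}, mantissa diff {vm - pm}")
```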