
    Static 3D Triangle Mesh Compression Overview

    3D triangle meshes are widely used to model discrete surfaces and are almost always represented with two tables: one for geometry and another for connectivity. While the raw size of a triangle mesh is around 200 bits per vertex, by coding these two distinct kinds of information cleverly (and separately) it is possible to achieve compression ratios of 15:1 or more. Different techniques must be used depending on whether single-rate or progressive bitstreams are sought and, in the latter case, on whether hierarchically nested meshes are desirable during reconstruction.
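    A minimal sketch of the two-table representation described above, using an illustrative toy tetrahedron; the array names and the rough bits-per-vertex arithmetic are for illustration only, and the exact raw size depends on coordinate and index precision.

```python
import numpy as np

# Geometry table: one 3D coordinate per vertex (|V| x 3 floats).
vertices = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
], dtype=np.float32)

# Connectivity table: three vertex indices per triangle (|T| x 3 ints).
triangles = np.array([
    [0, 1, 2],
    [0, 1, 3],
    [0, 2, 3],
    [1, 2, 3],
], dtype=np.int32)

# Rough raw-size estimate per vertex: 3 coordinates at 32 bits each, plus
# roughly 2 triangles per vertex, each referencing 3 indices. With indices of
# ~17 bits (enough for ~10^5 vertices) this lands near the ~200 bits/vertex
# figure quoted in the abstract; wider indices push it higher.
index_bits = 17
bits_per_vertex = 3 * 32 + 2 * 3 * index_bits
print("rough raw bits per vertex:", bits_per_vertex)  # 198
```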

    Mesh compression: Theory and practice.

    Three-dimensional meshes (3D meshes, for short) are fast becoming an emerging media type, used in a variety of application domains such as engineering design, manufacturing, architecture, bioinformatics, medicine, entertainment, commerce, science, and defense. The volume of data of this media type circulated on the internet is increasing rapidly, and 3D meshes are being used as frequently as other media types like text, audio (1D), images, and video (2D). Hence, 3D meshes need good processing and visualization methods. Moreover, the sizes of these meshes are much greater than those of the other media types mentioned above and often exceed the memory and bandwidth available for their storage and transmission. Compression schemes for such large 3D meshes have therefore become a subject of intense study. Meshes are made up of either triangles or quadrilaterals: meshes made up of only triangles are called triangle meshes, and meshes made up of quadrilaterals are called quadrilateral meshes (quad meshes, for short). A mesh is described by specifying its geometry (vertex coordinates) and its connectivity (adjacencies of the triangles or quadrilaterals). Previous research on mesh compression has mostly targeted triangle meshes; quad meshes were traditionally handled by first triangulating them and then applying triangle mesh compression techniques. To avoid this additional triangulation step, a direct technique is proposed for compressing and decompressing the connectivity of quad meshes. This technique takes a quad mesh as input and encodes its connectivity as a sequence of opcodes from which the quad mesh can be restored by the decompression technique. A data structure called EdgeTable is introduced to aid in the traversal of a quad mesh during compression. In addition, a technique based on constrained Delaunay triangulation is proposed for reconstructing the connectivity of a 2D mesh from its geometry and a minimum set of edges. Source: Masters Abstracts International, Volume: 44-03, page: 1393. Thesis (M.Sc.)--University of Windsor (Canada), 2005.
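    The traditional pipeline mentioned above (triangulate, then run a triangle-mesh coder) can be sketched as below; the quad-splitting rule shown is a generic one, and the thesis's actual opcode format and EdgeTable structure are not reproduced here.

```python
from typing import List, Tuple

Quad = Tuple[int, int, int, int]
Tri = Tuple[int, int, int]

def triangulate_quads(quads: List[Quad]) -> List[Tri]:
    """Split each quad (a, b, c, d) into the two triangles (a, b, c) and (a, c, d).

    This is the extra triangulation step that traditional pipelines perform
    before applying a triangle-mesh connectivity coder; the proposed
    EdgeTable-based technique avoids it by encoding quad connectivity directly.
    """
    tris: List[Tri] = []
    for a, b, c, d in quads:
        tris.append((a, b, c))
        tris.append((a, c, d))
    return tris

# Example: a 2x1 strip of quads over six vertices.
quads = [(0, 1, 4, 3), (1, 2, 5, 4)]
print(triangulate_quads(quads))  # 4 triangles, twice the face count
```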

    Mesh Compression

    Mesh compression is a broad research area with applications in many different domains, such as the handling of very large models, the exchange of three-dimensional content over the internet, electronic commerce, the flexible representation of volumetric data, and so on. In this thesis the mesh compression method of the Cut-Border Machine is described. The Cut-Border Machine encodes meshes by growing a region through the mesh and encoding the way in which the mesh elements are incorporated into the growing region. The Cut-Border Machine can be applied to triangular and tetrahedral meshes. Although the method is not complicated, it achieves very good compression rates; in the tetrahedral case the Cut-Border Machine performs best among all known methods. The simple nature of the Cut-Border Machine allows, on the one hand, for a hardware implementation and also performs extremely well as a software implementation. On the other hand, the simplicity allows for a theoretical analysis of the Cut-Border Machine: it could be shown that for planar triangulations a slightly modified version of the Cut-Border Machine runs in time linear in the number of vertices and that the compressed representation consumes only linear storage space, i.e. no more than five bits per vertex.
    Besides the detailed description of the Cut-Border Machine with several improvements and optimizations, the thesis gives an introduction to meshes and appropriate data structures, develops several coding techniques useful for mesh compression, and gives a broad overview of related work. Furthermore, the author improves the encoding efficiency of several other compression techniques. In particular, the algorithmically achieved upper bound for the encoding of planar triangulations could be improved to ten percent above the theoretical limit, which is the best known result to date.
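    A minimal sketch of the region-growing traversal idea behind the Cut-Border Machine, assuming a manifold triangle list as input; it only reproduces the face-visiting order, not the actual cut-border operation symbols or their entropy coding.

```python
from collections import deque
from typing import Dict, FrozenSet, List, Tuple

Tri = Tuple[int, int, int]

def region_growing_order(triangles: List[Tri], seed: int = 0) -> List[int]:
    """Visit triangles by growing a region across shared edges, roughly the
    traversal idea behind cut-border style coders. The per-step operation
    symbols that the Cut-Border Machine would actually emit are omitted;
    this only returns the order in which faces are absorbed."""
    # Map each undirected edge to the triangles containing it.
    edge_to_tris: Dict[FrozenSet[int], List[int]] = {}
    for t, (a, b, c) in enumerate(triangles):
        for e in (frozenset((a, b)), frozenset((b, c)), frozenset((c, a))):
            edge_to_tris.setdefault(e, []).append(t)

    visited = {seed}
    order = [seed]
    frontier = deque([seed])
    while frontier:
        t = frontier.popleft()
        a, b, c = triangles[t]
        for e in (frozenset((a, b)), frozenset((b, c)), frozenset((c, a))):
            for nb in edge_to_tris[e]:
                if nb not in visited:
                    visited.add(nb)
                    order.append(nb)
                    frontier.append(nb)
    return order

# Example: the four faces of a tetrahedron.
tet = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
print(region_growing_order(tet))
```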

    Wavelet representation of contour sets

    We present a new wavelet compression and multiresolution modeling approach for sets of contours (level sets). In contrast to previous wavelet schemes, our algorithm creates a parametrization of a scalar field induced by its contours and compactly stores this parametrization rather than function values sampled on a regular grid. Our representation is based on hierarchical polygon meshes with subdivision connectivity whose vertices are transformed into wavelet coefficients. From this sparse set of coefficients, every set of contours can be efficiently reconstructed at multiple levels of resolution. When applying lossy compression that introduces high quantization errors, our method preserves contour topology, in contrast to compression methods applied to the corresponding field function. We provide numerical results for scalar fields defined on planar domains. Our approach generalizes to volumetric domains, time-varying contours, and level sets of vector fields.
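    A 1D analogue of the idea of turning subdivision-connectivity vertices into wavelet coefficients: a midpoint-prediction (lifting-style) transform on a closed contour. This is a simplified stand-in, not the paper's actual transform.

```python
import numpy as np

def analyze(points: np.ndarray):
    """One level of a midpoint-prediction wavelet on a closed contour sampled
    at 2n points: even-indexed points become the coarse contour, odd-indexed
    points are stored as detail vectors relative to the midpoint of their
    coarse neighbours."""
    coarse = points[0::2]
    odd = points[1::2]
    predicted = 0.5 * (coarse + np.roll(coarse, -1, axis=0))  # closed contour
    detail = odd - predicted
    return coarse, detail

def synthesize(coarse: np.ndarray, detail: np.ndarray) -> np.ndarray:
    """Inverse of analyze: re-insert the odd points from coarse + detail."""
    predicted = 0.5 * (coarse + np.roll(coarse, -1, axis=0))
    odd = predicted + detail
    out = np.empty((coarse.shape[0] * 2, coarse.shape[1]))
    out[0::2] = coarse
    out[1::2] = odd
    return out

# Example: a circle sampled at 16 points; small details indicate good prediction.
t = np.linspace(0, 2 * np.pi, 16, endpoint=False)
contour = np.stack([np.cos(t), np.sin(t)], axis=1)
coarse, detail = analyze(contour)
print(np.allclose(synthesize(coarse, detail), contour))  # lossless round trip
```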

    3D Shape Segmentation with Projective Convolutional Networks

    This paper introduces a deep architecture for segmenting 3D objects into their labeled semantic parts. Our architecture combines image-based Fully Convolutional Networks (FCNs) and surface-based Conditional Random Fields (CRFs) to yield coherent segmentations of 3D shapes. The image-based FCNs are used for efficient view-based reasoning about 3D object parts. Through a special projection layer, FCN outputs are effectively aggregated across multiple views and scales, then projected onto the 3D object surfaces. Finally, a surface-based CRF combines the projected outputs with geometric consistency cues to yield coherent segmentations. The whole architecture (multi-view FCNs and CRF) is trained end-to-end. Our approach significantly outperforms the existing state-of-the-art methods on the currently largest segmentation benchmark (ShapeNet). Finally, we demonstrate promising segmentation results on noisy 3D shapes acquired from consumer-grade depth cameras. Comment: This is an updated version of our CVPR 2017 paper. We incorporated new experiments that demonstrate ShapePFCN performance under the case of consistent *upright* orientation and an additional input channel in our rendered images for encoding height from the ground plane (upright-axis coordinate values). Performance is improved in this setting.
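    A conceptual sketch of the multi-view aggregation step described above: per-view FCN label probabilities are looked up at the pixels that surface points project to and max-pooled across views. The array layout and the use of max-pooling are assumptions for illustration, not the paper's exact projection layer.

```python
import numpy as np

def aggregate_views(prob_maps: np.ndarray, pixel_of_point: np.ndarray,
                    visible: np.ndarray) -> np.ndarray:
    """Aggregate per-view label probabilities onto surface points.

    prob_maps:      (n_views, H, W, n_labels) per-pixel label probabilities
    pixel_of_point: (n_views, n_points, 2) integer (row, col) each point projects to
    visible:        (n_views, n_points) bool, whether the point is seen in that view

    A conceptual stand-in for the projection/aggregation step (here: max over
    views), not the paper's actual projection layer.
    """
    n_views, n_points = visible.shape
    n_labels = prob_maps.shape[-1]
    agg = np.zeros((n_points, n_labels))
    for v in range(n_views):
        rows, cols = pixel_of_point[v, :, 0], pixel_of_point[v, :, 1]
        probs = prob_maps[v, rows, cols]              # (n_points, n_labels)
        probs = np.where(visible[v][:, None], probs, 0.0)
        agg = np.maximum(agg, probs)                  # max-pool across views
    return agg

# Tiny example: 2 views, a 4x4 image, 3 surface points, 2 labels.
rng = np.random.default_rng(0)
prob_maps = rng.random((2, 4, 4, 2))
pixel_of_point = rng.integers(0, 4, size=(2, 3, 2))
visible = np.array([[True, True, False], [False, True, True]])
print(aggregate_views(prob_maps, pixel_of_point, visible))
```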

    Scalable wavelet-based coding of irregular meshes with interactive region-of-interest support

    This paper proposes a novel functionality in wavelet-based irregular mesh coding: interactive region-of-interest (ROI) support. The proposed approach enables the user to define arbitrary ROIs at the decoder side and to prioritize and decode these regions at arbitrarily high granularity levels. In this context, a novel adaptive wavelet transform for irregular meshes is proposed, which enables: 1) varying the resolution across the surface at arbitrarily fine granularity levels and 2) dynamic tiling, which adapts the tile sizes to the local sampling densities at each resolution level. The proposed tiling approach enables a rate-distortion-optimal distribution of rate across spatial regions. When limiting the highest-resolution ROI to the visible regions, the fine granularity of the proposed adaptive wavelet transform reduces the required amount of graphics memory by up to 50%. Furthermore, the required graphics memory for an arbitrarily small ROI becomes negligible compared to rendering without ROI support, independent of any tiling decisions. Random access is provided by a novel dynamic tiling approach, which proves to be particularly beneficial for large models of 10^6 to 10^7 vertices. The experiments show that the dynamic tiling introduces a limited lossless rate penalty compared to an equivalent codec without ROI support. Additionally, rate savings of up to 85% are observed while decoding ROIs of tens of thousands of vertices.
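    A simplified sketch of ROI-driven decoding: tiles whose bounding boxes intersect a user-defined ROI are assigned the finest resolution level, the rest stay coarse. The tile/ROI box representation is an assumption; the codec's dynamic tiling and rate-distortion optimization are omitted.

```python
from typing import Dict, Tuple

BBox = Tuple[float, float, float, float, float, float]  # (xmin, ymin, zmin, xmax, ymax, zmax)

def boxes_intersect(a: BBox, b: BBox) -> bool:
    """Axis-aligned box overlap test in 3D."""
    return all(a[i] <= b[i + 3] and b[i] <= a[i + 3] for i in range(3))

def resolution_per_tile(tile_bboxes: Dict[int, BBox], roi: BBox,
                        max_level: int, base_level: int = 0) -> Dict[int, int]:
    """Assign a decoding resolution to each tile: tiles intersecting the
    user-defined ROI get the finest level, the rest stay at the base level.
    A simplified stand-in for the ROI prioritization described in the abstract."""
    return {tid: (max_level if boxes_intersect(bb, roi) else base_level)
            for tid, bb in tile_bboxes.items()}

# Example: two tiles, ROI covering only the first.
tiles = {0: (0, 0, 0, 1, 1, 1), 1: (2, 0, 0, 3, 1, 1)}
roi = (0.5, 0.5, 0.5, 0.8, 0.8, 0.8)
print(resolution_per_tile(tiles, roi, max_level=5))  # {0: 5, 1: 0}
```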