
    BIUP3: Boundary Topological Invariant of 3D Objects Through Front Propagation at a Constant Speed

    Topological features constitute the highest level of abstraction in object representation, and the Euler characteristic is one of the most widely used topological invariants. Its computation rests mainly on three well-known mathematical formulae, which operate either on the boundary of the object or on the object as a whole. However, because digital objects are often non-manifold, none of the known formulae can correctly compute the genus of digital surfaces. In this paper, we show that a new topological surface invariant of 3D digital objects, called BIUP³, can be obtained through a special homeomorphic transform: front propagation at a constant speed. BIUP³ overcomes the theoretical weakness of the Euler characteristic and applies to both manifolds and non-manifolds. It can be computed efficiently through a virtual front propagation, leaving the images unaffected.
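
    For context, a minimal sketch (not from the paper) of the classical Euler characteristic χ = V − E + F for a triangle mesh, the invariant whose limitations on non-manifold digital objects motivate BIUP³:

```python
# chi = V - E + F computed from a list of triangles (vertex-index triples).
def euler_characteristic(faces):
    vertices, edges = set(), set()
    for a, b, c in faces:
        vertices.update((a, b, c))
        for u, v in ((a, b), (b, c), (c, a)):
            edges.add((min(u, v), max(u, v)))  # undirected edge, stored once
    return len(vertices) - len(edges) + len(faces)

# For a closed orientable manifold surface, genus = (2 - chi) / 2; on
# non-manifold digital surfaces this relation breaks down, which is the
# weakness the paper's invariant is designed to overcome.
tetrahedron = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
print(euler_characteristic(tetrahedron))  # -> 2, so genus 0
```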

    Blind Source Separation Applied to 3D Mesh Compression

    Abstract – Recent 3D mesh compression methods can be divided into three categories: polyhedral simplification, coding of the 3D positions and attributes, and connectivity coding. The work presented in this paper concerns the coding of the 3D positions (the geometry of the 3D mesh); its objective is to improve compression performance by reducing spatial correlation through a Blind Source Separation (BSS) method applied before entropy coding. The Eigenvalue Decomposition (EVD) method is adapted to operate on sub-blocks, which reduces the size of the transformation matrix and yields efficient compression.
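
    A speculative per-block EVD decorrelation of mesh vertex positions follows, one plausible reading of the sub-block scheme; the block size and the 3×3 per-block transform are assumptions, not the authors' exact layout:

```python
import numpy as np

def evd_decorrelate(positions, block=64):
    """positions: (N, 3) array. Returns decorrelated coordinates plus the
    per-block means and eigenbases a decoder would need to invert them."""
    out = np.empty(positions.shape, dtype=float)
    side_info = []
    for s in range(0, len(positions), block):
        x = positions[s:s + block].astype(float)
        mean = x.mean(axis=0)
        xc = x - mean
        cov = xc.T @ xc / max(len(x) - 1, 1)   # 3x3 covariance of the block
        _, v = np.linalg.eigh(cov)             # EVD: eigenvectors of covariance
        out[s:s + block] = xc @ v              # rotate into the eigenbasis
        side_info.append((mean, v))
    return out, side_info
```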

    Mesh compression: Theory and practice.

    Three-dimensional meshes (3D meshes, for short) are fast emerging as a media type, used in application domains as varied as engineering design, manufacturing, architecture, bio-informatics, medicine, entertainment, commerce, science, and defense. The volume of 3D mesh data circulating on the internet is growing rapidly, and such data is used as frequently as other media types such as text, audio (1D), images, and video (2D). Hence, 3D meshes need good processing and visualization methods. Moreover, these meshes are much larger than the other media types mentioned above and often exceed the memory and bandwidth available for their storage and transmission. Compression schemes for such large 3D meshes have therefore become a subject of intense study. Meshes are made up of either triangles or quadrilaterals: meshes made up of only triangles are called triangle meshes, and meshes made up of quadrilaterals are called quadrilateral meshes (quad meshes, for short). A mesh is described by specifying its geometry (vertex coordinates) and its connectivity (adjacencies of the triangles or quadrilaterals). Previous research on mesh compression has mostly targeted triangle meshes; quad meshes were traditionally handled by first triangulating them and then applying triangle mesh compression techniques. To avoid this additional triangulation step, a direct technique is proposed for compressing and decompressing the connectivity of quad meshes. This technique takes a quad mesh as input and encodes its connectivity as a sequence of opcodes from which the quad mesh can be restored by the decompression technique. A data structure called the EdgeTable is introduced to aid in the traversal of a quad mesh during compression. In addition, a technique based on constrained Delaunay triangulation is proposed for reconstructing the connectivity of a 2D mesh from its geometry and a minimum set of edges. Source: Masters Abstracts International, Volume: 44-03, page: 1393. Thesis (M.Sc.)--University of Windsor (Canada), 2005.
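
    The thesis introduces an EdgeTable to support quad-mesh traversal during compression; the sketch below is a hypothetical layout of such a structure, since its exact fields and the opcode vocabulary are not specified in the abstract:

```python
from collections import defaultdict

class EdgeTable:
    """Maps each undirected edge to the indices of its incident quads."""
    def __init__(self, quads):
        self.incident = defaultdict(list)
        for qi, quad in enumerate(quads):
            for k in range(4):
                u, v = quad[k], quad[(k + 1) % 4]
                self.incident[(min(u, v), max(u, v))].append(qi)

    def neighbor(self, edge, qi):
        """Quad across `edge` from quad `qi`, or None on a boundary."""
        others = [q for q in self.incident[edge] if q != qi]
        return others[0] if others else None

# A region-growing traversal over shared edges would emit one opcode per
# quad describing how it attaches to the already-encoded region.
```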

    LIDAR data classification and compression

    Airborne Laser Detection and Ranging (LIDAR) data has a wide range of applications in agriculture, archaeology, biology, geology, meteorology, the military, and transportation. LIDAR acquisition consumes hundreds of gigabytes in a typical day, and the amount of data collected will continue to grow as sensors improve in resolution and functionality. LIDAR data classification and compression are therefore very important for managing, visualizing, analyzing, and using this huge amount of data. Among existing LIDAR data classification schemes, supervised learning has been used and can reach up to 96% accuracy. However, some of the features used are not readily available, and training data is not always available in practice. In existing LIDAR data compression schemes, the compressed size can be 5%-23% of the original size, but it can still be on the order of gigabytes, which is impractical for many applications. The objectives of this dissertation are (1) to develop LIDAR classification schemes that can classify airborne LIDAR data more accurately without some of the features or training data that existing work requires, and (2) to explore lossy compression schemes that can compress LIDAR data at a much higher compression rate than is currently available. We first investigate two independent ways to classify LIDAR data, depending on the availability of training data: when training data is available, we use supervised machine learning techniques such as the support vector machine (SVM); when training data is not readily available, we develop an unsupervised classification method that classifies LIDAR data as well as supervised methods do. Experimental results show that the accuracy of our classification is over 99%. We then present two new lossy LIDAR data compression methods and compare their performance. The first is a wavelet-based compression scheme, while the second is geometry based. Our new geometry-based compression is a geometry- and statistics-driven LIDAR point-cloud compression method that combines application knowledge and scene content to enable fast transmission from the sensor platform while preserving the geometric properties of objects within a scene. The algorithm is based on the idea of compression by classification. It exploits the simplicity of the height function as well as the local spatial coherence and linearity of aerial LIDAR data, and it can automatically compress the data to the level of detail specified by the user. Either of the two classification methods can be used to automatically detect regions that are not locally linear, such as vegetation. In those regions, local statistical descriptors, such as the mean and variance, are stored to efficiently represent the region and to restore the geometry in the decompression phase. The new geometry-based compression schemes for building and ground data compress efficiently, significantly reducing file size while remaining a good fit for scalable "zoom in" requirements. Experimental results show that, compared with existing lossy LIDAR compression work, our proposed approach achieves a two-orders-of-magnitude lower bit rate at the same quality, making it feasible for applications that were not practical before. The ability to store the information in a database and query it efficiently becomes possible with the proposed highly efficient compression scheme. Includes bibliographical references (pages 106-116).
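
    A simplified sketch of the "compression by classification" idea: per tile, keep a fitted plane where the data is locally linear (ground, roofs) and only summary statistics where it is not (vegetation). The planarity test and threshold below are illustrative assumptions, not the dissertation's exact rules:

```python
import numpy as np

def compress_tile(points, planarity_threshold=0.05):
    """points: (N, 3) LIDAR returns in one tile -> compact record."""
    z = points[:, 2]
    design = np.c_[points[:, :2], np.ones(len(points))]
    coef, *_ = np.linalg.lstsq(design, z, rcond=None)  # fit z = ax + by + c
    rms = np.sqrt(np.mean((design @ coef - z) ** 2))
    if rms < planarity_threshold:
        return ("plane", coef)                # locally linear: 3 coefficients
    return ("stats", z.mean(), z.var())       # non-linear: keep statistics
```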

    Efficient Compression of Non-Manifold Polygonal Meshes

    We present a method for compressing non-manifold polygonal meshes, i.e., polygonal meshes with singularities, which occur very frequently in the real world. Most efficient polygonal compression methods currently available are restricted to manifold meshes: they require converting a non-manifold mesh to a manifold mesh and fail to retrieve the original model connectivity after decompression. The present method works by converting the original model to a manifold model, encoding the manifold model with an existing mesh compression technique, and, during decompression, clustering (stitching together) the vertices that were duplicated earlier, so as to faithfully recover the original connectivity. This paper focuses on efficiently encoding and decoding the stitching information. With a naive method, the stitching information would incur a prohibitive cost, whereas our method guarantees a worst-case cost of O(log m) bits per vertex replication, where m is the number of non-manifold vertices. Furthermore, when the adjacency between vertex replications is exploited, many replications can be encoded at insignificant cost. By interleaving the connectivity, stitching information, geometry, and properties, we avoid encoding repeated vertices (and the properties bound to them) multiple times; this reduces the size of the bit-stream by about 10% compared with encoding the model as a manifold.
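
    A back-of-the-envelope sketch of the O(log m) bound: a stitch record only has to name which of the m non-manifold vertices a duplicate belongs to, and a fixed-length index into a set of size m costs ⌈log₂ m⌉ bits. The encoding below is illustrative, not the paper's actual bitstream:

```python
import math

def bits_per_replication(m):
    """Worst-case bits to identify one of m non-manifold vertices."""
    return math.ceil(math.log2(m)) if m > 1 else 1

print(bits_per_replication(1000))  # -> 10 bits, vs. a full 32-bit vertex index
```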

    Compression of 3D models with NURBS

    With recent progress in computing, algorithmics and telecommunications, 3D models are increasingly used in various multimedia applications; examples include visualization, gaming, entertainment and virtual reality. In the multimedia domain, 3D models have traditionally been represented as polygonal meshes. This piecewise-planar representation can be thought of as the analogue of bitmap images for 3D surfaces. Like bitmap images, polygonal meshes enjoy great flexibility and are particularly well suited to describing information captured from the real world, for instance through scanning processes. They suffer, however, from the same shortcomings, namely limited resolution and large storage size. The compression of polygonal meshes has been a very active field of research in the last decade, and rather efficient compression algorithms have been proposed in the literature that greatly mitigate the high storage costs. However, such a low-level description of a 3D shape has bounded performance; more efficient compression should be reachable through the use of higher-level primitives.

    This idea has been explored to a great extent in the context of model-based coding of visual information. In such an approach, a higher-level representation (e.g., a 3D model of a talking head) is obtained through analysis methods before the visual information is compressed; this can be seen as an inverse projection problem. Once this task is fulfilled, the resulting parameters of the model are coded instead of the original information. It is believed that if the analysis module is efficient enough, the total cost of coding (in a rate-distortion sense) will be greatly reduced. The relatively poor performance and high complexity of currently available analysis methods (except in specific cases where a priori knowledge about the nature of the objects is available) have held back a large-scale deployment of coding techniques based on this approach. Progress in computer graphics has, however, changed this situation. Nowadays, an increasing amount of picture, video and 3D content is generated by synthesis processing rather than captured by a device such as a camera or a scanner. This means that the underlying model in the synthesis stage can be used for efficient coding without the need for a complex analysis module. In other words, it would be a mistake to compress a low-level description (e.g., a polygonal mesh) when a higher-level one is available from the synthesis process (e.g., a parametric surface). This is, however, what is usually done in the multimedia domain, where higher-level 3D model descriptions are converted to polygonal meshes, not least for lack of standard coded formats for the former.

    On a parallel but related path, the way we consume audio-visual information is changing. In contrast to the recent past and a large part of today's applications, interactivity is becoming a key element in the way we consume information. In the context of this dissertation, this means that when coding visual information (an image or a video, for instance), previously obvious decisions, such as the choice of sampling parameters, are not so obvious anymore. Indeed, since in an interactive environment the effective display resolution can be controlled by the user through zooming, there is no clear optimal setting for the sampling period. Because of interactivity, the representation used to code the scene should therefore allow the display of objects at a variety of resolutions, ideally up to infinity. One way to resolve this problem would be extensive over-sampling, but this approach is unrealistic and too expensive to implement in many situations. The alternative is a resolution-independent representation. In the realm of 3D modeling, such representations are usually available when the models are created by an artist on a computer.

    The scope of this dissertation is precisely the compression of 3D models in higher-level forms. Direct coding in such a form should yield improved rate-distortion performance while providing a large degree of resolution independence. There has been, so far, no major attempt to efficiently compress these representations, such as parametric surfaces; this thesis proposes a solution to fill this gap. A variety of higher-level 3D representations exist, among which parametric surfaces are a popular choice with designers. Within parametric surfaces, Non-Uniform Rational B-Splines (NURBS) enjoy great popularity, as a wide range of NURBS-based modeling tools is readily available. Recently, NURBS have been included in the Virtual Reality Modeling Language (VRML) and its next-generation descendant, eXtensible 3D (X3D). The nice properties of NURBS and their widespread use have led us to choose them as the coded representation.

    The primary goal of this dissertation is the definition of a system for coding 3D NURBS models with guaranteed distortion. The basis of the system is entropy-coded differential pulse code modulation (DPCM). In the case of NURBS, guaranteeing the distortion is not trivial, as some of the parameters (e.g., knots) have a complicated influence on the overall surface distortion. To this end, a detailed distortion analysis is performed; in particular, previously unknown relations between the distortion of knots and the resulting surface distortion are demonstrated. Compression efficiency is pursued at every stage, and simple yet efficient entropy coder realizations are defined. The special case of degenerate and closed surfaces with duplicate control points is addressed, and a simple yet efficient coding is proposed to compress the duplication relationships. Encoder aspects are also analyzed: optimal predictors are found that perform well across a wide class of models, and simplification techniques are considered for improved compression efficiency at negligible distortion cost. Transmission over error-prone channels is also considered and an error-resilient extension defined: the data stream is partitioned by independently coding small groups of surfaces and inserting the necessary resynchronization markers, and simple strategies for achieving the desired level of protection are proposed. The same extension also serves the purposes of random access and on-the-fly reordering of the data stream.
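
    A minimal DPCM sketch for NURBS control points, assuming the simplest previous-point predictor and uniform quantization; the thesis derives optimal predictors and ties the step size to a surface distortion bound, neither of which is reproduced here:

```python
import numpy as np

def dpcm_encode(control_points, step=1e-3):
    """control_points: (N, 3). Returns integer residuals for entropy coding."""
    residuals = np.empty(control_points.shape, dtype=np.int64)
    prediction = np.zeros(3)
    for i, p in enumerate(control_points):
        residuals[i] = np.round((p - prediction) / step)
        prediction = prediction + residuals[i] * step  # track decoder state
    return residuals

def dpcm_decode(residuals, step=1e-3):
    """Inverse: reconstructs each point to within step/2 per coordinate."""
    return np.cumsum(residuals * step, axis=0)
```

    Because the encoder predicts from the reconstructed (quantized) values rather than the originals, the quantization error stays bounded per point instead of accumulating, which is what makes a guaranteed-distortion design possible.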