
    Triangle mesh compression and homological spanning forests

    Three-dimensional triangle meshes are widely used to represent 3D objects in many applications. These meshes are usually surfaces that require a huge amount of resources when stored, processed, or transmitted. Many algorithms for efficient compression of such meshes have therefore been developed since the early 1990s. In this paper we propose a lossless method that compresses the connectivity of the mesh using a valence-driven approach. Our algorithm improves on currently available valence-driven methods: it handles triangular surfaces of arbitrary topology and, at the same time, encodes the topological information of the mesh using Homological Spanning Forests. In future work we plan to develop (geo-topological) image analysis and processing algorithms that work directly on the compressed data.
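    The intuition behind valence-driven coding is that vertex valences in a typical triangle mesh cluster tightly around 6, so a per-vertex valence symbol stream has low entropy and compresses well. The following minimal sketch (not the paper's algorithm, and the mesh data is hypothetical) counts valences and estimates the bits per vertex an ideal entropy coder would need:

    ```python
    # Minimal sketch of the valence-entropy idea behind valence-driven
    # connectivity coding. Illustrative only; not the paper's encoder.
    from collections import Counter
    import math

    def vertex_valences(faces):
        """Count how many triangles touch each vertex (valence proxy)."""
        counts = Counter()
        for a, b, c in faces:
            counts[a] += 1
            counts[b] += 1
            counts[c] += 1
        return counts

    def valence_entropy_bits(valences):
        """Shannon entropy of the valence symbols, in bits per vertex."""
        total = len(valences)
        hist = Counter(valences.values())
        return -sum(n / total * math.log2(n / total) for n in hist.values())

    # Toy closed mesh: a tetrahedron, where every vertex has valence 3.
    faces = [(0, 1, 2), (0, 2, 3), (0, 3, 1), (1, 2, 3)]
    val = vertex_valences(faces)
    bits = valence_entropy_bits(val)  # 0.0: one symbol, nothing to encode
    ```

    On a regular mesh the distribution is nearly a spike at valence 6, which is why valence-driven coders approach the theoretical connectivity-coding bound.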

    Adaptive coarse-to-fine quantization for optimizing rate-distortion of progressive mesh compression

    We propose a new connectivity-based progressive compression approach for triangle meshes. The key idea is to adapt the quantization precision to the resolution of each intermediate mesh so as to optimize the rate-distortion trade-off. This adaptation is determined automatically during the encoding process, and the overhead is efficiently encoded using geometric prediction techniques. We also introduce an optimization of the geometry coding that uses a bijective discrete rotation. Results show that our approach delivers a better rate-distortion behavior than both connectivity-based and geometry-based state-of-the-art compression methods.
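    The coarse-to-fine idea can be illustrated with uniform scalar quantization: a coarse intermediate mesh is already distorted, so few coordinate bits suffice, while the refined mesh warrants more bits. This is a generic sketch of bit-depth-dependent quantization error, not the paper's adaptation rule; the coordinate values are made up:

    ```python
    # Sketch: quantization error shrinks as bit depth grows, so coarse
    # LODs can use fewer bits without hurting overall rate-distortion.
    def quantize(coords, bits):
        """Uniformly quantize a list of coordinates to the given bit depth."""
        lo, hi = min(coords), max(coords)
        step = (hi - lo) / ((1 << bits) - 1)
        return [lo + round((c - lo) / step) * step for c in coords]

    def max_error(a, b):
        """Largest per-coordinate deviation after quantization."""
        return max(abs(x - y) for x, y in zip(a, b))

    coords = [0.0, 0.13, 0.5, 0.77, 1.0]          # hypothetical 1D coordinates
    coarse = max_error(coords, quantize(coords, 4))   # few bits: coarse LOD
    fine = max_error(coords, quantize(coords, 12))    # more bits: refined LOD
    ```

    An encoder exploiting this trades the bits saved at coarse resolutions against the distortion they would not have reduced anyway.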

    Low velocity impact modeling in composite laminates capturing permanent indentation

    This paper deals with modeling impact damage and permanent indentation. A numerical model has been developed to simulate the different damage types that develop during low-velocity/low-energy impact. The three main damage types, matrix cracking, fiber failure, and delamination, are simulated. Inter-laminar damage, i.e. interface delamination, is conventionally simulated using interface elements based on fracture mechanics. Intra-laminar damage, i.e. matrix cracking, is simulated using interface elements based on a failure criterion. Fiber failure is simulated through degradation of the volume elements. The originality of this model is that it simulates permanent indentation after impact with a "plastic-like" model introduced in the matrix-cracking elements. This approach is based on experimental observations showing matrix-cracking debris that blocks crack closure. Lastly, experimental validation demonstrates the model's satisfactory relevance in simulating impact damage. This acceptable match between experiment and modeling confirms the interest of the novel approach proposed in this paper for describing the physics behind permanent indentation.

    Mesh based Scene Evaluation Metrics for LOD and Simplification

    I present seven metrics to quantify attributes of different meshes in a scene. Each metric represents a different geometrical or topological aspect of the mesh. The resulting rating values serve to convey the underlying complex data to the user, allowing the user to swiftly compare several features of multiple meshes. The metrics may thus guide users and programs during mesh modification, i.e. optimization, simplification or smoothing, and during modification of the scene as a whole. I evaluate each metric individually by applying it to a sample scene. To examine the correctness and expressiveness of the metrics, I compare the automatically calculated ratings to the raw base data. I find two of the metrics to be immediately useful and four of the ratings promising but in need of adjustments. The last metric, however, requires significant rework to generate useful data on par with the other six. This thesis first introduces the subject with a motivating example. It then presents important concepts and related research. Afterwards it details the concept of the program and the mathematical considerations it is based on. It also describes my approach to solving the challenges that emerged during the implementation. Subsequently, the thesis focuses on the visualized output of the program and challenges said output. Finally, it contrasts the expectations and goals of each metric with the respective actual results.
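    A classic topological mesh metric of the kind described is the Euler characteristic V - E + F, which equals 2 for any closed genus-0 triangle mesh and drops by 2 per handle. This sketch is a generic example of such a metric, not one of the seven metrics from the thesis:

    ```python
    # Sketch of a topological mesh metric: the Euler characteristic.
    # For a closed, watertight triangle mesh of genus g, V - E + F = 2 - 2g.
    def euler_characteristic(faces):
        """Compute V - E + F from a list of triangles given as vertex triples."""
        verts = set()
        edges = set()
        for a, b, c in faces:
            verts.update((a, b, c))
            for u, v in ((a, b), (b, c), (c, a)):
                edges.add((min(u, v), max(u, v)))  # undirected edge
        return len(verts) - len(edges) + len(faces)

    # Tetrahedron: 4 vertices, 6 edges, 4 faces -> characteristic 2.
    tetra = [(0, 1, 2), (0, 2, 3), (0, 3, 1), (1, 2, 3)]
    chi = euler_characteristic(tetra)
    ```

    A rating built on such a quantity lets a user spot non-manifold or torn meshes at a glance, without inspecting the raw geometry.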

    T4DT: Tensorizing Time for Learning Temporal 3D Visual Data

    Unlike 2D raster images, there is no single dominant representation for 3D visual data processing. Different formats like point clouds, meshes, or implicit functions each have their strengths and weaknesses. Still, grid representations such as signed distance functions have attractive properties in 3D as well. In particular, they offer constant-time random access and are eminently suitable for modern machine learning. Unfortunately, the storage size of a grid grows exponentially with its dimension, so grids often exceed memory limits even at moderate resolution. This work explores various low-rank tensor formats, including the Tucker, tensor train, and quantics tensor train decompositions, to compress time-varying 3D data. Our method iteratively computes, voxelizes, and compresses each frame's truncated signed distance function and applies tensor rank truncation to condense all frames into a single, compressed tensor that represents the entire 4D scene. We show that low-rank tensor compression yields an extremely compact representation for storing and querying time-varying signed distance functions. It significantly reduces the memory footprint of 4D scenes while surprisingly preserving their geometric quality. Unlike existing iterative learning-based approaches like DeepSDF and NeRF, our method uses a closed-form algorithm with theoretical guarantees.
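    The core storage argument can be seen in the simplest case: a rank-1 2D grid factors exactly into two vectors, replacing n² stored values with 2n, and low-rank tensor formats generalize this to higher dimensions and ranks. This toy sketch (made-up separable data, not the paper's TSDF pipeline) demonstrates the exact-reconstruction and compression-ratio idea:

    ```python
    # Toy sketch of the low-rank storage idea: a separable (rank-1)
    # 2D grid is stored as two factor vectors instead of a full matrix.
    # Tensor-train formats extend this to 3D/4D grids and higher ranks.
    n = 64
    g = [1.0 / (1 + i) for i in range(n)]      # factor along x
    h = [float(j * j) for j in range(n)]       # factor along y

    # Full grid: n * n stored values.
    full = [[g[i] * h[j] for j in range(n)] for i in range(n)]

    # Factored form: only 2 * n stored values, with exact reconstruction.
    err = max(abs(full[i][j] - g[i] * h[j])
              for i in range(n) for j in range(n))
    compression = (n * n) / (2 * n)            # 32x fewer values here
    ```

    Real signed distance fields are not exactly low-rank, which is where rank truncation comes in: keeping only the leading terms trades a small, controllable reconstruction error for a large reduction in storage.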