59 research outputs found

    A Method to Reduce the Number of Triangles of a Mesh to Allow Its Fluent Visualization in a 3D Printer Control Panel

    This invention disclosure proposes a method for generating a reduced-size mesh from a print-resolution mesh received by a 3D printer. The printer receives the content to be printed as triangle meshes. To provide the required printing resolution, the number of triangles composing the 3D part can be very high, leading to high memory and processing requirements when manipulating the model. Because no limit is specified on the number of triangles the original mesh may contain, previewing the 3D model of the part on the printer control panel can make the user interface unresponsive and even degrade the performance of the printer firmware. This disclosure presents a method to generate a visualization mesh with a lower resolution than the original mesh received by the printer. By reducing the number of triangles in the meshes, the control panel can smoothly display the original shape and appearance of the 3D parts. In addition, these generated meshes are used to create smaller files that can be transferred between units (e.g. Printing Unit, Build Unit and Processing Station Unit).
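The disclosure does not specify the reduction technique. A minimal sketch of one common way to build such a preview mesh is uniform vertex clustering: snap vertices to a coarse grid, merge coincident ones, and drop triangles that collapse. The function name and the cell size are illustrative assumptions, not from the disclosure.

```python
def cluster_decimate(vertices, triangles, cell=1.0):
    """Snap vertices to a grid of size `cell`, merge coincident ones,
    and drop triangles that collapse to a line or point."""
    # Map each vertex to the representative of its grid cell.
    cell_of = {}
    remap = []
    new_vertices = []
    for (x, y, z) in vertices:
        key = (round(x / cell), round(y / cell), round(z / cell))
        if key not in cell_of:
            cell_of[key] = len(new_vertices)
            new_vertices.append((key[0] * cell, key[1] * cell, key[2] * cell))
        remap.append(cell_of[key])
    # Keep only triangles whose three corners remain distinct after merging.
    new_triangles = []
    for (a, b, c) in triangles:
        a, b, c = remap[a], remap[b], remap[c]
        if a != b and b != c and a != c:
            new_triangles.append((a, b, c))
    return new_vertices, new_triangles
```

Coarser cells give fewer triangles, trading visual fidelity for a responsive preview.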

    Managing geometric complexity in illumination computation for the public presentation of complex archaeological scenes

    For cultural heritage, more and more 3D objects are acquired using 3D scanners [Levoy 2000]. The resulting objects are very detailed, with great visual richness, but their geometric complexity requires specific rendering methods. We first show how to simplify these objects using a low-resolution mesh with associated normal maps [Boubekeur 2005] that encode the details. Using this representation, we show how to add global illumination with a grid-based, vector-based representation [Pacanowski 2005]. The grid efficiently captures low-frequency indirect illumination. We use 3D textures (for large objects) and 2D textures (for quasi-planar objects) to store a fixed set of irradiance vectors. These grids are built in a preprocessing step using almost any existing stochastic global illumination approach. During rendering, the indirect illumination within a grid cell is interpolated from its associated irradiance vectors, yielding a representation that is smooth everywhere. Furthermore, the vector-based representation offers additional robustness against local variations in the geometric properties of the scene.
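The per-cell interpolation step can be sketched as a trilinear blend of irradiance vectors stored at the lattice points of the grid. The dictionary-based grid layout and the helper names are assumptions for illustration; the paper's actual storage uses 2D/3D textures.

```python
import math

def lerp(a, b, t):
    """Component-wise linear interpolation between two vectors."""
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def sample_irradiance(grid, p):
    """Trilinearly interpolate the irradiance vectors stored at the eight
    corners of the cell containing point p = (x, y, z). `grid[(i, j, k)]`
    holds one irradiance vector per lattice point; p must lie strictly
    inside the grid."""
    i, j, k = (math.floor(c) for c in p)
    fx, fy, fz = (c - math.floor(c) for c in p)
    c00 = lerp(grid[(i, j, k)],     grid[(i+1, j, k)],     fx)
    c10 = lerp(grid[(i, j+1, k)],   grid[(i+1, j+1, k)],   fx)
    c01 = lerp(grid[(i, j, k+1)],   grid[(i+1, j, k+1)],   fx)
    c11 = lerp(grid[(i, j+1, k+1)], grid[(i+1, j+1, k+1)], fx)
    return lerp(lerp(c00, c10, fy), lerp(c01, c11, fy), fz)
```

Because neighbouring cells share corner vectors, the interpolated field is continuous across cell boundaries, which is what makes the representation smooth everywhere.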

    Knowledge-based out-of-core algorithms for data management in visualization

    Data management is the very first issue in handling very large datasets. Many existing out-of-core algorithms used in visualization are closely coupled with application-specific logic. This paper presents two knowledge-based out-of-core prefetching algorithms that do not use hard-coded rendering-related logic. They acquire knowledge of the access history and patterns dynamically, and adapt their prefetching strategies accordingly. We have compared the algorithms with a demand-based algorithm, as well as with a more domain-specific out-of-core algorithm. We carried out our evaluation in conjunction with an example application in which rendering multiple point sets in a volume scene graph puts great strain on the rendering algorithm in terms of memory management. Our results show that the knowledge-based approach offers a better cache-hit to disk-access trade-off. This work demonstrates that it is possible to build an out-of-core prefetching algorithm without depending on rendering-related application-specific logic. The knowledge-based approach has the advantage of being generic, efficient, flexible and self-adaptive.
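A toy sketch of history-driven prefetching: an LRU cache learns block-to-block transitions from the access stream and prefetches the most frequent successor of the current block. This is an illustrative reduction of the idea, not the paper's actual algorithm; all names are assumptions.

```python
from collections import defaultdict

class PrefetchingCache:
    def __init__(self, capacity, loader):
        self.capacity, self.loader = capacity, loader
        self.cache = {}            # block id -> data
        self.order = []            # LRU order, oldest first
        self.successors = defaultdict(lambda: defaultdict(int))
        self.prev = None
        self.hits = self.misses = 0

    def _load(self, block):
        """Bring a block into the cache, evicting the LRU block if full."""
        if block in self.cache:
            return
        if len(self.cache) >= self.capacity:
            victim = self.order.pop(0)
            del self.cache[victim]
        self.cache[block] = self.loader(block)
        self.order.append(block)

    def get(self, block):
        if block in self.cache:
            self.hits += 1
            self.order.remove(block)
            self.order.append(block)   # refresh LRU position
        else:
            self.misses += 1
            self._load(block)
        # Learn the observed transition, then prefetch the likeliest successor.
        if self.prev is not None:
            self.successors[self.prev][block] += 1
        nxt = self.successors[block]
        if nxt:
            self._load(max(nxt, key=nxt.get))
        self.prev = block
        return self.cache[block]
```

On a repeating access pattern the predictor quickly converges, so later accesses hit the cache even though no rendering-specific knowledge was hard-coded.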

    Parallel extraction and simplification of large isosurfaces using an extended tandem algorithm

    In order to deal with the common trend of size increase in volumetric datasets, research in isosurface extraction has in recent years focused on related aspects such as surface simplification and load-balanced parallel algorithms. We present a parallel, block-wise extension of the tandem algorithm by Attali et al., which simplifies an isosurface on the fly as it is extracted. Our approach minimizes overall memory consumption using an adequate block splitting and merging strategy, along with a component dumping mechanism that drastically reduces the amount of memory needed for particular datasets, such as those encountered in geophysics. As soon as they are detected, surface components are migrated to disk along with a meta-data index (oriented bounding box, volume, etc.) that enables improved exploration scenarios (for instance, small-component removal or selection of components with a particular orientation). For ease of implementation, we carefully describe a master-and-worker architecture that clearly separates the four required basic tasks. We show several results of our parallel algorithm applied to a geophysical dataset of size 7000 × 1600 × 2000.
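The component-dumping idea can be sketched offline: partition a triangle soup into connected components (here via union-find over shared vertices, an assumed implementation detail) and attach light metadata, such as a bounding box, that later supports filtered exploration without reloading the geometry. An axis-aligned box stands in for the paper's oriented one.

```python
def find(parent, x):
    """Union-find root lookup with path halving."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def split_components(vertices, triangles):
    """Group triangles into connected components and record, per
    component, an axis-aligned bounding box as dump metadata."""
    parent = list(range(len(vertices)))
    for a, b, c in triangles:
        for u, v in ((a, b), (b, c)):
            ru, rv = find(parent, u), find(parent, v)
            parent[ru] = rv
    groups = {}
    for tri in triangles:
        groups.setdefault(find(parent, tri[0]), []).append(tri)
    components = []
    for tris in groups.values():
        ids = {i for t in tris for i in t}
        pts = [vertices[i] for i in ids]
        bbox = (tuple(map(min, zip(*pts))), tuple(map(max, zip(*pts))))
        components.append({"triangles": tris, "bbox": bbox})
    return components
```

In the streaming setting of the paper, a component is written to disk with its metadata as soon as it closes, which is what keeps the in-core footprint small.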

    Connect-and-Slice: a hybrid approach for reconstructing 3D objects

    Converting point clouds generated by laser scanning, multi-view stereo imagery or depth cameras into compact polygon meshes is a challenging problem in vision. Existing methods are either robust to imperfect data or scalable, but rarely both. In this paper, we address this issue with a hybrid method that successively connects and slices planes detected from 3D data. The core idea consists in constructing an efficient and compact partitioning data structure. The latter is (i) spatially adaptive, in the sense that a plane slices only a restricted number of relevant planes, and (ii) composed of components with different structural meanings resulting from a preliminary analysis of the plane connectivity. Our experiments on a variety of objects and sensors show the versatility of our approach as well as its competitiveness with respect to existing methods.
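The spatially-adaptive part can be illustrated with a simple candidate filter: a detected plane is only allowed to slice planes whose inlier regions are nearby. Here "nearby" is approximated with inflated axis-aligned bounding boxes of the inlier points, which is an assumption for illustration rather than the paper's exact criterion.

```python
def bbox(points, pad=0.0):
    """Axis-aligned bounding box of 3D points, inflated by `pad`."""
    lo = tuple(min(p[i] for p in points) - pad for i in range(3))
    hi = tuple(max(p[i] for p in points) + pad for i in range(3))
    return lo, hi

def boxes_overlap(a, b):
    """True if two (lo, hi) boxes intersect on every axis."""
    return all(a[0][i] <= b[1][i] and b[0][i] <= a[1][i] for i in range(3))

def slicing_pairs(plane_inliers, pad=0.1):
    """Return index pairs (i, j) of planes allowed to slice each other,
    given the inlier point set of each detected plane."""
    boxes = [bbox(pts, pad) for pts in plane_inliers]
    return [(i, j)
            for i in range(len(boxes))
            for j in range(i + 1, len(boxes))
            if boxes_overlap(boxes[i], boxes[j])]
```

Restricting slicing to such pairs keeps the partition far smaller than the exhaustive arrangement of all plane pairs, which is the source of the method's scalability.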

    Watertight and 2-Manifold Surface Meshes Using Dual Contouring With Tetrahedral Decomposition of Grid Cubes

    The Dual Contouring (DC) algorithm is a grid-based process used to generate surface meshes from volumetric data. The advantage of DC is that it can reproduce sharp features by inserting vertices anywhere inside a grid cube, as opposed to the Marching Cubes (MC) algorithm, which can insert vertices only on grid edges. However, DC cannot guarantee 2-manifold, watertight meshes because it produces only one vertex for each grid cube. We present a modified Dual Contouring algorithm that overcomes this limitation. Our method decomposes an ambiguous grid cube into at most twelve tetrahedral cells, and we introduce novel polygon generation rules that produce 2-manifold and watertight surface meshes. We have applied the proposed method to realistic data, and a comparison of its results with those of traditional DC shows its effectiveness.
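One way to realize a twelve-tetrahedron decomposition of a cube is to split each of the six faces into two triangles and connect them to the cube centre; the sketch below enumerates that decomposition. This is an assumed construction for illustration; the paper's per-cube decomposition rules for ambiguous configurations are not reproduced here.

```python
import itertools

def cube_tetrahedra():
    """Decompose the unit cube into 12 tetrahedra: each face is split
    into two triangles, each joined to the cube centre."""
    corners = list(itertools.product((0.0, 1.0), repeat=3))
    centre = (0.5, 0.5, 0.5)
    tets = []
    for axis in range(3):
        for side in (0.0, 1.0):
            # The four corners of this face, reordered into a boundary cycle.
            quad = [c for c in corners if c[axis] == side]
            quad = [quad[0], quad[1], quad[3], quad[2]]
            tets.append((quad[0], quad[1], quad[2], centre))
            tets.append((quad[0], quad[2], quad[3], centre))
    return tets
```

Each tetrahedron has volume 1/12, so the twelve cells exactly tile the cube, which is the property a per-cell contouring pass relies on.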

    XOR-Based Compact Triangulations

    Media, image processing, and geometry-based systems and applications need data structures to model and represent different geometric entities and objects. These data structures have to be time-efficient and compact in terms of space. Many structures in use have been proposed to satisfy these constraints. This paper introduces a novel compact data structure inspired by XOR-linked lists. The paper concerns triangular data structures, but the underlying idea could be used for any other geometric subdivision. The ability of the bitwise XOR operator to reduce the number of references is used to encode triangle and vertex references. Using XOR-combined references requires a context from which the triangle is accessed, so direct access to an arbitrary triangle is not possible with the XOR-linked scheme alone. To allow direct access, additional information is added to the structure; it permits constant-time access to any element of the triangulation through a local resolution scheme. This information adds a cost to the triangulation, but a net gain is still maintained: the cost is reduced by attaching the additional information to a local sub-triangulation rather than to each triangle. Sub-triangulations are computed implicitly according to the catalog-based structure. The approach could easily be extended to other representation models, such as vertex-based or edge-based structures. The results are very promising: the theoretical gain is estimated at 38%, and the practical gain measured on sample benchmarks is about 34%.
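The XOR trick the structure builds on stores, per element, a single field equal to the XOR of two neighbour references; given one neighbour, the other is recovered by XORing again. This minimal XOR-linked list over non-zero integer ids shows the mechanism, including why a context (the previous element) is needed to walk the structure.

```python
def build_xor_links(ids):
    """Return {id: prev_id ^ next_id}, using 0 as the null reference.
    One stored field replaces two explicit neighbour references."""
    links = {}
    for i, node in enumerate(ids):
        prev_id = ids[i - 1] if i > 0 else 0
        next_id = ids[i + 1] if i + 1 < len(ids) else 0
        links[node] = prev_id ^ next_id
    return links

def traverse(links, first):
    """Walk forward from `first`: each next id is links[cur] ^ prev,
    so traversal only works relative to a known predecessor (the
    'context' the compact triangulation must also maintain)."""
    out, prev, cur = [], 0, first
    while cur != 0:
        out.append(cur)
        prev, cur = cur, links[cur] ^ prev
    return out
```

The triangulation applies the same idea to triangle and vertex references, halving reference storage at the price of context-dependent access.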

    Efficient Decimation of Polygonal Models Using Normal Field Deviation

    A simple and robust greedy algorithm is proposed for efficient, quality-preserving decimation of polygonal models. The performance of a simplification algorithm depends on how it measures the local geometric deviation caused by a local decimation operation. Since the normal field of a surface plays a key role in its visual appearance, the local normal field deviation is exploited in a novel way to define a new measure of geometric fidelity. This measure can automatically identify and preserve the salient features of a surface model. The resulting algorithm is simple to implement, produces approximations of better quality, and is efficient in running time. Subjective and objective comparisons validate these claims. The algorithm is suitable for applications that require a better speed-quality trade-off and where simplification is used as a processing step within other algorithms.
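A sketch of a normal-field-deviation cost: the cost of moving a vertex (as an edge collapse would) is the largest rotation its incident triangle normals undergo. The paper's exact cost formulation is not reproduced; this only illustrates the principle that tangential moves are cheap while feature-destroying moves are expensive.

```python
import math

def normal(a, b, c):
    """Unit normal of triangle (a, b, c)."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    length = math.sqrt(sum(x * x for x in n))
    return [x / length for x in n]

def deviation_cost(vertices, faces, vid, new_pos):
    """Max normal rotation (radians) over triangles incident to `vid`
    if that vertex were moved to `new_pos`."""
    moved = list(vertices)
    moved[vid] = new_pos
    worst = 0.0
    for f in faces:
        if vid in f:
            n0 = normal(*(vertices[i] for i in f))
            n1 = normal(*(moved[i] for i in f))
            dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(n0, n1))))
            worst = max(worst, math.acos(dot))
    return worst
```

A greedy decimator repeatedly applies the cheapest operation under such a cost, which is how sharp features end up preserved automatically: operations that would rotate normals near creases are expensive and deferred.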

    Coarse-to-fine approximation of range images with bounded error adaptive triangular meshes

    Copyright 2007 Society of Photo-Optical Instrumentation Engineers. One print or electronic copy may be made for personal use only. Systematic reproduction and distribution, duplication of any material in this paper for a fee or for commercial purposes, or modification of the content of the paper are prohibited.

    A new technique is presented for approximating range images with adaptive triangular meshes while ensuring a user-defined approximation error. The technique is based on an efficient coarse-to-fine refinement algorithm that avoids iterative optimization stages. The algorithm first maps the pixels of the given range image to 3D points defined in a curvature space. Those points are then tetrahedralized with a 3D Delaunay algorithm. Finally, an iterative process starts digging up the convex hull of the obtained tetrahedralization, progressively removing the triangles that do not fulfill the specified approximation error, which is assessed in the original 3D space. The introduction of the curvature space makes it possible to approximate both convex and non-convex object surfaces with adaptive triangular meshes, thus improving on previous coarse-to-fine sculpturing techniques. The proposed technique is evaluated on real range images and compared with two simplification techniques that also ensure a user-defined approximation error: a fine-to-coarse approximation algorithm based on iterative optimization (Jade) and an optimization-free, fine-to-coarse algorithm (Simplification Envelopes). This work has been partially supported by the Spanish Ministry of Education and Science under projects TRA2004-06702/AUT and DPI2004-07993-C03-03. The first author was supported by the Ramón y Cajal Program.
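The bounded-error, coarse-to-fine principle is easy to see in one dimension: start from the coarsest approximation and refine only where the user-defined error bound is violated. This greedy polyline refinement is a 1D analogue of the paper's mesh sculpting, not the algorithm itself.

```python
def refine_polyline(samples, max_error):
    """Approximate `samples` (heights at unit spacing) with a subset of
    indices such that linear interpolation between kept indices deviates
    at most `max_error` everywhere."""
    kept = [0, len(samples) - 1]
    changed = True
    while changed:
        changed = False
        for a, b in list(zip(kept, kept[1:])):
            # Find the worst-approximated sample inside segment (a, b).
            worst_i, worst_e = None, max_error
            for i in range(a + 1, b):
                t = (i - a) / (b - a)
                approx = samples[a] + t * (samples[b] - samples[a])
                err = abs(samples[i] - approx)
                if err > worst_e:
                    worst_i, worst_e = i, err
            if worst_i is not None:
                kept.append(worst_i)   # refine only where the bound fails
                changed = True
        kept.sort()
    return kept
```

Flat regions stay coarse while sharp features attract refinement, mirroring how the sculpting process keeps triangles large wherever the error bound already holds.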