    Easy Integral Surfaces: A Fast, Quad-based Stream and Path Surface Algorithm

    A fast, quad-based stream and path surface algorithm.

    ℓ_1-Based Construction of Polycube Maps from Complex Shapes

    Polycube maps of triangle meshes have proved useful in a wide range of applications, including texture mapping and hexahedral mesh generation. However, constructing a low-distortion polycube from a detailed surface, either fully automatically or with limited user control, remains challenging in practice. We propose a variational method for deforming an input triangle mesh into a polycube shape through minimization of the ℓ_1-norm of the mesh normals, regularized via an as-rigid-as-possible volumetric distortion energy. Unlike previous work, our approach makes no assumptions about the orientation of the input model or the presence of features in it. User-guided control over the resulting polycube map is also offered to increase design flexibility. We demonstrate the robustness, efficiency, and controllability of our method on a variety of examples, and explore applications in hexahedral remeshing and quadrangulation.
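
    The variational energy this abstract describes can be written compactly; the notation below (face areas A_f, regularization weight λ) is our assumption, not the paper's. The intuition is that for a unit normal, ‖n‖_1 ≥ ‖n‖_2 = 1 with equality exactly when n is axis-aligned, so minimizing the total ℓ_1 mass of the normals drives faces toward the six polycube directions while the ARAP term keeps the deformation low-distortion.

```latex
% Hedged sketch of the energy described above; A_f and \lambda are assumed notation.
\[
E(\mathbf{V}) \;=\;
\underbrace{\sum_{f \in F} A_f \,\bigl\lVert \mathbf{n}_f(\mathbf{V}) \bigr\rVert_1}_{\text{polycube alignment}}
\;+\;
\lambda\, E_{\mathrm{ARAP}}(\mathbf{V})
\]
```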

    Error-Bounded and Feature Preserving Surface Remeshing with Minimal Angle Improvement

    The typical goal of surface remeshing is to find a mesh that is (1) geometrically faithful to the original geometry, (2) as coarse as possible to obtain a low-complexity representation, and (3) free of bad elements that would hamper the desired application. In this paper, we design an algorithm that addresses all three optimization goals simultaneously. The user specifies desired bounds on the approximation error δ, the minimal interior angle θ, and the maximum mesh complexity N (number of vertices). Since such a desired mesh might not even exist, our optimization framework treats only the approximation error bound δ as a hard constraint and the other two criteria as optimization goals. More specifically, we iteratively perform carefully prioritized local operators whenever they improve the mesh without violating the approximation error bound. In this way our optimization framework greedily searches for the coarsest mesh with minimal interior angle above θ and approximation error bounded by δ. Fast runtime is enabled by a local approximation error estimation, while implicit feature preservation is obtained by specifically designed vertex relocation operators. Experiments show that our approach delivers high-quality meshes with implicitly preserved features and a better balance between geometric fidelity, mesh complexity, and element quality than the state-of-the-art. Comment: 14 pages, 20 figures. Submitted to IEEE Transactions on Visualization and Computer Graphics.
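
    A minimal sketch of the greedy driver this abstract outlines, under an assumed operator interface (priority, apply, approx_error, quality, all hypothetical names): the error bound δ is enforced as the only hard constraint, while the angle and complexity goals are pursued greedily through a priority queue. This is the control flow only, not the authors' code.

```python
import heapq
from itertools import count

def greedy_remesh(mesh, delta, theta_min, n_max, operators):
    """Sketch of the prioritized greedy loop: apply a local operator only
    if the result stays within the error bound delta (hard constraint)
    and improves the angle/complexity goals (soft criteria)."""
    tie = count()  # tiebreaker so the heap never compares operator objects
    heap = [(op.priority(mesh), next(tie), op) for op in operators]
    heapq.heapify(heap)
    while heap:
        _, _, op = heapq.heappop(heap)
        candidate = op.apply(mesh)          # e.g. collapse, flip, relocate
        if candidate is None:
            continue                        # operator not applicable here
        within_bound = candidate.approx_error() <= delta
        improves = candidate.quality(theta_min, n_max) > mesh.quality(theta_min, n_max)
        if within_bound and improves:
            mesh = candidate
            for other in operators:         # affected operators get new priorities
                heapq.heappush(heap, (other.priority(mesh), next(tie), other))
    return mesh
```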

    HexBox: Interactive Box Modeling of Hexahedral Meshes

    We introduce HexBox, an intuitive modeling method and interactive tool for creating and editing hexahedral meshes. HexBox brings the major and widely validated paradigm of surface box modeling into the world of hex meshing. The main idea is to let the user box-model a volumetric mesh by primarily modifying its surface through a set of topological and geometric operations. We support, in particular, local and global subdivision, various instantiations of extrusion, removal, and cloning of elements, the creation of non-conformal or conformal grids, as well as shape modifications through vertex positioning, including manual editing, automatic smoothing, or, optionally, projection onto an externally provided target surface. At the core of the efficient implementation of the method is the coherent maintenance, at all steps, of two parallel data structures: a hexahedral mesh representing the topology and geometry of the currently modeled shape, and a directed acyclic graph that connects operation nodes to the affected mesh hexahedra. Operations are realized by exploiting recent advancements in grid-based meshing, such as mixing of 3-refinement, 2-refinement, and face-refinement, and by using templated topological bridges to enforce on-the-fly mesh conformity across pairs of adjacent elements. A direct-manipulation user interface lets users control all operations. The effectiveness of our tool, released as open source to the community, is demonstrated by modeling several complex shapes that are hard to realize with competing tools and techniques.
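
    The two parallel data structures the abstract mentions can be sketched as follows; every name here is hypothetical, intended only to show how a history DAG of operation nodes can point into the current hexahedral mesh.

```python
from dataclasses import dataclass, field

@dataclass
class OpNode:
    """One modeling operation (extrusion, subdivision, cloning, ...) in
    the history DAG, linked to the hexahedra it produced."""
    kind: str
    params: dict
    out_hexa: list                  # ids of hexahedra this operation created
    parents: list = field(default_factory=list)   # nodes whose output it consumed

@dataclass
class HexBoxState:
    """The two parallel structures: the current hex mesh plus the operation DAG."""
    verts: dict                     # vertex id -> (x, y, z) position
    hexa: dict                      # hex id -> its 8 vertex ids
    dag: list = field(default_factory=list)

    def record(self, node: OpNode) -> None:
        # Append the operation to the history; the actual topological edit
        # (refinement templates, topological bridges) is elided in this sketch.
        self.dag.append(node)
```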

    Low-discrepancy point sampling of 2D manifolds for visual computing

    Point distributions are used to sample surfaces for a wide variety of applications within the fields of graphics and computational geometry, such as point-based graphics, remeshing, and area/volume measurement. The quality of such point distributions is important, and quality criteria are often application dependent. Common quality criteria include visual appearance, an even distribution whilst avoiding aliasing and other artifacts, and minimisation of the number of points required to accurately sample a surface. Previous work suggests that discrepancy measures the uniformity of a point distribution, and hence a point distribution of minimal discrepancy is expected to be of high quality. We investigate discrepancy as a measure of sampling quality, and present a novel approach for generating low-discrepancy point distributions on parameterised surfaces. Our approach converts the 2D sampling problem into a 1D problem by adaptively mapping a space-filling curve onto the surface. A 1D sequence is then generated and used to sample the surface along the curve. The sampling process takes into account the parametric mapping, employing a corrective approach similar to histogram equalisation, to ensure that it gives a 2D low-discrepancy point distribution on the surface. The local sampling density can be controlled by a user-defined density function, e.g. to preserve local features or to achieve desired data reduction rates. Experiments show that our approach efficiently generates low-discrepancy distributions on arbitrary parametric surfaces, with results nearly as good as popular low-discrepancy sampling methods designed for particular surfaces such as planes and spheres.
    We develop a generalised notion of the standard discrepancy measure, which considers a broader set of sample shapes used to compute the discrepancy. In this more thorough testing, our sampling approach produces results superior to popular distributions. We also demonstrate that the point distributions produced by our approach closely adhere to the blue noise criterion, in contrast to the popular low-discrepancy methods tested, which show high levels of structure, undesirable for visual representation.
    Furthermore, we present novel sampling algorithms to generate low-discrepancy distributions on triangle meshes. To sample the mesh, it is cut into a disc topology and a parameterisation is generated. Our sampling algorithm can then be used to sample the parameterised mesh, using robust methods for computing discrete differential properties of the surface. After these pre-processing steps, the sampling density can be adjusted in real time. Experiments also show that our sampling approach can accurately resample existing meshes with low discrepancy, with error rates when reducing mesh complexity as good as the best results in the literature.
    We present three applications of our mesh sampling algorithm. We first describe a point-based graphics sampling approach, which includes a global hole-filling algorithm. We investigate the coverage of sample discs for this approach, demonstrating results superior to random sampling and a popular low-discrepancy method. Moreover, we develop level-of-detail and view-dependent rendering approaches, providing very fine-grained density control with distance and angle, and silhouette enhancement. We further discuss a triangle-based remeshing technique, producing high-quality, topologically unaltered meshes. Finally, we describe a complete framework for sampling and painting engineering prototype models. This approach provides density control according to surface texture, and gives full dithering control of the point sample distribution. Results exhibit high-quality point distributions for painting that are invariant to surface orientation or complexity.
    The main contributions of this thesis are novel algorithms to generate high-quality, density-controlled point distributions on parametric surfaces and triangular meshes. Qualitative assessment, discrepancy measures, and blue noise criteria show their high sampling quality in general. We introduce generalised discrepancy measures which indicate that the sampling quality of our approach is superior to other low-discrepancy sampling techniques. Moreover, we present novel approaches to remeshing, point-based rendering, and robotic painting of prototypes by adapting our sampling algorithms, and demonstrate the overall good quality of the results for these specific applications.
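
    The pipeline this abstract describes lends itself to a compact sketch: generate a 1D low-discrepancy sequence (a van der Corput sequence here), warp it by the inverse CDF of an area-corrected density along the space-filling curve (the histogram-equalisation-style correction), and map the result onto the surface. This is a hedged illustration, not the thesis implementation; `density_along_curve` and `surface_point` are hypothetical callables supplied by the caller.

```python
import numpy as np

def van_der_corput(n, base=2):
    """Standard 1D low-discrepancy sequence in [0, 1)."""
    seq = np.zeros(n)
    for i in range(n):
        f, k, x = 1.0, i + 1, 0.0
        while k > 0:
            f /= base
            x += f * (k % base)
            k //= base
        seq[i] = x
    return seq

def sample_surface(n, density_along_curve, surface_point, resolution=4096):
    """Hypothetical sketch: warp a 1D low-discrepancy sequence by the
    inverse CDF of the density along a space-filling curve, then map
    the warped parameters onto the surface."""
    t = np.linspace(0.0, 1.0, resolution)
    w = density_along_curve(t)        # user density times parametric area term
    cdf = np.cumsum(w)
    cdf /= cdf[-1]                    # normalize: histogram-equalisation step
    u = van_der_corput(n)
    t_samples = np.interp(u, cdf, t)  # invert the CDF
    return [surface_point(ti) for ti in t_samples]
```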

    3D photogrammetric data modeling and optimization for multipurpose analysis and representation of Cultural Heritage assets

    This research deals with the issues concerning the processing, management, and representation for further dissemination of the large amount of 3D data that can today be acquired and stored with modern geomatic techniques of 3D metric survey. In particular, this thesis focuses on the optimization process applied to 3D photogrammetric data of Cultural Heritage assets. Modern geomatic techniques enable the acquisition and storage of large amounts of data with high metric and radiometric accuracy and precision, also in the very close range field, and the processing of very detailed 3D textured models. Nowadays, the photogrammetric pipeline has well-established potential and is considered one of the principal techniques for producing detailed 3D textured models at low cost. The potential offered by high-resolution textured 3D models is today well known, and such representations are a powerful tool for many multidisciplinary purposes, at different scales and resolutions, from documentation, conservation, and restoration to visualization and education. For example, their sub-millimetric precision makes them suitable for scientific studies of geometry and materials (i.e. for structural and static tests, for planning restoration activities, or for historical sources); their high fidelity to the real object and their navigability make them optimal for web-based visualization and dissemination applications. Thanks to improvements in new visualization standards, they can easily be used as a visualization interface linking different kinds of information in a highly intuitive way. Furthermore, many museums today look for more interactive exhibitions that may heighten visitors' emotions, and many recent applications make use of 3D contents (i.e. in virtual or augmented reality applications and through virtual museums). What all of these applications have to deal with is the difficulty of managing the large amount of data that has to be represented and navigated. Indeed, reality-based models have very heavy file sizes (up to tens of GB), which makes them difficult to handle on common and portable devices, to publish on the internet, or to manage in real-time applications. Even though recent advances produce ever more sophisticated and capable hardware and internet standards, empowering the ability to easily handle, visualize, and share such contents, other research aims at defining a common pipeline for the generation and optimization of 3D models with a reduced number of polygons that are nevertheless able to satisfy detailed radiometric and geometric requirements. This thesis fits into this scenario and focuses on the 3D modeling process of photogrammetric data aimed at easy sharing and visualization. In particular, this research tested a 3D model optimization process that aims at the generation of Low Poly models, with very low byte file size, processed starting from the data of High Poly ones, yet offering a level of detail comparable to the original models. To do this, several tools borrowed from the game industry and game engines have been used. For this test, three case studies were chosen: a modern sculpture by a contemporary Italian artist, a Roman marble statue preserved in the Civic Archaeological Museum of Torino, and the frieze of the Augustus arch preserved in the city of Susa (Piedmont, Italy).
    All the test cases were surveyed by means of a close-range photogrammetric acquisition, and three highly detailed 3D models were generated by means of a Structure from Motion and image matching pipeline. On the final High Poly models, different optimization and decimation tools were tested with the aim of evaluating the quality of the information that can be extracted from the final optimized models in comparison to that of the original High Poly ones. This study showed how tools borrowed from computer graphics offer great potential also in the Cultural Heritage field. This application may in fact meet the needs of multipurpose and multiscale studies, using different levels of optimization, and the procedure could be applied to different kinds of objects, with a variety of sizes and shapes, as well as to multiscale and multisensor data, such as buildings, architectural complexes, data from UAV surveys, and so on.
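
    As a hedged stand-in for the High Poly to Low Poly step described above (the thesis uses tools borrowed from the game industry, which are not shown here), quadric edge-collapse decimation in the open-source Open3D library illustrates the kind of reduction involved; the file names and target triangle count below are illustrative only.

```python
import open3d as o3d

# Illustrative stand-in for the optimization step: reduce a dense
# photogrammetric mesh to a Low Poly version via quadric decimation.
high = o3d.io.read_triangle_mesh("statue_high_poly.ply")   # hypothetical input
low = high.simplify_quadric_decimation(target_number_of_triangles=50000)
low.compute_vertex_normals()                               # for shaded preview
o3d.io.write_triangle_mesh("statue_low_poly.ply", low)
```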

    Geodesic Convolutional Shape Optimization

    Aerodynamic shape optimization has many industrial applications. Existing methods, however, are so computationally demanding that typical engineering practice is either to simply try a limited number of hand-designed shapes or to restrict oneself to shapes that can be parameterized using only a few degrees of freedom. In this work, we introduce a new way to optimize complex shapes quickly and accurately. To this end, we train Geodesic Convolutional Neural Networks to emulate a fluid dynamics simulator. The key to making this approach practical is remeshing the original shape using a polycube map, which makes it possible to perform the computations on GPUs instead of CPUs. The neural net is then used to formulate an objective function that is differentiable with respect to the shape parameters, which can then be optimized using a gradient-based technique. This outperforms state-of-the-art methods by 5 to 20% on standard problems and, even more importantly, our approach applies to cases that previous methods cannot handle.
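
    The optimization loop the abstract describes reduces to gradient descent through a frozen surrogate network. The PyTorch sketch below is an assumption-laden illustration, not the authors' code: `surrogate` stands for the trained GCNN emulator and is hypothetical, as are the objective (predicted drag) and the hyperparameters.

```python
import torch

def optimize_shape(surrogate, params0, steps=500, lr=1e-2):
    """Hedged sketch: refine shape parameters by gradient descent
    through a frozen surrogate that emulates the fluid simulator."""
    params = params0.detach().clone().requires_grad_(True)
    optimizer = torch.optim.Adam([params], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        drag = surrogate(params)   # differentiable objective, e.g. predicted drag
        drag.backward()            # gradients flow back to the shape parameters
        optimizer.step()
    return params.detach()
```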