
    High-Quality Simplification and Repair of Polygonal Models

    Because of the rapid evolution of 3D acquisition and modelling methods, highly complex and detailed polygonal models with constantly increasing polygon counts are used as three-dimensional geometric representations of objects in computer graphics and engineering applications. This representation is arguably the most widespread one owing to its simplicity, flexibility and rendering support by 3D graphics hardware. Polygonal models are used for rendering objects in a broad range of disciplines such as medical imaging, scientific visualization, computer-aided design and the film industry. Handling huge scenes composed of these high-resolution models rapidly approaches the computational capabilities of any graphics accelerator. In order to cope with this complexity and to build level-of-detail representations, concentrated efforts have been dedicated in recent years to the development of new mesh simplification methods that produce high-quality approximations of complex models by reducing the number of polygons in the surface while preserving the overall shape, volume and boundaries as much as possible. Many well-established methods and applications require "well-behaved" models as input. Degenerate or incorrectly oriented faces, T-joints, cracks and holes are just a few of the possible degeneracies that are often disallowed by various algorithms. Unfortunately, it is all too common to find polygonal models that, due to incorrect modelling or acquisition, contain such artefacts. Applications that may require "clean" models include finite element analysis, surface smoothing, model simplification and stereolithography. Mesh repair is the task of removing artefacts from a polygonal model in order to produce an output model that is suitable for further processing by methods and applications that have certain quality requirements on their input. This thesis introduces a set of new algorithms that address several particular aspects of mesh repair and mesh simplification. One of the two mesh repair methods deals with inconsistent normal orientation, while the other removes inconsistencies in vertex connectivity. Of the three mesh simplification approaches presented here, the first attempts to simplify polygonal models with the highest possible quality, the second applies the developed technique to out-of-core simplification, and the third prevents self-intersections of the model surface that can occur during mesh simplification.
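
    As a concrete illustration of the simplification idea discussed above, the sketch below greedily collapses the shortest edge of a triangle mesh to its midpoint until a target face count is reached. This is a deliberately minimal toy, not any of the thesis's algorithms: quality-oriented simplifiers place vertices by minimizing an error metric (e.g. quadrics) and guard against normal flips and self-intersections.

        import numpy as np

        def collapse_shortest_edges(vertices, faces, target_faces):
            """Greedy simplification sketch: repeatedly collapse the shortest
            edge to its midpoint until at most `target_faces` triangles remain.
            Illustrative only -- no quadric error metric, no flip guards."""
            V = np.asarray(vertices, dtype=float).copy()
            F = [tuple(f) for f in faces]
            while len(F) > target_faces:
                # Collect the unique edges of the current face list.
                edges = {tuple(sorted((f[i], f[(i + 1) % 3])))
                         for f in F for i in range(3)}
                # Pick the shortest edge (a, b) and collapse b into a.
                a, b = min(edges, key=lambda e: np.linalg.norm(V[e[0]] - V[e[1]]))
                V[a] = 0.5 * (V[a] + V[b])          # move a to the edge midpoint
                F = [tuple(a if v == b else v for v in f) for f in F]
                F = [f for f in F if len(set(f)) == 3]  # drop degenerate faces
            return V, np.array(F)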

    A Comparative Study on Polygonal Mesh Simplification Algorithms

    Polygonal meshes are a common way of representing three-dimensional surface models in many different areas of computer graphics and geometry processing. However, as technology evolves, polygonal models are becoming more and more complex. As the complexity of a model increases, its visual approximation to the real-world object improves, but there is a trade-off between this better visual approximation and the cost of processing the model. To reduce this cost, the number of polygons in a model can be reduced by mesh simplification algorithms. These algorithms are so widely used that nearly all of the popular mesh editing libraries include at least one of them. In this work, the polygonal simplification algorithms embedded in the open-source libraries CGAL, VTK and OpenMesh are compared using the Metro geometric error measuring tool. In this way, we aim to provide guidance for developers choosing among publicly available mesh libraries when implementing polygonal mesh simplification.
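
    A minimal sketch of how one of the compared libraries can be driven from Python: VTK's vtkDecimatePro reduces the triangle count of a mesh, and the original/simplified file pair can then be passed to the external Metro tool to measure the geometric error. The file names are hypothetical, and the paper does not prescribe this particular pipeline.

        import vtk

        def decimate(input_path, output_path, reduction=0.9):
            """Decimate a triangle mesh with vtkDecimatePro and write the
            result, so the original/simplified pair can be compared with Metro."""
            reader = vtk.vtkPLYReader()
            reader.SetFileName(input_path)

            tri = vtk.vtkTriangleFilter()            # DecimatePro expects triangles
            tri.SetInputConnection(reader.GetOutputPort())

            dec = vtk.vtkDecimatePro()
            dec.SetInputConnection(tri.GetOutputPort())
            dec.SetTargetReduction(reduction)        # 0.9 -> keep roughly 10% of faces
            dec.PreserveTopologyOn()
            dec.Update()

            writer = vtk.vtkPLYWriter()
            writer.SetInputData(dec.GetOutput())
            writer.SetFileName(output_path)
            writer.Write()

        decimate("bunny.ply", "bunny_simplified.ply", reduction=0.9)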

    Constrained parameterization with applications to graphics and image processing.

    Surface parameterization establishes a transformation that maps the points on a surface to a specified parametric domain. It has been widely applied in the fields of computer graphics and image processing. The challenging issue is that the usual positional constraints often result in triangle flipping in parameterizations (also called foldovers). Additionally, distortion is inevitable in parameterizations, so the rigid constraint is often taken into account as well. In general, the constraints are application-dependent. This thesis therefore focuses on the various application-dependent constraints and investigates foldover-free constrained parameterization approaches for each of them. Such constraints typically include simple positional constraints, a trade-off between positional constraints and the rigid constraint, and the rigid constraint alone. From the perspective of applications, this thesis addresses foldover-free parameterization methods with positional constraints, as-rigid-as-possible parameterization with positional constraints, and a well-shaped, well-spaced pre-processing procedure for low-distortion parameterizations. The first contribution of this thesis is the development of an RBF-based re-parameterization algorithm for foldover-free constrained texture mapping. The basic idea is to split the usual parameterization procedure into two steps: 2D parameterization with the constraint of a convex boundary, and 2D re-parameterization with the interior positional constraints. Moreover, we further extend the 2D re-parameterization approach with interior positional constraints to higher-dimensional datasets, such as volume data and polyhedrons. The second contribution is the development of a vector-field-based deformation algorithm for 2D mesh deformation and image warping. Many existing deformation approaches employ basis functions (including our proposed RBF-based re-parameterization algorithm). The main problem is that such algorithms have infinite support, that is, any local deformation always leads to small changes over the whole domain. Our vector-field-based algorithm can carry out local deformation effectively while reducing distortion as much as possible. The third contribution is the development of a pre-processing procedure for surface parameterization. Except for developable surfaces, current parameterization approaches inevitably incur large distortion. To reduce distortion, we propose a pre-processing procedure comprising mesh partition and mesh smoothing. As a result, the input meshes are partitioned into a set of small patches with rectangle-like boundaries that are well shaped and well spaced. This pre-processing procedure evidently improves the quality of the meshes for low-distortion parameterizations.
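
    To make the RBF ingredient concrete, here is a minimal thin-plate-spline warp in 2D: it interpolates a map that sends source constraint points exactly to their targets. TPS is one standard RBF choice and does not by itself guarantee a foldover-free result; the thesis's re-parameterization adds the machinery that rules flips out.

        import numpy as np

        def tps_warp(src, dst, query):
            """Thin-plate-spline (RBF) warp: maps `src` constraint points to
            `dst` and evaluates the interpolated map at `query` points."""
            src, dst, query = (np.asarray(a, float) for a in (src, dst, query))
            n = len(src)

            def phi(r):                      # TPS kernel r^2 log r, zero at r = 0
                return np.where(r > 0, r * r * np.log(r + 1e-12), 0.0)

            K = phi(np.linalg.norm(src[:, None] - src[None, :], axis=-1))
            P = np.hstack([np.ones((n, 1)), src])     # affine part [1 x y]
            A = np.block([[K, P], [P.T, np.zeros((3, 3))]])
            rhs = np.vstack([dst, np.zeros((3, 2))])
            coef = np.linalg.solve(A, rhs)            # RBF weights + affine terms

            Kq = phi(np.linalg.norm(query[:, None] - src[None, :], axis=-1))
            Pq = np.hstack([np.ones((len(query), 1)), query])
            return Kq @ coef[:n] + Pq @ coef[n:]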

    Parameter optimization and learning for 3D object reconstruction from line drawings.

    Du, Hao. Thesis (M.Phil.)--Chinese University of Hong Kong, 2010. Includes bibliographical references (p. 61). Abstracts in English and Chinese.

    Chapter 1 --- Introduction
        1.1 --- 3D Reconstruction from 2D Line Drawings and its Applications
        1.2 --- Algorithmic Development of 3D Reconstruction from 2D Line Drawings
            1.2.1 --- Line Labeling and Realization Problem
            1.2.2 --- 3D Reconstruction from Multiple Line Drawings
            1.2.3 --- 3D Reconstruction from a Single Line Drawing
        1.3 --- Research Problems and Our Contributions
    Chapter 2 --- Adaptive Parameter Setting
        2.1 --- Regularities in Optimization-Based 3D Reconstruction
            2.1.1 --- Face Planarity
            2.1.2 --- Line Parallelism
            2.1.3 --- Line Verticality
            2.1.4 --- Isometry
            2.1.5 --- Corner Orthogonality
            2.1.6 --- Skewed Facial Orthogonality
            2.1.7 --- Skewed Facial Symmetry
            2.1.8 --- Line Orthogonality
            2.1.9 --- Minimum Standard Deviation of Angles
            2.1.10 --- Face Perpendicularity
            2.1.11 --- Line Collinearity
            2.1.12 --- Whole Symmetry
        2.2 --- Adaptive Parameter Setting in the Objective Function
            2.2.1 --- Hill-Climbing Optimization Technique
            2.2.2 --- Adaptive Weight Setting and its Explanations
    Chapter 3 --- Parameter Learning
        3.1 --- Construction of a Large 3D Object Database
        3.2 --- Training Dataset Generation
        3.3 --- Parameter Learning Framework
            3.3.1 --- Evolutionary Algorithms
            3.3.2 --- Reconstruction Error Calculation
            3.3.3 --- Parameter Learning Algorithm
    Chapter 4 --- Experimental Results
        4.1 --- Adaptive Parameter Setting
            4.1.1 --- Use Manually-Set Weights
            4.1.2 --- Learn the Best Weights with Different Strategies
        4.2 --- Evolutionary-Algorithm-Based Parameter Learning
    Chapter 5 --- Conclusions and Future Work
    Bibliography
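
    Section 2.2.1 names a hill-climbing optimization over the weights of the regularity terms listed above; a generic sketch of that idea (illustrative only, not the thesis's adaptive weight-setting scheme) might look as follows:

        import random

        def hill_climb(weights, objective, step=0.1, iters=1000):
            """Generic hill climbing: perturb one weight at a time and keep
            the change when the objective (e.g. a weighted sum of regularity
            terms) improves. Lower objective = better reconstruction."""
            best = objective(weights)
            for _ in range(iters):
                i = random.randrange(len(weights))
                trial = list(weights)
                trial[i] = max(0.0, trial[i] + random.uniform(-step, step))
                score = objective(trial)
                if score < best:
                    weights, best = trial, score
            return weights, best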

    Analysis and Parameterization of Triangulated Surfaces

    This dissertation deals with the analysis and parameterization of surfaces represented by triangle meshes, that is, piecewise linear surfaces which enable a simple representation of the 3D models commonly used in mathematics and computer science. Providing equivalent and high-level representations of a 3D triangle mesh M is of basic importance for approaching different computational problems and applications in the research fields of Computational Geometry, Computer Graphics, Geometry Processing, and Shape Modeling. The aim of the thesis is to show how high-level representations of a given surface M can be used to find other high-level or equivalent descriptions of M, and vice versa. Furthermore, this analysis is related to the study of local and global properties of triangle meshes, depending on the information that we want to capture and that is needed by the application context. The local analysis of an arbitrary triangle mesh M is based on a multi-scale segmentation of M together with the induced local parameterization, where we replace the common hypothesis of decomposing M into a family of disc-like patches (i.e., 0-genus and one boundary component) with a feature-based segmentation of M into regions of 0-genus without constraining the number of boundary components of each patch. This choice is motivated by the necessity of identifying surface patches with features, of reducing the parameterization distortion, and of better supporting standard applications of the parameterization such as remeshing or, more generally, surface approximation, texture mapping, and compression. The global analysis, characterization, and abstraction of M take into account its topological and geometric aspects, represented by the combinatorial structure of M (i.e., the mesh connectivity) together with the associated embedding in R^3. Duality and dual Laplacian smoothing are the first characterizations of M presented, with the final aim of better understanding the relations between mesh connectivity and geometry, as discussed by several works in this research area, and they are extended in the thesis to the case of 3D parameterization. The global analysis of M has also been approached by defining a real function on M which induces a Reeb graph that is invariant with respect to affine transformations and well suited to applications such as shape matching and comparison. Morse theory and the Reeb graph are also used to support a new and simple method for solving the global parameterization problem, that is, the search for a cut graph of an arbitrary triangle mesh M. The main characteristics of the proposed approach with respect to previous work are its capability of defining a family of cut graphs, instead of just one cut, and its uniform treatment of bordered and closed surfaces. Furthermore, each cut graph is smooth, and the way it is built is based on the cutting procedure for 0-genus surfaces that was used for the local parameterization of M. As discussed in the thesis, defining a family of cut graphs provides great flexibility and effective simplifications of the analysis, modeling, and visualization of (time-dependent) scalar and vector fields; in fact, the global parameterization of M enables to reduce th
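
    As a small concrete companion to the smoothing discussion, the sketch below implements uniform Laplacian smoothing on the primal vertices; the dual Laplacian smoothing studied in the thesis applies the same averaging idea to the dual mesh (face centroids) rather than to the vertices.

        import numpy as np

        def laplacian_smooth(vertices, faces, lam=0.5, iterations=10):
            """Uniform Laplacian smoothing: every vertex moves a fraction
            `lam` of the way toward the centroid of its one-ring neighbours."""
            V = np.asarray(vertices, float).copy()
            # Build one-ring adjacency from the triangle list.
            nbrs = [set() for _ in range(len(V))]
            for a, b, c in faces:
                nbrs[a].update((b, c)); nbrs[b].update((a, c)); nbrs[c].update((a, b))
            for _ in range(iterations):
                centroids = np.array([V[list(n)].mean(axis=0) if n else V[i]
                                      for i, n in enumerate(nbrs)])
                V += lam * (centroids - V)
            return V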

    Analysis and Generation of Quality Polytopal Meshes with Applications to the Virtual Element Method

    This thesis explores the concept of the quality of a mesh, understood here as the discretization of a two- or three-dimensional domain. The topic is interdisciplinary in nature, as meshes are used massively in several fields across both the geometry processing and numerical analysis communities. The goal is to produce a mesh with good geometrical properties and the lowest possible number of elements, able to produce results within a target range of accuracy; in other words, a good-quality mesh that is also cheap to handle, overcoming the typical trade-off between quality and computational cost. To reach this goal, we first need to answer the question: "How, and how much, does the accuracy of a numerical simulation or a scientific computation (e.g., rendering, printing, modeling operations) depend on the particular mesh adopted to model the problem? And which geometrical features of the mesh most influence the result?" We present a comparative study of the different mesh types, mesh generation techniques, and mesh quality measures currently available in the literature on both engineering and computer graphics applications. This analysis leads to a precise definition of the notion of quality for a mesh, in the particular context of numerical simulations of partial differential equations with the virtual element method, and to the consequent construction of criteria to determine and optimize the quality of a given mesh. Our main contribution is a new mesh quality indicator for polytopal meshes, able to predict the performance of the virtual element method on a particular mesh before running the simulation. Closely related to this, we also define a quality agglomeration algorithm that optimizes the quality of a mesh by wisely agglomerating groups of neighbouring elements. The accuracy and reliability of both tools are thoroughly verified in a series of tests in different scenarios.
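
    As a toy example of the kind of geometric quality measure surveyed here, the snippet below scores a planar polytopal element by its isoperimetric quotient. This classical shape-regularity measure is illustrative only; it is not the VEM-specific indicator developed in the thesis.

        import numpy as np

        def isoperimetric_quality(polygon):
            """Quality of a planar element: 4*pi*A / P^2, which equals 1 for
            a disc and drops toward 0 for stretched or degenerate shapes."""
            p = np.asarray(polygon, float)
            q = np.roll(p, -1, axis=0)
            area = 0.5 * abs(np.sum(p[:, 0] * q[:, 1] - q[:, 0] * p[:, 1]))  # shoelace
            perimeter = np.sum(np.linalg.norm(q - p, axis=1))
            return 4.0 * np.pi * area / perimeter**2

        print(isoperimetric_quality([(0, 0), (1, 0), (1, 1), (0, 1)]))  # square: ~0.785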

    Persistent Homology Tools for Image Analysis

    Topological Data Analysis (TDA) is a young field of mathematics that has emerged rapidly since the first decade of this century from various works in algebraic topology and geometry. The goal of TDA, and of its main tool, persistent homology (PH), is to provide topological insight into complex and high-dimensional datasets. We take this premise on board to gain topological insight from digital image analysis and to quantify tiny low-level distortions that are undetectable except possibly by highly trained persons. Such image distortions can be caused intentionally (e.g. by morphing and steganography) or naturally, as in abnormal human tissue/organ scan images resulting from the onset of cancer or other diseases. The main objective of this thesis is to design new image analysis tools based on persistent homological invariants representing simplicial complexes built on sets of pixel landmarks over a sequence of distance resolutions. We first propose innovative automatic techniques for selecting image pixel landmarks to build a variety of simplicial topologies from a single image. The effectiveness of each landmark selection scheme is demonstrated by testing on different image tampering problems such as morphed face detection, steganalysis and breast tumour detection. Vietoris-Rips simplicial complexes are constructed on the image landmarks at increasing distance thresholds, and topological (homological) features are computed at each threshold and summarized in a form known as persistent barcodes. We vectorise the space of persistent barcodes using a technique known as persistent binning, whose strength we demonstrate for various image analysis purposes. Different machine learning approaches are adopted to develop automatic detection of tiny texture distortions in many image analysis applications. The homological invariants used in this thesis are the 0- and 1-dimensional Betti numbers. We developed an innovative approach to designing persistent homology (PH) based algorithms for the automatic detection of the types of image distortion described above. In particular, we developed the first PH detector of morphing attacks on passport face biometric images. We demonstrate the significant accuracy of two such morph detection algorithms with four types of automatically extracted image landmarks: Local Binary Patterns (LBP), 8-neighbour super-pixels (8NSP), Radial-LBP (R-LBP) and centre-symmetric LBP (CS-LBP). Using any of these techniques yields several persistent barcodes that summarise persistent topological features and help gain insight into complex hidden structures not amenable to other image analysis methods. We also demonstrate the significant success of a similarly developed PH-based universal steganalysis tool capable of detecting secret messages hidden inside digital images. We also argue, through a pilot study, that building PH records from digital images can differentiate malignant from benign breast tumours in digital mammographic images. The research presented in this thesis creates new opportunities to build real applications based on TDA and highlights many research challenges in a variety of image processing/analysis tasks. For example, we describe a TDA-based exemplar image inpainting technique (TEBI), superior to existing exemplar algorithms, for the reconstruction of missing image regions.
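
    The mechanics of the 0-dimensional barcodes used above can be shown in a few lines: in a Vietoris-Rips filtration over a point set (e.g. the extracted pixel landmarks), connected components merge as the distance threshold grows, and each merge closes one Betti-0 bar. The union-find sketch below is illustrative only; the thesis's pipelines, which also need the 1-dimensional bars, would use a library such as Ripser or GUDHI.

        import numpy as np

        def h0_barcode(points):
            """Betti-0 persistence of a Vietoris-Rips filtration: process the
            pairwise distances in increasing order (Kruskal-style) and record
            a bar (0, d) every time two components merge at distance d."""
            pts = np.asarray(points, float)
            n = len(pts)
            parent = list(range(n))

            def find(x):                       # union-find with path halving
                while parent[x] != x:
                    parent[x] = parent[parent[x]]
                    x = parent[x]
                return x

            edges = sorted((np.linalg.norm(pts[i] - pts[j]), i, j)
                           for i in range(n) for j in range(i + 1, n))
            bars = []
            for d, i, j in edges:
                ri, rj = find(i), find(j)
                if ri != rj:                   # two components merge: one bar dies
                    parent[ri] = rj
                    bars.append((0.0, d))
            bars.append((0.0, np.inf))         # the surviving component
            return bars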