243 research outputs found

    Surface Denoising based on The Variation of Normals and Retinal Shape Analysis

    Through the development of this thesis, starting from the curvature tensor, we have been able to understand the variation of tangent vectors in order to define a shape analysis operator, and to establish a relationship between the classical shape operator and the curvature tensor on a triangulated surface. The first part of the thesis analyzes the variation of surface normals and introduces a shape analysis operator, which is then used for mesh and point-set denoising. The second part introduces mathematical modeling and shape quantification algorithms for retinal shape analysis. In the first half, the thesis follows the concept of the variation of surface normals, termed the normal voting tensor, and derives a relation between the shape operator and the normal voting tensor. The concept of directional and mean curvatures is extended to the dual representation of a triangulated surface. A normal voting tensor is defined on each triangle of a geometry, termed the element-based normal voting tensor (ENVT). A deformation tensor is then extracted from the ENVT; it captures the anisotropy of the surface, and the mean curvature vector is defined from it. An ENVT-based mesh denoising algorithm is introduced, in which the ENVT serves as a shape operator. A binary optimization is applied to the spectral components of the ENVT, which helps the algorithm retain sharp features in the geometry and improves its convergence rate. Finally, a stochastic analysis of the effect of noise on a triangular mesh, based on the minimum edge length of its elements, is presented. It yields an upper bound on the noise standard deviation that keeps the probability of flipped element normals minimal.
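The ENVT construction and its binary spectral optimization can be sketched as follows. This is a minimal illustration, assuming a triangle mesh given as a vertex array `V` and face array `F`; the area weighting of the votes and the binarization threshold `tau` are illustrative choices, not the thesis's exact parameters:

```python
import numpy as np

def face_normals_areas(V, F):
    # Cross product of two edge vectors gives area-weighted face normals.
    e1 = V[F[:, 1]] - V[F[:, 0]]
    e2 = V[F[:, 2]] - V[F[:, 0]]
    n = np.cross(e1, e2)
    area = 0.5 * np.linalg.norm(n, axis=1)
    n /= (2.0 * area[:, None] + 1e-12)
    return n, area

def envt_filter_normal(i, nbrs, normals, areas, tau=0.3):
    """Element-based normal voting tensor for face i over a neighborhood nbrs,
    with a binarized eigenvalue spectrum (illustrative threshold tau)."""
    T = np.zeros((3, 3))
    for j in nbrs:
        T += areas[j] * np.outer(normals[j], normals[j])
    T /= T.trace() + 1e-12                     # normalize the accumulated votes
    w, U = np.linalg.eigh(T)                   # eigenvalues in ascending order
    w_bin = (w / w.max() > tau).astype(float)  # binary optimization of the spectrum
    T_bin = U @ np.diag(w_bin) @ U.T
    n_new = T_bin @ normals[i]                 # project the noisy normal
    norm = np.linalg.norm(n_new)
    return n_new / norm if norm > 1e-12 else normals[i]
```

In a planar region a single eigenvalue survives the binarization and the projection averages out noise; near a crease two eigenvalues survive, so the filtered normal stays within the span of the two dominant directions and the sharp feature is retained.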
The ENVT-based mesh denoising concept is extended to point-set denoising, where noisy vertex normals are filtered using a vertex-based NVT together with the binary optimization. For the vertex update stage, different constraints are added to the quadratic error metric depending on whether a point lies on a feature (edge or corner) or a non-feature (planar) region. The thesis also investigates a robust-statistics framework for face normal bilateral filtering and proposes a robust, high-fidelity two-stage mesh denoising method using Tukey's bi-weight function as a robust estimator, which stops the diffusion at sharp features and produces smooth umbilical regions. This algorithm introduces a novel vertex update scheme that uses a differential-coordinate-based Laplace operator together with an edge-face normal orthogonality constraint to produce a high-quality mesh without face normal flips; it also makes the algorithm more robust against high-intensity noise. The second half of the thesis focuses on applying the proposed geometry processing algorithms to OCT (optical coherence tomography) scan data for quantification of the human retinal shape. The retina is part of the central nervous system and has a cellular composition similar to that of the brain. Many neurological disorders therefore affect the retinal shape, and these neuroinflammatory conditions are known to modify two important regions of the retina: the fovea and the optic nerve head (ONH). The thesis presents accurate and robust shape modeling of these regions to support the diagnosis of several neurological disorders by detecting shape changes. For the fovea, a parametric modeling algorithm based on cubic Bézier curves is introduced, which derives several 3D shape parameters that quantify the foveal shape with high accuracy. For the ONH, a 3D shape analysis algorithm is introduced to measure shape variation associated with different neurological disorders.
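The role of Tukey's bi-weight as a feature-stopping estimator can be sketched in a single bilateral step on a face normal. This is a minimal illustration, not the thesis's exact filter: the residual (Euclidean difference of unit normals) and the parameters `sigma_s`, `sigma_r` are assumed for the example:

```python
import numpy as np

def tukey_biweight(x, sigma):
    """Tukey's bi-weight: near-quadratic influence for small residuals and
    exactly zero beyond sigma, which stops diffusion across sharp features."""
    w = np.zeros_like(x, dtype=float)
    inside = np.abs(x) < sigma
    w[inside] = (1.0 - (x[inside] / sigma) ** 2) ** 2
    return w

def bilateral_face_normal(n_i, nbr_normals, nbr_dists, sigma_s, sigma_r):
    """One robust bilateral step on face normal n_i: a spatial Gaussian weight
    times a Tukey weight on the normal difference, then renormalization."""
    residuals = np.linalg.norm(nbr_normals - n_i, axis=1)
    w = np.exp(-nbr_dists ** 2 / (2 * sigma_s ** 2)) * tukey_biweight(residuals, sigma_r)
    n_new = (w[:, None] * nbr_normals).sum(axis=0)
    norm = np.linalg.norm(n_new)
    return n_new / norm if norm > 0 else n_i
```

A neighbor across a sharp edge produces a residual larger than `sigma_r`, so its Tukey weight is exactly zero and it never contaminates the filtered normal, unlike a Gaussian weight, which is always positive.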
The proposed algorithm uses triangulated manifold surfaces of two different retinal layers to derive several 3D shape parameters. The experimental results of the fovea and ONH morphometry confirm that these algorithms can aid in diagnosing several neurological disorders.
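The foveal parameterization above rests on cubic Bézier segments. A minimal sketch, evaluating one segment of a radial foveal profile and deriving a single illustrative shape parameter (a depth measure; the thesis's actual 3D parameters and control-point fitting are not reproduced here):

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, t):
    # Closed-form cubic Bezier evaluation at parameter values t in [0, 1].
    t = np.asarray(t)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

def foveal_depth(p0, p1, p2, p3, samples=200):
    """Illustrative shape parameter: depth of a radial foveal profile,
    measured as the drop from the rim height to the profile minimum."""
    pts = cubic_bezier(p0, p1, p2, p3, np.linspace(0.0, 1.0, samples))
    z = pts[:, 1]
    return z.max() - z.min()
```

Fitting such segments to an OCT layer boundary and reading off parameters like depth, rim height, or slope is the general pattern behind parametric foveal modeling; the specific parameter set here is only a stand-in.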

    NormalNet: Learning based Guided Normal Filtering for Mesh Denoising

    Mesh denoising is a critical technology in geometry processing, which aims to recover high-fidelity 3D mesh models of objects from noise-corrupted versions. In this work, we propose a deep-learning-based face normal filtering scheme for mesh denoising, called NormalNet. Unlike natural images, meshes make it difficult to collect enough examples to build a robust end-to-end training scheme for deep networks. To remedy this problem, we propose an iterative framework to generate enough face-normal pairs, based on which a convolutional neural network (CNN) based scheme is designed for guidance normal learning. Moreover, to facilitate the 3D convolution operation in CNNs, we propose a voxelization strategy that transforms the irregular local mesh structure around each face into a regular 4D-array form. Finally, guided normal filtering is performed to obtain filtered face normals, from which denoised vertex positions are derived. Compared to state-of-the-art works, the proposed scheme generates accurate guidance normals and removes noise effectively while preserving original features and avoiding pseudo-features.
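The voxelization step can be sketched as mapping an irregular patch to a regular grid with one vector per cell, i.e. an R x R x R x 3 array. This is a minimal illustration under assumptions of our own: the patch is represented by face centroids with unit normals, and the resolution `R` and per-voxel averaging are illustrative choices, not NormalNet's exact design:

```python
import numpy as np

def voxelize_patch(centroids, normals, R=8):
    """Map an irregular local patch (face centroids + normals) to a regular
    R x R x R x 3 array: each voxel stores the mean normal of faces inside it."""
    lo = centroids.min(axis=0)
    span = centroids.max(axis=0) - lo
    span[span == 0] = 1.0                      # avoid division by zero on flat axes
    idx = np.minimum((centroids - lo) / span * R, R - 1).astype(int)
    grid = np.zeros((R, R, R, 3))
    count = np.zeros((R, R, R, 1))
    for (x, y, z), n in zip(idx, normals):
        grid[x, y, z] += n                     # accumulate normals per voxel
        count[x, y, z] += 1
    filled = count[..., 0] > 0
    grid[filled] /= count[filled]              # average where occupied
    return grid
```

The resulting 4D array has a fixed shape regardless of the patch's connectivity or face count, which is exactly what a standard 3D convolution layer requires as input.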

    Surface Denoising based on Normal Filtering in a Robust Statistics Framework

    During surface acquisition with 3D scanners, noise is inevitable, and an important step in geometry processing is to remove it from the acquired surfaces (given as point sets or triangulated meshes). The noise-removal process (denoising) can be performed by first filtering the surface normals and then adjusting the vertex positions according to the filtered normals. Therefore, in many available denoising algorithms, the computation of noise-free normals is a key factor. A variety of filters have been introduced for noise removal from normals, with different emphases such as robustness against outliers or large-amplitude noise. Although these filters perform well in different respects, a unified framework is missing that establishes the relations between them and provides a theoretical analysis beyond the empirical performance of each method. In this paper, we introduce such a framework, relating a number of widely used nonlinear filters for face normals in mesh denoising and vertex normals in point-set denoising. We cover robust statistical estimation with M-smoothers and their application to linear and nonlinear normal filtering. Although these methods originate in different mathematical theories - including diffusion-, bilateral-, and directional-curvature-based algorithms - we demonstrate that all of them can be cast into a unified framework of robust statistics using robust error norms and their corresponding influence functions. This unification contributes to a better understanding of the individual methods and their relations to each other. Furthermore, the presented framework provides a platform for new techniques that combine the advantages of known filters and for comparing them with available methods.
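The unification idea can be sketched concretely: each filter's per-neighbor weight is the influence function psi = rho' of a robust error norm rho, divided by the residual. Below are two standard norms as examples; which published filter corresponds to which norm follows the paper's framework only loosely, and the scale parameter `s` is assumed:

```python
import numpy as np

def psi_gaussian(x, s):
    # Gaussian (Welsch) error norm rho(x) = (s^2 / 2) * (1 - exp(-x^2 / s^2));
    # its influence function is psi(x) = rho'(x) = x * exp(-x^2 / s^2).
    return x * np.exp(-(x / s) ** 2)

def psi_tukey(x, s):
    # Tukey bi-weight: the influence function vanishes beyond the cutoff s,
    # so residuals across a sharp feature contribute nothing at all.
    return np.where(np.abs(x) < s, x * (1 - (x / s) ** 2) ** 2, 0.0)

def m_smoother_weight(psi, x, s):
    """Per-neighbor filter weight w(x) = psi(x) / x used by an M-smoother;
    by construction w(0) = 1 (full weight for an identical normal)."""
    x = np.asarray(x, dtype=float)
    safe = np.where(x == 0.0, 1.0, x)          # placeholder to avoid 0/0
    return np.where(x == 0.0, 1.0, psi(safe, s) / safe)
```

Viewed this way, choosing a filter amounts to choosing rho: a redescending influence function (Tukey) gives hard feature preservation, a soft-decaying one (Gaussian) gives gentler outlier suppression, and both plug into the same M-smoother iteration.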

    Segmentation Based Mesh Denoising

    Feature-preserving mesh denoising has received noticeable attention recently. Many methods assign large weights to anisotropic surfaces and small weights to isotropic surfaces in order to preserve sharp features. However, they often disregard the fact that even small weights can negatively affect the denoising outcome. Furthermore, this may increase the difficulty of parameter tuning, especially for users without background knowledge. In this paper, we propose a novel clustering method for mesh denoising that avoids the disturbance of anisotropic information and can easily be embedded into commonly used mesh denoising frameworks. Extensive experiments validate our method and demonstrate that it noticeably enhances the results of several existing methods, both visually and quantitatively. It also substantially eases the parameter tuning procedure by increasing the stability of existing mesh denoising methods.
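The segmentation idea can be sketched with a toy pipeline: cluster faces by their normals, then let the filter average only within a cluster so that no weight, however small, leaks across a feature. The k-means clustering and the deterministic first-k initialization below are our own illustrative stand-ins, not the paper's actual clustering criterion:

```python
import numpy as np

def kmeans_normals(normals, k=2, iters=20):
    """Cluster face normals with plain k-means; faces in one cluster are
    treated as a single (near-)isotropic piece during denoising."""
    centers = normals[:k].copy()               # deterministic init: first k normals
    labels = np.zeros(len(normals), dtype=int)
    for _ in range(iters):
        d = np.linalg.norm(normals[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = normals[labels == c].mean(axis=0)
    return labels

def cluster_filter(normals, labels):
    """Replace each normal by its own cluster's mean, so smoothing never
    mixes normals across a sharp feature separating two clusters."""
    out = np.empty_like(normals)
    for c in np.unique(labels):
        m = labels == c
        mean = normals[m].mean(axis=0)
        out[m] = mean / np.linalg.norm(mean)
    return out
```

On a model with a 90-degree crease, the two face groups land in different clusters, so the filtered normals on each side are averaged only with their own side and the crease survives exactly.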