
    Region-based saliency estimation for 3D shape analysis and understanding

    The detection of salient regions is an important pre-processing step for many 3D shape analysis and understanding tasks. This paper proposes a novel method for saliency detection in 3D free-form shapes. First, we smooth the surface normals with a bilateral filter, which smooths the surface while retaining local details. Second, we propose a novel method for estimating the saliency value of each vertex. To this end, two new features are defined: the Retinex-based Importance Feature (RIF) and the Relative Normal Distance (RND), based on characteristics of human visual perception and on surface geometry, respectively. Since a vertex-based method cannot guarantee that the detected salient regions are semantically continuous and complete, we refine the saliency values over surface patches. The detected saliency is finally used to guide existing techniques for mesh simplification, interest point detection, and overlapping point cloud registration. Comparative studies on real data from three publicly accessible databases show that the proposed method usually outperforms five selected state-of-the-art methods, both qualitatively and quantitatively, for saliency detection and 3D shape analysis and understanding.
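
    As a concrete illustration of the first step, here is a minimal sketch of bilateral filtering applied to per-face surface normals, assuming NumPy, precomputed face centroids, and a caller-supplied face-adjacency list; the kernel widths sigma_s and sigma_r, the iteration count, and the function name are illustrative, not the paper's actual parameters.

        import numpy as np

        def bilateral_normal_filter(normals, centroids, neighbors,
                                    sigma_s=0.5, sigma_r=0.3, iterations=3):
            """Smooth per-face normals with a bilateral weight combining spatial
            closeness of face centroids (sigma_s) and similarity of the normals
            themselves (sigma_r), so sharp features are retained.
            neighbors[i] is assumed to list adjacent face indices, including i."""
            n = normals.copy()
            for _ in range(iterations):
                out = np.empty_like(n)
                for i, nbrs in enumerate(neighbors):
                    d = np.linalg.norm(centroids[nbrs] - centroids[i], axis=1)
                    ws = np.exp(-d**2 / (2 * sigma_s**2))                  # spatial kernel
                    r = np.linalg.norm(n[nbrs] - n[i], axis=1)
                    wr = np.exp(-r**2 / (2 * sigma_r**2))                  # range kernel
                    avg = ((ws * wr)[:, None] * n[nbrs]).sum(axis=0)
                    out[i] = avg / (np.linalg.norm(avg) + 1e-12)           # renormalize
                n = out
            return n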

    Saliency-guided integration of multiple scans

    We present a novel method..

    NormalNet: Learning based Guided Normal Filtering for Mesh Denoising

    Mesh denoising is a critical technology in geometry processing that aims to recover high-fidelity 3D mesh models of objects from noise-corrupted versions. In this work, we propose a deep-learning-based face normal filtering scheme for mesh denoising, called NormalNet. Unlike natural images, meshes offer few examples from which to build a robust end-to-end training scheme for deep networks. To remedy this, we propose an iterative framework that generates enough face-normal pairs, on which a convolutional neural network (CNN) based scheme is trained for guidance normal learning. Moreover, to facilitate the 3D convolution operation in CNNs, we propose a voxelization strategy that transforms the irregular local mesh structure around each face into a regular 4D-array form. Finally, guided normal filtering is performed to obtain filtered face normals, from which denoised vertex positions are derived. Compared to state-of-the-art works, the proposed scheme generates accurate guidance normals and removes noise effectively while preserving original features and avoiding pseudo-features.
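
    The last step, deriving denoised vertex positions from filtered face normals, is a standard component of guided normal filtering pipelines. Below is a minimal NumPy sketch of the widely used iterative vertex update, which moves each vertex along the residuals of its incident faces' filtered normals; it follows the common generic formulation rather than NormalNet's exact implementation, and all names and defaults are illustrative.

        import numpy as np

        def update_vertices(verts, faces, filtered_normals, iterations=10):
            """Move each vertex so its incident faces better agree with their
            filtered normals:  x_i += mean over incident faces f of
            n_f * (n_f . (c_f - x_i)),  where c_f is the face centroid."""
            v = verts.copy()
            # precompute vertex -> incident face indices
            incident = [[] for _ in range(len(v))]
            for fi, f in enumerate(faces):
                for vi in f:
                    incident[vi].append(fi)
            for _ in range(iterations):
                centroids = v[faces].mean(axis=1)
                new_v = v.copy()
                for i, fids in enumerate(incident):
                    if not fids:
                        continue
                    n = filtered_normals[fids]                      # (k, 3)
                    c = centroids[fids]                             # (k, 3)
                    resid = ((c - v[i]) * n).sum(axis=1, keepdims=True)
                    new_v[i] = v[i] + (n * resid).mean(axis=0)
                v = new_v
            return v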

    Feature preserving noise removal for binary voxel volumes using 3D surface skeletons

    Skeletons are well-known descriptors that capture the geometry and topology of 2D and 3D shapes. We leverage these properties by using surface skeletons to remove noise from 3D shapes. For this, we extend an existing method that removes noise from 2D shapes while keeping important (salient) corners. Our method detects and removes large-scale, complex, and dense multiscale noise patterns that contaminate virtually the entire surface of a given 3D shape, while recovering its main (salient) edges and corners. It can treat any (voxelized) 3D shape and surface-noise type, is computationally scalable, and has one easy-to-set parameter. We demonstrate the added value of our approach by comparing our results with several known 3D shape denoising methods.
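
    To give a flavor of skeleton-based regularization, here is a heavily simplified NumPy/SciPy sketch: it approximates the medial surface of a binary volume by the local maxima of its distance transform, discards skeleton voxels whose inscribed-ball radius falls below a threshold, and reconstructs the shape as the union of the remaining maximal balls. This removes small-scale surface noise but, unlike the paper's method, does not explicitly protect salient edges and corners; min_radius and the brute-force reconstruction loop are illustrative assumptions.

        import numpy as np
        from scipy import ndimage

        def skeleton_regularize(volume, min_radius=3):
            """Denoise a binary voxel volume by rebuilding it as the union of
            maximal inscribed balls of radius >= min_radius, centered on an
            approximation of the medial surface (local maxima of the distance
            transform). Small balls, i.e. skeleton parts induced by surface
            noise, are discarded before reconstruction."""
            vol = volume.astype(bool)
            dist = ndimage.distance_transform_edt(vol)
            # local maxima of the distance transform approximate the medial axis
            maxima = (dist == ndimage.maximum_filter(dist, size=3)) & vol
            keep = maxima & (dist >= min_radius)
            out = np.zeros_like(vol)
            grid = np.indices(vol.shape)
            # brute-force union of balls: O(#skeleton voxels * volume), for clarity
            for p, r in zip(np.argwhere(keep), dist[keep]):
                d2 = sum((g - c) ** 2 for g, c in zip(grid, p))
                out |= d2 <= r * r
            return out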

    3D Colored Mesh Structure-Preserving Filtering with Adaptive p-Laplacian on Directed Graphs

    Editing of 3D colored meshes is a fundamental component of today's computer vision and computer graphics applications. In this paper, we propose a framework for structure-preserving filtering based on the p-Laplacian on directed graphs. It relies on a novel objective function composed of a fitting term, a smoothness term with a spatially variant pTV norm, and a structure-preserving term. The last two terms can be related to formulations of the p-Laplacian on directed graphs. This makes it possible to impose different forms of processing on different graph areas for better smoothing quality.
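
    As a rough illustration of the smoothness term only, the sketch below runs fixed-point iterations for a simplified objective, a quadratic fitting term plus a reweighted p-norm penalty over directed edges, applied to a per-vertex color signal. The reweighting factor |x_i - x_j|^(p-2) is the standard p-Laplacian linearization; the paper's spatially variant norm and structure-preserving term are omitted, and all names and defaults are assumptions.

        import numpy as np

        def p_laplacian_filter(signal, edges, weights, p=1.5, lam=1.0,
                               iterations=50, eps=1e-6):
            """Smooth a per-vertex signal x (e.g. RGB colors, shape (N, 3)) on a
            directed graph via fixed-point iterations for the simplified energy
              lam * ||x - f||^2 + sum over edges i->j of w_ij * ||x_i - x_j||^p.
            edges is an (E, 2) int array of (source, target) pairs; each directed
            edge i->j pulls only x_i toward x_j, so weights may be asymmetric."""
            f = np.asarray(signal, dtype=float)
            x = f.copy()
            src, dst = edges[:, 0], edges[:, 1]
            for _ in range(iterations):
                diff = np.linalg.norm(x[src] - x[dst], axis=1)
                gamma = weights * np.maximum(diff, eps) ** (p - 2)   # p-Laplacian reweighting
                num = lam * f
                den = np.full(len(x), lam)
                np.add.at(num, src, gamma[:, None] * x[dst])
                np.add.at(den, src, gamma)
                x = num / den[:, None]
            return x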

    One Class One Click: Quasi Scene-level Weakly Supervised Point Cloud Semantic Segmentation with Active Learning

    Reliance on vast annotations to achieve leading performance severely restricts the practicality of large-scale point cloud semantic segmentation. To reduce data annotation costs, effective labeling schemes have been developed that attain competitive results under weak supervision. Revisiting current weak label forms, we introduce One Class One Click (OCOC), a low-cost yet informative quasi scene-level label that encapsulates point-level and scene-level annotations. An active weakly supervised framework is proposed to leverage scarce labels by involving weak supervision from global and local perspectives. Contextual constraints are imposed by an auxiliary scene classification task, based respectively on global feature embedding and point-wise prediction aggregation, which restricts the model prediction to OCOC labels. Furthermore, we design a context-aware pseudo-labeling strategy that effectively supplements the point-level supervisory signals. Finally, an active learning scheme with an uncertainty measure, temporal output discrepancy, is integrated to identify informative samples and guide sub-cloud queries, which quickly attains desirable OCOC annotations and reduces the labeling cost to an extremely low extent. Extensive experimental analysis on three LiDAR benchmarks collected from airborne, mobile, and ground platforms demonstrates that our method achieves very promising results despite scarce labels. It considerably outperforms genuine scene-level weakly supervised methods, by up to 25% in average F1 score, and achieves competitive results against fully supervised schemes. On the terrestrial LiDAR dataset Semantic3D, using approximately 0.02% of the labels, our method achieves an average F1 score of 85.2%, an increase of 11.58% over the baseline model.
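
    A minimal sketch of the temporal-output-discrepancy idea, assuming per-point class probabilities saved at two consecutive training checkpoints: uncertainty is the distance between the two predictions, and candidate sub-clouds are ranked by mean uncertainty to guide annotation queries. The function names and the mean-aggregation rule are illustrative, not the paper's exact formulation.

        import numpy as np

        def temporal_output_discrepancy(probs_t1, probs_t2):
            """Per-point uncertainty: distance between the model's softmax
            outputs at two consecutive training checkpoints. Points the
            optimizer is still changing its mind about score high."""
            return np.linalg.norm(probs_t2 - probs_t1, axis=-1)

        def rank_subclouds(subcloud_point_ids, probs_t1, probs_t2):
            """Score each candidate sub-cloud by its mean discrepancy and
            return indices sorted most-informative first, so annotators can
            be queried where new OCOC labels help most."""
            tod = temporal_output_discrepancy(probs_t1, probs_t2)
            scores = np.array([tod[ids].mean() for ids in subcloud_point_ids])
            return np.argsort(-scores)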

    Leveraging Inlier Correspondences Proportion for Point Cloud Registration

    In feature-learning-based point cloud registration, constructing correct correspondences is vital for the subsequent transformation estimation. However, extracting discriminative features from point clouds remains a challenge, especially when the input is partial and composed of indistinguishable surfaces (planes, smooth surfaces, etc.). As a result, the proportion of inlier correspondences, those that precisely match points between two unaligned point clouds, is far from satisfactory. Motivated by this, we devise several techniques that improve feature-learning-based point cloud registration by raising the inlier correspondence proportion: a pyramid hierarchy decoder to characterize point features at multiple scales, a consistent voting strategy to maintain consistent correspondences, and a geometry-guided encoding module to take geometric characteristics into consideration. Based on these techniques, we build our Geometry-guided Consistent Network (GCNet) and challenge it on indoor, outdoor, and object-centric synthetic datasets. Comprehensive experiments demonstrate that GCNet outperforms state-of-the-art methods, and that the techniques used in GCNet are model-agnostic: they can easily be migrated to other feature-based deep learning or traditional registration methods and dramatically improve their performance. The code is available at https://github.com/zhulf0804/NgeNet.
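
    For context on why the inlier proportion matters: once correspondences are established, the final transformation is typically estimated with a weighted least-squares rigid fit, which is only well conditioned when inliers dominate. Below is a sketch of the standard Kabsch/SVD closed-form solver, not GCNet's code; the weights argument is an assumption standing in for per-correspondence confidences.

        import numpy as np

        def estimate_rigid_transform(src, dst, weights=None):
            """Least-squares rigid transform (R, t) aligning corresponding
            points src -> dst via the weighted Kabsch/SVD solution."""
            w = np.ones(len(src)) if weights is None else np.asarray(weights, float)
            w = w / w.sum()
            mu_s = (w[:, None] * src).sum(axis=0)          # weighted centroids
            mu_d = (w[:, None] * dst).sum(axis=0)
            H = (w[:, None] * (src - mu_s)).T @ (dst - mu_d)   # cross-covariance
            U, _, Vt = np.linalg.svd(H)
            # correct a possible reflection so R is a proper rotation
            S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R = Vt.T @ S @ U.T
            t = mu_d - R @ mu_s
            return R, t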