
    Segmentation Based Mesh Denoising

    Feature-preserving mesh denoising has received noticeable attention recently. Many methods assign large weights to anisotropic surfaces and small weights to isotropic surfaces in order to preserve sharp features. However, they often disregard the fact that even small weights still have a negative impact on the denoising outcome, and such schemes can make parameter tuning harder, especially for users without background knowledge. In this paper, we propose a novel clustering method for mesh denoising that avoids the disturbance of anisotropic information and can easily be embedded into commonly used mesh denoising frameworks. Extensive experiments validate our method and demonstrate that it remarkably enhances the denoising results of several existing methods, both visually and quantitatively. It also largely simplifies parameter tuning for users by increasing the stability of existing mesh denoising methods.
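    The abstract describes segmenting the mesh into clusters so that each cluster can be treated as a roughly isotropic region before filtering. As a rough illustration only (not the paper's actual algorithm), the sketch below groups unit face normals with a plain k-means; the function name cluster_face_normals, the number of clusters, and the use of numpy are assumptions made for this example.

```python
# Illustrative sketch, not the paper's method: cluster face normals so that
# each cluster can be filtered as a (roughly) isotropic patch.
import numpy as np

def cluster_face_normals(normals, k=4, iters=20, seed=0):
    """Group unit face normals into k clusters with plain k-means."""
    rng = np.random.default_rng(seed)
    centers = normals[rng.choice(len(normals), size=k, replace=False)]
    labels = np.zeros(len(normals), dtype=int)
    for _ in range(iters):
        # Assign each face to the closest center by cosine similarity.
        labels = np.argmax(normals @ centers.T, axis=1)
        for c in range(k):
            members = normals[labels == c]
            if len(members):
                mean = members.mean(axis=0)
                centers[c] = mean / (np.linalg.norm(mean) + 1e-12)
    return labels

# Toy usage: random unit normals standing in for a real mesh.
if __name__ == "__main__":
    n = np.random.default_rng(1).normal(size=(500, 3))
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    labels = cluster_face_normals(n, k=4)
    print("cluster sizes:", np.bincount(labels, minlength=4))
```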

    Fast and Scalable Mesh Superfacets

    In the field of computer vision, the introduction of a low-level preprocessing step that oversegments images into superpixels – relatively small regions whose boundaries agree with those of the semantic entities in the scene – has enabled advances in segmentation by reducing the number of elements to be labeled from hundreds of thousands, or millions, to just a few hundred. While some recent works in mesh processing have used an analogous oversegmentation, they were not intended to be general and have relied on graph cut techniques that do not scale to current mesh sizes. Here, we present an iterative superfacet algorithm and introduce adaptations of undersegmentation error and compactness, which are well-motivated and principled metrics from the vision community. We demonstrate that our approach produces results comparable to those of the normalized cuts algorithm when evaluated on the Princeton Segmentation Benchmark, while requiring orders of magnitude less time and memory and easily scaling to, and enabling the processing of, much larger meshes.
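    The abstract mentions adapting undersegmentation error from the superpixel literature to meshes. The sketch below shows one common min-of-overlaps formulation of that metric, weighted by face area; the exact adaptation used in the paper may differ, and the function name, label encoding, and area weighting are assumptions made for this example.

```python
# Illustrative sketch of an undersegmentation-error style metric, adapted from
# the superpixel literature to mesh faces weighted by area. The paper's exact
# formulation may differ; this only shows the general idea.
import numpy as np

def undersegmentation_error(gt_labels, sp_labels, face_areas):
    """For each ground-truth segment, accumulate the area of superfacets that
    leak across its boundary, normalized by total surface area."""
    gt_labels = np.asarray(gt_labels)
    sp_labels = np.asarray(sp_labels)
    face_areas = np.asarray(face_areas, dtype=float)
    total = face_areas.sum()
    error = 0.0
    for g in np.unique(gt_labels):
        in_seg = gt_labels == g
        for s in np.unique(sp_labels[in_seg]):
            sp = sp_labels == s
            inside = face_areas[sp & in_seg].sum()
            outside = face_areas[sp & ~in_seg].sum()
            # Count the smaller of the leaked and contained portions, a common
            # variant that avoids over-penalizing large superfacets.
            error += min(inside, outside)
    return error / total

# Toy usage with made-up labels and unit face areas.
if __name__ == "__main__":
    gt = [0, 0, 0, 1, 1, 1]
    sp = [0, 0, 1, 1, 2, 2]
    print(undersegmentation_error(gt, sp, np.ones(6)))
```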