2,263 research outputs found

    Highly corrupted image inpainting through hypoelliptic diffusion

    We present a new image inpainting algorithm, the Averaging and Hypoelliptic Evolution (AHE) algorithm, inspired by the one presented in [SIAM J. Imaging Sci., vol. 7, no. 2, pp. 669--695, 2014] and based upon a semi-discrete variation of the Citti-Petitot-Sarti model of the primary visual cortex V1. The AHE algorithm is based on a suitable combination of sub-Riemannian hypoelliptic diffusion and ad-hoc local averaging techniques. In particular, we focus on reconstructing highly corrupted images (i.e. where more than 80% of the image is missing), for which we obtain reconstructions comparable with the state of the art. Comment: 15 pages, 10 figures
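
    The abstract above describes alternating a sub-Riemannian hypoelliptic diffusion with local averaging on a heavily corrupted image. As a rough, self-contained illustration of that alternation (not the authors' AHE algorithm), the sketch below uses a plain isotropic heat-equation step as a stand-in for the hypoelliptic evolution on the lifted image; the function name, parameters, and toy data are assumptions made for the example.

```python
# Illustrative sketch only: alternate diffusion steps with local averaging
# while keeping the observed pixels fixed. The isotropic heat equation here
# is a crude stand-in for the sub-Riemannian hypoelliptic evolution.
import numpy as np
from scipy.ndimage import uniform_filter

def diffuse_and_average(image, known_mask, n_outer=20, n_diffusion=50, dt=0.2):
    """Hypothetical helper: diffusion stage followed by an averaging stage."""
    u = image.astype(float).copy()
    for _ in range(n_outer):
        # Diffusion stage: explicit heat-equation steps on the whole image.
        for _ in range(n_diffusion):
            lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                   np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
            u = u + dt * lap
            u[known_mask] = image[known_mask]   # re-impose observed pixels
        # Averaging stage: replace reconstructed pixels by a local mean.
        u_avg = uniform_filter(u, size=3)
        u[~known_mask] = u_avg[~known_mask]
    return u

# Toy usage: roughly 80% of the pixels removed from a smooth synthetic image.
rng = np.random.default_rng(0)
x, y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
clean = np.sin(6 * x) * np.cos(4 * y)
mask = rng.random(clean.shape) > 0.8        # keep only ~20% of the pixels
corrupted = np.where(mask, clean, 0.0)
restored = diffuse_and_average(corrupted, mask)
```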

    MTRNet: A Generic Scene Text Eraser

    Text removal algorithms have been proposed for uni-lingual scripts with regular shapes and layouts. However, to the best of our knowledge, no generic text removal method is available that can remove all or user-specified text regions regardless of font, script, language or shape. Developing such a generic text eraser for real scenes is a challenging task, since it inherits all the challenges of multi-lingual and curved text detection and inpainting. To fill this gap, we propose a mask-based text removal network (MTRNet). MTRNet is a conditional generative adversarial network (cGAN) with an auxiliary mask. The introduced auxiliary mask not only makes the cGAN a generic text eraser, but also enables stable training and early convergence on a challenging large-scale synthetic dataset originally proposed for text detection in real scenes. Moreover, MTRNet achieves state-of-the-art results on several real-world datasets, including ICDAR 2013, ICDAR 2017 MLT, and CTW1500, without being explicitly trained on this data, outperforming previous state-of-the-art methods trained directly on these datasets. Comment: Presented at the ICDAR 2019 conference
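
    MTRNet is described as a cGAN conditioned on an auxiliary text-region mask. The sketch below shows, in PyTorch, one generic way a generator can be conditioned on such a mask by concatenating it as an extra input channel; it is not the published MTRNet architecture, and the class name, layer sizes, and image resolution are assumptions chosen for the example.

```python
# Illustrative sketch only: a minimal mask-conditioned generator in the
# spirit of a cGAN with an auxiliary mask, not the actual MTRNet model.
import torch
import torch.nn as nn

class MaskConditionedGenerator(nn.Module):
    def __init__(self, in_channels=4, out_channels=3, base=32):
        super().__init__()
        # Input: RGB image (3 channels) + binary text mask (1 channel).
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, base, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1),
            nn.BatchNorm2d(base * 2),
            nn.LeakyReLU(0.2, inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1),
            nn.BatchNorm2d(base),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base, out_channels, 4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, image, mask):
        x = torch.cat([image, mask], dim=1)   # condition on the mask channel
        return self.decoder(self.encoder(x))

# Toy forward pass: a batch of 256x256 images with their text-region masks.
gen = MaskConditionedGenerator()
img = torch.randn(2, 3, 256, 256)
msk = torch.zeros(2, 1, 256, 256)
erased = gen(img, msk)    # output keeps the spatial size of the input
```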

    Learning quadrangulated patches for 3D shape parameterization and completion

    We propose a novel 3D shape parameterization by surface patches that are oriented by a 3D mesh quadrangulation of the shape. By encoding 3D surface detail on local patches, we learn a patch dictionary that identifies principal surface features of the shape. Unlike previous methods, we are able to encode surface patches of variable size as determined by the user. We propose novel methods for dictionary learning and patch reconstruction based on the query of a noisy input patch with holes. We evaluate the patch dictionary towards various applications in 3D shape inpainting, denoising and compression. Our method is able to predict missing vertices and inpaint moderately sized holes. We demonstrate a complete pipeline for reconstructing the 3D mesh from the patch encoding. We validate our shape parameterization and reconstruction methods on both synthetic shapes and real-world scans. We show that our patch dictionary performs successful shape completion of complicated surface textures. Comment: To be presented at the International Conference on 3D Vision 2017
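
    The abstract describes learning a patch dictionary and then reconstructing a noisy query patch with holes. The sketch below illustrates that general idea on vectorized patches: learn a dictionary, fit coefficients against only the observed entries of a query, and fill in the missing ones. It is not the paper's quadrangulation-based pipeline; the patch size, the plain least-squares fit used in place of the paper's reconstruction method, and all names are assumptions.

```python
# Illustrative sketch only: generic dictionary learning on vectorized patches
# with masked reconstruction of a query patch that has missing entries.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)

# Fake training data: 500 vectorized "surface patches" (e.g., 8x8 height maps).
patches = rng.standard_normal((500, 64))

# Learn a dictionary of patch atoms.
dl = DictionaryLearning(n_components=32, alpha=1.0, max_iter=100, random_state=0)
dl.fit(patches)
D = dl.components_            # shape (32, 64): each row is one atom

# Query patch with holes: some entries are unobserved.
query = patches[0].copy()
observed = rng.random(64) > 0.3          # roughly 70% of entries observed
query[~observed] = 0.0

# Fit coefficients using only the observed entries (least squares as a
# simple stand-in for the paper's patch reconstruction step).
coeffs, *_ = np.linalg.lstsq(D[:, observed].T, query[observed], rcond=None)

# Reconstruct the full patch, filling the holes from the dictionary.
reconstructed = coeffs @ D
```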