2,051 research outputs found

    Combined Structure and Texture Image Inpainting Algorithm for Natural Scene Image Completion

    Image inpainting or image completion refers to the task of filling in the missing or damaged regions of an image in a visually plausible way. Many works on this subject have been proposed in recent years. We present a hybrid method for completing images of natural scenery, where the removal of a foreground object creates a hole in the image. The basic idea is to decompose the original image into a structure image and a texture image. Reconstruction of each image is performed separately. The missing information in the structure component is reconstructed using a structure inpainting algorithm, while the texture component is repaired by an improved exemplar-based texture synthesis technique. Taking advantage of both structure inpainting methods and texture synthesis techniques, we design an effective image reconstruction method. A comparison with existing methods on different natural images shows the merits of our proposed approach in providing high-quality inpainted images. Keywords: Image inpainting, Decomposition method, Structure inpainting, Exemplar based, Texture synthesis
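
    A minimal sketch of the general decompose-then-inpaint idea described above, not the authors' algorithm: a Gaussian low-pass stands in for a proper structure/texture decomposition, and OpenCV's diffusion-based and fast-marching inpainters stand in for the structure inpainting and exemplar-based synthesis steps. Function names, the sigma value and the inpainting radius are illustrative assumptions.

```python
# Sketch: split an image into a smooth "structure" layer and a "texture"
# residual, fill the hole in each layer separately, then recombine.
import cv2
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose_and_inpaint(gray_u8, hole_mask_u8, sigma=3.0):
    """gray_u8: HxW uint8 image; hole_mask_u8: HxW uint8, 255 inside the hole."""
    img = gray_u8.astype(np.float32)

    # Crude structure/texture split: low-pass = structure, residual = texture.
    structure = gaussian_filter(img, sigma=sigma)
    texture = img - structure

    # Fill the structure layer with a diffusion-style (Navier-Stokes) inpainter.
    structure_u8 = np.clip(structure, 0, 255).astype(np.uint8)
    structure_filled = cv2.inpaint(structure_u8, hole_mask_u8, 5, cv2.INPAINT_NS)

    # Fill the texture residual; a real exemplar-based synthesiser would go here.
    texture_u8 = np.clip(texture + 128.0, 0, 255).astype(np.uint8)
    texture_filled = cv2.inpaint(texture_u8, hole_mask_u8, 5, cv2.INPAINT_TELEA)

    # Recombine the two repaired layers into the completed image.
    result = structure_filled.astype(np.float32) + (texture_filled.astype(np.float32) - 128.0)
    return np.clip(result, 0, 255).astype(np.uint8)
```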

    Virtual restoration of the Ghent altarpiece using crack detection and inpainting

    In this paper, we present a new method for the virtual restoration of digitized paintings, with a special focus on the Ghent Altarpiece (1432), one of Belgium's greatest masterpieces. The goal of the work is to remove cracks from the digitized painting, thereby approximating how the painting looked before ageing for nearly 600 years and aiding art-historical and palaeographical analysis. For crack detection, we employ a multiscale morphological approach, which can cope with greatly varying crack thickness as well as with varying crack intensity (from dark to light). Due to the content of the painting (with a great many fine details) and the complex nature of the cracks (including inconsistent whitish clouds around them), the available inpainting methods do not provide satisfactory results on many parts of the painting. We show that patch-based methods outperform pixel-based ones, but still leave much room for improvement in this application. We propose a new method for candidate patch selection, which can be combined with different patch-based inpainting methods to improve their performance in crack removal. The results demonstrate improved performance, with fewer artefacts and better preserved fine details.
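
    A minimal sketch of multiscale morphological crack detection in the spirit of the abstract, not the paper's exact operator: black and white top-hat transforms at several structuring-element sizes respond to thin dark and light cracks of varying thickness, the per-scale responses are merged with a pixelwise maximum, and a threshold yields a candidate crack mask. The scale set and threshold are illustrative assumptions.

```python
# Sketch: multiscale top-hat responses -> binary crack mask for inpainting.
import cv2
import numpy as np

def crack_mask(gray_u8, scales=(3, 5, 9), thresh=20):
    response = np.zeros_like(gray_u8)
    for k in scales:
        se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (k, k))
        dark = cv2.morphologyEx(gray_u8, cv2.MORPH_BLACKHAT, se)   # dark cracks
        light = cv2.morphologyEx(gray_u8, cv2.MORPH_TOPHAT, se)    # light cracks
        response = np.maximum(response, np.maximum(dark, light))
    # Binary mask of candidate crack pixels, ready to feed an inpainting step.
    return (response > thresh).astype(np.uint8) * 255
```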

    A deep learning framework for quality assessment and restoration in video endoscopy

    Endoscopy is a routine imaging technique used for both diagnosis and minimally invasive surgical treatment. Artifacts such as motion blur, bubbles, specular reflections, floating objects and pixel saturation impede the visual interpretation and the automated analysis of endoscopy videos. Given the widespread use of endoscopy in different clinical applications, we contend that the robust and reliable identification of such artifacts and the automated restoration of corrupted video frames is a fundamental medical imaging problem. Existing state-of-the-art methods deal only with the detection and restoration of selected artifacts. However, endoscopy videos typically contain numerous artifacts, which motivates the development of a comprehensive solution. We propose a fully automatic framework that can: 1) detect and classify six different primary artifacts, 2) provide a quality score for each frame, and 3) restore mildly corrupted frames. To detect different artifacts, our framework exploits a fast multi-scale, single-stage convolutional neural network detector. We introduce a quality metric to assess frame quality and predict image restoration success. Generative adversarial networks with carefully chosen regularization are finally used to restore corrupted frames. Our detector yields the highest mean average precision (mAP at a 5% threshold) of 49.0 and the lowest computational time of 88 ms, allowing for accurate real-time processing. Our restoration models for blind deblurring, saturation correction and inpainting demonstrate significant improvements over previous methods. On a set of 10 test videos, we show that our approach preserves an average of 68.7% of frames, 25% more than are retained from the raw videos. Comment: 14 pages
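
    A minimal sketch of how a per-frame quality score could be derived from detected artifact boxes and used to triage frames into keep / restore / discard. This is illustrative only, not the paper's metric: the class weights, thresholds and box format are assumptions.

```python
# Sketch: artifact detections -> frame quality score -> triage decision.
from typing import List, Tuple

ARTIFACT_WEIGHTS = {            # heavier penalty for artifacts that hide tissue
    "saturation": 1.0, "specularity": 0.6, "blur": 0.8,
    "bubbles": 0.5, "contrast": 0.7, "misc": 0.4,
}

def quality_score(boxes: List[Tuple[float, float, float, float, str]],
                  frame_w: int, frame_h: int) -> float:
    """Return a score in [0, 1]; 1 means no detected artifacts.

    Each box is (x1, y1, x2, y2, class_name) in pixel coordinates.
    """
    frame_area = float(frame_w * frame_h)
    penalty = 0.0
    for x1, y1, x2, y2, cls in boxes:
        area_frac = max(0.0, x2 - x1) * max(0.0, y2 - y1) / frame_area
        penalty += ARTIFACT_WEIGHTS.get(cls, 0.5) * area_frac
    return max(0.0, 1.0 - penalty)

def triage(score: float, keep_thr: float = 0.9, restore_thr: float = 0.5) -> str:
    """Keep clean frames, send mildly corrupted ones to restoration, drop the rest."""
    if score >= keep_thr:
        return "keep"
    return "restore" if score >= restore_thr else "discard"
```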

    Component-based Attention for Large-scale Trademark Retrieval

    The demand for large-scale trademark retrieval (TR) systems has increased significantly to combat the rise in international trademark infringement. Unfortunately, the ranking accuracy of current approaches using either hand-crafted or pre-trained deep convolutional neural network (DCNN) features is inadequate for large-scale deployments. We show in this paper that the ranking accuracy of TR systems can be significantly improved by incorporating hard and soft attention mechanisms, which direct attention to critical information such as figurative elements and reduce the attention given to distracting and uninformative elements such as text and background. Our proposed approach achieves state-of-the-art results on a challenging large-scale trademark dataset. Comment: Fix typos related to authors' information
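
    A minimal sketch of a soft spatial attention head for retrieval descriptors, not the paper's architecture: a 1x1 convolution predicts a spatial weight map that suppresses uninformative regions (e.g. text, background) before the backbone features are pooled into a global descriptor. The channel size and module name are illustrative assumptions.

```python
# Sketch: soft spatial attention over CNN features -> L2-normalised descriptor.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftAttentionPool(nn.Module):
    def __init__(self, in_channels: int = 512):
        super().__init__()
        # One attention logit per spatial location of the backbone feature map.
        self.attn = nn.Conv2d(in_channels, 1, kernel_size=1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) feature map from a pre-trained CNN backbone.
        weights = torch.sigmoid(self.attn(feats))           # (B, 1, H, W)
        weighted = feats * weights                           # reweight locations
        desc = weighted.sum(dim=(2, 3)) / weights.sum(dim=(2, 3)).clamp(min=1e-6)
        return F.normalize(desc, dim=1)                      # global descriptor
```

    Descriptors computed this way for query and database images can then be compared by cosine similarity to produce the retrieval ranking.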